#srlinux #networkprogramming #cli
After a series of blog posts explaining how to build a data center fabric with Containerlab, it’s time to explore some of the programming capabilities of SR Linux.
I was reading about CLI plugins, and it gave me an idea that I could use in my containerized data center fabric, or really in any Leaf/Spine topology.
In this post, you’ll read about CLI programming and see how to deploy a custom show command which I named ‘show fabric’.
If you want to jump to the command output…
What is a CLI Plugin in SR Linux?
So, let’s first talk about what a CLI plugin is and what it offers.
As you may know, the SR Linux CLI is Python-based and open, meaning that you can view the source code of the CLI commands. Thanks to that, you can write your own show, tools, or global CLI commands in Python; such a custom command is called a CLI plugin.
A CLI plugin is typically a Python file copied to a specific path on the SR Linux filesystem. We’ll see how to do this, but first, let’s figure out how to build it.
How to create a CLI Plugin?
SR Linux provides a Python framework to write your custom CLI commands. Understanding the basics of this framework would be a good start.
I was also able to learn a lot from the source code of the native commands. If that sounds good to you, take a look at this path on your SR Linux:
```bash
$ cd /opt/srlinux/python/virtual-env/lib/python3.6/site-packages/srlinux
$ cd mgmt/cli/plugins/reports
$ ls
```
Here you’ll find the source of the native SR Linux show commands, which can guide you when writing your own. It’s especially beneficial for those who like to learn by example.
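To give you a feel for the framework, here is a minimal, hypothetical sketch of what a custom show command file can look like. The class and method names below mirror the pattern you’ll see in the native reports, but treat the exact signatures as assumptions and verify them against the source in the path above.

```python
# Hypothetical sketch of a CLI plugin file; the srlinux.mgmt.cli names below
# follow the pattern used by the native reports -- verify them against the
# source in site-packages/srlinux before relying on them.
from srlinux.mgmt.cli import CliPlugin
from srlinux.syntax import Syntax


class Plugin(CliPlugin):
    """SR Linux loads classes named 'Plugin' from the CLI plugins directory."""

    def load(self, cli, **_kwargs):
        # Register 'fabric' under show mode, so 'show fabric' becomes available.
        cli.show_mode.add_command(
            Syntax('fabric', help='show a fabric overview of this Leaf'),
            update_location=False,
            callback=self._print,
        )

    def _print(self, state, arguments, output, **_kwargs):
        # A real report builds a Data object from state (uplinks, BGP peers,
        # LLDP neighbors) and renders it with a formatter; see the native
        # reports for that pattern.
        ...
```

A new CLI session picks up plugins placed in the plugins directory, so you re-login to test changes.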
A custom command: ‘show fabric’
The ‘show fabric’ command aims to give you the most common things you check on a Leaf router. The platform information, uplink statistics, eBGP and iBGP status, the number of learned routes, etc. in just one command.
It’s worth mentioning that this show command is meant for a Leaf router in a typical IP fabric. So yes, it wouldn’t make much sense to run it on a standalone box.
If you haven’t got an SR Linux-based IP fabric in your lab yet, please follow the earlier posts to get it.
When you run ‘show fabric’, it searches for the uplinks, the eBGP peerings to the Spines, and the iBGP peerings, possibly with a route reflector. Besides these, it also detects LLDP sessions between the Leaf and Spine routers to collect more information, but that part is optional.
In the end, your setup will look like this:
Get ‘show fabric’ in your setup!
It is actually quite straightforward, but it can also get complicated depending on your host networking 🙂 So, check these points before you get it:
- Configure NAT and accept rules on your Containerlab host so that the containers have internet access.
- If the mgmt0 interface is configured with an IPv6 address that doesn’t reach the internet, delete it.
- Your SR Linux needs a DNS server configured to resolve the download URL.
- If you use a proxy, you may need to add the proxy option to curl/wget.
- If none of these work, you can just download them here.
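For the DNS and IPv6 points, a configuration sketch along these lines may help. This is SR Linux CLI; the mgmt network-instance and the server address are assumptions, so adjust them to your setup:

```
# enter the candidate datastore
enter candidate
# point the mgmt network-instance at a reachable DNS server (example address)
set / system dns network-instance mgmt server-list [ 8.8.8.8 ]
# drop the mgmt0 IPv6 address if it cannot reach the internet
delete / interface mgmt0 subinterface 0 ipv6
commit now
```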
All right! Let’s download it to the SR Linux:
```bash
$ bash <(curl -sL https://raw.githubusercontent.com/aaakpinar/srlinux/main/show-fabric/get-show-fabric.sh)
```
When you run this command in the SR Linux bash, it starts by downloading the CLI plugin (fabric.py) into the ‘/etc/opt/srlinux/cli/plugins/’ directory, along with some scripts into a ‘show-fabric’ folder under the current path.
The scripts perform get, set, or delete operations. The get and set scripts are executed right after you download the plugin.
The get and delete scripts are self-explanatory, but let me clarify the set script:
The set script runs automatically right after the get script, or you can run it individually. It lets you define some parameters, which are:
- Uplink Subinterface Description Pattern: The show command discovers the uplink subinterfaces from a unique description pattern. Use a keyword such as ‘spine’ or ‘uplink’ that appears only in the uplink subinterface descriptions. Note that this must be the subinterface description, not the interface description.
- eBGP Group Name: The group name of eBGP peerings to the Spines. The number of neighbors defined in this peer group must be equal to the number of uplink subinterfaces.
- iBGP EVPN Group Name: The BGP peer group created for the route reflectors, possibly the Spines again.
- Network Instance: Typically the ‘default’ network instance, where we build the underlay network. Change it only if yours is different.
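To illustrate how the description pattern drives the uplink discovery, here is a small, self-contained sketch. This is plain Python, not the plugin code itself, and the sample subinterfaces and descriptions are invented for illustration:

```python
# Sketch: select uplink subinterfaces by a description pattern, the way the
# set script's parameter drives discovery. The data below is made up.
def find_uplinks(subinterfaces, pattern):
    """Return subinterface names whose description contains the pattern."""
    pattern = pattern.lower()
    return [name for name, desc in subinterfaces.items()
            if pattern in desc.lower()]

subifs = {
    'ethernet-1/49.0': 'uplink-to-spine1',
    'ethernet-1/50.0': 'uplink-to-spine2',
    'ethernet-1/1.0': 'server-facing',
}

print(find_uplinks(subifs, 'spine'))
# → ['ethernet-1/49.0', 'ethernet-1/50.0']
```

Note how a pattern like ‘spine’ that also appeared in a server-facing description would break the discovery, which is why the pattern must be unique to the uplinks, and why the eBGP group should have exactly as many neighbors as there are matches.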
Once you define these variables, the script drops you back into the SR Linux CLI.
If you’re interested in the coding details, please see the next post!
See you there!