
#containerlab #dcfabric #network #datacenter
Containerlab is a tool that lets you create your own topologies with nodes running containerized network operating systems (NOS) such as cEOS, SRLinux, cRPD, etc.
In this post, we’ll see how containerlab builds a data center fabric network with containerized Nokia SRLinux nodes. You’ll also find the requirements and how to get your environment ready for containerlab here.
If you prefer, you can watch the video where I go through this post and implement the steps from top to bottom.

By the end of this post, we’ll have created the topology in this diagram. But first, let’s focus on the requirements.
Since containerlab is a Linux package, we need a Linux system such as CentOS/RHEL/WSL. The Linux user should have ‘sudo’ privileges to avoid any permission issues.
(macOS is also supported, but that’s another story. Check out the containerlab website.)
Once you’ve logged in to your CentOS/RHEL machine, go ahead and install Docker. That’s fundamental, as containerlab builds the nodes as containers. There are different installation methods for various Linux OSes. Please check here to follow the right procedure for your Docker installation.
If everything goes well, you should be able to see your Docker version:
# docker -v
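If you also want to make sure Docker can actually run containers, an optional quick test with the standard hello-world image works as well:
# docker run --rm hello-world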

Now we can install containerlab. There are various installation methods if you want to explore them,
or just do:
# bash -c "$(curl -sL https://get-clab.srlinux.dev)"

And that’s it! Your environment is now ready to build topologies with containerlab.
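You can also verify the installation by checking the installed version:
# containerlab version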
Let’s explore available containerlab commands now:
# containerlab

The main commands we’ll use here are ‘deploy’ and ‘destroy’, but you can take a look at the other commands like ‘graph‘ or ‘generate‘ after you build your DC Fabric.
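As a small teaser, ‘generate‘ can even create a Clos topology file for you without writing any YAML by hand. A rough example, with an arbitrary lab name and tier sizes (see the containerlab docs for the exact flags):
# containerlab generate --name quick-clos --nodes 4,2
We’ll define our fabric explicitly in a YAML file instead, since that gives us full control over node names, types, and addressing.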
Containerlab builds a topology based on the inputs you give, and those inputs are provided in a YAML file, as you may guess. The topology definition parameters are explained with simple examples on the containerlab website. The topology YAML file of a 3-tier DC Fabric is explained below:
name: my_dc1
topology:
  defaults:
    kind: srl
  kinds:
    srl:
      type: ixrd3
      image: ghcr.io/nokia/srlinux:latest
      # license: {if-needed}
    linux:
      image: docker.io/akpinar/alpine
In the first part of the YAML file, the name and some topology parameters are defined. The name simply distinguishes multiple topologies from each other; therefore, it must be unique.
The topology object has ‘defaults‘ and ‘kinds‘ in this part, and also ‘nodes’ and ‘links’ that follow in the 2nd part below. The ‘defaults’ section holds the global parameters, which are used when something is not defined under a node (2nd part).
The ‘kinds’ section defines the kind of the nodes, such as Nokia SRLinux, Juniper cRPD, Arista cEOS, or Linux (hosts). There is a list of supported ‘kinds‘ in this link, and only one of those names can be used as a kind. It is ‘srl‘ in our case.
Under the ‘srl‘ kind, we define the type of the node (IXR D3), the Docker image, and the path to the license if one is needed. We’ll see how to get them below. Now, let’s see the 2nd part of the topology YAML file.
  nodes:
    bl1:
      kind: srl
      mgmt_ipv4: 172.20.0.101
    bl2:
      kind: srl
      mgmt_ipv4: 172.20.0.102
    s1:
      kind: srl
      type: ixr6
      mgmt_ipv4: 172.20.0.1
    s2:
      kind: srl
      type: ixr6
      mgmt_ipv4: 172.20.0.2
    l1:
      kind: srl
      mgmt_ipv4: 172.20.0.11
    l2:
      kind: srl
      mgmt_ipv4: 172.20.0.12
    l3:
      kind: srl
      mgmt_ipv4: 172.20.0.13
    l4:
      kind: srl
      mgmt_ipv4: 172.20.0.14
    h1:
      kind: linux
    h2:
      kind: linux
    h3:
      kind: linux
    h4:
      kind: linux
In this part, we see the list of ‘nodes‘. As we saw in our topology diagram, there are three tiers: Border Leaf (bl), Spine (s), and Leaf (l). In addition, there are hosts (h) connected to each Leaf to generate traffic on the fabric.
Every node has a ‘kind’ that defines at least its image name and license if necessary. Since some kinds come in multiple flavors, such as SRL (ixrd2, ixrd3, ixr6, etc.), the kind also defines the type of the node. But we can overwrite it, as we do for the Spine nodes here: they use the same NOS image defined in the ‘srl’ kind as the Leaf nodes, but their type is overwritten in the node definition as IXR6.
The nodes can get an IP address via DHCP from the management network that will be defined at the bottom, but a static IP address can also be assigned to a node with ‘mgmt_ipv4‘, as in this example.
Now, let’s see the last part of our topology YAML file:
  links:
    - endpoints: ["s1:e1-33", "bl1:e1-3"]
    - endpoints: ["s1:e1-34", "bl2:e1-3"]
    - endpoints: ["s2:e1-33", "bl1:e1-4"]
    - endpoints: ["s2:e1-34", "bl2:e1-4"]
    - endpoints: ["s1:e1-1", "l1:e1-3"]
    - endpoints: ["s1:e1-2", "l2:e1-3"]
    - endpoints: ["s1:e1-3", "l3:e1-3"]
    - endpoints: ["s1:e1-4", "l4:e1-3"]
    - endpoints: ["s2:e1-1", "l1:e1-4"]
    - endpoints: ["s2:e1-2", "l2:e1-4"]
    - endpoints: ["s2:e1-3", "l3:e1-4"]
    - endpoints: ["s2:e1-4", "l4:e1-4"]
    - endpoints: ["l1:e1-10", "h1:eth1"]
    - endpoints: ["l2:e1-10", "h2:eth1"]
    - endpoints: ["l3:e1-10", "h3:eth1"]
    - endpoints: ["l4:e1-10", "h4:eth1"]
mgmt:
  network: srl-mgmt
  ipv4_subnet: 172.20.0.0/24
  ipv6_subnet: 2001:172:20::/80
Here we have the last element of the topology object, ‘links‘. It simply creates links between the nodes. We need to be aware of the interface naming syntax per kind (e1-1 for ‘srl’, eth1 for ‘linux’, etc.).
And finally, at the bottom, the management network…
It creates a Linux bridge that is used to connect to the nodes via SSH. We can think of it as an out-of-band management switch. The ‘mgmt’ interfaces of the SRLinux nodes and the first interface of the Linux hosts are connected to that bridge.
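Once the lab is deployed, you can also inspect this management network on the Docker side; the name below comes from the ‘network‘ key in our file:
# docker network inspect srl-mgmt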
And yes, we have our topology YAML file ready now!
At this stage, you can actually preview what containerlab will build from this topology file with the ‘graph‘ command:
# containerlab graph --topo my_dc1.yaml --dot
This generates a ‘.dot‘ file that you can use to visualize your topology online, like the diagram below.

Check this out for more options.
One more thing and we are ready to build our DC Fabric. If you remember the images we defined for the nodes, SRLinux and Linux, it’s time to pull them:
# docker pull ghcr.io/nokia/srlinux

# docker pull akpinar/alpine:latest
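You can quickly confirm that both images are now available locally:
# docker images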

We are now very close to the moment we create our own data center fabric network on our Linux machine. Let’s do it:
# containerlab deploy --topo my_dc1.yaml

Here you go! It creates the containers and the virtual wires for the links. At the end, we get a table of nodes showing their names and management IP addresses.
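If you need that summary again later, you can list the lab nodes and their addresses at any time with:
# containerlab inspect --topo my_dc1.yaml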

Now we can connect to the nodes using either the name or IP address.
# ssh admin@clab-my_dc1-l1

If that doesn’t work for some reason, you can also connect to the SRLinux CLI with:
# docker exec -ti clab-my_dc1-l1 sr_cli
You can disconnect either with ‘Ctrl+D’ or the docker way ‘Ctrl+P and Ctrl+Q’.
Default credentials are the same for both SRLinux and the Alpine hosts: admin/admin
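Similarly, you can get a shell on one of the Alpine hosts, for example h1 (assuming the image ships a standard ‘sh‘ shell):
# docker exec -ti clab-my_dc1-h1 sh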
Congratulations!
In this post, we made our Linux machine ready for a containerlab deployment, installed containerlab, built a topology YAML file, downloaded the images, and finally deployed our own data center fabric with a containerized NOS, SRLinux.
We’ll configure these nodes in the next post and possibly add new elements later on.
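Until then, if you ever want to tear the lab down and rebuild it from scratch, the ‘destroy‘ command mentioned earlier cleans everything up:
# containerlab destroy --topo my_dc1.yaml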