Configuring SRLinux Nodes in a 3-Tier Data Center Fabric – Part I

#srlinux #datacenter #network #configuration

In the earlier post, we built a 3-tier data center network with containerized SRLinux nodes on a Linux machine. In this one, we’ll configure those nodes to get a fully connected data center fabric.

If you prefer, you can watch the video in which I go through this post and implement the steps from top to bottom.

When you log in to the SRLinux nodes for the first time, you’ll see that some configuration is already in place: DHCP under the management interface, the hostname, the interfaces we enabled in the topology, and so on. This is the initial configuration applied by containerlab.

You can check what’s already configured:

info flat

In this case, we can start by configuring the system IP address. But first, we should know how to get into configuration mode in the SRLinux CLI:

enter candidate

Once we’re in candidate mode, we can start configuring the node. One of the first things we need to configure is the default ‘network-instance’.

A network-instance is essentially a VRF (virtual routing and forwarding) instance in SRLinux terminology. You can typically create two types of network-instance: ‘ip-vrf’ and ‘mac-vrf’. Most importantly, you need a default network-instance to be used as the base routing instance, the GRT or default VRF in other terms.

set / network-instance default
set / network-instance default type default
set / network-instance default admin-state enable
set / network-instance default description "Default VRF"

Now that we have a network-instance, we can configure interfaces and put them under it. First, the configuration of the system interface:

set / interface system0
set / interface system0 description "System Loopback"
set / interface system0 admin-state enable
set / interface system0 subinterface 0
set / interface system0 subinterface 0 admin-state enable
set / interface system0 subinterface 0 ipv4
set / interface system0 subinterface 0 ipv4 address 1.1.1.1/32
set / network-instance default interface system0.0

The system interface is essentially a loopback, but it is important to configure it since its address will be used as the transport address for some protocols.

Next, the interfaces between Spine and Leaf:

set / interface ethernet-1/3
set / interface ethernet-1/3 description "to spine1"
set / interface ethernet-1/3 admin-state enable
set / interface ethernet-1/3 subinterface 1
set / interface ethernet-1/3 subinterface 1 admin-state enable
set / interface ethernet-1/3 subinterface 1 ip-mtu 9000
set / interface ethernet-1/3 subinterface 1 ipv4
set / interface ethernet-1/3 subinterface 1 ipv4 address 100.64.1.1/31
set / network-instance default interface ethernet-1/3.1

At this point, we need to go back to the ‘links’ part of the topology YAML file and configure the interfaces as per the mappings defined there, as in the excerpt below.
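
As a reminder, the relevant mapping in the containerlab topology looks something like this. The node and interface names here are illustrative; use the ones from your own topology file (containerlab’s e1-3 corresponds to ethernet-1/3 on SRLinux):

links:
  # illustrative endpoints, adjust to your own topology
  - endpoints: ["leaf1:e1-3", "spine1:e1-1"]
  - endpoints: ["leaf1:e1-4", "spine2:e1-1"]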

One thing to mention: we configured the interfaces with a subinterface, even for the system loopback. That’s something SRLinux requires in order to configure an IP address.

All the system loopbacks and point-to-point (p2p) interfaces of the whole data center fabric can be configured and added to the default network-instance in this way; the Spine side of the link above is sketched below as an example.
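
For example, the Spine1 end of the link we configured above could look like the following. This is only a sketch: the interface name (ethernet-1/1) and the /31 address are assumptions mirroring the Leaf side, so adjust them to your own cabling and addressing plan.

set / interface ethernet-1/1
set / interface ethernet-1/1 description "to leaf1"
set / interface ethernet-1/1 admin-state enable
set / interface ethernet-1/1 subinterface 1
set / interface ethernet-1/1 subinterface 1 admin-state enable
set / interface ethernet-1/1 subinterface 1 ip-mtu 9000
set / interface ethernet-1/1 subinterface 1 ipv4
set / interface ethernet-1/1 subinterface 1 ipv4 address 100.64.1.0/31
set / network-instance default interface ethernet-1/1.1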

The default network-instance will be the underlay for the EVPN overlay networks. Its function is to provide reachability between the iBGP EVPN peers and VTEPs (Leaf and Border Leaf in our case). To accomplish that, we need a routing protocol to get all the system IP addresses advertised through the fabric, and that’s eBGP.

I can already hear questions like ‘Why eBGP?’. If so, please see RFC 7938, particularly the reasons for choosing eBGP instead of an IGP as the primary routing protocol. But of course, IS-IS or OSPF would also do the job.

Below is the topology with the routing protocols represented.

Let’s see how we configure the basics of BGP in our default network-instance:

set / network-instance default protocols
set / network-instance default protocols bgp
set / network-instance default protocols bgp admin-state enable
set / network-instance default protocols bgp autonomous-system 4200000001
set / network-instance default protocols bgp router-id 1.1.1.1

To create a BGP group for eBGP peerings:

set / network-instance default protocols bgp group spine
set / network-instance default protocols bgp group spine admin-state enable
set / network-instance default protocols bgp group spine description "BGP to spines"
set / network-instance default protocols bgp group spine peer-as 4200000101
set / network-instance default protocols bgp group spine local-as 4200000001

P.S.: On the Spines, configure the peer-as under the neighbor configuration instead, since each Leaf has its own ASN; a sketch follows below.
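
As a minimal sketch, the Spine1 side could look like the following. The group name ‘leaf’, the neighbor address and the ASNs are assumptions mirroring the Leaf configuration above; adapt them to your own fabric.

set / network-instance default protocols bgp group leaf
set / network-instance default protocols bgp group leaf admin-state enable
set / network-instance default protocols bgp group leaf description "BGP to leafs"
set / network-instance default protocols bgp group leaf local-as 4200000101
set / network-instance default protocols bgp neighbor 100.64.1.1
set / network-instance default protocols bgp neighbor 100.64.1.1 admin-state enable
set / network-instance default protocols bgp neighbor 100.64.1.1 description "Leaf1"
set / network-instance default protocols bgp neighbor 100.64.1.1 peer-group leaf
set / network-instance default protocols bgp neighbor 100.64.1.1 peer-as 4200000001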

Now it’s time to configure the BGP neighbors. First, we establish the eBGP sessions on the p2p links, which will function as our IGP.

set / network-instance default protocols bgp neighbor 100.64.1.0
set / network-instance default protocols bgp neighbor 100.64.1.0 admin-state enable
set / network-instance default protocols bgp neighbor 100.64.1.0 description "Spine1"
set / network-instance default protocols bgp neighbor 100.64.1.0 peer-group spine

Once we configure all the nodes, we should see the point-to-point (p2p) eBGP sessions established.

To list the neighbors, you can use the show command from the main context:

show network-instance default protocols bgp neighbor

Or, if you are already at the [network-instance default protocols bgp] context, you can run the show command relative to that context.

If you don’t see the sessions, check whether you committed your changes 🙂
We haven’t mentioned it yet, but as with many other network OSes, SRLinux needs a commit before the changes made in candidate mode take effect.

commit stay

Now it’s applied. You can check all the links and eBGP sessions before proceeding to the next step, for example with the commands below.
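
For example, the BGP command we used earlier plus a general interface summary should be enough (show interface is available in the SRLinux releases I’ve used; double-check on yours):

show interface
show network-instance default protocols bgp neighbor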

If all is fine, you may also notice that we neither receive nor advertise any prefixes from/to the other peers. That’s the moment to think about ‘routing-policies’.

First, we define the prefixes with a ‘prefix-set’ to advertise the p2p interface and system IP addresses (note that the p2p links are /31s, so the mask-length-range has to cover a /31 prefix length):

set / routing-policy prefix-set system
set / routing-policy prefix-set system prefix 1.1.0.0/16 mask-length-range 32..32
set / routing-policy prefix-set p2p
set / routing-policy prefix-set p2p prefix 100.64.0.0/16 mask-length-range 31..31

Then, we allow them in the routing policies:

set / routing-policy policy policy-accept
set / routing-policy policy policy-accept statement 1
set / routing-policy policy policy-accept statement 1 match
set / routing-policy policy policy-accept statement 1 match prefix-set system
set / routing-policy policy policy-accept statement 1 action
set / routing-policy policy policy-accept statement 1 action accept
set / routing-policy policy policy-accept statement 2
set / routing-policy policy policy-accept statement 2 match
set / routing-policy policy policy-accept statement 2 match prefix-set p2p
set / routing-policy policy policy-accept statement 2 action
set / routing-policy policy policy-accept statement 2 action accept

And last, we apply the routing policy to the BGP group:

set / network-instance default protocols bgp group spine export-policy policy-accept
set / network-instance default protocols bgp group spine import-policy policy-accept

Let’s show the eBGP neighbors again, with the same command as before:
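
show network-instance default protocols bgp neighbor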

This time, we see that some prefixes are received and active. However, most of the prefixes learned via Spine2 are not active, because ECMP is not yet enabled in our network. So, only the system loopback of Spine2 is active among the routes received from it.

To load-balance the traffic, enable ECMP with up to 8 paths:

set / network-instance default protocols bgp ipv4-unicast multipath max-paths-level-1 8
set / network-instance default protocols bgp ipv4-unicast multipath max-paths-level-2 8

And now they are active and used in the routing table.

If we check the BGP routing table of one of the Leaf nodes, we should see all the system IP addresses; an example check follows below.
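
A minimal check, assuming the show routines in your SRLinux release match the ones here (the exact arguments may differ slightly between versions):

show network-instance default route-table ipv4-unicast summary
show network-instance default protocols bgp routes ipv4 summary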

And that reachability enables us to establish iBGP EVPN sessions between the Leaf and Spine routers. Here we configure our BGP group for EVPN:

set / network-instance default protocols bgp group EVPN
set / network-instance default protocols bgp group EVPN admin-state enable
set / network-instance default protocols bgp group EVPN description "BGP-EVPN"
set / network-instance default protocols bgp group EVPN peer-as 4200065000
set / network-instance default protocols bgp group EVPN evpn
set / network-instance default protocols bgp group EVPN evpn admin-state enable
set / network-instance default protocols bgp group EVPN local-as 4200065000
set / network-instance default protocols bgp group EVPN transport local-address 1.1.1.1

It is the same ASN in every router since it’s iBGP.

And there is one more thing to configure, but only on our Spines (the route reflectors):

set / network-instance default protocols bgp group EVPN route-reflector
set / network-instance default protocols bgp group EVPN route-reflector client true
set / network-instance default protocols bgp group EVPN route-reflector cluster-id 1.1.1.101

Finally, we configure the iBGP EVPN sessions that will advertise our workload MAC/IP addresses through the data center fabric network:

set / network-instance default protocols bgp neighbor 1.1.1.101
set / network-instance default protocols bgp neighbor 1.1.1.101 admin-state enable
set / network-instance default protocols bgp neighbor 1.1.1.101 description RR-Spine1
set / network-instance default protocols bgp neighbor 1.1.1.101 peer-group EVPN
set / network-instance default protocols bgp neighbor 1.1.1.102
set / network-instance default protocols bgp neighbor 1.1.1.102 admin-state enable
set / network-instance default protocols bgp neighbor 1.1.1.102 description RR-Spine2
set / network-instance default protocols bgp neighbor 1.1.1.102 peer-group EVPN

If we check a Leaf, we should see 4 BGP sessions: 2 eBGP sessions to the Spine p2p interfaces and 2 iBGP EVPN sessions to the Spines’ (RR) system loopbacks. The same neighbor command as before lists all of them:
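
show network-instance default protocols bgp neighbor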

And, congrats! Now, you have a data center fabric with EVPN.

Next time, we will create some IP/MAC VRFs to be consumed by workloads, and we’ll see how to use VXLAN tunneling for L2/L3 EVPN services.