
#srlinux #datacenter #network #evpn
Hi! Welcome back to the second part of the Configuring SR Linux Nodes in a 3-Tier Data Center Fabric blog post. In this one, we will see how to configure EVPN overlay networks on top of the fabric network we built in the first part.
Let’s first take a look at the networks we will create for our workloads.

This time, we focus on the network instances configured with EVPN, the so-called overlay or tenant networks.
A network instance can be a mac-vrf, which provides Layer 2 switching, or an ip-vrf, which provides Layer 3 routing to the hosts.
In our example, we will connect host1 to mac-vrf1, which is bound to ip-vrf1. That means there is an IRB interface, configured with a gateway IP address, that is attached to both mac-vrf1 and ip-vrf1. The hosts connected to mac-vrf1 can send and receive frames at the L2 level, but they can also reach other networks within ip-vrf1 via the gateway (IRB) associated with mac-vrf1.
The situation with mac-vrf2 is a bit different from mac-vrf1: it has no IRB interface. In this case, it functions as a broadcast domain with no gateway. The hosts connected to this mac-vrf can only reach each other, not other networks.
Host3, on the other hand, is not connected to any mac-vrf but directly to ip-vrf1. That means the subinterface facing host3 is of type routed, whereas the ones connected to a mac-vrf are of type bridged.
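To summarize the overlay services and their attachment points (the host-to-leaf mapping follows the topology from part one):

host1 -- Leaf1: bridged subinterface -> mac-vrf1 -> irb1.100 -> ip-vrf1
host2 -- Leaf2: bridged subinterface -> mac-vrf2 (L2 only, no gateway)
host3 -- Leaf3: routed subinterface  -> ip-vrf1
host4 -- Leaf4: bridged subinterface -> mac-vrf2 (L2 only, no gateway)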
Now, let’s create our first VRF in Leaf1:
network-instance mac-vrf1
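If you are following along in the SR Linux CLI, remember to enter the candidate datastore first with enter candidate; the network-instance mac-vrf1 command then creates the instance and drops you into its context.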
And configure it, this time by pasting the whole block in SR Linux's JSON-like tree format:
type mac-vrf
description mac-vrf1
interface ethernet-1/10.1 {
}
interface irb1.100 {
}
vxlan-interface vxlan1.100 {
}
protocols {
    bgp-evpn {
        bgp-instance 1 {
            admin-state enable
            vxlan-interface vxlan1.100
            evi 100
            ecmp 2
        }
    }
    bgp-vpn {
        bgp-instance 1 {
            route-target {
                export-rt target:1:100
                import-rt target:1:100
            }
        }
    }
}
The mac-vrfs in Leaf2 and Leaf4 can be configured in the same way, but without an IRB interface. Note that the evi and route-target values must be unique per service.
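As a sketch, mac-vrf2 on Leaf2 and Leaf4 could look like the following. The service values here (evi 200, target:1:200, vxlan1.200) are example values I picked for this walkthrough, and I assume the hosts attach on ethernet-1/10 as in Leaf1:

network-instance mac-vrf2 {
    type mac-vrf
    description mac-vrf2
    interface ethernet-1/10.1 {
    }
    vxlan-interface vxlan1.200 {
    }
    protocols {
        bgp-evpn {
            bgp-instance 1 {
                admin-state enable
                vxlan-interface vxlan1.200
                evi 200
                ecmp 2
            }
        }
        bgp-vpn {
            bgp-instance 1 {
                route-target {
                    export-rt target:1:200
                    import-rt target:1:200
                }
            }
        }
    }
}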
In Leaf1, the irb interface is used as the gateway and as the attachment point to the ip-vrf:

interface irb1 {
    subinterface 100 {
        ipv4 {
            address 192.168.1.1/24 {
            }
        }
    }
}
The interface to the host:
interface ethernet-1/10 {
    admin-state enable
    subinterface 1 {
        type bridged
        admin-state enable
    }
}
In Leaf1, a VXLAN tunnel interface is not strictly required for mac-vrf1, since this mac-vrf is local to Leaf1 in the current overlay topology. Still, we configure it in case we want to extend mac-vrf1 to other VTEPs in the future:
tunnel-interface vxlan1 {
    vxlan-interface 100 {
        type bridged
        ingress {
            vni 100
        }
    }
}
In the same way, Leaf2 and Leaf4 need to be configured with a VXLAN tunnel interface for mac-vrf2 to consume. Note that the vni value must be unique per service.
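Sticking with the example values I chose above for mac-vrf2, that tunnel interface could look like this:

tunnel-interface vxlan1 {
    vxlan-interface 200 {
        type bridged
        ingress {
            vni 200
        }
    }
}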
At this point, you can commit and validate your configuration changes.
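A minimal sketch of that flow in the SR Linux CLI (commit validate checks the candidate configuration without applying it; commit now applies it and leaves candidate mode):

commit validate
commit now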
So far, we created a mac-vrf and connected the host port to it.
Now let’s create the ip-vrf1:
network-instance ip-vrf1 {
    type ip-vrf
    admin-state enable
    interface irb1.100 {
    }
    vxlan-interface vxlan1.1 {
    }
    protocols {
        bgp-evpn {
            bgp-instance 1 {
                admin-state enable
                vxlan-interface vxlan1.1
                evi 10
                ecmp 2
            }
        }
        bgp-vpn {
            bgp-instance 1 {
                route-target {
                    export-rt target:1:10
                    import-rt target:1:10
                }
            }
        }
    }
}
This part is to be configured in both Leaf1 and Leaf3 (with a small difference that we will get to shortly).
Similar to mac-vrf1, we created ip-vrf1, but with its own evi, route-target, and tunnel interface. The irb1.100 interface is what binds mac-vrf1 to ip-vrf1.
Before you commit, configure the VXLAN tunnel interface of ip-vrf1:
tunnel-interface vxlan1 {
    vxlan-interface 1 {
        type routed
        ingress {
            vni 10
        }
    }
}
As you see, the VNI in this tunnel interface is different from the one in the mac-vrf. The mac-vrf uses its own VXLAN tunnel (vxlan1.100) for bridged traffic destined to a remote VTEP, while vxlan1.1 is used for routed traffic that terminates at a remote VTEP.
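To make that concrete: when host1 sends a packet to host3, the frame enters mac-vrf1 on Leaf1, is routed through irb1.100 into ip-vrf1, and is then VXLAN-encapsulated with VNI 10 toward Leaf3's VTEP. Pure L2 traffic between two mac-vrf1 endpoints on different VTEPs would instead use VNI 100.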
In Leaf3, however, we don't have any mac-vrf, so host3 is connected directly to the ip-vrf. That's another way of attaching workloads, done via a subinterface of type routed.
interface ethernet-1/10 {
    admin-state enable
    subinterface 1 {
        type routed
        admin-state enable
        ipv4 {
            address 192.168.3.1/30 {
            }
        }
    }
}
With this, host3 will be connected to the ip-vrf, and its IP prefix will be advertised within an EVPN route type 5 (RT5) update; we will verify this below.
After this, the ip-vrf configuration in Leaf3 would look like this:
network-instance ip-vrf1 {
    type ip-vrf
    admin-state enable
    interface ethernet-1/10.1 {
    }
    vxlan-interface vxlan1.1 {
    }
    ...
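Once this is committed on both leaves, you can verify the RT5 exchange (a hedged sketch; output formats vary by release) by checking the EVPN route-type counters and looking for host3's prefix in Leaf1's ip-vrf route table:

show network-instance default protocols bgp routes evpn route-type summary
show network-instance ip-vrf1 route-table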
Now it is time to configure our hosts with an IP address.
Connect to the hosts:
# docker exec -ti clab-my_dc1-h3 bash
To leave the shell, just type exit (the ‘ctrl + p’, ‘ctrl + q’ detach sequence applies to docker attach sessions, not docker exec).
If you used the Alpine image that I pointed out in the containerlab post, you can even ssh (admin/admin) into it from your host, and also benefit from tools like nc, iperf, tcpdump, etc., which already come with it.
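For example, assuming containerlab has added the node names to your /etc/hosts (which it does by default): ssh admin@clab-my_dc1-h1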

Once you're in, configure the IP addresses on the hosts.
h1:~# sudo ip addr add 192.168.1.11/24 dev eth1
h2:~# sudo ip addr add 172.16.1.12/24 dev eth1
h3:~# sudo ip addr add 192.168.3.2/30 dev eth1
h4:~# sudo ip addr add 172.16.1.14/24 dev eth1
We configure the eth1 interfaces since those are the ones mapped to the leaf ports.
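One more thing to watch out for: for the routed paths to work, host1 and host3 need a route toward their fabric gateways. Assuming nothing is preconfigured in your host image, something like this would do:

h1:~# sudo ip route add default via 192.168.1.1
h3:~# sudo ip route add default via 192.168.3.1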
Finally, host2 and host4 are in the same broadcast domain (mac-vrf2) and can reach each other via IP addresses from the same subnet. Host1 is connected to ip-vrf1 via mac-vrf1, so it can be routed to host3, which is attached directly to ip-vrf1.
You can send traffic between your hosts and check the EVPN routes, traffic counters, etc. to validate your configuration and better understand how your IP fabric works.
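For example (a hedged sketch, reusing the addresses above):

h1:~# ping 192.168.3.2     # routed via ip-vrf1
h2:~# ping 172.16.1.14     # bridged within mac-vrf2

And on Leaf1, the MAC addresses learned in mac-vrf1 can be listed with show network-instance mac-vrf1 bridge-table mac-table all.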
Have fun!