Friday, 16 March 2012

Junos Incongruent Unicast and Multicast Routing

Continuing the break from IOS for the time being, this post covers multicast on Junos.  Specifically, I am playing around with Olives, which do not seem to support multicast on Ethernet interfaces directly; GRE tunnels, however, support it fine.
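
Every inter-router link in this lab therefore follows the same pattern: an em1 VLAN sub-interface carries the underlay /24, and a GRE unit sourced and terminated on those addresses carries a second /24 which the routing protocols (and PIM) actually run over. Taking R1's side of the R1-R2 link as an example (the full configurations are below):

set interfaces em1 unit 12 vlan-id 12
set interfaces em1 unit 12 family inet address 10.1.12.1/24
set interfaces gre unit 12 tunnel source 10.1.12.1
set interfaces gre unit 12 tunnel destination 10.1.12.2
set interfaces gre unit 12 family inet address 10.11.12.1/24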


This is the unicast topology description we're starting with:
  • R1 connects to R2 and R3
  • R2 connects to R1 and R3
  • R3 connects to R1, R2 and R4
  • R4 connects to R3

R1, R2 and R3 are using OSPF as their IGP, while R4 has a static default route pointing to R3 (R3's interface facing R4 is added to OSPF as passive).

R1
root@R1> show configuration | display set
set system host-name R1
set interfaces em1 vlan-tagging
set interfaces em1 unit 12 vlan-id 12
set interfaces em1 unit 12 family inet address 10.1.12.1/24
set interfaces em1 unit 13 vlan-id 13
set interfaces em1 unit 13 family inet address 10.1.13.1/24
set interfaces gre unit 12 tunnel source 10.1.12.1
set interfaces gre unit 12 tunnel destination 10.1.12.2
set interfaces gre unit 12 family inet address 10.11.12.1/24
set interfaces gre unit 13 tunnel source 10.1.13.1
set interfaces gre unit 13 tunnel destination 10.1.13.3
set interfaces gre unit 13 family inet address 10.11.13.1/24
set interfaces lo0 unit 0 family inet address 1.1.1.1/32
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface gre.12
set protocols ospf area 0.0.0.0 interface gre.13


R2
root@R2> show configuration | display set
set system host-name R2
set interfaces em1 vlan-tagging
set interfaces em1 unit 12 vlan-id 12
set interfaces em1 unit 12 family inet address 10.1.12.2/24
set interfaces em1 unit 23 vlan-id 23
set interfaces em1 unit 23 family inet address 10.1.23.2/24
set interfaces gre unit 12 tunnel source 10.1.12.2
set interfaces gre unit 12 tunnel destination 10.1.12.1
set interfaces gre unit 12 family inet address 10.11.12.2/24
set interfaces gre unit 23 tunnel source 10.1.23.2
set interfaces gre unit 23 tunnel destination 10.1.23.3
set interfaces gre unit 23 family inet address 10.11.23.2/24
set interfaces lo0 unit 0 family inet address 2.2.2.2/32
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface gre.12
set protocols ospf area 0.0.0.0 interface gre.23

R3
root@R3> show configuration | display set
set system host-name R3
set interfaces em1 vlan-tagging
set interfaces em1 unit 13 vlan-id 13
set interfaces em1 unit 13 family inet address 10.1.13.3/24
set interfaces em1 unit 23 vlan-id 23
set interfaces em1 unit 23 family inet address 10.1.23.3/24
set interfaces em1 unit 34 vlan-id 34
set interfaces em1 unit 34 family inet address 10.1.34.3/24
set interfaces gre unit 13 tunnel source 10.1.13.3
set interfaces gre unit 13 tunnel destination 10.1.13.1
set interfaces gre unit 13 family inet address 10.11.13.3/24
set interfaces gre unit 23 tunnel source 10.1.23.3
set interfaces gre unit 23 tunnel destination 10.1.23.2
set interfaces gre unit 23 family inet address 10.11.23.3/24
set interfaces gre unit 34 tunnel source 10.1.34.3
set interfaces gre unit 34 tunnel destination 10.1.34.4
set interfaces gre unit 34 family inet address 10.11.34.3/24
set interfaces lo0 unit 0 family inet address 3.3.3.3/32
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface gre.13
set protocols ospf area 0.0.0.0 interface gre.23
set protocols ospf area 0.0.0.0 interface gre.34 passive

R4
root@R4> show configuration | display set
set system host-name R4
set interfaces em1 vlan-tagging
set interfaces em1 unit 34 vlan-id 34
set interfaces em1 unit 34 family inet address 10.1.34.4/24
set interfaces gre unit 34 tunnel source 10.1.34.4
set interfaces gre unit 34 tunnel destination 10.1.34.3
set interfaces gre unit 34 family inet address 10.11.34.4/24
set routing-options static route 0.0.0.0/0 next-hop 10.11.34.3
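
Before touching multicast it's worth sanity-checking the unicast layer. Something along these lines (outputs omitted) should show OSPF adjacencies up over the gre.x interfaces on R1/R2/R3, and R4 reaching a loopback via its static default:

root@R1> show ospf neighbor
root@R4> ping 2.2.2.2 rapid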

R4 will be our multicast source in this example, so let's enable PIM-SM on R1/R2/R3:

root@R1> configure
Entering configuration mode

[edit]
root@R1# set protocols pim interface lo0.0 mode sparse
root@R1# set protocols pim interface gre.12 mode sparse
root@R1# set protocols pim interface gre.13 mode sparse
root@R1# commit
commit complete

root@R2> configure
Entering configuration mode

[edit]
root@R2# set protocols pim interface lo0.0 mode sparse
root@R2# set protocols pim interface gre.12 mode sparse
root@R2# set protocols pim interface gre.23 mode sparse
root@R2# commit
commit complete

root@R3> configure
Entering configuration mode

[edit]
root@R3# set protocols pim interface lo0.0 mode sparse
root@R3# set protocols pim interface gre.13 mode sparse
root@R3# set protocols pim interface gre.23 mode sparse
root@R3# set protocols pim interface gre.34 mode sparse
root@R3# commit
commit complete

Let's make sure R1/R2/R3 can see each other:

root@R1# run show pim neighbors
Instance: PIM.master
B = Bidirectional Capable, G = Generation Identifier,
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority

Interface           IP V Mode        Option      Uptime Neighbor addr
gre.12               4 2             HPLG      00:03:28 10.11.12.2
gre.13               4 2             HPLG      00:01:33 10.11.13.3

root@R2# run show pim neighbors
Instance: PIM.master
B = Bidirectional Capable, G = Generation Identifier,
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority

Interface           IP V Mode        Option      Uptime Neighbor addr
gre.12               4 2             HPLG      00:02:47 10.11.12.1
gre.23               4 2             HPLG      00:00:52 10.11.23.3

root@R3# run show pim neighbors
Instance: PIM.master
B = Bidirectional Capable, G = Generation Identifier,
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority

Interface           IP V Mode        Option      Uptime Neighbor addr
gre.13               4 2             HPLG      00:00:57 10.11.13.1
gre.23               4 2             HPLG      00:00:56 10.11.23.2

Fine, so let's make this a PIM-ASM topology with R2 as our RP.  We'll use BSR in this example, though we could also use Auto-RP (it's not just for Cisco) or static RPs.
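
For reference, if we went the static RP route instead, it would be a single statement on each non-RP router, along the lines of the sketch below (not applied in this lab; we'll stick with BSR):

set protocols pim rp static address 2.2.2.2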

root@R2# set protocols pim rp bootstrap family inet priority 1
root@R2# commit
commit complete

root@R1# run show pim bootstrap
Instance: PIM.master

BSR                     Pri Local address           Pri State      Timeout
2.2.2.2                   1 1.1.1.1                   0 InEligible      77
None                      0 (null)                    0                  0

[edit]
root@R1# run show pim rps
Instance: PIM.master
Address family INET

Address family INET6

So we have our BSR sorted; we just need to set up the cRP.  In Junos this is configured as if we were setting a static RP:

root@R2# set protocols pim rp local address 2.2.2.2
root@R2# commit
commit complete
root@R2# run show pim rps
Instance: PIM.master
Address family INET
RP address               Type        Holdtime Timeout Groups Group prefixes
2.2.2.2                  bootstrap        150    None      0 224.0.0.0/4
2.2.2.2                  static             0    None      0 224.0.0.0/4

As we can see, R1 has learned R2 as the RP via BSR:

root@R1# run show pim rps
Instance: PIM.master
Address family INET
RP address               Type        Holdtime Timeout Groups Group prefixes
2.2.2.2                  bootstrap        150     143      0 224.0.0.0/4

Now, setting up a Juniper router to join a multicast group is unfortunately not as straightforward as it is on an IOS-based device (ip igmp join-group a.b.c.d); however, we can take advantage of SAP (the Session Announcement Protocol) and make the Junos device a SAP listener, which is nearly as useful.

root@R1# set protocols sap listen 239.0.0.1
root@R1# commit
commit complete

root@R1# run show pim join
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 224.2.127.254
    Source: *
    RP: 2.2.2.2
    Flags: sparse,rptree,wildcard
    Upstream interface: gre.12

Group: 239.0.0.1
    Source: *
    RP: 2.2.2.2
    Flags: sparse,rptree,wildcard
    Upstream interface: gre.12

Instance: PIM.master Family: INET6
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

When you enable a SAP listener, the router joins the global-scope SAP group 224.2.127.254 plus any groups you specifically request.
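
(As an aside, some Junos releases also support static IGMP joins on an interface, e.g. set protocols igmp interface gre.12 static group 239.0.0.1, but that only pulls traffic onto the interface on behalf of a downstream receiver; unlike the SAP listener, it doesn't make the router itself answer pings to the group, which is what we want for this test.)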

R4 will now ping 239.0.0.1, and we should receive a unicast response to the ping:

root@R4> ping 239.0.0.1 bypass-routing interface gre.34 ttl 5 count 5
PING 239.0.0.1 (239.0.0.1): 56 data bytes
64 bytes from 10.11.13.1: icmp_seq=0 ttl=63 time=6.852 ms
64 bytes from 10.11.13.1: icmp_seq=1 ttl=63 time=2.854 ms
64 bytes from 10.11.13.1: icmp_seq=2 ttl=63 time=3.180 ms
64 bytes from 10.11.13.1: icmp_seq=3 ttl=63 time=2.673 ms
64 bytes from 10.11.13.1: icmp_seq=4 ttl=63 time=2.984 ms

--- 239.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.673/3.709/6.852/1.580 ms

We should be able to see that once R1 identified the source for 239.0.0.1 (10.11.34.4), it established an SPT towards it, which in this case takes a different path than the wildcard entry:

root@R1# run show pim join inet
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 224.2.127.254
    Source: *
    RP: 2.2.2.2
    Flags: sparse,rptree,wildcard
    Upstream interface: gre.12

Group: 239.0.0.1
    Source: *
    RP: 2.2.2.2
    Flags: sparse,rptree,wildcard
    Upstream interface: gre.12

Group: 239.0.0.1
    Source: 10.11.34.4
    Flags: sparse,spt
    Upstream interface: gre.13

To give a simple demonstration of the value of multicast, let's have R2 and R3 also set up SAP listeners for 239.0.0.1 and then get R4 to ping the multicast address:

root@R2# set protocols sap listen 239.0.0.1
root@R2# commit

root@R3# set protocols sap listen 239.0.0.1
root@R3# commit


root@R4> ping 239.0.0.1 bypass-routing interface gre.34 ttl 5 count 5
PING 239.0.0.1 (239.0.0.1): 56 data bytes
64 bytes from 10.11.34.3: icmp_seq=0 ttl=64 time=5.850 ms
64 bytes from 10.11.13.1: icmp_seq=0 ttl=63 time=6.765 ms (DUP!)
64 bytes from 10.11.23.2: icmp_seq=0 ttl=63 time=6.853 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=1 ttl=64 time=2.408 ms
64 bytes from 10.11.13.1: icmp_seq=1 ttl=63 time=3.983 ms (DUP!)
64 bytes from 10.11.23.2: icmp_seq=1 ttl=63 time=4.158 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=2 ttl=64 time=2.443 ms
64 bytes from 10.11.13.1: icmp_seq=2 ttl=63 time=4.377 ms (DUP!)
64 bytes from 10.11.23.2: icmp_seq=2 ttl=63 time=4.767 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=3 ttl=64 time=4.849 ms
64 bytes from 10.11.23.2: icmp_seq=3 ttl=63 time=8.193 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=3 ttl=63 time=8.320 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=4 ttl=64 time=2.570 ms

--- 239.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, +8 duplicates, 0% packet loss
round-trip min/avg/max/stddev = 2.408/5.041/8.320/1.958 ms

Okay, this is all pretty straightforward.  Things start to get more complicated when we look at making our multicast topology different from (incongruent with) our unicast topology.

If we make the link between R2 and R3 (gre.23) unicast-only, we are going to have to make some changes to our multicast RPF topology, which up until now has been using the unicast (inet.0) table.

First, let's disable PIM between R2 and R3:

root@R2# delete protocols pim interface gre.23
root@R2# commit
commit complete

root@R3# delete protocols pim interface gre.23
root@R3# commit
commit complete



If we recall, R4 (10.11.34.4) is our multicast source and R2 (2.2.2.2) is our RP.

  • For R3 to reach the RP via multicast, traffic needs to go via R1
  • For R2 to receive multicast traffic from R4, traffic needs to go via R1

Right now OSPF will utilise the direct link between R2 and R3, so static multicast routes are what we will use to resolve this. We will store our RPF information in the inet.2 routing table (the table Junos uses for multicast RPF routes), which will be populated with the same information as the unicast table (inet.0) but can be overridden where necessary.

First we'll create a rib-group called if-rib that will be used to place direct and local interface-based routes into inet.2 (as we are changing the default behaviour, we also need to keep importing into inet.0 so we don't break our unicast routing):

[edit]
root@R2# set routing-options rib-groups if-rib import-rib [ inet.0 inet.2 ]
root@R2# set routing-options interface-routes rib-group inet if-rib

Then we need OSPF to populate inet.2 with its dynamic routes, which is configured under traffic-engineering:

[edit]
root@R2# set protocols ospf traffic-engineering multicast-rpf-routes
root@R2# set protocols ospf traffic-engineering shortcuts

root@R2# run show route table inet.2 10.11.34.0

inet.2: 13 destinations, 15 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.11.34.0/24      *[OSPF/10] 00:29:57, metric 2
                    > via gre.23
 
root@R2# run show multicast rpf 10.11.34.0
Multicast RPF table: inet.0 , 17 entries

10.11.34.0/24
    Protocol: OSPF
    Interface: gre.23
    Neighbor: (null)


Now that we have populated inet.2, we need to get PIM to use it instead of inet.0 for its RPF checks

[edit]
root@R2# set routing-options rib-groups mcast-rib export-rib inet.2
root@R2# set routing-options rib-groups mcast-rib import-rib [ inet.2 inet.0 ]
root@R2# set protocols pim rib-group inet mcast-rib

root@R2# run show multicast rpf 10.11.34.0
Multicast RPF table: inet.2 , 13 entries

10.11.34.0/24
    Protocol: OSPF
    Interface: gre.23
    Neighbor: (null)


Finally, we can configure our static route in inet.2 to override the default next hop used for the RPF check (currently learnt via OSPF):

root@R2# set routing-options rib inet.2 static route 10.11.34.0/24 next-hop 10.11.12.1
root@R2# commit and-quit
commit complete
Exiting configuration mode


root@R2> show route table inet.0 10.11.34.4

inet.0: 17 destinations, 19 routes (17 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.11.34.0/24      *[OSPF/10] 00:14:39, metric 2
                    > via gre.23

root@R2> show route table inet.2 10.11.34.4

inet.2: 14 destinations, 16 routes (14 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.11.34.4/32      *[Static/5] 00:04:32
                    > to 10.11.12.1 via gre.12



root@R2> show multicast rpf 10.11.34.0
Multicast RPF table: inet.2 , 13 entries

10.11.34.0/24
    Protocol: Static
    Interface: gre.12
    Neighbor: 10.11.12.1

R2 now uses the interface facing R1 for its RPF check towards R4.



This is only half the battle; R3 needs to be able to resolve its RPF check towards R2 so that the BSR and RP RPF checks pass:

root@R3# set routing-options rib-groups if-rib import-rib [ inet.0 inet.2 ]
root@R3# set routing-options interface-routes rib-group inet if-rib
root@R3# set routing-options rib inet.2 static route 2.2.2.2/32 next-hop 10.11.13.1
root@R3# set routing-options rib-groups mcast-rib export-rib inet.2
root@R3# set routing-options rib-groups mcast-rib import-rib [ inet.2 inet.0 ]
root@R3# set protocols pim rib-group inet mcast-rib
root@R3# set protocols ospf traffic-engineering multicast-rpf-routes
root@R3# set protocols ospf traffic-engineering shortcuts
root@R3# commit and-quit
commit complete
Exiting configuration mode


root@R3> show route 2.2.2.2

inet.0: 20 destinations, 23 routes (20 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[OSPF/10] 00:25:51, metric 1
                    > via gre.23

inet.2: 16 destinations, 20 routes (16 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[Static/5] 00:25:51
                    > to 10.11.13.1 via gre.13
                    [OSPF/10] 00:25:51, metric 1
                    > via gre.23


root@R3> show multicast rpf 2.2.2.2
Multicast RPF table: inet.2 , 16 entries

2.2.2.2/32
    Protocol: Static
    Interface: gre.13
    Neighbor: 10.11.13.1
 
R1 doesn't require any changes to its configuration, as it already has optimal routes to both R2 and R3 via its direct connections.
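
If we wanted to double-check that, something along these lines on R1 (output omitted) should still report inet.0 as the RPF table, with gre.12 as the RPF interface towards the RP, since R1 has no PIM rib-group configured:

root@R1> show multicast rpf 2.2.2.2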

Let's verify our multicast flows:

root@R4> ping 239.0.0.1 bypass-routing interface gre.34 ttl 5 count 5
PING 239.0.0.1 (239.0.0.1): 56 data bytes
64 bytes from 10.11.34.3: icmp_seq=0 ttl=64 time=11.228 ms
64 bytes from 10.11.23.2: icmp_seq=0 ttl=63 time=15.762 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=0 ttl=63 time=15.944 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=1 ttl=64 time=3.023 ms
64 bytes from 10.11.23.2: icmp_seq=1 ttl=63 time=4.673 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=1 ttl=63 time=4.829 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=2 ttl=64 time=4.700 ms
64 bytes from 10.11.23.2: icmp_seq=2 ttl=63 time=6.255 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=2 ttl=63 time=6.407 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=3 ttl=64 time=2.658 ms
64 bytes from 10.11.23.2: icmp_seq=3 ttl=63 time=4.857 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=3 ttl=63 time=4.891 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=4 ttl=64 time=2.191 ms

--- 239.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, +8 duplicates, 0% packet loss
round-trip min/avg/max/stddev = 2.191/6.724/15.944/4.445 ms

Let's verify the multicast path from R4 to R2 using the mtrace command:

root@R2> mtrace from-source source 10.11.34.4 group 239.0.0.1
Mtrace from 10.11.34.4 to 10.11.12.2 via group 239.0.0.1
Querying full reverse path... * *
  0  ? (10.11.12.2)
 -1  ? (10.11.12.1)  PIM  thresh^ 1
 -2  ? (10.11.13.3)  PIM  thresh^ 1
 -3  ? (10.11.34.4)
Round trip time 5 ms; total ttl of 2 required.

Waiting to accumulate statistics...Results after 10 seconds:

  Source        Response Dest    Overall     Packet Statistics For Traffic From
10.11.34.4      10.11.12.2       Packet      10.11.34.4 To 239.0.0.1
     v       __/  rtt    3 ms     Rate       Lost/Sent = Pct  Rate
10.11.34.3
10.11.13.3      ?
     v     ^      ttl    2                      0/0    = --    0 pps
10.11.13.1
10.11.12.1      ?
     v      \__   ttl    3                      ?/0            0 pps
10.11.12.2      10.11.12.2
  Receiver      Query Source

The multicast path goes R4 -> R3 -> R1 -> R2.

root@R4> traceroute 2.2.2.2
traceroute to 2.2.2.2 (2.2.2.2), 30 hops max, 40 byte packets
 1  10.11.34.3 (10.11.34.3)  4.967 ms  4.494 ms  1.837 ms
 2  2.2.2.2 (2.2.2.2)  0.912 ms  0.997 ms  1.119 ms

While the unicast path goes R4 -> R3 -> R2.
