Wednesday 28 March 2012

Taking a break from Junos

Playing around with Junos was a nice diversion and all but now that I have passed the JNCIP-SP exam, unless there's a project that demands it, I'll be giving it a break for the time being.

In the olden days, the JNCIP-M was a lab exam, with the JNCIS-M being roughly the equivalent of the CCIE written but recognised as a certification in its own right.  The JNCIE-M was the pinnacle of the M/T-series track, with another lab exam of its own.  In the new SP track, only the JNCIE-SP exam has a hands-on lab component, which meant that when it came time to renew my JNCIS-M I could either go for the JNCIS-SP or upgrade to the JNCIP-SP (I chose the latter).

I'm not quite back on the CCIE train yet - through work I have been exposed to Fortinet firewalls and have a Fortinet certification exam (FCNSA) coming up soon and there's one last Alcatel-Lucent exam (Triple Play Services) to knock over in the next month or so and then it's diving back into IOS. 

I'm not sure if I feel like blogging about those topics at the moment, so it may be quiet here until the end of April...

So I still like playing with routers, just not IOS routers right now...

Friday 16 March 2012

Junos Incongruent Unicast and Multicast routing

Continuing the break from IOS for the time being, this post covers multicast on Junos.  Specifically, I am playing around with Olives, which do not seem to support multicast on Ethernet interfaces directly; GRE tunnels, however, support it fine.


This is the unicast topology description we're starting with:
  • R1 connects to R2 and R3
  • R2 connects to R1 and R3
  • R3 connects to R1, R2 and R4
  • R4 connects to R3

R1, R2 and R3 are using OSPF as their IGP, while R4 has a static default route pointing to R3 (R3's interface facing R4 is passively added to OSPF).

R1
root@R1> show configuration | display set
set system host-name R1
set interfaces em1 vlan-tagging
set interfaces em1 unit 12 vlan-id 12
set interfaces em1 unit 12 family inet address 10.1.12.1/24
set interfaces em1 unit 13 vlan-id 13
set interfaces em1 unit 13 family inet address 10.1.13.1/24
set interfaces gre unit 12 tunnel source 10.1.12.1
set interfaces gre unit 12 tunnel destination 10.1.12.2
set interfaces gre unit 12 family inet address 10.11.12.1/24
set interfaces gre unit 13 tunnel source 10.1.13.1
set interfaces gre unit 13 tunnel destination 10.1.13.3
set interfaces gre unit 13 family inet address 10.11.13.1/24
set interfaces lo0 unit 0 family inet address 1.1.1.1/32
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface gre.12
set protocols ospf area 0.0.0.0 interface gre.13


R2
root@R2> show configuration | display set
set system host-name R2
set interfaces em1 vlan-tagging
set interfaces em1 unit 12 vlan-id 12
set interfaces em1 unit 12 family inet address 10.1.12.2/24
set interfaces em1 unit 23 vlan-id 23
set interfaces em1 unit 23 family inet address 10.1.23.2/24
set interfaces gre unit 12 tunnel source 10.1.12.2
set interfaces gre unit 12 tunnel destination 10.1.12.1
set interfaces gre unit 12 family inet address 10.11.12.2/24
set interfaces gre unit 23 tunnel source 10.1.23.2
set interfaces gre unit 23 tunnel destination 10.1.23.3
set interfaces gre unit 23 family inet address 10.11.23.2/24
set interfaces lo0 unit 0 family inet address 2.2.2.2/32
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface gre.12
set protocols ospf area 0.0.0.0 interface gre.23

R3
root@R3> show configuration | display set
set system host-name R3
set interfaces em1 vlan-tagging
set interfaces em1 unit 13 vlan-id 13
set interfaces em1 unit 13 family inet address 10.1.13.3/24
set interfaces em1 unit 23 vlan-id 23
set interfaces em1 unit 23 family inet address 10.1.23.3/24
set interfaces em1 unit 34 vlan-id 34
set interfaces em1 unit 34 family inet address 10.1.34.3/24
set interfaces gre unit 13 tunnel source 10.1.13.3
set interfaces gre unit 13 tunnel destination 10.1.13.1
set interfaces gre unit 13 family inet address 10.11.13.3/24
set interfaces gre unit 23 tunnel source 10.1.23.3
set interfaces gre unit 23 tunnel destination 10.1.23.2
set interfaces gre unit 23 family inet address 10.11.23.3/24
set interfaces gre unit 34 tunnel source 10.1.34.3
set interfaces gre unit 34 tunnel destination 10.1.34.4
set interfaces gre unit 34 family inet address 10.11.34.3/24
set interfaces lo0 unit 0 family inet address 3.3.3.3/32
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface gre.13
set protocols ospf area 0.0.0.0 interface gre.23
set protocols ospf area 0.0.0.0 interface gre.34 passive

R4
root@R4> show configuration | display set
set system host-name R4
set interfaces em1 vlan-tagging
set interfaces em1 unit 34 vlan-id 34
set interfaces em1 unit 34 family inet address 10.1.34.4/24
set interfaces gre unit 34 tunnel source 10.1.34.4
set interfaces gre unit 34 tunnel destination 10.1.34.3
set interfaces gre unit 34 family inet address 10.11.34.4/24
set routing-options static route 0.0.0.0/0 next-hop 10.11.34.3

R4 will be our multicast source in this example, so let's enable PIM-SM on R1/R2/R3:

root@R1> configure
Entering configuration mode

[edit]
root@R1# set protocols pim interface lo0.0 mode sparse
root@R1# set protocols pim interface gre.12 mode sparse
root@R1# set protocols pim interface gre.13
root@R1# commit
commit complete

root@R2> configure
Entering configuration mode

[edit]
root@R2# set protocols pim interface lo0.0 mode sparse
root@R2# set protocols pim interface gre.12 mode sparse
root@R2# set protocols pim interface gre.23 mode sparse
root@R2# commit
commit complete

root@R3> configure
Entering configuration mode

[edit]
root@R3# set protocols pim interface lo0.0 mode sparse
root@R3# set protocols pim interface gre.13 mode sparse
root@R3# set protocols pim interface gre.23 mode sparse
root@R3# set protocols pim interface gre.34 mode sparse
root@R3# commit
commit complete

Let's make sure R1/R2/R3 see each other:

root@R1# run show pim neighbors
Instance: PIM.master
B = Bidirectional Capable, G = Generation Identifier,
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority

Interface           IP V Mode        Option      Uptime Neighbor addr
gre.12               4 2             HPLG      00:03:28 10.11.12.2
gre.13               4 2             HPLG      00:01:33 10.11.13.3

root@R2# run show pim neighbors
Instance: PIM.master
B = Bidirectional Capable, G = Generation Identifier,
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority

Interface           IP V Mode        Option      Uptime Neighbor addr
gre.12               4 2             HPLG      00:02:47 10.11.12.1
gre.23               4 2             HPLG      00:00:52 10.11.23.3

root@R3# run show pim neighbors
Instance: PIM.master
B = Bidirectional Capable, G = Generation Identifier,
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority

Interface           IP V Mode        Option      Uptime Neighbor addr
gre.13               4 2             HPLG      00:00:57 10.11.13.1
gre.23               4 2             HPLG      00:00:56 10.11.23.2

Fine, so let's make this a PIM-ASM topology with R2 as our RP.  We'll use BSR in this example, though we could also use Auto-RP (it's not just for Cisco) or a static RP.
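As an aside, if we wanted to avoid BSR entirely, the static RP option is the simplest of the three: the RP keeps its local RP definition and every other router is told the RP address directly. A minimal sketch (not applied in this lab, and assuming 2.2.2.2 stays as the RP address) would be one line on each non-RP router:

# Hypothetical static-RP alternative (illustration only, not used below)
set protocols pim rp static address 2.2.2.2

The trade-off is that a static RP has to be configured (and changed) on every router, whereas BSR distributes the RP information for us.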

root@R2# set protocols pim rp bootstrap family inet priority 1
root@R2# commit

root@R1# run show pim bootstrap
Instance: PIM.master

BSR                     Pri Local address           Pri State      Timeout
2.2.2.2                   1 1.1.1.1                   0 InEligible      77
None                      0 (null)                    0                  0

[edit]
root@R1# run show pim rps
Instance: PIM.master
Address family INET

Address family INET6

So we have our BSR sorted; we just need to set up the candidate RP (cRP).  In Junos this is configured as if we were setting a static RP:

root@R2# set protocols pim rp local address 2.2.2.2
root@R2# commit
commit complete
root@R2# run show pim rps
Instance: PIM.master
Address family INET
RP address               Type        Holdtime Timeout Groups Group prefixes
2.2.2.2                  bootstrap        150    None      0 224.0.0.0/4
2.2.2.2                  static             0    None      0 224.0.0.0/4

As we can see on R1, R2 is known as the RP, learnt through BSR:

root@R1# run show pim rps
Instance: PIM.master
Address family INET
RP address               Type        Holdtime Timeout Groups Group prefixes
2.2.2.2                  bootstrap        150     143      0 224.0.0.0/4

Now, getting a Juniper router to join a multicast group is unfortunately not as straightforward as it is on an IOS-based device (ip igmp join-group a.b.c.d); however, we can take advantage of SAP and make the Junos device a SAP listener, which is nearly as useful.

root@R1# set protocols sap listen 239.0.0.1
root@R1# commit
commit complete

root@R1# run show pim join
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 224.2.127.254
    Source: *
    RP: 2.2.2.2
    Flags: sparse,rptree,wildcard
    Upstream interface: gre.12

Group: 239.0.0.1
    Source: *
    RP: 2.2.2.2
    Flags: sparse,rptree,wildcard
    Upstream interface: gre.12

Instance: PIM.master Family: INET6
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

When you enable a SAP listener, the router joins the global-scope group 224.2.127.254 plus anything else you specifically request.
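If you want to double-check what the box is listening to, Junos has a show command for the SAP listener state (a quick sketch; I've omitted the output as the format varies between releases):

root@R1# run show sap listen

This should list 224.2.127.254 along with any groups you added, such as 239.0.0.1 above.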

R4 will now ping 239.0.0.1, and we should receive a unicast response to the ping:

root@R4> ping 239.0.0.1 bypass-routing interface gre.34 ttl 5 count 5
PING 239.0.0.1 (239.0.0.1): 56 data bytes
64 bytes from 10.11.13.1: icmp_seq=0 ttl=63 time=6.852 ms
64 bytes from 10.11.13.1: icmp_seq=1 ttl=63 time=2.854 ms
64 bytes from 10.11.13.1: icmp_seq=2 ttl=63 time=3.180 ms
64 bytes from 10.11.13.1: icmp_seq=3 ttl=63 time=2.673 ms
64 bytes from 10.11.13.1: icmp_seq=4 ttl=63 time=2.984 ms

--- 239.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.673/3.709/6.852/1.580 ms

We should be able to see that once R1 identified the source for 239.0.0.1 (10.11.34.4), it established an SPT to it, which in this case takes a different path than the wildcard entry:

root@R1# run show pim join inet
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 224.2.127.254
    Source: *
    RP: 2.2.2.2
    Flags: sparse,rptree,wildcard
    Upstream interface: gre.12

Group: 239.0.0.1
    Source: *
    RP: 2.2.2.2
    Flags: sparse,rptree,wildcard
    Upstream interface: gre.12

Group: 239.0.0.1
    Source: 10.11.34.4
    Flags: sparse,spt
    Upstream interface: gre.13

To demonstrate the value of multicast simply, let's make R2 and R3 set up SAP listeners for 239.0.0.1 as well, and then get R4 to ping the multicast address again:

root@R2# set protocols sap listen 239.0.0.1
root@R2# commit

root@R3# set protocols sap listen 239.0.0.1
root@R3# commit


root@R4> ping 239.0.0.1 bypass-routing interface gre.34 ttl 5 count 5
PING 239.0.0.1 (239.0.0.1): 56 data bytes
64 bytes from 10.11.34.3: icmp_seq=0 ttl=64 time=5.850 ms
64 bytes from 10.11.13.1: icmp_seq=0 ttl=63 time=6.765 ms (DUP!)
64 bytes from 10.11.23.2: icmp_seq=0 ttl=63 time=6.853 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=1 ttl=64 time=2.408 ms
64 bytes from 10.11.13.1: icmp_seq=1 ttl=63 time=3.983 ms (DUP!)
64 bytes from 10.11.23.2: icmp_seq=1 ttl=63 time=4.158 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=2 ttl=64 time=2.443 ms
64 bytes from 10.11.13.1: icmp_seq=2 ttl=63 time=4.377 ms (DUP!)
64 bytes from 10.11.23.2: icmp_seq=2 ttl=63 time=4.767 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=3 ttl=64 time=4.849 ms
64 bytes from 10.11.23.2: icmp_seq=3 ttl=63 time=8.193 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=3 ttl=63 time=8.320 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=4 ttl=64 time=2.570 ms

--- 239.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, +8 duplicates, 0% packet loss
round-trip min/avg/max/stddev = 2.408/5.041/8.320/1.958 ms

Okay, this is all pretty straightforward.  Things start to get more complicated when we make our multicast topology different (incongruent) to our unicast topology.

If we make the link between R2 and R3 (gre.23) unicast-only, we are going to have to make some changes to our multicast RPF topology, which up until now has been using the unicast (inet.0) table.

First, let's disable PIM between R2 and R3:

root@R2# delete protocols pim interface gre.23
root@R2# commit
commit complete

root@R3# delete protocols pim interface gre.23
root@R3# commit
commit complete



If we recall, R4 (10.11.34.4) is our multicast source and R2 (2.2.2.2) is our RP.

For R3 to reach the RP via multicast, we need to go via R1.
For R2 to receive multicast traffic from R4, we need to go via R1.

Right now OSPF will utilise the direct link between R2 and R3, so we will use static multicast routes to resolve this. We will use the inet.2 routing table to store our RPF information; it will import the unicast routing table (inet.0) but be overridden where necessary.

Firstly we'll create a rib-group called if-rib that will be used to copy direct and local interface routes into inet.2 (as we are changing the default behaviour, we also need to keep importing into inet.0 so we don't break our unicast routing):

[edit]
root@R2# set routing-options rib-groups if-rib import-rib [ inet.0 inet.2 ]
root@R2# set routing-options interface-routes rib-group inet if-rib

Then we need OSPF to populate inet.2 with its dynamic routes, which is configured under traffic-engineering:

[edit]
root@R2# set protocols ospf traffic-engineering multicast-rpf-routes
root@R2# set protocols ospf traffic-engineering shortcuts

root@R2# run show route table inet.2 10.11.34.0

inet.2: 13 destinations, 15 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.11.34.0/24      *[OSPF/10] 00:29:57, metric 2
                    > via gre.23
 
root@R2# run show multicast rpf 10.11.34.0
Multicast RPF table: inet.0 , 17 entries

10.11.34.0/24
    Protocol: OSPF
    Interface: gre.23
    Neighbor: (null)


Now that we have populated inet.2, we need to get PIM to use it instead of inet.0 for its RPF checks:

[edit]
root@R2# set routing-options rib-groups mcast-rib export-rib inet.2
root@R2# set routing-options rib-groups mcast-rib import-rib [ inet.2 inet.0 ]
root@R2# set protocols pim rib-group inet mcast-rib

root@R2# run show multicast rpf 10.11.34.0
Multicast RPF table: inet.2 , 13 entries

10.11.34.0/24
    Protocol: OSPF
    Interface: gre.23
    Neighbor: (null)


Finally we can configure our static route in inet.2 to override the default next hop used for the RPF check (currently learnt via OSPF):

root@R2# set routing-options rib inet.2 static route 10.11.34.0/24 next-hop 10.11.12.1
root@R2# commit and-quit
commit complete
Exiting configuration mode


root@R2> show route table inet.0 10.11.34.4

inet.0: 17 destinations, 19 routes (17 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.11.34.0/24      *[OSPF/10] 00:14:39, metric 2
                    > via gre.23

root@R2> show route table inet.2 10.11.34.4

inet.2: 14 destinations, 16 routes (14 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.11.34.4/32      *[Static/5] 00:04:32
                    > to 10.11.12.1 via gre.12



root@R2> show multicast rpf 10.11.34.0
Multicast RPF table: inet.2 , 13 entries

10.11.34.0/24
    Protocol: Static
    Interface: gre.12
    Neighbor: 10.11.12.1

R2 now uses the interface facing R1 for its RPF check towards R4.



This is half the battle.  R3 needs to be able to resolve its RPF check for R2 so that the BSR and RP RPF checks pass:

root@R3# set routing-options rib-groups if-rib import-rib [ inet.0 inet.2 ]
root@R3# set routing-options interface-routes rib-group inet if-rib
root@R3# set routing-options rib inet.2 static route 2.2.2.2/32 next-hop 10.11.13.1
root@R3# set routing-options rib-groups mcast-rib export-rib inet.2
root@R3# set routing-options rib-groups mcast-rib import-rib [ inet.2 inet.0 ]
root@R3# set protocols pim rib-group inet mcast-rib
root@R3# set protocols ospf traffic-engineering multicast-rpf-routes
root@R3# set protocols ospf traffic-engineering shortcuts
root@R3# commit and-quit
commit complete
Exiting configuration mode


root@R3> show route 2.2.2.2

inet.0: 20 destinations, 23 routes (20 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[OSPF/10] 00:25:51, metric 1
                    > via gre.23

inet.2: 16 destinations, 20 routes (16 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[Static/5] 00:25:51
                    > to 10.11.13.1 via gre.13
                    [OSPF/10] 00:25:51, metric 1
                    > via gre.23


root@R3> show multicast rpf 2.2.2.2
Multicast RPF table: inet.2 , 16 entries

2.2.2.2/32
    Protocol: Static
    Interface: gre.13
    Neighbor: 10.11.13.1
 
R1 doesn't require any changes to its configuration, as it has optimal routes to both R2 and R3 via its direct connections.

Let's verify our multicast flows:

root@R4> ping 239.0.0.1 bypass-routing interface gre.34 ttl 5 count 5
PING 239.0.0.1 (239.0.0.1): 56 data bytes
64 bytes from 10.11.34.3: icmp_seq=0 ttl=64 time=11.228 ms
64 bytes from 10.11.23.2: icmp_seq=0 ttl=63 time=15.762 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=0 ttl=63 time=15.944 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=1 ttl=64 time=3.023 ms
64 bytes from 10.11.23.2: icmp_seq=1 ttl=63 time=4.673 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=1 ttl=63 time=4.829 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=2 ttl=64 time=4.700 ms
64 bytes from 10.11.23.2: icmp_seq=2 ttl=63 time=6.255 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=2 ttl=63 time=6.407 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=3 ttl=64 time=2.658 ms
64 bytes from 10.11.23.2: icmp_seq=3 ttl=63 time=4.857 ms (DUP!)
64 bytes from 10.11.13.1: icmp_seq=3 ttl=63 time=4.891 ms (DUP!)
64 bytes from 10.11.34.3: icmp_seq=4 ttl=64 time=2.191 ms

--- 239.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, +8 duplicates, 0% packet loss
round-trip min/avg/max/stddev = 2.191/6.724/15.944/4.445 ms

Let's verify the multicast path from R4 to R2 using the mtrace command:

root@R2> mtrace from-source source 10.11.34.4 group 239.0.0.1
Mtrace from 10.11.34.4 to 10.11.12.2 via group 239.0.0.1
Querying full reverse path... * *
  0  ? (10.11.12.2)
 -1  ? (10.11.12.1)  PIM  thresh^ 1
 -2  ? (10.11.13.3)  PIM  thresh^ 1
 -3  ? (10.11.34.4)
Round trip time 5 ms; total ttl of 2 required.

Waiting to accumulate statistics...Results after 10 seconds:

  Source        Response Dest    Overall     Packet Statistics For Traffic From
10.11.34.4      10.11.12.2       Packet      10.11.34.4 To 239.0.0.1
     v       __/  rtt    3 ms     Rate       Lost/Sent = Pct  Rate
10.11.34.3
10.11.13.3      ?
     v     ^      ttl    2                      0/0    = --    0 pps
10.11.13.1
10.11.12.1      ?
     v      \__   ttl    3                      ?/0            0 pps
10.11.12.2      10.11.12.2
  Receiver      Query Source

The multicast path goes R4 -> R3 -> R1 -> R2:

root@R4> traceroute 2.2.2.2
traceroute to 2.2.2.2 (2.2.2.2), 30 hops max, 40 byte packets
 1  10.11.34.3 (10.11.34.3)  4.967 ms  4.494 ms  1.837 ms
 2  2.2.2.2 (2.2.2.2)  0.912 ms  0.997 ms  1.119 ms

While the unicast path goes R4 -> R3 -> R2.

Monday 5 March 2012

Junos BGP as-override and autonomous-system loops

In a previous post, I covered how to build and verify a basic MPLS-based unicast IPv4 VPN.  This post expands on what we learnt there and looks at a problem Service Providers may face when managing CE routers that use BGP as the CE-PE protocol: in particular, managing the use of BGP ASNs.

One of the most popular posts on this blog addresses this problem using Cisco IOS; in this case I'm going to attempt to resolve the problem the same way, but with Junos.

This topology has four routers: two CEs and two PEs.

Below are our starting configurations.  Note that both CEs are in AS 64512.

R1(CE)

root@R1-CE> show configuration | display set
set system host-name R1-CE
set interfaces em1 vlan-tagging
set interfaces em1 unit 12 vlan-id 12
set interfaces em1 unit 12 family inet address 10.1.12.1/24
set interfaces lo0 unit 0 family inet address 1.1.1.1/32
set routing-options autonomous-system 64512
set protocols bgp export ToBGP
set protocols bgp group PE type external
set protocols bgp group PE family inet unicast
set protocols bgp group PE peer-as 65500
set protocols bgp group PE neighbor 10.1.12.2
set policy-options policy-statement ToBGP term Direct from protocol direct
set policy-options policy-statement ToBGP term Direct then accept

R2(PE)

root@R2-PE> show configuration | display set
set system host-name R2-PE
set interfaces em1 vlan-tagging
set interfaces em1 unit 12 vlan-id 12
set interfaces em1 unit 12 family inet address 10.1.12.2/24
set interfaces em1 unit 23 vlan-id 23
set interfaces em1 unit 23 family inet address 10.1.23.2/24
set interfaces em1 unit 23 family mpls
set interfaces lo0 unit 0 family inet address 2.2.2.2/32
set routing-options router-id 2.2.2.2
set routing-options autonomous-system 65500
set protocols mpls interface em1.23
set protocols bgp group Core type internal
set protocols bgp group Core local-address 2.2.2.2
set protocols bgp group Core family inet-vpn unicast
set protocols bgp group Core peer-as 65500
set protocols bgp group Core neighbor 3.3.3.3
set protocols ospf area 0.0.0.0 interface em1.23
set protocols ospf area 0.0.0.0 interface lo0.0
set protocols ldp interface em1.23
set routing-instances CustomerA instance-type vrf
set routing-instances CustomerA interface em1.12
set routing-instances CustomerA route-distinguisher 65500:1
set routing-instances CustomerA vrf-target target:65500:1
set routing-instances CustomerA vrf-table-label
set routing-instances CustomerA protocols bgp group CE type external
set routing-instances CustomerA protocols bgp group CE family inet unicast
set routing-instances CustomerA protocols bgp group CE neighbor 10.1.12.1 peer-as 64512

R3(PE)

root@R3-PE> show configuration | display set
set system host-name R3-PE
set interfaces em1 vlan-tagging
set interfaces em1 unit 23 vlan-id 23
set interfaces em1 unit 23 family inet address 10.1.23.3/24
set interfaces em1 unit 23 family mpls
set interfaces em1 unit 34 vlan-id 34
set interfaces em1 unit 34 family inet address 10.1.34.3/24
set interfaces lo0 unit 0 family inet address 3.3.3.3/32
set routing-options router-id 3.3.3.3
set routing-options autonomous-system 65500
set protocols mpls interface em1.23
set protocols bgp group Core type internal
set protocols bgp group Core local-address 3.3.3.3
set protocols bgp group Core family inet-vpn unicast
set protocols bgp group Core peer-as 65500
set protocols bgp group Core neighbor 2.2.2.2
set protocols ospf area 0.0.0.0 interface em1.23
set protocols ospf area 0.0.0.0 interface lo0.0
set protocols ldp interface em1.23
set routing-instances CustomerA instance-type vrf
set routing-instances CustomerA interface em1.34
set routing-instances CustomerA route-distinguisher 65500:1
set routing-instances CustomerA vrf-target target:65500:1
set routing-instances CustomerA vrf-table-label
set routing-instances CustomerA protocols bgp group CE type external
set routing-instances CustomerA protocols bgp group CE family inet unicast
set routing-instances CustomerA protocols bgp group CE neighbor 10.1.34.4 peer-as 64512

R4(CE)

root@R4-CE> show configuration | display set
set system host-name R4-CE
set interfaces em1 vlan-tagging
set interfaces em1 unit 34 vlan-id 34
set interfaces em1 unit 34 family inet address 10.1.34.4/24
set interfaces lo0 unit 0 family inet address 4.4.4.4/32
set routing-options router-id 4.4.4.4
set routing-options autonomous-system 64512
set protocols bgp export ToBGP
set protocols bgp group PE type external
set protocols bgp group PE family inet unicast
set protocols bgp group PE peer-as 65500
set protocols bgp group PE neighbor 10.1.34.3
set policy-options policy-statement ToBGP term Direct from protocol direct
set policy-options policy-statement ToBGP term Direct then accept

Let's check the routing tables related to this VPN:

root@R1-CE> show route table inet.0 terse

inet.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
* 1.1.1.1/32         D   0                       >lo0.0
* 10.1.12.0/24       D   0                       >em1.12
* 10.1.12.1/32       L   0                        Local
* 10.1.34.0/24       B 170        100            >10.1.12.2       65500 I

root@R2-PE> show route table CustomerA.inet.0 terse

CustomerA.inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
* 1.1.1.1/32         B 170        100            >10.1.12.1       64512 I
* 4.4.4.4/32         B 170        100            >10.1.23.3       64512 I
* 10.1.12.0/24       D   0                       >em1.12
                     B 170        100            >10.1.12.1       64512 I
* 10.1.12.2/32       L   0                        Local
* 10.1.34.0/24       B 170        100            >10.1.23.3       I

root@R3-PE> show route table CustomerA.inet.0 terse

CustomerA.inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
* 1.1.1.1/32         B 170        100            >10.1.23.2       64512 I
* 4.4.4.4/32         B 170        100            >10.1.34.4       64512 I
* 10.1.12.0/24       B 170        100            >10.1.23.2       I
* 10.1.34.0/24       D   0                       >em1.34
                     B 170        100            >10.1.34.4       64512 I
* 10.1.34.3/32       L   0                        Local

root@R4-CE> show route table inet.0 terse

inet.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
* 4.4.4.4/32         D   0                       >lo0.0
* 10.1.12.0/24       B 170        100            >10.1.34.3       65500 I
* 10.1.34.0/24       D   0                       >em1.34
* 10.1.34.4/32       L   0                        Local

We shouldn't be surprised that even though the MPLS network can see routes towards both R1 and R4, R1 and R4 cannot see routes to each other: the AS path would appear as 64512 65500 64512, and BGP uses the AS path as its loop-avoidance mechanism.
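We can sanity-check this from the PE side. By default, Junos won't even offer a prefix to an EBGP peer whose AS already appears in that prefix's AS path, so a check like the following on R2-PE (a sketch; I haven't included output, but it should list only the prefixes without 64512 in their path, such as 10.1.34.0/24):

root@R2-PE> show route advertising-protocol bgp 10.1.12.1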

One way to overcome this is as-override, which is applied on the MPLS PE and replaces the CE's ASN with the PE's own.

We'll do that on R2-PE, which will result in R1-CE being able to see R4-CE's loopback:

root@R2-PE> configure
Entering configuration mode

[edit]
root@R2-PE# set routing-instances CustomerA protocols bgp group CE neighbor 10.1.12.1 as-override
root@R2-PE# commit and-quit
commit complete
Exiting configuration mode

R2 shows that it knows the AS path for 4.4.4.4/32 as coming from AS 64512:

root@R2-PE> show route table CustomerA.inet.0 4.4.4.4/32

CustomerA.inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

4.4.4.4/32         *[BGP/170] 00:02:59, localpref 100, from 3.3.3.3
                      AS path: 64512 I
                    > to 10.1.23.3 via em1.23, Push 16

However, we are advertising it to R1 as if it comes from the MPLS core AS:

root@R2-PE> show route advertising-protocol bgp 10.1.12.1

CustomerA.inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 1.1.1.1/32              10.1.12.1                               65500 I
* 4.4.4.4/32              Self                                    65500 I
* 10.1.34.0/24            Self                                    I

This is what R1 now believes, which is fine because it gets rid of the AS path problem:

root@R1-CE> show route table inet.0 terse

inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 1 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
* 1.1.1.1/32         D   0                       >lo0.0
* 4.4.4.4/32         B 170        100            >10.1.12.2       65500 65500 I
* 10.1.12.0/24       D   0                       >em1.12
* 10.1.12.1/32       L   0                        Local
* 10.1.34.0/24       B 170        100            >10.1.12.2       65500 I

An alternative to doing this on the PE is to purposely allow looping to occur on the CE.  We'll try this config on R4-CE and allow our AS to be seen twice:

root@R4-CE> configure
Entering configuration mode

[edit]
root@R4-CE# set routing-options autonomous-system loops 2

[edit]
root@R4-CE# commit and-quit
commit complete
Exiting configuration mode

root@R3-PE> show route advertising-protocol bgp 10.1.34.4

CustomerA.inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 10.1.12.0/24            Self                                    I

Unfortunately (or fortunately, depending on your point of view) this doesn't appear to be working.  Unlike IOS, Junos is smart enough to know that it shouldn't advertise prefixes to a BGP peer whose AS already appears in the AS path, so a configuration change is also required on the PE router to override that default behaviour:

root@R3-PE> configure
Entering configuration mode

[edit]
root@R3-PE# set routing-instances CustomerA protocols bgp group CE advertise-peer-as
[edit]
root@R3-PE# commit and-quit
commit complete
Exiting configuration mode


root@R3-PE> show route advertising-protocol bgp 10.1.34.4

CustomerA.inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 1.1.1.1/32              Self                                    64512 I
* 4.4.4.4/32              10.1.34.4                               64512 I
* 10.1.12.0/24            Self                                    I

root@R4-CE> show route table inet.0 terse

inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 1 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
* 1.1.1.1/32         B 170        100            >10.1.34.3       65500 64512 I
* 4.4.4.4/32         D   0                       >lo0.0
* 10.1.12.0/24       B 170        100            >10.1.34.3       65500 I
* 10.1.34.0/24       D   0                       >em1.34
* 10.1.34.4/32       L   0                        Local

R4 can now see the prefix even though the AS path contains its own AS.

Since either method on Junos requires configuration applied to the PE, using as-override is probably the best (and simplest) way to reuse BGP ASNs for your CEs.
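For comparison with the IOS post referenced earlier, the rough IOS equivalents of the two approaches (a sketch only, using the neighbour addresses from this lab) are as-override on the PE and allowas-in on the CE:

! On an IOS PE, under the VRF address-family:
neighbor 10.1.12.1 as-override
! Or on an IOS CE (IOS PEs advertise the looped prefix by default,
! so no advertise-peer-as equivalent is needed):
neighbor 10.1.34.3 allowas-in 2

The Junos/IOS difference is really just where the default behaviour sits: IOS leaves loop suppression entirely to the receiver, while Junos also suppresses on the advertising side.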