This post came about after answering a question on CSC regarding tunnels and VRFs, and elaborates on the use case I suggested.
Imagine two sites, each with multiple VRFs, which need to communicate intra-VRF over a public WAN. Let's run through the configuration options:
Option A
Each VRF would require a public IP behind which it could be NAT'd. Services would be made available via port-forwarding on the same public IP.
This provides the intra-VRF communication, but it is not very scalable and, depending on the protocols in use, not very secure.
Option B
Use GRE tunnels between the site VRFs, sourced from the public IPs allocated to them. By running a dynamic routing protocol across each tunnel, inter-site/intra-VRF communication would be possible. No NAT would be necessary; however, each VRF would still require a public IP to use as a tunnel source.
Option C
Simplifying the design further, we add more hardware!
Each VRF now has a dedicated interface (or sub-interface) on the local site firewall. The firewalls establish an IPSec tunnel between themselves, which carries the inter-site traffic and therefore the intra-VRF GRE tunnels.
Each site now only requires one public IP, used for the IPSec peering. The GRE tunnels are now sourced from loopback interfaces within the VRFs.
Option D
This is the option we will look at further: no dedicated firewalls, and just a single pair of public IP addresses for the tunnels.
The detail
Site A (R1) : 10.1.0.0 /16
ASN: 100
VRF RED : 10.1.64.0 /18
Site B (R3) : 10.3.0.0 /16
ASN: 300
VRF RED : 10.3.64.0 /18
We will create two VRFs: WAN and RED.
The two sites are peered with each other via eBGP through R2 (AS 200):
R1#sh ip bgp vpnv4 vrf WAN
Network Next Hop Metric LocPrf Weight Path
Route Distinguisher: 100:1 (default for vrf WAN)
*> 10.1.0.0/16 0.0.0.0 0 32768 i
*> 10.3.0.0/16 192.168.1.2 0 200 300 i
* 192.168.1.0/30 192.168.1.2 0 0 200 i
*> 0.0.0.0 0 32768 i
*> 192.168.3.0/30 192.168.1.2 0 0 200 i
Now let's create a GRE tunnel between the WAN VRFs to allow an IGP to run between them:
!
interface Tunnel10
ip vrf forwarding WAN
ip address 172.16.1.1 255.255.255.252
tunnel source 192.168.1.1
tunnel destination 192.168.3.1
tunnel vrf WAN
!
Can we reach the other end of the tunnel?
R1#ping vrf WAN 172.16.1.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 20/30/36 ms
Of course!
Now, assuming this was a production environment, it makes sense to use loopbacks within the VRFs as tunnel sources and IGP peer addresses. The reason is that you may have more than one WAN connection, and binding a tunnel/IGP peer to a physical interface address may result in flapping caused by problems with that interface. A loopback is always up and reachable, providing a route to it is advertised.
Site A:
!
interface Loopback10
ip vrf forwarding WAN
ip address 10.1.1.1 255.255.255.255
!
interface Tunnel10
ip vrf forwarding WAN
ip address 172.16.1.1 255.255.255.252
tunnel source Loopback10
tunnel destination 10.3.1.1
tunnel vrf WAN
!
Site B:
!
interface Loopback10
ip vrf forwarding WAN
ip address 10.3.1.1 255.255.255.255
!
interface Tunnel10
ip vrf forwarding WAN
ip address 172.16.1.2 255.255.255.252
tunnel source Loopback10
tunnel destination 10.1.1.1
tunnel vrf WAN
!
Next we will configure EIGRP Named Mode processes at both sites.
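A minimal named-mode sketch for Site A's WAN VRF would look something like the following (Site B mirrors it with its own addresses and router-id; note that the EIGRP autonomous system, 100 here, must match at both ends, unlike the per-site BGP ASNs):
!
router eigrp R1
 !
 address-family ipv4 unicast vrf WAN autonomous-system 100
  !
  topology base
   redistribute connected
  exit-af-topology
  network 172.16.1.0 0.0.0.3
  eigrp router-id 10.1.1.1
 exit-address-family
!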
Looks good, for a moment, but then R3 starts reporting:
*Dec 20 09:51:54.063: %ADJ-5-PARENT: Midchain parent maintenance for IP midchain out of Tunnel10 - looped chain attempting to stack
*Dec 20 09:51:58.999: %TUN-5-RECURDOWN: Tunnel10 temporarily disabled due to recursive routing
*Dec 20 09:51:58.999: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel10, changed state to down
*Dec 20 09:51:59.035: %DUAL-5-NBRCHANGE: EIGRP-IPv4 100: Neighbor 172.16.1.1 (Tunnel10) is down: interface down
The issue we have now is that the GRE tunnel source IP is being advertised down the tunnel itself, hence the recursive routing. We need to make sure packets destined for the Loopback10 interfaces always follow the underlying WAN routes rather than the tunnel. We can achieve this by restricting the routes being advertised, using the EIGRP distribute-list command. The updated Site A EIGRP config now looks like:
!
ip access-list standard DENY_TUNNEL_IPS
deny 10.1.1.1
permit any
!
router eigrp R1
!
address-family ipv4 unicast vrf WAN autonomous-system 100
!
af-interface default
shutdown
exit-af-interface
!
af-interface Tunnel10
no shutdown
exit-af-interface
!
topology base
distribute-list DENY_TUNNEL_IPS out
redistribute connected
exit-af-topology
network 172.16.1.0 0.0.0.3
eigrp router-id 10.1.1.1
exit-address-family
!
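Site B needs the equivalent filter, denying its own loopback so that 10.3.1.1 is never advertised back down the tunnel towards R1 (a mirror of the Site A ACL):
!
ip access-list standard DENY_TUNNEL_IPS
 deny 10.3.1.1
 permit any
!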
Inspection of packets between 10.1.1.2 and 10.3.1.2 shows that they are being encapsulated and sent down the GRE tunnel.
R1#ping vrf WAN 10.3.1.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.3.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 32/33/36 ms
VRF RED
With our two sites able to communicate via the GRE tunnel, let's turn our attention to the other VRFs. Let's assume these are secure VRFs; the best way to ensure their data doesn't end up in the clear on the public network is to ensure there is no route to get there in the first place.
R1#sh ip route vrf RED
Gateway of last resort is not set
10.0.0.0/32 is subnetted, 1 subnets
C 10.1.64.1 is directly connected, Loopback101
Looks good, but how do we get a tunnel established with the VRF at the remote site?
The eagle-eyed of you will have noticed two VRF commands in use on the tunnel interfaces:
ip vrf forwarding
tunnel vrf
The first should be familiar; the second is crucial to our solution. It comes from the 'GRE Tunnel IP Source and Destination VRF Membership' feature. Specifying a VRF with this command instructs the router to use that VRF when routing the encapsulated tunnel packets, i.e. for lookups of the tunnel source and destination addresses. Packets emerging from the tunnel are then routed by the VRF specified with the ip vrf forwarding command.
Let's spin up a new tunnel in the RED VRF:
!
interface Tunnel100
ip vrf forwarding RED
ip address 172.16.100.1 255.255.255.252
tunnel source Loopback10
tunnel destination 10.3.1.1
tunnel key 100
tunnel vrf WAN
!
Also note that because we are using the same source and destination IP addresses for two different tunnels, we must introduce the tunnel key command. This helps the router differentiate incoming GRE packets and determine which tunnel they belong to. The key must match at both ends of the tunnel.
R1#sh int tunnel 100
Tunnel100 is up, line protocol is up
...
Tunnel protocol/transport GRE/IP
Key 0x64, sequencing disabled
With Tunnel100 up, let's establish an EIGRP adjacency across it:
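The RED address family can be carried by the same named-mode process. A sketch for Site A follows; the exact configuration isn't shown in the original, and reusing AS 100 and the RED loopback as router-id are assumptions:
!
router eigrp R1
 !
 address-family ipv4 unicast vrf RED autonomous-system 100
  !
  topology base
   redistribute connected
  exit-af-topology
  network 172.16.100.0 0.0.0.3
  eigrp router-id 10.1.64.1
 exit-address-family
!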
R1#sh ip route vrf RED eigrp
...
10.0.0.0/32 is subnetted, 2 subnets
D EX 10.3.64.1 [170/76800640] via 172.16.100.2, 00:02:22, Tunnel100
Packets are flowing correctly, but what's this? My hyper-secure network is sending them across the public network in the clear!
Time for one more GRE feature…
tunnel protection
Instead of relying on another device to provide IPSec transport between sites, or applying a crypto map to our router's outbound interface, we can apply a crypto policy directly to our GRE tunnel:
!
crypto keyring KEYRING vrf WAN
pre-shared-key address 10.3.1.1 key SECURE_MY_TUNNEL
!
crypto isakmp policy 1
encr aes 256
authentication pre-share
!
crypto isakmp profile ISAKMP-PROFILE
vrf WAN
keyring KEYRING
match identity address 10.3.1.1 255.255.255.255 WAN
!
!
crypto ipsec transform-set IPSEC-TRANS esp-aes esp-sha512-hmac
mode tunnel
!
crypto ipsec profile IPSEC-GRE-PROFILE
set transform-set IPSEC-TRANS
set pfs group24
set isakmp-profile ISAKMP-PROFILE
!
!
int tun100
tunnel mode gre ip
tunnel protection ipsec profile IPSEC-GRE-PROFILE
!
Lets check what is being sent over the public network:
Great, ESP packets are present; these contain the GRE traffic passing between the RED VRFs. However, we can still see the Tunnel10 traffic in the clear. Let's tidy that up.
If we apply the same IPSec profile to more than one tunnel interface, we must include the 'shared' keyword on all of them:
!
interface Tunnel10
ip vrf forwarding WAN
ip address 172.16.1.1 255.255.255.252
tunnel source Loopback10
tunnel destination 10.3.1.1
tunnel key 10
tunnel vrf WAN
tunnel protection ipsec profile IPSEC-GRE-PROFILE shared
!
interface Tunnel100
ip vrf forwarding RED
ip address 172.16.100.1 255.255.255.252
tunnel source Loopback10
tunnel destination 10.3.1.1
tunnel key 100
tunnel vrf WAN
tunnel protection ipsec profile IPSEC-GRE-PROFILE shared
!
It is worth noting that we cannot turn these tunnels into VTIs with the tunnel mode ipsec ipv4 command, as this document states that sharing IPSec SAs is unsupported.
We can now confirm the status of both tunnels:
R1#sh int tunnel 10 | inc transport| protection
Tunnel protocol/transport GRE/IP
Tunnel transport MTU 1394 bytes
Tunnel protection via IPSec (profile "IPSEC-GRE-PROFILE")
R1#sh int tunnel 100 | inc transport| protection
Tunnel protocol/transport GRE/IP
Tunnel transport MTU 1394 bytes
Tunnel protection via IPSec (profile "IPSEC-GRE-PROFILE")
R1#
R1#sh crypto session
Crypto session current status
Interface: Tunnel100 Tunnel10
Session status: UP-NO-IKE
Peer: 10.3.1.1 port 500
IPSEC FLOW: permit 47 host 10.1.1.1 host 10.3.1.1
Active SAs: 2, origin: crypto map
Interface: FastEthernet0/0
Profile: ISAKMP-PROFILE
Session status: UP-IDLE
Peer: 10.3.1.1 port 500
IKEv1 SA: local 10.1.1.1/500 remote 10.3.1.1/500 Active
IKEv1 SA: local 10.1.1.1/500 remote 10.3.1.1/500 Active
R1#
R1#sh ip route vrf WAN eigrp
10.0.0.0/8 is variably subnetted, 5 subnets, 2 masks
D EX 10.3.1.2/32 [170/76800640] via 172.16.1.2, 00:23:46, Tunnel10
R1#
R1#sh ip route vrf RED eigrp
10.0.0.0/32 is subnetted, 2 subnets
D EX 10.3.64.1 [170/76800640] via 172.16.100.2, 00:24:48, Tunnel100
Checking the traffic being transmitted:
There you have it: highly scalable, secure, cross-site, intra-VRF communication exposing the minimum of public IPs.
Implementation note
I had plenty of fun trying to get both tunnels running at the same time. More often than not, one tunnel would report UP/UP but no traffic would pass across it. An ISAKMP debug would show the SA being removed once the Phase 1 negotiation had completed. The only reliable way I found of bringing the tunnels up was to shut them down at both sites and bring the pairs up one at a time.
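For reference, that bring-up sequence sketched for one site (run the same commands at the other site, and only move on to the next tunnel pair once the first crypto session is established):
!
interface Tunnel10
 shutdown
interface Tunnel100
 shutdown
!
! bring the WAN pair up first, at both sites
interface Tunnel10
 no shutdown
!
! then, once its crypto session is up, the RED pair
interface Tunnel100
 no shutdown
!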
For reference I was using GNS3 and the c7200-adventerprisek9-mz.152-4.M11.bin image.