vPC and OTV as DCI for Cisco ACI Spine-Leaf Architecture

Today I am going to take you through Cisco ACI, where I will talk about vPC as a data center interconnect (DCI) transport. We have various methods to connect two different data centers: through an Inter-Pod Network (IPN) or with back-to-back connectivity on the leaves. Some of you already know this, as I discussed it in my earlier articles. If you want to have a look at the previous articles, please go through the link mentioned below, which helps you understand the Cisco ACI spine-leaf architecture model.

Difference Between Cisco ACI Multi-Pod Vs Cisco ACI Multi-Site

Hope the above-mentioned article helps you understand the various deployment methods in Cisco ACI for connecting different data centers across the globe using a single-pod, multi-pod, or multi-site environment.

Some questions have come up about how to build an L3Out in a Cisco ACI environment towards MPLS or the internet. I will come back to the concept of L3Out in Cisco ACI through border leaves or GOLF routers, but at this point I would like to talk about vPC as a DCI transport.

vPC as DCI Transport
This is interesting. When we use vPC as a DCI transport, one pair of border leaf nodes at each site can use a back-to-back vPC connection to extend Layer 2 and Layer 3 connectivity across sites. Unlike traditional vPC deployments on Cisco Nexus platforms, with Cisco ACI you don’t need to create a vPC peer link or a peer-keepalive link between the border leaf nodes. Instead, those peerings are established through the fabric.

You can use any number of links to form the back-to-back vPC, but for redundancy reasons two is the minimum, and this is the number validated by Cisco in its documentation.
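For contrast, this is roughly what a classic back-to-back vPC between two standalone Nexus pairs looks like in NX-OS; the peer-keepalive and peer-link pieces are exactly what the ACI fabric makes unnecessary. A sketch only, with all IDs, interface numbers, and addresses hypothetical (on the ACI side, the equivalent is defined through APIC access policies, not this CLI):

```
! Classic NX-OS back-to-back vPC (per switch in each pair)
feature vpc
feature lacp

vpc domain 10
  ! keepalive between the two local vPC peers - in ACI this role is taken by the fabric
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel1
  switchport mode trunk
  vpc peer-link                  ! dedicated peer link - also replaced by the ACI fabric

! the back-to-back DCI port channel towards the remote site
interface port-channel20
  switchport mode trunk
  vpc 20

interface ethernet1/1-2
  switchport mode trunk
  channel-group 20 mode active   ! LACP active on the two DCI links
```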

Fig 1.1- vPC as DCI in Cisco ACI
This dual-link vPC can use dark fiber. It can also use DWDM, but only if the DWDM transport offers a high quality of service. Because the transport in this case is ensured by the Link Aggregation Control Protocol (LACP), you should not rely on a link that offers only three-nines (99.9 percent) resiliency or less. In general, private DWDM with high availability is good enough.

When using DWDM, keep in mind that loss of signal is not reported: one side may stay up while the other side is down. Cisco ACI allows you to configure fast LACP timers to detect such a condition, and the design reported in this document validates this capability to achieve fast convergence.
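On a standalone Nexus switch at the far end of such a link, fast LACP timers are enabled per member interface; on the ACI side, the equivalent setting lives in an APIC LACP interface policy. A minimal NX-OS sketch (interface and channel-group numbers are hypothetical):

```
interface ethernet1/1
  channel-group 20 mode active
  lacp rate fast    ! send LACP PDUs every 1 second instead of the default 30
```

With the fast rate, a peer that stops responding is declared down after three missed PDUs, so a one-way DWDM failure is detected in roughly 3 seconds rather than 90.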

OTV as DCI Transport
You may already know how to use OTV in a traditional data center environment, where you connect two different data centers through an OTV VDC on the Nexus 7000 platform. Let's recap the OTV protocol itself.

I wrote some articles on Cisco OTV earlier as well; you can have a look at the link mentioned below.

OTV is a MAC-in-IP technique for supporting Layer 2 VPNs to extend LANs over any transport. The transport can be Layer 2 based, Layer 3 based, IP switched, label switched, or anything else as long as it can carry IP packets. By using the principles of MAC address routing, OTV provides an overlay that enables Layer 2 connectivity between separate Layer 2 domains while keeping these domains independent and preserving the fault-isolation, resiliency, and load-balancing benefits of an IP-based interconnection.

Fig 1.2- OTV as DCI transport in Cisco ACI
The core principles on which OTV operates are the use of a control protocol to advertise MAC address reachability information (instead of using data-plane learning) and packet switching of IP encapsulated Layer 2 traffic for data forwarding. OTV can be used to provide connectivity based on MAC address destinations while preserving most of the characteristics of a Layer 3 interconnection.

Before MAC address reachability information can be exchanged, all OTV edge devices must become adjacent to each other from an OTV perspective. This adjacency can be achieved in two ways, depending on the nature of the transport network that interconnects the various sites. If the transport is multicast enabled, a specific multicast group can be used to exchange control protocol messages between the OTV edge devices.

If the transport is unicast only, one OTV edge device (or more) can instead be configured as an adjacency server to which all other edge devices register. In this way, the adjacency server can build a full list of the devices that belong to a given overlay.
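Both adjacency models map to a few lines of OTV configuration on a Nexus 7000 edge device. A hedged sketch, with all VLAN ranges, multicast groups, and addresses hypothetical:

```
feature otv

interface Overlay1
  otv join-interface Ethernet1/1      ! uplink used to reach the remote site
  otv extend-vlan 100-150             ! VLANs stretched across the overlay

  ! Option 1: multicast-enabled transport
  otv control-group 239.1.1.1         ! group used for control-plane adjacencies
  otv data-group 232.1.1.0/28         ! groups used for multicast data traffic

  ! Option 2: unicast-only transport (used instead of the groups above)
  ! on the adjacency server:      otv adjacency-server unicast-only
  ! on every other edge device:   otv use-adjacency-server 192.0.2.1 unicast-only

  no shutdown
```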

An edge device forwards Layer 2 frames into and out of a site over the overlay interface. There is only one authoritative edge device (AED) for all MAC unicast and multicast addresses for each given VLAN. The AED role is negotiated, on a per-VLAN basis, among all the OTV edge devices that belong to the same site (that is, that are characterised by the same site ID).

The internal interface facing the Cisco ACI fabric can be a vPC on the OTV edge device side. However, the recommended attachment model uses independent port channels between each AED and the Cisco ACI fabric.

Each OTV device defines a logical interface, called a join interface, that is used to encapsulate and decapsulate Layer 2 Ethernet frames that need to be transported to remote sites.

OTV requires a site VLAN, which is assigned on each edge device that connects to the same overlay network. OTV sends local hello messages on the site VLAN to detect other OTV edge devices in the site, and it uses the site VLAN to determine the AED for the OTV-extended VLANs. Because OTV uses the IS-IS protocol for these hellos, the Cisco ACI fabric must run software release 11.1 or later; previous releases prevented the OTV devices from exchanging IS-IS hello messages through the fabric.
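On the OTV edge devices, the site VLAN and site identifier are global settings rather than per-overlay ones. A minimal sketch (the VLAN number and identifier below are hypothetical; the identifier must match on all edge devices in the same site):

```
otv site-vlan 99                       ! VLAN used for local hellos and AED election
otv site-identifier 0000.0000.0001     ! identifies this site; same on both local AEDs
```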

Note: An important benefit of the OTV site VLAN is the capability to detect a Layer 2 back door that may be created between the two Cisco ACI fabrics. To support this capability, you should use the same site VLAN on both Cisco ACI sites.
