Cisco Single ACI Fabric Design and Intelligent Traffic Director (ITD)

In this article, I am going to talk about the Cisco single ACI fabric design, which spans separate data center environments with a single administrative network policy domain. Bare-metal hosts and hosts running hypervisors for virtualization (Microsoft Hyper-V and VMware ESXi) are defined and managed by the APICs regardless of their physical connectivity.

The IP address ranges for the Bridge Domains and EPGs are also accessible anywhere within the fabric. Normal ACI forwarding policy can be applied, along with a single point of management for both physical sites, from the cluster of APICs.

The network architecture consists of two data center fabrics connected via Transit Leaf switches. The ACI fabric provides the Access and Aggregation LAN segments of the data center, while the Border Leafs connect to the Core/Edge of the data center.

External fabric connectivity for each physical data center is delivered through the common tenant in the ACI fabric. Using the common tenant is not a requirement; it is a design choice.

Each application tenant accesses the WAN through the common tenant by creating an Endpoint Group (EPG) for connectivity purposes, such as Web. This EPG references a bridge domain (e.g., the Production BD) in the common tenant that has external connectivity. A contract permits traffic to flow between the common tenant and the application tenant, as sketched in the example after Fig 1.1.

Fig 1.1 - Cisco ACI Single Fabric Design
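As a rough illustration of this tenant relationship, the following APIC NX-OS-style CLI sketch creates an application EPG that references the shared bridge domain and consumes a contract. The names used here (AppTenant, ANP, Web, ProductionBD, web-to-wan) are hypothetical placeholders rather than values from this design, and the exact syntax may vary by APIC release.

    tenant AppTenant
      application ANP
        epg Web
          ! Reference the bridge domain that has external connectivity;
          ! a name not defined in the local tenant resolves from tenant common.
          bridge-domain member ProductionBD
          ! Consume the contract permitting traffic to/from the common tenant.
          contract consumer web-to-wan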

By using the common tenant for external connectivity, the network and security administrator can apply the appropriate network configuration policy, security contracts and policy, as well as firewall and load balancing services, for the fabrics in each data center. The network policy is alike for each data center, but the IP addressing, Bridge Domains, and External Routed Networks are specific to each site.

The application (DevOps) teams will consume the common tenant configuration and configure application connectivity for intra- and inter-tenant communication through the Application Network Profile (ANP).

The border leaf switches connect to a Nexus 7000 switch for external Layer 3 connectivity. The Nexus 7000 serves two purposes. It provides connectivity between the ACI fabric/endpoints and external devices/endpoints. It also provides ingress routing optimization for the ACI endpoints via Locator/ID Separation Protocol Multi-Hop Across Subnets Mode (LISP MH ASM) along with Intelligent Traffic Director (ITD).

Outbound routing optimization is handled by the ACI fabric using standard ACI forwarding policy. Traffic is sent to the closest border leaf, with the MP-BGP metric used to determine which border leaf is closest. Intelligent Traffic Director (ITD) allows the Nexus 7000 to load balance inbound traffic across the Border Leafs while probing them with IP SLA for reachability and availability.

Intelligent Traffic Director (ITD)
ITD provides scalable load distribution of traffic to a group of servers and/or appliances. It includes the following main features related to the Active/Active ACI design:

- Redirection and load balancing of line-rate traffic to ACI border leafs, up to 256 in a group
- IP stickiness with weighted load balancing
- Health monitoring of border leafs using IP Service Level Agreement (SLA) probes (ICMP)
- Automatic failure detection and traffic redistribution in the event of a border leaf failure, with no manual intervention required
- Node-level standby support
- ITD statistics collection with traffic distribution details
- VRF support for ITD services and probes
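
To make this concrete, here is a minimal NX-OS configuration sketch for ITD on the Nexus 7000. The node addresses, interface, and object names below are hypothetical placeholders; the probe timers and load-balance method should be tuned for the actual deployment.

    feature itd
    feature pbr
    feature sla sender
    !
    ! Group the ACI border leafs and health-check them with ICMP probes.
    itd device-group ACI-BORDER-LEAFS
      probe icmp frequency 5 timeout 2 retry-down-count 3 retry-up-count 3
      node ip 10.1.1.1
      node ip 10.1.1.2
    !
    ! ITD service: distribute ingress traffic across the border leafs.
    itd INGRESS-TO-ACI
      device-group ACI-BORDER-LEAFS
      ingress interface Ethernet1/1
      load-balance method src ip
      ! Reassign traffic buckets to surviving nodes if a leaf fails.
      failaction node reassign
      no shutdown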

Within the Active/Active ACI Fabric, ITD runs on the Nexus 7000 that is directly connected to the ACI Border Leafs. The purpose of ITD within this architecture is to load balance ingress traffic amongst the Border Leafs. ITD also uses IP SLA probes to verify that the Border Leafs are reachable.
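
Node health and traffic distribution can then be verified from the Nexus 7000, for example (using the hypothetical service name from the sketch above, with statistics collection enabled via itd statistics):

    show itd INGRESS-TO-ACI brief
    show itd INGRESS-TO-ACI statistics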