Basic Datacenter: Data Center Switching Layers Discussion

In this article, I am going to talk about best practices for the classic data center network. If we look at the layers of the network in a classic data center environment, we have three layers on the network side and one layer on the virtualization side:
  • Core Layer: Pure routed layer (Layer 3)
  • Aggregation Layer: Mix of Layer 3 and Layer 2 operations
  • Access Layer: Layer 2 switching
  • Virtualization Layer: Application layer hosted on servers
Datacenter Core Layer:
  • Routed layer, distinct from the enterprise network core, that provides the scalability to build multiple aggregation blocks.
  • A dedicated Data Center Core provides layer-3 insulation from the rest of the network. 
  • Switch port density in the DC Core is reserved for scaling additional DC Aggregation blocks or pods 
  • Provides single point of DC route summarization
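
To make that last point concrete, here is a minimal Python sketch (using the standard-library ipaddress module) of what route summarization at the DC core means: a handful of assumed per-pod prefixes collapse into a single summary advertised toward the rest of the network. The prefixes are made-up example values, not part of any reference design.

    import ipaddress

    # Hypothetical per-pod (aggregation block) prefixes carved from one DC range
    pod_prefixes = [
        ipaddress.ip_network("10.10.0.0/22"),   # pod 1 server subnets
        ipaddress.ip_network("10.10.4.0/22"),   # pod 2 server subnets
        ipaddress.ip_network("10.10.8.0/22"),   # pod 3 server subnets
        ipaddress.ip_network("10.10.12.0/22"),  # pod 4 server subnets
    ]

    # Collapse the contiguous pod prefixes into the smallest covering set;
    # this is the single summary the DC core would advertise upstream.
    summary = list(ipaddress.collapse_addresses(pod_prefixes))
    print("Advertise from DC core:", summary)   # [IPv4Network('10.10.0.0/20')]
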
Aggregation Layer:
  • Provides the boundary between layer-3 routing and layer-2 switching
  • Point of connectivity for service devices (firewall, SLB, etc.)
Access Layer:
  • Provides the point of connectivity for servers and shared resources
  • Typically layer-2 switching
Virtual Access Layer:
  • Still a single logical tier of layer-2 switching
  • Common control plane with virtual hardware- and software-based I/O modules

Fig 1.1 - Classic Data Center Switching Architecture

Design the Data Center topology in a consistent, modular fashion for ease of scalability, support, and troubleshooting.
Use a pod definition so that an aggregation block, or another bounded unit of the network topology, maps to a single repeatable pod.
The server access connectivity model can dictate port count requirements in the aggregation layer and affect the entire design, as the sizing sketch below illustrates.
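
As a back-of-the-envelope illustration, the Python sketch below shows how a hypothetical ToR access model drives aggregation port counts and the access-layer oversubscription ratio. Every number in it (racks, ports per rack, link speeds, uplink counts) is an assumption chosen for the example, not a recommendation.

    # Rough sizing sketch for a hypothetical ToR-based pod (example numbers only)
    racks                 = 20   # racks in the pod
    server_ports_per_rack = 24   # 1G server-facing ports in use on each ToR
    server_gbps           = 1
    uplinks_per_tor       = 2    # 10G uplinks from each ToR to the aggregation pair
    uplink_gbps           = 10

    # Every ToR uplink terminates on an aggregation port, so the access model
    # directly dictates the aggregation port count for the pod.
    agg_ports_needed = racks * uplinks_per_tor

    # Oversubscription at the access-to-aggregation boundary
    downstream_bw = racks * server_ports_per_rack * server_gbps
    upstream_bw   = racks * uplinks_per_tor * uplink_gbps

    print(f"Aggregation ports needed for the pod: {agg_ports_needed}")
    print(f"Access oversubscription ratio: {downstream_bw / upstream_bw:.1f}:1")
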

End-of-Row (EoR)
  • High density chassis switch at end or middle of a row of racks, fewer overall switches
  • Provides port scalability and local switching, but may create cable management challenges (see the comparison sketch after this list)
Top-of-Rack (ToR)
  • Small fixed or modular switch at the top of each rack, more devices to manage
  • Significantly reduces cable bulk by keeping connections local to the rack or an adjacent rack
Integrated Switching
  • Switches integrated directly into blade server chassis enclosure
  • Maintaining feature consistency is critical to network management; pass-through modules are sometimes used instead
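
To put rough numbers on the EoR/ToR trade-off (fewer switches to manage versus far less horizontal cabling), here is a small Python comparison for a single row of racks. The row size, server counts, and uplink counts are illustrative assumptions only.

    # Illustrative EoR vs ToR comparison for one row of racks (assumed numbers)
    racks_per_row    = 12
    servers_per_rack = 24
    links_per_server = 2      # dual-homed servers

    server_links = racks_per_row * servers_per_rack * links_per_server

    # End-of-Row: one chassis pair per row; every server cable leaves its rack
    eor_switches  = 2
    eor_interrack = server_links

    # Top-of-Rack: a switch pair per rack; only uplinks leave the rack
    tor_per_rack    = 2
    uplinks_per_tor = 2
    tor_switches    = racks_per_row * tor_per_rack
    tor_interrack   = tor_switches * uplinks_per_tor

    print(f"EoR: {eor_switches} switches, {eor_interrack} inter-rack cable runs")
    print(f"ToR: {tor_switches} switches, {tor_interrack} inter-rack cable runs")

With these assumed numbers, EoR keeps the row at two switches but pulls every server cable out of its rack, while ToR multiplies the managed switch count but cuts inter-rack cabling by an order of magnitude.
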
Supporting Storage and Data with Unified Fabric

  • Nexus 5000 Series switches support integration of both IP data and Fibre Channel over Ethernet at the network edge.
  • FCoE traffic may be broken out on native Fibre Channel interfaces from the Nexus 5000 to connect to the Storage Area Network (SAN). 
  • Servers require Converged Network Adapters (CNAs) to consolidate this traffic over a single interface, saving on cabling and power, as illustrated below
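
To illustrate that consolidation benefit, the short Python sketch below compares per-server adapter cabling for a traditional split LAN/SAN attachment against a unified-fabric attachment using CNAs. The per-server port counts are assumptions chosen for the example.

    # Hypothetical per-server cabling: separate LAN/SAN vs. unified fabric with CNAs
    servers = 200

    # Traditional attachment: dedicated Ethernet NIC ports plus Fibre Channel HBA ports
    nic_ports_per_server = 2   # LAN
    hba_ports_per_server = 2   # SAN
    traditional_cables = servers * (nic_ports_per_server + hba_ports_per_server)

    # Unified fabric: a pair of CNA ports carries both IP data and FCoE
    cna_ports_per_server = 2
    unified_cables = servers * cna_ports_per_server

    print(f"Traditional LAN + SAN cables: {traditional_cables}")
    print(f"Unified fabric (CNA) cables:  {unified_cables}")
    print(f"Cables (and adapters) saved:  {traditional_cables - unified_cables}")
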