Wednesday, February 20, 2019

Configuration of MPLS Switching and Forwarding

Multiprotocol Label Switching (MPLS) is a high-performance packet forwarding technology that integrates the performance and traffic management capabilities of data link layer (Layer 2) switching with the scalability, flexibility, and performance of network layer (Layer 3) routing. MPLS enables you to meet the challenges of explosive growth in network utilization while providing the opportunity to differentiate services without sacrificing the existing network infrastructure.

Fig 1.1- Basic MPLS Topology
Each label switching router (LSR) in the network makes an independent, local decision about which label value to use to represent a forwarding equivalence class (FEC). This association is known as a label binding. Each LSR informs its neighbors of the label bindings it has made (a conceptual sketch of this exchange follows the list below). Neighboring switches learn these label bindings through the following protocols: 
  • Label Distribution Protocol (LDP)—Enables peer LSRs in an MPLS network to exchange label binding information to support hop-by-hop forwarding. 
  • Border Gateway Protocol (BGP)—Supports MPLS virtual private networks (VPNs). 
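
To make label binding concrete, here is a minimal Python sketch (an illustration only, not router code; the LSR names, FEC prefix, and label values are made up). Each LSR picks a free local label for a FEC, records the binding, and advertises it to its neighbors, roughly as LDP does:

import itertools

class LSR:
    """Toy label switching router that binds local labels to FECs."""
    def __init__(self, name):
        self.name = name
        self.bindings = {}                  # FEC prefix -> locally chosen label
        self.learned = {}                   # (peer name, FEC) -> peer's label
        self.neighbors = []
        self._labels = itertools.count(16)  # labels 0-15 are reserved

    def bind(self, fec):
        label = next(self._labels)          # independent, local decision
        self.bindings[fec] = label
        for peer in self.neighbors:         # advertise the binding, as LDP would
            peer.learned[(self.name, fec)] = label

a, b = LSR("LSR-A"), LSR("LSR-B")
a.neighbors.append(b)
a.bind("10.1.1.0/24")
print(b.learned)    # {('LSR-A', '10.1.1.0/24'): 16}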


Configuring a Switch for MPLS Switching 
Cisco Express Forwarding (CEF) must be enabled on the switch before you configure MPLS switching. Beginning in privileged EXEC mode, enter the following commands: 

Configuration Example for MPLS Switching 
Switch> enable 
Switch# configure terminal 
Switch(config)# ip cef distributed 
Switch(config)# mpls label range 16 4096 
Switch(config)# mpls label protocol ldp 
Switch(config)# end 

Configuring a Switch for MPLS Forwarding 
IPv4 packet forwarding must be enabled on the switch before you configure MPLS forwarding. Beginning in privileged EXEC mode, enter the following commands: 

Configuration Example for MPLS Forwarding 
Switch> enable 
Switch# configure terminal 
Switch(config)# interface gigabitethernet 1/0/0 
Switch(config-if)# mpls ip 
Switch(config-if)# mpls label protocol ldp 
Switch(config-if)# end

Verifying Configuration of MPLS Forwarding 
To verify that MPLS forwarding has been configured properly, run the following commands, which generate output similar to that shown below: 

Switch# show mpls interfaces detail
Interface GigabitEthernet1/0/0:
        Type Unknown
        IP labeling enabled
        LSP Tunnel labeling not enabled
        IP FRR labeling not enabled
        BGP labeling not enabled
        MPLS not operational
        MTU = 1500 

For Switch Virtual Interface (SVI): 
Switch# show mpls interfaces detail
Interface Vlan1000:
        Type Unknown
        IP labeling enabled (ldp):
          Interface config
        LSP Tunnel labeling not enabled
        IP FRR labeling not enabled
        BGP labeling not enabled
        MPLS operational
        MTU = 1500 

Switch# show running-config interface GigabitEthernet 1/0/0 
Building configuration... 
Current configuration : 307 bytes 
!
interface GigabitEthernet1/0/0 
no switchport 
ip address xx.xx.x.x xxx.xxx.xxx.x 
mpls ip 
mpls label protocol ldp 
end 

For Switch Virtual Interface (SVI): 
Switch# show running-config interface Vlan1000 
Building configuration... 
Current configuration : 187 bytes 
!
interface Vlan1000 
ip address xx.xx.x.x xxx.xxx.xxx.x 
mpls ip 
mpls label protocol ldp 
end


Tuesday, February 19, 2019

Introduction to Red Hat OpenShift Container Platform

Today I am going to talk about Red Hat OpenShift Container Platform, which helps you manage and control Kubernetes environments in enterprise infrastructure, whether on-premises or cloud-based. 

Red Hat OpenShift Container Platform handles cloud-native and traditional applications on a single platform. You can containerize and manage existing enterprise applications, develop on your own timeline, and move faster with new, cloud-native applications.

Red Hat OpenShift offers teams self-service access to reliable infrastructure across the enterprise, from development through production. Red Hat OpenShift Container Platform provides trusted, proven Kubernetes on any infrastructure. 

Get consistency and control everywhere that Red Hat Enterprise Linux runs. Security is incorporated throughout OpenShift, from infrastructure to services, and throughout the operations and application lifecycle.

Fig 1.1- Red Hat OpenShift Container Platform


What are the various features of Red Hat OpenShift Container Platform?
OpenShift Container Platform offers many features, including automated provisioning, management, and scaling of applications, so that we can focus on writing code for the business: 
  • Red Hat OpenShift includes pre-created quick-start application guides that let you build and deploy your favorite application frameworks, databases, and more in a single click.
  • OpenShift offers access to a private database instance with full control. Choose between classic relational and modern NoSQL datastores, including MariaDB, MySQL, PostgreSQL, MongoDB, Redis, and SQLite.
  • Developers can build applications, integrate with other systems, orchestrate using rules and processes, and deploy across hybrid environments.
  • OpenShift lets you take advantage of a large ecosystem of Docker-formatted Linux containers. From enterprise-ready containers in the Red Hat Container Catalog to community registries such as Docker Hub, OpenShift's ability to work directly with the Docker API unlocks a new world of content for your developers.
  • Simply perform a "git push" to build and deploy your containerized application. OpenShift creates ready-to-run images by injecting application source into a builder container image and assembling a new image; the new image incorporates the base (builder) image and the built source, and is ready to use with the docker run command (see the sketch after this list).
  • With OpenShift's built-in support for port forwarding, you can interact directly with your pods and make it appear as if the services in your pods are running on your own machine.
  • The OpenShift platform includes a web console with a responsive UI design, so it can easily be viewed on devices ranging from smartphones and tablets to laptops and desktop workstations. Developers can create, modify, and manage their applications and related resources from within the web console.
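
As a rough sketch of the build-from-source idea above (a simplified Python illustration, not OpenShift's actual source-to-image tool; the builder image name, source path, and assemble-script path are all assumptions):

import subprocess

BUILDER = "registry.example.com/python-builder:latest"  # hypothetical builder image
NEW_IMAGE = "myapp:latest"

# Create a container from the builder image whose command is its
# (hypothetical) assemble script, copy the application source in,
# run the build, and commit the result as a new, ready-to-run image.
cid = subprocess.run(
    ["docker", "create", BUILDER, "/usr/libexec/s2i/assemble"],
    capture_output=True, text=True, check=True,
).stdout.strip()
subprocess.run(["docker", "cp", "./myapp/.", f"{cid}:/tmp/src"], check=True)
subprocess.run(["docker", "start", "-a", cid], check=True)   # runs the assemble step
subprocess.run(["docker", "commit", cid, NEW_IMAGE], check=True)
print(f"built {NEW_IMAGE}; try: docker run {NEW_IMAGE}")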


Container Orchestration with Kubernetes
OpenShift includes Kubernetes for container orchestration and management. OpenShift adds developer- and operations-centric tools that enable rapid application development, easy deployment and scaling, and long-term life-cycle maintenance for teams and applications. 

Because OpenShift is built around a standardized container model, powered by Red Hat application programming interfaces (APIs) for Docker, applications created on OpenShift can easily run anywhere that supports Docker-formatted containers.

Wednesday, February 13, 2019

VXLAN Encapsulation and Packet Format

Today I am going to talk about VXLAN encapsulation and packet format. As many of you know, VXLAN is an overlay protocol, and it is used nowadays in many next-generation networks such as Cisco ACI. Let's talk about VXLAN in detail below.

VXLAN Protocol
VXLAN is a Layer 2 overlay scheme over a Layer 3 network. It uses MAC Address-in-User Datagram Protocol (MAC-in-UDP) encapsulation to provide a means to extend Layer 2 segments across the data center network. VXLAN is a solution to support a flexible, large-scale multitenant environment over a shared common physical infrastructure. The transport protocol over the physical data center network is IP plus UDP.

VXLAN defines a MAC-in-UDP encapsulation scheme in which the original Layer 2 frame has a VXLAN header added and is then placed in a UDP-IP packet. With this MAC-in-UDP encapsulation, VXLAN tunnels the Layer 2 network over the Layer 3 network. The VXLAN packet format is shown in the figure below.

Fig 1.1- VXLAN Packet Format
VXLAN introduces an 8-byte VXLAN header that consists of a 24-bit VNID and a few reserved bits. The VXLAN header together with the original Ethernet frame goes in the UDP payload. The 24-bit VNID is used to identify Layer 2 segments and to maintain Layer 2 isolation between the segments. With all 24 bits in VNID, VXLAN can support 16 million LAN segments.
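
To make the header layout concrete, here is a minimal Python sketch (an illustration, not from the original post) that packs the 8-byte VXLAN header with the struct module; the flag bit and field offsets follow RFC 7348:

import struct

def pack_vxlan_header(vnid):
    """Build the 8-byte VXLAN header: flags (1 byte, I bit set),
    3 reserved bytes, 3-byte VNID, 1 reserved byte."""
    if not 0 <= vnid < 2 ** 24:
        raise ValueError("VNID must fit in 24 bits")
    flags = 0x08                       # "I" bit: the VNID field is valid
    return struct.pack("!II", flags << 24, vnid << 8)

header = pack_vxlan_header(10)         # VXLAN segment 10
assert len(header) == 8
print(header.hex())                    # 0800000000000a00

The 24-bit VNID field is exactly why the segment count tops out at 2^24, roughly 16 million.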

VXLAN Tunnel Endpoint
VXLAN uses VXLAN tunnel endpoint (VTEP) devices to map tenants’ end devices to VXLAN segments and to perform VXLAN encapsulation and de-encapsulation. Each VTEP function has two interfaces: One is a switch interface on the local LAN segment to support local endpoint communication through bridging, and the other is an IP interface to the transport IP network.

The IP interface has a unique IP address that identifies the VTEP device on the transport IP network known as the infrastructure VLAN. The VTEP device uses this IP address to encapsulate Ethernet frames and transmits the encapsulated packets to the transport network through the IP interface. A VTEP device also discovers the remote VTEPs for its VXLAN segments and learns remote MAC Address-to-VTEP mappings through its IP interface. 

The functional components of VTEPs and the logical topology created for Layer 2 connectivity across the transport IP network are shown in the figure below:

Fig 1.2- VXLAN Tunnel Endpoint
The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP network between VTEPs is independent of the VXLAN overlay. The transport network routes the encapsulated packets based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address.

VXLAN Packet Forwarding Flow
VXLAN uses stateless tunnels between VTEPs to transmit traffic of the overlay Layer 2 network through the Layer 3 transport network. An example of a VXLAN packet forwarding flow is shown in the figure below:

Fig 1.3- VXLAN Packet Forwarding Flow
Host-A and Host-B in VXLAN segment 10 communicate with each other through the VXLAN tunnel between VTEP-1 and VTEP-2. This example assumes that address learning has been done on both sides, and corresponding MAC-to-VTEP mappings exist on both VTEPs.

When Host-A sends traffic to Host-B, it forms Ethernet frames with MAC-B, the MAC address of Host-B, as the destination MAC address and sends them out to VTEP-1. VTEP-1, which has a mapping of MAC-B to VTEP-2 in its mapping table, performs VXLAN encapsulation on the packets by adding VXLAN, UDP, and outer IP headers to them. In the outer IP header, the source IP address is the IP address of VTEP-1, and the destination IP address is the IP address of VTEP-2. 

VTEP-1 then performs an IP address lookup for the IP address of VTEP-2 to resolve the next hop in the transit network and subsequently uses the MAC address of the next-hop device to further encapsulate the packets in an Ethernet frame to send to the next-hop device.

The packets are routed toward VTEP-2 through the transport network based on their outer IP address header, which has the IP address of VTEP-2 as the destination address. After VTEP-2 receives the packets, it strips off the outer Ethernet, IP, UDP, and VXLAN headers and forwards the packets to Host-B, based on the original destination MAC address in the Ethernet frame.
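
As a rough illustration of this flow (not from the original post; the MAC addresses, VTEP IPs, and source port are made up), the sketch below uses Scapy's VXLAN layer to build the encapsulated packet that VTEP-1 would send toward VTEP-2. UDP destination port 4789 is the IANA-assigned VXLAN port, and the outer Ethernet header toward the next hop is left to the sending stack:

from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

# Learned mapping on VTEP-1: inner destination MAC -> remote VTEP IP.
mac_to_vtep = {"00:00:00:00:00:0b": "10.0.0.2"}          # MAC-B -> VTEP-2

# Original frame from Host-A (MAC-A) to Host-B (MAC-B) in segment 10.
inner = Ether(src="00:00:00:00:00:0a", dst="00:00:00:00:00:0b") / \
        IP(src="192.168.10.1", dst="192.168.10.2")

# VTEP-1 encapsulates: outer IP is VTEP-1 -> VTEP-2, UDP port 4789.
remote = mac_to_vtep[inner.dst]
encapsulated = (
    IP(src="10.0.0.1", dst=remote)
    / UDP(sport=49152, dport=4789)
    / VXLAN(vni=10)           # the Instance flag (VNI valid) is set by default
    / inner
)
encapsulated.show()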

Sunday, February 10, 2019

Chef vs. Puppet: Configuration Management Tools

As I discussed earlier, Chef and Puppet are both configuration management tools, used for deploying, configuring, and managing servers. Both serve the same purpose of infrastructure as code. 

Chef
Chef is, at its core, a tool for configuration management. With the help of Chef, you can treat infrastructure as code, or the network as programmable infrastructure. Chef has a client-server architecture. With Chef we can do infrastructure configuration, application deployment, and configuration management.

In Chef, nodes dynamically update themselves with the configurations held on the server; this is called pull configuration. It means we do not need to run any command on the Chef server to push configuration to the nodes: the nodes automatically update themselves with the configurations present on the server. By default, the Chef client pulls configuration updates from the Chef server every 30 minutes.

Puppet
Puppet is, at its core, a tool for configuration management. With the help of Puppet, you can treat infrastructure as code, or the network as programmable infrastructure. Infrastructure as code is a prerequisite for common DevOps practices such as version control, code review, continuous integration, and automated testing. These practices enable continuous delivery of quality software that satisfies enterprise network needs.

Infrastructure as code is a way to build infrastructure that operations teams can manage and provision automatically through code, removing the need for manual work to accomplish the same tasks. Infrastructure as code can also be termed programmable infrastructure.

Fig 1.1- Chef vs. Puppet
Puppet uses a master-slave architecture in which the master and slaves communicate through a secure, encrypted channel using SSL.

Chef also uses a master-slave architecture, but it has an extra component called the workstation. All configurations are first tested on the workstation and then pushed to the Chef server. Chef is often used to provision infrastructure in the cloud, as it is compatible with most cloud platforms.

Chef uses Ruby as its configuration language rather than a custom DSL. Chef is designed from the ground up to integrate with other tools, or to make that integration as simple as possible; Chef is not intended to be the canonical representation of your infrastructure.

Puppet uses a declarative language, which describes the state each resource must reach, while Chef uses an imperative language, which describes the steps to take to reach that state.
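
As a rough, tool-agnostic illustration of the difference (a Python sketch, not actual Puppet or Chef code; it assumes a Debian-style system with dpkg and apt-get available):

import subprocess

# Declarative style (Puppet-like): state only the desired end state and
# let a convergence engine work out the steps.
desired_state = {"package": "nginx", "ensure": "installed"}

# Imperative style (Chef-like): spell out the steps yourself.
def package_installed(name):
    """Check the dpkg database for an installed package."""
    return subprocess.run(["dpkg", "-s", name], capture_output=True).returncode == 0

def converge(name):
    if not package_installed(name):               # test current state first
        subprocess.run(["apt-get", "install", "-y", name], check=True)

converge(desired_state["package"])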

Both operate as pull-based configuration management tools, but Chef has a separate workstation where the code is written and then sent to the server, whereas in Puppet the code is written and stored directly on the server.

Saturday, February 9, 2019

Introduction to Puppet- Infrastructure as Code

Today I am going to discuss the basics of Puppet. Puppet is, at its core, a tool for configuration management. With the help of Puppet, you can treat infrastructure as code, or the network as programmable infrastructure. Infrastructure as code is a prerequisite for common DevOps practices such as version control, code review, continuous integration, and automated testing. These practices enable continuous delivery of quality software that satisfies enterprise network needs.

Infrastructure as code is a way to build infrastructure that operations teams can manage and provision automatically through code, removing the need for manual work to accomplish the same tasks. Infrastructure as code can also be termed programmable infrastructure.

Puppet Enterprise offers a strong audit trail with comprehensive reports on the state of the infrastructure. We can easily see who changed what and when, and out-of-policy changes are automatically remediated back to the desired state.

Fig 1.1- Puppet: Infrastructure as Code

Puppet consists of two physical components: the Puppet server, also called the Puppet master, and the node.

Puppet server/master: the component of the Puppet infrastructure that holds all the data about the machines it can configure.
Node: a machine that can be managed by Puppet is called a node.

The Puppet server holds all the configurations for the different hosts and runs on the master machine. The Puppet agent and master communicate with each other over a secure, encrypted channel implemented using SSL.

Specific software needs to be installed on each of the above machines:
Puppet agent: an agent that runs on each node server.
Puppet server: software that runs on the Puppet master, where all node configurations are stored. The Puppet server listens for connection requests from agents for configuration changes and upgrades.

Puppet defines all the configurations for a node in a platform-independent manner. This means the same configuration can be applied to a CentOS machine, an Ubuntu machine, or a Windows machine.

This is accomplished by a key concept in Puppet called the Resource Abstraction Layer (RAL). The RAL is an abstraction that lets operators describe a configuration that is platform independent while remaining highly manageable. At compilation time, the server looks at the agent's node facts (OS type, architecture, security restrictions, and so on) and compiles a catalog accordingly.
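
To give a feel for the idea, here is a simplified Python sketch (an illustration only, not Puppet's actual implementation): one platform-independent resource declaration is mapped onto a platform-specific provider, much as the RAL does.

import shutil
import subprocess

# One platform-independent resource declaration...
resource = {"type": "package", "name": "htop", "ensure": "installed"}

# ...mapped to platform-specific providers, as Puppet's RAL does.
PROVIDERS = {
    "apt-get": ["apt-get", "install", "-y"],   # Debian/Ubuntu
    "yum": ["yum", "install", "-y"],           # CentOS/RHEL
}

def pick_provider():
    """Crude stand-in for Puppet's fact-based provider selection."""
    for tool, cmd in PROVIDERS.items():
        if shutil.which(tool):
            return cmd
    raise RuntimeError("no supported package manager found")

def apply(res):
    subprocess.run(pick_provider() + [res["name"]], check=True)

apply(resource)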

Introduction to Chef- Infrastructure as Code

Today I am going to discuss the basics of Chef. Chef is, at its core, a tool for configuration management. With the help of Chef, you can treat infrastructure as code, or the network as programmable infrastructure. Chef has a client-server architecture. With Chef we can do infrastructure configuration, application deployment, and configuration management.

In Chef, nodes dynamically update themselves with the configurations held on the server; this is called pull configuration. It means we do not need to run any command on the Chef server to push configuration to the nodes: the nodes automatically update themselves with the configurations present on the server. By default, the Chef client pulls configuration updates from the Chef server every 30 minutes.

Fig 1.1- Chef: Infrastructure as Code
The Chef client makes changes only when the node is out of spec, and it can react to changes using Chef search. Chef lets you dynamically configure and de-provision network infrastructure on demand to keep up with spikes in usage and traffic. It allows new services and features to be deployed and updated more frequently, with little risk of downtime. With Chef, we can take advantage of all the flexibility and cost savings that the cloud offers.

There are two ways to manage configurations (a minimal sketch of the pull model follows this list):

Pull configuration: In this type of configuration management, the nodes periodically poll a centralized server for updates and pull their configurations from it. Pull configuration is used by tools such as Chef and Puppet.
Push configuration: In this type of configuration management, the centralized server pushes the configurations to the nodes. Unlike pull configuration, certain commands have to be executed on the centralized server in order to configure the nodes. Push configuration is used by tools such as Ansible.
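
To make the pull model concrete, here is a minimal Python sketch (an illustration, not Chef or Puppet code; the server URL is made up, and the 30-minute interval mirrors the Chef client default mentioned above):

import time
import urllib.request

POLL_INTERVAL = 30 * 60        # seconds; Chef's default client interval
CONFIG_URL = "https://config.example.com/node/web01"   # hypothetical server

applied = None
while True:
    # Poll the centralized server for this node's desired configuration.
    with urllib.request.urlopen(CONFIG_URL) as resp:
        desired = resp.read()
    if desired != applied:     # converge only when the node is out of spec
        print("applying new configuration ...")
        applied = desired      # a real tool would apply resources here
    time.sleep(POLL_INTERVAL)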