OpenStack VLAN Networking Overview

Networking in OpenStack

OpenStack Networking is an open source, scalable, API-driven system for providing networking services in a manageable cloud environment. OpenStack is comparable to AWS in scope, and it is a leading community platform for today's private clouds. OpenStack Networking offers software-defined networking, which can create multiple networks in an environment without using hardware switches and routers. The OpenStack community has created different projects to manage networking as a service, such as Nova and Neutron: Nova is used for both networking and compute, while Neutron is built solely for networking. Let's discuss some networking-related factors.

Why software-defined networking?

Traditional network architectures cannot meet current demands. They fall short on several counts, such as cost efficiency and flexibility, and managing a dynamic networking environment with traditional technologies is a struggle. Hence the importance of software-defined networking.

Software-defined networking (SDN) is an approach to computer networking that lets us manage network services in an abstract manner, through different levels of functionality. In SDN, the network control and forwarding functions are decoupled: the control plane becomes programmable, and the hardware infrastructure is abstracted away from the networking services and applications built on top of it. This approach gives us networks that are affordable, easily manageable and suited to the dynamic nature of today's applications.

Abbreviations

  • VLAN – Virtual Local Area Network
  • GRE – Generic Routing Encapsulation
  • ML2 – Modular Layer 2
  • OVS – Open vSwitch

OpenStack Networking

OpenStack networking is implemented using two services, Nova and Neutron. Mostly we use the Neutron service, because Nova supports only flat and VLAN networking, while Neutron offers different types of networking technologies as a dedicated networking service.

Plugins

Neutron supports different backends, called "plugins", that work with a growing diversity of networking technologies. Various plugins are available for Neutron, such as ML2, VMware NSX, Cisco and Big Switch/Floodlight.

The official OpenStack documentation gives the most support to the ML2 plugin. The ML2 plugin uses type drivers to support multiple networking technologies and mechanism drivers to apply the networking configuration. ML2 supports the following networking technologies in OpenStack: flat, GRE, VLAN and VXLAN.

These networking technologies are implemented on the ML2 plugin through different mechanism drivers, such as OVS, Linux Bridge and L2 population (for creating overlay networks).
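
In ml2_conf.ini these drivers are selected as simple lists; a minimal sketch (the exact driver set is deployment-specific):

#/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population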

GRE is offered by the OVS mechanism driver; it is more flexible and requires less up-front configuration, but the encapsulation of packets in GRE tunnels degrades performance. The VLAN option requires more up-front configuration and design. VLAN is offered by both the OVS and Linux Bridge mechanism drivers, as is VXLAN.

In large production environments, VLAN networks are the most common choice: VLAN supports far more networks than flat networking (up to 4094 tenant networks) while avoiding the tunnelling overhead of GRE. So I intend to explain VLAN in more depth than the other networking technologies. Let's talk about VLAN.

VLAN (Virtual Local Area Network)

With VLANs, the switch creates independent zones by adding a tag to each frame. The tag carries a 12-bit ID, commonly called the VLAN ID, with possible values from 0 to 4095; IDs 0 and 4095 are reserved, so users can create up to 4094 tenant networks. In IEEE standards, VLAN tagging is specified as IEEE 802.1Q. The first ID (VLAN 1) is commonly used as the native VLAN; it is not used for creating tenant networks, and its only purpose is to handle untagged frames received on a trunk port.
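
In Neutron, the VLAN ID of a tenant network is its segmentation ID. As a quick illustration (the network name demo-net and the physical network label physnet1 are made up), a provider VLAN network with an explicit ID can be created from the CLI:

neutron net-create demo-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 10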

To learn more about VLANs and VLAN tagging, these links may be helpful:

https://en.wikipedia.org/wiki/Virtual_LAN

http://www.firewall.cx/networking-topics/vlan-networks/219-vlan-tagging.html

Here, I will talk about VLAN implementation using the ML2 plugin. There are two common practices: VLAN using Linux Bridge and VLAN using Open vSwitch (OVS). Other plugins, such as VMware NSX, Cisco and Big Switch/Floodlight, can be used instead of ML2.

This blog gives a brief introduction to OpenStack VLAN networking, using the simple OpenStack structure shown below.

[Figure: simple OpenStack structure]

Let's start with the Linux Bridge and OVS mechanism drivers using VLAN networking technology.

VLAN Using Linux Bridge Mechanism Driver

VLAN using Linux Bridge is a traditional networking mechanism that gives good reliability and throughput. A Linux bridge is more flexible than a hardware bridge because we can shape the network in software. A Linux bridge is not a simple hub: it is a full-featured bridge with a forwarding table, frame ageing and even Spanning Tree support. In production, Linux Bridge keeps admin overhead low while delivering good throughput. A Linux bridge connects Ethernet segments together in a protocol-independent way.
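
These bridge features can be checked from the command line with the standard bridge-utils tools; a quick sketch (the bridge name brq-demo is illustrative):

brctl show               # list bridges and their attached interfaces
brctl showmacs brq-demo  # the bridge's forwarding table, with ageing timers
brctl stp brq-demo on    # enable Spanning Tree on the bridge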

 

VLAN Implementation on the ML2 plugin

We use the ML2 plugin to implement VLAN networking. Both the compute and controller nodes must be configured, and the network node as well if one exists. Below, I illustrate with basic diagrams how the controller and compute nodes are configured for Linux Bridge VLAN on the ML2 plugin. In this example, the physical network supports VLANs.

Compute Node

[Figure: Linux Bridge VLAN layout on the compute node]

Tap device: the interface through which a VM's NIC (e.g. eth0) attaches to the host.

Linux Bridge: connects multiple interfaces together, acting as a virtual switch; frames arriving on one interface are forwarded to the other interfaces as needed.

Security Groups: sets of IP filter rules applied to the networking of each instance.

Explanation

Each VLAN is represented by a VLAN interface device that is associated with a VLAN tag and adds or removes that tag as frames pass through it. VLANs are created on the compute node; in the example above, VLAN ID 10 is associated with the VLAN interface eth0.10, which sits on the physical interface eth0. Packets received from outside arrive on eth0, and any packet tagged with VLAN ID 10 is passed to eth0.10, which strips off the tag and passes the frame on towards the corresponding machine. On the compute node, the Linux bridge interconnects all the tap devices belonging to the same VLAN ID.
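
A minimal sketch of the plumbing described above, using standard Linux tools (the bridge and tap names are illustrative; the Linux Bridge agent derives its own names from network and port UUIDs):

ip link add link eth0 name eth0.10 type vlan id 10  # VLAN interface carrying tag 10
ip link set eth0.10 up
brctl addbr brq-demo                                # per-network Linux bridge
brctl addif brq-demo eth0.10                        # uplink towards the physical network
brctl addif brq-demo tap-demo                       # the VM's tap device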

Controller Node

[Figure: Linux Bridge VLAN layout on the controller node]

Tap device: the interface through which a VM's NIC (e.g. eth0) attaches to the host.

Linux Bridge: connects multiple interfaces together, acting as a virtual switch; frames arriving on one interface are forwarded to the other interfaces as needed.

qdhcp: a network namespace holding the dnsmasq process, which listens on the network's interface to provide DHCP services.

Security Groups: sets of IP filter rules applied to the networking of each instance.

Explanation

The controller node (or the network node, where one exists) provides the DHCP service for the VMs of each network in a corresponding namespace. As the example shows, DHCP hands out IPs to the VMs in the same namespace, and namespaces are distinguished by their VLAN ID; in our example, VLAN ID 10 identifies the namespace.
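
On the controller/network node these namespaces can be listed and inspected; a quick sketch (Neutron names each namespace after its network UUID, elided here):

ip netns list                                  # shows qdhcp-<network-uuid> namespaces
ip netns exec qdhcp-<network-uuid> ip addr     # the interface dnsmasq listens on inside it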

 

Sample ML2 configuration

Sample configuration for Linux Bridge VLAN on the ML2 plugin:

#/etc/neutron/plugins/ml2/ml2_conf.ini

[Figure: sample ml2_conf.ini for Linux Bridge VLAN]
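
A representative configuration might look like the following; the physical network label physnet1, the VLAN ID range and the eth1 interface are illustrative values, not requirements:

[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200

[linux_bridge]
physical_interface_mappings = physnet1:eth1

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver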

To learn more about Linux Bridge VLAN, these links may be helpful:

https://ask.openstack.org/en/question/59878/neutron-linux-bridge-and-vlans/

http://docs.openstack.org/kilo/config-reference/content/networking-plugin-linuxbridge_agent.html

https://wiki.openstack.org/wiki/Neutron-Linux-Bridge-Plugin

http://robhirschfeld.com/2013/10/16/openstack-neutron-using-linux-bridges-technical-explanation/

VLAN Using Open vSwitch (OVS) Mechanism Driver

Open vSwitch is a programmable virtual switch that supports multilayer networking. It is designed to enable effective network automation and virtualization, and it supports standard management interfaces and protocols such as NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, GRE, VLAN, VXLAN, double-tagged VLAN (QinQ) and Geneve tunnels.

VLAN Implementation on the ML2 plugin

Again we use the ML2 plugin to implement VLAN networking, configuring both the compute and controller nodes, and the network node if one exists. Below, I illustrate with basic diagrams how the controller and compute nodes are configured for OVS VLAN on the ML2 plugin. The physical network in this example supports VLANs.

Compute Node

[Figure: OVS VLAN layout on the compute node]

Tap device: the interface through which a VM's NIC attaches to the host.

Linux Bridge: connects multiple interfaces together, acting as a virtual switch; frames arriving on one interface are forwarded to the other interfaces as needed.

br-int: the OVS integration bridge. All guest instances on the compute node connect to this bridge.

br-ex: the bridge that provides connectivity to the physical network interface card, eth0.

Security Groups: sets of IP filter rules applied to the networking of each instance.

Explanation

OVS is responsible for installing the forwarding rules on br-int and br-ex for VLAN networking. When br-ex receives a frame tagged with VLAN ID 100 on the port associated with phy-br-eth1, it rewrites the VLAN ID in the frame to 101. Similarly, when br-int receives a frame tagged with VLAN ID 100 on the port associated with int-br-eth1, it rewrites the VLAN ID in the frame to 10. The veth0.10 port associated with the tap device then strips off the tag before passing the frame to the corresponding VM. The figure below shows the internal communication between br-ex and br-int.

[Figure: internal communication between br-ex and br-int]
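
These rewrites are plain OpenFlow rules and can be inspected on the host; a sketch (the VLAN numbers follow the example above, and the flow output is abridged and illustrative):

ovs-vsctl show               # bridges br-int and br-ex with their ports
ovs-ofctl dump-flows br-int  # e.g. ... dl_vlan=100 actions=mod_vlan_vid:10,NORMAL
ovs-ofctl dump-flows br-ex   # e.g. ... dl_vlan=100 actions=mod_vlan_vid:101,NORMAL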

Controller Node

[Figure: OVS VLAN layout on the controller node]

Tap device: the interface through which a VM's NIC attaches to the host.

Linux Bridge: connects multiple interfaces together, acting as a virtual switch; frames arriving on one interface are forwarded to the other interfaces as needed.

br-int: the OVS integration bridge. All guests on the node connect to this bridge.

br-ex: the bridge that provides connectivity to the physical network interface card, eth0.

qdhcp: a network namespace holding the dnsmasq process, which listens on the network's interface to provide DHCP services.

Security Groups: sets of IP filter rules applied to the networking of each instance.

Explanation

On the controller node, packets arriving from the VLAN network are received on the eth0 port. OVS is responsible for the packet flow between br-ex and br-int (the integration bridge), and tags are stripped in the same way as on the compute node. The controller node (or the network node, where one exists) provides DHCP services for the VMs of each network in the corresponding namespace.

Sample ML2 configuration

Sample configuration for OVS VLAN on the ML2 plugin:

#/etc/neutron/plugins/ml2/ml2_conf.ini

[Figure: sample ml2_conf.ini for OVS VLAN]
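
A representative configuration might look like the following; physnet1, the VLAN ID range and the br-ex bridge mapping are illustrative values, not requirements:

[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200

[ovs]
bridge_mappings = physnet1:br-ex

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver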

To learn more about OVS VLAN, these links may be helpful:

http://docs.openstack.org/developer/neutron/devref/openvswitch_agent.html

https://visibilityspots.org/vlan-flat-neutron-provider.html

https://www.rdoproject.org/networking/neutron-with-ovs-and-vlans/

http://techbackground.blogspot.in/2013/07/the-open-vswitch-plugin-with-vlans.html

http://docs.openstack.org/liberty/install-guide-rdo/neutron.html

Conclusion

Both OVS and Linux Bridge have their own advantages and disadvantages. OVS is intentionally designed to be compatible with modern switching chipsets. Both OVS and Linux Bridge let us bring the same flexibility and control we have over physical infrastructure to virtual infrastructure.

According to a Rackspace test, Linux Bridge VLAN has higher throughput than OVS: copying a 10 GB file over the network with SCP, the Linux Bridge VLAN setup achieved higher throughput than the OVS mechanism driver.

OVS has also produced more kernel panics than Linux Bridge, and the encapsulation of packets (for tunnelled network types) adds CPU overhead. OVS offers richer network virtualization than Linux Bridge, but it is harder to troubleshoot in a heavy-traffic environment.

In a production environment we look for reliability, stability, ease of troubleshooting, maximum throughput and simplicity, which makes Linux Bridge VLAN more admin-friendly than OVS VLAN.

Reference Links

https://wiki.openstack.org/wiki/Neutron-Linux-Bridge-Plugin

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_networking-scenarios.html

http://networkheresy.com/2014/11/13/accelerating-open-vswitch-to-ludicrous-speed/

Posted November 30, 2015 by Jossy Watson
