More Answers from our Webinar on “OpenFlow and the New Ethernet Game Plan”

Aricent recently hosted a webinar titled OpenFlow and the New Ethernet Game Plan. The webinar provided Telecom Equipment Manufacturers with in-depth insights into the Software Defined Networking (SDN) and OpenFlow technologies that are driving rapid innovation in the routing and switching domain. We discussed the architectures and application of OpenFlow/SDN to various networks, including datacenter, enterprise, and transport networks. We also detailed the present state of adoption and our view on its evolution.

View the Archived Webinar - New Registration
View the Archived Webinar - Already Registered

Given the large number of questions asked, we were not able to answer all of them live, so we have addressed them below.

Where does OAM belong in the SDN-layered model? For example, OAM for fault detection and for performance monitoring. And how are they associated with the data plane?

There are two aspects to OAM:

  1. Continuous monitoring of connectivity and collection of data
  2. Collation of the gathered events and data to take actions and present a coherent picture to the administrator/user

The first aspect must be handled at each network element. One way of doing this is for the controller to instruct the network element to collect certain statistics, or to keep transmitting a certain frame out of a given port at a given time interval. The second aspect would be handled at the controller.
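
As a sketch of how the first aspect could be driven from the controller, the following uses the open-source Ryu controller framework (our choice purely for illustration; the 10-second polling interval and the use of port counters are likewise illustrative, not mandated by OpenFlow):

```python
# A minimal sketch of controller-driven OAM data collection (Ryu,
# OpenFlow 1.3). The controller tracks connected switches and polls
# each one for its port counters on a fixed interval.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import (MAIN_DISPATCHER, DEAD_DISPATCHER,
                                    set_ev_cls)
from ryu.ofproto import ofproto_v1_3
from ryu.lib import hub


class OamMonitor(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(OamMonitor, self).__init__(*args, **kwargs)
        self.datapaths = {}
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange,
                [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        # Keep track of switches as they connect and disconnect.
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[dp.id] = dp
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(dp.id, None)

    def _monitor(self):
        # Aspect 1: instruct every network element to report statistics.
        while True:
            for dp in self.datapaths.values():
                parser = dp.ofproto_parser
                dp.send_msg(parser.OFPPortStatsRequest(
                    dp, 0, dp.ofproto.OFPP_ANY))
            hub.sleep(10)  # illustrative polling interval

    @set_ev_cls(ofp_event.EventOFPPortStatsReply, MAIN_DISPATCHER)
    def _port_stats_reply_handler(self, ev):
        # Aspect 2 begins here: collate the replies at the controller.
        for stat in ev.msg.body:
            self.logger.info('dpid=%016x port=%s rx=%d tx=%d rx_err=%d',
                             ev.msg.datapath.id, stat.port_no,
                             stat.rx_packets, stat.tx_packets,
                             stat.rx_errors)
```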

How does OpenFlow impact resource capacity limits for service providers if implemented in a hybrid environment in an MPLS network?

Today’s switches have many different types of hardware tables: a routing table, MAC address table, VLAN table, MPLS forwarding table, queues, policers, filter tables, and so on. OpenFlow holds out the possibility of discarding these separate tables and using a single pattern-match table together with the queues, policers, etc. The pattern matching can be done against a fixed number of bytes (say, 64 or 128) of every incoming packet. Thus, the complexity of the various elements in the hardware can be reduced. The expected benefit is that with the same amount of hardware resources (say, total amount of memory), more data path flows can be handled. In addition, the promise is that the data path handling can be changed or upgraded without needing a hardware upgrade.
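
To make the single-table idea concrete, here is a small sketch (using the open-source Ryu framework and OpenFlow 1.3 purely for illustration; all addresses, labels, and port numbers are made up) of how entries that would traditionally sit in separate MAC, routing, and MPLS tables become rows of one generic match/action table:

```python
# Sketch: three traditionally separate hardware tables expressed as rows
# of a single OpenFlow match/action table (Ryu, OpenFlow 1.3; all field
# values are made-up examples).
def install_example_flows(datapath):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    def add_flow(match, actions, priority=100):
        inst = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(
            datapath=datapath, priority=priority,
            match=match, instructions=inst))

    # What a MAC address table entry becomes:
    add_flow(parser.OFPMatch(eth_dst='00:11:22:33:44:55'),
             [parser.OFPActionOutput(1)])

    # What an IPv4 routing table entry becomes (0x0800 = IPv4):
    add_flow(parser.OFPMatch(eth_type=0x0800,
                             ipv4_dst=('10.0.0.0', '255.255.255.0')),
             [parser.OFPActionOutput(2)])

    # What an MPLS label-swap entry becomes (0x8847 = MPLS unicast):
    add_flow(parser.OFPMatch(eth_type=0x8847, mpls_label=100),
             [parser.OFPActionSetField(mpls_label=200),
              parser.OFPActionOutput(3)])
```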

However, if an integrated hybrid model is implemented, a particular flow may be subject to control by both the legacy control plane and the OpenFlow (OF) control plane in the network. In an MPLS network, to keep things simple, the operator will try to partition the resource pools such as the label space, ports, queues, meters, etc. This can introduce some inefficiency, since resources that are free in one pool cannot be used by the other control plane while they remain assigned to its counterpart. Dynamic movement of resources between the two control planes is possible, but it is complex. (Things like this have happened in the past. For example, when GPRS/EGPRS was introduced, vendors initially partitioned the available time slots between the GPRS controller and the voice controller, and later implemented logic for moving slots between the two. This was very effective in the beginning, when data requirements were low and most resources were allocated to voice; later, when demand for data increased, dynamic sharing had to be implemented. I expect a similar concept to be developed here.)
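
As a toy illustration of the static partitioning described above, and of the inefficiency it brings, consider the following sketch (the label ranges and pool names are hypothetical):

```python
# Toy model of statically partitioning an MPLS label space between a
# legacy control plane and an OpenFlow control plane. The ranges are
# hypothetical; real partitioning would cover ports, queues, meters, etc.
class LabelSpacePartition:
    def __init__(self):
        self.pools = {
            'legacy':   set(range(16, 500000)),
            'openflow': set(range(500000, 1048576)),
        }

    def allocate(self, plane):
        pool = self.pools[plane]
        if not pool:
            # Labels may still be free in the *other* pool, but this
            # plane cannot touch them: the inefficiency described above.
            raise RuntimeError('label pool exhausted for %s' % plane)
        return pool.pop()

    def rebalance(self, src, dst, count):
        # Dynamic movement between the planes is possible, but the hard
        # part in practice is coordinating it without disrupting traffic.
        for _ in range(min(count, len(self.pools[src]))):
            self.pools[dst].add(self.pools[src].pop())
```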

What do you think would be the ratio for controller to switches in this model?

There is no single answer. It depends on the number of ports on the switches, the processing power of the controller’s CPU, whether it is being used in a pure OpenFlow environment or a hybrid environment, and so on. It is more realistic to think in terms of the number of end clients/hosts that a controller can handle. A rough rule of thumb is that a low-capacity controller can handle a few thousand end hosts, a medium-capacity controller tens of thousands, and a large controller a few hundred thousand. The controller’s processing power must be chosen accordingly. Making this more complicated is the possibility of a distributed controller (a controller that is not a single physical element but a single logical element comprising multiple physical elements).

When do you see the SDN/OpenFlow implementation in an LTE Access Network or EPC Network?

It is probably 24-30 months away, since the predominant interest in SDN/OpenFlow currently seems to be coming from datacenters. There is promise if the EPC core can also control the flows in the transport network, since a view of the service-based flows already exists at the EPC. We see this happening through the PGW and the PCRF interfaces – the EPC can try to route a flow through an alternate path if there is congestion in the transport network. OpenFlow provides the possibility of gathering the transport metrics, and orchestration logic can act on the results. There may be other applications as well.

You've explained that OpenFlow is a protocol and SDN is a broader architecture. But can you have SDN without OpenFlow at all? Is there an example?

Yes, it is possible. Instead of OpenFlow messages, proprietary messages can be exchanged between the controller and the switch, or another standard could emerge; right now, the only standard is OpenFlow. It is interesting to note the similarity of SDN to the PCE model in transport networks: the model is the same, and there is no OpenFlow protocol involved. Currently, vendors like Cisco offer APIs to control their switches – their developer platform onePK could probably enable orchestration logic to be built around it. In the cloud environment, the Nova stack controls Open vSwitch. OpenFlow is simply another standard way of controlling the switch.
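
As one example, Open vSwitch can be programmed without speaking OpenFlow at all, through its OVSDB-backed ovs-vsctl command-line tool. A small sketch driving it from Python (the bridge and port names are made up):

```python
# Sketch: switch control without OpenFlow, using Open vSwitch's
# OVSDB-backed ovs-vsctl CLI. Bridge and port names are made up.
import subprocess

def ovs(*args):
    subprocess.check_call(['ovs-vsctl'] + list(args))

ovs('add-br', 'br0')                  # create a bridge
ovs('add-port', 'br0', 'eth1')        # attach a physical port
ovs('set', 'port', 'eth1', 'tag=10')  # place the port in VLAN 10
```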

Are OpenFlow applications applicable to routers as well? The reason for asking is that the word “switch” has been used in this presentation, and not “router”.

Yes, it is applicable to routers as well. We use the term “switch” in a generic sense, since a switch can be an L3 switch, an MPLS switch, or even an optical switch.

When applying SDN to transport networks, in addition to scalability and restoration time issues are there also issues with security and reliability?

These are tackled by the use of an SSL connection between the controller and the switch. SSL runs over TCP, which provides reliable, in-order delivery of the messages, while SSL itself provides the security through authentication and encryption.
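
A minimal sketch of what the controller side of such a secured channel could look like in Python (the certificate file names are illustrative; 6633 is the conventional OpenFlow port):

```python
# Sketch of a controller-side SSL-secured listener. TCP underneath
# provides reliable, in-order delivery; SSL adds authentication and
# encryption. Certificate file names are illustrative.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile='controller.crt', keyfile='controller.key')
ctx.load_verify_locations('switch-ca.crt')
ctx.verify_mode = ssl.CERT_REQUIRED  # the switch must also authenticate

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('0.0.0.0', 6633))  # 6633: the conventional OpenFlow port
sock.listen(5)

conn, addr = sock.accept()
secure = ctx.wrap_socket(conn, server_side=True)
header = secure.recv(8)  # e.g. an 8-byte OpenFlow message header
```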

VMware purchased Nicira a few months ago. Moving forward, how do you think the acquisition will affect the growth of SDN?

This will only fuel the growth of startups in this space and promises more solutions with interesting innovations and options. With more players jumping in, we can expect the best-of-breed solutions to survive and operators to be able to select the optimal ones.

In transport networks, such as OTN-over-DWDM networks, legacy network control is based on a centralized Path Computation Element (PCE). Furthermore, such networks are very static. So how can SDN be beneficial over the legacy architecture in DWDM networks?

In a completely static network, where everything is already provisioned, SDN does not offer much benefit. You could also say that SDN is already being used there, just under a different name – PCE. The concepts are the same. One benefit can be the standardization of the messages between the controller and the network elements/nodes, which can make it easier for operators to bring in new gear or to lay out specifications that OEMs must meet.

One of the many applications in such networks is the GMPLS UNI (overlay networks where IP clouds are connected by an OTN/DWDM network). SDN applications can have a better view of the resources and set up circuits across networks. With PCE-based path computation, the signaling is still GMPLS (RSVP-TE); with SDN that changes. The controller programs the switches much like a network management device would, the difference being that the interface is standardized. It does away with the multitude of protocols and becomes simpler. So SDN is similar to PCE, but a tad different. Both, however, provide the benefit of effective (constraint-based) path computation.

Will the ONF/SDN messages be transmitted over an unknown device?

There is no separate unknown device. These messages go over any channel (such as an Ethernet port) between the controller and the switch. Over the physical medium, Secure Sockets Layer (SSL), layered over TCP/IP, is used.

How is this SDN abstraction different than the 7-layer OSI abstraction model?

SDN does not change any of the postulates of the 7-layer OSI model, and it does not try to force-fit anything into that model either. Today, we have many protocols that work at different layers of the OSI model. All SDN does is try to move the protocol intelligence from the switch to a controller and thereby simplify the switch. The protocols themselves remain the same. Over time, a protocol itself might become unnecessary, but that is a separate discussion.

In your opinion, exactly how many months from now are we going to see active SDN deployments?

Trials are already happening in live networks, and deployments in datacenters are already underway. Widespread adoption could become a reality in 2014.

Scalability is an issue in such an architecture. Do you think network operators will ultimately adopt it?

Transport network operators already use PCE, which follows a model similar to SDN’s. Operators will adopt whatever provides the best cost for the delivery of services, and they will benchmark any new technology against that criterion. Let’s look at the statement “scalability is an issue in such an architecture” a little more closely. Take, say, a network of 100 routers. For this network to handle, say, 50,000 routes, every router’s CPU must be capable of advertising, processing, and calculating 50,000 routes. All of them calculate the same thing, but they do it individually, so a great deal of redundant computation is going on. With SDN, the change is to move all this computation to one centralized processor. It is still calculating only 50,000 routes, so the scalability challenge is no different. In addition, this centralized processor must handle the messaging with the 100 routers, which can roughly double its computational load. So, in effect, you have one processor with double the processing power replacing 100 processors, each of which can in turn be replaced by a lower-powered variant. The overall cost of the network therefore comes down.
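
The back-of-the-envelope version of that argument (the unit of work is purely illustrative):

```python
# Back-of-the-envelope comparison of distributed vs. centralized route
# computation; "units" of work are purely illustrative.
ROUTERS = 100
ROUTES = 50000

# Distributed: every router computes all routes independently.
distributed_work = ROUTERS * ROUTES   # 5,000,000 units, network-wide

# Centralized: one controller computes the routes once; per the
# argument above, messaging to 100 routers roughly doubles its load.
centralized_work = ROUTES * 2         # 100,000 units, at one node

print('redundant computation eliminated: %dx'
      % (distributed_work // centralized_work))  # -> 50x
```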

Since the control is software-based, how do we address the latency on the data path (decision-dependent on control information) especially in data centers?

This should not be vastly different from the latency within any switch today. In today’s switches, there is usually some connection, such as a PCI bus or an Ethernet link, between the CPU and the data path hardware. This connection involves latency whenever an event has to be communicated to the CPU, a decision made, and a message sent back to the data path hardware. It is the same with OpenFlow, except that the Ethernet connection may pass through a few other devices to reach a separate physical device, the controller. Going back to the origins of SDN: putting the data path and the control plane on the same processor would give the lowest latency for reacting to events, but it severely restricts how the system can evolve and places limits on scalability. The control plane/data path separation in SDN came about to overcome exactly this problem. The separation envisioned by SDN will throw up problems to be solved, but it will take us to a more scalable and more intelligent network.
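
The round trip in question looks roughly like this (sketched with the open-source Ryu framework and OpenFlow 1.3; the flood-on-miss policy is only a placeholder):

```python
# Sketch of the control round trip that incurs the latency discussed
# above (Ryu, OpenFlow 1.3): a packet misses in the switch's table, goes
# to the controller, and the decision comes back as a flow entry.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ReactiveExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # Reaching this point cost one or more network hops, instead of
        # a PCI-bus hop to a local CPU: that difference is the latency.
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        in_port = ev.msg.match['in_port']

        # Install a flow entry so that subsequent packets of this flow
        # are handled entirely in the data path, never paying the
        # controller round trip again. (Flooding is only a placeholder.)
        actions = [parser.OFPActionOutput(dp.ofproto.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, priority=1,
            match=parser.OFPMatch(in_port=in_port), instructions=inst))
```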

Do you envision that the network elements that are distributed today, such as firewalls, session border controllers, content filters, etc., will migrate to central applications under OF?

Wherever network elements are large in number (hundreds or thousands), SDN definitely provides benefits by reducing the number of devices to be upgraded or evolved. If the number of devices is very small (less than ten, or a few tens), then there is not much benefit. Content filters are already migrating to a more centralized model: many home gateways today redirect incoming traffic to a server hosted by the service provider for content filtering, which then sends the traffic back to the gateway. This has happened fundamentally because of the very problem SDN is trying to solve – more and more intelligence being required in the network elements (gateways, in this case), making them more complex and expensive.

Can two or more controllers be put in a data center network? Are there any messages available that can be used for synchronization between the controllers?

Yes, multi-controller architectures and use cases are possible. In fact, they will be absolutely necessary if the network is to scale. Currently, there are no standards for this; it is left to proprietary implementations.

Did not hear much about the "New Ethernet Game Plan". Can you elaborate? Thanks.

The “New Ethernet Game Plan” is the movement towards SDN instead of the plethora of L2 and L3 protocols.

Does the SDN architecture imply 2 separate networks, the control network and data switching network?

Yes, although the control network could consist of just a single controller.

According to network equipment vendors, over 70% of switch software is due to "rainy day" failure scenarios, handling of infrequent/unexpected states and events. How will SDN work across production networks when controller SW and multiple switch HW platforms are being separately developed and evolved?

The standardization of the switch model, and of the messages between the controller and the hardware, addresses the issue of the software and hardware being developed by different vendors. The 70% of software devoted to rainy-day scenarios effectively makes a case for SDN, since that 70% can move to the controller instead of being replicated in every switch/router. As a general guideline, a centralized architecture is also better able to handle rainy-day scenarios than a distributed control architecture.
