How Can Carriers Implement Software Defined Networking (SDN) and Benefit from It?

We recently hosted a webinar titled “Defining Carrier SDN”. In this webinar, we discussed how network operators can implement Software Defined Networking (SDN) and benefit from it. We looked at the types of solutions that operators need to build carrier-grade software defined networks at both the silicon and the software level. We also looked at the different types of applications that can be layered onto the SDN stack, the challenges of building and deploying SDN applications, and the benefits of this approach.

Sudeepta Ray, Assistant Vice President of Technology at Aricent, answers some of the questions asked during the webinar.

From an operator's perspective, what is the relationship between SDN and the current Network Management System (NMS)/Element Management System (EMS)?
The NMS/EMS allows the operator to manage and control vendor-specific devices. SDN, as part of the element management system, extends that capability to manage and control the devices and to orchestrate the network by applying policies on the fly, reacting to network states that change dynamically in real time. Future devices will provide a standardized interface to the application/controller platform, and the expectation is that carriers will be able to do much more than just control the devices. Innovative applications can be built to monetize the network more effectively. This could also happen from another platform, but having it integrated with the NMS/EMS is a logical start.

Should network virtualization be part of SDN, or can they be two separate features that complement each other?
Virtualization and SDN are complementary. A virtualized network can be orchestrated by the SDN controller as efficiently as a real network switch. A virtualized network element has its own advantages and can be used very effectively for certain applications. I think SDN will end up orchestrating both real and virtual devices.

SDN simplifies provisioning, but how do you simplify troubleshooting?
I agree that simplicity in provisioning is only one side of the coin. Debugging capability has to improve before SDN takes off in carrier networks. The network switches become simpler once the control plane is moved away, and the data-path control protocol already provides ways to monitor the switches. Going forward, I expect more debugging capabilities to be rolled into the network switches; OpenFlow provides an extension mechanism that vendors can use for this purpose. Whatever the technology of the network nodes, fault-isolation capabilities are a must for SDN to take off in carrier networks.
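As an illustration of the kind of monitoring the protocol already allows (a minimal sketch assuming the open-source Ryu controller framework and OpenFlow 1.3, not a description of any Aricent product), a controller application can poll connected switches for per-flow counters, which is the raw material for fault isolation:

```python
# Minimal sketch, assuming the Ryu controller framework and OpenFlow 1.3:
# poll every connected switch for its flow statistics every 10 seconds.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, DEAD_DISPATCHER, set_ev_cls
from ryu.lib import hub
from ryu.ofproto import ofproto_v1_3


class FlowStatsMonitor(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(FlowStatsMonitor, self).__init__(*args, **kwargs)
        self.datapaths = {}                       # dpid -> datapath handle
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        # Track switches as they connect and disconnect.
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[dp.id] = dp
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(dp.id, None)

    def _monitor(self):
        # Periodically request the flow-table counters from each switch.
        while True:
            for dp in self.datapaths.values():
                dp.send_msg(dp.ofproto_parser.OFPFlowStatsRequest(dp))
            hub.sleep(10)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply_handler(self, ev):
        # Per-flow packet/byte counts: a basic input for spotting dead flows
        # or traffic taking an unexpected path.
        for stat in ev.msg.body:
            self.logger.info('dpid=%016x priority=%d packets=%d bytes=%d',
                             ev.msg.datapath.id, stat.priority,
                             stat.packet_count, stat.byte_count)
```

The same request/reply pattern covers port and table statistics as well, which is where vendor-specific debugging extensions would naturally plug in.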

Does it make sense to combine current practice and SDN? For example, most of the traffic can use current protocol-based forwarding and only traffic that needs special care is controlled.
Yes, that is how it is going to happen. SDN/OpenFlow will not be a greenfield deployment. The network can contain switches belonging to two domains, OpenFlow switches and standard protocol-based switches, used according to need; this is the hybrid concept. The two domains will interconnect using defined interfaces, and the specifications have addressed that. NFV can stand on its own and SDN is not necessary for it, but the two can coexist, as SDN can orchestrate the devices that have been virtualized.

What is the input used to derive the "network abstraction" sub-function of the SDN controller, and how is it different from the "global network topology"?
Both are important. One provides the topology of all the connected elements in the network; this is important because it shows which links are connected and how to actually reach a destination. The orchestrating application, however, needs an abstract view: endpoints connected between different virtual elements such as bridges and switches. The actual network connectivity between them is handled by the abstraction layer, and the application should be agnostic to it.

How is the "physical network topology" discovered? Do we still need to run routing/discovery protocols on "data-only switches"?

That depends on the controller and the switch. Typically, a standard controller has the capability to do a "packet-out" and a "packet-in": it sends LLDP packets out on the switch interfaces and receives them back. It can then use its own route-computation logic to derive the topology. This is an advantage, as you can have your own routing algorithm.
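To make that concrete (a sketch assuming the open-source Ryu framework and its built-in topology discovery; an illustration only, not something covered in the webinar), the controller's topology module injects LLDP frames via packet-out messages and reports the links learned from the returning packet-ins, and the application only has to collect them into an adjacency map for its own route computation:

```python
# Minimal sketch, assuming the Ryu framework run as
# "ryu-manager --observe-links link_collector.py": Ryu's topology module
# sends LLDP via packet-out and raises link events from the packet-ins;
# this app just gathers the results into an adjacency map.
from ryu.base import app_manager
from ryu.controller.handler import set_ev_cls
from ryu.topology import event


class LinkCollector(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(LinkCollector, self).__init__(*args, **kwargs)
        self.adjacency = {}                 # src dpid -> {dst dpid: src port}

    @set_ev_cls(event.EventLinkAdd)
    def _link_add_handler(self, ev):
        # Each LLDP round trip yields one directed link:
        # (source switch, source port) -> destination switch.
        link = ev.link
        self.adjacency.setdefault(link.src.dpid, {})[link.dst.dpid] = link.src.port_no
        self.logger.info('link %s:%s -> %s',
                         link.src.dpid, link.src.port_no, link.dst.dpid)

    @set_ev_cls(event.EventLinkDelete)
    def _link_delete_handler(self, ev):
        # Drop stale links so route computation always sees the live topology.
        link = ev.link
        self.adjacency.get(link.src.dpid, {}).pop(link.dst.dpid, None)
```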

Do you see the controller mainly just adding vendor-specific Quantum plug-ins for the orchestration layer to control vendor-specific devices?
The enterprise market is seeing that change, as vendors use a Quantum plug-in to control the network. Besides orchestrating the cloud infrastructure, OpenStack also performs the network manipulation using the Quantum plug-in. We expect to see the same trend in carrier networks where carriers offer cloud services. However, in carrier networks we also expect orchestration to happen from the NMS platform, where the nature of the services is different, such as effective control of a multilayer transport network. Since an OpenStack orchestration framework already exists, the tendency will be to reuse it as much as possible.
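For context, the northbound side of that plug-in model is just a REST call; here is a minimal sketch against the OpenStack Networking (Quantum) v2.0 API, with the endpoint URL and token as placeholder assumptions. The configured plug-in then translates the request into vendor- or controller-specific operations on the switches:

```python
# Minimal sketch: create a tenant network through the OpenStack Networking
# (Quantum) v2.0 REST API. The endpoint and token below are placeholders;
# in a real deployment they would come from the Keystone identity service.
import json
import requests

QUANTUM_URL = "http://openstack-controller:9696/v2.0"    # assumed endpoint
AUTH_TOKEN = "<keystone-token>"                          # placeholder


def create_network(name):
    """Ask the Networking service for a new network; the configured plug-in
    decides how the backing switches (real or virtual) are programmed."""
    body = {"network": {"name": name, "admin_state_up": True}}
    resp = requests.post(
        QUANTUM_URL + "/networks",
        headers={"X-Auth-Token": AUTH_TOKEN,
                 "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    resp.raise_for_status()
    return resp.json()["network"]["id"]


if __name__ == "__main__":
    print("created network", create_network("demo-net"))
```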

What are the security implications of carrier SDN?
There are major security implications. I touched upon them when I mentioned security as an important parameter: security between the controller and the switch, and security at the northbound interface (NBI). For example, isolation and authorization of the applications are essential so that privacy is maintained and unauthorized usage is curtailed.

How does SDN distinguish itself from GMPLS-based control and management of networks?
SDN and GMPLS are similar in many ways, yet different. If you look at the PCE approach, it can be viewed as a form of SDN. In a traditional GMPLS network the control plane is on the network elements, hence the difference. With transport being brought into the purview of OpenFlow, we expect to see control of transport networks using OpenFlow. Further out, we expect to see sharing of routing databases across layers for more effective orchestration of multilayer network routing, overcoming the challenges faced today in implementing GMPLS overlay networks.

What is the impact of using OF 1.3 with MPLS on the network FlowVisor, versus OF 1.0?
The FlowVisor needs modification to run OF 1.3 with MPLS flows. OF version 1.3 introduces the concept of multiple tables, and handling of that needs to be addressed in the FlowVisor software before it can be used with the OF 1.3 protocol.
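To illustrate the multiple-table point (a hedged sketch assuming a Ryu OpenFlow 1.3 datapath handle, purely for illustration), this is the kind of entry a slicing layer would now have to understand: a table-0 rule that matches MPLS traffic and continues processing in table 1 via the Goto-Table instruction, which has no equivalent in OF 1.0:

```python
# Minimal sketch, assuming "dp" is a connected OpenFlow 1.3 datapath handle
# from the Ryu framework: steer MPLS unicast traffic from table 0 to table 1
# using the Goto-Table instruction introduced in OF 1.3.
def steer_mpls_to_table_1(dp):
    parser = dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x8847)        # MPLS unicast EtherType
    inst = [parser.OFPInstructionGotoTable(1)]      # not expressible in OF 1.0
    dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=0, priority=100,
                                  match=match, instructions=inst))
```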

What sort of use-cases has your solution (Aricent) been deployed for?
We do not sell end-to-end solutions. We have applications and networking software that can be used for the orchestration of networks, and there are customers who have used them in their nodes. The OpenFlow client and forwarding software are targeted at multi-core CPU vendors, as we find run-to-completion and pipeline models quite effective for this.

By when do you think SDN will move from the Data Centre to the WAN?
In my view, traction in the WAN environment is yet to pick up. My personal take is two to three years.

Would you describe network-based security services (e.g. content filtering, DDoS protection) as an effect of NFV? Or are they just pure network functions like routing?
I am not sure I have understood the question. I do not see content filtering and DDoS protection as an offshoot of NFV; they are a result of the routing and switching protocols that are in place in the network. What I can say is that the logic implementing these functions can be housed in a virtualized environment; in short, these functions are good candidates for Network Function Virtualization.

Take xSTP as an example, for an arbitrary mesh topology at arbitrary scale. It is highly unlikely that SDN could do a better job in terms of either complexity or performance. However, SDN might be able to do a better job for a topology that is limited both physically and in dimension. Any comments?
The OpenFlow network controllers currently in use work well in topologies of, say, 100 to 1,000 nodes; there they do a good job. They solve the problem of provisioning unused links, because they program the flow for the exact path, and the problem of convergence time goes away. However, the question of a solution that scales to any number of nodes in the topology remains. That brings us to the issue of using multiple controllers and sharing data between them, which I mentioned as one of the challenges preventing the uptake of SDN.

When any control protocol is centralized in SDN, it must not be affected by the links carrying traffic, and a node must not become isolated as a result of the recovery strategy during a topology change. I wonder whether it is possible to use a topology-independent protocol. On the surface, GMPLS is such a protocol; evidently it is used regardless of whether you have a physical ring or a physical mesh. Why couldn't it be used for Ethernet as well, eliminating the need for G.8031 and G.8032?

I agree that in GMPLS the control path and the data path are different. For example, in MPLS-TP (an example of GMPLS in use), the signalling links and the data links are different. Compare this with MPLS, where the control traffic and the data path are on the same link: a failure in the control path could be detected through a node-reachability protocol, say a node-based hello, and in most cases it would imply that the data path is down. In GMPLS, however, since the control path and the data path do not share the same fate, a protocol is needed to detect failures in the data path, and ELPS and ERPS are used (analogous to G.8031/G.8032). Note also that the control plane in GMPLS is still distributed, not centralized.

There is talk of PCE being analogous to SDN, but the fact is that part of the control-plane functionality is still distributed on the nodes.

In SDN, on the other hand, the controller needs to know which links on the switches are connected and can use its own algorithm to calculate the topology. Users can apply their own algorithms, and that flexibility provides the option of adding user-defined constraints. The procedure to detect link connectivity can be proprietary or can be based on sending LLDP packets. No prior knowledge of the topology is needed; it is calculated from the link-connectivity information.
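As a toy illustration of that flexibility (my own sketch, not tied to any particular controller), once the adjacency information has been learned, the controller can run whatever path computation it likes and fold in user-defined constraints, such as links to avoid:

```python
# Toy sketch: shortest-path computation over the link-connectivity map a
# controller has learned, with a user-defined constraint (links to avoid).
from collections import deque


def shortest_path(adjacency, src, dst, avoid_links=frozenset()):
    """adjacency: {switch: {neighbour: out_port}}; avoid_links: {(a, b), ...}.
    Returns the switches on a shortest hop-count path, or None if none exists."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbour in adjacency.get(node, {}):
            if neighbour in visited or (node, neighbour) in avoid_links:
                continue
            visited.add(neighbour)
            queue.append(path + [neighbour])
    return None


# Example: four switches in a square; constrain the path to avoid link (1, 2).
adjacency = {1: {2: 1, 3: 2}, 2: {1: 1, 4: 2}, 3: {1: 1, 4: 2}, 4: {2: 1, 3: 2}}
print(shortest_path(adjacency, 1, 4, avoid_links={(1, 2)}))    # -> [1, 3, 4]
```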

The service provider interoperability issue raised in the webinar is a significant point. If SDN can make interoperability features cost efficient, it solves a huge problem in the industry. This may be one area, however, where OpenFlow falls short today.
I completely agree that we need more work in this area. The OpenFlow switch does not address this: it is dumb and simply hands the traffic over to the other link, and since there are no protocols involved, it just passes the traffic on. What policy is applied is determined by the controller, so service providers must ensure that a consistent policy has been applied. That is not addressed by OpenFlow, as it is a matter of policy agreement.

We already have too many protocols to deal with. How do you think OpenFlow helps push traffic more efficiently compared to the technologies available today?

OpenFlow is a control protocol for managing flow-based traffic. The architecture separates the control protocols out of the switch. Other protocols can do that, but not in exactly the same way. If we look at the forwarding path in hardware, the concepts are still the same; only the programming is done from a controller. Whether it is OpenFlow or some other protocol is not the issue here; SDN is independent of OpenFlow.

How is SDN different from the Permanent Virtual Circuits (like X.25 or Frame Relay) of older days, where a network controller programmed the switches in the network?
Yes, it is similar in many ways to circuits provisioned from the NMS (PVCs). In SDN a flow can be provisioned from the NMS like a PVC (proactive mode). It is also possible that a new flow hits the switch and, since the switch has no entry for it, the packet is sent to the controller, which in turn programs the flow in the switch (reactive mode). The second option was not possible in the earlier networks. In my opinion, SDN provides the option of dynamically monitoring the network, setting up paths, and orchestrating them on the fly based on network conditions. Seen from the Ethernet perspective there are also differences: in traditional Layer 2 we programmed the ports and enabled learning, whereas in SDN/OpenFlow we program how each flow is forwarded, eliminating the need for some control protocols. In short, the concept is analogous to the NMS-driven provisioning world, yet different, as it provides much more.
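As an illustration of the reactive mode (a minimal sketch assuming the Ryu framework and OpenFlow 1.3, not a description of any particular product): a table-miss entry punts unknown packets to the controller, which then installs a flow entry so that subsequent packets of the flow stay in the switch's fast path.

```python
# Minimal sketch, assuming the Ryu framework and OpenFlow 1.3: reactive flow
# programming. A pre-installed table-miss entry sends the first packet of an
# unknown flow to the controller, which programs an entry for the rest.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ReactiveFlows(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def _switch_features_handler(self, ev):
        # Proactive part: install a table-miss rule that punts unmatched
        # packets to the controller.
        dp = ev.msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        self._add_flow(dp, 0, parser.OFPMatch(), actions)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        # Reactive part: the first packet of an unknown flow arrives here;
        # program a (deliberately crude) flood rule for its ingress port so
        # later packets never leave the data path.
        msg = ev.msg
        dp = msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        match = parser.OFPMatch(in_port=msg.match['in_port'])
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        self._add_flow(dp, 1, match, actions)

    def _add_flow(self, dp, priority, match, actions):
        parser, ofp = dp.ofproto_parser, dp.ofproto
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                      match=match, instructions=inst))
```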

Why would major network equipment vendors support SDN? Won’t it commoditize their proprietary products and make them lose their customers?

Frankly, they are not interested in moving to any open approach that uses a defined open protocol to control their network switches; that hurts them. However, they do want SDN, because it gives them the benefits of orchestration with real-time control, which provides a better experience to end customers and enables innovative services.
In my opinion, what they would want is SDN but with proprietary interfaces to the switching gear, each vendor having its own controller. No one disagrees about SDN itself. The reality is that the open standards have caught the attention of the large groups, so we are seeing the major network equipment vendors committing to those “OPEN” standards.
