VanillaPlus hosted a second NFV roundtable discussion. It was an opportunity for colleagues from Ericsson, Citrix, Comptel, and Redknee, along with Analysys Mason, to discuss how some of the deployment risks can be mitigated. A key question posed was: “Why are CPE services a target for virtualization?”
Managed enterprise services are a lucrative revenue stream for many service providers, particularly those looking to offset declining or flat revenues from consumer mobile and wireline services. Managing customer premises equipment is the single largest expense for service providers. NFV provides the opportunity to eliminate the management complexity and expense of customer-located equipment and to reduce technology obsolescence at the customer premises. NFV also enables pay-as-you-grow CAPEX, requiring only a small initial investment in virtual machines to support router functions that scale with the service. Because enterprise services are in many cases customized (by region, location, vertical, and even by customer), NFV enables workflow automation and self-service provisioning through user-accessible portals. Reducing provisioning times and improving time to service is a key driver.
As we discussed on the panel, there are obviously some major challenges. Enterprises are increasingly distributed; remote branch offices are served by a variety of access networks, each with varying degrees of performance, throughput, latency, and so on. It has been estimated that 80% of application performance issues are blamed on the network, and latency is the biggest source of application performance problems. For example, Unified Communications applications generate small packets and roughly 10x more packets than other applications. Any packet loss, jitter, or degraded performance will have a significant and measurable impact on user experience. Additionally, services are not always ubiquitous, but rather tailored (i.e. vertical-specific, location-specific, etc.). This means there will be more configurations and parameters that need to be set accurately in order to meet SLAs and deliver the appropriate service. All of this is possible and achievable with NFV, but one must not overlook the potential for increased operational complexity when it comes to properly configuring, monitoring, and troubleshooting these service settings. NFV creates the potential for more configuration-induced performance issues: there are more (and different) parameters that can be tuned, and more interdependencies in this shared, multi-tenant architecture.
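A back-of-the-envelope calculation shows where the “~10x more packets” intuition comes from. The figures below are standard G.711 VoIP defaults (64 kbit/s codec, 20 ms packetization), not numbers from the roundtable itself; they are used purely to illustrate why voice traffic generates an order of magnitude more packets than a bulk data flow at the same bitrate:

```python
# Illustrative only: standard G.711 VoIP defaults vs. a bulk TCP flow.

def packets_per_second(payload_bytes: int, bitrate_bps: int) -> float:
    """Packets/sec needed to carry `bitrate_bps` in `payload_bytes` chunks."""
    return bitrate_bps / (payload_bytes * 8)

# G.711 voice: 64 kbit/s codec, 20 ms packetization -> 160-byte payloads
voice_pps = packets_per_second(160, 64_000)    # 50 packets/sec

# A bulk data flow at the same bitrate using ~1460-byte TCP payloads
bulk_pps = packets_per_second(1460, 64_000)    # ~5.5 packets/sec

print(f"voice: {voice_pps:.0f} pps, bulk: {bulk_pps:.1f} pps, "
      f"ratio ~{voice_pps / bulk_pps:.0f}x")
```

Each of those 50 packets per second is a separate opportunity for loss or jitter, which is why even small impairments are audible to the user.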
Latency-sensitive applications are partly why we’re seeing the industry introduce distributed or edge NFV and other techniques whereby certain capabilities are hosted on a virtualized platform at the network edge or customer premises, while others remain centralized. After all, it’s not as if every enterprise location (particularly a remote branch office) is connected via a 10Gbps fiber-fed link. In fact, most enterprise locations are connected by a range of access technologies, each with varying performance characteristics.
This scenario also creates new, creative business models for service providers, including a new suite of ‘micro-cloud’ services. Distributed or edge NFV brings new and different complexities, most importantly new security considerations, including an expanded attack surface. New business models will likely have service providers maintaining the platform (i.e. the NFVI) while hosting third-party virtual network functions (VNFs) that belong to the enterprise or to another third party. Multi-tenancy adds the challenge of managing access control policies from a security perspective, and it increases troubleshooting and issue-isolation complexity. This is why a new, carrier-scale Identity and Access Management strategy is needed.
Automated network data integrity auditing and analytics will be equally crucial in order to understand the exact configuration of each function in the service chain, isolate and correct misconfigurations instantly, and meet service level agreements and customer experience expectations. Automated analysis of service configuration anomalies can dramatically improve incident resolution and visibility into service performance. Data-driven analytics helps prioritize remediation, eliminate hours of troubleshooting time, and proactively refine network configurations (such as QoS and TCP/IP settings, which have a direct impact on application performance).
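The auditing idea above can be sketched in a few lines: compare each virtual function’s settings against a “golden” baseline and report only the parameters that have drifted. This is a minimal illustration of the concept; the parameter names (`qos_class`, `tcp_mss`, `dscp`) and instance names are hypothetical, not from any specific product:

```python
# Minimal sketch of automated configuration auditing against a golden
# baseline. Parameter names and values are illustrative assumptions.

GOLDEN = {"qos_class": "EF", "tcp_mss": 1360, "dscp": 46}

def audit(instance_configs: dict) -> dict:
    """Return, per instance, only the parameters that deviate from GOLDEN."""
    anomalies = {}
    for name, cfg in instance_configs.items():
        drift = {k: cfg.get(k) for k, v in GOLDEN.items() if cfg.get(k) != v}
        if drift:
            anomalies[name] = drift
    return anomalies

fleet = {
    "vCPE-branch-01": {"qos_class": "EF", "tcp_mss": 1360, "dscp": 46},
    "vCPE-branch-02": {"qos_class": "BE", "tcp_mss": 1500, "dscp": 46},
}
print(audit(fleet))  # only branch-02's drifted parameters are reported
```

In a real deployment the baseline itself would vary by region, vertical, and customer (as noted earlier), but the principle is the same: surface the delta, not the whole configuration, so remediation can be prioritized.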
It remains clear that, despite the rapid evolution towards NFV, we’re only scratching the surface of the many operational considerations that are arising. If you missed the live event, a recording of the session is available.