Effects of server virtualization
Server virtualization enables each server in the data center to host tens or even hundreds of virtual machines simultaneously by exploiting multi-core processors. As a result, packet-processing functions such as packet classification, routing decisions, and encryption/decryption are multiplied. Since discrete network systems may not scale cost-effectively to meet these increased processing demands, changes in the network are required.
Network functions implemented in software, including those in the hypervisor, are not very efficient because x86 servers are not optimized for packet processing. The control plane therefore needs to be adapted in some way, for example by adding a communications processor to offload network control tasks; both the control plane and the data plane benefit greatly from function-specific hardware acceleration.
The table below compares packet processing across 1000 virtualized servers. By mapping four virtual machines to each processor core and assuming 1% management traffic versus 25% east-west traffic, the network management workload of the virtualized data center in this example increases by a factor of 32.
Virtual machine migration
Supporting virtual machine migration between servers, whether within a single cluster or across multiple clusters, creates additional management and packet-processing complexity. IT administrators may decide to migrate a virtual machine from one server to another for a variety of reasons, including resource availability, quality of experience, hardware/software maintenance, or network failure. The hypervisor instantiates the virtual machine on the target server while the original keeps running, then switches execution over to the new destination and finally tears down the original virtual machine.
In large-scale virtualized environments especially, the hypervisor may be unable to generate the gratuitous Address Resolution Protocol (ARP) broadcasts that announce virtual machine migrations in a timely manner. The network can become congested when the ARP messages triggered by a virtual machine migration are not handled promptly. Given the dramatic impact that rapid changes in connectivity have on network behavior, ARP caches, and routing tables, existing control-plane solutions need to be upgraded to more scalable architectures.
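As an illustration of what such a migration announcement contains, the sketch below builds a gratuitous ARP frame by hand in Python — the kind of broadcast a hypervisor (or an offload engine acting on its behalf) would emit so that switches and peers relearn the VM's new location. The function name and addresses are hypothetical.

```python
import struct

def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """Build a gratuitous ARP request announcing that `ip` is now at `mac`.

    Broadcast after a VM migration so switches and peers update their
    forwarding tables and ARP caches.
    """
    assert len(mac) == 6 and len(ip) == 4
    broadcast = b"\xff" * 6
    # Ethernet header: dst (broadcast), src, EtherType 0x0806 (ARP)
    eth = broadcast + mac + struct.pack("!H", 0x0806)
    # ARP header: htype=1 (Ethernet), ptype=0x0800 (IPv4), hlen=6, plen=4,
    # opcode=1 (request)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
    arp += mac + ip            # sender hardware / protocol address
    arp += b"\x00" * 6 + ip    # target hardware (unused) / protocol address
    return eth + arp           # 14-byte Ethernet header + 28-byte ARP payload
```

In a gratuitous ARP the sender and target protocol addresses are both the VM's own IP, which is what forces other hosts to refresh their cached mapping.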
Multi-tenancy and security
Given the high cost of building and operating data centers, many IT organizations have adopted multi-tenant models in which different departments, or even different companies, share a common pool of virtualized infrastructure resources (the cloud). In a multi-tenant environment, data protection and security are critical: each tenant's resources must be logically isolated, so that no customer's data is exposed to another even though the physical resources are shared.
The control plane must therefore provide secure access to data center resources and be able to dynamically adjust the security posture as virtual machines migrate. It may also need to enforce customer-specific policies and quality-of-service (QoS) levels.
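A minimal sketch of tenant-based logical isolation, assuming a hypothetical inventory in which each VM record carries its tenant and QoS class; because the policy is derived from the VM record rather than the host, the same rules can be recomputed on whatever host the VM migrates to. All names are illustrative.

```python
from dataclasses import dataclass

# Hypothetical data model: tenant membership and QoS class are attributes
# of the VM itself, so the security posture follows the VM when it migrates.

@dataclass
class VmRecord:
    tenant: str
    qos_class: str  # e.g. "gold" or "silver"

inventory = {
    "vm-a": VmRecord("acme", "gold"),
    "vm-b": VmRecord("acme", "silver"),
    "vm-c": VmRecord("globex", "gold"),
}

def flow_allowed(src_vm: str, dst_vm: str) -> bool:
    """Logical isolation: forward traffic only between VMs of the same tenant."""
    return inventory[src_vm].tenant == inventory[dst_vm].tenant

def rules_for_host(host_vms: list[str]) -> list[tuple[str, str]]:
    """Recompute the permitted intra-host flows for a host, e.g. after a
    migration lands a new VM on it."""
    return [(a, b) for a in host_vms for b in host_vms
            if a != b and flow_allowed(a, b)]
```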
Service level agreement and resource measurement
The network-as-a-service model requires effective resource guarantees to maintain SLAs. Resource metering through the collection of network statistics is a good way to calculate return on investment, assess infrastructure expansion and upgrades, and monitor service-level agreements.
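The metering idea can be sketched as a per-tenant byte counter checked against a committed rate; the class, method names, and thresholds below are hypothetical, not from any particular product.

```python
from collections import defaultdict

class Meter:
    """Accumulate per-tenant byte counters collected from the network and
    check average rates against a committed-rate SLA (illustrative sketch)."""

    def __init__(self, committed_bytes_per_s: float):
        self.committed = committed_bytes_per_s
        self.byte_counts = defaultdict(int)

    def record(self, tenant: str, nbytes: int) -> None:
        """Fold one statistics sample (e.g. from a switch counter) into the tally."""
        self.byte_counts[tenant] += nbytes

    def sla_report(self, interval_s: float) -> dict:
        """Average rate per tenant over the interval, and whether each tenant
        stayed within its committed rate."""
        return {
            tenant: {"rate": count / interval_s,
                     "within_sla": count / interval_s <= self.committed}
            for tenant, count in self.byte_counts.items()
        }
```

In practice the raw samples would come from counters exported by switches and hypervisors (e.g. via sFlow or IPFIX); the sketch only shows the aggregation and threshold check.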
Today, network monitoring tasks span hypervisors, legacy management tools, and a number of newer infrastructure monitoring tools. Collecting and integrating this management information further increases the complexity of the control planes run by data center operators and multi-tenant enterprises.
The control plane can be extended in two directions: out or up. In scale-out, control-plane functionality is partitioned and distributed across physical or virtual servers. In scale-up, additional computing resources, such as extra x86 processor cores, increase the processing power of a single server. An architecture that scales out and up at the same time allows function-specific hardware acceleration to improve performance even further.
Control-plane scale-out architecture
In a scale-out architecture, the base platform is an enhanced general-purpose processor paired with a separate communications processor and dedicated hardware accelerators that offload control-plane functionality. Control-plane tasks are broken down into subtasks, such as discovery, propagation, and recovery, which are then distributed across the data center. The scale-out architecture is a natural fit for software-defined networking (SDN) because the various tasks can run on any server in the network or in the cloud. Because of this decentralized arrangement, the architecture requires reliable communication between the control plane and the data plane, using protocols such as OpenFlow.
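One way such subtasks might be assigned across distributed controller instances is sketched below: a stable hash maps each (switch, subtask) pair to an owner, so any instance can recompute the assignment independently without coordination. The controller names and subtask list are illustrative assumptions, not part of any SDN standard.

```python
import hashlib

# Hypothetical deployment: three controller instances share the
# control-plane subtasks for every switch in the data center.
CONTROLLERS = ["ctl-0", "ctl-1", "ctl-2"]
SUBTASKS = ["discovery", "propagation", "recovery"]

def owner(switch_id: str, subtask: str) -> str:
    """Deterministically assign one controller instance to a subtask.

    Hashing keeps the mapping stable across restarts and lets every
    instance agree on ownership without a central coordinator.
    """
    digest = hashlib.sha256(f"{switch_id}/{subtask}".encode()).digest()
    return CONTROLLERS[int.from_bytes(digest[:4], "big") % len(CONTROLLERS)]
```

A production scale-out controller would also handle instance failure (re-hashing the failed instance's share) and would push the resulting state to switches over a protocol such as OpenFlow; the sketch covers only the assignment step.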
Depending on the size and configuration of the network, hardware acceleration of these network functions may be necessary to meet performance requirements. Protocol-aware communications processors are designed to handle specific control-plane tasks and network management functions, including packet parsing and routing, security, ARP offload, OAM offload, IGMP messages, network statistics, application-aware firewalls, and quality of service (QoS).
Control-plane scale-up architecture
In a scale-up architecture, the existing network control plane is supplemented with additional or more powerful computing engines that help run the network control stack. These complementary resources free up server CPU cycles for other tasks and improve overall network performance. Because general-purpose processors are not optimized for packet processing, they alone are not an ideal basis for scaling up. As in the scale-out architecture, function-specific, protocol-aware communications processors significantly improve performance.