TECS Cloud OS

TECS Cloud OS provides virtualized management of computing, storage, and network resources, enabling users to build a cloud environment rapidly.
TECS Cloud OS is based on OpenStack and integrated with the NFV (Network Functions Virtualization) architecture, managing virtualized infrastructure through a unified interface to reduce operating costs. ZTE has made numerous enhancements to OpenStack, so that the performance, reliability, and security of TECS Cloud OS meet carrier-grade requirements for network infrastructure cloudification.
TECS Cloud OS uses TECS Compute as its hypervisor. TECS Compute adopts KVM technology with related enhancements, running on the ZTE CGEL carrier-grade OS, to provide complete CPU, memory, and I/O device virtualization. TECS Compute also supports containers as a lightweight virtualization technology.
TECS Cloud OS uses ZTE DVS as its distributed network switching engine to provide a unified, high-performance virtualized switching system. ZTE DVS is based on Open vSwitch combined with the Intel DPDK framework, providing a pure software-defined virtual switching solution. In addition to meeting the needs of traditional network virtualization, ZTE DVS supports SDN (Software-Defined Networking) and complies with the OpenFlow protocol, providing a complete VxLAN-based overlay virtualization solution in coordination with an SDN controller.
Latest Functions
To learn about the latest functions, please send an email to SDNFV@zte.com.cn to apply to become a VIP user.
Major Functions
  • VM performance enhancements

    • Support binding VM vCPUs to physical host CPUs, automatic allocation of VM vCPUs and memory from the same NUMA node, and huge page memory, to enhance VM running performance (a flavor configuration sketch follows this list).
    • Support SR-IOV technology and DVS (Distributed vSwitch), and also support the two coexisting on the same physical network card, to enhance VM network performance.
    • Support storage passthrough, by which VMs can directly mount raw disks to improve I/O performance.
  • VM scheduling strategy


    Support multiple VM deployment strategies: affinity groups (VMs in a group are deployed on the same physical node), anti-affinity groups (VMs in a group are deployed on different physical nodes), soft affinity groups (affinity deployment is preferred, but if resources are insufficient the VMs can be deployed freely), soft anti-affinity groups (anti-affinity deployment is preferred, but if resources are insufficient the VMs can be deployed freely), VMs with exclusive CPUs, VMs with an exclusive NUMA node, VMs assigned to a specific physical node, and VMs assigned to a specific NUMA node, meeting different customer needs (a server-group sketch follows this list).

  • VM dynamic adjustment

    Support online and offline adjustment of VM resources (CPU count, memory size, and disk size), hot plugging of disks, network cards, and USB devices, and online/offline migration of VMs and storage, to meet the real-time resource demands of service VMs.

  • VM QoS function

    Flexible QoS control of VM CPU, I/O, network bandwidth, and other resources keeps VM computing capacity controllable, guaranteeing or limiting a VM's use of computing resources within a specified range, so that VMs with different service needs each obtain the most appropriate computing performance, achieving optimal reuse of resources and reducing costs (a QoS flavor sketch follows this list).

  • Automated installation and deployment

    The automated deployment tool Daisy provides wizard-style deployment of TECS Cloud OS and rapid deployment of clusters; it supports plug-and-play hosts and completes the installation of TECS Cloud OS automatically.

  • VM high availability

    VMs support a watchdog monitoring mechanism that protects the VM from the inside, resetting the VM to restore it when the VM becomes abnormal. Through the VM's local self-healing and remote rebirth functions, a VM can recover automatically when a fault occurs or a physical node fails, reducing manual maintenance costs, shortening service interruptions, and effectively improving VM availability.

    TECS Cloud OS supports multi-path storage to guarantee storage reliability, supports detection and restoration of cloud disks that become read-only due to storage link interruption, and supports a disaster recovery server that can rapidly restore VMs when hosts lose power.

  • Multiple backup recovery mechanisms

    Provide snapshot-based and file-system-based backup and recovery of VMs, as well as local and remote backup and recovery based on disk arrays, to meet the needs of various scenarios (a snapshot sketch follows this list).

  • Version upgrade and rollback

    Support online and offline version upgrade and rollback without affecting running VMs.

  • All-around security

    Support OS minimization and security hardening. Support image signing and storage encryption to ensure the security of images and cloud disks. To guarantee the security of VM data, support clearing memory and cloud-disk data when VMs and cloud disks are deleted. In addition, support unified account management, authentication management, and network access management to provide a unified and secure access portal.

  • Disaster recovery management system

    Support a distributed disaster recovery management system with 1+1 or N+1 backup configurations, offering infrastructure-layer disaster recovery to improve system reliability.

  • Multiple indicator display

    Offer more than 500 indicators covering the system, hosts, and VMs, and support user-defined report templates for report customization.

  • Support 40G network card

    TECS Cloud OS supports 40G network cards, including SR-IOV on 40G network cards, to meet the requirements of different VM scenarios. In addition, TECS Cloud OS supports the virtual switch and SR-IOV sharing the same 40G network card, that is, a single physical network card can serve the virtual switch and SR-IOV at the same time, and a VM can use a mix of the two types of network interfaces to meet different customer demands.

  • DPM (Dynamic Power Management) / DRS (Dynamic Resource Scheduling)

    TECS Cloud OS provides an intelligent DPM function that, combined with live migration, dynamically powers nodes up or down to reduce data center energy consumption and achieve energy saving and emission reduction.
    TECS Cloud OS provides a DRS function that dynamically adjusts the load on each physical node so that the entire cluster is load balanced, ensuring that each VM obtains appropriate resources in a timely manner.

  • Unified deployment and management of VM / bare metal

    TECS Cloud OS provides unified deployment and management of VMs and bare metal servers, treating a bare metal server as a special type of VM.

  • VMware and OpenStack heterogeneous cloud management

    TECS Cloud OS supports heterogeneous cloud management of VMware and OpenStack, bringing VMware resource pools under OpenStack management and offering customers OpenStack functions on top of them.
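
The placement and performance features above (CPU pinning, NUMA alignment, huge pages) are typically requested per flavor on an OpenStack-based platform. The following is a minimal sketch using the upstream openstacksdk and standard Nova flavor extra specs; the cloud name "tecs", the flavor name, and the sizes are hypothetical, and the exact keys exposed by a given TECS Cloud OS release may differ.

    # Minimal sketch: a flavor whose VMs get pinned vCPUs, huge pages, and a
    # single NUMA node, expressed with standard upstream Nova extra specs.
    # Assumes a clouds.yaml entry named "tecs" and a recent openstacksdk.
    import openstack

    conn = openstack.connect(cloud="tecs")  # hypothetical cloud name

    # Create the flavor itself (4 vCPUs, 8 GiB RAM, 40 GiB disk).
    flavor = conn.compute.create_flavor(
        name="nfv.pinned.4c8g", ram=8192, vcpus=4, disk=40
    )

    # Upstream extra specs for CPU pinning, huge pages, and NUMA placement.
    conn.compute.create_flavor_extra_specs(flavor, {
        "hw:cpu_policy": "dedicated",  # pin each vCPU to a dedicated host core
        "hw:mem_page_size": "large",   # back guest memory with huge pages
        "hw:numa_nodes": "1",          # keep vCPUs and memory on one NUMA node
    })

VMs booted from such a flavor receive dedicated host cores and NUMA-local memory, which is the effect described in the VM performance enhancements item.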
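
The affinity, anti-affinity, and soft variants described in the VM scheduling strategy item correspond to OpenStack server group policies. A minimal sketch with openstacksdk follows; the group names are hypothetical, and the soft policies require a compute API microversion of at least 2.15.

    # Minimal sketch: server groups expressing strict anti-affinity and affinity.
    # "soft-affinity" / "soft-anti-affinity" are also accepted policy names when
    # the compute API microversion is >= 2.15 (e.g. via compute_api_version in
    # clouds.yaml).
    import openstack

    conn = openstack.connect(cloud="tecs")  # hypothetical cloud name

    # Members of this group must be placed on different physical nodes.
    ha_group = conn.compute.create_server_group(
        name="vnf-ha", policies=["anti-affinity"]
    )

    # Members of this group must be placed on the same physical node.
    colocated_group = conn.compute.create_server_group(
        name="vnf-colocated", policies=["affinity"]
    )

    # New VMs are then booted with the chosen group passed as a scheduler hint,
    # e.g. scheduler_hints={"group": ha_group.id}.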
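
For the VM QoS item, CPU, disk I/O, and network bandwidth limits can be expressed as quota-style flavor extra specs on an upstream OpenStack/libvirt stack. The sketch below reuses the hypothetical flavor from the earlier sketch with illustrative values; the knobs exposed by the TECS Cloud OS interface may differ.

    # Minimal sketch: attach CPU weight, disk I/O, and NIC bandwidth limits to a
    # flavor via upstream quota extra specs (libvirt driver). Values are illustrative.
    import openstack

    conn = openstack.connect(cloud="tecs")  # hypothetical cloud name
    flavor = conn.compute.find_flavor("nfv.pinned.4c8g")  # hypothetical flavor

    conn.compute.create_flavor_extra_specs(flavor, {
        "quota:cpu_shares": "2048",                # relative CPU weight under contention
        "quota:disk_read_bytes_sec": "209715200",  # cap disk reads at ~200 MB/s
        "quota:disk_write_bytes_sec": "209715200", # cap disk writes at ~200 MB/s
        "quota:vif_inbound_average": "131072",     # average inbound NIC rate, in kB/s
        "quota:vif_outbound_average": "131072",    # average outbound NIC rate, in kB/s
    })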
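
As one concrete example of the snapshot-based backup path, an OpenStack-style VM snapshot can be taken through the compute API. The sketch below uses openstacksdk with a hypothetical server name; the file-system and disk-array based backup paths are separate mechanisms not shown here.

    # Minimal sketch: snapshot a running VM into an image that can later be used
    # to rebuild or re-create the VM. Server and image names are hypothetical.
    import openstack

    conn = openstack.connect(cloud="tecs")  # hypothetical cloud name
    server = conn.compute.find_server("vnf-web-01")  # hypothetical VM name

    # Create the snapshot image and wait until it becomes usable.
    image = conn.compute.create_server_image(
        server, name="vnf-web-01-backup", wait=True
    )
    print("backup image id:", image.id)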

Performance Index

  • Management Capacity Indexes

    • Maximum number of physical hosts supported by a single VIM: 1,024 (with 9 control clusters, up to 2,048 hosts are supported). Number of physical servers or blades that can be managed by a single VIM.
    • Maximum number of VMs supported by a single VIM: 10,000 (powered on). Number of VMs that can be managed by a single VIM.
    • Number of virtual switches that can be built internally by a single VIM: 1,024. Number of virtual switches (vSwitches) that can be managed by a single VIM.

  • Physical Host Specifications

    • Maximum number of logical CPUs per physical host: 480. Logical CPUs are the physical host's CPU cores, or the number of hardware threads if hyper-threading is enabled.
    • Maximum physical memory capacity per physical host: 12 TB. Maximum memory capacity of the servers or blades.
    • Maximum number of VMs per physical host: 2,048. Maximum number of VMs that can be carried by each physical server or blade.
    • Maximum volume size supported per physical host: 2,048. Maximum logical volume size that can be mounted by a single physical server or blade.
    • Number of NUMA nodes per physical host: 16.
    • Number of VMs that can be migrated concurrently per physical host: 16. Number of VMs that can be migrated at the same time on one physical host.

  • VM Specifications

    • Number of vCPUs supported by a single VM: 240. Number of CPUs seen by the VM operating system.
    • Number of NICs supported by a single VM: 24.
    • Number of mounted disks supported by a single VM: 60.
    • Maximum memory supported by a single VM: 4 TB.
    • Maximum disk volume supported by a single VM: 64 TB. Maximum space of a disk volume that can be mounted by a VM.
    • Maximum number of snapshots of a single VM: 256.

  • Deployment Indexes

    • Overall VIM deployment time: <= 50 minutes. Time to deploy the management nodes and compute nodes, with four blades in total.
    • Compute node deployment time: <= 40 seconds. Time for compute node capacity expansion, installing 8 blades at the same time.

  • Reliability Indexes

    • Compute node reliability: <= 90 seconds. Time from the blade server going offline to off-site VM power-on for regeneration when a compute node breaks down.
    • VM self-healing: <= 10 seconds. Time from the VM hanging to the VM rebooting successfully.
    • Active/standby switchover of control nodes: <= 60 seconds. Time from triggering the active/standby switchover of control nodes with the HA command "pcs cluster move" to the system providing services again.
    • Switchover on power-off of the active node: <= 120 seconds. Time from triggering the active/standby switchover by resetting the active node to the system providing services again.
    • System power-on time: <= 5 minutes. Time from powering on the system (ZXCLOUD E9000) after a power-off to the system providing services again.