Time: 2023-06-29
Shanghai, China, 29 June 2023 - ZTE Corporation (0763.HK / 000063.SZ), a leading global provider of information and communication technology solutions, has demonstrated a new type of intelligent computing infrastructure at the 10th MWC Shanghai. The infrastructure is designed to address the computing power challenges of large-scale model computing.
Generative AI, such as ChatGPT, is playing a crucial role in driving digital transformation and intelligent upgrades across industries. As demand for large-scale, complex deep learning models continues to grow, so does the need for a transformation in computing power, and requirements are becoming more diverse as data volumes explode and data resources become ubiquitous. This surge in demand has given strong impetus to the server and storage market, placing greater emphasis on the performance, reliability, and innovation of servers and storage solutions.
In terms of servers, ZTE offers a full series of server solutions that support GPUs and liquid cooling, enabling the creation of computing resources for large-scale model processing with exceptionally low power consumption and reducing data center Power Usage Effectiveness (PUE) to below 1.13. ZTE has introduced the R6500G5 GPU server, which can accommodate up to 20 GPUs, and plans to unveil a higher-performance R6900G5 GPU training server by the end of this year, further expanding its server portfolio.
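For context, PUE is the ratio of a data center's total facility energy to the energy consumed by its IT equipment, so values closer to 1.0 mean less overhead for cooling and power conversion. The short sketch below only illustrates that calculation; the sample figures are hypothetical and are not ZTE measurements.

```python
# Illustration of the Power Usage Effectiveness (PUE) metric.
# PUE = total facility energy / IT equipment energy.
# The sample figures below are hypothetical, not ZTE measurements.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a given accounting period."""
    return total_facility_kwh / it_equipment_kwh

if __name__ == "__main__":
    # e.g. a liquid-cooled hall drawing 1,130 kWh in total to run 1,000 kWh of IT load
    print(f"PUE = {pue(1130, 1000):.2f}")  # -> PUE = 1.13
```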
In the field of storage, ZTE offers high-bandwidth, multi-converged storage solutions to meet the diverse data storage needs of AI training. Its portfolio combines distributed disk arrays and high-end all-flash arrays to satisfy both large-capacity and high-performance requirements, and supports file, object, and block storage. In addition, the ZTE NEO intelligent cloud card offloads the high-performance NVMe storage transmission protocol, enabling storage performance of up to 3 million IOPS.
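As a rough way to read the 3 million IOPS figure, the sketch below converts it into sustained bandwidth. The 4 KiB block size is an assumed, typical small-I/O size for illustration only; it is not a figure from ZTE's announcement.

```python
# Back-of-the-envelope view of what 3 million IOPS means in bandwidth terms.
# The 4 KiB block size is an assumed, typical small-I/O size, not a ZTE figure.

IOPS = 3_000_000
BLOCK_BYTES = 4 * 1024  # assumed 4 KiB random I/O

throughput_gib_s = IOPS * BLOCK_BYTES / (1024 ** 3)
print(f"{IOPS:,} IOPS at 4 KiB is roughly {throughput_gib_s:.1f} GiB/s sustained")  # ~11.4 GiB/s
```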
In the realm of networking, ZTE employs a high-speed lossless network architecture to ensure optimal utilization of AI computing power. Using high-performance RDMA (Remote Direct Memory Access) networks and lossless switches, ZTE builds super-large-scale computing power clusters around DPUs (Data Processing Units). The NEO intelligent cloud card further enhances each server's capabilities, enabling single-node forwarding performance of up to 800 Gbps with microsecond-level latency. This eliminates bottlenecks between nodes and maximizes the computing power of GPU clusters.
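To put the stated 800 Gbps single-node forwarding figure in perspective, the sketch below converts it to bytes per second and estimates the time to move an assumed amount of training data between nodes. The payload size and exchange pattern are illustrative assumptions, not details from ZTE's announcement.

```python
# Rough sense of the stated single-node forwarding performance of 800 Gbps.
# The payload size below is an illustrative assumption, not a ZTE figure.

LINK_GBPS = 800                   # stated single-node forwarding performance
link_gb_per_s = LINK_GBPS / 8     # 800 Gb/s -> 100 GB/s

payload_gb = 50                   # assumed per-step data exchange for a large model
transfer_ms = payload_gb / link_gb_per_s * 1000
print(f"{LINK_GBPS} Gbps is about {link_gb_per_s:.0f} GB/s; "
      f"moving {payload_gb} GB between nodes takes roughly {transfer_ms:.0f} ms")
```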
As an integrated provider of intelligent computing infrastructure solutions, ZTE is committed to collaborating closely with operators and industry partners to drive continuous upgrades in computing power, foster the development of intelligent infrastructure, and embrace the new era of intelligent computing, injecting fresh impetus into the growth of the digital economy.