Application Infrastructure

Huawei is launching a full 5G solution series for "1+N" target networks

At the 2020 Global Mobile Broadband Forum (MBBF), Mr. Yang Chaobin, President of Huawei Wireless Network Solution, unveiled the future-oriented "1+N" 5G target networks and released a full series of 5G solutions for building simplified "1+N" 5G networks. "To embrace the approaching golden decade of 5G, we need to evolve our networks towards 5G across the full spectrum and build one high-bandwidth simplified target network that guarantees ubiquitous connectivity with on-demand overlay of 'N' capabilities," said Yang.

5G is not just another generation of mobile technology; it also represents the creation of new businesses, ecosystems, and opportunities. Compared with 4G, 5G involves more diverse services and differentiated requirements. Connecting people requires a ubiquitous high-bandwidth network that delivers a better experience at greatly reduced per-bit costs. Connecting things requires ubiquitous coverage to support the massive connectivity of IoT terminals. Industry connections, among the first applications of 5G, require capabilities such as flexible high uplink, low latency, and high-precision positioning to be deployed on demand. To meet these differentiated requirements, Huawei proposes "1+N" 5G target networks: evolving the full spectrum toward 5G and building a ubiquitous high-capacity foundation network, with high-bandwidth mid-bands at its core and other frequency bands delivering differentiated advantages and on-demand overlay of 'N' capabilities.

Build One Foundation Network for Ubiquitous Connectivity

Every journey begins with a single step. User experience is the foundation on which advanced 5G services are developed. The cost per bit of 5G networks must be reduced to offer a consistent cross-generation user experience. The combination of high-bandwidth mid-bands and Massive MIMO (M-MIMO) is the key to building a high-bandwidth network that achieves ubiquitous connectivity.

Scaled deployment worldwide has shown that TDD high-bandwidth M-MIMO is highly recognized across the industry. TDD mid-band M-MIMO achieves co-coverage with 1.8 GHz at the same sites and at least a tenfold improvement in user experience compared with 4G. The performance of the same hardware varies greatly with different algorithms; as the foundation of M-MIMO performance, algorithms strongly influence the performance of commercial networks. Huawei's pioneering algorithms enable operators to significantly improve user experience and the cell capacity of 5G networks. Huawei's adaptive high resolution (AHR) algorithm helps operators further expand network capacity in scenarios with high user density and strong interference. The UL/DL decoupling solution improves the uplink coverage of TDD mid-band: it improves M-MIMO coverage by 6 dB to 7 dB in the uplink and by 2 dB to 3 dB in the downlink. At MBBF 2020, Huawei also released the multi-band UL/DL decoupling solution, which supports the combinations of 3.5 GHz/3.7 GHz and 1.8 GHz/2.1 GHz/700 MHz, 2.6 GHz and 1.8 GHz/700 MHz, and 4.9 GHz and 2.3 GHz.

In 2019, Huawei launched the industry's first Blade AAU, which integrates Sub-3 GHz and 32T32R M-MIMO to address the extremely limited antenna space in certain markets, enabling simplified deployment of single-antenna 5G M-MIMO. This year, Huawei released Blade AAU Pro, which supports 64T64R M-MIMO and the full Sub-3 GHz band. The solution comes with unique "transparent" antenna technology, which simplifies deployment and thus reduces costs.

For markets in which high-bandwidth TDD spectrum is hard to obtain or the capacity load is heavy, Huawei launched the industry's first FDD M-MIMO, which substantially improves cell capacity to at least three times that of 4T4R networks. The innovative metamaterial dipole design and miniaturized PIM-free filter technology give FDD M-MIMO a width of under 500 mm and engineering specifications comparable to those of TDD M-MIMO.
"Huawei's wide array of mid-band M-MIMO solutions and advanced software algorithms can help operators build a mid-band high-bandwidth foundation network for ubiquitous connectivity that delivers optimal user experience," Yang added.

How "N" Capabilities Simplify Deployment and Offer Key Benefits

While running basic services on the high-bandwidth mid-band network, operators can also develop differentiated advantages using other spectrum, such as bands assigned for FDD or Super Uplink. But this poses challenges such as fragmented spectrum, diversified channels and sectors, and varying spectrum lifecycles. "To innovate 'N' capabilities, we need to simplify deployment while maintaining diversity and order," said Yang.

At MBBF 2020, Huawei released its highly integrated Blade Pro portfolio for FDD applications. This portfolio includes the industry-acclaimed ultra-wideband RRU, which slashes the number of required devices by two thirds by integrating three low or three intermediate FDD bands into one box, greatly simplifying deployment. It also supports dynamic power sharing on multiple bands, making this RRU extremely energy efficient. The Blade Pro flexible channel solution, powered by the industry's first digital software-defined antenna (SDA) and 8T8R RRU, is perfect for various scenarios due to its exceptional flexibility. It supports multi-TX, multi-sector coverage to deliver large capacity and 2 x 4T4R or 4 x 2T2R on one module to cover traffic lines or serve tubular tower sites.

Pole sites can close coverage gaps, offload traffic, and enhance the uplink in cases where macro base stations can provide only basic, incomplete coverage. Huawei also launched a series of simplified solutions to supplement 5G coverage. Easy Macro 3.0 is the industry's first application that carries FDD 4T4R and TDD 8T8R in one box, and it also supports simplified deployment of UL/DL decoupling. Book RRU 3.0 realizes both TDD and FDD 4T4R. For indoor coverage, the LampSite EE suite provides B2B capabilities, such as D-MIMO, Super Uplink, and high-precision positioning, to accommodate differentiated services in varying scenarios.

"1+N" to Autonomous Driving Network

Efficient O&M is a huge challenge in building "1+N" target networks. One challenge is effectively targeting capabilities for B2B while simultaneously maintaining 2G, 3G, 4G, and 5G. Another is balancing user experience against energy consumption, and on-demand slicing against spectral efficiency. Fully optimizing capabilities such as latency, bandwidth, uplink, and downlink is also difficult. AI-assisted autonomous driving of networks will be the optimal way to address the O&M challenges of the 5G era.

Through continuous innovation, Huawei has developed a new series of solutions powered by its MBB Automation Engine (MAE) autonomous driving network. The 5GtoB suite enables intelligent and precise planning, simplified provisioning on demand, and proactive network O&M. It realizes a perfect alignment with industry Service Level Agreement (SLA) requirements, adaptive deployment in complex scenarios, and real-time SLA monitoring and fault prediction, facilitating digital layout of the 5G B2B industry. The 5G WTTx suite provides reliable service provisioning check, precise network capacity warning and capacity expansion guidance as well as network evaluation and line management, greatly improving the O&M efficiency of 5G WTTx services. PowerStar, with the help of AI, implements intelligent carrier management based on traffic, reverse mobility load balancing (MLB) energy saving based on energy efficiency, and hierarchical energy saving based on key performance indicators (KPIs). It greatly reduces network energy consumption through coordinated energy saving between multiple frequency bands and radio access technologies. Test results on commercial live networks reveal that PowerStar brings "1+N" energy efficiency and optimal user experience, reducing energy consumption by 15% and increasing network traffic by 10%.

5G promises a bright future. The mobile industry needs to build more powerful networks and create opportunities in the coming golden decade of 5G. Huawei is ready with "1+N" simplified 5G networks and a complete range of solutions.

About Huawei

Founded in 1987, Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices. We have more than 194,000 employees, and we operate in more than 170 countries and regions, serving more than three billion people around the world.

Our vision and mission is to bring digital to every person, home and organization for a fully connected, intelligent world. To this end, we will drive ubiquitous connectivity and promote equal access to networks; bring cloud and artificial intelligence to all four corners of the earth to provide superior computing power where you need it, when you need it; build digital platforms to help all industries and organizations become more agile, efficient, and dynamic; redefine user experience with AI, making it more personalized for people in all aspects of their life, whether they're at home, in the office, or on the go.

Spotlight


Related News

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise's data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, "the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production." "By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in the operationalizing AI models by at least 25%."

"Alluxio empowers the world's leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward," said Haoyuan Li, Founder and CEO, Alluxio. "Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management."

With this announcement, Alluxio expands from a one-product portfolio to two product offerings, Alluxio Enterprise AI and Alluxio Enterprise Data, catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-gen version of Alluxio Enterprise Edition, and will continue to be the ideal choice for businesses focused primarily on analytic workloads.

Accelerating the End-to-End Machine Learning Pipeline

Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. Alluxio Enterprise AI helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment to yield quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet giants' scale; and maximized return on investments by working with existing tech stacks instead of costly specialized storage. With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure. Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing massive numbers of small files, typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity.
The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics. Finally, it supports analytics and full machine learning pipelines, from ingestion to ETL, pre-processing, training and serving.

Alluxio Enterprise AI includes the following key features:

Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise's existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference.

Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI's distributed caching feature enables AI engines to read and write data through the high performance Alluxio cache instead of slow data lake storage. Alluxio's intelligent caching strategies are tailored to the I/O patterns of AI engines: large file sequential access, large file random access, and massive small file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.

Seamless Data Access for AI Workloads Across On-prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to manage AI workloads across diverse infrastructure environments easily. Providing a source of truth of data for the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.
New Distributed System Architecture, Battle-tested At Scale - Alluxio Enterprise AI builds on a new innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation to provide infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio's proven expertise in distributed systems, this new architecture has addressed the ever-increasing challenges of system scalability, metadata management, high availability, and performance.

"Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses," said Mike Leone, Analyst, Enterprise Strategy Group. "Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving."

"We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure," said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. "Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry."

"Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement," said Mengyu Hu, Software Engineer in the data platform team, Zhihu. "In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we've significantly enhanced model training performance by 3x and deployment by 10x with GPU utilization doubled. We are excited about Alluxio's Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave."

Deploying Alluxio in Machine Learning Pipelines

According to Gartner, data accessibility and data volume/complexity is one of the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio can work across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets are used in multiple epochs, so the overall training speed is no longer bottlenecked by retrieving from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.

Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to inference clusters with low latency. As a result, downstream AI applications can start inferencing using the most up-to-date models as soon as they are available.

Platform Integration with Existing Systems

To integrate Alluxio with an existing platform, users can deploy an Alluxio cluster between compute engines and storage systems.
On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray. Enterprises can integrate Alluxio with these compute frameworks via the REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems or object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio works both on-premises and in the cloud, in either bare-metal or containerized environments. Supported cloud platforms include AWS, GCP and Azure Cloud.
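The training-side data path described above amounts to a read-through cache sitting between the training job and S3: reads hit the Alluxio cache first and fall back to the data lake only on a miss, so repeated epochs avoid repeated S3 fetches. A minimal Python model of that behavior (the class and names below are illustrative, not Alluxio's API; real training code simply reads a FUSE path such as /mnt/alluxio_fuse/training_datasets and Alluxio performs this logic internally):

```python
# Minimal model of Alluxio's read-through caching between a training job
# and a data lake. All names are illustrative assumptions; in a real
# deployment the PyTorch data loader reads a FUSE-mounted path and the
# Alluxio cluster handles caching transparently.

class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # stands in for the S3 data lake
        self.cache = {}                # stands in for Alluxio's distributed cache
        self.backing_reads = 0         # count of slow data-lake fetches

    def read(self, key):
        if key not in self.cache:      # cache miss: fetch once from backing store
            self.cache[key] = self.backing[key]
            self.backing_reads += 1
        return self.cache[key]         # cache hit: fast local read

# Simulate multi-epoch training over the same dataset.
s3 = {f"sample-{i}": bytes([i]) for i in range(4)}
store = ReadThroughCache(s3)
for epoch in range(3):
    batch = [store.read(k) for k in sorted(s3)]

# Each sample hit the backing store only once, even across three epochs.
print(store.backing_reads)  # → 4
```

This is why cached datasets reused across epochs stop being bottlenecked by S3: only the first epoch pays the data-lake round trips.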

Read More

Data Storage

Astera Labs First to Break Through the Memory Wall with Industry’s Highest Performance CXL Memory Controllers

Business Wire | September 21, 2023

Astera Labs, the global leader in semiconductor-based connectivity solutions for AI infrastructure, today announced that its Leo Memory Connectivity Platform enables data center servers with unprecedented performance for memory intensive workloads. Leo is the industry's first Compute Express Link™ (CXL™) memory controller that increases total server memory bandwidth by 50% while also decreasing latency by 25% when integrated with the forthcoming 5th Gen Intel® Xeon® Scalable Processor. Through new hardware-based interleaving of CXL-attached and CPU native memory, Astera Labs and Intel eliminate any application-level software changes to augment server memory resources via CXL. Existing applications can effortlessly "plug-and-play" to take advantage of the highest possible memory bandwidth and capacity in the system. "The growth of computing cores and performance has historically outpaced memory throughput advancements, resulting in degraded server performance efficiency over time," said Sanjay Gajendra, COO of Astera Labs. "This performance scaling challenge has led to the infamous 'memory wall,' and thanks to our collaboration with Intel, our Leo Memory Connectivity Platform breaks through this barrier by delivering on the promise of PCIe 5.0 and CXL memory." Data center infrastructure scaling limitations due to the memory wall are nowhere more evident than in AI servers, where memory bandwidth and capacity bottlenecks result in inefficient processor utilization. The CXL innovations delivered by Astera Labs and Intel directly address these bottlenecks and lay the foundation for cloud, hybrid-cloud and enterprise data centers to maximize accelerated computing performance.

Extending Leadership of PCIe® 5.0 and CXL 2.0 Solutions

Astera Labs has a history of delivering industry-first solutions that are critical to advancing the PCIe and CXL ecosystems.
In addition to memory performance advancements with Leo, Astera Labs is also driving interoperability leadership with its Aries PCIe 5.0 / CXL 2.0 Smart Retimers on state-of-the-art Intel server platforms. As the most widely deployed and proven PCIe/CXL retimer family in the industry, Aries features a low-latency CXL mode that complements Leo to form the most robust CXL memory connectivity solution. "We applaud Astera Labs for their contributions to the CXL ecosystem and are delighted to extend our ongoing collaboration. We believe Memory Connectivity Platforms containing innovations from companies like Astera Labs will help deliver enhanced performance on next generation Intel Xeon processors, and accelerate a myriad of memory intensive workloads," said Zane Ball, Corporate Vice President and General Manager, Data Center Platform Engineering and Architecture Group, Intel.

Visit Astera Labs at Intel Innovation!

Astera Labs will showcase Leo and Aries together with Intel's latest Xeon® Scalable processors at Booth #210, September 19-20 at the San Jose Convention Center. Talk to Astera Labs' experts to learn more about industry benchmarks and how to optimize PCIe/CXL memory solutions in data center architectures to deliver optimized performance for applications ranging from AI and real-time analytics to genomics and modeling.

About Astera Labs

Astera Labs, Inc. is a global leader in semiconductor-based connectivity solutions purpose-built to unleash the full potential of intelligent data infrastructure at cloud-scale. Its class-defining, first-to-market products based on PCIe, CXL, and Ethernet technologies deliver critical connectivity in accelerated computing platforms optimized for AI applications.
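The headline 50% bandwidth gain is consistent with simple interleaving arithmetic: with hardware-based interleaving, traffic is striped across native DDR channels and CXL-attached channels, so their peak bandwidths are roughly additive. A back-of-envelope sketch (the channel counts and per-channel bandwidths below are illustrative assumptions, not Astera Labs or Intel specifications):

```python
# Back-of-envelope model of memory interleaving across CPU-native DDR
# and CXL-attached memory. All numbers are illustrative assumptions.

def aggregate_bandwidth(native_channels, native_gbps, cxl_channels, cxl_gbps):
    """With hardware interleaving, sequential traffic is striped across
    all channels, so peak bandwidth is approximately additive."""
    return native_channels * native_gbps + cxl_channels * cxl_gbps

native = aggregate_bandwidth(8, 38.4, 0, 0)        # e.g. 8 DDR5-4800 channels
combined = aggregate_bandwidth(8, 38.4, 4, 38.4)   # plus 4 CXL-attached channels

print(round(combined / native - 1, 2))  # → 0.5, i.e. a 50% uplift
```

The model ignores real-world effects such as CXL link latency and protocol overhead; it only illustrates why adding interleaved CXL channels raises aggregate bandwidth in proportion to the channels added.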

Read More

Application Infrastructure

Penguin Solutions Certified as NVIDIA DGX-Ready Managed Services Partner

Business Wire | September 28, 2023

Penguin Solutions™, an SGH™ brand (Nasdaq: SGH) that designs, builds, deploys, and manages AI and accelerated computing infrastructures at scale, today announced that it has been certified by NVIDIA to support enterprises deploying NVIDIA DGX™ AI computing platforms under the NVIDIA DGX-Ready Managed Services program. NVIDIA DGX systems are an advanced supercomputing platform for large-scale AI development. The NVIDIA DGX-Ready Managed Services program gives customers the option to outsource management of DGX systems deployed in corporate data centers, including the implementation and monitoring of server, storage, and networking resources required to support DGX platforms. "Generative AI requires a completely new computing infrastructure compared to traditional IT," said Troy Kaster, vice president, commercial sales at Penguin Solutions. "These new computing infrastructures require services skills, which Penguin is uniquely qualified to support given our extensive experience partnering with some of the largest companies in AI." As a full-service integration and services provider, Penguin has the capabilities to design at scale, deploy at speed, and provide managed services for NVIDIA DGX SuperPOD solutions. Penguin has designed, built, deployed, and managed some of the largest AI training clusters in the world. Penguin currently manages over 50,000 NVIDIA GPUs for Fortune 100 customers, including Meta's AI Research SuperCluster, with 2,000 NVIDIA DGX systems and 16,000 NVIDIA A100 Tensor Core GPUs, one of the most powerful AI training clusters in the world. "AI is transforming organizations around the world, and many businesses are looking to deploy the technology without the complexities of managing infrastructure," said Tony Paikeday, senior director, DGX platform at NVIDIA.
"With DGX-Ready Managed Services offered by Penguin Solutions, our customers can deploy the world's leading platform for enterprise AI development with a simplified operations model that lets them tap into the leadership-class performance of DGX and focus on innovating with AI."

Advantages of Penguin Solutions powered by NVIDIA DGX include:

Design large-scale AI infrastructure combining the most recent DGX systems, ultra-high speed networking solutions, and cutting-edge storage options for clusters tailored to customer requirements

Manage AI infrastructure making the most of multiple layers of recent hardware and software, such as acceleration libraries, job scheduling and orchestration

Reduce risk associated with investments in computing infrastructure

Optimize efficiency of AI infrastructure with best-in-class return on investment

About Penguin Solutions

The Penguin Solutions™ portfolio, which includes Penguin Computing™, accelerates customers' digital transformation with the power of emerging technologies in HPC, AI, and IoT with solutions and services that span the continuum of edge, core, and cloud. By designing highly-advanced infrastructure, machines, and networked systems, we enable the world's most innovative enterprises and government institutions to build the autonomous future, drive discovery and amplify human potential.

Read More