Tencent to Invest $70 Billion in 'New Infrastructure'

Tencent | May 26, 2020

Tencent is best known for its WeChat messaging app and a range of popular games, but it is aiming to expand into business services as consumer internet growth slows and companies shift number-crunching from their own computers to the cloud. Tencent shares rose 2.5% following the announcement. The company has said that while its cloud business suffered amid the COVID-19 outbreak, it expects accelerated adoption of cloud services and enterprise software by offline industries and the public sector over the longer term. "Expediting the 'new infrastructure' strategy will help further cement virus containment success," Guangming Daily quoted Tong as saying. Tencent Cloud held 18% of China's cloud market in the fourth quarter, trailing Alibaba Group Holding Ltd, which commanded 46.4%, according to research firm Canalys.

Spotlight

IBM Integrated Managed Infrastructure helps you fuse your existing IT with cloud capabilities and orchestrate it from any location across all providers. Learn more about IBM Integrated Managed Infrastructure.


Other News
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, APPLICATION STORAGE

LDRA Advances ‘Shift Left’ Strategy with Amazon Web Services Integration

LDRA | September 05, 2022

LDRA, the leader in standards compliance, automated software verification, software code analysis, and test tools, today announced an integration with Amazon Web Services (AWS) to help small- and medium-sized organizations more efficiently build security into the earliest stages of software development. AWS is a cloud-hosted development and deployment platform that offers more than 200 fully featured services from global data centers. Millions of customers, including startups, large enterprises, and leading government agencies, develop with AWS to lower costs, become more agile, and innovate faster. The LDRA tool suite adds testing to the AWS cloud pipeline to more efficiently assess an operation, a file, or groups of operations/files, while also helping focus testing efforts. The integration of LDRA tools with AWS' existing testing tools improves software robustness, enhances security, and delivers faster time to market. "With advanced cloud-development platforms like AWS, even the smallest organization can build software that is high quality, safe and secure without the need for expensive servers and infrastructure," said Ian Hennell, Operations Director, LDRA. "Couple AWS with an analysis and testing tool like our tool suite, and they can easily test and analyze the software for any security holes so they can be fixed long before they get to market."

Startups, Enterprises, Government Agencies Benefit from Cloud-based DevSecOps

This LDRA/AWS integration, a model for integration in public and private clouds, brings development, security, and operations together to improve efficiency and automation from the start. Using the LDRA tool suite with AWS lets organizations execute security tests more efficiently across one or many tasks in parallel. This is especially important for organizations where security is critical, including a large US-hosted defense contractor that recently moved to AWS for Defense.
"As we see customers transition their traditional infrastructure to AWS and AWS for Defense, LDRA's ability to interoperate in a cloud environment has become increasingly important," Hennell added. "Our tool suite can run in traditional AWS, AWS GovCloud and AWS GovCloud with ITAR restrictions, helping customers meet their security needs regardless of which version of AWS they've deployed."

LDRA tool suite supports multiple on-premises, cloud-hosted deployment options

In addition to AWS, the LDRA tool suite supports other on-premises and cloud-hosted deployment options, such as the Wind River Studio and Azure DevOps platforms, to support environment hardening and simplify achieving security at scale. Deployment options include hardened "Zero Trust" environments that rely on always-available "known good" containers, eliminating systemic vulnerabilities.

About LDRA

For more than 45 years, LDRA has developed and driven the market for software that automates code analysis and software testing for safety-, mission-, security-, and business-critical markets. Working with clients to achieve early error identification and full compliance with industry standards, LDRA traces requirements through static and dynamic analysis to unit testing and verification for a wide variety of hardware and software platforms. Boasting a worldwide presence, LDRA is headquartered in the United Kingdom with subsidiaries in the United States, Germany, and India, coupled with an extensive distributor network.

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE

DJIB Launches First-Ever Enterprise-Grade Decentralised Data Storage Drive

DJIB | September 06, 2022

Today DJIB launched the first-ever end-to-end encrypted, enterprise-grade decentralised data storage drive with embedded multi-chain non-fungible token (NFT) functionality, enabling the widespread adoption of NFTs in business applications. Cloud data storage is dominated by services such as Amazon AWS, Google Cloud, and Microsoft Azure. However, in the age of blockchains, users find traditional storage limiting because it is centralised in the hands of individual corporations: user data can potentially be accessed without their knowledge by employees of such providers, and the ability to save objects as NFTs, which will be increasingly required in business applications, is missing. This is why the DJIB data storage drive, while being AWS S3 compatible and blazingly fast, addresses all of these concerns for the first time by being end-to-end encrypted and censorship resistant, with built-in NFT functionality. It reimagines the concept of NFTs, treating them as a new type of file format: users can "Save as NFT" any file stored on the drive, demystifying the creation of NFTs. Files can be as large as 5 TB, which removes existing technical constraints. Users can either attach custom business logic to their NFTs or use pre-defined templates from a library without knowing how to code. For example, a musician can publish a song with pre-defined licensing rights, or a pharmaceutical company can allow patients to share and profit from their medical data with very granular permissions and usage rights, all without the need for any intermediaries or specialist software. Any asset can now be tokenised, and any financial director can issue share certificates in NFT format. Such NFTs are immediately interoperable with every blockchain for which DJIB has a connector, starting with Solana, Ethereum, and BSC, and soon covering all key networks.
DJIB is already working on connectors with teams from major blockchains, starting with those that are enterprise focused and see this as an opportunity to foster the development of applications within their ecosystems. Moe Sayadi, DJIB CEO and formerly a solutions architect at Microsoft and Avaloq, says: "Making our decentralised drive available to enterprise customers and removing the mystery behind the creation of NFTs opens an unimaginable trove of opportunities. It puts a powerful tool into the hands of non-technical domain experts. They can focus on the business logic attached to any document, and potentially any physical item, and move entire business processes to the cloud. This enables Object Oriented Business Process Management and many other exciting innovations which are in our pipeline and will be announced soon. We are discussing some very interesting use cases with corporate CTOs, and I can confidently say that the NFT evolution has finally passed the apes stage."
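The article describes treating an NFT as a file format: a stored object plus attached permissions and usage rights. DJIB's actual schema is not public, so the field names below are assumptions; this stdlib-only sketch just shows the general shape of "Save as NFT" — content-address the bytes, then bundle the hash with declarative business logic:

```python
import hashlib
import json

# Illustrative sketch only: the field names here are assumptions, not
# DJIB's actual "Save as NFT" schema.
def save_as_nft(data: bytes, name: str, permissions: dict) -> str:
    token = {
        "name": name,
        # Content addressing: the hash ties the token to the exact bytes.
        "sha256": hashlib.sha256(data).hexdigest(),
        "size": len(data),
        # Business logic travels with the token as declarative permissions,
        # e.g. licensing rights for a song or access rules for medical data.
        "permissions": permissions,
    }
    return json.dumps(token, sort_keys=True)
```

A pre-defined template from a library would simply be a ready-made `permissions` dictionary, which is what lets non-programmers attach business logic.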

Read More

IT SYSTEMS MANAGEMENT

Inspur Information's Cloud-Native Computing Platform Certified for Arm SystemReady SR

Inspur Information | June 23, 2022

Inspur Information, a leading IT infrastructure solutions provider, has joined the Arm® SystemReady™ program, achieving the highest-level Arm SystemReady SR certification. Thanks to a standardized design that can be adapted to multiple systems, Inspur Information is able to meet increasingly diverse customer needs across different usage scenarios in the new era of big data and cloud computing. Inspur Information announced NF5280R6, its first product supporting the Arm-based Ampere® Altra® and Ampere® Altra® Max® Cloud Native Processors, which are designed for modern cloud infrastructure. The 2U dual-socket NF5280R6 platform is Arm SystemReady certified and supports up to 256 high-performance CPU cores that deliver predictable performance, scale linearly, and consume less power. The NF5280R6 platform improves rack density by more than 36% while lowering power consumption by more than 41% compared to legacy x86 platforms. In addition, with support for multi-host and smart NICs, NF5280R6 provides eight standard PCIe 4.0 slots and one optional OCP 3.0 slot, maximizing its scalability for applications such as high-performance all-flash storage and network acceleration. NF5280R6 provides an open-source solution that alleviates application portability difficulties and allows customers to maximize business benefits with minimal porting costs. This makes it an ideal choice for cloud container deployment, Android cloud gaming, and big data applications. Backed by SystemReady SR, infrastructure solution developers can directly deploy or run mainstream operating systems such as Fedora, Ubuntu, SUSE Linux Enterprise, CentOS, Debian, and WinPE on NF5280R6 for an "out-of-the-box" refined user experience. No extra costs for adaptation to different operating systems or container technologies are required.
Simplified deployment and support for standard firmware interfaces reduce the cost of customizing firmware and maintaining multiple software platforms, so customers can focus on innovation with their products.

"We always keep consumers in mind as we continually innovate our products and technologies to build increasingly diversified product platforms. As the Arm architecture grew in the server space, we noticed that our customers focused more on the portability of platforms and the convenience of Arm-based cloud-native applications, which is exactly what the Arm SystemReady program provides. NF5280R6, the SystemReady SR-certified cloud-native dual-socket server, handles diversified customer needs and provides computing power for a more extensive customer base. In the future, Inspur Information will continue to bring more Arm-based value and innovation complying with industry standards to our customers and developers," said Ricky Zhao, Deputy General Manager of Inspur Information's Server Product Line.

Arm SystemReady is a set of standards and a compliance certification program that enables interoperability between Arm-based devices and leading operating systems and applications, so that software "just works" out of the box. This allows easy deployment of software and takes advantage of the comprehensive ecosystem of mature operating systems. "As an industry-wide initiative, Arm SystemReady has gained extensive recognition and support from a broad set of partners thanks to its benefits for the entire industrial chain. These partners have taken an active part in the formulation and implementation of the SystemReady standards, making significant contributions," said Frank Zou, vice president, Infrastructure Line of Business, Arm. "Having Inspur Information, a global leader in computing power infrastructure, join the SystemReady program is a great driving force for the thriving and innovative Arm-based cloud-native ecosystem."
About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions. It is the world's 2nd-largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges.

Read More

HYPER-CONVERGED INFRASTRUCTURE

Inspur Announces MLPerf v2.0 Results for AI Servers

Inspur | July 04, 2022

The open engineering consortium MLCommons released the latest MLPerf Training v2.0 results, with Inspur AI servers leading in closed-division single-node performance. MLPerf is the world's most influential benchmark for AI performance. It is managed by MLCommons, with members from more than 50 global leading AI companies and top academic institutions, including Inspur Information, Google, Facebook, NVIDIA, Intel, Harvard University, Stanford University, and the University of California, Berkeley. MLPerf AI Training benchmarks are held twice a year to track improvements in computing performance and provide authoritative data guidance for users. The latest MLPerf Training v2.0 round attracted 21 global manufacturers and research institutions, including Inspur Information, Google, NVIDIA, Baidu, Intel-Habana, and Graphcore. There were 264 submissions, a 50% increase over the previous round. The eight AI benchmarks cover current mainstream AI usage scenarios: image classification with ResNet, medical image segmentation with 3D U-Net, light-weight object detection with RetinaNet, heavy-weight object detection with Mask R-CNN, speech recognition with RNN-T, natural language processing with BERT, recommendation with DLRM, and reinforcement learning with MiniGo. Among the closed-division benchmarks for single-node systems, Inspur Information's high-end AI servers were the top performers in natural language processing with BERT, recommendation with DLRM, and speech recognition with RNN-T, winning the most titles among single-node system submitters. For mainstream high-end AI servers equipped with eight NVIDIA A100 Tensor Core GPUs, Inspur Information AI servers were top-ranked in five tasks (BERT, DLRM, RNN-T, ResNet, and Mask R-CNN).

Continuing to lead in AI computing performance

Inspur AI servers continue to achieve AI performance breakthroughs through comprehensive software and hardware optimization.
Compared to the MLPerf v0.5 results in 2018, Inspur AI servers have shown performance improvements of up to 789% for typical 8-GPU server models. The leading performance of Inspur AI servers in MLPerf is the result of outstanding design innovation and full-stack optimization capabilities for AI. Targeting the bottleneck of intensive I/O transmission in AI training, the PCIe retimer-free design of Inspur AI servers allows high-speed interconnection between CPUs and GPUs with reduced communication delays. For high-load, multi-GPU collaborative task scheduling, data transmission between NUMA nodes and GPUs is optimized to ensure that data I/O in training tasks stays at peak performance. In terms of heat dissipation, Inspur Information takes the lead in deploying eight 500W high-end NVIDIA A100 Tensor Core GPUs in a 4U space, with support for both air cooling and liquid cooling. Meanwhile, Inspur AI servers continue to optimize pre-training data processing performance and adopt combined optimization strategies, such as hyperparameter and NCCL parameter tuning, as well as the many enhancements provided by the NVIDIA AI software stack, to maximize AI model training performance.

Greatly improving Transformer training performance

Pre-trained massive models based on the Transformer neural network architecture have led to the development of a new generation of AI algorithms. The BERT model in the MLPerf benchmarks is based on the Transformer architecture. Transformer's concise and stackable architecture makes the training of massive models with huge parameter counts possible. This has led to a huge improvement in large-model algorithms, but it places higher requirements on the processing performance, communication interconnection, I/O performance, parallel extension, topology, and heat dissipation of AI systems.
In the BERT benchmark, Inspur AI servers further improved BERT training performance through methods including optimized data preprocessing, improved dense-parameter communication between NVIDIA GPUs, and automatic hyperparameter optimization. Inspur Information AI servers can complete BERT model training of approximately 330 million parameters in just 15.869 minutes using 2,850,176 samples from the Wikipedia data set, a performance improvement of 309% compared to the top result of 49.01 minutes in Training v0.7. With this result, Inspur AI servers have won the MLPerf Training BERT benchmark for the third consecutive time. Inspur Information's two AI servers with top scores in MLPerf Training v2.0 are NF5488A5 and NF5688M6. The NF5488A5 is one of the first servers in the world to support eight NVIDIA A100 Tensor Core GPUs with NVIDIA NVLink technology and two AMD Milan CPUs in a 4U space. It supports both liquid cooling and air cooling, and has won a total of 40 MLPerf titles. NF5688M6 is a scalable AI server designed for large-scale data center optimization. It supports eight NVIDIA A100 Tensor Core GPUs and two Intel Ice Lake CPUs, with up to 13 PCIe Gen4 I/O slots, and has won a total of 25 MLPerf titles.

About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions. It is the world's 2nd-largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges.
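The "309%" BERT figure quoted in this article reads as the ratio of the old training time to the new one expressed as a percentage, rather than 309 percentage points of gain. A quick cross-check using only the two numbers given:

```python
# Cross-check the quoted BERT results: 49.01 minutes in MLPerf Training v0.7
# versus 15.869 minutes in Training v2.0.
v07_minutes = 49.01
v20_minutes = 15.869

# Speedup as old-time / new-time; ~3.09x matches the quoted "309%".
speedup = v07_minutes / v20_minutes
print(f"{speedup:.2f}x")
```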

Read More
