STORAGE MANAGEMENT, WINDOWS SERVER MANAGEMENT, IT SYSTEMS MANAGEMENT
Marvell | October 21, 2022
Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced a comprehensive 3nm silicon platform to advance its industry-leading products across the cloud data center, carrier, enterprise, and automotive markets. Building on Marvell's success in 5nm, which includes the industry's first 5nm Data Processing Unit (DPU), the OCTEON® 10 platform, this suite of advanced technology enables cutting-edge monolithic and multi-die solutions for its customers in the industry's most advanced process node. It delivers the performance, power, and density (size) necessary to meet the most demanding infrastructure requirements for compute, next-generation 100T Ethernet switching, and 5G Advanced baseband processing.
The new 3nm Marvell silicon, which is now in fabrication with Taiwan Semiconductor Manufacturing Company (TSMC) on its 3nm shuttle, is available for new product designs and includes foundational IP building blocks such as long reach SerDes, PCIe Gen6 PHY, and several standards-based die-to-die interconnect technologies for managing data flow across the data infrastructure. This 3nm development follows numerous 5nm solutions from Marvell – in production or development – that span its unrivaled portfolio of electro-optics, switch, PHY, compute, 5G baseband, and storage products, as well as a wide range of custom ASIC programs.
Additionally, this IP portfolio is compatible with 2.5D packaging technologies such as TSMC's leading-edge 2.5D Chip-on-Wafer-on-Substrate (CoWoS). It will enable Marvell to develop some of the most advanced multi-die, multi-chiplet systems-in-package (SiP) for its industry-leading infrastructure products, and to co-develop custom ASIC solutions optimized for some of the most challenging infrastructure use cases, such as machine learning.
Silicon Advancing the Cloud
With data and internet traffic approximately doubling every two years, cloud service providers, software-as-a-service (SaaS) companies, and telecommunication carriers are increasingly relying on silicon optimized by semiconductor providers to deliver breakthrough performance and bandwidth while minimizing power consumption, emissions, and cost. Achieving these objectives, particularly for hyperscale cloud providers, requires silicon partners to move quickly to the most advanced process node available to take advantage of the inherent scaling benefits in power, performance, and density.
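The doubling claim above compounds quickly. As a rough illustration (a sketch using arbitrary starting units, not figures from the release), traffic that doubles every two years grows by a factor of 2^(t/2) after t years:

```python
# Illustrative sketch only: if traffic doubles every two years,
# projected traffic after `years` is initial * 2 ** (years / period).
def projected_traffic(initial, years, doubling_period=2):
    """Traffic level after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Starting from 100 (arbitrary units), traffic after a decade:
print(projected_traffic(100, 10))  # 100 * 2**5 = 3200.0
```

A decade of doubling every two years means 32x the traffic, which is why each new process node's power and density gains matter so much to hyperscalers.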
Marvell delivers a wide range of industry-leading standard products for cloud infrastructure including electro-optics, processors, accelerators, optical modules, Ethernet switches, storage controllers and PHY chips, and offers customized products through Marvell's ASIC portfolio. By developing and validating each of the critical IP blocks in silicon early in the availability of the 3nm process, Marvell can significantly accelerate customers' time-to-market while reducing the design risk and verification efforts associated with its complex monolithic or multi-die SoC designs.
"Marvell teamed with TSMC to provide our customers with the power to build high-performance, cloud-optimized solutions for the most demanding applications, with the industry's first 3nm IP on silicon. The 3nm platform provides advantages for a wide range of solutions, from standard and application-specific SoCs to highly custom chips with unique and innovative designs."
Raghib Hussain, President of Products & Technologies at Marvell
"TSMC is pleased to collaborate with Marvell in taping out a chip on our 3nm shuttle to validate critical cloud-focused IPs," said Yujun Li, Director of High Performance Computing Business Development at TSMC. "TSMC is looking forward to our continued collaboration with Marvell in the development of leading-edge multi-die SoCs utilizing TSMC's process and packaging technologies."
"The cloud will play an outsized role in transforming healthcare, curbing emissions, and taking on other real-world challenges, but only if cloud providers can continue to increase the overall performance and efficiency of their infrastructure," said Alan Weckel, co-founder of the 650 Group. "Marvell's collaboration with TSMC and its strategy of optimizing silicon building blocks for a wide spectrum of devices and applications is poised to play a critical role in allowing cloud providers to fulfill that promise."
About Marvell
To deliver the data infrastructure technology that connects the world, we're building solutions on the most powerful foundation: our partnerships with our customers. Trusted by the world's leading technology companies for over 25 years, we move, store, process and secure the world's data with semiconductor solutions designed for our customers' current needs and future ambitions. Through a process of deep collaboration and transparency, we're ultimately changing the way tomorrow's enterprise, cloud, automotive, and carrier architectures transform—for the better.
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, DATA STORAGE
Juniper Networks | November 30, 2022
Juniper Networks, a leader in secure, AI-driven networks, today announced that PT IndoInternet Tbk (Indonet), an Indonesian digital infrastructure provider, has selected Juniper Apstra to help automate, modernize and facilitate an experience-first expansion of its network infrastructure. Through managed automated network provisioning and monitoring, Apstra has already delivered Indonet an estimated 20% in cost savings.
As the world’s largest archipelago, Indonesia has established a reputation as a preferred data center location in Southeast Asia due to its strategic location across vital shipping lanes, high levels of internet penetration and vibrant digital-first economy. To capitalize on this, the government has rolled out ambitious initiatives to accelerate economic growth, fueled by the urgency for digital transformation across industry sectors like manufacturing, finance and healthcare.
With digital transformation underway, the demand for reliable and scalable colocation services has spiked – with spending in the capital, Jakarta, alone expected to reach $938M by 2027 at a projected five-year CAGR of 22.7%. As Indonesia’s first Internet Service Provider, Indonet has built a reputation for providing scalable and reliable digital infrastructure solutions and is poised to capture more of that ongoing growth.
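The projection above can be sanity-checked with basic compound-growth arithmetic. The release gives only the 2027 endpoint ($938M) and the 22.7% five-year CAGR; the implied base-year value below is derived from those two figures, not stated in the release:

```python
# Back-of-the-envelope check using the two figures from the release.
# The implied starting value of a CAGR projection is end / (1 + r) ** n.
end_value_m = 938.0   # projected Jakarta colocation spend in 2027, $M
cagr = 0.227          # projected five-year CAGR
years = 5

implied_base_m = end_value_m / (1 + cagr) ** years
print(round(implied_base_m, 1))  # ≈ 337.3, i.e. roughly $337M implied base
```

In other words, the projection implies Jakarta colocation spending nearly tripling over the five-year window.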
Indonet utilized Apstra to validate the design, deployment and operation of the EVPN/VXLAN overlay and IP fabric underlay of its latest data center, both built on Juniper QFX Series Switches. The use of validated templates and zero-touch provisioning has reduced deployment times and delivered reliable data center operations, allowing Indonet to significantly streamline the day-to-day management of its data center networks and seamlessly unify them in a virtual environment.
As the only solution in the industry supporting a multivendor environment, Apstra can manage data centers built with different vendors, simplifying Indonet’s network operations while accelerating its scalability. Not only has Apstra dramatically reduced tedious manual tasks with repeatable blueprints, but it has also freed up skilled engineers at the company for more strategic work.
Additionally, the latest networking upgrades supported by Juniper’s Professional Services and Advanced Services empower Indonet to better respond to customer needs with greater agility, providing ultra-low latency and highly reliable cloud solutions required to power the country’s enterprises – the backbone of Indonesia’s economic growth.
“Our vision is to become the digital infrastructure enabler of choice in Indonesia. Our partnership with Juniper Networks has modernized our network, streamlined the management of our data centers and, most importantly, helped us predict problems before they arise. This forms a strong foundation that will enable us to expand our footprint, respond to our customers with great agility and, ultimately, help them to fully capture the growth potential Indonesia has to offer.”
Den Tossi Ishak, Chief Operating Officer, Indonet
“Juniper Networks is excited to partner with and grow Indonet’s network as they build out a simplified, reliable and efficient infrastructure across Indonesia. Apstra has greatly facilitated the automation and expansion across their data center networks and we look forward to continuing our journey together as they transform into a modern digital infrastructure provider, further fueling Indonesia’s growth momentum.”
Perry Sui, Senior Director, ASEAN & Taiwan, Juniper Networks
About Juniper Networks
Juniper Networks is dedicated to dramatically simplifying network operations and driving superior experiences for end users. Our solutions deliver industry-leading insight, automation, security, and AI to drive real business results. We believe that powering connections will bring us closer together while empowering us all to solve the world’s greatest challenges of well-being, sustainability, and equality. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter, LinkedIn and Facebook.
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT
CoreWeave | November 07, 2022
CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform. CoreWeave was the first Elite Cloud Service Provider for Compute in the NVIDIA Partner Network (NPN) and is also among the NPN’s Elite Cloud Service Providers for Visualization.
“This validates what we’re building and where we’re heading,” said Michael Intrator, CoreWeave co-founder and CEO. “CoreWeave’s success will continue to be driven by our commitment to making GPU-accelerated compute available to startup and enterprise clients alike. Investing in the NVIDIA HGX H100 platform allows us to expand that commitment, and our pricing model makes us the ideal partner for any companies looking to run large-scale, GPU-accelerated AI workloads.”
NVIDIA’s ecosystem and platform are the industry standard for AI. The NVIDIA HGX H100 platform represents a leap forward in the breadth and scope of AI work businesses can now tackle. The NVIDIA HGX H100 enables up to seven times better efficiency in high-performance computing (HPC) applications, up to nine times faster AI training on the largest models and up to 30 times faster AI inference than the NVIDIA HGX A100. That speed, combined with the lowest NVIDIA GPUDirect network latency in the market via the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to “days or hours instead of months.” Such technology is critical now that AI has permeated every industry.
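The "days or hours instead of months" claim follows directly from the speedup factors quoted above. As a rough illustration (the 90-day baseline is an assumed example, not a figure from the release), a throughput speedup scales wall-clock runtime down proportionally:

```python
# Illustrative arithmetic only: the 9x training speedup is from the
# NVIDIA HGX H100 vs. HGX A100 comparison cited above; the 90-day
# baseline run is a hypothetical example.
def scaled_runtime(baseline_days, speedup):
    """Wall-clock runtime after applying a throughput speedup factor."""
    return baseline_days / speedup

print(scaled_runtime(90, 9))  # 10.0 days: a three-month run becomes ~a week and a half
```

A 9x speedup turns a quarter-long training run into roughly ten days, which is the scale of improvement the "days instead of months" framing refers to.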
“AI and HPC workloads require a powerful infrastructure that delivers cost-effective performance and scale to meet the needs of today’s most demanding workloads and applications. CoreWeave’s new offering of instances featuring NVIDIA HGX H100 supercomputers will give customers the flexibility and performance needed to power large-scale HPC applications.”
Dave Salvator, director of product marketing at NVIDIA
In the same way that drivers of fuel-efficient cars save money on gas, CoreWeave clients spend 50% to 80% less on compute resources. The company’s performance-adjusted cost structure is two-fold. First, clients only pay for the HPC resources they use, and CoreWeave cloud instances are highly configurable. Second, CoreWeave’s Kubernetes-native infrastructure and networking architecture produce performance advantages, including industry-leading spin-up times and responsive auto-scaling capabilities that allow clients to use compute more efficiently. CoreWeave competitors charge for idle compute capacity to maintain access to GPUs and use legacy networking products that degrade performance with scale.
“CoreWeave’s infrastructure is purpose-built for large-scale GPU-accelerated workloads — we specialize in serving the most demanding AI and machine learning applications,” said Brian Venturo, CoreWeave co-founder and chief technology officer. “We empower our clients to create world-changing technology by delivering practical access to high-performance compute at scale, on top of the industry’s fastest and most flexible infrastructure.”
CoreWeave leverages a range of open-source Kubernetes projects, integrates with best-in-class technologies such as Determined.AI and offers support for open-source AI models including Stable Diffusion, GPT-NeoX-20B and BLOOM as part of its mission to lead the world in AI and machine learning infrastructure.
Founded in 2017, CoreWeave provides fast, flexible, and highly available GPU compute resources that are up to 35 times faster and 80% less expensive than large, generalized public clouds. An Elite Cloud Service Provider for Compute and Visualization in the NPN, CoreWeave offers cloud services for compute-intensive projects, including AI, machine learning, visual effects and rendering, batch processing and pixel streaming. CoreWeave’s infrastructure is purpose-built for burstable workloads, with the ability to scale up or down in seconds.