TECHNOLOGY ACROSS TELECOMMUNICATIONS, WEBSCALE AND NEUTRAL CARRIER FOR 150 NETWORK OPERATORS

PR Newswire | August 26, 2020

Telecommunications network operators (TNOs) continue to face slow revenue growth, rising labor costs, and high network capital expenditures (capex). The cost of powering and maintaining networks is also a challenge. To cope, telcos are digitally transforming their operations, restructuring their physical asset base, and learning from the webscale world. Webscale network operators (WNOs) have grown to be among the world's largest companies, and are investing in a range of markets with network needs: connected cars, remote health care, drones and more. WNOs spend big in the cloud, and are pressuring traditional supply chains through their support for open networking, direct work with the ODM/EMS sector, and self-designed chips.

Spotlight

A hyper-converged system is a pre-configured virtualised server platform that combines compute, storage, networking, and management software in a single appliance. Hyper-convergence lets you deploy mixed-workload and virtual desktop infrastructure simply and rapidly across local or remote locations. Hyper-converged systems help lower OpEx through a smaller administrative and physical footprint, redeployment flexibility, and simplified storage management.
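To make the "single appliance, pooled resources" idea concrete, here is a minimal illustrative sketch (not any vendor's API; all names are hypothetical) of how a hyper-converged cluster aggregates compute, storage and networking from identical appliance nodes:

```python
# Hypothetical model of a hyper-converged cluster: each appliance node
# contributes compute, software-defined storage and networking to one pool.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int      # compute
    storage_tb: float   # software-defined storage contribution
    nic_gbps: int       # networking

def cluster_capacity(nodes):
    """Aggregate per-node resources into one logical pool."""
    return {
        "cpu_cores": sum(n.cpu_cores for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
        "nic_gbps": sum(n.nic_gbps for n in nodes),
    }

# Scaling out is simply adding another identical appliance:
nodes = [Node(32, 10.0, 25), Node(32, 10.0, 25), Node(32, 10.0, 25)]
print(cluster_capacity(nodes))  # {'cpu_cores': 96, 'storage_tb': 30.0, 'nic_gbps': 75}
```

The point of the sketch is the operational model: capacity grows by adding uniform nodes rather than by managing separate compute, storage and network silos.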

Related News

STORAGE MANAGEMENT, WINDOWS SERVER MANAGEMENT, IT SYSTEMS MANAGEMENT

Marvell Announces Industry's Most Comprehensive 3nm Data Infrastructure IP Portfolio

Marvell | October 21, 2022

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced a comprehensive 3nm silicon platform to advance its industry-leading products across the cloud data center, carrier, enterprise, and automotive markets. Leveraging Marvell's success in 5nm, which includes the industry's first 5nm Data Processing Unit (DPU), the OCTEON® 10 platform, this suite of advanced technology enables cutting-edge monolithic and multi-die solutions for its customers in the industry's most advanced process node, delivering the performance, power, and density necessary to meet the most demanding infrastructure requirements for compute, next-generation 100T Ethernet switching, and 5G Advanced baseband processing.

The new 3nm Marvell silicon, now in fabrication with Taiwan Semiconductor Manufacturing Company (TSMC) on its 3nm shuttle, is available for new product designs and includes foundational IP building blocks such as long-reach SerDes, PCIe Gen6 PHY, and several standards-based die-to-die interconnect technologies for managing data flow across the data infrastructure. This 3nm development follows numerous 5nm solutions from Marvell (in production or development) that span its portfolio of electro-optics, switch, PHY, compute, 5G baseband, and storage products, as well as a wide range of custom ASIC programs.

Additionally, this IP portfolio is compatible with 2.5D packaging technologies such as TSMC's leading-edge Chip-on-Wafer-on-Substrate (CoWoS), and will enable Marvell to develop advanced multi-die, multi-chiplet systems-in-package (SiP) for its infrastructure products and to co-develop custom ASIC solutions optimized for challenging infrastructure use cases such as machine learning.
Silicon Advancing the Cloud

With data and internet traffic approximately doubling every two years, cloud service providers, software-as-a-service (SaaS) companies, and telecommunication carriers are increasingly relying on silicon optimized by semiconductor providers to deliver breakthrough performance and bandwidth while minimizing power consumption, emissions, and cost. Achieving these objectives, particularly for hyperscale cloud providers, requires silicon partners to move quickly to the most advanced process node available to take advantage of the inherent scaling benefits in power, performance, and density.

Marvell delivers a wide range of standard products for cloud infrastructure including electro-optics, processors, accelerators, optical modules, Ethernet switches, storage controllers and PHY chips, and offers customized products through its ASIC portfolio. By developing and validating each of the critical IP blocks in silicon early in the availability of the 3nm process, Marvell can significantly accelerate customers' time-to-market while reducing the design risk and verification effort associated with complex monolithic or multi-die SoC designs.

"Marvell teamed with TSMC to provide our customers with the power to build high-performance, cloud-optimized solutions for the most demanding applications requiring the industry's first 3nm IP on silicon. The 3nm platform provides advantages for a wide range of solutions, from standard and application-specific SoCs to highly custom chips with unique and innovative designs."

Raghib Hussain, President of Products & Technologies at Marvell

"TSMC is pleased to collaborate with Marvell in taping out a chip on our 3nm shuttle to validate critical cloud-focused IPs," said Yujun Li, Director of High Performance Computing Business Development at TSMC.
"TSMC is looking forward to our continued collaboration with Marvell in the development of leading-edge multi-die SoCs utilizing TSMC's process and packaging technologies."

"The cloud will play an outsized role in transforming healthcare, curbing emissions, and taking on other real-world challenges, but only if cloud providers can continue to increase the overall performance and efficiency of their infrastructure," said Alan Weckel, co-founder of the 650 Group. "Marvell's collaboration with TSMC and its strategy of optimizing silicon building blocks for a wide spectrum of devices and applications is poised to play a critical role in allowing cloud providers to fulfill that promise."

About Marvell

To deliver the data infrastructure technology that connects the world, we're building solutions on the most powerful foundation: our partnerships with our customers. Trusted by the world's leading technology companies for over 25 years, we move, store, process and secure the world's data with semiconductor solutions designed for our customers' current needs and future ambitions. Through a process of deep collaboration and transparency, we're ultimately changing the way tomorrow's enterprise, cloud, automotive, and carrier architectures transform, for the better.

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT

CoreWeave Among First Cloud Providers to Offer NVIDIA HGX H100 Supercomputers Set to Transform AI Landscape

CoreWeave | November 07, 2022

CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform. CoreWeave was the first Elite Cloud Service Provider for Compute in the NVIDIA Partner Network (NPN) and is also among the NPN's Elite Cloud Service Providers for Visualization.

"This validates what we're building and where we're heading," said Michael Intrator, CoreWeave co-founder and CEO. "CoreWeave's success will continue to be driven by our commitment to making GPU-accelerated compute available to startup and enterprise clients alike. Investing in the NVIDIA HGX H100 platform allows us to expand that commitment, and our pricing model makes us the ideal partner for any companies looking to run large-scale, GPU-accelerated AI workloads."

NVIDIA's ecosystem and platform are the industry standard for AI. The NVIDIA HGX H100 platform enables a leap forward in the breadth and scope of AI work businesses can now tackle. The NVIDIA HGX H100 delivers up to seven times better efficiency in high-performance computing (HPC) applications, up to nine times faster AI training on the largest models and up to 30 times faster AI inference than the NVIDIA HGX A100. That speed, combined with the lowest NVIDIA GPUDirect network latency in the market on the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to "days or hours instead of months." Such technology is critical now that AI has permeated every industry.

"AI and HPC workloads require a powerful infrastructure that delivers cost-effective performance and scale to meet the needs of today's most demanding workloads and applications. CoreWeave's new offering of instances featuring NVIDIA HGX H100 supercomputers will give customers the flexibility and performance needed to power large-scale HPC applications."

Dave Salvator, director of product marketing at NVIDIA

In the same way that drivers of fuel-efficient cars save money on gas, CoreWeave clients spend between 50% and 80% less on compute resources. The company's performance-adjusted cost structure is two-fold. First, clients pay only for the HPC resources they use, and CoreWeave cloud instances are highly configurable. Second, CoreWeave's Kubernetes-native infrastructure and networking architecture produce performance advantages, including industry-leading spin-up times and responsive auto-scaling capabilities that allow clients to use compute more efficiently. CoreWeave competitors charge for idle compute capacity to maintain access to GPUs and use legacy networking products that degrade performance at scale.

"CoreWeave's infrastructure is purpose-built for large-scale GPU-accelerated workloads — we specialize in serving the most demanding AI and machine learning applications," said Brian Venturo, CoreWeave co-founder and chief technology officer. "We empower our clients to create world-changing technology by delivering practical access to high-performance compute at scale, on top of the industry's fastest and most flexible infrastructure."

CoreWeave leverages a range of open-source Kubernetes projects, integrates with best-in-class technologies such as Determined.AI and offers support for open-source AI models including Stable Diffusion, GPT-NeoX-20B and BLOOM as part of its mission to lead the world in AI and machine learning infrastructure. Founded in 2017, CoreWeave provides fast, flexible, and highly available GPU compute resources that are up to 35 times faster and 80% less expensive than large, generalized public clouds. An Elite Cloud Service Provider for Compute and Visualization in the NPN, CoreWeave offers cloud services for compute-intensive projects, including AI, machine learning, visual effects and rendering, batch processing and pixel streaming. CoreWeave's infrastructure is purpose-built for burstable workloads, with the ability to scale up or down in seconds.

About CoreWeave

CoreWeave is a specialized cloud provider, delivering massive-scale GPU compute resources on top of the industry's fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute-intensive use cases (digital assets, VFX and rendering, machine learning and AI, batch processing and pixel streaming) that are up to 35 times faster and 80% less expensive than large, generalized public clouds.
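The pay-for-what-you-use pricing model described above can be sketched with a back-of-the-envelope comparison. The rates and hours below are hypothetical illustrations, not CoreWeave's actual prices:

```python
# Hypothetical comparison: usage-based GPU billing vs. paying for reserved
# (partly idle) capacity. All numbers are made up for illustration.
def usage_based_cost(hours_used, rate_per_gpu_hour, gpus):
    """Bill only for the hours the workload actually ran."""
    return hours_used * rate_per_gpu_hour * gpus

def reserved_cost(hours_reserved, rate_per_gpu_hour, gpus):
    """Reserved capacity bills for every hour, busy or idle."""
    return hours_reserved * rate_per_gpu_hour * gpus

rate = 2.0          # assumed $/GPU-hour, not a real price
gpus = 8
busy_hours = 200    # hours the training job actually ran
month_hours = 720   # hours reserved for the whole month

usage = usage_based_cost(busy_hours, rate, gpus)    # 3200.0
reserved = reserved_cost(month_hours, rate, gpus)   # 11520.0
savings = 1 - usage / reserved
print(f"savings vs. reserved: {savings:.0%}")       # savings vs. reserved: 72%
```

With these assumed figures, a bursty workload that is busy less than a third of the month lands in the 50-80% savings band the article cites; the actual savings depend entirely on utilization and real rates.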

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, DATA STORAGE

Quali Simplifies Cloud Infrastructure Management with Latest Evolution of Torque Platform

Quali | October 28, 2022

Quali, a leading provider of Environments as a Service infrastructure automation and management solutions, today announced new capabilities that simplify the management of Infrastructure as Code (IaC), strengthen infrastructure governance, and provide further actionable data on the usage and cost of cloud infrastructure.

Torque delivers on businesses' need to scale, with transparency and controls that ensure governance and accountability without inhibiting rapid execution. With no additional effort or implementation needed, Torque removes friction and promotes productivity by discovering, analyzing and importing existing IaC assets created by DevOps teams, templatizing those assets into complete application environments, and allowing governed self-service access with unprecedented visibility and control. Torque allows teams to set policies that enforce governance, manage costs, and mitigate the risks associated with cloud infrastructure, enabling organizations to respond to business requirements and deliver change faster and with greater agility. Torque operates across all major cloud providers, as well as major infrastructure types such as containers, VMs and Kubernetes, on any target infrastructure.

The latest release of Torque delivers key capabilities to simplify complex infrastructure, manage IaC files and integrate with the most widely used technologies to leverage businesses' existing investments. New enhancements to Torque include:

- Helm drift detection – In addition to detecting infrastructure drift for Terraform files, Torque now adds that capability for Helm charts, building an additional layer of control to ensure infrastructure consistency throughout the CI/CD pipeline.
- "BYO" Terraform policies – Torque supports basic Terraform policies, but now allows the import of existing definitions, so users can leverage previous work to define policies.
- Enhanced cost reporting – Cost reporting now includes automatic cost collection for Kubernetes hosts, enhancing cost visibility and providing business context for resource consumption.
- Environment view – From a single pane of glass, Torque lists all elements comprising an environment blueprint definition pulled from the user's Git, including visibility into all subcomponents of environment definitions.
- Audit log integrations – All data collected by Torque can be exported to third-party audit tools such as the ELK Elasticsearch service, promoting greater visibility and accountability and further strengthening IT teams' ability to enforce compliance.

"The rate at which technology is evolving has created a level of complexity that businesses are struggling to manage. As a result, many are turning to IaC for automation, but environments now consist of a larger number of technologies that need to be governed. Torque is the control plane that manages those technologies, so organizations can operate with more speed, greater scale, lower costs and less risk."

Lior Koriat, CEO of Quali

With Torque, IT leaders understand what infrastructure is being used, when, why and by whom, in a consistent, measurable way and without any negative impact on development practices and tooling. This preserves software development teams' freedom while accelerating infrastructure delivery, improving accountability and mitigating risk, supporting the business's need to plan, optimize and understand the value delivered by software and infrastructure.

Quali will demonstrate its Torque platform at KubeCon North America, October 26-28 in Detroit, Michigan. Stop by booth S6 to learn more.

About Quali

Headquartered in Austin, Texas, Quali provides the leading platform for Environments as a Service. Global 2000 enterprises rely on Quali's infrastructure automation and control plane platform to support the continuous delivery of application software at scale. Quali delivers greater control and visibility over infrastructure, so businesses can increase engineering productivity and velocity, understand and manage cloud costs, optimize infrastructure utilization and mitigate risk.
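The drift-detection capability mentioned above rests on a simple idea: compare the state declared in IaC files against the live state and flag any divergence. Here is a conceptual sketch (not Torque's actual implementation; the data shapes are hypothetical):

```python
# Conceptual sketch of infrastructure drift detection: diff the declared
# (IaC) configuration against the live state and report mismatched fields.
def detect_drift(declared: dict, live: dict) -> dict:
    """Return {key: (declared_value, live_value)} for every divergence."""
    drift = {}
    for key, want in declared.items():
        have = live.get(key)
        if have != want:
            drift[key] = (want, have)
    return drift

# Example: someone manually scaled a deployment that IaC says has 3 replicas.
declared = {"replicas": 3, "image_tag": "1.4.2", "cpu_limit": "500m"}
live     = {"replicas": 5, "image_tag": "1.4.2", "cpu_limit": "500m"}

print(detect_drift(declared, live))  # {'replicas': (3, 5)}
```

Real tools (Terraform's refresh-and-plan cycle, Helm chart comparison) work against provider APIs and rendered manifests rather than flat dictionaries, but the core diff-and-report loop is the same.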

Read More