HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT
CoreWeave | November 07, 2022
CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform. CoreWeave was the first Elite Cloud Service Provider for Compute in the NVIDIA Partner Network (NPN) and is also among the NPN’s Elite Cloud Service Providers for Visualization.
“This validates what we’re building and where we’re heading,” said Michael Intrator, CoreWeave co-founder and CEO. “CoreWeave’s success will continue to be driven by our commitment to making GPU-accelerated compute available to startup and enterprise clients alike. Investing in the NVIDIA HGX H100 platform allows us to expand that commitment, and our pricing model makes us the ideal partner for any companies looking to run large-scale, GPU-accelerated AI workloads.”
NVIDIA’s ecosystem and platform are the industry standard for AI. The NVIDIA HGX H100 platform represents a leap forward in the breadth and scope of AI work businesses can now tackle: it delivers up to seven times better efficiency in high-performance computing (HPC) applications, up to nine times faster AI training on the largest models and up to 30 times faster AI inference than the NVIDIA HGX A100. That speed, combined with the lowest NVIDIA GPUDirect network latency on the market via the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to “days or hours instead of months.” Such technology is critical now that AI has permeated every industry.
“AI and HPC workloads require a powerful infrastructure that delivers cost-effective performance and scale to meet the needs of today’s most demanding workloads and applications. CoreWeave’s new offering of instances featuring NVIDIA HGX H100 supercomputers will give customers the flexibility and performance needed to power large-scale HPC applications.”
Dave Salvator, director of product marketing at NVIDIA
In the same way that drivers of fuel-efficient cars save money on gas, CoreWeave clients spend 50% to 80% less on compute resources. The company’s performance-adjusted cost structure is twofold. First, clients pay only for the HPC resources they use, and CoreWeave cloud instances are highly configurable. Second, CoreWeave’s Kubernetes-native infrastructure and networking architecture produce performance advantages, including industry-leading spin-up times and responsive auto-scaling capabilities that allow clients to use compute more efficiently. CoreWeave’s competitors charge for idle compute capacity to maintain access to GPUs and use legacy networking products whose performance degrades at scale.
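The responsive auto-scaling described above maps onto standard Kubernetes primitives. As an illustrative sketch only (the deployment name and thresholds are hypothetical, not CoreWeave’s actual configuration), an operator might attach a horizontal autoscaler to an inference workload like this:

```shell
# Hypothetical sketch of Kubernetes-native autoscaling with kubectl.
# "inference-server" and all thresholds are placeholders, not CoreWeave defaults.
kubectl autoscale deployment inference-server \
  --min=1 --max=20 --cpu-percent=70
# Replicas are added when average CPU utilization exceeds 70% and removed as
# load falls, so idle capacity is released rather than held (and billed).
```

This is the general mechanism by which a Kubernetes-native cloud can bill for consumed rather than reserved capacity; CoreWeave’s own scaling implementation is not detailed in the announcement.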
“CoreWeave’s infrastructure is purpose-built for large-scale GPU-accelerated workloads — we specialize in serving the most demanding AI and machine learning applications,” said Brian Venturo, CoreWeave co-founder and chief technology officer. “We empower our clients to create world-changing technology by delivering practical access to high-performance compute at scale, on top of the industry’s fastest and most flexible infrastructure.”
CoreWeave leverages a range of open-source Kubernetes projects, integrates with best-in-class technologies such as Determined.AI and offers support for open-source AI models including Stable Diffusion, GPT-NeoX-20B and BLOOM as part of its mission to lead the world in AI and machine learning infrastructure.
Founded in 2017, CoreWeave provides fast, flexible, and highly available GPU compute resources that are up to 35 times faster and 80% less expensive than large, generalized public clouds. An Elite Cloud Service Provider for Compute and Visualization in the NPN, CoreWeave offers cloud services for compute-intensive projects, including AI, machine learning, visual effects and rendering, batch processing and pixel streaming. CoreWeave’s infrastructure is purpose-built for burstable workloads, with the ability to scale up or down in seconds.
CoreWeave is a specialized cloud provider, delivering a massive scale of GPU compute resources on top of the industry’s fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute intensive use cases — digital assets, VFX and rendering, machine learning and AI, batch processing and pixel streaming — that are up to 35 times faster and 80% less expensive than the large, generalized public clouds.
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE
Spectra Logic | September 26, 2022
Spectra Logic, a global leader in data management and data storage solutions, today announced a collaboration with the iRODS Consortium to create a joint solution built upon Spectra Vail® software, Spectra BlackPearl® S3 storage and the iRODS data management platform. The combined solution enables customers to use industry-standard cloud interfaces for on-premises disk and on-premises glacier storage with object tape, while unlocking multi-site/multi-cloud capabilities.
The iRODS integration with BlackPearl S3 allows organizations to leverage the performance and cost benefits of on-premises glacier storage as disk or tape to access “cold” data and automate workflows, while the integration with Vail provides access to cloud services across multiple clouds. Spectra Vail software and BlackPearl S3 storage have been tested with the iRODS S3 storage resource plugin to fully support the Amazon® S3 abstraction that iRODS delivers. The new functionality is available as part of the iRODS 4.2.11 release.
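For context on what the S3 storage resource plugin integration looks like in practice, an iRODS administrator registers an S3-compatible endpoint such as BlackPearl as a storage resource. The sketch below is a hedged illustration only: the resource name, hostname, bucket and credential paths are placeholders, not a Spectra-specific recipe.

```shell
# Hypothetical sketch: register an S3-compatible endpoint (e.g. BlackPearl S3)
# as an iRODS storage resource via the iRODS S3 resource plugin.
# All names, hosts, and paths below are placeholders.
iadmin mkresc blackpearlResc s3 "$(hostname)":/my-bucket/irods \
  "S3_DEFAULT_HOSTNAME=blackpearl.example.org;S3_AUTH_FILE=/etc/irods/s3.keypair;S3_PROTO=HTTPS;HOST_MODE=cacheless_attached"
# Data placed on this resource is then written through the S3 interface, e.g.:
#   iput -R blackpearlResc archive-data.tar
```

Once registered, iRODS rules and workflows can target the resource like any other, which is what allows “cold” glacier-tier data to participate in automated data management policies.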
"Organizations that need an on-prem glacier tier will see many benefits with the interoperability between BlackPearl S3 and the iRODS data management platform. “Organizations will be able to take full advantage of on-prem storage and the public, private and hybrid cloud by leveraging the Vail and iRODS integration.”
David Feller, Spectra Logic vice president of product management and solutions engineering
"The combined Spectra Logic and iRODS solution will enable organizations that rely heavily on tape to archive petabytes of valuable digital data economically and efficiently in a glacier-like tier,” said Terrell Russell, executive director of the iRODS Consortium. “We look forward to a lasting collaboration with Spectra Logic that will help our mutual customers drive innovation and accelerate business results."
About the iRODS Consortium
The iRODS Consortium is a membership-based organization that guides development and support of iRODS as free open-source software for data discovery, workflow automation, secure collaboration, and data virtualization. The iRODS Consortium provides a production-ready iRODS distribution and iRODS professional integration services, training, and support. The consortium is administered by founding member RENCI, a research institute for applications of cyberinfrastructure located at the University of North Carolina at Chapel Hill, USA.
About Spectra Logic
Spectra Logic develops a full range of Attack Hardened™ data management and data storage solutions for a multi-cloud world. Dedicated solely to data storage innovation for more than 40 years, Spectra Logic helps organizations modernize their IT infrastructures and protect and preserve their data with a broad portfolio of solutions that enable them to manage, migrate, store and preserve business data long-term, along with features to make them ransomware resilient, whether on-premises, in a single cloud, across multiple clouds, or in all locations at once.
WINDOWS SERVER MANAGEMENT, IT SYSTEMS MANAGEMENT, AZURE
GRC | December 05, 2022
GRC (Green Revolution Cooling®), the leader in immersion cooling for data centers, announced today that GRC’s CRO, Jim Weynand, will lead a discussion titled “Data Center Sustainability is a Team Sport,” which will highlight the benefits of data center immersion cooling during the Gartner IT Infrastructure, Operations & Cloud Strategies Conference 2022 at The Venetian Resort Las Vegas.
The 20-minute Exhibit Showcase will highlight how air-cooled data centers simply cannot meet the demands of today’s high-powered processors and high-density deployments. The session takes place at 1:15 pm on December 8.
Participants will learn about liquid immersion cooling solutions that meet the computing demands of today and tomorrow, and that help enterprises address the sustainability, energy use and cost of running a data center. The session will also focus on GRC’s partnerships with leading hardware providers and cite examples of comprehensive solutions, from facility design to server selection, that enable data center operators to transition from air cooling to liquid immersion cooling, operate in a more environmentally friendly way and address Environmental, Social and Governance (ESG) goals.
“Our relationships with leading hardware providers such as Dell and Intel enable our customers to seamlessly and quickly implement changes to their data centers. We are thrilled to share the stage at this Gartner conference to educate users on the sustainability and budget benefits of using liquid immersion cooling solutions to cool their data centers.”
Jim Weynand, CRO at GRC
GRC is The Immersion Cooling Authority®. The company's patented immersion-cooling technology radically simplifies deployment of data center cooling infrastructure. By eliminating the need for chillers, CRACs, air handlers, humidity controls, and other conventional cooling components, enterprises reduce their data center design, build, energy, and maintenance costs. GRC’s solutions are deployed in twenty-one countries and are ideal for next-gen application platforms, including artificial intelligence, blockchain, HPC, 5G, and other edge computing and core applications. GRC’s systems are environmentally resilient, sustainable, and space-saving, making it possible to deploy them in virtually any location with minimal lead time.