prnewswire | July 26, 2023
CoreWeave, a specialized cloud provider for large-scale GPU-accelerated workloads, today announced a new data center facility in Plano, Texas, to be fully operational by December 31, 2023. The $1.6 billion data center is CoreWeave's first facility in Texas and will support economic activity and job growth in the area.
"We are pleased to partner with Plano and the local community to open this cutting-edge data center and create new jobs," said CoreWeave CEO and Co-founder Michael Intrator. "The 450,000 square foot facility will help meet the unprecedented demand for high-performance cloud solutions for artificial intelligence, machine learning, pixel streaming and other emerging technologies that CoreWeave is uniquely positioned to deliver."
This news comes on the heels of continued growth for the company. Recently, CoreWeave announced the opening of a modern data center in New York City that provides ultra-low latency to over 20 million inhabitants across the metropolitan area. In April, CoreWeave announced a $221 million Series B round, followed by $200 million in Series B extension funding for a total of $421 million in capital raised for the round.
"With the demand for machine learning, AI and visual effects/rendering sharply rising, we are thrilled to partner with CoreWeave as the company invests in its first data center in Texas, capable of delivering high-performance computing solutions for such specialized needs," said Mayor of Plano, John B. Muns.
Founded in 2017, CoreWeave is a specialized cloud provider, delivering a massive scale of GPU compute resources on top of the industry's fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute-intensive use cases — machine learning and AI, VFX and rendering, batch processing and pixel streaming — that are up to 35 times faster and 80% less expensive than the large, generalized public clouds.
prnewswire | August 11, 2023
DataBank, a leading provider of enterprise-class edge colocation, interconnection, and managed services, announced today a new approach to building high-density data centers to accommodate High-Performance Computing (HPC). The Universal Data Hall Design (UDHD) gives businesses the flexibility to support whatever deployment their workloads require.
With the generative AI market expected to reach $103.74 billion by 2030, the technology's accelerated adoption has driven an increase in demand for high-density colocation. As technology continues to advance, data centers must be able to scale and adapt quickly to handle an increasingly diverse range of workloads – from power-dense HPC clusters to sprawling hyperscale cloud installations to traditional raised-floor, enterprise colocation.
"In order to future-proof their facilities, multi-tenant data center operators must rethink facility design, construction, and operations to allow for more flexibility and sustainability," said Eric Swartz, vice president of engineering at DataBank. "With UDHD, DataBank is able to accommodate hyperscale, traditional, and HPC all within the same, highly secure data hall."
The key elements of DataBank's next-generation data centers implementing a Universal Data Hall Design are the traditional components of data center colocation, engineered with an eye toward flexibility and resiliency:
Space - The base design starts with a slab floor and keeps all power and cooling infrastructure outside the data hall; raised-floor and water-to-rack layers can then easily be added to any hall. This layered approach allows any hall within the data center to be adjusted to customer needs.
Power - Support for distribution as traditional 120/208V or high-density 240/415V, delivered as whips or through busway, without changes to the supporting infrastructure.
Cooling - With a closed chilled-water loop and the layered design approach, each hall can independently support different cooling methods, from flooded-room to localized air delivery using raised floor, and even water to the rack to support rear-door heat exchangers and/or direct-to-chip cooling.
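The jump from 120/208V to 240/415V distribution is what makes the higher rack densities possible. A rough sketch of the arithmetic, using the standard three-phase power formula and the common 80% continuous-load derating — the specific breaker amperages below are illustrative assumptions, not figures from DataBank's announcement:

```python
import math

def circuit_kw(v_line_to_line: float, amps: float, derate: float = 0.8) -> float:
    """Usable power of a three-phase circuit in kW.

    Applies the customary 80% continuous-load derating; amperages
    chosen below are illustrative, not from the announcement.
    """
    return math.sqrt(3) * v_line_to_line * amps * derate / 1000

# Traditional 120/208V distribution: an assumed 30A three-phase whip
traditional = circuit_kw(208, 30)    # roughly 8.6 kW per circuit

# High-density 240/415V distribution: an assumed 60A three-phase busway tap
high_density = circuit_kw(415, 60)   # roughly 34.5 kW per circuit

print(f"120/208V 30A: {traditional:.1f} kW")
print(f"240/415V 60A: {high_density:.1f} kW")
```

Under these assumptions, a single high-density circuit carries roughly four times the power of a traditional one, which is why the higher voltage class can feed dense HPC racks without changing the upstream infrastructure.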
This design delivers the added benefit of sustainability, as efficient power and water systems reduce resource consumption.
"Universal Data Hall Design is crucial to innovation. As technology evolves, our data centers are able to evolve with it," said Joe Minarik, COO of DataBank. "While most of the industry is trying to navigate the here and now, we're already building the data centers of the future."
DataBank helps the world's largest enterprises, technology, and content providers ensure their data and applications are always on, always secure, always compliant, and ready to scale to meet the needs of the artificial intelligence era. Our edge colocation and infrastructure footprint consists of 65+ "HPC-ready" data centers in 27+ markets, 20 interconnection hubs, and on-ramps to an ecosystem of cloud providers with virtually unlimited reach. We combine these platforms with contract portability, managed security, compliance enablement, hands-on support, and a guarantee of 100% uptime availability, to give our customers absolute confidence in their IT infrastructure and the power to create a boundless digital future for their business.
Business Wire | September 21, 2023
Astera Labs, the global leader in semiconductor-based connectivity solutions for AI infrastructure, today announced that its Leo Memory Connectivity Platform enables data center servers with unprecedented performance for memory-intensive workloads. Leo is the industry’s first Compute Express Link™ (CXL™) memory controller that increases total server memory bandwidth by 50% while also decreasing latency by 25% when integrated with the forthcoming 5th Gen Intel® Xeon® Scalable Processor. Through new hardware-based interleaving of CXL-attached and CPU-native memory, Astera Labs and Intel eliminate the need for application-level software changes when augmenting server memory resources via CXL. Existing applications can effortlessly “plug-and-play” to take advantage of the highest possible memory bandwidth and capacity in the system.
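The bandwidth claim can be put in rough perspective with a back-of-envelope sketch of how interleaving aggregates CPU-native DDR5 channels with a CXL-attached link. All per-channel and per-lane figures below are generic DDR5/PCIe 5.0 assumptions for illustration, not numbers from Astera Labs or Intel; real gains depend on the platform and on how many CXL links are populated.

```python
# Back-of-envelope model of interleaving CPU-native DDR5 bandwidth
# with CXL-attached memory. All figures are illustrative assumptions.

DDR5_4800_CH_GBPS = 38.4              # 4800 MT/s x 8 bytes per channel
NATIVE_CHANNELS = 8                   # assumed per-socket channel count

PCIE5_LANE_GBPS = 3.94                # 32 GT/s per lane after 128b/130b encoding
CXL_X16_GBPS = PCIE5_LANE_GBPS * 16   # one x16 CXL link, per direction

native_bw = DDR5_4800_CH_GBPS * NATIVE_CHANNELS
total_bw = native_bw + CXL_X16_GBPS

print(f"native DDR5 bandwidth: {native_bw:.0f} GB/s")
print(f"with one x16 CXL link: {total_bw:.0f} GB/s "
      f"(+{100 * CXL_X16_GBPS / native_bw:.0f}% per link)")
```

Each additional x16 CXL link adds another increment on top of the native channels, which is how multiple CXL memory controllers can push aggregate bandwidth toward the 50% uplift the announcement cites.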
“The growth of computing cores and performance has historically outpaced memory throughput advancements, resulting in degraded server performance efficiency over time,” said Sanjay Gajendra, COO of Astera Labs. “This performance scaling challenge has led to the infamous ‘memory wall,’ and thanks to our collaboration with Intel, our Leo Memory Connectivity Platform breaks through this barrier by delivering on the promise of PCIe 5.0 and CXL memory.”
Nowhere are the memory wall's scaling limitations more evident than in AI servers, where memory bandwidth and capacity bottlenecks result in inefficient processor utilization. The CXL innovations delivered by Astera Labs and Intel directly address these bottlenecks and lay the foundation for cloud, hybrid-cloud and enterprise data centers to maximize accelerated computing performance.
Extending leadership of PCIe® 5.0 and CXL 2.0 solutions
Astera Labs has a history of delivering industry-first solutions that are critical to advancing the PCIe and CXL ecosystems. In addition to memory performance advancements with Leo, Astera Labs is also driving interoperability leadership with its Aries PCIe 5.0 / CXL 2.0 Smart Retimers on state-of-the-art Intel server platforms. As the most widely deployed and proven PCIe/CXL retimer family in the industry, Aries features a low-latency CXL mode that complements Leo to form the most robust CXL memory connectivity solution.
“We applaud Astera Labs for their contributions to the CXL ecosystem and are delighted to extend our ongoing collaboration. We believe Memory Connectivity Platforms containing innovations from companies like Astera Labs will help deliver enhanced performance on next generation Intel Xeon processors, and accelerate a myriad of memory intensive workloads,” said Zane Ball, Corporate Vice President and General Manager, Data Center Platform Engineering and Architecture Group, Intel.
Visit Astera Labs at Intel Innovation!
Astera Labs will showcase Leo and Aries together with Intel’s latest Xeon® Scalable processors at Booth #210, September 19-20 at the San Jose Convention Center. Talk to Astera Labs’ experts to learn more about industry benchmarks and how to optimize PCIe/CXL memory solutions in data center architectures for applications ranging from AI and real-time analytics to genomics and modeling.
About Astera Labs
Astera Labs, Inc. is a global leader in semiconductor-based connectivity solutions purpose-built to unleash the full potential of intelligent data infrastructure at cloud-scale. Its class-defining, first-to-market products based on PCIe, CXL, and Ethernet technologies deliver critical connectivity in accelerated computing platforms optimized for AI applications.