Windows Systems and Network
Business Wire | October 10, 2023
Zayo Group Holdings, Inc., a leading global communications infrastructure platform, today announced its latest series of infrastructure investments to extend global capacity to support rapidly increasing bandwidth demand—including deployment of 400G across the globe, long-haul capacity growth, and enhancements to its global IP infrastructure.
“Next-generation technology is being deployed at never-before-seen rates. This has placed the communications infrastructure industry at a unique inflection point as all digital businesses—enterprises, carriers and hyperscalers alike—scramble to ensure they have enough capacity to support these technologies,” said Bill Long, Chief Product Officer at Zayo. “As this trend plays out, it will be a strong tailwind for those providers who can capitalize on the moment. As one of the newest and most modern networks on the market, Zayo is uniquely positioned to support this growing demand for global bandwidth.”
Deploying 400G Globally
For today’s digital businesses, 400G is essential to ensure the speed and scalability to support increasingly complex and data-intensive applications.
Zayo recently completed upgrades of its European network to be fully 400G-enabled, with plans for its Tier-1 backbone in North America to be fully 400G-enabled by the end of 2024.
In Q3, Zayo added nine new 400G-enabled routes to its North American network to provide high-bandwidth options between key cities, including:
Atlanta to DC
Denver to Dallas (Direct)
Montreal to Quebec City
Clinton to Ponchatoula
Indianapolis to Columbus
Ashburn to Baltimore
Bend to Umatilla
Laurel to Denver
Additional 400G routes in progress include:
Houston to New Orleans
St. Louis to Chicago
Buffalo to Albany
Winnipeg to Toronto
Toronto to Montreal
Buffalo to Toronto
Columbus to Ashburn
Cleveland to Buffalo
Houston to Ponchatoula
Umatilla to Quincy
What this means: Enabling Zayo’s global network with 400G allows customers to continue scaling their bandwidth with Zayo on existing routes, opens up high-capacity access on new routes, improves network stability, and provides a better overall customer experience through quicker delivery and optimal routing. The enhanced capacity from these routes will support customers with exponential growth needs driven by emerging technologies such as 5G, cloud adoption, IoT, AI, edge computing, and automation.
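To put 400G in concrete terms, the back-of-the-envelope sketch below compares ideal transfer times for a bulk dataset at 100 Gb/s and 400 Gb/s. The 10 TB payload is a hypothetical example rather than a Zayo figure, and protocol overhead is ignored.

```python
# Back-of-the-envelope illustration (hypothetical payload, not a Zayo figure):
# ideal transfer time for a bulk dataset at 100 Gb/s versus 400 Gb/s,
# ignoring protocol overhead and assuming the link is the only bottleneck.

def transfer_seconds(payload_terabytes: float, line_rate_gbps: float) -> float:
    """Ideal transfer time in seconds for a payload over a link of the given line rate."""
    payload_bits = payload_terabytes * 8 * 1e12   # TB -> bits (decimal units)
    return payload_bits / (line_rate_gbps * 1e9)  # bits / (bits per second)

for rate_gbps in (100, 400):
    t = transfer_seconds(10, rate_gbps)  # hypothetical 10 TB dataset
    print(f"{rate_gbps} Gb/s: {t:.0f} s (~{t / 60:.1f} min)")
```

At 100 Gb/s the hypothetical 10 TB transfer takes roughly 13 minutes; at 400 Gb/s it drops to about 3 minutes.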
Expanding Global Low-Latency Network
Zayo has also been working to expand capacity in other key economic centers across the globe. In October 2022, Zayo announced its global low-latency route connecting the U.S. to South America’s financial hub of São Paulo. In Q3 2023, the company completed expansions to its connectivity infrastructure in São Paulo, including a new key terrestrial route that will provide connectivity throughout the metro ring and to four key data centers.
New São Paulo Points of Presence (PoPs):
Alameda Araguaia, 3641, Alphaville, Barueri, SP, 06455 000, Brazil
Av. Marcos Penteado de Ulhoa Rodrigues, 249, Santana de Parnaiba, SP, 06543 001, Brazil
Avenida Ceci, 1900, Tambore, Barueri, SP, 06460 120, Brazil
Rua Ricardo Prudente de Aquino, 85, Santana de Parnaiba, SP, Brazil
What this means: As Latin America’s center of innovation and commerce, São Paulo has seen an increased demand for connectivity from the U.S. To meet the growing needs of customers, Zayo is establishing diverse, high-bandwidth connectivity from its first-class North American fiber network directly into the largest economic center in the Southern Hemisphere.
IP Infrastructure Growth
IP demand continues to be a driver for capacity increases. Zayo continues to bolster its IP infrastructure with new PoPs in key markets and data centers across the globe. Zayo added eight new IP PoPs to its North American network in Q3, including:
45 Parliament St, Toronto, ON
250 Williams St NW, Atlanta, GA
6477 W Wells Park Rd, Salt Lake City, UT
2335 S Ellis St, Chandler, AZ
375 Pearl St, New York, NY
626 Wilshire Blvd, Los Angeles, CA
431 Horner Ave, Etobicoke, ON
1100 White St SW, Atlanta, GA
Zayo's IP backbone, which runs on its wholly owned fiber infrastructure, makes up nearly 10% of the world's Internet capacity. Zayo currently manages 96 Tb of core capacity and 34 Tb of peering capacity, and adds 1-2 Tb of peering capacity every quarter.
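As a rough illustration of the growth figures quoted above, the sketch below extrapolates the stated 34 Tb of peering capacity forward at the stated 1-2 Tb per quarter. The projection is purely linear and illustrative, not a company forecast.

```python
# Illustrative linear extrapolation of the peering-capacity figures quoted
# above (34 Tb today, 1-2 Tb added per quarter). Not a company forecast.

PEERING_TB = 34            # current peering capacity, per the release
ADD_PER_QUARTER = (1, 2)   # low and high end of the stated quarterly additions

for quarters in (4, 8):    # one and two years out
    low = PEERING_TB + ADD_PER_QUARTER[0] * quarters
    high = PEERING_TB + ADD_PER_QUARTER[1] * quarters
    print(f"after {quarters} quarters: {low}-{high} Tb of peering capacity")
```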
Upgrading Long-Haul Capacity
As one of the few providers actively investing in its long-haul infrastructure, Zayo is continuing to overbuild its routes in high-demand areas to enable enhanced fiber capacity. In Q3 2023, Zayo completed the overbuild of its Omaha to Denver route, providing increased capacity on this highly sought-after route.
Zayo also has three new long-haul route builds and two additional route overbuilds in progress with scheduled completion by the end of 2023.
What this means: The enhancements to Zayo’s long-haul dark fiber routes provide customers with diverse routing options and the ability to customize and enhance their networks to meet the unique needs of their businesses while maximizing resiliency and scalability.
Zayo will continue to invest in future-proofing its network and services to connect what’s next for its customers.
About Zayo
For more than 15 years, Zayo has empowered some of the world’s largest and most innovative companies to connect what’s next for their business. Zayo’s future-ready network spans over 17 million fiber miles and 142,000 route miles. Zayo’s tailored connectivity and edge solutions enable carriers, cloud providers, data centers, schools, and enterprises to deliver exceptional experiences, from edge to core to cloud. Discover how Zayo connects what’s next at www.zayo.com and follow us on LinkedIn and Twitter.
Data Storage
Business Wire | September 21, 2023
Astera Labs, the global leader in semiconductor-based connectivity solutions for AI infrastructure, today announced that its Leo Memory Connectivity Platform delivers unprecedented performance to data center servers for memory-intensive workloads. Leo is the industry’s first Compute Express Link™ (CXL™) memory controller that increases total server memory bandwidth by 50% while also decreasing latency by 25% when integrated with the forthcoming 5th Gen Intel® Xeon® Scalable Processor. Through new hardware-based interleaving of CXL-attached and CPU-native memory, Astera Labs and Intel eliminate the need for application-level software changes to augment server memory resources via CXL. Existing applications can effortlessly “plug-and-play” to take advantage of the highest possible memory bandwidth and capacity in the system.
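As a rough way to see what a 50% bandwidth uplift can mean for a purely memory-bandwidth-bound workload, the sketch below applies that headline figure from the release to a hypothetical kernel. The baseline bandwidth and data volume are assumptions for illustration, and this is not Astera Labs’ or Intel’s benchmarking methodology.

```python
# Minimal sketch: effect of a 50% memory-bandwidth uplift (the figure quoted
# in the release) on a purely bandwidth-bound kernel. The baseline bandwidth
# and data volume below are assumptions for illustration only.

BASELINE_BW_GBPS = 300.0   # assumed native CPU memory bandwidth, GB/s
CXL_UPLIFT = 0.50          # +50% total bandwidth with CXL-attached memory
DATA_MOVED_GB = 6_000.0    # assumed data moved by the kernel, in GB

baseline_s = DATA_MOVED_GB / BASELINE_BW_GBPS
with_cxl_s = DATA_MOVED_GB / (BASELINE_BW_GBPS * (1 + CXL_UPLIFT))
speedup_pct = (1 - with_cxl_s / baseline_s) * 100

print(f"baseline: {baseline_s:.1f} s, with CXL interleaving: {with_cxl_s:.1f} s "
      f"({speedup_pct:.0f}% shorter runtime)")
```

Under these assumptions the kernel's runtime drops by about a third; real workloads will see less benefit to the extent they are compute-bound or latency-sensitive.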
“The growth of computing cores and performance has historically outpaced memory throughput advancements, resulting in degraded server performance efficiency over time,” said Sanjay Gajendra, COO of Astera Labs. “This performance scaling challenge has led to the infamous ‘memory wall,’ and thanks to our collaboration with Intel, our Leo Memory Connectivity Platform breaks through this barrier by delivering on the promise of PCIe 5.0 and CXL memory.”
Data center infrastructure scaling limitations caused by the memory wall are nowhere more evident than in AI servers, where memory bandwidth and capacity bottlenecks result in inefficient processor utilization. The CXL innovations delivered by Astera Labs and Intel directly address these bottlenecks and lay the foundation for cloud, hybrid-cloud and enterprise data centers to maximize accelerated computing performance.
Extending leadership of PCIe® 5.0 and CXL 2.0 solutions
Astera Labs has a history of delivering industry-first solutions that are critical to advancing the PCIe and CXL ecosystems. In addition to memory performance advancements with Leo, Astera Labs is also driving interoperability leadership with its Aries PCIe 5.0 / CXL 2.0 Smart Retimers on state-of-the-art Intel server platforms. As the most widely deployed and proven PCIe/CXL retimer family in the industry, Aries features a low-latency CXL mode that complements Leo to form the most robust CXL memory connectivity solution.
“We applaud Astera Labs for their contributions to the CXL ecosystem and are delighted to extend our ongoing collaboration. We believe Memory Connectivity Platforms containing innovations from companies like Astera Labs will help deliver enhanced performance on next generation Intel Xeon processors, and accelerate a myriad of memory intensive workloads,” said Zane Ball, Corporate Vice President and General Manager, Data Center Platform Engineering and Architecture Group, Intel.
Visit Astera Labs at Intel Innovation!
Astera Labs will showcase Leo and Aries together with Intel’s latest Xeon® Scalable processors at Booth #210, September 19-20 at the San Jose Convention Center. Talk to Astera Labs’ experts to learn more about industry benchmarks and how to optimize PCIe/CXL memory solutions in data center architectures to deliver optimized performance for applications ranging from AI and real-time analytics to genomics and modeling.
About Astera Labs
Astera Labs, Inc. is a global leader in semiconductor-based connectivity solutions purpose-built to unleash the full potential of intelligent data infrastructure at cloud-scale. Its class-defining, first-to-market products based on PCIe, CXL, and Ethernet technologies deliver critical connectivity in accelerated computing platforms optimized for AI applications.
Application Infrastructure
Business Wire | September 28, 2023
Penguin Solutions™, an SGH™ brand (Nasdaq: SGH) that designs, builds, deploys, and manages AI and accelerated computing infrastructures at scale, today announced that it has been certified by NVIDIA to support enterprises deploying NVIDIA DGX™ AI computing platforms under the NVIDIA DGX-Ready Managed Services program.
NVIDIA DGX systems are an advanced supercomputing platform for large-scale AI development. The NVIDIA DGX-Ready Managed Services program gives customers the option to outsource management of DGX systems deployed in corporate data centers, including the implementation and monitoring of server, storage, and networking resources required to support DGX platforms.
“Generative AI requires a completely new computing infrastructure compared to traditional IT,” said Troy Kaster, vice president, commercial sales at Penguin Solutions. “These new computing infrastructures require services skills, which Penguin is uniquely qualified to support given our extensive experience partnering with some of the largest companies in AI. As a full-service integration and services provider, Penguin has the capabilities to design at scale, deploy at speed, and provide managed services for NVIDIA DGX SuperPOD solutions.”
Penguin has designed, built, deployed, and managed some of the largest AI training clusters in the world. Penguin currently manages over 50,000 NVIDIA GPUs for Fortune 100 customers including Meta’s AI Research SuperCluster – with 2,000 NVIDIA DGX systems and 16,000 NVIDIA A100 Tensor Core GPUs – one of the most powerful AI training clusters in the world.
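For context on the cluster figures quoted above, a DGX A100 system houses eight A100 GPUs, so the quick check below shows how 2,000 DGX systems account for the 16,000 GPUs cited.

```python
# Quick consistency check on the cluster figures quoted above: a DGX A100
# system houses eight A100 GPUs, so 2,000 systems account for 16,000 GPUs.

DGX_SYSTEMS = 2_000
GPUS_PER_DGX_A100 = 8  # standard DGX A100 configuration

print(f"{DGX_SYSTEMS} systems x {GPUS_PER_DGX_A100} GPUs = "
      f"{DGX_SYSTEMS * GPUS_PER_DGX_A100} GPUs")
```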
“AI is transforming organizations around the world, and many businesses are looking to deploy the technology without the complexities of managing infrastructure,” said Tony Paikeday, senior director, DGX platform at NVIDIA. “With DGX-Ready Managed Services offered by Penguin Solutions, our customers can deploy the world’s leading platform for enterprise AI development with a simplified operations model that lets them tap into the leadership-class performance of DGX and focus on innovating with AI.”
Advantages of Penguin Solutions powered by NVIDIA DGX include:
Design large-scale AI infrastructure combining the most recent DGX systems, ultra-high speed networking solutions, and cutting-edge storage options for clusters tailored to customer requirements
Manage AI infrastructure across multiple layers of modern hardware and software, such as acceleration libraries, job scheduling, and orchestration
Reduce risk associated with investments in computing infrastructure
Optimize efficiency of AI infrastructure with best-in-class return on investment.
About Penguin Solutions
The Penguin Solutions™ portfolio, which includes Penguin Computing™, accelerates customers’ digital transformation with the power of emerging technologies in HPC, AI, and IoT through solutions and services that span the continuum of edge, core, and cloud. By designing highly advanced infrastructure, machines, and networked systems, we enable the world’s most innovative enterprises and government institutions to build the autonomous future, drive discovery, and amplify human potential.