GlobeNewswire | October 25, 2023
SoftIron, the worldwide leader in private cloud infrastructure, today announced it has been named as a Sample Vendor for the “Gartner Hype Cycle for Edge Computing, 2023.”
The Gartner Hype Cycle provides a view of how a technology or application will evolve over time, providing a sound source of insight to manage its deployment within the context of your specific business goals. The five phases of a Hype Cycle are the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity.
SoftIron is recognized in the Gartner report as a Sample Vendor for Edge Storage. The report defines edge storage as technologies that enable the creation, analysis, processing, and delivery of data services at, or close to, the location where the data is generated or consumed, rather than in a centralized environment. Gartner notes that infrastructure and operations (I&O) leaders are beginning to lay out a strategy for how they intend to manage data at the edge. Although I&O leaders embrace infrastructure as a service (IaaS) cloud providers, they also realize that a significant part of their infrastructure services will remain on-premises and will require edge storage data services.
Gartner Hype Cycles provide a graphic representation of the maturity and adoption of technologies and applications, and of how they are potentially relevant to solving real business problems and exploiting new opportunities. The latest Gartner Hype Cycle analyzed 31 emerging technologies and included a Priority Matrix that offers perspective on which edge computing innovations will have the greater impact, and which might take longer to fully mature.
“We are excited to be recognized in the 2023 Gartner Hype Cycle for Edge Computing,” said Jason Van der Schyff, COO at SoftIron. “We believe SoftIron is well positioned to help our customers address and take advantage of the latest trends and developments in Edge Computing as reported in Gartner’s Hype Cycle.”
Business Wire | September 21, 2023
Astera Labs, the global leader in semiconductor-based connectivity solutions for AI infrastructure, today announced that its Leo Memory Connectivity Platform delivers unprecedented performance for memory-intensive workloads in data center servers. Leo is the industry’s first Compute Express Link™ (CXL™) memory controller, increasing total server memory bandwidth by 50% while also decreasing latency by 25% when integrated with the forthcoming 5th Gen Intel® Xeon® Scalable Processor. Through new hardware-based interleaving of CXL-attached and CPU-native memory, Astera Labs and Intel eliminate the need for application-level software changes when augmenting server memory resources via CXL. Existing applications can effortlessly “plug-and-play” to take advantage of the highest possible memory bandwidth and capacity in the system.
“The growth of computing cores and performance has historically outpaced memory throughput advancements, resulting in degraded server performance efficiency over time,” said Sanjay Gajendra, COO of Astera Labs. “This performance scaling challenge has led to the infamous ‘memory wall,’ and thanks to our collaboration with Intel, our Leo Memory Connectivity Platform breaks through this barrier by delivering on the promise of PCIe 5.0 and CXL memory.”
Nowhere are the memory wall’s scaling limitations more evident than in AI servers, where memory bandwidth and capacity bottlenecks result in inefficient processor utilization. The CXL innovations delivered by Astera Labs and Intel directly address these bottlenecks and lay the foundation for cloud, hybrid-cloud, and enterprise data centers to maximize accelerated computing performance.
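The intuition behind a bandwidth uplift like the one described above can be sketched with simple arithmetic: CXL-attached memory adds PCIe-link bandwidth on top of the CPU's native DDR channels. All figures below are illustrative assumptions for demonstration only, not Astera Labs or Intel specifications.

```python
# Illustrative sketch: how CXL-attached memory can raise total server
# memory bandwidth. All numbers are hypothetical assumptions.

DDR5_CHANNEL_GBPS = 38.4   # assumed per-channel DDR5 bandwidth (GB/s)
NATIVE_CHANNELS = 8        # assumed CPU-native DDR5 channels
PCIE5_X16_GBPS = 64.0      # approx. one-direction PCIe 5.0 x16 bandwidth
CXL_LINKS = 2              # assumed CXL memory-expansion links

native_bw = DDR5_CHANNEL_GBPS * NATIVE_CHANNELS   # CPU-native bandwidth
cxl_bw = PCIE5_X16_GBPS * CXL_LINKS               # added via CXL links
total_bw = native_bw + cxl_bw
uplift = cxl_bw / native_bw                       # fractional increase

print(f"Native memory bandwidth: {native_bw:.1f} GB/s")
print(f"CXL-attached bandwidth:  {cxl_bw:.1f} GB/s")
print(f"Total: {total_bw:.1f} GB/s (+{uplift:.0%})")
```

With these assumed inputs the uplift works out to roughly 40%; the exact figure a real system achieves depends on channel counts, link widths, and interleaving efficiency, which is why hardware-based interleaving across both memory pools matters.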
Extending leadership of PCIe® 5.0 and CXL 2.0 solutions
Astera Labs has a history of delivering industry-first solutions that are critical to advancing the PCIe and CXL ecosystems. In addition to memory performance advancements with Leo, Astera Labs is also driving interoperability leadership with its Aries PCIe 5.0 / CXL 2.0 Smart Retimers on state-of-the-art Intel server platforms. As the most widely deployed and proven PCIe/CXL retimer family in the industry, Aries features a low-latency CXL mode that complements Leo to form the most robust CXL memory connectivity solution.
“We applaud Astera Labs for their contributions to the CXL ecosystem and are delighted to extend our ongoing collaboration. We believe Memory Connectivity Platforms containing innovations from companies like Astera Labs will help deliver enhanced performance on next generation Intel Xeon processors, and accelerate a myriad of memory intensive workloads,” said Zane Ball, Corporate Vice President and General Manager, Data Center Platform Engineering and Architecture Group, Intel.
Visit Astera Labs at Intel Innovation!
Astera Labs will showcase Leo and Aries together with Intel’s latest Xeon® Scalable processors at Booth #210, September 19-20 at the San Jose Convention Center. Talk to Astera Labs’ experts to learn more about industry benchmarks and how to optimize PCIe/CXL memory solutions in data center architectures for applications ranging from AI and real-time analytics to genomics and modeling.
About Astera Labs
Astera Labs, Inc. is a global leader in semiconductor-based connectivity solutions purpose-built to unleash the full potential of intelligent data infrastructure at cloud-scale. Its class-defining, first-to-market products based on PCIe, CXL, and Ethernet technologies deliver critical connectivity in accelerated computing platforms optimized for AI applications.
Business Wire | September 26, 2023
Cloudflare, Inc. (NYSE: NET), the security, performance, and reliability company helping to build a better Internet, today shared a new independent report published by Analysys Mason that shows switching enterprise network services from on-premises devices to Cloudflare’s cloud-based services can cut related carbon emissions by up to 78% for very large businesses and up to 96% for small businesses. The report is one of the first of its kind to calculate potential emissions savings achieved by replacing enterprise network and security hardware boxes with more efficient cloud services.
Global Internet usage accounts for 3.7% of global CO2 emissions, about equal to the CO2 emissions of all air traffic around the world. The Internet needs to reduce its overall energy consumption, especially as regulators continue to implement the Paris Climate Accord, including plans to transition to a zero-emissions economy. The European Climate Law requires that Europe’s economy and society become climate-neutral by 2050, with a target of reducing net GHG emissions by at least 55% by 2030, compared to 1990 levels. Regulators in the United States and the European Union, among others, have also announced plans to require companies to disclose climate-related information, including carbon emissions resulting from their operations and supply chains, as well as climate-related risks and opportunities. Finally, among the Fortune Global 500, 63% of companies now set 2050 targets for emissions reductions. Companies large and small will increasingly be looking to reduce carbon throughout their supply chains, particularly their IT infrastructure.
“The best way to reduce your IT infrastructure’s carbon footprint is easy: move to the cloud,” said Matthew Prince, CEO and co-founder, Cloudflare. “At Cloudflare, we’ve built one of the world’s most efficient networks, getting the most out of every watt of energy and every one of our servers. That’s why, with Cloudflare, companies can help hit their sustainability goals without sacrificing security, speed, performance, or innovation.”
The Analysys Mason study found that switching enterprise network services from on-premises devices to Cloudflare services can cut related carbon emissions by up to 96%, depending on the current network footprint. The greatest reduction comes from consolidating services, which improves carbon efficiency by increasing the utilization of servers that provide multiple network functions. On-premises devices are designed to host multiple workloads and consume power constantly, but are only used for part of the day and part of the week. Cloud infrastructure is shared by millions of customers, often all over the world. As a result, cloud providers are able to achieve economies of scale that result in less downtime, less waste, and lower emissions. Furthermore, the Analysys Mason study found additional gains due to the high Power Usage Effectiveness (PUE) of cloud data centers and differences in the carbon intensity of electricity generation in the local grid.
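The shape of such a comparison can be sketched as annual energy use (device power × hours × facility PUE) multiplied by grid carbon intensity, with the cloud case attributing only a small utilization share of a multi-tenant server to the workload. The inputs below are hypothetical assumptions chosen for illustration, not Analysys Mason's actual model or data.

```python
# Illustrative sketch of an on-prem vs. cloud emissions comparison.
# All inputs are hypothetical assumptions, not the study's real model.

HOURS_PER_YEAR = 8760
GRID_KG_CO2_PER_KWH = 0.4   # assumed grid carbon intensity

def annual_emissions_kg(watts, pue, share=1.0):
    """CO2 for a device drawing `watts` in a facility with the given PUE,
    attributing `share` of the device's use to this workload."""
    kwh = watts / 1000 * HOURS_PER_YEAR * pue * share
    return kwh * GRID_KG_CO2_PER_KWH

# On-prem: dedicated 200 W appliance running 24/7, office-grade PUE,
# fully attributed to this workload even when idle.
onprem = annual_emissions_kg(watts=200, pue=2.0)

# Cloud: same functions served as a 5% slice of a 400 W shared server
# in an efficient data center.
cloud = annual_emissions_kg(watts=400, pue=1.2, share=0.05)

savings = 1 - cloud / onprem
print(f"On-prem: {onprem:.0f} kg CO2/yr, cloud share: {cloud:.0f} kg CO2/yr")
print(f"Reduction: {savings:.0%}")
```

With these assumed numbers the reduction lands around 94%, inside the study's reported 78-96% range; the real savings hinge on how heavily the on-premises hardware is utilized and how efficient the replacing cloud facility is.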
“Happy Cog is a full-service digital agency that designs, builds, and markets experiences that engage our clients and their audiences. We’ve relied on Cloudflare for many of those websites and apps because it's secure, reliable, fast, and affordable – but also aligns with many of our clients’ sustainability roadmaps and goals,” said Matt Weinberg, Co-Founder and President of Technology at Happy Cog. “Switching our clients from their previous on premises or other constant-usage infrastructure to Cloudflare's network and services has let them be greener, more efficient, and more cost effective. It's ideal when you can offer your clients a solution that covers all their needs and provides a delightful experience now, without having to compromise on their longer term priorities.”
Cloudflare, Inc. (www.cloudflare.com / @cloudflare) is on a mission to help build a better Internet. Cloudflare’s suite of products protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have all web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was recognized by Reuters Events for Global Responsible Business in 2020, named to Fast Company's Most Innovative Companies in 2021, and ranked among Newsweek's Top 100 Most Loved Workplaces in 2022.
Business Wire | September 28, 2023
Penguin Solutions™, an SGH™ brand (Nasdaq: SGH) that designs, builds, deploys, and manages AI and accelerated computing infrastructures at scale, today announced that it has been certified by NVIDIA to support enterprises deploying NVIDIA DGX™ AI computing platforms under the NVIDIA DGX-Ready Managed Services program.
NVIDIA DGX systems are an advanced supercomputing platform for large-scale AI development. The NVIDIA DGX-Ready Managed Services program gives customers the option to outsource management of DGX systems deployed in corporate data centers, including the implementation and monitoring of server, storage, and networking resources required to support DGX platforms.
“Generative AI requires a completely new computing infrastructure compared to traditional IT,” said Troy Kaster, vice president, commercial sales at Penguin Solutions. “These new computing infrastructures require services skills, which Penguin is uniquely qualified to support given our extensive experience partnering with some of the largest companies in AI. As a full-service integration and services provider, Penguin has the capabilities to design at scale, deploy at speed, and provide managed services for NVIDIA DGX SuperPOD solutions.”
Penguin has designed, built, deployed, and managed some of the largest AI training clusters in the world. Penguin currently manages over 50,000 NVIDIA GPUs for Fortune 100 customers, including Meta’s AI Research SuperCluster, one of the most powerful AI training clusters in the world, with 2,000 NVIDIA DGX systems and 16,000 NVIDIA A100 Tensor Core GPUs.
“AI is transforming organizations around the world, and many businesses are looking to deploy the technology without the complexities of managing infrastructure,” said Tony Paikeday, senior director, DGX platform at NVIDIA. “With DGX-Ready Managed Services offered by Penguin Solutions, our customers can deploy the world’s leading platform for enterprise AI development with a simplified operations model that lets them tap into the leadership-class performance of DGX and focus on innovating with AI.”
Advantages of Penguin Solutions powered by NVIDIA DGX include:
Design large-scale AI infrastructure combining the most recent DGX systems, ultra-high speed networking solutions, and cutting-edge storage options for clusters tailored to customer requirements
Manage AI infrastructure making the most of multiple layers of recent hardware and software, such as acceleration libraries, job scheduling and orchestration
Reduce risk associated with investments in computing infrastructure
Optimize efficiency of AI infrastructure with best-in-class return on investment
About Penguin Solutions
The Penguin Solutions™ portfolio, which includes Penguin Computing™, accelerates customers’ digital transformation with the power of emerging technologies in HPC, AI, and IoT, delivering solutions and services that span the continuum of edge, core, and cloud. By designing highly advanced infrastructure, machines, and networked systems, we enable the world’s most innovative enterprises and government institutions to build the autonomous future, drive discovery, and amplify human potential.