Application Infrastructure

Ori and KX Deliver Real-Time Streaming Analytics with Unprecedented Performance Using Telco Edge

Ori Industries, the global edge computing infrastructure firm, has partnered with KX, a worldwide leader in real-time streaming analytics, to provide an ultra-low latency streaming analytics solution for edge devices, leveraging Tier 1 telco networks to get closer to application users.

The integration of Ori's Global Edge (OGE) platform and KX's market-leading streaming analytics software enables real-time in-stream analytics at the edge, significantly increasing the speed at which data can be analysed while also reducing the amount of data that needs to be sent to the cloud for processing.
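The data-reduction idea described above can be sketched in a few lines: analyse events in-stream at the edge and forward only windowed summaries upstream instead of every raw event. This is a minimal illustrative sketch only, not the Ori/KX implementation (which is built on KX's kdb+-based platform); all names here are hypothetical.

```python
from collections import deque
from statistics import mean

WINDOW = 10  # raw readings per forwarded summary (illustrative value)

def summarise_stream(readings, window=WINDOW):
    """Yield one (min, mean, max) summary per `window` raw readings,
    so only summaries need to be sent upstream to the cloud."""
    buf = deque(maxlen=window)
    for r in readings:
        buf.append(r)
        if len(buf) == window:
            yield (min(buf), mean(buf), max(buf))
            buf.clear()

raw = list(range(100))                     # 100 raw events arriving at the edge
summaries = list(summarise_stream(raw))    # 10 compact summaries leave the edge
print(len(raw), "->", len(summaries))
```

A real deployment would use time-based windows and richer aggregates, but the principle is the same: the edge node does the analysis and the cloud receives a fraction of the original volume.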

For organisations that run a distributed computing environment but are unable, or not yet willing, to migrate their systems fully to the cloud, this approach delivers the management and scalability benefits of cloud computing without the associated risks around latency, reliability, security and cost. The combined solution enables automated scaling via Kubernetes and self-healing of workloads, with a standardised set of APIs for ease of use.
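In generic Kubernetes terms, the automated scaling and self-healing mentioned above typically come from a Deployment (failed pods are replaced automatically) paired with a HorizontalPodAutoscaler (replica count tracks load). The manifest below is an illustrative sketch with hypothetical names, not Ori's actual configuration:

```yaml
# Illustrative only: generic Kubernetes self-healing + autoscaling.
# All names and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-analytics
  template:
    metadata:
      labels:
        app: edge-analytics
    spec:
      containers:
        - name: analytics
          image: example.org/edge-analytics:latest  # placeholder image
          livenessProbe:            # failed probes trigger a restart (self-healing)
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: edge-analytics
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: edge-analytics
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```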

The ability to push events in real time to downstream systems is ideally suited to use cases where large-scale IoT or Mobile Edge Compute (MEC) architectures run across multiple infrastructures, environments and geographies, such as autonomous vehicles, predictive maintenance, smart cities and mobile gaming, as well as other high-performance, low-latency, secure and private enterprise applications for industrial and mission-critical infrastructure.

Andy Corston-Petrie, BT's Senior Manager, Mobile Core Networks Research, said: "Enterprises rely increasingly on latency-sensitive solutions that run on premise or in the cloud. With a private edge offering, we can unlock the edge for our enterprise customers who require secure, rapid decision making in highly regulated or closed environments."

Tier 1 telcos like BT are assessing the potential of edge computing, identifying the opportunities, trade-offs and requirements that will unlock different business models at the edge as they look to offer the benefits of cloud with the reliability and performance of edge for their enterprise customers.

Corston-Petrie added: "In Ori Industries and KX, we see partners who can deliver on that edge potential. Together, we can enable enterprise customers to leverage on-site data and analyse it at the source, reducing the time needed for critical decision making."

Douglas Mancini, Chief Commercial Officer at Ori Industries, said: "In pairing one of the industry's fastest streaming analytics solutions with our low latency delivery stack, we're enabling telcos to leverage their networks like never before, opening up the true promise of 5G."

Paul Hollway, Head of Partnerships at KX said: "We know from our own research that reducing the decision-making window using real-time data analytics is becoming a critical requirement for more and more organisations across all industry sectors. We're excited to be working with Ori to deliver this capability through its platform."

About Ori Industries

Ori Industries is building the world's largest Distributed Edge Cloud that empowers telcos to leverage their network advantages and helps applications run faster. Ori's platforms seamlessly connect developers and enterprises with the highest performing local, regional and global communication networks, delivering unprecedented cost, performance and security at the Edge.

About KX

KX, the leading technology for real-time continuous intelligence, is part of First Derivatives plc, a group of data-driven businesses that unlock the value of insight, hindsight and foresight to drive organisations forward. KX Streaming Analytics, built on the kdb+ time-series database, is an industry-leading high-performance, in-memory computing, streaming analytics and operational intelligence platform. It delivers the best possible performance and flexibility for high-volume, data-intensive analytics and applications across multiple industries. The Group operates from 15 offices across Europe, North America and Asia Pacific and employs more than 2,500 people worldwide.


Related News

Windows Systems and Network

Zayo Bolsters Global Network Infrastructure, Increases Capacity to Meet Rapid Bandwidth Demand

Business Wire | October 10, 2023

Zayo Group Holdings, Inc., a leading global communications infrastructure platform, today announced its latest series of infrastructure investments to extend global capacity to support rapidly increasing bandwidth demand, including deployment of 400G across the globe, long-haul capacity growth, and enhancements to its global IP infrastructure.

"Next-generation technology is being deployed at never-before-seen rates. This has placed the communications infrastructure industry at a unique inflection point as all digital businesses, enterprises, carriers and hyperscalers alike, scramble to ensure they have enough capacity to support these technologies," said Bill Long, Chief Product Officer at Zayo. "As this trend plays out, it will be a strong tailwind for those providers who can capitalize on the moment. As one of the newest and most modern networks on the market, Zayo is uniquely positioned to support this growing demand for global bandwidth."

Deploying 400G Globally

For today's digital businesses, 400G is essential to ensure the speed and scalability to support increasingly complex and data-intensive applications. Zayo recently completed upgrades of its European network to be fully 400G-enabled, with plans for its Tier-1 backbone in North America to be fully 400G-enabled by the end of 2024. In Q3, Zayo added nine new 400G-enabled routes to its North American network to provide high-bandwidth options between key cities, including:

- Atlanta to DC
- Denver to Dallas (Direct)
- Montreal to Quebec City
- Clinton to Ponchatoula
- Indianapolis to Columbus
- Ashburn to Baltimore
- Bend to Umatilla
- Laurel to Denver

Additional 400G routes in progress include:

- Houston to New Orleans
- St. Louis to Chicago
- Buffalo to Albany
- Winnipeg to Toronto
- Toronto to Montreal
- Buffalo to Toronto
- Columbus to Ashburn
- Cleveland to Buffalo
- Houston to Ponchatoula
- Umatilla to Quincy

What this means: The enablement of Zayo's global network with 400G will allow customers to continue scaling their bandwidth with Zayo on existing routes, open up high-capacity access on new routes, improve network stability and provide an overall better customer experience through quicker delivery and optimal routing. The enhanced capacity from these routes will support customers with exponential growth needs driven by emerging technologies such as 5G, cloud adoption, IoT, AI, edge computing, and automation.

Expanding Global Low-Latency Network

Zayo has also been working to expand capacity in other key economic centers across the globe. In October 2022, Zayo announced its global low-latency route connecting the U.S. to South America's financial hub of Sao Paulo. In Q3 2023, the company completed expansions to its connectivity infrastructure in Sao Paulo, including a new key terrestrial route that will provide connectivity throughout the metro ring and to four key data centers.

New Sao Paulo Points of Presence (PoPs):

- Alameda Araguaia, 3641, Alphaville, Barueri, SP, 06455 000, Brazil
- Av. Marcos Penteado de Ulhoa Rodrigues, 249, Santana de Parnaiba, SP, 06543 001, Brazil
- Avenida Ceci, 1900, Tambore, Barueri, SP, 06460 120, Brazil
- Rua Ricardo Prudente de Aquino, 85, Santana de Parnaiba, Brazil

What this means: As Latin America's center of innovation and commerce, São Paulo has seen increased demand for connectivity from the U.S. To meet the growing needs of customers, Zayo is establishing diverse, high-bandwidth connectivity from its first-class North American fiber network directly into the largest economic center in the Southern Hemisphere.

IP Infrastructure Growth

IP demand continues to be a driver for capacity increases. Zayo continues to bolster its IP infrastructure with new PoPs in key markets and data centers across the globe. Zayo added eight new IP PoPs to its North American network in Q3, including:

- 45 Parliament St, Toronto, ON
- 250 Williams St NW, Atlanta, GA
- 6477 W Wells Park Rd, Salt Lake City, UT
- 2335 S Ellis St, Chandler, AZ
- 375 Pearl St, New York, NY
- 626 Wilshire Blvd, Los Angeles, CA
- 431 Horner Ave, Etobicoke, ON
- 1100 White St SW, Atlanta, GA

Zayo's IP backbone, which runs on Zayo's wholly owned fiber infrastructure, makes up nearly 10% of the world's Internet capacity. Zayo currently manages 96Tb of core capacity and 34Tb of peering capacity, and adds 1-2Tb of peering capacity every quarter.

Upgrading Long-Haul Capacity

As one of the only providers actively investing in its long-haul infrastructure, Zayo is continuing to overbuild its routes in high-demand areas to enable enhanced fiber capacity. In Q3 2023, Zayo completed the overbuild of its Omaha to Denver route, providing increased capacity on this highly sought-after route. Zayo also has three new long-haul route builds and two additional route overbuilds in progress, with scheduled completion by the end of 2023.

What this means: The enhancements to Zayo's long-haul dark fiber routes provide customers with diverse routing options and the ability to customize and enhance their networks to meet the unique needs of their businesses while maximizing resiliency and the ability to scale. Zayo will continue to invest in future-proofing its network and services to connect what's next for its customers.

About Zayo

For more than 15 years, Zayo has empowered some of the world's largest and most innovative companies to connect what's next for their business. Zayo's future-ready network spans over 17 million fiber miles and 142,000 route miles. Zayo's tailored connectivity and edge solutions enable carriers, cloud providers, data centers, schools, and enterprises to deliver exceptional experiences, from edge to core to cloud.
Discover how Zayo connects what’s next at www.zayo.com and follow us on LinkedIn and Twitter.


Data Storage

Astera Labs First to Break Through the Memory Wall with Industry’s Highest Performance CXL Memory Controllers

Business Wire | September 21, 2023

Astera Labs, the global leader in semiconductor-based connectivity solutions for AI infrastructure, today announced that its Leo Memory Connectivity Platform enables data center servers with unprecedented performance for memory-intensive workloads. Leo is the industry's first Compute Express Link™ (CXL™) memory controller that increases total server memory bandwidth by 50% while also decreasing latency by 25% when integrated with the forthcoming 5th Gen Intel® Xeon® Scalable Processor. Through new hardware-based interleaving of CXL-attached and CPU-native memory, Astera Labs and Intel eliminate any application-level software changes needed to augment server memory resources via CXL. Existing applications can effortlessly "plug-and-play" to take advantage of the highest possible memory bandwidth and capacity in the system.

"The growth of computing cores and performance has historically outpaced memory throughput advancements, resulting in degraded server performance efficiency over time," said Sanjay Gajendra, COO of Astera Labs. "This performance scaling challenge has led to the infamous 'memory wall,' and thanks to our collaboration with Intel, our Leo Memory Connectivity Platform breaks through this barrier by delivering on the promise of PCIe 5.0 and CXL memory."

Nowhere are the data center infrastructure scaling limitations imposed by the memory wall more evident than in AI servers, where memory bandwidth and capacity bottlenecks result in inefficient processor utilization. The CXL innovations delivered by Astera Labs and Intel directly address these bottlenecks and lay the foundation for cloud, hybrid-cloud and enterprise data centers to maximize accelerated computing performance.

Extending Leadership of PCIe® 5.0 and CXL 2.0 Solutions

Astera Labs has a history of delivering industry-first solutions that are critical to advancing the PCIe and CXL ecosystems. In addition to memory performance advancements with Leo, Astera Labs is also driving interoperability leadership with its Aries PCIe 5.0 / CXL 2.0 Smart Retimers on state-of-the-art Intel server platforms. As the most widely deployed and proven PCIe/CXL retimer family in the industry, Aries features a low-latency CXL mode that complements Leo to form the most robust CXL memory connectivity solution.

"We applaud Astera Labs for their contributions to the CXL ecosystem and are delighted to extend our ongoing collaboration. We believe Memory Connectivity Platforms containing innovations from companies like Astera Labs will help deliver enhanced performance on next-generation Intel Xeon processors and accelerate a myriad of memory-intensive workloads," said Zane Ball, Corporate Vice President and General Manager, Data Center Platform Engineering and Architecture Group, Intel.

Visit Astera Labs at Intel Innovation

Astera Labs will showcase Leo and Aries together with Intel's latest Xeon® Scalable processors at Booth #210, September 19-20 at the San Jose Convention Center. Talk to Astera Labs' experts to learn more about industry benchmarks and how to optimize PCIe/CXL memory solutions in data center architectures to deliver optimized performance for applications ranging from AI and real-time analytics to genomics and modeling.

About Astera Labs

Astera Labs, Inc. is a global leader in semiconductor-based connectivity solutions purpose-built to unleash the full potential of intelligent data infrastructure at cloud scale. Its class-defining, first-to-market products based on PCIe, CXL, and Ethernet technologies deliver critical connectivity in accelerated computing platforms optimized for AI applications.


Application Infrastructure

Penguin Solutions Certified as NVIDIA DGX-Ready Managed Services Partner

Business Wire | September 28, 2023

Penguin Solutions™, an SGH™ brand (Nasdaq: SGH) that designs, builds, deploys, and manages AI and accelerated computing infrastructures at scale, today announced that it has been certified by NVIDIA to support enterprises deploying NVIDIA DGX™ AI computing platforms under the NVIDIA DGX-Ready Managed Services program. NVIDIA DGX systems are an advanced supercomputing platform for large-scale AI development. The NVIDIA DGX-Ready Managed Services program gives customers the option to outsource management of DGX systems deployed in corporate data centers, including the implementation and monitoring of the server, storage, and networking resources required to support DGX platforms.

"Generative AI requires a completely new computing infrastructure compared to traditional IT," said Troy Kaster, vice president, commercial sales at Penguin Solutions. "These new computing infrastructures require services skills, which Penguin is uniquely qualified to support given our extensive experience partnering with some of the largest companies in AI."

As a full-service integration and services provider, Penguin has the capabilities to design at scale, deploy at speed, and provide managed services for NVIDIA DGX SuperPOD solutions. Penguin has designed, built, deployed, and managed some of the largest AI training clusters in the world. Penguin currently manages over 50,000 NVIDIA GPUs for Fortune 100 customers, including Meta's AI Research SuperCluster, with 2,000 NVIDIA DGX systems and 16,000 NVIDIA A100 Tensor Core GPUs, one of the most powerful AI training clusters in the world.

"AI is transforming organizations around the world, and many businesses are looking to deploy the technology without the complexities of managing infrastructure," said Tony Paikeday, senior director, DGX platform at NVIDIA. "With DGX-Ready Managed Services offered by Penguin Solutions, our customers can deploy the world's leading platform for enterprise AI development with a simplified operations model that lets them tap into the leadership-class performance of DGX and focus on innovating with AI."

Advantages of Penguin Solutions powered by NVIDIA DGX include:

- Design large-scale AI infrastructure combining the most recent DGX systems, ultra-high-speed networking solutions, and cutting-edge storage options for clusters tailored to customer requirements
- Manage AI infrastructure making the most of multiple layers of recent hardware and software, such as acceleration libraries, job scheduling and orchestration
- Reduce risk associated with investments in computing infrastructure
- Optimize efficiency of AI infrastructure with best-in-class return on investment

About Penguin Solutions

The Penguin Solutions™ portfolio, which includes Penguin Computing™, accelerates customers' digital transformation with the power of emerging technologies in HPC, AI, and IoT, with solutions and services that span the continuum of edge, core, and cloud. By designing highly advanced infrastructure, machines, and networked systems, we enable the world's most innovative enterprises and government institutions to build the autonomous future, drive discovery and amplify human potential.
