HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT

Run:ai Launches Full-Stack Solution for Hyper-Optimized Enterprise AI Built on NVIDIA DGX Systems

Run:ai | November 11, 2022 | Read time: 03:00 min

Run:ai, the leader in compute orchestration for AI workloads, today announced the launch of the Run:ai MLOps Compute Platform (MCP) powered by NVIDIA DGX™ systems, a complete, full-stack AI solution for enterprises. Built on NVIDIA DGX systems and using Run:ai Atlas software, Run:ai MCP is an end-to-end AI infrastructure platform that seamlessly orchestrates the hardware and software complexities of AI development and deployment into a single solution, accelerating a company's ROI from artificial intelligence.

Organizations are increasingly turning to AI to grow revenue and improve efficiency. However, each level of the AI stack, from hardware to high-level software, can create challenges and inefficiencies, with multiple teams competing for the same limited GPU computing time. As a result, "shadow AI," where individual teams buy their own infrastructure or use pricey cloud compute resources, has become common. This decentralized approach leads to idle resources, duplication, increased expense, and delayed time to market. Run:ai MCP is designed to overcome these potential roadblocks to successful AI deployments.

"Enterprises are investing heavily in data science to deliver on the promise of AI, but they lack a single, end-to-end AI infrastructure to ensure access to the resources their practitioners need to succeed," said Omri Geller, co-founder and CEO of Run:ai. "This is a unique, best-in-class hardware/software AI solution that unifies our AI workload orchestration with NVIDIA DGX systems — the universal AI system for every AI workload — to deliver unprecedented compute density, performance and flexibility. Our early design partners have achieved remarkable results with MCP, including a 200-500% improvement in utilization and ROI on their GPUs, which demonstrates the power of this solution to address the biggest bottlenecks in the development of AI."

"AI offers incredible potential for enterprises to grow sales and reduce costs, and simplicity is key for businesses seeking to develop their AI capabilities," said Matt Hull, vice president of Global AI Data Center Solutions at NVIDIA. "As an integrated solution featuring NVIDIA DGX systems and the Run:ai software stack, Run:ai MCP makes it easier for enterprises to add the infrastructure needed to scale their success."

Run:ai MCP powered by NVIDIA DGX systems with NVIDIA Base Command is a full-stack AI solution that can be obtained from distributors and simply installed with world-class enterprise support, including direct access to NVIDIA and Run:ai experts.

With MCP, compute resources are gathered into a centralized pool that can be managed and provisioned by one team, but delivered to many users with self-service access. A cloud-native operating system helps IT manage everything from fractions of NVIDIA GPUs to large-scale distributed training. Run:ai's workload-aware orchestration ensures that every type of AI workload gets the right amount of compute resources when needed. The solution provides MLOps tools while preserving freedom for developers to use their preferred tools via integrations with Kubeflow, Airflow, MLflow and more.

This bundle is the latest in a series of Run:ai's collaborations with NVIDIA, including Run:ai's Atlas Platform certification on the NVIDIA AI Enterprise software suite, which is included with NVIDIA DGX systems.

About Run:ai
Run:ai's Atlas Platform brings cloud-like simplicity to AI resource management — providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system — which includes a workload-aware scheduler and an abstraction layer — helps IT simplify AI implementation, increase team productivity, and gain full utilization of expensive GPUs. Using Run:ai, companies streamline development, management, and scaling of AI applications across any infrastructure, including on-premises, edge and cloud.

Spotlight

Pitney Bowes' Data Architect Manager, Vishal Shah, along with the company's Sr. Director of Data Development, Irina Ashurova, discusses how Snowflake and Select Star have transformed the way the business handles data. Previously, the company's data scientists spent a great deal of time trying to find the right data sources and the people who owned them. Now, with Select Star, they have an easily accessible place to search.

Related News

STORAGE MANAGEMENT, DATA STORAGE

Inspur NF8480M6 Hits a New Record in the SAP SD 2-Tier Benchmark

Inspur Systems | January 05, 2023

In a recent announcement, Inspur Information's NF8480M6 achieved a world record of 359,780 SAPS, ranking first in the SAP SD 2-tier performance benchmark for 4-socket servers on the Cooper Lake platform.

The SAP Sales & Distribution (SD) 2-Tier Benchmark is a widely recognized benchmark for two-tier architectures based on SAP's sales and distribution module. It has evolved into a crucial indicator of operational performance for real enterprise production systems. Many enterprise users regard it as a flexible and valuable guide for selecting a server platform, since it reflects the exact production needs of enterprises during architecture design. A higher benchmark score — that is, a higher SAPS value — indicates that the server system can accommodate more users and deliver more productivity for businesses.

The record set by the Inspur NF8480M6 in the SAP SD 2-Tier Benchmark for 4-socket servers reflects a design philosophy of openness, excellence and security, which enables Inspur to increase the usability, reliability and performance of its servers.

About Inspur Information

Inspur Information is the world's second-largest server manufacturer and a leading provider of cloud computing, AI, and data center infrastructure. Inspur Information addresses crucial technology fields such as open computing, cloud data centers, AI, and deep learning by delivering cutting-edge computing hardware design and a wide range of products. Inspur Information's solutions are performance-optimized and purpose-built, offering customers the tools they need to handle specific workloads and real-world problems.

Read More

APPLICATION INFRASTRUCTURE

Styra’s Authorization Policy Tools Speed Up Infrastructure Deployment

Styra | January 23, 2023

Styra, Inc., the creator and maintainer of Open Policy Agent (OPA) and one of the leading cloud-native authorization firms, recently introduced an infrastructure toolset for its Styra Declarative Authorization Service (DAS), along with the industry's broadest policy library. The expanded library includes hundreds of validated policies, among them NIST Special Publication (SP) 800-190 compliant policies that address security concerns associated with the use of containers.

Styra speeds up the deployment of secure, compliant cloud-native infrastructure and gives enterprise platform teams the tools they need to systematically deliver resources to distributed developers, saving time and money while upholding security best practices. To ensure security, compliance, and operational health, platform engineers repeatedly write or customize software that stops developers from making errors, and over time build customized guardrails that automatically impose security rules, compliance regulations, and other operational policies each time a developer makes a change. This effort often involves undifferentiated heavy lifting, which adds complexity and risk and creates a significant barrier to on-time delivery. When enterprises cannot afford to choose between security and time-to-market, platform engineering teams can instead rely on policy editing for business users, validated building blocks, and policy-as-code guardrails to deliver infrastructure resources without compromising security, rather than building them from scratch in-house.

Styra enables platform teams managing infrastructure to:

- Eliminate manual policy creation and systematically reduce production risks for infrastructure deployments by utilizing policy templates and editing tools that are simple to deploy.
- Deploy faster with hundreds of Styra-validated AWS, Azure, GCP, and Kubernetes policies for Terraform, drawn from the most popular open-source tools and libraries.
- Easily enforce best practices and compliance for Kubernetes clusters with NIST SP 800-190 compliant policies from Styra, in addition to an extensive collection of Styra-validated policies for PCI DSS, MITRE ATT&CK, CIS Benchmarks, and Pod Security compliance.
- Enforce policy guardrails on CloudFormation stacks to prevent AWS resource misconfiguration during final resource change checks, using Styra's first general-purpose third-party CloudFormation hook.

About Styra

Founded in 2019, Styra enables enterprises to define, enforce, and monitor policy across their cloud-native environments through a combination of open-source (Open Policy Agent) and commercial (Declarative Authorization Service) solutions. The company provides operations, security, and compliance guardrails that safeguard applications as well as the infrastructure they run on. Its policy-as-code solution allows developers, DevOps, and security teams to reduce risk, speed up application development, and minimize human error.

Read More

STORAGE MANAGEMENT, WINDOWS SERVER OS, WINDOWS SERVER MANAGEMENT

GRC's New Survey Seeks End-User Feedback on Data Center Sustainability

GRC | December 28, 2022

Green Revolution Cooling (GRC), the leader in immersion cooling for data centers, announced today the launch of its first annual Data Center Sustainability Survey, which will gather feedback from industry professionals on what is working and where there is need for improvement. Participants will be asked eight short questions about their approach to sustainability and how their organization is working to reduce its carbon footprint. GRC is also interested in gathering stories about sustainability efforts in these mission-critical facilities: the successes, the setbacks, and anything else related to data center sustainability. Some of these stories will be featured in GRC's webinar on February 15th at 11:00 a.m. ET. Participants whose stories are selected will receive a $100 Amazon gift card or a donation made to the Rainforest Alliance in their name. The Rainforest Alliance is an international non-profit organization working to make responsible business the new normal at the intersection of business, agriculture, and forests.

"Results from the survey will not only help us better serve the industry, they'll provide a much-needed benchmark for understanding sustainability in the data center ecosystem, providing insight to all of us in the industry on how to be the best stewards possible for our planet," said Gregg Primm, VP of Marketing at GRC.

About GRC

Green Revolution Cooling is The Immersion Cooling Authority®. The company's patented immersion-cooling technology dramatically simplifies data center cooling infrastructure deployment. Enterprises reduce their data center design, build, energy, and maintenance costs by eliminating the need for chillers, CRACs, air handlers, humidity controls, and other traditional cooling components. GRC's solutions are used in twenty-one countries and are well suited to next-gen application platforms such as artificial intelligence, blockchain, high-performance computing (HPC), 5G, and other edge and core computing applications. Their systems are environmentally resilient, sustainable, and space-efficient, allowing them to be deployed in virtually any location with little lead time.

Read More