WINDOWS SERVER MANAGEMENT, IT SYSTEMS MANAGEMENT, AZURE
GRC | December 05, 2022
GRC (Green Revolution Cooling®), the leader in immersion cooling for data centers, announced today that GRC’s CRO, Jim Weynand, will lead a discussion titled “Data Center Sustainability is a Team Sport,” which will highlight the benefits of data center immersion cooling during the Gartner IT Infrastructure, Operations & Cloud Strategies Conference 2022 at The Venetian Resort Las Vegas.
The 20-minute Exhibit Showcase will highlight how air-cooled data centers simply cannot meet the demands of today’s high-powered processors and high-density deployments. The session takes place at 1:15 pm on December 8.
Participants will learn about liquid immersion cooling solutions that meet the computing demands of today and tomorrow and help enterprises address the sustainability, energy use, and cost of running a data center. The session will also cover GRC’s partnerships with leading hardware providers and cite examples of comprehensive solutions, from facility design to server selection, that enable data center operators to transition from air cooling to liquid immersion cooling, reduce environmental impact, and address Environmental, Social, and Governance (ESG) goals.
“Our relationships with leading hardware providers such as Dell and Intel enable our customers to seamlessly and quickly implement changes to their data centers. We are thrilled to share the stage at this Gartner conference to educate users on the sustainability and budget benefits of using liquid immersion cooling solutions to cool their data centers.”
Jim Weynand, CRO at GRC
GRC is The Immersion Cooling Authority®. The company's patented immersion-cooling technology radically simplifies deployment of data center cooling infrastructure. By eliminating the need for chillers, CRACs, air handlers, humidity controls, and other conventional cooling components, enterprises reduce their data center design, build, energy, and maintenance costs. GRC’s solutions are deployed in twenty-one countries and are ideal for next-gen applications platforms, including artificial intelligence, blockchain, HPC, 5G, and other edge computing and core applications. Their systems are environmentally resilient, sustainable, and space saving, making it possible to deploy them in virtually any location with minimal lead time.
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT
Run:ai | November 11, 2022
Run:ai, the leader in compute orchestration for AI workloads, today announced the launch of the Run:ai MLOps Compute Platform (MCP) powered by NVIDIA DGX™ Systems, a complete, full-stack AI solution for enterprises. Built on NVIDIA DGX systems and using Run:ai Atlas software, Run:ai MCP is an end-to-end AI infrastructure platform that seamlessly orchestrates the hardware and software complexities of AI development and deployment into a single solution, accelerating a company's ROI from artificial intelligence.
Organizations are increasingly turning to AI to grow revenue and improve efficiency. However, each level of the AI stack, from hardware to high-level software, can create challenges and inefficiencies, with multiple teams competing for the same limited GPU computing time. As a result, "shadow AI," where individual teams buy their own infrastructure or use pricey cloud compute resources, has become common. This decentralized approach leads to idle resources, duplication, increased expense, and delayed time to market. Run:ai MCP is designed to overcome these potential roadblocks to successful AI deployments.
"Enterprises are investing heavily in data science to deliver on the promise of AI, but they lack a single, end-to-end AI infrastructure to ensure access to the resources their practitioners need to succeed," said Omri Geller, co-founder and CEO of Run:ai. "This is a unique, best-in-class hardware/software AI solution that unifies our AI workload orchestration with NVIDIA DGX systems — the universal AI system for every AI workload — to deliver unprecedented compute density, performance and flexibility. Our early design partners have achieved remarkable results with MCP, including a 200-500% improvement in utilization and ROI on their GPUs, which demonstrates the power of this solution to address the biggest bottlenecks in the development of AI."
"AI offers incredible potential for enterprises to grow sales and reduce costs, and simplicity is key for businesses seeking to develop their AI capabilities. As an integrated solution featuring NVIDIA DGX systems and the Run:ai software stack, Run:ai MCP makes it easier for enterprises to add the infrastructure needed to scale their success."
Matt Hull, vice president of Global AI Data Center Solutions at NVIDIA
Run:ai MCP powered by NVIDIA DGX systems with NVIDIA Base Command is a full-stack AI solution that can be obtained from distributors and simply installed with world-class enterprise support, including direct access to NVIDIA and Run:ai experts.
With MCP, compute resources are gathered into a centralized pool that can be managed and provisioned by one team, but delivered to many users with self-service access. A cloud-native operating system helps IT manage everything from fractions of NVIDIA GPUs to large-scale distributed training. Run:ai's workload-aware orchestration ensures that every type of AI workload gets the right amount of compute resources when needed. The solution provides MLOps tools while preserving freedom for developers to use their preferred tools via integrations with Kubeflow, Airflow, MLflow and more.
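The pooling-and-provisioning model described above can be sketched in a few lines. This is an illustrative assumption, not the actual Run:ai API: the `GpuPool` class and its methods are hypothetical, and exist only to show how fractional GPU shares might be carved out of a centrally managed pool.

```python
# Hypothetical sketch (not the Run:ai API): fractional GPU allocations
# granted from a single, centrally managed capacity pool.
from dataclasses import dataclass, field


@dataclass
class GpuPool:
    """A shared pool of GPU capacity, measured in fractional GPU units."""
    total: float                      # e.g. 8 physical GPUs = 8.0 units
    allocations: dict = field(default_factory=dict)

    def available(self) -> float:
        """Capacity not yet handed out to any workload."""
        return self.total - sum(self.allocations.values())

    def request(self, workload: str, gpus: float) -> bool:
        """Grant the request only if enough pooled capacity remains."""
        if gpus <= self.available():
            self.allocations[workload] = self.allocations.get(workload, 0.0) + gpus
            return True
        return False

    def release(self, workload: str) -> None:
        """Return a workload's share to the pool."""
        self.allocations.pop(workload, None)


pool = GpuPool(total=8.0)
pool.request("notebook-dev", 0.5)     # a fractional share for interactive work
pool.request("training-job", 6.0)     # a large share for distributed training
print(pool.available())               # 1.5 units left for other teams
```

A real workload-aware scheduler would add queueing, preemption, and priorities on top of this accounting, but the core idea is the same: one pool, many consumers, self-service requests.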
This bundle is the latest in a series of Run:ai's collaborations with NVIDIA, including Run:ai's Atlas Platform certification on the NVIDIA AI Enterprise software suite, which is included with NVIDIA DGX systems.
Run:ai's Atlas Platform brings cloud-like simplicity to AI resource management — providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system — which includes a workload-aware scheduler and an abstraction layer — helps IT simplify AI implementation, increase team productivity, and gain full utilization of expensive GPUs. Using Run:ai, companies streamline development, management, and scaling of AI applications across any infrastructure, including on-premises, edge and cloud.
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, DATA STORAGE
MoEngage | November 17, 2022
MoEngage, the insights-led customer engagement platform, today announced its new product, MoEngage Inform, a unified messaging infrastructure that enables brands to build and manage multichannel transactional alerts through one API.
Consumers expect immediate updates on critical transactional notifications at their fingertips. Using MoEngage Inform, brands can provide real-time transactional alerts to improve the customer experience, such as an order or booking confirmation after a product is purchased, a delivery notification when a package arrives, one-time passwords (OTPs) for approving transactions or logging in securely, or notifications around password resets, among other time-sensitive alerts.
Building, updating, and delivering these critical alerts often requires significant engineering bandwidth and resources. Development teams carry a heavy load maintaining a transactional messaging infrastructure and adding new channels; in some cases, integrating a new vendor or communication channel provider can take at least eight weeks of engineering effort.
Moreover, brands often encounter a siloed customer experience due to multiple delivery providers and API demands, resulting in limited visibility into customers' actions. With no unified view of the notifications a customer has received, product teams cannot easily tell whether a customer has already received or acted on an alert, potentially leading to duplicate alerts across channels.
Inform makes transactional alert management seamless so brands can focus more on delivering the cohesive, time-sensitive messages that consumers want. Inform's single API requires a one-time setup, freeing up engineering bandwidth and putting control in the hands of product and marketing teams. MoEngage Inform is a component of the MoEngage Customer Engagement Platform, giving brands one platform to support all of their customer messaging and notification needs, both transactional and marketing-related. Product and marketing teams gain a unified view of the customer journey, so they can collectively gather insights that inform future initiatives and deliver a better customer experience.
With MoEngage Inform, brands can achieve:
Unified Customer Experiences - Get a unified view of how customers engage with the brand, including transactional and promotional messages across channels, and leverage an AI-based algorithm that determines which channels each customer prefers for critical alerts and sets a priority order automatically.
Centralized Visibility and Performance - Track and optimize the performance of your multichannel transactional and promotional messages in one central dashboard.
Reduced Engineering Resources and Improved Effectiveness - Power all transactional messaging with a single API and integrate with any communication channel with ease, supporting more than 30 providers.
More Autonomy, Faster Delivery of Alerts - Get out-of-the-box templates to create new alerts in minutes, with alerts delivered in under 5 seconds. A built-in fallback mechanism ensures critical alerts are delivered on other channels when a channel is disrupted.
Improved Data Security and Reporting - Achieve unified notification logs and delivery reports across channels, making identifying and debugging issues easier.
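The single-API, priority-ordered fallback behavior described in the list above can be sketched as follows. This is a hypothetical illustration, not the actual MoEngage Inform API: the `send_alert` function and the provider names are assumptions made for the example.

```python
# Hypothetical sketch (not the MoEngage Inform API): one send_alert()
# entry point that tries channels in priority order and falls back to
# the next channel when delivery fails.
from typing import Callable, Optional


def send_alert(message: str,
               channels: dict[str, Callable[[str], bool]],
               priority: list[str]) -> Optional[str]:
    """Attempt delivery on each channel in priority order.

    Each channel callable returns True on successful delivery; on a
    failure (or an exception), the next channel in the priority list
    is tried. Returns the name of the channel that delivered, or None
    if every channel failed.
    """
    for name in priority:
        try:
            if channels[name](message):
                return name
        except Exception:
            continue            # treat provider errors as failed attempts
    return None


# Simulated providers: push delivery is down, SMS succeeds.
providers = {
    "push":  lambda msg: False,
    "sms":   lambda msg: True,
    "email": lambda msg: True,
}
delivered_via = send_alert("Your order has shipped", providers,
                           ["push", "sms", "email"])
print(delivered_via)  # → sms
```

Behind a single entry point like this, a production system would also record a unified delivery log per customer, which is what enables the centralized visibility and duplicate-alert suppression described above.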
"As organizations grow, their messaging and communication needs become more complex. With MoEngage Inform, engineering teams can focus on delivering core offerings instead of building backend infrastructures, and product and marketing teams can deliver critical transactional alerts without breaking customer experiences," said Raviteja Dodda, CEO and co-founder of MoEngage.