Oracle launches Cloud Infrastructure Data Science Service

ZDNet | February 12, 2020

Oracle on Wednesday announced the launch of the new Oracle Cloud Infrastructure Data Science Service, a native service on Oracle Cloud Infrastructure (OCI) designed to let teams of data scientists collaborate on the development, deployment and maintenance of machine learning models. As Oracle grows the footprint of its "second generation" cloud, the new service aims to leapfrog the offerings other public cloud vendors provide for data scientists -- and to sidestep the problems that come with typical data-science workflows. "One of the traditional problems in data science, and I think it's still what you see typically in almost all organizations, is that [data scientists] are really working in silos, working in isolation," Greg Pavlik, Oracle's SVP for product development, Data and AI services, told ZDNet.
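The service is exposed through OCI's standard REST APIs and SDKs alongside its collaborative notebook environment. As a rough illustration of that programmatic surface, below is a minimal sketch using the `oci` Python SDK to create and list team projects; the compartment OCID, project name, and description are placeholders, and the full workflow (notebook sessions, the model catalog, deployments) goes well beyond this.

```python
# Minimal sketch of programmatic access to the OCI Data Science service,
# assuming the standard `oci` Python SDK (pip install oci) and a configured
# ~/.oci/config profile. All OCIDs and names below are placeholders.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
ds_client = oci.data_science.DataScienceClient(config)

COMPARTMENT_ID = "ocid1.compartment.oc1..example"  # placeholder OCID

# Create a shared project that a team of data scientists can collaborate in.
project = ds_client.create_project(
    oci.data_science.models.CreateProjectDetails(
        compartment_id=COMPARTMENT_ID,
        display_name="churn-model",
        description="Team workspace for model development and deployment",
    )
).data
print(f"Created project {project.id} ({project.lifecycle_state})")

# List the projects visible in the compartment.
for p in ds_client.list_projects(compartment_id=COMPARTMENT_ID).data:
    print(p.display_name, p.lifecycle_state)
```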

Related News

HYPER-CONVERGED INFRASTRUCTURE, STORAGE MANAGEMENT, IT SYSTEMS MANAGEMENT

Supermicro Launches Industry's First NVIDIA HGX H100 8 and 4-GPU H100 Servers with Liquid Cooling -- Reduces Data Center Power Costs by Up to 40%

PRNewswire | May 22, 2023

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to expand its data center offerings with liquid-cooled NVIDIA HGX H100 rack-scale solutions. Advanced liquid cooling technologies entirely from Supermicro reduce the lead time for a complete installation, increase performance, and result in lower operating expenses while significantly reducing the PUE of data centers. Power savings for a data center are estimated at 40% when using Supermicro liquid cooling solutions compared to an air-cooled data center, and up to an 86% reduction in direct cooling costs compared to existing data centers may be realized.

"Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centers worldwide," said Charles Liang, president and CEO of Supermicro. "Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centers. With the amount of power required to enable today's rapidly evolving large scale AI models, optimizing TCO and the Total Cost to Environment (TCE) is crucial to data center operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployment, allowing us to meet our customers' requirements with a short lead time."

To learn more about Supermicro's GPU servers, visit: https://www.supermicro.com/en/products/gpu

AI-optimized racks with the latest Supermicro product families, including the Intel and AMD server product lines, can be quickly delivered from standard engineering templates or easily customized based on the user's unique requirements. Supermicro continues to offer the industry's broadest product line with the highest-performing servers and storage systems to tackle complex compute-intensive projects. Rack scale integrated solutions give customers the confidence and ability to plug the racks in, connect to the network and become productive sooner than managing the technology themselves.

The top-of-the-line liquid-cooled GPU server contains dual Intel or AMD CPUs and eight or four interconnected NVIDIA HGX H100 Tensor Core GPUs. Using liquid cooling reduces the power consumption of data centers by up to 40%, resulting in lower operating costs. In addition, both systems significantly surpass the previous generation of NVIDIA HGX GPU-equipped systems, providing up to 30x the performance and efficiency on today's large transformer models with faster GPU-GPU interconnect speed and PCIe 5.0-based networking and storage.

State-of-the-art eight-GPU NVIDIA H100 SXM5 Tensor Core servers from Supermicro for today's largest-scale AI models include:

- SYS-821GE-TNHR (dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 8 GPUs, 8U): https://www.supermicro.com/en/products/system/GPU/8U/SYS-821GE-TNHR
- AS-8125GS-TNHR (dual 4th Gen AMD EPYC CPUs, NVIDIA HGX H100 8 GPUs, 8U): https://www.supermicro.com/en/products/system/GPU/8U/AS-8125GS-TNHR

Supermicro also designs a range of GPU servers customizable for fast AI training, vast-volume AI inferencing, or AI-fused HPC workloads, including systems with four NVIDIA H100 SXM5 Tensor Core GPUs:

- SYS-421GU-TNXR (dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 4 GPUs, 4U): https://www.supermicro.com/en/products/system/GPU/4U/SYS-421GU-TNXR
- SYS-521GU-TNXR (dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 4 GPUs, 5U): https://www.supermicro.com/en/products/system/GPU/4U/SYS-521GU-TNXR

Supermicro's liquid cooling rack-level solution includes a Coolant Distribution Unit (CDU) that provides up to 80 kW of direct-to-chip (D2C) cooling for today's highest-TDP CPUs and GPUs across a wide range of Supermicro servers. The redundant and hot-swappable power supply and liquid cooling pumps ensure that the servers are continuously cooled, even after a power supply or pump failure. The leak-proof connectors give customers the added confidence of uninterrupted liquid cooling for all systems. Learn more about the Supermicro liquid cooling system at: https://www.supermicro.com/en/solutions/liquid-cooling

Rack scale design and integration has become a critical service for systems suppliers. As AI and HPC become increasingly critical technologies within organizations, configurations from the server level to the entire data center must be optimized and configured for maximum performance. Supermicro's system and rack scale experts work closely with customers to explore their requirements, and the company has the knowledge and manufacturing capacity to deliver significant numbers of racks to customers worldwide. Read the Supermicro Large Scale AI Solution Brief: https://www.supermicro.com/solutions/Solution-Brief_Rack_Scale_AI.pdf

Supermicro at ISC: To explore these technologies and meet with Supermicro's experts, visit Booth D405 at the ISC High Performance 2023 event in Hamburg, Germany, May 21-25, 2023.

About Super Micro Computer, Inc.: Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are transforming into a Total IT Solutions provider with server, AI, storage, IoT, and switch systems, software, and services, while delivering advanced high-volume motherboard, power, and chassis products. The products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency, and are optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power and cooling solutions (air-conditioned, free air cooling or liquid cooling). Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
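The release leans on PUE (power usage effectiveness, the ratio of total facility power to IT equipment power) for its savings claims. As a back-of-the-envelope illustration of how a PUE improvement translates into facility power, here is a short Python sketch; the PUE values and IT load are illustrative assumptions, not figures from the release.

```python
# Back-of-the-envelope view of how lowering PUE cuts facility power.
# PUE = total facility power / IT equipment power, so everything above
# a PUE of 1.0 is cooling and power-delivery overhead. The numbers below
# are illustrative assumptions, not figures from the press release.
IT_LOAD_KW = 1000.0   # assumed IT equipment load
PUE_AIR = 1.6         # assumed PUE for a conventional air-cooled site
PUE_LIQUID = 1.15     # assumed PUE with direct-to-chip liquid cooling

total_air = IT_LOAD_KW * PUE_AIR             # 1600 kW drawn from the grid
total_liquid = IT_LOAD_KW * PUE_LIQUID       # 1150 kW drawn from the grid
overhead_air = total_air - IT_LOAD_KW        # 600 kW of overhead
overhead_liquid = total_liquid - IT_LOAD_KW  # 150 kW of overhead

print(f"Total power reduction:    {1 - total_liquid / total_air:.0%}")        # ~28%
print(f"Cooling overhead reduction: {1 - overhead_liquid / overhead_air:.0%}")  # 75%
```

Under these assumed numbers, the overhead (mostly cooling) falls by 75% while the total draw falls by roughly 28%; Supermicro's 40% and 86% figures imply correspondingly more aggressive baselines.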


STORAGE MANAGEMENT, DATA STORAGE

42Gears Launches ChatGPT Plugin for SureMDM Mobile Device Management Platform

PRNewswire | May 25, 2023

42Gears, a global leader in Unified Endpoint Management (UEM) and Mobile Device Management (MDM) solutions, announces the launch of a ChatGPT plugin for its flagship offering, SureMDM. The launch marks a significant milestone: 42Gears becomes the first MDM provider to pair a ChatGPT plugin with a comprehensive device management solution. SureMDM, known for its robust features and innovative approach to managing mobile devices, is now enhanced with ChatGPT's conversational capabilities, letting users interact with their devices in a more natural and intuitive manner and unlocking a new level of productivity and efficiency.

With the SureMDM ChatGPT plugin, users can now:

- Monitor devices and apply policies and configurations through conversations: SureMDM users can monitor devices and apply policies and configurations through natural-language conversations. ChatGPT understands the context of the request and provides a more user-friendly experience.
- Receive actionable insights: SureMDM's troubleshooting capabilities are now augmented with ChatGPT's conversational AI. Users can quickly ask questions or seek assistance regarding device configurations, policies, app installations, and more, receiving instant guidance and resolutions.
- Streamline device management workflows: ChatGPT reduces the need for extensive training or documentation by providing intuitive assistance. Users can efficiently carry out routine device management tasks, such as profile deployments, compliance checks, and software updates, through simple conversational interactions.

The launch reaffirms 42Gears' position as a trailblazer in the MDM industry. Combining SureMDM's comprehensive device management features with ChatGPT's AI-powered conversational capabilities positions 42Gears as an industry leader in delivering intelligent, user-centric solutions.

Prakash Gupta, COO & CTO of 42Gears, adds, "We are excited to bring the power of ChatGPT to our SureMDM customers. With the ChatGPT plugin, we are transforming the way organizations manage their mobile devices. This technology breakthrough will enable users to interact with their devices in a more natural and conversational manner, leading to increased productivity and a seamless user experience."

The ChatGPT plugin for SureMDM allows for easy implementation and streamlined adoption.

About 42Gears: 42Gears is a leader in enterprise mobility management, offering cutting-edge solutions that aim to transform the digital workplace. Delivered from the cloud and on-premise, 42Gears products support all major mobile and desktop operating systems, enabling IT and DevOps teams to improve frontline workforce productivity and the efficiency of software development teams.
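The announcement describes a pattern worth making concrete: a language model turns a free-text request into a structured device-management action, which is validated and then executed against the MDM backend. The sketch below shows that general flow; the REST endpoint, action names, and the `llm_extract_action` helper are all hypothetical stand-ins, not SureMDM's actual API or the real ChatGPT plugin wiring.

```python
# Hypothetical sketch of the conversational-MDM pattern: an LLM maps a
# natural-language request to a structured action, which is validated and
# then executed against an MDM REST API. Endpoint and action names are
# invented for illustration; this is not SureMDM's documented API.
import requests

MDM_API = "https://mdm.example.com/api/v1"  # hypothetical endpoint
ALLOWED_ACTIONS = {"lock_device", "push_profile", "list_noncompliant"}

def llm_extract_action(utterance: str) -> dict:
    """Placeholder for the ChatGPT step: turn free text such as
    "lock Bob's tablet" into {"action": "lock_device", "device_id": "..."}."""
    raise NotImplementedError("wire this to your LLM of choice")

def handle_request(utterance: str, token: str) -> dict:
    action = llm_extract_action(utterance)
    # Validate before executing: never let the model choose arbitrary endpoints.
    if action.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {action!r}")
    resp = requests.post(
        f"{MDM_API}/actions/{action['action']}",
        headers={"Authorization": f"Bearer {token}"},
        json=action,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

The allow-list step matters in this pattern: the model proposes, but only pre-approved management actions ever reach devices.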


APPLICATION INFRASTRUCTURE, DATA STORAGE

CoreWeave Raises $221M Series B to Expand Specialized Cloud Infrastructure Powering the Generative AI and Large Language Model Boom

Businesswire | April 21, 2023

CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it has secured $221 million in Series B funding. The round was led by Magnetar Capital (“Magnetar”), a leading alternative asset manager, with contributions from NVIDIA, and was rounded out by Nat Friedman and Daniel Gross. The funding will be used to further expand CoreWeave’s specialized cloud infrastructure for compute-intensive workloads — including artificial intelligence and machine learning, visual effects and rendering, batch processing and pixel streaming — to meet the explosive demand for generative AI technology. This strategic focus has allowed CoreWeave to offer purpose-built, customized solutions that can outperform larger, more generalized cloud providers. The new capital will also support U.S. data center expansion, with two new centers opening this year to bring CoreWeave’s North American total to five.

“CoreWeave is uniquely positioned to power the seemingly overnight boom in AI technology with our ability to innovate and iterate more quickly than the hyperscalers,” said CoreWeave CEO and co-founder Michael Intrator. “Magnetar’s strong, continued partnership and financial support as lead investor in this Series B round ensures we can maintain that momentum without skipping a beat. Additionally, we’re thrilled to expand our collaboration with the team at NVIDIA. NVIDIA consistently pushes the boundaries of what’s possible in the field of technology, and their vision and guidance will be invaluable as we continue to scale our organization.”

NVIDIA recently released its highest-performance data center GPU, the NVIDIA H100 Tensor Core, along with the NVIDIA HGX H100 platform. CoreWeave announced at the NVIDIA GTC conference in March that its HGX H100 clusters are live and already serving clients such as Anlatan, the creators of NovelAI. In addition to the HGX H100, CoreWeave offers more than 11 NVIDIA GPU SKUs, interconnected with the NVIDIA Quantum InfiniBand in-network computing platform and available to clients on demand or via reserved-instance contracts.

Investor Perspectives on the $221M Series B Round

“AI has reached an inflection point, and we’re seeing accelerated interest in AI computing infrastructure from startups to major enterprises,” said Manuvir Das, Vice President of Enterprise Computing at NVIDIA. “CoreWeave’s strategy of delivering accelerated computing infrastructure for generative AI, large language models and AI factories will help bring the highest-performance, most energy-efficient computing platform to every industry.”

“With the seemingly limitless boundaries of AI applications and technologies, the demand for compute-intensive hardware and infrastructure is higher than it's ever been,” said Ernie Rogers, Magnetar’s chief operating officer. “CoreWeave’s innovative, agile and customizable product offering is well situated to service this demand, and the company is consequently experiencing explosive growth. We are proud to collaborate with NVIDIA in supporting CoreWeave’s next phase of growth as it continues to bolster its already strong positioning in the marketplace.”

About CoreWeave: Founded in 2017, CoreWeave is a specialized cloud provider, delivering GPU compute resources at massive scale on top of the industry’s fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute-intensive use cases — VFX and rendering, machine learning and AI, batch processing and pixel streaming — that are up to 35 times faster and 80% less expensive than the large, generalized public clouds.
