HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE
Businesswire | April 17, 2023
Pulumi, makers of the fastest-growing infrastructure as code product, today announced Pulumi Insights, a breakthrough innovation that brings intelligent infrastructure as code to every engineer’s fingertips. Pulumi Insights unlocks analytics and search across cloud infrastructure, generates infrastructure as code from natural language prompts, and even enables novel AI-driven infrastructure automation. The end result is tremendous gains in engineering productivity – an area in which Pulumi is already best in class – while also helping teams better understand and control their cloud usage patterns. Pulumi Insights works with infrastructure provisioned by other tools in addition to infrastructure under management using Pulumi, and integrates with industry-leading data platforms.
“As companies rely on more cloud resources across an increasing number of infrastructure and managed service providers, it becomes all the more difficult to locate, manage, and track resources across organizations, teams, and projects,” said Kelly Fitzpatrick, Senior Industry Analyst at RedMonk. “Often, this results in the infrastructure layer working as a bottleneck that increases the lead time from ideation to delivery. Pulumi aims to address these issues by providing tools and processes designed to enable companies to analyze infrastructure as code resources and apply that knowledge to cost control, forecasting, security, and compliance.”
Search – Find Anything in Any Cloud
Pulumi Insights lets engineers ask any question about their infrastructure across more than 100 clouds, using either structured search queries or natural language prompts. Supported clouds include public clouds like AWS, Microsoft Azure, and Google Cloud; cloud native technologies like Kubernetes, Helm, and VMware; and SaaS infrastructure like Snowflake, Cloudflare, and MongoDB. Search helps engineers find that needle in the haystack – locating a single resource across many clouds and environments – as well as run sophisticated queries such as tracking down untagged or expensive resources across the whole organization. Search facets highlight the most used clouds and resources, broken down by project and environment, helping teams quickly understand more about the infrastructure they manage.
Analytics – Gain Deeper Insights into Cloud Infrastructure
Pulumi Insights includes new out-of-the-box dashboards and analytics, enabling engineers to gain rich insights over their own organization’s cloud infrastructure. A REST API can be used to programmatically query and add automation around search results, or to integrate with internal platforms and dashboards. Data export to other data warehouses including Snowflake, Amazon Redshift, Google BigQuery and Azure Synapse unlocks integration with other data and analytics platforms, enabling teams to build custom dashboards using the data tools they know and love. These capabilities can be used to identify anomalies or trends in resource usage, and dig into cost, security and compliance concerns.
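To make the programmatic path concrete, a search call against the REST API described above might be shaped like the following sketch. The endpoint path, query syntax, and token header here are assumptions made for illustration, not a documented API contract.

```python
# Sketch: build a resource-search request for the Pulumi Cloud REST API.
# The endpoint path, query syntax, and auth scheme are assumptions for
# illustration; consult the official API reference before relying on them.
import urllib.parse
import urllib.request

PULUMI_API = "https://api.pulumi.com"

def build_search_request(org: str, query: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a search request scoped to one organization."""
    params = urllib.parse.urlencode({"query": query})
    url = f"{PULUMI_API}/api/orgs/{org}/search/resources?{params}"
    return urllib.request.Request(
        url,
        headers={
            "Accept": "application/json",
            "Authorization": f"token {token}",  # Pulumi access token (placeholder)
        },
    )

# Example: hunt for untagged S3 buckets across the whole organization
# (the query string is hypothetical).
req = build_search_request("acme-corp", "type:aws:s3/bucket:Bucket -tags:*", "PLACEHOLDER_TOKEN")
# urllib.request.urlopen(req) would execute the search and return JSON results.
```

Feeding the JSON results into a warehouse such as Snowflake or BigQuery would follow the export path the announcement describes.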
Intelligent Infrastructure as Code – Be More Productive with the Power of AI
Pulumi Insights embeds new AI capabilities throughout the Pulumi platform. Pulumi deeply understands usage patterns and can deliver recommendations or even generate infrastructure as code automatically. Pulumi’s Automation API – a unique approach that embeds infrastructure as code into larger software programs – lets AI go beyond simply generating content to enable advanced automation. A new companion website and command-line tool leverage large language models (LLMs) to author infrastructure as code for any architecture, on any cloud, in any language. Thanks to Pulumi’s unique approach of supporting infrastructure as code in any programming language, industry tools like GitHub Copilot and OpenAI ChatGPT already deliver superior support for Pulumi.
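For a sense of what embedding infrastructure as code in a larger program looks like, here is a minimal, hypothetical Python sketch using the Automation API. Stack, project, and resource names are invented, and actually running it requires the Pulumi CLI and SDKs.

```python
def deploy_generated_bucket(stack_name: str = "dev"):
    """Sketch: embed infrastructure as code inside a larger program via
    Pulumi's Automation API. All names here are illustrative; running this
    requires the pulumi and pulumi-aws packages plus the Pulumi CLI."""
    from pulumi import automation as auto  # deferred import: needs the Pulumi SDK

    def program():
        # The inline Pulumi program -- e.g. code emitted from an LLM prompt.
        import pulumi_aws as aws
        aws.s3.Bucket("generated-bucket")  # hypothetical resource name

    stack = auto.create_or_select_stack(
        stack_name=stack_name,
        project_name="insights-demo",  # hypothetical project name
        program=program,
    )
    return stack.up(on_output=print)  # provision (or update) the resources
```

Because the inline program is an ordinary function, surrounding code can synthesize, validate, preview, and apply it, which is the kind of AI-driven automation loop the announcement alludes to.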
Pulumi Insights builds upon Pulumi’s flagship infrastructure as code technology, which supports any programming language and delivers semantic understanding across a connected graph of infrastructure on any cloud. Because over two-thirds of Pulumi’s community and 99% of its customers use Pulumi Cloud, Pulumi Insights uniquely unlocks search, insights, and deep learning across more than a petabyte of cloud usage data.
“With Pulumi Insights, we now have the industry’s smartest infrastructure as code,” said Joe Duffy, Founder and CEO of Pulumi. “This is yet another step-function boost to infrastructure productivity. Leveraging cloud infrastructure to deliver innovation, intelligence, and business impact has never been easier. This is an inflection point for infrastructure as code and there is so much more to come.”
Pulumi lets engineers deliver infrastructure as code faster, using any programming language. The Pulumi Platform enables customers to manage 10x more resources at lower cost than traditional tools, while Pulumi Insights unlocks analytics and search across cloud infrastructure, and enables novel AI-driven infrastructure automation.
APPLICATION INFRASTRUCTURE, DATA STORAGE
Businesswire | April 21, 2023
CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it has secured $221 million in Series B funding. The round was led by Magnetar Capital (“Magnetar”), a leading alternative asset manager, with contributions from NVIDIA, and rounded out by Nat Friedman and Daniel Gross.
The latest funding will be used to further expand CoreWeave’s specialized cloud infrastructure for compute-intensive workloads — including artificial intelligence and machine learning, visual effects and rendering, batch processing and pixel streaming — to meet the explosive demand for generative AI technology. This strategic focus has allowed CoreWeave to offer purpose-built, customized solutions that can outperform larger, more generalized cloud providers. The new capital will also support U.S.-based data center expansion with the opening of two new centers this year, bringing CoreWeave’s total North American-based data centers to five.
“CoreWeave is uniquely positioned to power the seemingly overnight boom in AI technology with our ability to innovate and iterate more quickly than the hyperscalers,” said CoreWeave CEO and co-founder Michael Intrator. “Magnetar’s strong, continued partnership and financial support as lead investor in this Series B round ensures we can maintain that momentum without skipping a beat. Additionally, we’re thrilled to expand our collaboration with the team at NVIDIA. NVIDIA consistently pushes the boundaries of what’s possible in the field of technology, and their vision and guidance will be invaluable as we continue to scale our organization.”
NVIDIA recently released its highest-performance data center GPU, the NVIDIA H100 Tensor Core GPU, along with the NVIDIA HGX H100 platform. CoreWeave announced at the NVIDIA GTC conference in March that its HGX H100 clusters are live and currently serving clients such as Anlatan, the creators of NovelAI. In addition to HGX H100, CoreWeave offers more than 11 NVIDIA GPU SKUs, interconnected with the NVIDIA Quantum InfiniBand in-network computing platform, all available to clients on demand and via reserved instance contracts.
Investor Perspectives on $221M Series B Round
“AI has reached an inflection point, and we’re seeing accelerated interest in AI computing infrastructure from startups to major enterprises,” said Manuvir Das, Vice President of Enterprise Computing at NVIDIA. “CoreWeave’s strategy of delivering accelerated computing infrastructure for generative AI, large language models and AI factories will help bring the highest-performance, most energy-efficient computing platform to every industry.”
“With the seemingly limitless boundaries of AI applications and technologies, the demand for compute-intensive hardware and infrastructure is higher than it's ever been,” said Ernie Rogers, Magnetar’s chief operating officer. “CoreWeave’s innovative, agile and customizable product offering is well-situated to service this demand and the company is consequently experiencing explosive growth to support it. We are proud to collaborate with NVIDIA in supporting CoreWeave’s next phase of growth as it continues to bolster its already strong positioning in the marketplace.”
Founded in 2017, CoreWeave is a specialized cloud provider, delivering a massive scale of GPU compute resources on top of the industry’s fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute-intensive use cases — VFX and rendering, machine learning and AI, batch processing and pixel streaming — that are up to 35 times faster and 80% less expensive than the large, generalized public clouds.
STORAGE MANAGEMENT, DATA STORAGE
PR Newswire | May 24, 2023
Pure Storage, the IT pioneer that delivers the world's most advanced data storage technology and services, announced that MediaZen, a leading artificial intelligence (AI)-based voice recognition provider in South Korea, is leveraging FlashBlade®, its unified fast file and object storage platform, to accelerate time to market for new AI services and enhance the R&D capabilities that propel its AI innovation and competitiveness.
MediaZen's leading voice recognition solutions serve customers across industries including automotive, education, public service, retail, and telecommunications. Driven by operations in its NAMZ Language Engineering Research Institute and the Magok R&D Center, MediaZen is at the cutting edge of AI-enabled voice recognition technologies, with a goal to become the leading provider of AI services in South Korea.
To accelerate innovation across its AI R&D, the company required a data storage solution that could speed the preparation of training data and the consolidation of training results in its AI research tasks. However, MediaZen faced difficulties scaling GPU clusters due to the limited flexibility of its legacy storage solution, which also could not process large volumes of unstructured data and AI workloads effectively.
MediaZen selected FlashBlade for its high-performance parallel processing architecture, superior I/O performance, and simple management and upgradability. Benefits Pure Storage delivers to MediaZen include:
Accelerated Time to Market: With FlashBlade, MediaZen reduced voice recognition modeling tasks that previously took up to 12 months to just two weeks, a roughly 96% reduction, and created new speech recognition models in only four weeks with high-speed shared storage that supports a multi-GPU distributed processing environment.
Simple, Efficient Storage Management: Data movement between GPUs and storage, and within shared storage environments, can now be processed at high speeds, while integrated networking reduces storage complexity and enables efficient operation of compute and storage environments with no extra headcount.
Enhanced R&D Capabilities to Propel AI Innovation: With FlashBlade, MediaZen created a new infrastructure environment to advance its AI R&D and to develop and expand services that are specific to market needs, providing a foundation for global growth. As a result, MediaZen has seen remarkable growth by providing diverse markets with AI-powered voice and language services.
"With FlashBlade, MediaZen now has the infrastructure to advance its AI services to meet current and future market demands. Tasks involving STT modeling that would have taken up to 12 months to complete using the legacy equipment were completed in just 2 weeks with the aid of Pure Storage's solutions. We at MediaZen are thrilled with the superb performance of FlashBlade, and the simple operation and maintenance of it that requires no additional headcount to manage." -- Yoon JongSung, Deputy Director, NAMZ AI R&D Group, MediaZen
"The AI era has arrived, and the need for modern all-flash storage systems to support large-scale AI workloads and advanced data analytics is increasing daily. To support market leaders including MediaZen innovating and growing their businesses, we at Pure Storage will deliver higher performance, capacity density, and reliability with our differentiated flash technology." -- Jaesung Yoo, Managing Director, Pure Storage Korea
About Pure Storage
Pure Storage uncomplicates data storage, forever. Pure delivers a cloud experience that empowers every organization to get the most from their data while reducing the complexity and expense of managing the infrastructure behind it. Pure's commitment to providing true storage as-a-service gives customers the agility to meet changing data needs at speed and scale, whether they are deploying traditional workloads, modern applications, containers, or more. Pure believes it can make a significant impact in reducing data center emissions worldwide through its environmental sustainability efforts, including designing products and solutions that enable customers to reduce their carbon and energy footprint. And with the highest Net Promoter Score in the industry, Pure's ever-expanding customer base is among the happiest in the world.