HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, DATA STORAGE
Liftr Insights | November 11, 2022
Liftr Insights, a pioneer in market intelligence driven by unique data, shows strength within NVIDIA's business.
Recent disappointing results from NVIDIA have drawn many eyes, but a deeper dive shows that one of its business units has remained strong while others have faltered. That part? Data center revenue.
NVIDIA data center revenue revolves around GPU accelerators sold to public and private cloud centers.
Public cloud providers continue to grow and expand their use of NVIDIA accelerators. Sales and deployment of components such as GPUs on public cloud providers are important indicators because public clouds mirror the needs of the larger semiconductor market, both public and private.
Liftr Insights, a provider of reliable data about public clouds and semiconductors, recently assessed the continued growth of NVIDIA's data center business.
The data show that NVIDIA continues to dominate the accelerator space despite attempts by AMD, Intel and other providers to gain a foothold. Among those other contenders are the cloud providers themselves, designing and building their own accelerators as they have been doing with ARM-based CPUs.
"We're proud to have over 40 months of data on companies like AMD, Intel, and NVIDIA that investors can use to track progress, particularly in uncertain times and when data center sales are a core part, the backbone, of these businesses."
Tab Schadt, CEO of Liftr Insights
The top six cloud providers (AWS, Azure, Google Cloud, Aliyun, Oracle Cloud, and Tencent Cloud) represent over 75% of the total public cloud market. The data show NVIDIA growing in both size and market share.
"The data show effects from recent semiconductor backlog issues, but also the continued investment in companies like NVIDIA, despite the challenges," says Schadt.
NVIDIA's most recent quarterly statement demonstrates that data center revenue represents 57% of NVIDIA's quarterly revenue, up from 41% the year prior. This percentage is expected to continue to increase in upcoming earnings reports. Since data center revenue represents the majority of NVIDIA's business, this segment is critical to monitor as the other markets for NVIDIA products (e.g., gaming, crypto mining) have become fickle.
"Our customers see our objective data as a reliable indicator in uncertain times," says Schadt. "We look forward to seeing what the next rounds of data this month will signal."
About Liftr Insights
Liftr Insights generates reliable market intelligence using unique data, including details about configurations, components, deployment geo, and pricing for:
Server processors: Intel Xeon, AMD EPYC, and AWS's Arm-based Graviton brands
Datacenter compute accelerators: GPUs, FPGAs, TPUs, and AI chips from NVIDIA, Xilinx, Intel, AMD, AWS, and Google
As shown on the Liftr Cloud Regions Map at https://bit.ly/LiftrCloudRegionsMap, the companies tracked include Amazon Web Services, Microsoft Azure, Alibaba Cloud, Google Cloud, Oracle Cloud, and Tencent Cloud, as well as semiconductor vendors AMD, Ampere, Intel, NVIDIA, and Xilinx. Liftr Insights subject matter experts translate company-specific service provider data into actionable alternative data. Market intelligence consumers can easily ingest this timely, standardized, and operationally compliant information into their predictive financial models.
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE
Fog Works | September 23, 2022
W3 Storage Lab announced today it has changed its name to Fog Works. The new name better reflects the company’s positioning, has greater brand-building potential, and is more indicative of the company’s vision of being a key builder of Web3 infrastructure, applications, and devices.
The name Fog Works is derived from the term fog computing, which was coined by Cisco. Fog computing is an extension of cloud computing: a network architecture where computing and storage are mostly decentralized and pushed to the edge of the network, but a cloud still exists in the center. Web3 is a fully decentralized, blockchain-enabled iteration of the internet. By being entirely decentralized, Web3 is essentially the ultimate fog computing architecture with no cloud in the center.
“Our goal is to make Web3 a reality for everyday consumers. Because we’re making Web3 work for everyone, the name Fog Works really encapsulates our vision. We’re excited to build a brand around it.”
Xinglu Lin, CEO of Fog Works
Fog Works has co-developed a next generation distributed storage ecosystem that is based on the public blockchain, CYFS, and the Datamall Coin. CYFS is a next-generation protocol that re-invents basic Web protocols – TCP/IP, DNS, and HTTP – to create the infrastructure necessary for the complete decentralization of Web3. It has been in development for over seven years, practically eliminates latency in file retrieval – a huge problem with current decentralized storage solutions – and has infinite scalability. Fog Works is developing a series of killer applications for both consumers and enterprises that will use both CYFS and the Datamall Coin, which facilitates a more efficient market for decentralized storage.
To further the development of decentralized applications (dApps) on CYFS, Fog Works is co-sponsoring the CodeDAO Web3 Hackathon. CodeDAO is the world’s first fully decentralized code hosting platform. During the hackathon, developers will compete for prizes by building dApps on CYFS, with seven days to complete their projects. The CodeDAO Hackathon runs October 15, 2022, to October 21, 2022. For more information, please visit https://codedao.ai/hackathon.html.
About Fog Works
Fog Works, formerly known as W3 Storage Lab, is a Web3 decentralized application company headquartered in Sunnyvale, CA with operations around the world. Its mission is to leverage the power of Web3 to help people manage, protect, and control their own data. Fog Works is led by an executive team with a highly unique blend of P2P networking experience, blockchain expertise, and entrepreneurship. It is funded by Draper Dragon Fund, OKX Blockdream Ventures, Lingfeng Capital, and other investors.
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE
Inspur Information | October 20, 2022
Inspur Information, a leading IT infrastructure solutions provider, announced today it is showcasing four recently certified Open Compute Project (OCP) systems at the 2022 OCP Global Summit, being held October 18-20, 2022 in San Jose, CA. The OCP Inspired™ products include three General-Purpose (GP) Enterprise servers, namely the Inspur NF5180M6, NF5280A6, and NF5280R6, and one high-density cloud-optimized system, the Inspur NF8260M6. Inspur continues to promote data center sustainability, accelerating the adoption of open computing from compute to storage.
The NF5180M6, powered by 3rd Gen Intel® Xeon® processors, is a mainstream 1U 2-socket platform providing performance, flexibility, and reliability for cloud Tier 1 service providers as well as enterprise customers. The NF5280A6 is a high-end dual-socket server that uses 3rd Gen AMD® EPYC™ processors, handling a variety of demanding workloads such as data analysis and processing, cloud, and high-performance computing. The NF5280R6 is a high-end dual-socket server featuring the Ampere® Altra® and Ampere® Altra® Max processors, addressing workloads such as cloud container deployment, Android cloud gaming, and big data. Contributing GP-Enterprise servers across all three platforms will help accelerate customer adoption of open compute technologies.
Open compute is breaking boundaries in data center innovation and enabling the convergence of more technologies with its unique technical edge, subtle design thinking, and ecosystem collaboration. Global collaboration and co-innovation revolving around open compute will drive further data center advancement while addressing worldwide issues such as carbon emissions.
With growing concern over data center sustainability, spanning renewable energy, recycling, thermal reuse, and liquid-cooling technologies that reduce water consumption, shrinking the carbon footprint is one of OCP's and Inspur's top priorities.
Inspur Information is taking an active part in green technology. The company has put forward a company-level "All in Liquid Cooling" strategy and released its full stack of liquid-cooled products, with cold plate liquid-cooling technology available across all of its products, including general-purpose servers, high-density servers, rack servers, and AI servers. In addition, Inspur Information has built the largest liquid-cooled data center production and R&D base in Asia as part of its march toward carbon neutrality.
“Soaring temperatures and energy prices are leading data centers to adopt liquid cooling technology to cut carbon emissions, reduce electricity usage and improve efficiencies. Inspur’s open development and green technology innovations demonstrate our commitment to enabling data center sustainability.”
Alan Chang, Vice President of Technical Operations for Inspur Information
During the 2022 OCP Global Summit, Inspur Information is also announcing and demonstrating partnerships and collaborations with key industry leaders.
The collaboration between Ampere and Inspur is driving the transition to Cloud Native with optimized designs that deliver leadership performance, scalability, and power-efficiency. We will showcase the Inspur NF5280R6 platform based on Ampere® Altra® and Ampere® Altra® Max processors at the OCP Summit. This platform is now broadly available and is in volume production at multiple customers. Inspur is a lead engineering partner for Ampere’s next-generation customer-reference platform, Mt. Mitchell. Ampere has contributed the Mt. Mitchell motherboard specification to OCP to accelerate the ecosystem adoption of Cloud Native Processors and platform hardware.
In collaboration with Habana, an Intel company, Inspur is developing AI servers based on open computing specifications that can integrate Habana® Gaudi2 deep learning processors. The Gaudi2 HL-225H mezzanine card is utilized in Inspur’s next-gen OAM platform to accelerate AI development on open architecture and supports a variety of AI computer vision and natural language processing workloads.
Since the Poseidon V2 E3.x reference system was announced at the 2021 OCP Global Summit in San Jose, the joint team has made great progress. At this year's Summit, attendees will see a demo of the system, now close to PVT production-ready, delivering up-to-date Gen5 NVMe performance under stress. Poseidon V2 E3.x is a joint open storage solution that adopts a composable architecture to maximize the benefits of the EDSFF E3.x form factor, accommodating not only PCIe Gen5 SSDs but also devices like AI/ML accelerators and CXL memory expanders. Data center users can configure the system according to application needs.
Inspur Information actively participates in the establishment of AI, edge, and other standards specifications. Examples include contributing to the improvement of the DC-SCM 2.0 specifications for the OCP Hardware Management Module project and taking a leadership role in OAM UBB specifications, Scorpion standards, edge OITT specifications, and more. Additionally, Inspur Information promotes the productization of technological standards and was among the first vendors to contribute products to the community, such as the OAI AI system, the OTII edge server, and a rack-scale system meeting three major open organization standards (OCP, ODCC, Open19).
Open compute is rapidly expanding from the Internet sector to other industries, such as telecommunications, finance, gaming, healthcare, and auto manufacturing. Omdia predicts that the market share of non-Internet industries in open compute will grow from 10.5% in 2020 to 21.9% in 2025.
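The Omdia figures above imply a brisk compound annual growth rate for the non-Internet share of open compute. A back-of-the-envelope check (our own arithmetic, not an Omdia number):

```python
# Implied compound annual growth of the non-Internet share of open
# compute, from the Omdia figures quoted above: 10.5% in 2020 rising
# to a predicted 21.9% in 2025.
share_2020 = 0.105
share_2025 = 0.219
years = 2025 - 2020

# Standard CAGR formula: (end / start) ** (1 / years) - 1
cagr = (share_2025 / share_2020) ** (1 / years) - 1
print(f"Implied annual growth of share: {cagr:.1%}")  # roughly 15.8% per year
```

In other words, the non-Internet slice of the market would need to grow its share by roughly 15.8% per year to hit the 2025 prediction.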
About Inspur Information
Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions. It is the world’s 2nd largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges.
APPLICATION INFRASTRUCTURE, DATA STORAGE
DigitalOcean | November 16, 2022
DigitalOcean Holdings, Inc., the cloud for developers, startups and SMBs, today announced it is expanding its global presence with a new data center in Sydney, Australia (SYD1). This new facility will better support DigitalOcean’s current and prospective customers who are based in or have end-users in Australia and New Zealand. Sydney is the ninth global region to house a DigitalOcean data center and the fifteenth facility overall.
The SYD1 data center features the most up-to-date network architecture and is connected to DigitalOcean's private internet edge and backbone network, providing 400 Gbps of on-net access to Asia, North America, and Europe. This reduces dependency on the public internet and, as a result, mitigates jitter, latency, and packet loss for users. All equipment has redundant network and power connections that can route traffic intelligently around unexpected failures, making for a reliable and secure experience.
“With hundreds of thousands of current customers using our global network today, we’re excited to expand the breadth and capability of our infrastructure to better serve small and medium-sized business (SMB) customers in Australia, New Zealand and the surrounding region. This state-of-the-art data center will provide low-latency connectivity and our IaaS and PaaS productivity tools for startup businesses and SMBs in these important, rapidly growing markets.”
DigitalOcean CEO, Yancey Spruill
The cloud computing market in Australia is expected to grow 12.5% by 2025, with cloud spending by SMBs expected to grow marginally faster than enterprise organizations. The strong and growing technology business landscape in Australia and in particular Sydney, coupled with the telecommunications connectivity options, including submarine communications cables connecting directly to the United States and Asia, made Sydney an ideal choice for the next DigitalOcean data center.
“This new data center was built with our small business customers’ needs in mind,” said Gabe Monroy, Chief Product Officer at DigitalOcean. “Scalability, availability, and security have been top priorities for our customers and were baked into this build, ensuring that end customers always have a great and secure experience.”
The Sydney facility will provide direct connectivity to the market and improve the overall experience of end customers utilizing applications hosted on the DigitalOcean platform. SYD1 will also provide seamless peering with hyperscalers, making a multi-cloud strategy simple for SMBs and startups who utilize more than one cloud provider.
Beginning today, users can deploy droplets, spin up DigitalOcean Kubernetes clusters, provision a managed database, or utilize any other DigitalOcean product from the SYD1 region.
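Targeting the new region from the DigitalOcean API amounts to passing its region slug when creating a droplet. A minimal sketch, assuming the Sydney region slug is `syd1` (as the announcement's naming suggests) and using common size/image slugs (`s-1vcpu-1gb`, `ubuntu-22-04-x64`) that should be checked against the current API catalog:

```python
import json
import os
import urllib.request

# Hypothetical droplet spec for the new Sydney region. The size and
# image slugs below are illustrative; verify them against the live
# DigitalOcean API catalog before use.
payload = {
    "name": "syd1-demo",
    "region": "syd1",            # the Sydney region announced here
    "size": "s-1vcpu-1gb",
    "image": "ubuntu-22-04-x64",
}

# Only issue the real API call if an access token is configured.
token = os.environ.get("DIGITALOCEAN_TOKEN")
if token:
    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/droplets",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        droplet = json.load(resp)["droplet"]
        print(f"Created droplet {droplet['id']} in {payload['region']}")
```

The same region slug would apply when provisioning Kubernetes clusters or managed databases through their respective endpoints.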
DigitalOcean simplifies cloud computing so builders can spend more time creating software that changes the world. With its mission-critical infrastructure and fully managed offerings, DigitalOcean helps developers, startups and small and medium-sized businesses (SMBs) rapidly build, deploy and scale applications to accelerate innovation and increase productivity and agility. DigitalOcean combines the power of simplicity, community, open source and customer support so customers can spend less time managing their infrastructure and more time building innovative applications that drive business growth.