Enterprise Infrastructure, the Technology Backbone of Modern Business

Gaby Matar, Group Managing Partner for eSolutions Maximo, discusses how modern-day organizations rely on IT solutions to create and maintain their competitive edge.

Spotlight

Cisco

Cisco was founded in 1984 by a small group of computer scientists from Stanford University. Since the company's inception, Cisco engineers have been leaders in the development of Internet Protocol (IP)-based networking technologies. Today, with more than 71,000 employees worldwide, this tradition of innovation continues with industry-leading products and solutions in the company's core development areas of routing and switching.

OTHER ARTICLES
Hyper-Converged Infrastructure

As Edge Applications Multiply, OpenInfra Community Delivers StarlingX 5.0, Offering Cloud Infrastructure Stack for 5G, IoT

Article | October 3, 2023

StarlingX—the open source edge computing and IoT cloud platform optimized for low-latency and high-performance applications—is available in its 5.0 release today. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them.

Hyper-Converged Infrastructure

Advancing 5G with cloud-native networking and intelligent infrastructure

Article | October 3, 2023

The success of 5G technology is a function of both the infrastructure that supports it and the ecosystems that enable it. Today, the definitive focus in the 5G space is on enterprise use cases, ranging from dedicated private 5G networks to accessing edge compute infrastructure and public or private clouds from the public 5G network. As a result, vendor-neutral multitenant data center providers and their rich interconnection capabilities are pivotal in helping make 5G a reality. This is true both in terms of the physical infrastructure needed to support 5G and the ability to effectively connect enterprises to 5G.

Industry experts expect 5G to enable emerging applications such as augmented and virtual reality (AR/VR), industrial robotics/controls as part of the industrial internet of things (IIoT), interactive gaming, autonomous driving, and remote medical procedures. These applications need a modern, cloud-based infrastructure to meet requirements around latency, cost, availability and scalability. This infrastructure must be able to provide real-time, high-bandwidth, low-latency access to latency-dependent applications distributed at the edge of the network.

How Equinix thinks about network slicing

Network slicing refers to the ability to provision and connect functions within a common physical network to provide the resources necessary to deliver service functionality under specific performance constraints (such as latency, throughput, capacity and reliability) and functional constraints (such as security and applications/services). With network slicing, enterprises can use 5G networks and services for a wide variety of use cases on the same infrastructure.
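The definition above — a set of provisioned functions delivering service under specific performance and functional constraints — can be pictured as a simple data model. This is an illustrative sketch, not an Equinix or 3GPP API; the class and field names are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SliceConstraints:
    """Performance targets a slice must honor end-to-end."""
    max_latency_ms: float
    min_throughput_mbps: float
    reliability_pct: float = 99.9

@dataclass
class NetworkSlice:
    """A logical slice provisioned on shared physical infrastructure."""
    name: str
    constraints: SliceConstraints
    functions: list = field(default_factory=list)  # e.g. ["UPF", "MEC-app"]

    def meets(self, observed_latency_ms: float, observed_throughput_mbps: float) -> bool:
        """Check observed performance against the slice's constraints."""
        return (observed_latency_ms <= self.constraints.max_latency_ms
                and observed_throughput_mbps >= self.constraints.min_throughput_mbps)

# Two slices sharing the same physical network but with different targets
urllc = NetworkSlice("industrial-controls", SliceConstraints(5.0, 50.0), ["UPF", "MEC-app"])
embb = NetworkSlice("video-streaming", SliceConstraints(50.0, 500.0), ["UPF"])

print(urllc.meets(3.2, 80.0))   # True: within latency and throughput targets
print(embb.meets(60.0, 800.0))  # False: latency budget exceeded
```

The point of the model is that both slices run on the same infrastructure but are evaluated against their own constraint sets, which is what lets one physical network serve very different enterprise use cases.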
Providing continuity of network slices with optimal UPF placement and intelligent interconnection

Mobile traffic originates in the mobile network, but it is not confined to the mobile network domain, because it runs between the user app on a device and the server workload on multi-access edge compute (MEC) or in the cloud. Therefore, to preserve the intended characteristics, the slice must be extended all the way to where the traffic wants to go. This is why we like to say “the slicing must go on.” The placement of network functions within the slice must be optimized relative to the intended traffic flow, so that performance can be ensured end to end. As a result, organizations must place or activate the user plane function (UPF) in optimal locations relative to the end-to-end user plane traffic flow.

We expect that hybrid and multicloud connectivity will remain a key requirement for enterprises using 5G access. In this case, hybrid refers to private edge computing resources (what we loosely call “MEC”) located in data centers—such as Equinix International Business Exchange™ (IBX®) data centers—and multicloud refers to accessing multiple cloud providers from 5G devices. To ensure both hybrid and multicloud connectivity, enterprises need to make the UPF part of the multidomain virtual Layer 2/Layer 3 interconnection fabric. Because a slice must span multiple domains, automation of UPF activation, provisioning and virtual interconnection to edge compute and multicloud environments is critical.

Implementing network slicing for interconnection of core and edge technology

Equinix partnered with Kaloom to develop network slicing for interconnection of core and edge (NICE) technology within our 5G and Edge Technology Development Center (5G ETDC) in Dallas. NICE technology is built using cloud-native network fabric and high-performance 5G UPF from Kaloom.
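The optimal-placement principle described above — activate the UPF where it minimizes end-to-end user-plane latency — can be pictured with a toy selection routine. The site names and latency figures below are hypothetical, not measurements from the 5G ETDC:

```python
# Candidate data center sites with assumed latency (ms) from the RAN
# and from the site to the target workload (MEC or cloud).
candidate_sites = {
    "dallas-ibx":  {"ran_ms": 4.0,  "workload_ms": 1.5},
    "chicago-ibx": {"ran_ms": 12.0, "workload_ms": 0.8},
    "ashburn-ibx": {"ran_ms": 28.0, "workload_ms": 0.5},
}

def best_upf_site(sites):
    """Pick the site minimizing total user-plane latency (RAN -> UPF -> workload)."""
    return min(sites, key=lambda s: sites[s]["ran_ms"] + sites[s]["workload_ms"])

print(best_upf_site(candidate_sites))  # dallas-ibx (5.5 ms total)
```

Note that the site closest to the workload alone (ashburn-ibx) is not the winner; it is the end-to-end path, RAN to UPF to workload, that has to be minimized, which is the argument the article makes for placing the UPF relative to the whole traffic flow.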
This is a production-ready software solution, running on white boxes built with P4 programmable application-specific integrated circuits (ASICs), allowing for deep network slicing and support for high-performance 5G UPF with extremely fast data transfer rates.

With NICE technology in the 5G ETDC, Equinix demonstrates:
- 5G UPF deployment/activation and traffic breakout at Equinix for multiple slices.
- Software-defined interconnection between the 5G core and MEC resources from multiple providers.
- Software-defined interconnection between the 5G core and multiple cloud service providers.
- Orchestration of provisioning and automation of interconnection across the 5G core, MEC and cloud resources.

Architecture of NICE technology in the Equinix 5G ETDC

The image above shows (from left to right):
- The mobile domain with radio access network (RAN), devices (simulated) and mobile backhaul connected to Equinix.
- The Equinix domain with:
  - Equinix Metal® supporting edge computing servers and a fabric controller from Kaloom.
  - Network slicing fabric providing interconnection and Layer 2/Layer 3 cloud-native networking to dynamically activate UPF instances/interfaces connected with MEC environments and clouds, forming two slices (shown above in blue and red).
  - Equinix Fabric™ and multicloud connectivity.

This demonstrates the benefit of having the UPF as a feature of the interconnection fabric, effectively allowing UPF activation as part of the virtual fabric configuration. This ultimately enables high-performance UPF that’s suitable for use cases such as high-speed 5G fixed wireless access. Combining UPF instances and MEC environments into an interconnection fabric makes it possible to create continuity for the slices and influence performance and functionality. Equinix Fabric adds multicloud connectivity to slices, enabling organizations to directly integrate network slicing with their mobile hybrid multicloud architectures.
“Successful private 5G edge deployments deliver value in several ways. Primarily, they offer immediate access to locally provisioned elastic compute, storage and networking resources that deliver the best user and application experiences. In addition, they help businesses access a rich ecosystem of partners to unlock new technologies at the edge. Secure, reliable connectivity and scalable resources are essential at the edge. A multivendor strategy with best-of-breed components complemented by telemetry, advanced analytics with management and orchestration—as demonstrated with NICE in Equinix data centers—is a most effective way to meet those requirements. With Equinix’s global footprint of secure, well-equipped facilities, customers can maximize benefits.” - Suresh Krishnan, CTO, Kaloom

Equinix and its partners are building the future of 5G

NICE technology is just one example of how the Equinix 5G and Edge Technology Development Center enables the innovation and development of real-world capabilities that underpin the edge computing and interconnection infrastructure required to successfully implement 5G use cases. A key benefit of the 5G ETDC is the ability to combine cutting-edge innovations from our partners like Kaloom with proven solutions from Equinix that already serve a large ecosystem of customers actively utilizing hybrid multicloud architectures.

Hyper-Converged Infrastructure

Adapting to Changing Landscape: Challenges and Solutions in HCI

Article | October 10, 2023

Navigating the complex terrain of Hyper-Converged Infrastructure: unveiling the best practices and innovative strategies to harness the maximum benefits of HCI for business transformation.

Contents
1. Introduction to Hyper-Converged Infrastructure
1.1 Evolution and Adoption of HCI
1.2 Importance of Adapting to the Changing HCI Environment
2. Challenges in HCI
2.1 Integration & Compatibility: Legacy System Integration
2.2 Efficient Lifecycle: Firmware & Software Management
2.3 Resource Forecasting: Scalability Planning
2.4 Workload Segregation: Performance Optimization
2.5 Latency Optimization: Data Access Efficiency
3. Solutions for Adapting to the Changing HCI Landscape
3.1 Interoperability
3.2 Lifecycle Management
3.3 Capacity Planning
3.4 Performance Isolation
3.5 Data Locality
4. Importance of Ongoing Adaptation in the HCI Domain
4.1 Evolving Technology
4.2 Performance Optimization
4.3 Scalability and Flexibility
4.4 Security and Compliance
4.5 Business Transformation
5. Key Takeaways from the Challenges and Solutions Discussed

1. Introduction to Hyper-Converged Infrastructure

1.1 Evolution and Adoption of HCI

Hyper-Converged Infrastructure has transformed data center infrastructure by providing a consolidated, software-defined approach. HCI combines virtualization, storage, and networking into a single integrated system, simplifying management and improving scalability. It has gained widespread adoption due to its ability to address the challenges of data center consolidation, virtualization, and resource efficiency. HCI solutions have evolved to offer advanced features like hybrid and multi-cloud support, data deduplication, and disaster recovery, making them suitable for various workloads. The HCI market has experienced significant growth, with a diverse ecosystem of vendors offering turnkey appliances and software-defined solutions. It has become the preferred infrastructure for running workloads like VDI, databases, and edge computing.
HCI's ability to simplify operations, improve resource utilization, and support diverse workloads ensures its continued relevance.

1.2 Importance of Adapting to the Changing HCI Environment

Adapting to the changing Hyper-Converged Infrastructure landscape is of utmost importance for businesses, as it offers a consolidated and software-defined approach to IT infrastructure, enabling streamlined management, improved scalability, and cost-effectiveness. Staying up-to-date with evolving HCI technologies and trends enables businesses to leverage the latest advancements for optimizing their operations. Embracing HCI enables organizations to enhance resource utilization, accelerate deployment times, and support a wide range of workloads. Moreover, it facilitates seamless integration with emerging technologies like hybrid and multi-cloud environments, containerization, and data analytics. Businesses can stay competitive, enhance their agility, and unlock the full potential of their IT infrastructure.

2. Challenges in HCI

2.1 Integration and Compatibility: Legacy System Integration

Integrating Hyper-Converged Infrastructure with legacy systems can be challenging due to differences in architecture, protocols, and compatibility issues. Existing legacy systems may not seamlessly integrate with HCI solutions, leading to potential disruptions, data silos, and operational inefficiencies. This may hinder the organization's ability to fully leverage the benefits of HCI and limit its potential for streamlined operations and cost savings.

2.2 Efficient Lifecycle: Firmware and Software Management

Managing firmware and software updates across the HCI infrastructure can be complex and time-consuming. Ensuring that all components within the HCI stack, including compute, storage, and networking, are running the latest firmware and software versions is crucial for security, performance, and stability.
However, coordinating and applying updates across the entire infrastructure can pose challenges, resulting in potential vulnerabilities, compatibility issues, and suboptimal system performance.

2.3 Resource Forecasting: Scalability Planning

Forecasting resource requirements and planning for scalability in an HCI environment is as crucial as efficiently implementing HCI systems. As workloads grow or change, accurately predicting the necessary computing, storage, and networking resources becomes essential. Without proper resource forecasting and scalability planning, organizations may face underutilization or overprovisioning of resources, leading to increased costs, performance bottlenecks, or inefficient resource allocation.

2.4 Workload Segregation: Performance Optimization

In an HCI environment, effectively segregating workloads to optimize performance can be challenging. Workloads with varying resource requirements and performance characteristics may coexist within the HCI infrastructure. Ensuring that high-performance workloads receive the necessary resources and do not impact other workloads' performance is critical. Failure to segregate workloads properly can result in resource contention, degraded performance, and potential bottlenecks, affecting the overall efficiency and user experience.

2.5 Latency Optimization: Data Access Efficiency

Optimizing data access latency in an HCI environment is a growing challenge. HCI integrates computing and storage into a unified system, and data access latency can significantly impact performance. Inefficient data retrieval and processing can lead to increased response times, reduced user satisfaction, and potential productivity losses. Such latency arises when data access patterns, caching mechanisms, and network configurations are not tuned to minimize latency and maximize data access efficiency within the HCI infrastructure.

3. Solutions for Adapting to the Changing HCI Landscape

3.1 Interoperability
Achieved by: Standards-Based Integration and APIs

HCI solutions should prioritize adherence to industry standards and provide robust support for APIs. By leveraging standardized protocols and APIs, HCI can seamlessly integrate with legacy systems, ensuring compatibility and smooth data flow between different components. This promotes interoperability, eliminates data silos, and enables organizations to leverage their existing infrastructure investments while benefiting from the advantages of HCI.

3.2 Lifecycle Management
Achieved by: Centralized Firmware and Software Management

Efficient lifecycle management in Hyper-Converged Infrastructure can be achieved by implementing a centralized management system that automates firmware and software updates across the HCI infrastructure. This solution streamlines the process of identifying, scheduling, and deploying updates, ensuring that all components are running the latest versions. Centralized management reduces manual effort, minimizes the risk of compatibility issues, and enhances security, stability, and overall system performance.

3.3 Capacity Planning
Achieved by: Analytics-Driven Resource Forecasting

HCI solutions should incorporate analytics-driven capacity planning capabilities. By analyzing historical and real-time data, HCI systems can accurately predict resource requirements and assist organizations in scaling their infrastructure proactively. This solution enables efficient resource utilization, avoids underprovisioning or overprovisioning, and optimizes cost savings while ensuring that performance demands are met.

3.4 Performance Isolation
Achieved by: Quality of Service and Resource Allocation Policies

To achieve effective workload segregation and performance optimization, HCI solutions should provide robust Quality of Service (QoS) mechanisms and flexible resource allocation policies.
QoS settings allow organizations to prioritize critical workloads, allocate resources based on predefined policies, and enforce performance guarantees for specific applications or users. This solution ensures that high-performance workloads receive the necessary resources while preventing resource contention and performance degradation for other workloads.

3.5 Data Locality
Achieved by: Data Tiering and Caching Mechanisms

Addressing latency optimization and data access efficiency, HCI solutions must incorporate data tiering and caching mechanisms. By intelligently placing frequently accessed data closer to the compute resources, such as by utilizing flash storage or caching algorithms, HCI systems can minimize data access latency and improve overall performance. This solution enhances data locality, reduces network latency, and ensures faster data retrieval, resulting in optimized application response times and improved user experience.

4. Importance of Ongoing Adaptation in the HCI Domain

Continuous adaptation is of the utmost importance in the HCI domain. HCI is a swiftly advancing technology that continues to provide new capabilities. Organizations can maximize the benefits of HCI and maintain a competitive advantage if they stay apprised of the most recent advancements and adapt to the changing environment. Here are key reasons highlighting the significance of ongoing adaptation in the HCI domain:

4.1 Evolving Technology

HCI is constantly changing, with new features, functionalities, and enhancements being introduced regularly. Ongoing adaptation allows organizations to take advantage of these advancements and incorporate them into their infrastructure. It ensures that businesses stay up-to-date with the latest technological trends and can make informed decisions to optimize their HCI deployments.

4.2 Performance Optimization

Continuous adaptation enables organizations to fine-tune their HCI environments for optimal performance.
By staying informed about performance best practices and emerging optimization techniques, businesses can make necessary adjustments to maximize resource utilization, improve workload performance, and enhance overall system efficiency. Ongoing adaptation ensures that HCI deployments are continuously optimized to meet evolving business requirements.

4.3 Scalability and Flexibility

Adapting to the changing HCI landscape facilitates scalability and flexibility. As business needs evolve, organizations may require the ability to scale their infrastructure, accommodate new workloads, or adopt hybrid or multi-cloud environments. Ongoing adaptation allows businesses to assess and implement the necessary changes to their HCI deployments, ensuring they can seamlessly scale and adapt to evolving demands.

4.4 Security and Compliance

The HCI domain is not immune to security threats and compliance requirements. Ongoing adaptation helps organizations stay vigilant and up-to-date with the latest security practices, threat landscapes, and regulatory changes. It enables businesses to implement robust security measures, proactively address vulnerabilities, and maintain compliance with industry standards and regulations. Ongoing adaptation ensures that HCI deployments remain secure and compliant in the face of evolving cybersecurity challenges.

4.5 Business Transformation

Ongoing adaptation in the HCI domain supports broader business transformation initiatives. Organizations undergoing digital transformation may need to adopt new technologies, integrate with cloud services, or embrace emerging trends like edge computing. Adapting the HCI infrastructure allows businesses to align their IT infrastructure with strategic objectives, enabling seamless integration, improved agility, and the ability to capitalize on emerging opportunities.
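Looping back to the solutions in section 3, the analytics-driven resource forecasting described in 3.3 can be illustrated with a minimal trend projection: fit a line to historical utilization and estimate when capacity runs out. This is a hedged sketch with invented utilization numbers, not any vendor's algorithm:

```python
def forecast_exhaustion(history, capacity):
    """Fit a least-squares line to historical utilization and estimate
    how many periods remain until capacity is reached."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # utilization flat or shrinking; no exhaustion forecast
    return (capacity - history[-1]) / slope  # periods until full

# Monthly storage utilization in TB against a hypothetical 100 TB cluster
usage = [40, 44, 47, 52, 55, 60]
months_left = forecast_exhaustion(usage, 100)
print(round(months_left, 1))  # roughly 10 months at the current growth rate
```

A real capacity planner would weight recent samples more heavily and model seasonality, but even this simple projection is enough to turn raw telemetry into a proactive scaling decision rather than a reactive one.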
Ongoing adaptation is thus crucial in the HCI domain, as it enables organizations to stay current with technological advancements, optimize performance, scale infrastructure, enhance security, and align with business transformation initiatives. By continuously adapting to the evolving HCI landscape, businesses can maximize the value and benefits derived from their HCI investments.

5. Key Takeaways from the Challenges and Solutions Discussed

Hyper-Converged Infrastructure poses several challenges during the implementation and execution of systems that organizations need to address for optimal performance. Integration and compatibility issues arise when integrating HCI with legacy systems, requiring standards-based integration and API support. Efficient lifecycle management is crucial, involving centralized firmware and software management to automate updates and enhance security and stability. Accurate resource forecasting is vital for capacity planning, enabling organizations to scale their HCI infrastructure effectively. Workload segregation demands QoS mechanisms and flexible resource allocation policies to optimize performance. Beyond these, latency optimization requires data tiering and caching mechanisms to minimize data access latency and improve application response times. By tackling these challenges and implementing appropriate solutions, businesses can harness the full potential of HCI, streamlining operations, maximizing resource utilization, and ensuring exceptional performance and user experience.
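As a companion to the data tiering and caching mechanisms mentioned above (section 3.5), the core idea — keep recently accessed blocks on fast media close to compute, spill everything else to the capacity tier — can be sketched as a small LRU-style hot tier. The tier names and sizes are illustrative, not a real HCI product's policy:

```python
from collections import OrderedDict

class HotTier:
    """Keep the most recently read blocks on the fast tier (e.g. flash);
    everything else stays on the capacity tier."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> data, in recency order

    def read(self, block_id, slow_tier):
        if block_id in self.blocks:           # hot hit: served from flash
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id], "flash"
        data = slow_tier[block_id]            # miss: fetch from capacity tier
        self.blocks[block_id] = data          # promote to the hot tier
        if len(self.blocks) > self.capacity:  # evict least recently used block
            self.blocks.popitem(last=False)
        return data, "hdd"

hdd = {f"blk{i}": f"data{i}" for i in range(5)}
tier = HotTier(capacity_blocks=2)
print(tier.read("blk0", hdd)[1])  # hdd (first touch, promoted on the way)
print(tier.read("blk0", hdd)[1])  # flash (now served from the hot tier)
```

Repeated reads of a working set land on flash after the first touch, which is exactly the latency win the article attributes to data locality; the eviction policy is what keeps the expensive fast tier small.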

Application Infrastructure

How NSPs Prepare to Thrive in the 5G Era

Article | November 23, 2021

In my last blog in this series, we looked at the present state of 5G. Although it’s still early and it’s impossible to fully comprehend the potential impact of 5G use cases that haven’t been built yet, opportunities to monetize 5G with little additional investment are out there for network service providers (NSPs) who know where to look. Now, it’s time to look toward the future.

Anyone who’s been paying attention knows that 5G technology will be revolutionary across many industry use cases, but I’m not sure everyone understands just how revolutionary, and how quickly it will happen. According to Gartner®, “While 10% of CSPs in 2020 provided commercializable 5G services, which could achieve multiregional availability, this number will increase to 60% by 2024”.[i] With so many recognizing the value of 5G and acting to capitalize on it, NSPs that fail to prepare for future 5G opportunities today are doing themselves and their enterprise customers a serious disservice. Preparing for a 5G future may seem daunting, but working with a trusted interconnection partner like Equinix can help make it easier.

5G is so challenging for NSPs and their customers because it is so revolutionary. Mobile radio networks were built with consumer use cases in mind, which means the traffic from those networks is generally dumped straight to the internet. 5G is the first generation of wireless technology capable of supporting enterprise-class business applications, which means it’s also forcing many NSPs to consider alternatives to the public internet to support those applications.

User plane function breakout helps put traffic near the app

In my last article, I mentioned that one of the key steps mobile network operators (MNOs) could take to enable 5G monetization in the short term would be to bypass the public internet by enabling user traffic functions in the data center.
This is certainly a step in the right direction, but to prepare themselves for future 5G and multicloud opportunities, they must go further by enabling user plane function (UPF) breakout. The 5G opportunities of tomorrow will rely on wireless traffic residing as close as possible to business applications, to reduce the distance data must travel and keep latency as low as possible.

This is a similar challenge to the one NSPs faced in the past with their wireline networks. To address that challenge, they typically deployed virtual network functions (VNFs) on their own equipment. This helped them get the network capabilities they needed, when and where they needed them, but it also required them to buy colocation capacity and figure out how to interconnect their VNFs with the rest of their digital infrastructure.

Instead, Equinix customers have the option to do UPF breakout with Equinix Metal®, our automated bare-metal-as-a-service offering, or Network Edge virtual network services on Platform Equinix®. Both options provide a simple, cost-effective way to get the edge infrastructure needed to support 5G business applications. Since both offerings are integrated with Equinix Fabric™, they allow NSPs to create secure software-defined interconnection with a rich ecosystem of partners. This streamlines the process of setting up hybrid deployments.

Working with Equinix can help make UPF breakout less daunting. Instead of investing massive amounts of money to create 5G-ready infrastructure everywhere they need it, NSPs can take advantage of more than 235 Equinix International Business Exchange™ (IBX®) data centers spread across 65 metros in 27 countries on five continents. This allows them to shift from a potentially debilitating up-front CAPEX investment to an OPEX investment spread over time, making the economics around 5G infrastructure much more manageable.
Support MEC with a wide array of partners

Multiaccess edge compute (MEC) will play a key role in enabling advanced 5G use cases, but first enterprises need a digital infrastructure capable of supporting it. This gets more complicated when they need to modernize their infrastructure while maintaining existing application-level partnerships. To put it simply, NSPs and their enterprise customers need an infrastructure provider that can not only partner with them, but also partner with their partners. With Equinix Metal, organizations can deploy the physical infrastructure they need to support MEC at software speed, while also supporting capabilities from a diverse array of partners. For instance, Equinix Metal provides support for Google Anthos, Amazon Elastic Container Service (ECS) Anywhere and Amazon Elastic Kubernetes Service (EKS) Anywhere. These are just a few examples of how Equinix interconnection offerings make it easier to collaborate with leading cloud providers to deploy MEC-driven applications.

Provision reliable network slicing in a matter of minutes

Network slicing is another important 5G capability that can help NSPs differentiate their offerings and unlock new business opportunities. On the surface, it sounds simple: slicing up network traffic into different classes of service, so that the most important traffic is optimized for factors such as high throughput, low latency and security. However, NSPs won’t always know exactly what slices their customers will want to send or where they’ll want to send them, making network slice mapping a serious challenge. Equinix Fabric offers a quicker, more cost-effective way to map network slices, with no need for cross connects to be set up on the fly.
With software-defined interconnection, the counterparty that receives the network slice essentially becomes an automated function that NSPs can easily control. This means NSPs can provision network slicing in a matter of minutes, not days, even when they don’t know who the counterparty is going to be. Service automation enabled by Equinix Fabric can be a critical element of an NSP’s multidomain orchestration architecture.

5G use case: Reimagining the live event experience

As part of the MEF 3.0 Proof of Concept showcase, Equinix partnered with Spectrum Enterprise, Adva, and Juniper Networks to create a proof of concept (PoC) for a differentiated live event experience. The PoC showed how event promoters such as minor league sports teams could ingest multiple video feeds into an AI/ML-driven GPU farm that lives in an Equinix facility, and then process those feeds to present fans with custom content on demand. With the help of network slicing and high-performance MEC, fans can build their own unique experience of the event, looking at different camera angles or following a particular player throughout the game. Event promoters can offer this personalized experience even without access to the on-site data centers that are more common in major league sports venues.

DISH taps Equinix for digital infrastructure services in support of 5G rollout

As DISH looks to build out the first nationwide 5G network in the U.S., they will partner with Equinix to gain access to critical digital infrastructure services in our IBX data centers. This is a great example of how Equinix is equipped to help its NSP partners access the modern digital infrastructure needed to capitalize on 5G—today and into the future.

“DISH is taking the lead in delivering on the promise of 5G in the U.S., and our partnership with Equinix will enable us to secure critical interconnections for a nationwide 5G network.
With proximity to large population centers, as well as network and cloud density, Equinix is the right partner to connect our cloud-native 5G network.” - Jeff McSchooler, DISH executive vice president of wireless network operations
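The slice-mapping idea described earlier — a counterparty becomes an automated function that a slice can attach to in minutes, with no physical cross connect run on the fly — can be pictured as a small simulation. This is an illustrative toy model, not the Equinix Fabric API; every class, method and port name here is invented:

```python
import datetime

class InterconnectionFabric:
    """Toy model of software-defined interconnection: counterparties
    register ports once, and slices attach to them on demand."""
    def __init__(self):
        self.ports = {}        # counterparty name -> port id
        self.connections = []  # provisioned slice attachments

    def register(self, counterparty, port):
        self.ports[counterparty] = port

    def provision_slice(self, slice_name, counterparty):
        """Attach a network slice to a registered counterparty in software;
        no physical cross connect has to be set up at provisioning time."""
        if counterparty not in self.ports:
            raise LookupError(f"{counterparty} has no port on the fabric")
        conn = {
            "slice": slice_name,
            "port": self.ports[counterparty],
            "provisioned_at": datetime.datetime.now(datetime.timezone.utc),
        }
        self.connections.append(conn)
        return conn

fabric = InterconnectionFabric()
fabric.register("cloud-provider-a", "port-17")
conn = fabric.provision_slice("low-latency-gaming", "cloud-provider-a")
print(conn["port"])  # port-17
```

The key property the sketch captures is that provisioning is a software lookup against pre-registered endpoints, which is why an NSP can attach a slice to a counterparty it did not know about in advance, in minutes rather than days.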


Related News

Hyper-Converged Infrastructure, Storage Management, IT Systems Management

Supermicro Launches Industry's First NVIDIA HGX H100 8 and 4-GPU H100 Servers with Liquid Cooling -- Reduces Data Center Power Costs by Up to 40%

Prnewswire | May 22, 2023

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to expand its data center offerings with liquid-cooled NVIDIA HGX H100 rack scale solutions. Advanced liquid cooling technologies entirely from Supermicro reduce the lead time for a complete installation, increase performance, and result in lower operating expenses while significantly reducing the PUE of data centers. Power savings for a data center are estimated at 40% when using Supermicro liquid cooling solutions compared to an air-cooled data center. In addition, up to an 86% reduction in direct cooling costs compared to existing data centers may be realized.

"Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centers worldwide," said Charles Liang, president and CEO of Supermicro. "Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centers. With the amount of power required to enable today's rapidly evolving large scale AI models, optimizing TCO and the Total Cost to Environment (TCE) is crucial to data center operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployment, allowing us to meet our customers' requirements with a short lead time."

To learn more about Supermicro's GPU servers, visit: https://www.supermicro.com/en/products/gpu

AI-optimized racks with the latest Supermicro product families, including the Intel and AMD server product lines, can be quickly delivered from standard engineering templates or easily customized based on the user's unique requirements.
Supermicro continues to offer the industry's broadest product line, with high-performing servers and storage systems to tackle complex compute-intensive projects. Rack-scale integrated solutions give customers the confidence and ability to plug the racks in, connect to the network, and become productive sooner than if they managed the technology themselves. The top-of-the-line liquid-cooled GPU server contains dual Intel or AMD CPUs and eight or four interconnected NVIDIA HGX H100 Tensor Core GPUs. Using liquid cooling reduces the power consumption of data centers by up to 40%, resulting in lower operating costs. In addition, both systems significantly surpass the previous generation of NVIDIA HGX GPU-equipped systems, providing up to 30x the performance and efficiency on today's large transformer models thanks to faster GPU-GPU interconnect speeds and PCIe 5.0-based networking and storage. State-of-the-art eight-GPU NVIDIA H100 SXM5 Tensor Core servers from Supermicro for today's largest-scale AI models include: SYS-821GE-TNHR (dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 8-GPU, 8U): https://www.supermicro.com/en/products/system/GPU/8U/SYS-821GE-TNHR AS-8125GS-TNHR (dual 4th Gen AMD EPYC CPUs, NVIDIA HGX H100 8-GPU, 8U): https://www.supermicro.com/en/products/system/GPU/8U/AS-8125GS-TNHR Supermicro also designs a range of GPU servers customizable for fast AI training, high-volume AI inferencing, or AI-fused HPC workloads, including systems with four NVIDIA H100 SXM5 Tensor Core GPUs.
SYS-421GU-TNXR (dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 4-GPU, 4U): https://www.supermicro.com/en/products/system/GPU/4U/SYS-421GU-TNXR SYS-521GU-TNXR (dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 4-GPU, 5U): https://www.supermicro.com/en/products/system/GPU/4U/SYS-521GU-TNXR Supermicro's rack-level liquid cooling solution includes a Coolant Distribution Unit (CDU) that provides up to 80 kW of direct-to-chip (D2C) cooling for today's highest-TDP CPUs and GPUs across a wide range of Supermicro servers. Redundant, hot-swappable power supplies and liquid cooling pumps ensure the servers remain continuously cooled even if a power supply or pump fails, and leak-proof connectors give customers the added confidence of uninterrupted liquid cooling for all systems. Learn more about the Supermicro liquid cooling system at: https://www.supermicro.com/en/solutions/liquid-cooling Rack-scale design and integration has become a critical service for systems suppliers. As AI and HPC become increasingly critical technologies within organizations, configurations from the server level to the entire data center must be optimized and configured for maximum performance. Supermicro's system and rack-scale experts work closely with customers to explore their requirements and have the knowledge and manufacturing capacity to deliver significant numbers of racks to customers worldwide. Read the Supermicro Large Scale AI Solution Brief: https://www.supermicro.com/solutions/Solution-Brief_Rack_Scale_AI.pdf Supermicro at ISC: To explore these technologies and meet with Supermicro's experts, plan to visit Supermicro Booth D405 at the ISC High Performance 2023 event in Hamburg, Germany, May 21-25, 2023. About Super Micro Computer, Inc. Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions.
Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT infrastructure. The company is transforming into a Total IT Solutions provider with server, AI, storage, IoT, and switch systems, software, and services, while delivering advanced high-volume motherboard, power, and chassis products. Products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency, and are optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from flexible, reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free-air cooling, or liquid cooling). Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

Read More

Nutanix Announces Remote IT Solutions for Cloud Infrastructure Management

Nutanix | June 25, 2020

Nutanix announced new solutions that will allow IT teams to deploy, upgrade and troubleshoot their cloud infrastructure while working from anywhere — whether at home or from a central office. These solutions will be delivered via Nutanix Foundation Central, Insights and Lifecycle Manager, all of which will be available as part of Nutanix HCI software at no additional cost to customers. While IT teams have been working overtime to deliver remote work solutions for businesses worldwide in light of the COVID-19 pandemic, they are not always able to work remotely themselves. Managing IT infrastructure, troubleshooting issues, and updating software often require IT teams to be on-site at their data center, something made even more challenging by social distancing requirements.

Read More

BloqCloud: on-demand blockchain infrastructure

Enterprise Times | August 21, 2019

Bloq has made its BloqCloud platform for on-demand blockchain infrastructure services (BaaS, or Blockchain as a Service) generally available. BloqCloud is applicable both to conventional enterprises exploring blockchain technology and to blockchain-native companies building next-generation cryptocurrency exchanges, wallets and other services. Bloq believes the future state of blockchain will include multi-network, multi-chain and multi-token support, thereby enabling a software infrastructure capable of engaging with tokenised networks. This will happen via BloqEnterprise, with tokenised networks and applications coming via BloqLabs.

Read More


Events