Hyper-Converged Infrastructure
Article | October 3, 2023
StarlingX—the open source edge computing and IoT cloud platform optimized for low-latency and high-performance applications—is available in its 5.0 release today. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them.
Hyper-Converged Infrastructure
Article | October 3, 2023
The success of 5G technology is a function of both the infrastructure that supports it and the ecosystems that enable it. Today, the definitive focus in the 5G space is on enterprise use cases, ranging from dedicated private 5G networks to accessing edge compute infrastructure and public or private clouds from the public 5G network. As a result, vendor-neutral multitenant data center providers and their rich interconnection capabilities are pivotal in helping make 5G a reality. This is true both in terms of the physical infrastructure needed to support 5G and the ability to effectively connect enterprises to 5G.
Industry experts expect 5G to enable emerging applications such as virtual and augmented reality (AR/VR), industrial robotics/controls as part of the industrial internet of things (IIoT), interactive gaming, autonomous driving, and remote medical procedures. These applications need a modern, cloud-based infrastructure to meet requirements around latency, cost, availability and scalability. This infrastructure must be able to provide real-time, high-bandwidth, low-latency access to latency-dependent applications distributed at the edge of the network.
How Equinix thinks about network slicing
Network slicing refers to the ability to provision and connect functions within a common physical network to provide the resources necessary to deliver service functionality under specific performance constraints (such as latency, throughput, capacity and reliability) and functional constraints (such as security and applications/services). With network slicing, enterprises can use 5G networks and services for a wide variety of use cases on the same infrastructure.
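To make this concrete, the performance and functional constraints of a slice can be captured in a simple descriptor. The following Python sketch is illustrative only; the class and field names are assumptions, not drawn from any 3GPP slice template:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceProfile:
    """Illustrative descriptor for one network slice's constraints."""
    name: str
    max_latency_ms: float       # performance constraint
    min_throughput_mbps: float  # performance constraint
    isolation_required: bool    # functional constraint (e.g., security)

def satisfies(profile: SliceProfile, latency_ms: float, throughput_mbps: float) -> bool:
    """Check whether observed performance meets the slice's constraints."""
    return (latency_ms <= profile.max_latency_ms
            and throughput_mbps >= profile.min_throughput_mbps)

# Two slices sharing one physical network, with different guarantees
urllc = SliceProfile("low-latency", max_latency_ms=5.0,
                     min_throughput_mbps=50.0, isolation_required=True)
bulk = SliceProfile("bulk-data", max_latency_ms=100.0,
                    min_throughput_mbps=500.0, isolation_required=False)
print(satisfies(urllc, latency_ms=3.2, throughput_mbps=80.0))  # True
```

The point of the sketch is that both slices run on the same infrastructure while being evaluated against different constraint sets.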
Providing continuity of network slices with optimal UPF placement and intelligent interconnection
Mobile traffic originates in the mobile network, but it is not confined to the mobile network domain, because it runs between the user app on a device and the server workload on multi-access edge compute (MEC) or in the cloud. Therefore, to preserve intended characteristics, the slice must be extended all the way to where the traffic wants to go. This is why we like to say “the slicing must go on.”
The placement of network functions within the slice must be optimized relative to the intended traffic flow, so that performance can be ensured end-to-end. As a result, organizations must place or activate the user plane function (UPF) in optimal locations relative to the end-to-end user plane traffic flow.
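A minimal sketch of this placement decision, assuming one-way latency estimates from the RAN to each candidate site and from each site to the application (all site names and figures below are hypothetical):

```python
def best_upf_site(sites, latency_from_ran, latency_to_app):
    """Pick the site minimizing total user-plane path latency (ms).

    latency_from_ran: one-way latency from the radio network to each site.
    latency_to_app:   one-way latency from each site to the server workload.
    """
    return min(sites, key=lambda s: latency_from_ran[s] + latency_to_app[s])

# Hypothetical candidate metros and latency estimates
sites = ["dallas", "ashburn", "silicon-valley"]
from_ran = {"dallas": 2.0, "ashburn": 18.0, "silicon-valley": 25.0}
to_app = {"dallas": 3.0, "ashburn": 1.0, "silicon-valley": 1.5}
print(best_upf_site(sites, from_ran, to_app))  # dallas (5.0 ms end to end)
```

A production placement engine would weigh capacity, cost and reliability as well, but the principle is the same: optimize relative to the end-to-end traffic flow, not either endpoint alone.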
We expect that hybrid and multicloud connectivity will remain a key requirement for enterprises using 5G access. In this case, hybrid refers to private edge computing resources (what we loosely call “MEC”) located in data centers—such as Equinix International Business Exchange™ (IBX®) data centers—and multicloud refers to accessing multiple cloud providers from 5G devices. To ensure both hybrid and multicloud connectivity, enterprises need to make the UPF part of the multidomain virtual Layer 2/Layer 3 interconnection fabric.
Because a slice must span multiple domains, automation of UPF activation, provisioning and virtual interconnection to edge compute and multicloud environments is critical.
Implementing network slicing for interconnection of core and edge technology
Equinix partnered with Kaloom to develop network slicing for interconnection of core and edge (NICE) technology within our 5G and Edge Technology Development Center (5G ETDC) in Dallas. NICE technology is built using cloud-native network fabric and high-performance 5G UPF from Kaloom. This is a production-ready software solution, running on white boxes built with P4 programmable application-specific integrated circuits (ASICs), allowing for deep network slicing and support for high-performance 5G UPF with extremely fast data transfer rates.
With NICE technology in the 5G ETDC, Equinix demonstrates:
5G UPF deployment/activation and traffic breakout at Equinix for multiple slices.
Software-defined interconnection between the 5G core and MEC resources from multiple providers.
Software-defined interconnection between the 5G core and multiple cloud service providers.
Orchestration of provisioning and automation of interconnection across the 5G core, MEC and cloud resources.
Architecture of NICE technology in the Equinix 5G ETDC
The image above shows (from left to right):
The mobile domain with radio access network (RAN), devices (simulated) and mobile backhaul connected to Equinix.
The Equinix domain with:
Equinix Metal® supporting edge computing servers and a fabric controller from Kaloom.
Network slicing fabric providing interconnection and Layer 2/Layer 3 cloud-native networking to dynamically activate UPF instances/interfaces connected with MEC environments and clouds, forming two slices (shown above in blue and red).
Equinix Fabric™ and multicloud connectivity.
This demonstrates the benefit of having the UPF as a feature of the interconnection fabric, effectively allowing UPF activation as part of the virtual fabric configuration. This ultimately enables high-performance UPF that’s suitable for use cases such as high-speed 5G fixed wireless access.
Combining UPF instances and MEC environments into an interconnection fabric makes it possible to create continuity for the slices and influence performance and functionality. Equinix Fabric adds multicloud connectivity to slices, enabling organizations to directly integrate network slicing with their mobile hybrid multicloud architectures.
Successful private 5G edge deployments deliver value in several ways. Primarily, they offer immediate access to locally provisioned elastic compute, storage and networking resources that deliver the best user and application experiences. In addition, they help businesses access a rich ecosystem of partners to unlock new technologies at the edge.
“Secure, reliable connectivity and scalable resources are essential at the edge. A multivendor strategy with best-of-breed components complemented by telemetry, advanced analytics with management and orchestration—as demonstrated with NICE in Equinix data centers—is a most effective way to meet those requirements. With Equinix’s global footprint of secure, well-equipped facilities, customers can maximize benefits.”
- Suresh Krishnan, CTO, Kaloom
Equinix and its partners are building the future of 5G
NICE technology is just one example of how the Equinix 5G and Edge Technology Development Center enables the innovation and development of real-world capabilities that underpin the edge computing and interconnection infrastructure required to successfully implement 5G use cases. A key benefit of the 5G ETDC is the ability to combine cutting-edge innovations from our partners like Kaloom with proven solutions from Equinix that already serve a large ecosystem of customers actively utilizing hybrid multicloud architectures.
Hyper-Converged Infrastructure
Article | October 10, 2023
Navigating the complex terrain of Hyper-Converged Infrastructure: Unveiling the best practices and innovative strategies to harness the maximum benefits of HCI for business transformation.
Contents
1. Introduction to Hyper-Converged Infrastructure
1.1 Evolution and Adoption of HCI
1.2 Importance of Adapting to the Changing HCI Environment
2. Challenges in HCI
2.1 Integration & Compatibility: Legacy System Integration
2.2 Efficient Lifecycle: Firmware & Software Management
2.3 Resource Forecasting: Scalability Planning
2.4 Workload Segregation: Performance Optimization
2.5 Latency Optimization: Data Access Efficiency
3. Solutions for Adapting to Changing HCI Landscape
3.1 Interoperability
3.2 Lifecycle Management
3.3 Capacity Planning
3.4 Performance Isolation
3.5 Data Locality
4. Importance of Ongoing Adaptation in the HCI Domain
4.1 Evolving Technology
4.2 Performance Optimization
4.3 Scalability and Flexibility
4.4 Security and Compliance
4.5 Business Transformation
5. Key Takeaways from the Challenges and Solutions Discussed
1. Introduction to Hyper-Converged Infrastructure
1.1 Evolution and Adoption of HCI
Hyper-Converged Infrastructure has transformed the data center by providing a consolidated, software-defined approach to infrastructure. HCI combines virtualization, storage, and networking into a single integrated system, simplifying management and improving scalability. It has gained widespread adoption because it addresses the challenges of data center consolidation, virtualization, and resource efficiency. HCI solutions have evolved to offer advanced features such as hybrid and multi-cloud support, data deduplication, and disaster recovery, making them suitable for a wide range of workloads.
The HCI market has experienced significant growth, with a diverse ecosystem of vendors offering turnkey appliances and software-defined solutions. It has become the preferred infrastructure for running workloads like VDI, databases, and edge computing. HCI's ability to simplify operations, improve resource utilization, and support diverse workloads ensures its continued relevance.
1.2 Importance of Adapting to the Changing HCI Environment
Adapting to the changing Hyper-Converged Infrastructure landscape is of utmost importance for businesses, as HCI offers a consolidated, software-defined approach to IT infrastructure, enabling streamlined management, improved scalability, and cost-effectiveness. Staying up to date with evolving HCI technologies and trends enables businesses to leverage the latest advancements to optimize their operations. Embracing HCI allows organizations to enhance resource utilization, accelerate deployment times, and support a wide range of workloads. In addition, HCI facilitates seamless integration with emerging technologies such as hybrid and multi-cloud environments, containerization, and data analytics. By adapting, businesses can stay competitive, enhance their agility, and unlock the full potential of their IT infrastructure.
2. Challenges in HCI
2.1 Integration and Compatibility: Legacy System Integration
Integrating Hyper-Converged Infrastructure with legacy systems can be challenging due to differences in architecture, protocols, and compatibility issues. Existing legacy systems may not seamlessly integrate with HCI solutions, leading to potential disruptions, data silos, and operational inefficiencies. This may hinder the organization's ability to fully leverage the benefits of HCI and limit its potential for streamlined operations and cost savings.
2.2 Efficient Lifecycle: Firmware and Software Management
Managing firmware and software updates across the HCI infrastructure can be complex and time-consuming. Ensuring that all components within the HCI stack, including compute, storage, and networking, are running the latest firmware and software versions is crucial for security, performance, and stability. However, coordinating and applying updates across the entire infrastructure can pose challenges, resulting in potential vulnerabilities, compatibility issues, and suboptimal system performance.
2.3 Resource Forecasting: Scalability Planning
Forecasting resource requirements and planning for scalability in an HCI environment is as crucial as efficiently implementing HCI systems. As workloads grow or change, accurately predicting the necessary computing, storage, and networking resources becomes essential. Without proper resource forecasting and scalability planning, organizations may face underutilization or overprovisioning of resources, leading to increased costs, performance bottlenecks, or inefficient resource allocation.
2.4 Workload Segregation: Performance Optimization
In an HCI environment, effectively segregating workloads to optimize performance can be challenging. Workloads with varying resource requirements and performance characteristics may coexist within the HCI infrastructure. Ensuring that high-performance workloads receive the necessary resources and do not impact other workloads' performance is critical. Failure to segregate workloads properly can result in resource contention, degraded performance, and potential bottlenecks, affecting the overall efficiency and user experience.
2.5 Latency Optimization: Data Access Efficiency
Optimizing data access latency in an HCI environment is a growing challenge. HCI integrates computing and storage into a unified system, and data access latency can significantly impact performance. Inefficient data retrieval and processing can lead to increased response times, reduced user satisfaction, and potential productivity losses. Without well-designed data access patterns, caching mechanisms, and optimized network configurations, data access latency within the HCI infrastructure grows and efficiency suffers.
3. Solutions for Adapting to Changing HCI Landscape
3.1 Interoperability
Achieved by: Standards-based Integration and API
HCI solutions should prioritize adherence to industry standards and provide robust support for APIs. By leveraging standardized protocols and APIs, HCI can seamlessly integrate with legacy systems, ensuring compatibility and smooth data flow between different components. This promotes interoperability, eliminates data silos, and enables organizations to leverage their existing infrastructure investments while benefiting from the advantages of HCI.
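One common way to realize this is the adapter pattern: each system, legacy or HCI, is wrapped behind a shared interface so orchestration code never depends on a vendor-specific API. A minimal Python sketch, with class and field names invented for illustration:

```python
class StorageBackend:
    """Common provisioning interface (hypothetical) shared by all systems."""
    def provision_volume(self, size_gb: int) -> dict:
        raise NotImplementedError

class LegacySanAdapter(StorageBackend):
    """Wraps a legacy SAN's vendor-specific API behind the shared interface."""
    def provision_volume(self, size_gb: int) -> dict:
        # A real adapter would translate to the SAN's own protocol here.
        return {"system": "legacy-san", "size_gb": size_gb}

class HciAdapter(StorageBackend):
    """Wraps the HCI stack's software-defined storage API."""
    def provision_volume(self, size_gb: int) -> dict:
        return {"system": "hci", "size_gb": size_gb}

def provision(backend: StorageBackend, size_gb: int) -> dict:
    """Orchestration code depends only on the interface, not the vendor."""
    return backend.provision_volume(size_gb)

print(provision(LegacySanAdapter(), 100)["system"])  # legacy-san
```

Because the orchestration layer calls only the shared interface, legacy and HCI resources can coexist without data silos in the management plane.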
3.2 Lifecycle Management
Achieved by: Centralized Firmware and Software Management
Efficient Lifecycle Management in Hyper-Converged Infrastructure can be achieved by implementing a centralized management system that automates firmware and software updates across the HCI infrastructure. This solution streamlines the process of identifying, scheduling, and deploying updates, ensuring that all components are running the latest versions. Centralized management reduces manual efforts, minimizes the risk of compatibility issues, and enhances security, stability, and overall system performance.
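As a rough illustration, a centralized manager might diff the deployed inventory against a version catalog and compute an ordered update plan. The component names, version scheme, and apply order below are assumptions made for the sketch:

```python
def plan_updates(inventory, catalog):
    """Return components that need updating, in a safe apply order.

    inventory / catalog map component name -> version as a (major, minor)
    tuple, so tuple comparison gives correct ordering. The apply order
    (firmware before storage, compute, networking) is illustrative only.
    """
    apply_order = {"firmware": 0, "storage": 1, "compute": 2, "networking": 3}
    stale = [c for c, v in inventory.items() if v < catalog.get(c, v)]
    return sorted(stale, key=lambda c: apply_order.get(c, 99))

inventory = {"firmware": (2, 1), "storage": (5, 0),
             "compute": (7, 3), "networking": (3, 2)}
catalog   = {"firmware": (2, 4), "storage": (5, 0),
             "compute": (7, 5), "networking": (3, 4)}
print(plan_updates(inventory, catalog))  # ['firmware', 'compute', 'networking']
```

Automating this diff-and-order step is what removes the manual coordination burden the previous section describes.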
3.3 Capacity Planning
Achieved by: Analytics-driven Resource Forecasting
HCI solutions should incorporate analytics-driven capacity planning capabilities. By analyzing historical and real-time data, HCI systems can accurately predict resource requirements and assist organizations in scaling their infrastructure proactively. This solution enables efficient resource utilization, avoids underprovisioning or overprovisioning, and optimizes cost savings while ensuring that performance demands are met.
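As a simple illustration of analytics-driven forecasting, a least-squares linear trend over historical utilization can flag when capacity will be exhausted. Real HCI planners use far richer models; the figures below are invented:

```python
def forecast_utilization(history, periods_ahead):
    """Extrapolate utilization via a least-squares linear trend."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# Hypothetical monthly storage utilization (%) trending upward
history = [52, 55, 59, 62, 66]
print(round(forecast_utilization(history, 3), 1))  # 76.3
```

A planner would compare such a projection against a threshold (say, 80%) to trigger proactive scaling before the bottleneck arrives.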
3.4 Performance Isolation
Achieved by: Quality of Service and Resource Allocation Policies
To achieve effective workload segregation and performance optimization, HCI solutions should provide robust Quality of Service (QoS) mechanisms and flexible resource allocation policies. QoS settings allow organizations to prioritize critical workloads, allocate resources based on predefined policies, and enforce performance guarantees for specific applications or users. This solution ensures that high-performance workloads receive the necessary resources while preventing resource contention and performance degradation for other workloads.
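A toy model of weight-based allocation: each workload receives a share proportional to its QoS weight, capped at its demand, and capacity freed by satisfied workloads is redistributed to the rest. This is a sketch, not any vendor's scheduler; all names and numbers are invented:

```python
def weighted_allocate(capacity, demands, weights):
    """Share capacity by QoS weight, capping each workload at its demand.

    Capacity left over by satisfied workloads is redistributed, so
    high-priority workloads keep their floor while nothing is stranded.
    """
    alloc = {w: 0.0 for w in demands}
    remaining = float(capacity)
    active = set(demands)
    while remaining > 1e-9 and active:
        total_w = sum(weights[w] for w in active)
        granted = 0.0
        for w in list(active):
            share = remaining * weights[w] / total_w
            take = min(share, demands[w] - alloc[w])
            alloc[w] += take
            granted += take
            if demands[w] - alloc[w] <= 1e-9:   # demand fully met
                active.discard(w)
        remaining -= granted
        if granted <= 1e-12:                     # nothing left to give out
            break
    return alloc

# 100 units of capacity; the database tier carries double QoS weight
alloc = weighted_allocate(100, demands={"db": 80, "vdi": 10, "backup": 30},
                          weights={"db": 2, "vdi": 1, "backup": 1})
print(alloc)  # db gets 60; vdi and backup receive their full demand
```

Here VDI's unused guarantee flows back to the database and backup workloads, illustrating how QoS weights prevent contention without stranding capacity.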
3.5 Data Locality
Achieved by: Data Tiering and Caching Mechanisms
Addressing latency optimization and data access efficiency, HCI solutions must incorporate data tiering and caching mechanisms. By intelligently placing frequently accessed data closer to the compute resources, such as utilizing flash storage or caching algorithms, HCI systems can minimize data access latency and improve overall performance. This solution enhances data locality, reduces network latency, and ensures faster data retrieval, resulting in optimized application response times and improved user experience.
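A caching mechanism of this kind can be sketched as a small LRU cache in front of the capacity tier: frequently read blocks stay on the fast (flash) tier, while cold reads fall through to the slower backing store. Purely illustrative:

```python
from collections import OrderedDict

class HotTierCache:
    """Tiny LRU sketch: keep the hottest blocks on the fast (flash) tier."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data, most recent last

    def read(self, block_id, capacity_tier):
        if block_id in self.blocks:            # hot path: served from flash
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id], "flash"
        data = capacity_tier[block_id]         # cold path: slower backing tier
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:   # evict least recently used
            self.blocks.popitem(last=False)
        return data, "capacity-tier"

disk = {"blk1": b"a", "blk2": b"b", "blk3": b"c"}  # stand-in backing store
cache = HotTierCache(capacity=2)
cache.read("blk1", disk)
print(cache.read("blk1", disk)[1])  # flash: second read is a cache hit
```

Production tiering engines add write-back handling, admission policies and prefetching, but the recency-based promotion shown here is the core idea behind keeping hot data close to compute.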
4. Importance of Ongoing Adaptation in the HCI Domain
Continuous adaptation is of the utmost importance in the HCI domain. HCI is a swiftly advancing technology that continues to deliver new capabilities. Organizations that stay apprised of the most recent advancements and adapt to the changing environment can maximize the benefits of HCI and maintain a competitive advantage.
Here are key reasons highlighting the significance of ongoing adaptation in the HCI domain:
4.1 Evolving Technology
HCI is constantly changing, with new features, functionalities, and enhancements being introduced regularly. Ongoing adaptation allows organizations to take advantage of these advancements and incorporate them into their infrastructure. It ensures that businesses stay up-to-date with the latest technological trends and can make informed decisions to optimize their HCI deployments.
4.2 Performance Optimization
Continuous adaptation enables organizations to fine-tune their HCI environments for optimal performance. By staying informed about performance best practices and emerging optimization techniques, businesses can make necessary adjustments to maximize resource utilization, improve workload performance, and enhance overall system efficiency. Ongoing adaptation ensures that HCI deployments are continuously optimized to meet evolving business requirements.
4.3 Scalability and Flexibility
Adapting to the changing HCI landscape facilitates scalability and flexibility. As business needs evolve, organizations may require the ability to scale their infrastructure, accommodate new workloads, or adopt hybrid or multi-cloud environments. Ongoing adaptation allows businesses to assess and implement the necessary changes to their HCI deployments, ensuring they can seamlessly scale and adapt to evolving demands.
4.4 Security and Compliance
The HCI domain is not immune to security threats and compliance requirements. Ongoing adaptation helps organizations stay vigilant and up-to-date with the latest security practices, threat landscapes, and regulatory changes. It enables businesses to implement robust security measures, proactively address vulnerabilities, and maintain compliance with industry standards and regulations. Ongoing adaptation ensures that HCI deployments remain secure and compliant in the face of evolving cybersecurity challenges.
4.5 Business Transformation
Ongoing adaptation in the HCI domain supports broader business transformation initiatives. Organizations undergoing digital transformation may need to adopt new technologies, integrate with cloud services, or embrace emerging trends like edge computing. Adapting the HCI infrastructure allows businesses to align their IT infrastructure with strategic objectives, enabling seamless integration, improved agility, and the ability to capitalize on emerging opportunities.
Ongoing adaptation is thus crucial in the HCI domain, as it enables organizations to stay current with technological advancements, optimize performance, scale infrastructure, enhance security, and align with business transformation initiatives. By continuously adapting to the evolving HCI landscape, businesses can maximize the value and benefits derived from their HCI investments.
5. Key Takeaways from the Challenges and Solutions Discussed
Hyper-Converged Infrastructure poses several implementation and operational challenges that organizations need to address for optimal performance. Integration and compatibility issues arise when integrating HCI with legacy systems, requiring standards-based integration and API support.
Efficient lifecycle management is crucial, involving centralized firmware and software management to automate updates and enhance security and stability. Accurate resource forecasting is vital for capacity planning, enabling organizations to scale their HCI infrastructure effectively. Workload segregation demands QoS mechanisms and flexible resource allocation policies to optimize performance.
Apart from these, latency optimization requires data tiering and caching mechanisms to minimize data access latency and improve application response times. By tackling these challenges and implementing appropriate solutions, businesses can harness the full potential of HCI, streamlining operations, maximizing resource utilization, and ensuring exceptional performance and user experience.
Application Infrastructure
Article | November 23, 2021
In my last blog in this series, we looked at the present state of 5G. Although it’s still early and it’s impossible to fully comprehend the potential impact of 5G use cases that haven’t been built yet, opportunities to monetize 5G with little additional investment are out there for network service providers (NSPs) who know where to look.
Now, it’s time to look toward the future. Anyone who’s been paying attention knows that 5G technology will be revolutionary across many industry use cases, but I’m not sure everyone understands just how revolutionary, and how quickly it will happen. According to Gartner®, “While 10% of CSPs in 2020 provided commercializable 5G services, which could achieve multiregional availability, this number will increase to 60% by 2024”.[i]
With so many recognizing the value of 5G and acting to capitalize on it, NSPs that fail to prepare for future 5G opportunities today are doing themselves and their enterprise customers a serious disservice. Preparing for a 5G future may seem daunting but working with a trusted interconnection partner like Equinix can help make it easier.
5G is so challenging for NSPs and their customers because it is so revolutionary. Mobile radio networks were built with consumer use cases in mind, which means the traffic from those networks is generally dumped straight to the internet. 5G is the first generation of wireless technology capable of supporting enterprise-class business applications, which means it’s also forcing many NSPs to consider alternatives to the public internet to support those applications.
User plane function breakout helps put traffic near the app
In my last article, I mentioned that one of the key steps mobile network operators (MNOs) could take to enable 5G monetization in the short term would be to bypass the public internet by enabling user traffic functions in the data center. This is certainly a step in the right direction, but to prepare themselves for future 5G and multicloud opportunities, they must go further by enabling user plane function (UPF) breakout.
The 5G opportunities of tomorrow will rely on wireless traffic residing as close as possible to business applications, to reduce the distance data must travel and keep latency as low as possible. This is a similar challenge to the one NSPs faced in the past with their wireline networks. To address that challenge, they typically deployed virtual network functions (VNFs) on their own equipment. This helped them get the network capabilities they needed, when and where they needed them, but it also required them to buy colocation capacity and figure out how to interconnect their VNFs with the rest of their digital infrastructure.
Instead, Equinix customers have the option to do UPF breakout with Equinix Metal®, our automated bare-metal-as-a-service offering, or Network Edge virtual network services on Platform Equinix®. Both options provide a simple, cost-effective way to get the edge infrastructure needed to support 5G business applications. Since both offerings are integrated with Equinix Fabric™, they allow NSPs to create secure software-defined interconnection with a rich ecosystem of partners. This streamlines the process of setting up hybrid deployments.
Working with Equinix can help make UPF breakout less daunting. Instead of investing massive amounts of money to create 5G-ready infrastructure everywhere they need it, NSPs can take advantage of more than 235 Equinix International Business Exchange™ (IBX®) data centers spread across 65 metros in 27 countries on five continents. This allows them to shift from a potentially debilitating up-front CAPEX investment to an OPEX investment spread over time, making the economics around 5G infrastructure much more manageable.
Support MEC with a wide array of partners
Multiaccess edge compute (MEC) will play a key role in enabling advanced 5G use cases, but first enterprises need a digital infrastructure capable of supporting it. This gets more complicated when they need to modernize their infrastructure while maintaining existing application-level partnerships. To put it simply, NSPs and their enterprise customers need an infrastructure provider that can not only partner with them, but also partner with their partners.
With Equinix Metal, organizations can deploy the physical infrastructure they need to support MEC at software speed, while also supporting capabilities from a diverse array of partners. For instance, Equinix Metal provides support for Google Anthos, Amazon Elastic Container Service (ECS) Anywhere and Amazon Elastic Kubernetes Service (EKS) Anywhere. These are just a few examples of how Equinix interconnection offerings make it easier to collaborate with leading cloud providers to deploy MEC-driven applications.
Provision reliable network slicing in a matter of minutes
Network slicing is another important 5G capability that can help NSPs differentiate their offerings and unlock new business opportunities. On the surface, it sounds simple: slicing up network traffic into different classes of service, so that the most important traffic is optimized for factors such as high throughput, low latency and security. However, NSPs won’t always know exactly what slices their customers will want to send or where they’ll want to send them, making network slice mapping a serious challenge.
Preparing for a 5G future may seem daunting but working with a trusted interconnection partner like Equinix can help make it easier.”
Equinix Fabric offers a quicker, more cost-effective way to map network slices, with no need for cross connects to be set on the fly. With software-defined interconnection, the counterparty that receives the network slice essentially becomes an automated function that NSPs can easily control. This means NSPs can provision network slicing in a matter of minutes, not days, even when they don’t know who the counterparty is going to be. Service automation enabled by Equinix Fabric can be a critical element of an NSP’s multidomain orchestration architecture.
5G use case: Reimagining the live event experience
As part of the MEF 3.0 Proof of Concept showcase, Equinix partnered with Spectrum Enterprise, Adva, and Juniper Networks to create a proof of concept (PoC) for a differentiated live event experience. The PoC showed how event promoters such as minor league sports teams could ingest multiple video feeds into an AI/ML-driven GPU farm that lives in an Equinix facility, and then process those feeds to present fans with custom content on demand.
With the help of network slicing and high-performance MEC, fans can build their own unique experience of the event, looking at different camera angles or following a particular player throughout the game. Event promoters can offer this personalized experience even without access to the on-site data centers that are more common in major league sports venues.
DISH taps Equinix for digital infrastructure services in support of 5G rollout
As DISH looks to build out the first nationwide 5G network in the U.S., they will partner with Equinix to gain access to critical digital infrastructure services in our IBX data centers. This is a great example of how Equinix is equipped to help its NSP partners access the modern digital infrastructure needed to capitalize on 5G—today and into the future.
DISH is taking the lead in delivering on the promise of 5G in the U.S., and our partnership with Equinix will enable us to secure critical interconnections for a nationwide 5G network. With proximity to large population centers, as well as network and cloud density, Equinix is the right partner to connect our cloud-native 5G network.”
- Jeff McSchooler, DISH executive vice president of wireless network operations