Hyper-Converged Infrastructure
Article | July 13, 2023
StarlingX—the open source edge computing and IoT cloud platform optimized for low-latency and high-performance applications—is available in its 5.0 release today. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them.
Application Infrastructure, Application Storage
Article | July 19, 2023
IT infrastructure scaling is the process of adjusting the size and power of an IT system to accommodate changes in storage and workload demands. Infrastructure scaling can be horizontal or vertical. Vertical scaling, or scaling up, adds more processing power and memory to an existing system, giving it an immediate boost. Horizontal scaling, or scaling out, adds more servers to the cloud, easing the bottleneck over the long run but also adding more complexity to the system.
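The two scaling strategies can be contrasted with a minimal sketch (illustrative only, not from the article): scaling up multiplies the capacity of a single server, while scaling out spreads a fixed load across more servers.

```python
# Illustrative sketch: how scaling up vs. scaling out affects capacity
# and per-server load for a hypothetical request rate.

def scale_up(capacity_rps: int, factor: int) -> int:
    """Vertical scaling: one server, more processing power and memory."""
    return capacity_rps * factor

def per_server_load(total_rps: int, servers: int) -> float:
    """Horizontal scaling: the same total load spread across more servers."""
    return total_rps / servers

print(scale_up(100, 4))          # a 4x bigger server handles 400 req/s
print(per_server_load(1000, 4))  # 4 servers each carry 250.0 req/s
```

The trade-off the paragraph describes shows up even in this toy model: scaling up gives an immediate capacity boost on one machine, while scaling out lowers the load on each machine at the cost of managing more of them.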
Hyper-Converged Infrastructure
Article | October 3, 2023
In my last blog in this series, we looked at the present state of 5G. Although it’s still early and it’s impossible to fully comprehend the potential impact of 5G use cases that haven’t been built yet, opportunities to monetize 5G with little additional investment are out there for network service providers (NSPs) who know where to look.
Now, it’s time to look toward the future. Anyone who’s been paying attention knows that 5G technology will be revolutionary across many industry use cases, but I’m not sure everyone understands just how revolutionary, or how quickly it will happen. According to Gartner®, “While 10% of CSPs in 2020 provided commercializable 5G services, which could achieve multiregional availability, this number will increase to 60% by 2024”.[i]
With so many recognizing the value of 5G and acting to capitalize on it, NSPs that fail to prepare for future 5G opportunities today are doing themselves and their enterprise customers a serious disservice. Preparing for a 5G future may seem daunting, but working with a trusted interconnection partner like Equinix can help make it easier.
5G is challenging for NSPs and their customers precisely because it is so revolutionary. Mobile radio networks were built with consumer use cases in mind, which means the traffic from those networks is generally dumped straight to the internet. 5G is the first generation of wireless technology capable of supporting enterprise-class business applications, which means it’s also forcing many NSPs to consider alternatives to the public internet to support those applications.
User plane function breakout helps put traffic near the app
In my last article, I mentioned that one of the key steps mobile network operators (MNOs) could take to enable 5G monetization in the short term would be to bypass the public internet by enabling user traffic functions in the data center. This is certainly a step in the right direction, but to prepare themselves for future 5G and multicloud opportunities, they must go further by enabling user plane function (UPF) breakout.
The 5G opportunities of tomorrow will rely on wireless traffic residing as close as possible to business applications, to reduce the distance data must travel and keep latency as low as possible. This is a similar challenge to the one NSPs faced in the past with their wireline networks. To address that challenge, they typically deployed virtual network functions (VNFs) on their own equipment. This helped them get the network capabilities they needed, when and where they needed them, but it also required them to buy colocation capacity and figure out how to interconnect their VNFs with the rest of their digital infrastructure.
Instead, Equinix customers have the option to do UPF breakout with Equinix Metal®, our automated bare-metal-as-a-service offering, or Network Edge virtual network services on Platform Equinix®. Both options provide a simple, cost-effective way to get the edge infrastructure needed to support 5G business applications. Since both offerings are integrated with Equinix Fabric™, they allow NSPs to create secure software-defined interconnection with a rich ecosystem of partners. This streamlines the process of setting up hybrid deployments.
Working with Equinix can help make UPF breakout less daunting. Instead of investing massive amounts of money to create 5G-ready infrastructure everywhere they need it, NSPs can take advantage of more than 235 Equinix International Business Exchange™ (IBX®) data centers spread across 65 metros in 27 countries on five continents. This allows them to shift from a potentially debilitating up-front CAPEX investment to an OPEX investment spread over time, making the economics around 5G infrastructure much more manageable.
Support MEC with a wide array of partners
Multiaccess edge compute (MEC) will play a key role in enabling advanced 5G use cases, but first enterprises need a digital infrastructure capable of supporting it. This gets more complicated when they need to modernize their infrastructure while maintaining existing application-level partnerships. To put it simply, NSPs and their enterprise customers need an infrastructure provider that can not only partner with them, but also partner with their partners.
With Equinix Metal, organizations can deploy the physical infrastructure they need to support MEC at software speed, while also supporting capabilities from a diverse array of partners. For instance, Equinix Metal provides support for Google Anthos, Amazon Elastic Container Service (ECS) Anywhere and Amazon Elastic Kubernetes Service (EKS) Anywhere. These are just a few examples of how Equinix interconnection offerings make it easier to collaborate with leading cloud providers to deploy MEC-driven applications.
Provision reliable network slicing in a matter of minutes
Network slicing is another important 5G capability that can help NSPs differentiate their offerings and unlock new business opportunities. On the surface, it sounds simple: slicing up network traffic into different classes of service, so that the most important traffic is optimized for factors such as high throughput, low latency and security. However, NSPs won’t always know exactly what slices their customers will want to send or where they’ll want to send them, making network slice mapping a serious challenge.
Equinix Fabric offers a quicker, more cost-effective way to map network slices, with no need for cross connects to be set on the fly. With software-defined interconnection, the counterparty that receives the network slice essentially becomes an automated function that NSPs can easily control. This means NSPs can provision network slicing in a matter of minutes, not days, even when they don’t know who the counterparty is going to be. Service automation enabled by Equinix Fabric can be a critical element of an NSP’s multidomain orchestration architecture.
5G use case: Reimagining the live event experience
As part of the MEF 3.0 Proof of Concept showcase, Equinix partnered with Spectrum Enterprise, Adva, and Juniper Networks to create a proof of concept (PoC) for a differentiated live event experience. The PoC showed how event promoters such as minor league sports teams could ingest multiple video feeds into an AI/ML-driven GPU farm that lives in an Equinix facility, and then process those feeds to present fans with custom content on demand.
With the help of network slicing and high-performance MEC, fans can build their own unique experience of the event, looking at different camera angles or following a particular player throughout the game. Event promoters can offer this personalized experience even without access to the on-site data centers that are more common in major league sports venues.
DISH taps Equinix for digital infrastructure services in support of 5G rollout
As DISH looks to build out the first nationwide 5G network in the U.S., they will partner with Equinix to gain access to critical digital infrastructure services in our IBX data centers. This is a great example of how Equinix is equipped to help its NSP partners access the modern digital infrastructure needed to capitalize on 5G—today and into the future.
DISH is taking the lead in delivering on the promise of 5G in the U.S., and our partnership with Equinix will enable us to secure critical interconnections for a nationwide 5G network. With proximity to large population centers, as well as network and cloud density, Equinix is the right partner to connect our cloud-native 5G network.”
- Jeff McSchooler, DISH executive vice president of wireless network operations
Hyper-Converged Infrastructure
Article | October 10, 2023
Building trust through HCI: unveiling strategies to ensure the long-term reliability of technology partnerships and cement lasting collaborations in a dynamic business landscape through vendor stability.
Contents
1. Introduction
2. How HCI Overcomes Infrastructural Challenges
3. Evaluation Criteria for Enterprise HCI
3.1. Distributed Storage Layer
3.2. Data Security
3.3. Data Reduction
4. Assessing Vendor Stability: Ensuring Long-Term Reliability of Partners
4.1. Vendor Track Record
4.2. Financial Stability
4.3. Customer Base and References
4.4. Product Roadmap and Innovation
4.5. Support and Maintenance
4.6. Partnerships and Ecosystem
4.7. Industry Recognition and Analyst Reports
4.8. Contracts and SLAs
5. Final Takeaway
1. Introduction
When collaborating with a vendor, it is essential to evaluate their financial stability. This ensures that they are able to fulfill their obligations and deliver the promised services or goods. Prior to making contractual commitments, it is necessary to conduct due diligence to determine a vendor's financial health. This article examines when a vendor's financial viability must be evaluated, why to do so, and how vendor and contract management software can assist businesses.
IT organizations of all sizes face numerous infrastructure challenges. On one hand, they frequently receive urgent demands from the business to keep the organization agile and proactive while implementing new digital transformation initiatives. On the other, they struggle to keep their budget under control, provide new resources swiftly, and manage increasing complexity while maintaining a reasonable level of efficiency. For many organizations, a cloud-only IT strategy is not a viable option; as a result, there is growing interest in hybrid scenarios that offer the best of both worlds. In combining cloud and traditional IT infrastructures, however, there is a real danger of creating silos, heading in the wrong direction, and further complicating the overall infrastructure, thereby introducing inefficiencies.
2. How HCI Overcomes Infrastructural Challenges
Hyper-converged infrastructures (HCI) surpass conventional infrastructures in terms of simplicity and adaptability. HCI enables organizations to conceal the complexity of their IT infrastructure while reaping the benefits of a cloud-like environment. HCI simplifies operations and facilitates the migration of on-premises data and applications to the cloud. HCI is a software-defined solution that abstracts and organizes CPU, memory, networking, and storage devices as resource pools, typically utilizing commodity x86-based hardware and virtualization software. It enables the administrator to rapidly combine and provision these resources as virtual machines and, more recently, as independent storage resources such as network-attached storage (NAS) filers and object stores. Management operations are also simplified, allowing for an increase in infrastructure productivity while reducing the number of operators and system administrators per virtual machine managed.
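The resource-pooling idea described above can be sketched in a few lines of Python (a toy model with hypothetical names, not any vendor's implementation): commodity nodes contribute CPU, memory, and storage to shared pools, and virtual machines are provisioned against the aggregate rather than against a specific node.

```python
# Toy sketch of HCI resource pooling: nodes contribute capacity to shared
# pools; VMs are carved out of the aggregate (hypothetical model, for
# illustration only).
from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpus: int = 0
    memory_gb: int = 0
    storage_tb: int = 0

    def add_node(self, cpus: int, memory_gb: int, storage_tb: int) -> None:
        """A commodity x86 node joins the cluster and grows the pools."""
        self.cpus += cpus
        self.memory_gb += memory_gb
        self.storage_tb += storage_tb

    def provision_vm(self, cpus: int, memory_gb: int, storage_tb: int) -> bool:
        """Reserve resources for a VM if the pools can satisfy the request."""
        if (cpus <= self.cpus and memory_gb <= self.memory_gb
                and storage_tb <= self.storage_tb):
            self.cpus -= cpus
            self.memory_gb -= memory_gb
            self.storage_tb -= storage_tb
            return True
        return False

pool = ResourcePool()
pool.add_node(cpus=32, memory_gb=256, storage_tb=10)
pool.add_node(cpus=32, memory_gb=256, storage_tb=10)
print(pool.provision_vm(cpus=8, memory_gb=64, storage_tb=1))  # True
```

The point of the abstraction is the same one the paragraph makes: the administrator reasons about pooled capacity, not about which physical box a workload lands on.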
The HCI market and its solutions can be categorized into three groups:
Enterprise Solutions
They have an extensive feature set, high scalability, core-to-cloud integrations, and tools that extend beyond traditional virtualization platform management and up the application stack.
Small/Medium Enterprise Solutions
Comparable to the previous category, but simplified and more affordable. The emphasis remains on simplifying the IT infrastructure for virtualized environments, with limited core-to-cloud integrations and a limited ecosystem of solutions.
Vertical Solutions
Designed for particular use cases or vertical markets, they are highly competitive in edge-cloud or edge-core deployments, but typically have a limited ecosystem of solutions. These solutions incorporate open-source hypervisors, such as KVM, to provide end-to-end support at lower costs. They are typically not very scalable, but they are efficient from a resource consumption standpoint.
3. Evaluation Criteria for Enterprise HCI
3.1 Distributed Storage Layer
The distributed storage layer provides the primary data storage service for virtual machines and is a crucial component of every HCI solution. Depending on the exposed protocol, it is typically presented as virtual network-attached storage (NAS) or a storage area network (SAN) and contains all of the data.
There are three distributed storage layer approaches for HCI:
Virtual storage appliance (VSA): A virtual machine administered by the same hypervisor as the other virtual machines in the node. A VSA is more flexible and can typically support multiple hypervisors, but this method may result in increased latency.
Integrated within the hypervisor or the Operating System (OS): The storage layer is an extension of the hypervisor and does not require the preceding approach's components (VM and guest OS). The tight integration boosts overall performance, enhances workload telemetry, and fully exploits hypervisor characteristics, but the storage layer is not portable.
Specialized storage nodes: The distributed storage layer is composed of specialized nodes in order to achieve optimal performance consistency and scalability for both internal and external storage consumption. This approach is typically more expensive than the alternatives for smaller configurations.
3.2 Data Security
Currently, all vendors offer sophisticated data protection against multiple failures, such as full-node failures and single- or multiple-component issues. Distributed erasure coding safeguards information by balancing performance and data footprint efficiency. This equilibrium is made possible by modern CPUs with sophisticated instruction sets, new hardware such as NVMe and storage-class memory (SCM) devices, and data path optimizations.
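The core idea behind erasure coding can be shown with a minimal sketch (a single XOR parity block, far simpler than the distributed schemes real HCI products use): one extra parity block allows any single lost data block to be reconstructed from the survivors.

```python
# Minimal sketch of erasure-coding-style protection: XOR parity lets one
# lost block be rebuilt. Real systems use more general codes (e.g. Reed-
# Solomon) tolerating multiple failures; this only illustrates the idea.
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"node0blk", b"node1blk", b"node2blk"]   # blocks on three nodes
parity = xor_blocks(data)                        # stored on a fourth node

lost = data.pop(1)                               # simulate a node failure
rebuilt = xor_blocks(data + [parity])            # survivors XOR parity
assert rebuilt == lost
```

Because XOR is its own inverse, XOR-ing the parity with all surviving blocks cancels them out and leaves exactly the missing block, which is the performance/footprint trade-off the paragraph alludes to: far cheaper in capacity than full replication.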
In addition, the evolution of storage technologies has played a pivotal role in enhancing data protection strategies. The introduction of high-capacity SSDs (Solid-State Drives) and advancements in storage virtualization have further strengthened the ability to withstand failures and ensure uninterrupted data availability. These technological innovations, combined with the relentless pursuit of redundancy and fault tolerance, have elevated the resilience of modern data storage systems.
Furthermore, for data protection and security, compliance with rules, regulations, and laws is paramount. Governments and regulatory bodies across the globe have established stringent frameworks to safeguard sensitive information and ensure privacy. Adherence to laws such as the General Data Protection Regulation (GDPR) in Europe, the Health Insurance Portability and Accountability Act (HIPAA) in the United States, and various industry-specific regulations is non-negotiable. Organizations must fortify their data against technical vulnerabilities and align their practices with legal requirements to prevent costly fines, legal repercussions, and reputational damage.
3.3 Data Reduction
Optimization of the data footprint is a crucial aspect of hyper-converged infrastructures. Deduplication, compression, and other techniques, such as thin provisioning, can significantly improve capacity utilization in virtualized environments, particularly for virtual desktop infrastructure (VDI) use cases. Moreover, in order to optimize rack space utilization and achieve server balance, the number of storage devices that can be deployed on a single HCI node is restricted.
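Block-level deduplication, one of the techniques mentioned, can be sketched as follows (an illustrative toy, not a vendor implementation): identical blocks are detected by content hash and stored only once, with the logical layout kept as a list of references.

```python
# Toy block-level deduplication: each unique block is stored once, keyed
# by its content hash; the logical file is a list of hash references.
import hashlib

def dedupe(data: bytes, block_size: int = 4):
    store: dict[str, bytes] = {}   # unique blocks, keyed by content hash
    refs: list[str] = []           # logical block sequence, as references
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only the first copy
        refs.append(digest)
    return store, refs

store, refs = dedupe(b"AAAABBBBAAAABBBB")
print(len(refs), len(store))  # 4 logical blocks, only 2 physically stored
```

Reading the data back means resolving each reference against the store, which is why highly repetitive workloads like VDI (many near-identical desktop images) see the largest capacity gains.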
4. Assessing Vendor Stability: Ensuring Long-Term Reliability of Partners
Here are some key factors that contribute to ensuring long-term reliability:
4.1 Vendor Track Record
Assessing the vendor's track record and reputation in the industry is crucial. Look for established vendors with a history of delivering reliable products and services. A vendor that has been operating in the market for a significant period of time and has a strong customer base indicates stability.
4.2 Financial Stability
Consider factors such as the vendor's profitability, revenue growth, and ability to invest in research and development. Financial stability ensures the vendor's ability to support their products and services over the long term.
4.3 Customer Base and References
Look at the size and diversity of the vendor's customer base. A large and satisfied customer base indicates that the vendor's solutions have been adopted successfully by organizations. Request references from existing customers to get insights into their experience with the vendor's stability and support.
4.4 Product Roadmap and Innovation
Assess the vendor's product roadmap and commitment to ongoing innovation. A vendor that actively invests in research and development, regularly updates their products, and introduces new features and enhancements demonstrates a long-term commitment to their solution's reliability and advancement.
4.5 Support and Maintenance
Evaluate the vendor's support and maintenance services. Look for comprehensive support offerings, including timely bug fixes, security patches, and firmware updates. Understand the vendor's service-level agreements (SLAs), response times, and availability of technical support to ensure they can address any issues that may arise.
4.6 Partnerships and Ecosystem
Consider the vendor's partnerships and ecosystem. A strong network of partners, including technology alliances and integrations with other industry-leading vendors, can contribute to long-term reliability. Partnerships demonstrate collaboration, interoperability, and a wider ecosystem that enhances the vendor's solution.
4.7 Industry Recognition and Analyst Reports
Assess the vendor's industry recognition and performance in analyst reports. Look for accolades, awards, and positive evaluations from reputable industry analysts. These assessments provide independent validation of the vendor's stability and the reliability of their HCI solution.
4.8 Contracts and SLAs
Review the vendor's contracts, service-level agreements, and warranties carefully. Ensure they provide appropriate guarantees for support, maintenance, and ongoing product updates throughout the expected lifecycle of the HCI solution.
5. Final Takeaway
Evaluating a vendor's financial stability is crucial before entering into contractual commitments to ensure their ability to fulfill obligations. Hyper-converged infrastructure overcomes infrastructural challenges by simplifying operations, enabling cloud-like environments, and facilitating data and application migration. The HCI market offers enterprise, small/medium enterprise, and vertical solutions, each catering to different needs and requirements.
Analyzing enterprise HCI solutions requires careful consideration of various criteria. Each approach has its own advantages and trade-offs related to flexibility, performance, and cost.
The techniques mentioned can significantly reduce the data footprint, particularly in use cases like VDI, while maintaining performance and efficiency. By weighing the evaluation criteria for enterprise HCI solutions, organizations can make decisions that align with their specific storage, security, and efficiency requirements.
By considering these factors, organizations can make informed decisions and choose a vendor with a strong foundation of reliability, stability, and long-term commitment, ensuring the durability of their HCI infrastructure and minimizing risks associated with vendor instability.