Cisco Brings AI/ML Workloads to Hyperconverged Infrastructure

A few months ago at NVIDIA GTC, we discussed how you can run AI/ML workloads on Cisco infrastructure, gaining the benefits of integrated management while addressing the data gravity problem. At the same time, we introduced our collaboration with Google to bring Kubeflow deployments on-premises. Now we are bringing AI/ML workloads to Cisco hyperconverged infrastructure with release 3.5 of Cisco HyperFlex, which adds support for NVIDIA's most advanced data center GPU, the Tesla V100.
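As a rough illustration of what consuming such a GPU looks like from the Kubernetes side (for example, on a cluster backing an on-premises Kubeflow deployment), the sketch below submits a pod that requests a single GPU. It assumes the NVIDIA device plugin is installed so that nvidia.com/gpu is a schedulable resource and that a local kubeconfig is available; the pod name and container image are illustrative and not part of the HyperFlex 3.5 release itself.

```python
# Minimal sketch: request one GPU for a smoke-test pod on a Kubernetes cluster.
# Assumes the NVIDIA device plugin is installed and a kubeconfig is present.
# The pod name and image tag are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda-check",
                image="nvidia/cuda:12.2.0-base-ubuntu22.04",  # illustrative image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # one GPU (e.g. a V100) per pod
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Submitted gpu-smoke-test; inspect output with: kubectl logs gpu-smoke-test")
```

If the scheduler places the pod on a GPU-equipped node, the pod's logs should show the nvidia-smi output for the assigned device.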

Spotlight

Achab

Achab distributes software enabling SMBs to create ICT infrastructures that are flexible, effective, and cost-efficient. Our mission is to simplify the lives of our customers, our resellers, and end users.

OTHER ARTICLES
Hyper-Converged Infrastructure

Adapting Hybrid Architectures for Digital Transformation Implementation

Article | July 13, 2023

For the majority of businesses, digital transformation (DX) has emerged as a significant priority. By incorporating digital technologies into every aspect of an organization's operations, digital transformation is a continuous process that changes how organizations operate, how they deliver goods and services, and how they connect with customers. Hybrid network infrastructures can help businesses put DX strategies into action. A hybrid infrastructure is an IT architecture and environment that combines on-premises data centers with private or public clouds; operating systems and applications can be deployed anywhere in this environment, depending on the needs and specifications of the firm. Hybrid IT infrastructure services, sometimes referred to as cloud services, are used to manage and monitor an organization's entire IT estate. Given the complexity of today's IT environments and requirements, this is essential for digital transformation.

What Does Hybrid Network Infrastructure Have to Offer?

Flexibility
Flexibility lets companies use the right tool for the job. For instance, a business that wants to use machine learning (ML) or artificial intelligence (AI) needs access to large amounts of data, and public cloud services such as AWS or Azure can help with that. However, those services can be expensive and may not deliver the performance some applications require, which is where private, on-premises resources fill the gap.

Durability
Hybrid networks are more tolerant of interruptions. For instance, a business can continue to function during a public cloud outage by falling back to its private data center, because the outage has no impact on the private environment (a minimal failover sketch follows this article).

Security
A hybrid cloud strategy lets businesses keep sensitive data protected while still using the resources and services of a public cloud, lowering the chance of critical information being compromised. Analytics and applications that use data kept in a private environment will probably still need to operate in a public cloud, so encryption should be used to reduce the risk of security breaches.

Scalability and Efficiency
Traditional networks cannot match the performance and scalability of hybrid networks, because public clouds offer enormous bandwidth and storage that can be consumed on demand. With a hybrid architecture, a company can benefit from the public cloud's flexibility and capacity while keeping business-critical data and operations in its private cloud or on-premises data center.

Conclusion
Digital transformation is a cultural shift toward more flexible and intelligent ways of doing business, supported by cutting-edge technology: it integrates digital technologies throughout all company activities, improves existing processes, develops new operational procedures, and delivers greater value to clients. Hybrid network infrastructures are essential to making that transformation succeed.
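To make the durability point above concrete, here is a minimal Python sketch (standard library only) of a client that prefers a public-cloud endpoint and falls back to a private data center copy of the same service when the health check fails. Both URLs are hypothetical placeholders, not endpoints from the article, and a production setup would typically handle this in a load balancer or DNS layer rather than in application code.

```python
# Illustrative sketch only: prefer a public-cloud endpoint, fall back to a
# private data center copy when the health check fails. URLs are placeholders.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://app.example-public-cloud.com/health",   # preferred: public cloud
    "https://app.dc.example.internal/health",        # fallback: private data center
]

def first_healthy(endpoints, timeout=2.0):
    """Return the first endpoint that answers its health check, else None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # unreachable or unhealthy: try the next tier
    return None

if __name__ == "__main__":
    target = first_healthy(ENDPOINTS)
    print(f"Routing traffic to: {target or 'no healthy endpoint'}")
```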

Hyper-Converged Infrastructure

A Look at Trends in IT Infrastructure and Operations for 2022

Article | October 3, 2023

We’re all hoping that 2022 will finally end the unprecedented challenges brought by the global pandemic and that things will settle into a new normal. For IT infrastructure and operations organizations, the trends we are seeing today will likely continue, but a few areas will need special attention from IT leaders over the next 12 to 18 months. In no particular order, they include:

The New Edge
Edge computing is now at the forefront. Two factors make it business-critical: the prevalence of remote and hybrid workplace models in which employees continue working from home or a branch office, and the resulting increase in adoption of cloud-based business and communications services. With the focus on remote and hybrid work, Zoom, Microsoft Teams, and Google Meet have continued to expand their solutions and add new features, and as people move back to the office they will expect the same experience they had at home. In a typical enterprise setup, branch office traffic is backhauled all the way to the data center, an architecture that severely impacts the user experience. Enterprises will therefore have to review their network architectures and build a roadmap that accommodates local egress at branch offices and headquarters. That is where the edge helps, by bringing services closer to the workforce. It is also an opportunity to optimize costs by migrating from expensive multi-protocol label switching (MPLS) or private circuits to relatively low-cost direct internet circuits, a shift addressed by the secure access service edge (SASE) architectures now offered by many established vendors. I anticipate that some components of SASE, specifically those related to software-defined wide area network (SD-WAN), local egress, and virtual private network (VPN), will drive a lot of conversation this year.

Holistic Cloud Strategy
Cloud adoption will continue to grow, and alongside software as a service (SaaS) there will be renewed interest in infrastructure as a service (IaaS), albeit for specific workloads. For a medium-to-large enterprise with a substantial development environment, it is still cost-prohibitive to move everything to the cloud, so any cloud strategy needs to be holistic and forward-looking to maximize its business value. Another pandemic-induced shift is from virtual machines (VMs) as the unit of compute consumption to containers as the unit of software consumption. For on-premises or private cloud deployment architectures that require sustainable management, organizations will have to orchestrate containers and deploy efficient container security and management tools.

Automation
Now that cloud adoption, migration, and edge computing architectures are becoming more prevalent, legacy methods of infrastructure provisioning and management will not scale. By increasing infrastructure automation, enterprises can optimize costs and become more flexible and efficient, but only if they succeed at developing new skills. Achieving the goal of infrastructure as code will require a shift in perspective on infrastructure automation, toward developing and sustaining the skills and roles that improve efficiency and agility across on-premises, cloud, and edge infrastructures. Defining the roles of designers and architects who support automation is essential to ensure that automation works as expected, avoids significant errors, and complements other technologies.

AIOps (Artificial Intelligence for IT Operations)
Complementing the automation trend, implementing AIOps to automate IT operations processes such as event correlation, anomaly detection, and causality determination will also be important (a simple anomaly-detection sketch follows this article). AIOps eliminates data silos in IT by bringing all types of data under one roof, where machine learning (ML)-based methods can develop insights for responsive enhancements and corrections. AIOps can also support probable-cause analytics by focusing attention on the most likely source of a problem. The concept of site reliability engineering (SRE), already widely adopted by SaaS providers, will gain importance in enterprise IT environments because of the trends listed above; AIOps is a key component that will enable site reliability engineers (SREs) to respond more quickly, and even proactively, by resolving issues without manual intervention.

These focus areas are by no means an exhaustive list. Other trends will be more prevalent in specific industries, but a common theme in the post-pandemic era is going to be superior delivery of IT services. That is also at the heart of the Autonomous Digital Enterprise, a forward-focused business framework designed to help companies make technology investments for the future.
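As a concrete, if deliberately simplified, illustration of the anomaly detection mentioned under AIOps, the sketch below flags metric samples that deviate sharply from a trailing window. It uses only the Python standard library; the latency series is hypothetical, and real AIOps tooling would use richer models and live telemetry rather than a hard-coded list.

```python
# Illustrative sketch only: rolling z-score anomaly detection over an
# operational metric (hypothetical request latencies in milliseconds).
from statistics import mean, stdev

def zscore_anomalies(series, window=10, threshold=3.0):
    """Flag points that deviate strongly from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append((i, series[i]))
    return anomalies

if __name__ == "__main__":
    latencies_ms = [42, 40, 43, 41, 44, 42, 39, 43, 41, 40, 42, 41, 95, 43, 42]
    for index, value in zscore_anomalies(latencies_ms, window=10, threshold=3.0):
        print(f"sample {index}: {value} ms looks anomalous")
```

In practice this kind of detector feeds event correlation and probable-cause analysis, so an SRE is alerted to the spike at sample 12 without paging through dashboards.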

Hyper-Converged Infrastructure, IT Systems Management

How to Rapidly Scale IT Infrastructure For Remote Workers

Article | September 14, 2023

Rapid IT infrastructure scaling is always challenging. In March 2020, the coronavirus caused a surge in remote workers as organizations switched overwhelmingly to work-from-home policies. Scaling IT infrastructure to support this sudden shift proved to be a struggle for IT teams, resulting in a migration to cloud-based applications and solutions, a rush on hardware that can support a remote environment, and challenges scaling VPNs to support remote worker security. Here are some of the insights and lessons learned from IT professionals.

Hyper-Converged Infrastructure

Adapting to Changing Landscape: Challenges and Solutions in HCI

Article | October 3, 2023

Navigating the complex terrain of Hyper-Converged Infrastructure: unveiling the best practices and innovative strategies to harness the maximum benefits of HCI for business transformation.

Contents
1. Introduction to Hyper-Converged Infrastructure
1.1 Evolution and Adoption of HCI
1.2 Importance of Adapting to the Changing HCI Environment
2. Challenges in HCI
2.1 Integration and Compatibility: Legacy System Integration
2.2 Efficient Lifecycle: Firmware and Software Management
2.3 Resource Forecasting: Scalability Planning
2.4 Workload Segregation: Performance Optimization
2.5 Latency Optimization: Data Access Efficiency
3. Solutions for Adapting to the Changing HCI Landscape
3.1 Interoperability
3.2 Lifecycle Management
3.3 Capacity Planning
3.4 Performance Isolation
3.5 Data Locality
4. Importance of Ongoing Adaptation in the HCI Domain
4.1 Evolving Technology
4.2 Performance Optimization
4.3 Scalability and Flexibility
4.4 Security and Compliance
4.5 Business Transformation
5. Key Takeaways from the Challenges and Solutions Discussed

1. Introduction to Hyper-Converged Infrastructure

1.1 Evolution and Adoption of HCI
Hyper-Converged Infrastructure has transformed the data center by providing a consolidated, software-defined approach to infrastructure. HCI combines virtualization, storage, and networking into a single integrated system, simplifying management and improving scalability. It has gained widespread adoption because it addresses the challenges of data center consolidation, virtualization, and resource efficiency. HCI solutions have evolved to offer advanced features such as hybrid and multi-cloud support, data deduplication, and disaster recovery, making them suitable for a wide range of workloads. The HCI market has grown significantly, with a diverse ecosystem of vendors offering turnkey appliances and software-defined solutions, and HCI has become the preferred infrastructure for workloads such as VDI, databases, and edge computing. Its ability to simplify operations, improve resource utilization, and support diverse workloads ensures its continued relevance.

1.2 Importance of Adapting to the Changing HCI Environment
Adapting to the changing Hyper-Converged Infrastructure landscape is of utmost importance for businesses, as it offers a consolidated and software-defined approach to IT infrastructure that enables streamlined management, improved scalability, and cost-effectiveness. Staying up to date with evolving HCI technologies and trends ensures that businesses can leverage the latest advancements to optimize their operations. Embracing HCI enables organizations to improve resource utilization, accelerate deployment times, and support a wide range of workloads. In addition, it facilitates seamless integration with emerging technologies such as hybrid and multi-cloud environments, containerization, and data analytics. Businesses can stay competitive, enhance their agility, and unlock the full potential of their IT infrastructure.

2. Challenges in HCI

2.1 Integration and Compatibility: Legacy System Integration
Integrating Hyper-Converged Infrastructure with legacy systems can be challenging because of differences in architecture, protocols, and compatibility. Existing legacy systems may not integrate seamlessly with HCI solutions, leading to potential disruptions, data silos, and operational inefficiencies. This can prevent the organization from fully leveraging the benefits of HCI and limit its potential for streamlined operations and cost savings.

2.2 Efficient Lifecycle: Firmware and Software Management
Managing firmware and software updates across the HCI infrastructure can be complex and time-consuming. Ensuring that all components within the HCI stack, including compute, storage, and networking, run the latest firmware and software versions is crucial for security, performance, and stability. However, coordinating and applying updates across the entire infrastructure is challenging, and gaps can result in vulnerabilities, compatibility issues, and suboptimal system performance.

2.3 Resource Forecasting: Scalability Planning
Forecasting resource requirements and planning for scalability in an HCI environment is as crucial as implementing the HCI systems themselves. As workloads grow or change, accurately predicting the necessary compute, storage, and networking resources becomes essential. Without proper resource forecasting and scalability planning, organizations face underutilization or overprovisioning, leading to increased costs, performance bottlenecks, or inefficient resource allocation.

2.4 Workload Segregation: Performance Optimization
Effectively segregating workloads to optimize performance in an HCI environment can be challenging. Workloads with varying resource requirements and performance characteristics may coexist on the same infrastructure, so ensuring that high-performance workloads receive the resources they need without affecting other workloads is critical. Failure to segregate workloads properly can result in resource contention, degraded performance, and bottlenecks that affect overall efficiency and the user experience.

2.5 Latency Optimization: Data Access Efficiency
Optimizing data access latency in an HCI environment is a growing challenge. Because HCI integrates compute and storage into a unified system, data access latency can significantly impact performance: inefficient data retrieval and processing leads to longer response times, reduced user satisfaction, and lost productivity. Without careful attention to data access patterns, caching mechanisms, and network configuration, latency within the HCI infrastructure will creep upward.

3. Solutions for Adapting to the Changing HCI Landscape

3.1 Interoperability
Achieved by: Standards-Based Integration and APIs
HCI solutions should prioritize adherence to industry standards and provide robust API support. By leveraging standardized protocols and APIs, HCI can integrate seamlessly with legacy systems, ensuring compatibility and smooth data flow between components. This promotes interoperability, eliminates data silos, and lets organizations leverage existing infrastructure investments while benefiting from the advantages of HCI.

3.2 Lifecycle Management
Achieved by: Centralized Firmware and Software Management
Efficient lifecycle management in Hyper-Converged Infrastructure is achieved by implementing a centralized management system that automates firmware and software updates across the HCI infrastructure. This streamlines identifying, scheduling, and deploying updates, ensuring that all components run the latest versions. Centralized management reduces manual effort, minimizes the risk of compatibility issues, and improves security, stability, and overall system performance.
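As a rough sketch of the centralized update planning described in 3.2, the Python below compares each node's component versions against a target catalog and emits an update plan. The inventory, component names, and version numbers are hypothetical; a real HCI platform would expose this information through its own management API rather than hard-coded data.

```python
# Illustrative sketch only: a centralized update planner for an HCI cluster.
# Inventory and target catalog are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Component:
    node: str
    name: str       # e.g. "nic_firmware", "storage_controller"
    version: tuple  # e.g. (4, 2, 1)

TARGET_CATALOG = {
    "nic_firmware": (4, 3, 0),
    "storage_controller": (7, 1, 5),
    "hypervisor": (8, 0, 2),
}

def plan_updates(inventory):
    """Return components whose running version is older than the catalog target."""
    plan = []
    for comp in inventory:
        target = TARGET_CATALOG.get(comp.name)
        if target and comp.version < target:
            plan.append((comp.node, comp.name, comp.version, target))
    return plan

if __name__ == "__main__":
    inventory = [
        Component("node-01", "nic_firmware", (4, 2, 1)),
        Component("node-01", "hypervisor", (8, 0, 2)),
        Component("node-02", "storage_controller", (7, 0, 9)),
    ]
    for node, name, current, target in plan_updates(inventory):
        print(f"{node}: update {name} {current} -> {target}")
```

The value of centralizing this logic is that one catalog drives every node, so version drift is detected in one pass instead of per-node manual checks.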
3.3 Capacity Planning
Achieved by: Analytics-Driven Resource Forecasting
HCI solutions should incorporate analytics-driven capacity planning capabilities. By analyzing historical and real-time data, HCI systems can accurately predict resource requirements and help organizations scale their infrastructure proactively (a minimal forecasting sketch appears after this article). This enables efficient resource utilization, avoids under- or overprovisioning, and optimizes cost while ensuring that performance demands are met.

3.4 Performance Isolation
Achieved by: Quality of Service and Resource Allocation Policies
To achieve effective workload segregation and performance optimization, HCI solutions should provide robust Quality of Service (QoS) mechanisms and flexible resource allocation policies. QoS settings allow organizations to prioritize critical workloads, allocate resources based on predefined policies, and enforce performance guarantees for specific applications or users. This ensures that high-performance workloads receive the resources they need while preventing resource contention and performance degradation for other workloads.

3.5 Data Locality
Achieved by: Data Tiering and Caching Mechanisms
To address latency and data access efficiency, HCI solutions must incorporate data tiering and caching mechanisms. By intelligently placing frequently accessed data closer to the compute resources, for example with flash storage or caching algorithms, HCI systems minimize data access latency and improve overall performance. This enhances data locality, reduces network latency, and ensures faster data retrieval, resulting in better application response times and a better user experience.

4. Importance of Ongoing Adaptation in the HCI Domain

Continuous adaptation is of the utmost importance in the HCI domain. HCI is a swiftly advancing technology that keeps delivering new capabilities, and organizations can maximize the benefits of HCI and maintain a competitive advantage only if they stay apprised of the most recent advancements and adapt to the changing environment. Key reasons include:

4.1 Evolving Technology
HCI is constantly changing, with new features, functionality, and enhancements introduced regularly. Ongoing adaptation allows organizations to take advantage of these advancements and incorporate them into their infrastructure. It ensures that businesses stay current with the latest technology trends and can make informed decisions to optimize their HCI deployments.

4.2 Performance Optimization
Continuous adaptation lets organizations fine-tune their HCI environments for optimal performance. By staying informed about performance best practices and emerging optimization techniques, businesses can make the adjustments needed to maximize resource utilization, improve workload performance, and enhance overall system efficiency, keeping HCI deployments optimized for evolving business requirements.

4.3 Scalability and Flexibility
Adapting to the changing HCI landscape supports scalability and flexibility. As business needs evolve, organizations may need to scale their infrastructure, accommodate new workloads, or adopt hybrid or multi-cloud environments. Ongoing adaptation allows them to assess and implement the necessary changes to their HCI deployments so they can scale seamlessly and meet evolving demands.

4.4 Security and Compliance
The HCI domain is not immune to security threats and compliance requirements. Ongoing adaptation helps organizations stay vigilant and up to date with the latest security practices, threat landscapes, and regulatory changes. It enables businesses to implement robust security measures, proactively address vulnerabilities, and maintain compliance with industry standards and regulations, keeping HCI deployments secure and compliant in the face of evolving cybersecurity challenges.

4.5 Business Transformation
Ongoing adaptation in the HCI domain supports broader business transformation initiatives. Organizations undergoing digital transformation may need to adopt new technologies, integrate with cloud services, or embrace emerging trends such as edge computing. Adapting the HCI infrastructure aligns IT with strategic objectives, enabling seamless integration, improved agility, and the ability to capitalize on emerging opportunities.

Ongoing adaptation is thus crucial in the HCI domain: it enables organizations to stay current with technological advancements, optimize performance, scale infrastructure, enhance security, and align with business transformation initiatives. By continuously adapting to the evolving HCI landscape, businesses can maximize the value of their HCI investments.

5. Key Takeaways from the Challenges and Solutions Discussed

Hyper-Converged Infrastructure poses several challenges during implementation and operation that organizations need to address for optimal performance. Integration and compatibility issues arise when connecting HCI to legacy systems, requiring standards-based integration and API support. Efficient lifecycle management is crucial, involving centralized firmware and software management to automate updates and improve security and stability. Accurate resource forecasting is vital for capacity planning, enabling organizations to scale their HCI infrastructure effectively. Workload segregation demands QoS mechanisms and flexible resource allocation policies to optimize performance, and latency optimization requires data tiering and caching mechanisms to minimize data access latency and improve application response times. By tackling these challenges with the appropriate solutions, businesses can harness the full potential of HCI: streamlined operations, maximum resource utilization, and excellent performance and user experience.
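As promised under 3.3, here is a deliberately small sketch of analytics-driven resource forecasting: a least-squares trend fit over historical utilization samples, used to estimate when a capacity threshold will be crossed. The weekly storage figures are hypothetical, and a real deployment would pull them from the platform's monitoring API and likely apply a more robust model (seasonality, confidence intervals, per-node breakdowns).

```python
# Illustrative sketch only: linear-trend capacity forecasting for an HCI cluster.
# Utilization samples are hypothetical placeholders.

def fit_trend(samples):
    """Ordinary least-squares fit of utilization (%) against sample index."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def periods_until(samples, threshold=80.0):
    """Estimate how many more sampling periods until utilization crosses threshold."""
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None  # flat or shrinking usage: no forecastable exhaustion
    cross_index = (threshold - intercept) / slope
    return max(0.0, cross_index - (len(samples) - 1))

if __name__ == "__main__":
    weekly_storage_util = [52, 54, 55, 58, 60, 63, 65, 68]  # percent used, per week
    remaining = periods_until(weekly_storage_util, threshold=80.0)
    if remaining is None:
        print("Utilization trend is flat or declining; no exhaustion forecast.")
    else:
        print(f"Estimated weeks until 80% storage utilization: {remaining:.1f}")
```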


Related News

Hyper-Converged Infrastructure, Application Infrastructure

Nutanix Accelerates Kubernetes Adoption in the Enterprise

Nutanix | October 27, 2022

Nutanix, a leader in hybrid multicloud computing, today announced new features in its Cloud Platform to accelerate the adoption of Kubernetes at scale and cost-effectively. The company announced broad support for leading Kubernetes container platforms, built-in infrastructure-as-code capabilities, and enhanced data services for modern applications. These new features allow DevOps teams to accelerate application delivery with the performance, governance, and flexibility of the Nutanix Cloud Platform while allowing customers to maintain control of their IT operating costs.

“Kubernetes deployments are inherently dynamic and challenging to manage at scale. Running Kubernetes container platforms cost-effectively at large scale requires developer-ready infrastructure that seamlessly adapts to changing requirements. Our expertise in simplifying infrastructure management while optimizing resources, both on-premises and in the public cloud, is now being applied to help enterprises adopt Kubernetes more quickly. The Nutanix Cloud Platform now supports a broad choice of Kubernetes container platforms, provides integrated data services for modern applications, and enables developers to provision infrastructure as code.” Thomas Cornely, SVP, Product Management, Nutanix

According to Gartner, by 2027, 25% of all enterprise applications will run in containers, up from fewer than 10% in 2021. This is a significant challenge for many organizations, given that most Kubernetes solutions are not built for enterprise scale, and fewer still can operate cost-effectively at that scale. The Nutanix Cloud Platform enables enterprises to run Kubernetes in a software-defined infrastructure environment that scales linearly. Additionally, whether running Kubernetes on-premises or in the public cloud, Nutanix delivers a cost-effective solution that can help lower total cost of ownership by up to 53% compared with other native cloud deployment solutions. The new capabilities make Nutanix an even stronger proposition for enterprises looking to deploy Kubernetes at scale. Specifically, the enhancements include:

Broad Kubernetes Ecosystem: The Nutanix Cloud Platform, with the built-in AHV hypervisor, now supports most leading Kubernetes container platforms with the addition of Amazon EKS-A. This builds on a large ecosystem including Red Hat OpenShift, SUSE Rancher, Google Anthos, and Microsoft Azure Arc for edge deployments, along with the native Nutanix Kubernetes Engine (NKE).

Built-In Infrastructure-as-Code Operating Model: Nutanix also announced an updated API family along with SDKs in Java, JavaScript, Go, and Python, currently under development. This will enable automation at scale and consistent operations regardless of location, whether in the data center, in the public cloud, or at the edge, both of key importance to enterprises. Additionally, when combined with Red Hat Ansible Certified Content or the Nutanix Terraform provider, a DevOps methodology can be brought to infrastructure through automation leveraging infrastructure as code.

Strengthened Data Services for Modern Applications: Nutanix Cloud Platform’s web-scale architecture enables customers to start small and scale to multi-PB deployments as application needs grow. It is the only platform to unify the delivery of integrated file, object, and now database services on the same platform for Kubernetes-based applications. Today Nutanix launched the Nutanix Database Service Operator for Kubernetes, which enables developers to quickly and easily provision and attach databases to their application stacks directly from development environments. The open source operator is available via artifacthub.io as well as by direct download from GitHub. Additionally, Nutanix Objects now supports a reference implementation of the Container Object Storage Interface (COSI) for ease of orchestration and self-service provisioning, and adds support for observability using Prometheus. Lastly, Objects is now validated with modern analytics applications including Presto, Dremio, and Vertica, along with Confluent Kafka, to efficiently enable the large-scale data pipelines often used in real-time streaming applications.

These new features build on the Nutanix Cloud Platform’s ability to handle the dynamic demands of Kubernetes applications at scale. With Nutanix hyperconverged infrastructure, performance and capacity scale linearly, resilience is delivered from the ground up with self-healing nodes, and persistent storage is natively integrated. The Nutanix Cloud Platform can also deliver cost efficiencies by eliminating unused compute and storage resources. For customers looking at cloud integrations, the same Nutanix value is delivered across hybrid multicloud endpoints with full license portability across edge, data center, service provider, and hyperscaler points of presence.

“When we decided to bring the core platform for our solutions in-house, we decided to take a modular containerized approach to give us the desired flexibility and simplify management by maintaining customization as configurations,” said Larry McClanahan, Chief Product Officer, Nymbus. “Our partnership with Red Hat and Nutanix gives us the flexibility to innovate, the speed to get to market fast, and the tremendous scalability to support ongoing growth. We’re thrilled that we can better help our customers succeed in the digital banking market with unique solutions.”

“Container development platforms promise faster application development speed, but will only be deployed by organizations who can maintain compliance, day 2 operations, and cost management control at scale,” said Paul Nashawaty, Senior Analyst at ESG. “Nutanix offers a compelling path to speed the deployment of modern applications at scale and in a cost-effective manner, with full choice of Kubernetes container development environments and cloud endpoints.”

About Nutanix
Nutanix is a global leader in cloud software and a pioneer in hyperconverged infrastructure solutions, making clouds invisible and freeing customers to focus on their business outcomes. Organizations around the world use Nutanix software as a single platform to manage any app at any location in their hybrid multicloud environments.


Application Infrastructure, IT Systems Management

Scale Computing Tops CRN’s 2022 Annual Report Card for Edge Computing and Converged/Hyperconverged Infrastructure

Scale Computing | August 23, 2022

Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions, today announced that CRN®, a brand of The Channel Company, has named it a winner of the 2022 CRN Annual Report Card (ARC) Awards in both the Edge Computing category and the Converged/Hyperconverged Infrastructure category. This is the fourth consecutive year Scale Computing has been recognized as a CRN ARC Award winner, and the company once again swept all of the subcategories, including Product Innovation, Support, Partnership, and Managed & Cloud Services, in both Edge Computing and Converged/Hyperconverged Infrastructure. The company also announced a save-the-date for the 2023 Scale Computing Platform Partner Summit, February 15-16 in Las Vegas, NV.

With a 37-year history, CRN’s ARC Awards recognize best-in-class vendors devoted to boosting IT channel growth through innovation in technology and partner strategy. Through the ARC Awards, known as one of the most prestigious honors in the IT industry, solution providers offer key feedback that commends technology manufacturers for designing channel-friendly product offerings, developing strong partner programs, and building long-term, successful relationships with solution providers.

“This recognition represents the ‘Voice of the Partner’ and we are very proud to be named the leader in both Edge Computing and Converged/Hyperconverged Infrastructure. This year is particularly meaningful as vendor survey participation was mandatory, ranking us number one above all of our competition. When we founded Scale Computing, we set out to create a company that would be the best vendor our customers and partners would ever work with. Sweeping all subcategories over our competitors for years in a row proves we are delivering on that promise.” Jeff Ready, CEO and co-founder, Scale Computing

Scale Computing Platform brings simplicity, high availability, and scalability together, replacing existing infrastructure and running applications in a single, easy-to-manage platform. Delivering faster time to value than competing solutions, SC//Platform enables organizations to run applications in a unified environment that scales from 1 to 50,000 servers. Regardless of hardware requirements, the same software and simple user interface provide the power to run infrastructure efficiently at the edge, in the distributed enterprise, and in the data center.

The ARC Awards are based on an invitation-only research survey conducted by The Channel Company. Responses from 3,000 solution providers across North America were evaluated in this year’s survey, rating 82 vendor partners across four criteria: product innovation, support, partnership, and managed and cloud services. Scores were awarded in 25 major product categories in technology areas critical to channel partner success.

“It’s our pleasure to honor vendors that consistently deliver top-performing products and services to establish and foster successful channel partner relationships,” said Blaine Raddon, CEO, The Channel Company. “In addition to highlighting our winners, CRN’s Annual Report Card Awards provide vendors with actionable feedback and insight into their current standing with partners that can be incorporated into their channel strategies in the future. We look forward to offering our congratulations to all the award recipients at XChange 2022 in August.”

Winners will be featured throughout The Channel Company’s XChange 2022 conference, taking place August 21-23 in Denver, CO. Coverage of the CRN 2022 ARC results can be found online at www.CRN.com/ARC and will be featured in the October 2022 issue of CRN Magazine.

About Scale Computing
Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real time, enabling applications to achieve maximum uptime even when local IT resources and staff are scarce. Edge computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing’s products are sold by thousands of value-added resellers, integrators, and service providers worldwide. When ease of use, high availability, and TCO matter, Scale Computing Platform is the ideal infrastructure platform.

About The Channel Company
The Channel Company enables breakthrough IT channel performance with our dominant media, engaging events, expert consulting and education, and innovative marketing services and platforms. As the channel catalyst, we connect and empower technology suppliers, solution providers, and end users. Backed by more than 30 years of unequalled channel experience, we draw from our deep knowledge to envision innovative new solutions for ever-evolving challenges in the technology marketplace.


Hyper-Converged Infrastructure

Sunlight.io launches first hyperconverged stack supporting the NVIDIA Jetson-based Lenovo ThinkEdge SE70 to make edge AI deployable at scale

Sunlight | June 30, 2022

Sunlight.io, the edge infrastructure company, today announced support for the NVIDIA Jetson™ edge AI platform and the Lenovo ThinkEdge SE70 with the launch of its beta program, ‘Project Rosie.’ Sunlight NexVisor is the first full hyperconverged stack to support the Arm-CPU-based NVIDIA Jetson, and coupled with the Lenovo SE70 it makes it easy to deploy AI applications anywhere at the edge. Application developers can be among the first to access the technology and test their AI applications by applying to the beta program.

AI is a ‘killer application’ at the edge, where it brings real-time insight to action across a wide range of use cases. For example, computer vision, combining cameras, video streaming, and analytics, is being implemented at drive-thrus nationwide for faster and more personalized food ordering; on manufacturing production lines to instantly identify and remove faulty items; and across smart cities to enhance population and crowd security. These AI applications need high levels of processing power, low latency, and reliable networking to deliver real-time results.

Enterprises want to replicate at the edge the simplicity of the hyperconverged infrastructure they enjoy in their core data centers. However, data center HCI is not able to run in the constrained environments that exist at the edge because of its large RAM and CPU overhead and lack of edge management capabilities, which makes edge deployments extremely resource-intensive to manage and hard to scale. Sunlight NexVisor is the only hyperconverged stack able to run on both x86 and Arm architectures with a footprint small enough for constrained edge environments, and it includes centralized management and application deployment capabilities.

NVIDIA Jetson is the world’s leading platform for AI at the edge. NVIDIA Jetson modules are small-form-factor, high-performance computers containing an Arm processor and a GPU. The combination of Sunlight NexVisor and the NVIDIA Jetson-powered Lenovo ThinkEdge SE70 makes it possible to run demanding edge AI applications in harsh environments spanning hundreds or thousands of sites, with single-pane-of-glass management, low TCO, and tiny power and space requirements. Sunlight is a member of NVIDIA Inception, a global program designed to nurture cutting-edge startups.

Scott Tease, Lenovo’s VP for HPC and AI, said, “Our customers realize the advantages of edge AI and deploying solutions closer to the point of data capture to run real-time inferencing. That is why we are so excited to be partnering up with Sunlight as they support our edge portfolio to significantly improve the efficiency and economics of AI deployments for customers worldwide.”

“We are excited to launch this exclusive beta program for users who need to run efficient, manageable AI out where the data is generated, at the edge. Sunlight already offers full support for the Lenovo ThinkEdge and ThinkSystem range, including the Intel-based SE30, SE50, SE350, and SE450. Together, we’ve been able to produce a truly industry-first solution by combining Sunlight’s turnkey, edge-as-a-service offering with Lenovo’s leading AI edge platform powered by NVIDIA Jetson. Sunlight was born out of a collaboration with Arm back in 2013 to build a lightweight hypervisor, and we’re seeing huge demand for Arm-based servers at the edge due to their performance and power efficiency.” Julian Chesterfield, Founder and CEO of Sunlight

About Sunlight
The Sunlight Edge is a reliable, secure, zero-touch, and economical infrastructure that helps turn critical edge data into real-time insight and action across retail stores, manufacturing lines, and smart cities. Sunlight makes running and managing applications and infrastructure at the edge as easy as in the cloud, and works with efficient, ruggedized edge hardware so you can consolidate all of your in-location edge applications with full isolation, security, and high availability.
