Hyperconverged Infrastructure Is at an Inflection Point

There are several dynamics that support this assertion. First, hyperconverged infrastructure is now being adopted for mainstream use cases, such as data center consolidation, and not just for project-specific use cases such as VDI and ROBO. As a proof point of this shift, Doron shared the story of a Global 50 financial services firm that has adopted SimpliVity hyperconverged infrastructure. This enterprise had a complex, multivendor IT environment deployed across six non-optimized data centers that was far too costly and complex to maintain and scale. After extensive evaluation of legacy infrastructure alternatives, including traditional storage solutions and other hyperconverged infrastructure vendors, the company chose SimpliVity’s OmniStack solution with Cisco UCS servers.

Spotlight

Digital Lumens

As an OSRAM business, Digital Lumens is driving the industrial and commercial smart building revolution through superior software, products, and system integration. Its cloud-based intelligence platform, SiteWorx, brings the tangible benefits of the Internet of Things (IoT) to commercial and industrial environments worldwide, and leverages the power of connected lighting, IoT sensors, and software to deliver business intelligence from a unique vantage point—overhead.

OTHER ARTICLES
Hyper-Converged Infrastructure

Data Center as a Service Is the Way of the Future

Article | October 10, 2023

Data Center as a Service (DCaaS) is a hosting service that gives clients access to physical data center infrastructure and amenities. Through a wide-area network (WAN), DCaaS enables clients to remotely access the provider's storage, server, and networking capabilities. By outsourcing to a service provider, businesses can address the logistical and financial challenges of running an on-site data center. Many enterprises rely on DCaaS to overcome the physical constraints of their on-site infrastructure or to offload the hosting and management of non-mission-critical applications. Businesses that require robust data management solutions but lack the necessary internal resources can adopt DCaaS; it is an ideal answer for companies struggling with a shortage of IT staff or a lack of funding for system maintenance.

Added benefits: Data Center as a Service allows businesses to be independent of their physical infrastructure through:

A single-provider API
Data centers without staff
Effortless handling of the influx of data
Data centers in regions with more stable climates

Data Center as a Service also helps democratize the data center itself, allowing companies that could never afford the huge investments that have gotten us this far to benefit from these developments. This is perhaps the most important point, as Infrastructure as a Service enables smaller companies to get started without a huge investment.

Conclusion

Data Center as a Service (DCaaS) enables clients to access a data center and its features remotely, whereas data center services might include complete management of an organization's on-premises infrastructure resources. IT can be outsourced using data center services to manage an organization's network, storage, computing, cloud, and maintenance. Many businesses outsource their infrastructure to improve operational effectiveness, scale, and cost-effectiveness. It can be challenging to manage your existing infrastructure while keeping pace with innovation, but staying on the cutting edge of technology is critical. Organizations can stay future-ready by working with a vendor that can supply both DCaaS and data center services.
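One of the benefits listed above is a single-provider API. As a rough illustration of what driving DCaaS capacity through such an API could look like, here is a minimal Python sketch; the endpoint, token handling, and payload fields are hypothetical placeholders, not any particular provider's interface.

```python
"""Minimal sketch of provisioning capacity through a hypothetical
single-provider DCaaS REST API. The endpoint, payload fields and token
below are illustrative assumptions, not a real vendor's API."""
import requests

API_BASE = "https://dcaas.example.com/v1"   # hypothetical provider endpoint
TOKEN = "REPLACE_WITH_API_TOKEN"            # issued by the provider

def provision_server(cores: int, memory_gb: int, storage_tb: int) -> dict:
    """Request a dedicated server from the provider over its WAN-facing API."""
    resp = requests.post(
        f"{API_BASE}/servers",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"cores": cores, "memory_gb": memory_gb, "storage_tb": storage_tb},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()          # e.g. {"id": "...", "status": "provisioning"}

if __name__ == "__main__":
    server = provision_server(cores=16, memory_gb=128, storage_tb=4)
    print(f"Requested server {server.get('id')}: {server.get('status')}")
```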

Read More
Hyper-Converged Infrastructure

Securing the 5G edge

Article | September 14, 2023

The rollout of 5G networks coupled with edge compute introduces new security concerns for both the network and the enterprise. Security at the edge presents a unique set of challenges that differ from those faced by traditional data centers. Today, new concerns emerge from the combination of distributed architectures and a disaggregated network, creating new challenges for service providers.

Many mission-critical applications enabled by 5G connectivity, such as smart factories, are better off hosted at the edge because it is more economical and delivers better Quality of Service (QoS). However, applications must also be secured; communication service providers need to ensure that applications operate in an environment that is both safe and provides isolation. This means that secure designs and protocols are in place to pre-empt threats, avoid incidents, and minimize response time when incidents do occur.

As enterprises adopt private 5G networks to drive their Industry 4.0 strategies, these new enterprise 5G trends demand a new approach to security. Companies must find ways to reduce their exposure to cyberattacks that could potentially disrupt mission-critical services, compromise industrial assets, and threaten the safety of their workforce. Cybersecurity readiness is essential to ensure private network investments are not devalued.

The 5G network architecture, particularly at the edge, introduces new levels of service decomposition, now evolving beyond the virtual machine and into the space of orchestrated containers. Such disaggregation requires the operation of a layered technology stack, from the physical infrastructure to resource abstraction, container enablement, and orchestration, all of which present attack surfaces that need to be addressed from a security perspective. So how can CSPs protect their network and services from complex and rapidly growing threats?

Addressing vulnerability points of the network, layer by layer

As networks grow and the number of connected nodes at the edge multiplies, so do the vulnerability points. The distributed nature of the 5G edge increases exposure simply by having network infrastructure scattered across tens of thousands of sites. The arrival of the Internet of Things (IoT) further complicates the picture: with a greater number of connected and mobile devices potentially creating new network-bridging connection points, questions around network security have become more relevant. As the integrity of the physical site cannot be guaranteed in the same way as a supervised data center, additional security measures need to be taken to protect the infrastructure. Transport and application control layers also need to be secured, enabling forms of "isolation" that prevent a breach from propagating to other layers and components. Each layer requires specific security measures to ensure overall network security: use of Trusted Platform Module (TPM) chipsets on motherboards, a UEFI secure OS boot process, secure connections in the control plane, and more. These measures all contribute to, and are an integral part of, an end-to-end network security design and strategy.

Open RAN for a more secure solution

The latest developments in open RAN and the collaborative standards-setting process related to open interfaces and supply chain diversification are enhancing the security of 5G networks. This is happening for two reasons. First, traditional networks are built using vendor-proprietary technology: a limited number of vendors dominate the telco equipment market and create vendor lock-in for service providers, forcing them to also rely on vendors' proprietary security solutions. This in turn prevents the adoption of best-of-breed solutions and slows innovation and speed of response, potentially amplifying the impact of a security breach. Second, open RAN standardization initiatives employ a set of open-source, standards-based components. This has a positive effect on security, as the design embedded in components is openly visible and understood; vendors can then contribute to such open-source projects where tighter security requirements need to be addressed. Aside from the inherent security of the open-source components, open RAN defines a number of open interfaces that can be individually assessed for their security properties. The openness intrinsic to open RAN means that service components can be seamlessly upgraded or swapped to introduce more stringent security characteristics or to swiftly address identified vulnerabilities.

Securing network components with AI

Monitoring the status of myriad network components, and in particular spotting a security attack taking place among a multitude of cooperating application functions, requires resources that exceed the capabilities of a finite team of human operators. This is where advances in AI technology can help augment the abilities of operations teams. AI massively scales the ability to monitor any number of KPIs, learn their characteristic behavior, and identify anomalies, making it the ideal companion in the secure operation of the 5G edge. The self-learning aspect of AI supports not just the identification of known incident patterns but also the ability to learn about new, unknown, and unanticipated threats.

Security by design

Security needs to be integral to the design of the network architecture and its services. The adoption of open standards caters to the definition of security best practices in both the design and operation of the new 5G network edge. The analytics capabilities embedded in edge hyperconverged infrastructure components provide the platform on which to build an effective monitoring and troubleshooting toolkit, ensuring the secure operation of the intelligent edge.
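To make the KPI-monitoring idea in "Securing network components with AI" concrete, here is a minimal Python sketch of learning a metric's characteristic behavior and flagging deviations. A real AIOps pipeline would use far richer models; the window size and threshold below are illustrative assumptions.

```python
"""Minimal sketch of KPI anomaly detection: learn a metric's baseline
behaviour and flag sharp deviations. The window and z-score threshold
are illustrative assumptions, not a production configuration."""
from collections import deque
from statistics import mean, stdev

class KpiAnomalyDetector:
    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.samples = deque(maxlen=window)   # rolling history of the KPI
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if the new sample deviates sharply from the learned baseline."""
        anomalous = False
        if len(self.samples) >= 10:           # wait until a minimal baseline exists
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Example: watch packets-per-second on one edge node
detector = KpiAnomalyDetector()
for pps in [1000, 1020, 990, 1015, 1005, 998, 1010, 1003, 995, 1008, 9000]:
    if detector.observe(pps):
        print(f"Anomalous KPI sample: {pps} pps")
```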

Read More
Hyper-Converged Infrastructure

Network Security: The Safety Net in the Digital World

Article | October 3, 2023

Every business or organization has spent a lot of time and energy building its network infrastructure. Countless hours have gone into establishing the right resources, ensuring that the network offers connectivity, operation, management, and communication. Its complex hardware, software, service architecture, and strategies all work together for optimal, dependable use.

Securing that network, however, requires ongoing, consistent work, and establishing a security strategy is only the first step. The underlying architecture of your network should account for a range of implementation, upkeep, and continuous active procedures. Network infrastructure security requires a comprehensive strategy that includes best practices and ongoing procedures to guarantee that the underlying infrastructure is always safe. A company's choice of security measures is determined by:

Applicable legal requirements
Rules unique to the industry
The specific network and security needs

Securing network infrastructure has numerous significant advantages. For example, a business or institution can cut expenses, boost output, secure internal communications, and guarantee the security of sensitive data. Hardware, software, and services are all vital, but any of them could have flaws that unintentional or intentional acts could exploit. Network infrastructure security is intended to provide sophisticated, comprehensive resources for defense against internal and external threats. Infrastructures are susceptible to attacks such as denial-of-service, ransomware, spam, and unauthorized access.

Implementing and maintaining a workable security plan for your network architecture can be challenging and time-consuming, and experts can help with this crucial, continuous process. A robust infrastructure lowers operational costs, boosts output, and protects sensitive data from hackers. While no security measure can prevent all attack attempts, network infrastructure security helps you lessen the effects of a cyberattack and ensures that your business is back up and running as quickly as feasible.

Read More
Application Infrastructure

A Look at Trends in IT Infrastructure and Operations for 2022

Article | May 9, 2022

We’re all hoping that 2022 will finally end the unprecedented challenges brought by the global pandemic and that things will return to a new normalcy. For IT infrastructure and operations organizations, the trends we are seeing today will likely continue, but a few areas will need special attention from IT leaders over the next 12 to 18 months. In no particular order, they include:

The New Edge

Edge computing is now at the forefront. Two primary factors make it business-critical: the increased prevalence of remote and hybrid workplace models, in which employees continue working remotely from home or a branch office, and the resulting increase in adoption of cloud-based business and communications services. With the rising focus on remote and hybrid workplace cultures, Zoom, Microsoft Teams, and Google Meet have continued to expand their solutions and add new features. As people start moving back to the office, they are likely to want the same experience they had at home. In a typical enterprise setup, branch office traffic is usually backhauled all the way to the data center. This architecture severely impacts the user experience, so enterprises will have to review their network architectures and come up with a roadmap to accommodate local egress between branch offices and headquarters. That's where the edge can help, bringing services closer to the workforce. This also brings an opportunity to optimize costs by migrating from expensive multi-protocol label switching (MPLS) or private circuits to relatively low-cost direct internet circuits, which is addressed by the new secure access service edge (SASE) architecture offered by many established vendors. I anticipate some components of SASE, specifically those related to software-defined wide area networking (SD-WAN), local egress, and virtual private networks (VPNs), will drive a lot of conversation this year.

Holistic Cloud Strategy

Cloud adoption will continue to grow, and along with software as a service (SaaS), there will be renewed interest in infrastructure as a service (IaaS), albeit for specific workloads. For a medium-to-large enterprise with a substantial development environment, it will still be cost-prohibitive to move everything to the cloud, so any cloud strategy needs to be holistic and forward-looking to maximize its business value. Another pandemic-induced shift is from using virtual machines (VMs) as the consumption unit of compute to containers as the consumption unit of software. For on-premises or private cloud deployment architectures that require sustainable management, organizations will have to orchestrate containers and deploy efficient container security and management tools.

Automation

Now that cloud adoption, migration, and edge computing architectures are becoming more prevalent, legacy methods of infrastructure provisioning and management will not scale. By increasing infrastructure automation, enterprises can optimize costs and be more flexible and efficient, but only if they are successful at developing new skills. Achieving the goal of "infrastructure as code" will require a shift in perspective on infrastructure automation, toward one that focuses on developing and sustaining skills and roles that improve efficiency and agility across on-premises, cloud, and edge infrastructures. Defining the roles of designers and architects to support automation is essential to ensure that automation works as expected, avoids significant errors, and complements other technologies.

AIOps (Artificial Intelligence for IT Operations)

Complementing the automation trend, the implementation of AIOps to automate IT operations processes such as event correlation, anomaly detection, and causality determination will also be important. AIOps eliminates data silos in IT by bringing all types of data under one roof, where machine learning (ML)-based methods can be applied to develop insights for responsive enhancements and corrections. AIOps can also help with probable-cause analytics by focusing on the most likely source of a problem. The concept of site reliability engineering (SRE) is being increasingly adopted by SaaS providers and will gain importance in enterprise IT environments due to the trends listed above. AIOps is a key component that will enable site reliability engineers (SREs) to respond more quickly, and even proactively, by resolving issues without manual intervention.

These focus areas are by no means an exhaustive list. A variety of trends will be more prevalent in specific industries, but a common theme in the post-pandemic era is going to be superior delivery of IT services. That's also at the heart of the Autonomous Digital Enterprise, a forward-focused business framework designed to help companies make technology investments for the future.
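As a rough illustration of the event-correlation step mentioned in the AIOps section, the Python sketch below groups raw alerts that hit the same resource within a short time window into a single incident. The field names and five-minute window are illustrative assumptions rather than any specific product's behavior.

```python
"""Minimal sketch of AIOps-style event correlation: collapse alerts on
the same resource within a short window into one incident. Field names
and the window length are illustrative assumptions."""
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Event:
    timestamp: float    # seconds since epoch
    resource: str       # e.g. "edge-site-17/router-1"
    message: str

def correlate(events: List[Event], window_s: float = 300.0) -> List[List[Event]]:
    """Return groups of events on the same resource within window_s of each other."""
    groups: List[List[Event]] = []
    last_seen: Dict[str, List[Event]] = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        bucket = last_seen.get(ev.resource)
        if bucket and ev.timestamp - bucket[-1].timestamp <= window_s:
            bucket.append(ev)                 # same incident: extend the group
        else:
            bucket = [ev]                     # new incident for this resource
            groups.append(bucket)
        last_seen[ev.resource] = bucket
    return groups

incidents = correlate([
    Event(0, "edge-site-17/router-1", "link flap"),
    Event(45, "edge-site-17/router-1", "BGP session down"),
    Event(4000, "edge-site-17/router-1", "link flap"),
])
print(f"{len(incidents)} incident(s) from 3 raw events")  # -> 2 incident(s)
```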

Read More

Related News

Hyper-Converged Infrastructure, Application Infrastructure

Nutanix Accelerates Kubernetes Adoption in the Enterprise

Nutanix | October 27, 2022

Nutanix, a leader in hybrid multicloud computing, today announced new features in its Cloud Platform to accelerate the adoption of Kubernetes running both at scale and cost-effectively. The company announced broad support for leading Kubernetes container platforms, built-in infrastructure-as-code capabilities, and enhanced data services for modern applications. These new features allow DevOps teams to accelerate application delivery with the performance, governance, and flexibility of the Nutanix Cloud Platform while allowing customers to maintain control of their IT operating costs.

“Kubernetes deployments are inherently dynamic and challenging to manage at scale. Running Kubernetes container platforms cost-effectively at large scale requires developer-ready infrastructure that seamlessly adapts to changing requirements. Our expertise in simplifying infrastructure management while optimizing resources, both on-premises and in the public cloud, is now being applied to help enterprises adopt Kubernetes more quickly. The Nutanix Cloud Platform now supports a broad choice of Kubernetes container platforms, provides integrated data services for modern applications, and enables developers to provision infrastructure as code.” Thomas Cornely, SVP, Product Management, Nutanix

According to Gartner, by 2027, 25% of all enterprise applications will run in containers, an increase from fewer than 10% in 2021. This is a significant challenge for many, given that most Kubernetes solutions are not built to support enterprise scale, and even fewer can do so cost-effectively. The Nutanix Cloud Platform enables enterprises to run Kubernetes in a software-defined infrastructure environment that can scale linearly. Additionally, whether running Kubernetes on-premises or in the public cloud, Nutanix delivers a cost-effective solution that can help lower total cost of ownership by up to 53% when compared to other native cloud deployment solutions. New capabilities, including broad support for leading Kubernetes container platforms, built-in infrastructure-as-code capabilities, and enhanced data services, make Nutanix an even stronger proposition for enterprises looking to deploy Kubernetes at scale. Specifically, new enhancements include:

Broad Kubernetes Ecosystem: The Nutanix Cloud Platform, with the built-in AHV hypervisor, now supports most leading Kubernetes container platforms with the addition of Amazon EKS-A. This builds on a large ecosystem including Red Hat OpenShift, SUSE Rancher, Google Anthos, and Microsoft Azure Arc for edge deployments, along with the native Nutanix Kubernetes offering, Nutanix Kubernetes Engine (NKE).

Built-In Infrastructure-as-Code Operating Model: Nutanix also announced an updated API family along with SDKs in Java, JavaScript, Go, and Python, currently under development. This will enable automation at scale and consistent operations regardless of location (in the data center, in the public cloud, or at the edge), both of key importance to enterprises. Additionally, when combined with Red Hat Ansible Certified Content or the Nutanix Terraform provider, a DevOps methodology can be brought to infrastructure through automation leveraging infrastructure as code.

Strengthened Data Services for Modern Applications: The Nutanix Cloud Platform's web-scale architecture enables customers to start small and scale to multi-PB deployments as application needs grow. It is the only platform to unify delivery of integrated data services for file, object, and now database services on the same platform for Kubernetes-based applications. Today Nutanix launched the Nutanix Database Service Operator for Kubernetes, which enables developers to quickly and easily provision and attach databases to their application stacks directly from development environments. The open-source operator is available via artifacthub.io as well as by direct download from GitHub. Additionally, Nutanix Objects now supports a reference implementation of the Container Object Storage Interface (COSI) for ease of orchestration and self-service provisioning. It also adds support for observability using Prometheus. Lastly, Objects is now validated with modern analytics applications including Presto, Dremio, and Vertica, along with Confluent Kafka, to efficiently enable the large-scale data pipelines often used in real-time streaming applications.

These new features build on the Nutanix Cloud Platform's ability to handle the dynamic demands of Kubernetes applications at scale. With Nutanix hyperconverged infrastructure, performance and capacity scale linearly, resilience is delivered from the ground up with self-healing nodes, and persistent storage is natively integrated. Additionally, the Nutanix Cloud Platform can help deliver cost efficiencies by eliminating unused compute and storage resources. For customers looking at cloud integrations, the same Nutanix value is delivered across hybrid multicloud endpoints with full license portability across edge, data center, service provider, and hyperscaler points of presence.

“When we decided to bring the core platform for our solutions in-house, we decided to take a modular containerized approach to give us the desired flexibility and simplify management by maintaining customization as configurations,” said Larry McClanahan, Chief Product Officer, Nymbus. “Our partnership with Red Hat and Nutanix gives us the flexibility to innovate, the speed to get to market fast, and the tremendous scalability to support ongoing growth. We’re thrilled that we can better help our customers succeed in the digital banking market with unique solutions.”

"Container development platforms promise faster application development speed, but will only be deployed by organizations who can maintain compliance, day 2 operations, and cost management control at scale,” said Paul Nashawaty, Senior Analyst at ESG. “Nutanix offers a compelling path to speed the deployment of modern applications at scale and in a cost-effective manner, with full choice of Kubernetes container development environments and cloud endpoints."

About Nutanix

Nutanix is a global leader in cloud software and a pioneer in hyperconverged infrastructure solutions, making clouds invisible and freeing customers to focus on their business outcomes. Organizations around the world use Nutanix software as a single platform to manage any app at any location across their hybrid multicloud environments.
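For readers curious what provisioning a database through a Kubernetes operator of this kind might look like, here is a minimal Python sketch using the official Kubernetes client to submit a custom resource. The group, version, kind, and spec fields are hypothetical placeholders, not the actual schema of the Nutanix Database Service Operator; consult the operator's documentation for the real resource definition.

```python
"""Minimal sketch of provisioning a database for a Kubernetes application
by submitting an operator custom resource. The group/version/kind and
spec fields are illustrative assumptions, not a specific operator's schema."""
from kubernetes import client, config

config.load_kube_config()                       # or load_incluster_config() inside a pod
api = client.CustomObjectsApi()

database_cr = {
    "apiVersion": "db.example.com/v1alpha1",    # hypothetical group/version
    "kind": "Database",                         # hypothetical kind
    "metadata": {"name": "orders-db", "namespace": "shop"},
    "spec": {                                   # illustrative spec fields only
        "engine": "postgres",
        "size": "small",
        "credentialsSecret": "orders-db-creds",
    },
}

api.create_namespaced_custom_object(
    group="db.example.com",
    version="v1alpha1",
    namespace="shop",
    plural="databases",
    body=database_cr,
)
print("Database custom resource submitted; the operator reconciles it into a running database.")
```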

Read More

Application Infrastructure, IT Systems Management

Scale Computing Tops CRN’s 2022 Annual Report Card for Edge Computing and Converged/Hyperconverged Infrastructure

Scale Computing | August 23, 2022

Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions, today announced that CRN®, a brand of The Channel Company, has named it a winner of the 2022 CRN Annual Report Card (ARC) Awards in both the Edge Computing category and the Converged/Hyperconverged Infrastructure category. This is the fourth consecutive year Scale Computing has been recognized as a CRN ARC Award winner, and the company once again swept all of the subcategories, including Product Innovation, Support, Partnership, and Managed & Cloud Services, in both Edge Computing and Converged/Hyperconverged Infrastructure. The company also announced a “save the date” for the 2023 Scale Computing Platform Partner Summit, February 15-16 in Las Vegas, NV.

With a 37-year history, CRN’s ARC Awards recognize best-in-class vendors that are devoted to boosting IT channel growth through innovation in technology and partner strategy. Through the ARC Awards, known as one of the most prestigious honors in the IT industry, solution providers offer key feedback that commends technology manufacturers for designing channel-friendly product offerings, developing strong partner programs, and building long-term successful relationships with solution providers.

“This recognition represents the ‘Voice of the Partner’ and we are very proud to be named the leader in both Edge Computing and Converged/Hyperconverged Infrastructure. This year is particularly meaningful as vendor survey participation was mandatory, ranking us number one above all of our competition. When we founded Scale Computing, we set out to create a company that would be the best vendor our customers and partners would ever work with. Sweeping all subcategories over our competitors for years in a row proves we are delivering on that promise.” Jeff Ready, CEO and co-founder, Scale Computing

Scale Computing Platform brings simplicity, high availability, and scalability together, replacing existing infrastructure and running applications in a single, easy-to-manage platform. Bringing faster time to value than competing solutions, SC//Platform enables organizations to run applications in a unified environment that scales from 1 to 50,000 servers. Regardless of hardware requirements, the same innovative software and simple user interface provide the power to run infrastructure efficiently at the edge, in the distributed enterprise, and in the data center.

The ARC Awards are based on an invitation-only research survey conducted by The Channel Company. Responses from 3,000 solution providers across North America were evaluated in this year’s survey, rating 82 vendor partners across four criteria: product innovation, support, partnership, and managed cloud services. Scores were awarded in 25 major product categories in technology areas that are critical to channel partner success.

“It’s our pleasure to honor vendors that consistently deliver top-performing products and services to establish and foster successful channel partner relationships,” said Blaine Raddon, CEO, The Channel Company. “In addition to highlighting our winners, CRN’s Annual Report Card Awards provide vendors with actionable feedback and insight into their current standing with partners that can be incorporated into their channel strategies in the future. We look forward to offering our congratulations to all the award recipients at XChange 2022 in August.”

Winners will be featured throughout The Channel Company’s XChange 2022 conference, taking place August 21-23 in Denver, CO. Coverage of the CRN 2022 ARC results can be found online at www.CRN.com/ARC and will be featured in the October 2022 issue of CRN Magazine.

About Scale Computing

Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real time, enabling applications to achieve maximum uptime even when local IT resources and staff are scarce. Edge computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing’s products are sold by thousands of value-added resellers, integrators, and service providers worldwide. When ease of use, high availability, and TCO matter, Scale Computing Platform is the ideal infrastructure platform.

About The Channel Company

The Channel Company enables breakthrough IT channel performance with our dominant media, engaging events, expert consulting and education, and innovative marketing services and platforms. As the channel catalyst, we connect and empower technology suppliers, solution providers, and end users. Backed by more than 30 years of unequalled channel experience, we draw from our deep knowledge to envision innovative new solutions for ever-evolving challenges in the technology marketplace.

Read More

Hyper-Converged Infrastructure

Sunlight.io launches first hyperconverged stack supporting the NVIDIA Jetson-based Lenovo ThinkEdge SE70 to make edge AI deployable at scale

Sunlight | June 30, 2022

Sunlight.io, the edge infrastructure company, today announced support for the NVIDIA Jetson™ edge AI platform and the Lenovo SE70 with the launch of its beta program, ‘Project Rosie.’ Sunlight NexVisor is the first full hyperconverged stack to support the Arm-CPU-based NVIDIA Jetson. Sunlight NexVisor coupled with the Lenovo SE70 makes it easy to deploy AI applications anywhere at the edge. Application developers can be among the first to access the technology and test their AI applications by applying to join the beta program.

AI is a ‘killer application’ at the edge, where it is bringing real-time “insight to action” across a wide range of use cases. For example, computer vision, combining cameras, video streaming, and analytics, is being implemented at drive-thrus nationwide for faster and more personalized food ordering; on manufacturing production lines to instantly identify and remove faulty items; and across smart cities to enhance population and crowd security. These sorts of AI applications need high levels of processing power with low latency and reliable networking in order to give real-time results.

Enterprises want to replicate for their edge AI applications the simplicity of the hyperconverged infrastructure they enjoy in their core data centers. However, data center HCI is not able to run in the constrained environments that exist at the edge because of its large RAM and CPU overhead and its lack of edge management capabilities. This makes edge deployments extremely resource-intensive to manage and hard to scale. Sunlight NexVisor is the only hyperconverged stack able to run on both x86 and Arm architectures with a tiny footprint suitable for constrained edge environments, and it includes centralized management and application deployment capabilities.

NVIDIA Jetson is the world's leading platform for AI at the edge. NVIDIA Jetson modules are small-form-factor, high-performance computers containing an Arm processor and GPU. The combination of Sunlight NexVisor and the NVIDIA Jetson-powered Lenovo ThinkEdge SE70 makes it possible to run demanding edge AI applications in harsh environments that span hundreds or thousands of sites, with easy single-pane-of-glass management, low TCO, and tiny power and space requirements. Sunlight is a member of NVIDIA Inception, a global program designed to nurture cutting-edge startups.

Scott Tease, Lenovo’s VP for HPC and AI, said, "Our customers realize the advantages of edge AI and deploying solutions closer to the point of data capture to run real-time inferencing. That is why we are so excited to be partnering with Sunlight as they support our edge portfolio to significantly improve the efficiency and economics of AI deployments for customers worldwide."

“We are excited to launch this exclusive beta program for users who need to run efficient, manageable AI out where the data is generated: at the edge. Sunlight already offers full support for the Lenovo ThinkEdge and ThinkSystem range, including the Intel-based SE30, SE50, SE350 and SE450. Together, we’ve been able to produce a truly industry-first solution by combining Sunlight’s turn-key, edge-as-a-service offering with Lenovo’s leading AI edge platform powered by NVIDIA Jetson. Sunlight was born out of a collaboration with Arm back in 2013 to build a lightweight hypervisor, and we’re seeing huge demand for the use of Arm-based servers at the edge due to their performance and power efficiency.” Julian Chesterfield, Founder and CEO of Sunlight

About Sunlight

The Sunlight Edge is a reliable, secure, zero-touch, and economical infrastructure that helps turn your critical edge data into real-time insight and action across your retail stores, manufacturing lines, and smart cities. Sunlight makes running and managing applications and infrastructure at the edge as easy as in the cloud. Sunlight works with efficient, ruggedized edge hardware, so you can consolidate all of your in-location edge applications with full isolation, security, and high availability.

Read More

Events