Intent-Based Networking

Networks are at the heart of the unstoppable evolution toward a digital economy. Digitalization is changing the way businesses, partners, employees, and consumers interact at an unprecedented pace. Products and services can be customized, ordered, and delivered at the click of a button using web-based applications. Business data can be acquired, analyzed, and exchanged in near-real time. Geographic boundaries between businesses and consumers are diminishing. And the network is at the center of communication to and between the applications driving the digital economy.

Spotlight

Ovex Technologies Pakistan (Pvt.) Ltd.

Ovex Technologies (Pvt.) Ltd. was incorporated in January 2003 with a vision to become one of the region’s leading providers of business process outsourcing and IT solutions.

OTHER ARTICLES
Hyper-Converged Infrastructure

Accelerating DevOps and Continuous Delivery with IaaS Virtualization

Article | October 10, 2023

Adopting DevOps and continuous delivery (CD) in IaaS environments is a strategic imperative for organizations seeking agility, competitiveness, and customer satisfaction in their software delivery processes.

Contents

1. Introduction
2. What is IaaS Virtualization?
3. Virtualization Techniques for DevOps and Continuous Delivery
4. Integration of IaaS with CI/CD Pipelines
5. Considerations in IaaS Virtualized Environments
5.1. CPU Swap Wait
5.2. CPU System/Wait Time for VKernel
5.3. Memory Balloon
5.4. Memory Swap Rate
5.5. Memory Usage
5.6. Disk/Network Latency
6. Industry Tips for IaaS Virtualization Implementation
6.1. Infrastructure Testing
6.2. Application Testing
6.3. Security Monitoring
6.4. Performance Monitoring
6.5. Cost Optimization
7. Conclusion

1. Introduction

Infrastructure as a Service (IaaS) virtualization offers significant advantages for organizations seeking to enhance their agility, flexibility, and speed to market within DevOps and continuous delivery frameworks. Addressing the associated risks and challenges is crucial, and can be achieved by employing the appropriate monitoring and testing techniques outlined later in this blog. IaaS virtualization allows organizations to provision and de-provision resources as needed, eliminating the need for long-term investments in hardware and data centers. It also supports multiple operating systems, databases, and programming languages, empowering teams to select the tools and technologies that best suit their requirements. However, to leverage the full potential of virtualized IaaS, organizations must implement comprehensive testing and monitoring strategies, ensure proper security and compliance controls, and adopt best practices for resource optimization and management. To help achieve high availability and fault tolerance, along with the advanced networking that enables complex application architectures, this blog offers five industry tips.
2. What is IaaS Virtualization?

IaaS virtualization involves running multiple operating systems with different configurations simultaneously on shared hardware. To run virtual machines on a system, a software layer known as the virtual machine monitor (VMM), or hypervisor, is required. Virtualization in IaaS supports website hosting, application development and testing, disaster recovery, and data storage and backup. Startups and small businesses with limited IT resources and budgets can benefit greatly from virtualized IaaS, which lets them provision the necessary infrastructure resources quickly and without significant capital expenditure. Virtualized IaaS is a potent tool for businesses and organizations of all sizes, enabling greater infrastructure resource flexibility, scalability, and efficiency.

3. Virtualization Techniques for DevOps and Continuous Delivery

Virtualization is a vital part of the DevOps software stack. Virtualization in the DevOps process allows teams to create, test, and implement code in simulated environments without wasting valuable computing resources. DevOps teams can use virtual services for thorough testing, preventing bottlenecks that could slow down release times. DevOps also relies heavily on virtualization for building intricate cloud, API, and SOA systems. In addition, virtual machines benefit test-driven development (TDD) teams that prefer to begin their troubleshooting at the API level.

4. Integration of IaaS with CI/CD Pipelines

Continuous integration is a coding practice in which small code changes are frequently implemented and checked into a version control repository. This process not only packages software and database components but also automatically executes unit tests and other tests, giving developers vital feedback on any breakages caused by code changes. Continuous testing integrates automated tests into the CI/CD pipeline.
For example, unit and functional tests identify issues during continuous integration, while performance and security tests are executed after a build is delivered in continuous delivery. Continuous delivery is the process of automating the deployment of applications to one or more delivery environments. IaaS provides access to computing resources through virtual server instances, which replicate the capabilities of an on-premise data center, and offers services including server space, security, load balancing, and additional bandwidth. In modern software development and deployment, it is common to integrate IaaS with CI/CD pipelines. This helps automate the creation and management of infrastructure using infrastructure-as-code (IaC) tools: templates can be created to provision resources on the IaaS platform, ensuring consistency and meeting software requirements. Additionally, containerization technologies such as Docker and Kubernetes can deploy applications on IaaS platforms.

5. Considerations in IaaS Virtualized Environments

5.1. CPU Swap Wait

CPU swap wait is the time a virtual machine spends waiting while the hypervisor swaps parts of the VM's memory back in from disk. Swapping happens when the hypervisor is short of memory, often due to missing balloon drivers or memory overcommitment, and it can degrade the application's response time. Installing the balloon driver and/or reducing the number of VMs on the physical machine can resolve this issue.

5.2. CPU System/Wait Time for VKernel

Virtualization systems often report the CPU or wait time consumed by the virtualization kernel on behalf of each virtual machine as a measure of CPU resource overhead. While this metric cannot be linked directly to response time, a significant increase can impact both ready and swap times. If this occurs, the system may be misconfigured or overloaded, and reducing the number of VMs on the machine may be necessary.

5.3. Memory Balloon

Memory ballooning is a memory management technique used in virtualized IaaS environments. The hypervisor inflates a balloon driver inside the VM, which consumes memory within the guest so that the underlying physical pages can be reclaimed for other VMs. As a result, when the host system is low on memory, it effectively takes memory back from its guests, degrading guest performance through swapping, reduced file-system buffers, and smaller system caches.

5.4. Memory Swap Rate

Memory swap rate is a performance metric used in virtualized IaaS environments to measure the amount of memory being swapped to disk. A high swap rate leads to longer CPU swap times and negatively affects application performance. When a running VM requires more memory than is physically available on the server, the hypervisor may use disk space as a temporary storage area for the excess. To optimize, it is therefore important to ensure that VMs have sufficient memory resources allocated.

5.5. Memory Usage

Memory usage refers to the amount of memory a VM is using at any given time, assessed at the host level and the VM level and against granted memory. When memory usage exceeds the available physical memory on the server, the hypervisor may use disk space as a temporary storage area for the excess, leading to performance issues. The disparity between used and granted memory indicates the overcommitment rate, which can be adjusted through ballooning.

5.6. Disk/Network Latency

Some virtualization providers offer integrated utilities for assessing the latency of the disks and network interfaces a virtual machine uses. Since latency directly affects response time, increased latency at the hypervisor level will also impact the application. Excessive latency indicates the system is overloaded and requires reconfiguration.
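These hypervisor metrics lend themselves to automated alerting. A minimal sketch in Python of a per-VM health check; the metric names, sample values, and threshold numbers here are illustrative assumptions, not vendor defaults:

```python
# Toy health check over hypervisor metrics (names and thresholds are illustrative).
THRESHOLDS = {
    "cpu_swap_wait_ms": 20.0,      # sustained swap wait hurts response time
    "memory_swap_rate_mbps": 1.0,  # any significant swapping is a warning sign
    "memory_usage_pct": 90.0,      # nearing physical memory exhaustion
    "disk_latency_ms": 15.0,       # hypervisor-level latency feeds into the app
}

def check_vm(sample: dict) -> list[str]:
    """Return a warning for each metric that exceeds its threshold."""
    warnings = []
    for metric, limit in THRESHOLDS.items():
        value = sample.get(metric)
        if value is not None and value > limit:
            warnings.append(f"{metric}={value} exceeds {limit}")
    return warnings

def overcommit_ratio(used_mb: float, granted_mb: float) -> float:
    """The disparity between used and granted memory indicates overcommitment."""
    return used_mb / granted_mb if granted_mb else 0.0

sample = {"cpu_swap_wait_ms": 35.0, "memory_usage_pct": 72.0, "disk_latency_ms": 4.2}
print(check_vm(sample))  # only the swap-wait threshold is exceeded here
```

In practice the sample dict would be filled from the hypervisor's monitoring API rather than hard-coded, but the thresholding logic stays the same.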
These metrics enable us to monitor and detect any negative impact a virtualized system might have on our application.

6. Industry Tips for IaaS Virtualization Implementation

Testing, compliance management, and security are critical aspects of managing virtualized IaaS environments. By implementing a comprehensive strategy, organizations can ensure the reliability, security, and performance of their infrastructure and applications.

6.1. Infrastructure Testing

This involves testing the infrastructure components of the IaaS environment, such as virtual machines, networks, and storage, to ensure the infrastructure is functioning correctly and that there are no performance bottlenecks, security vulnerabilities, or configuration issues. Techniques include testing the virtualized environment, storage testing (covering data replication, backup, and recovery processes), and network testing.

6.2. Application Testing

Applications running in the virtualized IaaS environment should be thoroughly tested to ensure they perform as expected. This includes functional testing to verify that the application meets its requirements and performance testing to verify that it can handle anticipated user loads.

6.3. Security Monitoring

Security monitoring is critical in IaaS environments, owing to the increased risks and threats. It involves monitoring the infrastructure and applications for potential security threats, vulnerabilities, or breaches. In addition, regular vulnerability assessments and penetration testing help identify and address potential security issues before they become significant problems.

6.4. Performance Monitoring

Performance monitoring is essential to ensure that the underlying infrastructure meets performance expectations and has no bottlenecks. This comprises monitoring metrics such as CPU usage, memory usage, network traffic, and disk utilization.
This information is used to identify performance issues and optimize resource usage.

6.5. Cost Optimization

Cost optimization keeps a virtualized IaaS environment efficient in its resource allocation. Organizations reduce costs and optimize resource usage by identifying and monitoring usage patterns and by optimizing elastic and scalable resources. This involves right-sizing resources, utilizing infrastructure automation, reserved instances, spot instances (unused compute capacity purchased at a discount), and optimizing storage usage.

7. Conclusion

IaaS virtualization has become a critical component of DevOps and continuous delivery practices, giving DevOps teams on-demand access to scalable infrastructure resources so they can rapidly develop, test, and deploy applications with greater agility and efficiency. As DevOps teams continue to seek ways to streamline processes and improve efficiency, automation will play an increasingly important role. Automated deployment, testing, and monitoring will reduce manual intervention and increase the speed and accuracy of development cycles. In addition, containers offer a lightweight and flexible alternative to traditional virtualization, allowing DevOps teams to package applications and their dependencies into portable, self-contained units that can be moved easily between environments. This reduces the complexity of managing virtualized infrastructure and enables greater flexibility and scalability. By embracing these technologies and integrating them into their workflows, DevOps teams can achieve greater efficiency and accelerate the delivery of high-quality software products.
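The infrastructure-as-code templating described in Section 4 can be sketched without any particular cloud SDK. Everything below (the `VMTemplate` type and the pretend provisioner) is hypothetical, for illustration only; a real IaC tool would call the IaaS provider's API:

```python
from dataclasses import dataclass

# Hypothetical IaC-style template: the desired state is plain data, so the
# same template provisions identical VMs in every environment.
@dataclass(frozen=True)
class VMTemplate:
    name: str
    vcpus: int
    memory_gb: int
    image: str

def provision(template: VMTemplate, count: int) -> list[str]:
    """Pretend to create `count` VMs from a template and return their names.
    A real implementation would call the IaaS provider's API here."""
    return [f"{template.name}-{i:02d}" for i in range(count)]

runner = VMTemplate(name="ci-runner", vcpus=4, memory_gb=8, image="ubuntu-22.04")
print(provision(runner, 3))  # ['ci-runner-00', 'ci-runner-01', 'ci-runner-02']
```

Because the template is immutable data, the CI/CD pipeline can version it alongside the application code, which is what gives IaC its consistency guarantee.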

Read More
Hyper-Converged Infrastructure

Wireless Data Centers and Cloud Computing

Article | September 14, 2023

One of the most exciting applications of Vubiq Network’s innovative millimeter wave technology is ultra-high-speed, short-range communications, applied to solving the scaling constraints and costs of internal data center connectivity and switching. The limits of today’s cabled, centralized switching architectures are eliminated by leveraging the wide bandwidths of the millimeter wave spectrum for the high-density communications requirements inside the modern data center. Our patented technology can provide more than one terabit per second of wireless uplink capacity from a single server rack through an innovative approach that creates a millimeter wave massive mesh network. The elimination of all inter-rack cabling – as well as all aggregation and core switches – is combined with higher throughput, lower latency, lower power, higher reliability, and lower cost through millimeter wave wireless connectivity.

Read More
Application Infrastructure, Application Storage

How to Scale IT Infrastructure

Article | July 19, 2023

IT infrastructure scaling is the process of adjusting the size and power of an IT system to accommodate changes in storage and workload demands. Infrastructure scaling can be horizontal or vertical. Vertical scaling, or scaling up, adds more processing power and memory to an existing system, giving it an immediate boost. Horizontal scaling, or scaling out, adds more servers to the cloud, easing the bottleneck in the long run but also adding more complexity to the system.
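The vertical/horizontal distinction can be expressed as a toy capacity model. All numbers here are made up for illustration, including the efficiency factor standing in for the coordination overhead that scaling out adds:

```python
# Toy capacity model: vertical scaling grows one node; horizontal adds nodes.
def vertical_scale(node_capacity: int, factor: int) -> int:
    """Scale up: one node, made `factor` times more powerful."""
    return node_capacity * factor

def horizontal_scale(node_capacity: int, nodes: int, efficiency: float = 0.9) -> int:
    """Scale out: more nodes, minus some coordination overhead (illustrative)."""
    return int(node_capacity * nodes * efficiency)

base = 100  # requests/sec one server handles (made-up figure)
print(vertical_scale(base, 4))    # 400
print(horizontal_scale(base, 4))  # 360
```

The model mirrors the trade-off in the paragraph above: scaling up delivers its full boost immediately, while scaling out pays a complexity tax but can keep adding nodes long after a single machine hits its ceiling.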

Read More
Application Infrastructure

All You Need to Know About IaaS Vs. PaaS Vs. SaaS

Article | August 8, 2022

Nowadays, SaaS, IaaS, and PaaS are some of the most common names across the B2B and B2C sectors, because they have become the most efficient, go-to tools for starting a business. Together, they are significantly changing business operations around the globe and have emerged as distinct sectors, revamping product development, building, and delivery processes.

SaaS vs. PaaS vs. IaaS

Each cloud computing model offers specific features and functionalities, so your organization must understand the differences. Whether you require cloud-based software to create customized applications, complete control over your entire infrastructure without physically maintaining it, or simply storage options, there is a cloud service for you. No matter what you choose, migrating to the cloud is the future of your business and technology.

What is the Difference?

IaaS (Infrastructure as a Service) allows organizations to manage their business resources, such as servers, networks, and data storage, on the cloud.

PaaS (Platform as a Service) allows businesses and developers to build, host, and deploy consumer-facing apps.

SaaS (Software as a Service) offers businesses and consumers cloud-based tools and applications for everyday use.

All three cloud computing tools are easily accessed through an internet browser or online apps. A great example is Google Docs: instead of working on one MS Word document and sending it around to each other, Google Docs allows your team to work and collaborate simultaneously online.

The Market Value

A recent report says that by 2028 the global SaaS market will be worth $716.52 billion, and by 2030 the global PaaS market will be worth $319 billion. Moreover, the global IaaS market is expected to be worth $292.58 billion by 2028, giving market players many opportunities.

XaaS: Everything as a Service

Another term used more frequently in IT is XaaS, short for Everything as a Service.
It has emerged as a critical enabler of the Autonomous Digital Enterprise. XaaS describes highly customized, responsive, data-driven products and services that are entirely in the hands of the customer and based on the information they provide through everyday IoT devices such as cell phones and thermostats. Businesses can utilize this data, generated over the cloud, to deepen customer relationships, sustain the sale beyond the initial product purchase, and innovate faster.

Conclusion

Cloud computing is not restricted by physical hardware or office space. On the contrary, it allows your remote teams to work more effectively and seamlessly than ever, boosting productivity, and it offers maximum flexibility and scalability. IaaS, SaaS, or PaaS: whichever solution you choose, options are always available to help you and your team move into cloud computing.

Read More

Related News

Application Infrastructure, Windows Server OS

Palisade Infrastructure Announces Transaction with Consolidated Communications

businesswire | August 07, 2023

Palisade Infrastructure (“Palisade”) and Consolidated Communications, Inc. (“Consolidated”) have entered into an agreement whereby Palisade, on behalf of its managed funds, will acquire Consolidated’s assets in Washington state. The transaction includes Consolidated’s incumbent networks in Ellensburg and Yelm comprising a mixture of fiber-to-the-home and DSL technologies. Palisade intends to accelerate the build out of the fiber network in these markets, providing high speed, low latency connectivity to households and businesses. This is Palisade’s second broadband investment in Washington State following the announcement of the transaction to acquire Rainier Connect in December 2022. Palisade aims to develop a regional platform for fiber and high-speed broadband connectivity by investing in these markets to benefit all stakeholders including employees, customers and communities. Mike Reynolds, managing director at Palisade Infrastructure said, “We are excited to expand our fiber broadband platform in Washington State, in attractive markets that are in proximity to the Rainier Connect network. We look forward to continuing to grow the platform in the future.” This represents Palisade’s fourth transaction in North America and follows the closing of its investment in the PureSky Energy community solar platform in June 2023. Palisade is planning to launch a new fund focused on investing in digital connectivity and the energy transition later this year. Houlihan Lokey served as exclusive financial advisor and Morgan, Lewis & Bockius LLP served as legal counsel to Palisade. Lazard served as the exclusive financial advisor to Consolidated Communications on the transaction. The transaction remains subject to federal, state and local regulatory approvals and customary closing conditions. About Palisade Infrastructure Palisade Infrastructure forms part of the Palisade Group, a global independent, specialist infrastructure and real assets manager. 
Palisade Group has 30 active investments in its portfolio covering a broad range of sectors. Palisade Infrastructure’s North American capability focuses on the energy transition, digitization and transport infrastructure sectors. Palisade Infrastructure has a partnership-focused approach with a long-term investment horizon. For more information visit palisadegroup.com. About Consolidated Communications Consolidated Communications Holdings, Inc. (Nasdaq: CNSL) is dedicated to moving people, businesses and communities forward by delivering the most reliable fiber communications solutions. Consumers, businesses and wireless and wireline carriers depend on Consolidated for a wide range of high-speed internet, data, phone, security, cloud and wholesale carrier solutions. With a network spanning more than 57,500 fiber route miles, Consolidated is a top 10 U.S. fiber provider, turning technology into solutions that are backed by exceptional customer support.

Read More

Hyper-Converged Infrastructure, Application Infrastructure

Edgecore Networks Introduces New Scalable and Feature-Rich Entry-Level Ethernet Switches for IDC, Enterprise, and Campus Access Networking

businesswire | May 31, 2023

Edgecore Networks, a leading provider of traditional and open network solutions for enterprises, data centers, and telecommunication service providers, is pleased to announce the launch of its newest high-performance enterprise product family, the EPS120 Series. These optimized 1Gbps open switches are ideal for large retailers, campuses, and enterprise branches, offering robust 1G switching performance with high data transmission bandwidth and a large packet buffer to absorb traffic bursts. The EPS120 Series is powered by the latest Broadcom Trident3-X2 chipset family and a COMe board with an Intel Atom CPU, boasting expanded RAM and SSD capacity for improved switch control-plane performance. The switch series enables container-based NOS architectures, such as SONiC, to create feature-rich, unified, and scalable 1G networks, bringing BGP EVPN-VxLAN deployments to enterprise and campus environments. Its full line-rate L2/L3 forwarding and switching, multi-homing, telemetry, and large packet buffer make it ideal for both access and management networks. The switch series’ PoE model, the EPS122, features 6 x 10G uplinks and 48 x 1G downlinks with non-blocking capacity, of which eight can deliver up to 90 Watts of power per port, and the remaining 40 can deliver up to 30 Watts to each connected powered device. This flexibility allows for seamless deployment of Power-over-Ethernet wireless access networks, security applications, and campus networks utilizing existing Cat. 6/Cat. 6A cable infrastructure to power and connect surveillance cameras, wireless access points, and VoIP Phones. Additionally, the enhanced PoE budget of up to 1850W makes it effortless for network administrators to plan and deploy powered devices in ultra-high-density retail or warehouse environments for security and surveillance purposes. 
The EPS121 model combines 48 x 1G downlinks and 6 x 10G uplinks with non-blocking capacity, and SONiC for a simplified and unified deployment across data and management planes, connecting to 48 x 1G RJ-45 management ports of switches, servers, and storage devices per rack in a cloud data center or enterprise data center. With the EPS121 acting as a management switch and running the same SONiC stack as the ToR, leaf, and spine switches, the operation, management, monitoring, and control of the entire network is greatly simplified. "Edgecore is dedicated to advancing open networking towards the edge of the enterprise," said Powen Tsai, Product Line Manager. "The EPS120 switches running SONiC software provide the performance required for access networks. And, by utilizing Edgecore’s cutting-edge and proven robust designs, enterprises are able to build networks that minimize operational expenses and total cost of ownership. At Edgecore, we continue to prioritize user experience first." About Edgecore Networks Edgecore Networks Corporation is a wholly owned subsidiary of Accton Technology Corporation, the leading network ODM. Edgecore Networks delivers wired and wireless networking products and solutions through channel partners and system integrators worldwide for Data Center, Service Provider, Enterprise and SMB customers. Edgecore Networks is the leader in open networking, providing a full line of open 1G-400G Ethernet OCP Accepted™ switches, core routers, cell site gateways, virtual PON OLTs, packet transponders, and Wi-Fi access points that offer choice of commercial and open source NOS and SDN software.

Read More

Hyper-Converged Infrastructure, Storage Management, IT Systems Management

Supermicro Launches Industry's First NVIDIA HGX H100 8 and 4-GPU H100 Servers with Liquid Cooling -- Reduces Data Center Power Costs by Up to 40%

Prnewswire | May 22, 2023

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to expand its data center offerings with liquid cooled NVIDIA HGX H100 rack scale solutions. Advanced liquid cooling technologies entirely from Supermicro reduce the lead time for a complete installation, increase performance, and result in lower operating expenses while significantly reducing the PUE of data centers. Savings for a data center are estimated to be 40% for power when using Supermicro liquid cooling solutions compared to an air-cooled data center. In addition, up to 86% reduction in direct cooling costs compared to existing data centers may be realized. "Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centers worldwide," said Charles Liang, president, and CEO of Supermicro. "Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centers. With the amount of power required to enable today's rapidly evolving large scale AI models, optimizing TCO and the Total Cost to Environment (TCE) is crucial to data center operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployments, allowing us to meet our customers' requirements with a short lead time." To learn more about Supermicro's GPU servers, visit: https://www.supermicro.com/en/products/gpu AI-optimized racks with the latest Supermicro product families, including the Intel and AMD server product lines, can be quickly delivered from standard engineering templates or easily customized based on the user's unique requirements. 
Supermicro continues to offer the industry's broadest product line with the highest-performing servers and storage systems to tackle complex compute-intensive projects. Rack scale integrated solutions give customers the confidence and ability to plug the racks in, connect to the network and become more productive sooner than managing the technology themselves. The top-of-the-line liquid cooled GPU server contains dual Intel or AMD CPUs and eight or four interconnected NVIDIA HGX H100 Tensor Core GPUs. Using liquid cooling reduces the power consumption of data centers by up to 40%, resulting in lower operating costs. In addition, both systems significantly surpass the previous generation of NVIDIA HGX GPU equipped systems, providing up to 30x performance and efficiency in today's large transformer models with faster GPU-GPU interconnect speed and PCIe 5.0 based networking and storage. State-of-the-art eight NVIDIA H100 SXM5 Tensor Core GPU servers from Supermicro for today's largest scale AI models include:

SYS-821GE-TNHR – Dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 8 GPUs, 8U
https://www.supermicro.com/en/products/system/GPU/8U/SYS-821GE-TNHR

AS-8125GS-TNHR – Dual 4th Gen AMD EPYC CPUs, NVIDIA HGX H100 8 GPUs, 8U
https://www.supermicro.com/en/products/system/GPU/8U/AS-8125GS-TNHR

Supermicro also designs a range of GPU servers customizable for fast AI training, vast volume AI inferencing, or AI-fused HPC workloads, including systems with four NVIDIA H100 SXM5 Tensor Core GPUs.
SYS-421GU-TNXR – Dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 4 GPUs, 4U
https://www.supermicro.com/en/products/system/GPU/4U/SYS-421GU-TNXR

SYS-521GU-TNXR – Dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 4 GPUs, 5U
https://www.supermicro.com/en/products/system/GPU/4U/SYS-521GU-TNXR

Supermicro's liquid cooling rack level solution includes a Coolant Distribution Unit (CDU) that provides up to 80kW of direct-to-chip (D2C) cooling for today's highest TDP CPUs and GPUs for a wide range of Supermicro servers. The redundant and hot-swappable power supply and liquid cooling pumps ensure that the servers will be continuously cooled, even with a power supply or pump failure. The leak-proof connectors give customers the added confidence of uninterrupted liquid cooling for all systems. Learn more about the Supermicro Liquid Cooling system at: https://www.supermicro.com/en/solutions/liquid-cooling

Rack scale design and integration has become a critical service for systems suppliers. As AI and HPC have become increasingly critical technologies within organizations, configurations from the server level to the entire data center must be optimized and configured for maximum performance. The Supermicro system and rack scale experts work closely with customers to explore the requirements and have the knowledge and manufacturing abilities to deliver significant numbers of racks to customers worldwide. Read the Supermicro Large Scale AI Solution Brief: https://www.supermicro.com/solutions/Solution-Brief_Rack_Scale_AI.pdf

Supermicro at ISC

To explore these technologies and meet with our experts, plan on visiting Supermicro Booth D405 at the ISC High Performance 2023 event in Hamburg, Germany, May 21 – 25, 2023.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions.
Founded and operating in San Jose, California, Supermicro is committed to delivering first to market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are transforming into a Total IT Solutions provider with server, AI, storage, IoT, and switch systems, software, and services while delivering advanced high-volume motherboard, power, and chassis products. The products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power and cooling solutions (air-conditioned, free air cooling or liquid cooling). Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

Read More

Application Infrastructure, Windows Server OS

Palisade Infrastructure Announces Transaction with Consolidated Communications

businesswire | August 07, 2023

Palisade Infrastructure (“Palisade”) and Consolidated Communications, Inc. (“Consolidated”) have entered into an agreement whereby Palisade, on behalf of its managed funds, will acquire Consolidated’s assets in Washington state. The transaction includes Consolidated’s incumbent networks in Ellensburg and Yelm comprising a mixture of fiber-to-the-home and DSL technologies. Palisade intends to accelerate the build out of the fiber network in these markets, providing high speed, low latency connectivity to households and businesses. This is Palisade’s second broadband investment in Washington State following the announcement of the transaction to acquire Rainier Connect in December 2022. Palisade aims to develop a regional platform for fiber and high-speed broadband connectivity by investing in these markets to benefit all stakeholders including employees, customers and communities. Mike Reynolds, managing director at Palisade Infrastructure said, “We are excited to expand our fiber broadband platform in Washington State, in attractive markets that are in proximity to the Rainier Connect network. We look forward to continuing to grow the platform in the future.” This represents Palisade’s fourth transaction in North America and follows the closing of its investment in the PureSky Energy community solar platform in June 2023. Palisade is planning to launch a new fund focused on investing in digital connectivity and the energy transition later this year. Houlihan Lokey served as exclusive financial advisor and Morgan, Lewis & Bockius LLP served as legal counsel to Palisade. Lazard served as the exclusive financial advisor to Consolidated Communications on the transaction. The transaction remains subject to federal, state and local regulatory approvals and customary closing conditions. About Palisade Infrastructure Palisade Infrastructure forms part of the Palisade Group, a global independent, specialist infrastructure and real assets manager. 
Palisade Group has 30 active investments in its portfolio covering a broad range of sectors. Palisade Infrastructure’s North American capability focuses on the energy transition, digitization and transport infrastructure sectors. Palisade Infrastructure has a partnership-focused approach with a long-term investment horizon. For more information, visit palisadegroup.com.

About Consolidated Communications

Consolidated Communications Holdings, Inc. (Nasdaq: CNSL) is dedicated to moving people, businesses and communities forward by delivering the most reliable fiber communications solutions. Consumers, businesses, and wireless and wireline carriers depend on Consolidated for a wide range of high-speed internet, data, phone, security, cloud and wholesale carrier solutions. With a network spanning more than 57,500 fiber route miles, Consolidated is a top 10 U.S. fiber provider, turning technology into solutions that are backed by exceptional customer support.


Hyper-Converged Infrastructure, Application Infrastructure

Edgecore Networks Introduces New Scalable and Feature-Rich Entry-Level Ethernet Switches for IDC, Enterprise, and Campus Access Networking

businesswire | May 31, 2023

Edgecore Networks, a leading provider of traditional and open network solutions for enterprises, data centers, and telecommunication service providers, is pleased to announce the launch of its newest high-performance enterprise product family, the EPS120 Series. These optimized 1Gbps open switches are ideal for large retailers, campuses, and enterprise branches, offering robust 1G switching performance with high data transmission bandwidth and a large packet buffer to absorb traffic bursts.

The EPS120 Series is powered by the latest Broadcom Trident3-X2 chipset family and a COMe board with an Intel Atom CPU, boasting expanded RAM and SSD capacity for improved switch control-plane performance. The switch series enables container-based NOS architectures, such as SONiC, to create feature-rich, unified, and scalable 1G networks, bringing BGP EVPN-VXLAN deployments to enterprise and campus environments. Its full line-rate L2/L3 forwarding and switching, multi-homing, telemetry, and large packet buffer make it ideal for both access and management networks.

The switch series’ PoE model, the EPS122, features 6 x 10G uplinks and 48 x 1G downlinks with non-blocking capacity; eight of the downlinks can deliver up to 90 watts of power per port, and the remaining 40 can deliver up to 30 watts to each connected powered device. This flexibility allows for seamless deployment of Power-over-Ethernet wireless access networks, security applications, and campus networks, utilizing existing Cat. 6/Cat. 6A cable infrastructure to power and connect surveillance cameras, wireless access points, and VoIP phones. Additionally, the enhanced PoE budget of up to 1850W makes it effortless for network administrators to plan and deploy powered devices in ultra-high-density retail or warehouse environments for security and surveillance purposes.
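As a rough illustration of how the EPS122's per-port limits interact with its overall budget, the sketch below checks a planned deployment against the figures quoted above. The 8 x 90 W ports, 40 x 30 W ports, and 1850 W budget come from the release; the planning function itself is a hypothetical example, not vendor tooling.

```python
# Hypothetical sketch: checking a PoE port plan against the EPS122's
# stated budget (port counts and wattages taken from the press release).
POE_BUDGET_W = 1850   # total PoE budget of the switch
PORTS_90W = 8         # downlinks that can deliver up to 90 W each
PORTS_30W = 40        # downlinks that can deliver up to 30 W each

def within_budget(draw_90w_ports, draw_30w_ports):
    """Return (total_draw_watts, fits) for planned per-port draws.

    draw_90w_ports / draw_30w_ports: lists with one planned draw (in
    watts) per populated port of that class.
    """
    if len(draw_90w_ports) > PORTS_90W or len(draw_30w_ports) > PORTS_30W:
        raise ValueError("more ports requested than the switch provides")
    if any(w > 90 for w in draw_90w_ports) or any(w > 30 for w in draw_30w_ports):
        raise ValueError("per-port maximum exceeded")
    total = sum(draw_90w_ports) + sum(draw_30w_ports)
    return total, total <= POE_BUDGET_W

# Worst case: every port at its per-port maximum.
worst_case = 8 * 90 + 40 * 30                  # 720 + 1200 = 1920 W
print(worst_case, worst_case <= POE_BUDGET_W)  # 1920 False

# A plausible camera/AP mix fits comfortably.
total, ok = within_budget([60] * 8, [15] * 40)  # 480 + 600 = 1080 W
print(total, ok)                                # 1080 True
```

Note that the per-port maxima sum to 1920 W, slightly above the shared 1850 W budget, which is why the release emphasizes planning the powered-device layout rather than assuming every port can run at its ceiling simultaneously.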
The EPS121 model combines 48 x 1G downlinks and 6 x 10G uplinks with non-blocking capacity, and SONiC for a simplified and unified deployment across data and management planes, connecting to the 48 x 1G RJ-45 management ports of switches, servers, and storage devices per rack in a cloud or enterprise data center. With the EPS121 acting as a management switch and running the same SONiC stack as the ToR, leaf, and spine switches, the operation, management, monitoring, and control of the entire network is greatly simplified.

"Edgecore is dedicated to advancing open networking towards the edge of the enterprise," said Powen Tsai, Product Line Manager. "The EPS120 switches running SONiC software provide the performance required for access networks. And, by utilizing Edgecore’s cutting-edge and proven robust designs, enterprises are able to build networks that minimize operational expenses and total cost of ownership. At Edgecore, we continue to prioritize user experience first."

About Edgecore Networks

Edgecore Networks Corporation is a wholly owned subsidiary of Accton Technology Corporation, the leading network ODM. Edgecore Networks delivers wired and wireless networking products and solutions through channel partners and system integrators worldwide for data center, service provider, enterprise and SMB customers. Edgecore Networks is the leader in open networking, providing a full line of open 1G-400G Ethernet OCP Accepted™ switches, core routers, cell site gateways, virtual PON OLTs, packet transponders, and Wi-Fi access points that offer a choice of commercial and open-source NOS and SDN software.
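The "non-blocking" claim for the 48 x 1G plus 6 x 10G port layout can be sanity-checked with a quick oversubscription calculation. The port counts come from the release; the arithmetic below is an illustrative sketch, not vendor data.

```python
# Illustrative oversubscription check for the EPS121/EPS122 port layout
# (48 x 1G access downlinks, 6 x 10G uplinks, per the press release).
DOWNLINKS, DOWNLINK_GBPS = 48, 1
UPLINKS, UPLINK_GBPS = 6, 10

downlink_capacity = DOWNLINKS * DOWNLINK_GBPS  # 48 Gbps on the access side
uplink_capacity = UPLINKS * UPLINK_GBPS        # 60 Gbps toward the core

# Oversubscription ratio: access bandwidth contending for uplink bandwidth.
# A ratio <= 1 means even fully loaded access ports cannot oversubscribe
# the uplinks, i.e. the layout is non-blocking at the uplink boundary.
ratio = downlink_capacity / uplink_capacity
print(f"{downlink_capacity} G down / {uplink_capacity} G up -> ratio {ratio:.2f}")
```

With a ratio of 0.8, the uplinks have headroom even when all 48 access ports run at line rate, consistent with the non-blocking description.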


Hyper-Converged Infrastructure, Storage Management, IT Systems Management

Supermicro Launches Industry's First NVIDIA HGX H100 8-GPU and 4-GPU Servers with Liquid Cooling -- Reduces Data Center Power Costs by Up to 40%

Prnewswire | May 22, 2023

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to expand its data center offerings with liquid-cooled NVIDIA HGX H100 rack-scale solutions. Advanced liquid cooling technologies entirely from Supermicro reduce the lead time for a complete installation, increase performance, and result in lower operating expenses while significantly reducing the PUE of data centers. Power savings for a data center are estimated at 40% when using Supermicro liquid cooling solutions compared to an air-cooled data center. In addition, a reduction of up to 86% in direct cooling costs compared to existing data centers may be realized.

"Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centers worldwide," said Charles Liang, president and CEO of Supermicro. "Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centers. With the amount of power required to enable today's rapidly evolving large-scale AI models, optimizing TCO and the Total Cost to Environment (TCE) is crucial to data center operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack-scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployment, allowing us to meet our customers' requirements with a short lead time."

To learn more about Supermicro's GPU servers, visit: https://www.supermicro.com/en/products/gpu

AI-optimized racks with the latest Supermicro product families, including the Intel and AMD server product lines, can be quickly delivered from standard engineering templates or easily customized based on the user's unique requirements.
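The power-savings claims can be put in context with back-of-the-envelope PUE arithmetic. The 86% direct-cooling reduction is the release's figure; the baseline PUE of 1.6 and the IT load are hypothetical assumptions for illustration, and "overhead" here is simplified to cooling only (real facilities also lose power to UPS conversion, lighting, and other loads).

```python
# Back-of-the-envelope PUE arithmetic. PUE = total facility power / IT power.
IT_POWER_KW = 1000        # hypothetical IT load
BASELINE_PUE = 1.6        # assumed air-cooled facility (illustrative)
COOLING_REDUCTION = 0.86  # direct-cooling cut claimed for liquid cooling

baseline_total = IT_POWER_KW * BASELINE_PUE            # 1600 kW facility draw
cooling_power = baseline_total - IT_POWER_KW           # 600 kW of overhead
new_cooling = cooling_power * (1 - COOLING_REDUCTION)  # 84 kW remaining
new_total = IT_POWER_KW + new_cooling                  # 1084 kW facility draw

new_pue = new_total / IT_POWER_KW
facility_savings = 1 - new_total / baseline_total
print(f"PUE {BASELINE_PUE} -> {new_pue:.2f}, facility power cut {facility_savings:.0%}")
```

Under these assumptions the facility-level saving works out to roughly 32%, the same order of magnitude as the 40% figure estimated in the release (which presumably reflects different baseline assumptions).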
Supermicro continues to offer the industry's broadest product line with the highest-performing servers and storage systems to tackle complex compute-intensive projects. Rack-scale integrated solutions give customers the confidence and ability to plug the racks in, connect to the network, and become more productive sooner than managing the technology themselves.

The top-of-the-line liquid-cooled GPU server contains dual Intel or AMD CPUs and eight or four interconnected NVIDIA HGX H100 Tensor Core GPUs. Using liquid cooling reduces the power consumption of data centers by up to 40%, resulting in lower operating costs. In addition, both systems significantly surpass the previous generation of NVIDIA HGX GPU-equipped systems, providing up to 30x the performance and efficiency on today's large transformer models with faster GPU-GPU interconnect speed and PCIe 5.0-based networking and storage.

State-of-the-art eight NVIDIA H100 SXM5 Tensor Core GPU servers from Supermicro for today's largest-scale AI models include:

SYS-821GE-TNHR (Dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 8 GPUs, 8U) – https://www.supermicro.com/en/products/system/GPU/8U/SYS-821GE-TNHR

AS-8125GS-TNHR (Dual 4th Gen AMD EPYC CPUs, NVIDIA HGX H100 8 GPUs, 8U) – https://www.supermicro.com/en/products/system/GPU/8U/AS-8125GS-TNHR

Supermicro also designs a range of GPU servers customizable for fast AI training, vast-volume AI inferencing, or AI-fused HPC workloads, including systems with four NVIDIA H100 SXM5 Tensor Core GPUs:

SYS-421GU-TNXR (Dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 4 GPUs, 4U) – https://www.supermicro.com/en/products/system/GPU/4U/SYS-421GU-TNXR

SYS-521GU-TNXR (Dual 4th Gen Intel Xeon Scalable CPUs, NVIDIA HGX H100 4 GPUs, 5U) – https://www.supermicro.com/en/products/system/GPU/4U/SYS-521GU-TNXR

Supermicro's liquid cooling rack-level solution includes a Coolant Distribution Unit (CDU) that provides up to 80kW of direct-to-chip (D2C) cooling for today's highest-TDP CPUs and GPUs across a wide range of Supermicro servers. The redundant and hot-swappable power supply and liquid cooling pumps ensure that the servers are continuously cooled, even with a power supply or pump failure. The leak-proof connectors give customers the added confidence of uninterrupted liquid cooling for all systems. Learn more about the Supermicro liquid cooling system at: https://www.supermicro.com/en/solutions/liquid-cooling

Rack-scale design and integration has become a critical service for systems suppliers. As AI and HPC become increasingly critical technologies within organizations, configurations from the server level to the entire data center must be optimized and configured for maximum performance. Supermicro's system and rack-scale experts work closely with customers to explore their requirements and have the knowledge and manufacturing abilities to deliver significant numbers of racks to customers worldwide. Read the Supermicro Large Scale AI Solution Brief: https://www.supermicro.com/solutions/Solution-Brief_Rack_Scale_AI.pdf

Supermicro at ISC

To explore these technologies and meet with our experts, plan on visiting Supermicro Booth D405 at the ISC High Performance 2023 event in Hamburg, Germany, May 21–25, 2023.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions.
Founded and operating in San Jose, California, Supermicro is committed to delivering first to market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are transforming into a Total IT Solutions provider with server, AI, storage, IoT, and switch systems, software, and services while delivering advanced high-volume motherboard, power, and chassis products. The products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power and cooling solutions (air-conditioned, free air cooling or liquid cooling). Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

