Unprecedented Change Transforming Data Centers

The data center industry is in an era of transformational change, and that is genuinely new. Historically, these dedicated buildings and rooms -- originally designed to house mainframe computers -- saw little change for more than 40 years and were highly regulated by the US and other governments. As Winston Churchill once said, “There is nothing wrong with change, if it is in the right direction.” And change is upon us. Over the past few years, several strong forces have built up momentum that directly affects data center design and architecture in 2017 and beyond. The first is cloud computing: internet giants, dissatisfied with the slow deployment, high cost and limited scalability of legacy data centers, started designing their own. The second is the Internet of Things (IoT), which is forcing data centers to handle massive amounts of data and is pushing computational power and the delivery of high-bandwidth content closer to the user -- the network edge.

Spotlight

Cato Networks

Cato Networks provides organizations with a cloud-based and secure global SD-WAN. Cato delivers an integrated networking and security platform that securely connects all enterprise locations, people and data.

OTHER ARTICLES
Hyper-Converged Infrastructure

Network Security: The Safety Net in the Digital World

Article | October 3, 2023

Every business or organization has spent a lot of time and energy building its network infrastructure. The right resources have taken countless hours to establish, ensuring that the network offers connectivity, operation, management, and communication. Its complex hardware, software, service architecture, and strategies all work toward optimal and dependable use.

Securing that network is not a one-off project: it requires ongoing, consistent work, and the first step is to define a security strategy. The underlying architecture of your network should account for a range of implementation, upkeep, and continuous active procedures. Network infrastructure security requires a comprehensive strategy that includes best practices and continuing procedures to guarantee that the underlying infrastructure is always safe. A company's choice of security measures is determined by:

Appropriate legal requirements
Rules unique to the industry
The specific network and security needs

Security for network infrastructure has numerous significant advantages. For example, a business or institution can cut expenses, boost output, secure internal communications, and guarantee the security of sensitive data. Hardware, software, and services are vital, but they can all have flaws that unintentional or intentional acts can exploit. Security for network infrastructure is intended to provide sophisticated, comprehensive resources for defense against internal and external threats. Infrastructures are susceptible to attacks such as denial-of-service, ransomware, spam, and unauthorized access.

Implementing and maintaining a workable security plan for your network architecture can be challenging and time-consuming, and experts can help with this crucial, continuous process. A robust infrastructure lowers operational costs, boosts output, and protects sensitive data from hackers. While no security measure can prevent every attack attempt, network infrastructure security helps you lessen the effects of a cyberattack and ensures that your business is back up and running as soon as feasible.
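
The article's point that security is a set of continuing procedures rather than a one-time setup can be made concrete with a small script. The sketch below is illustrative only: the host inventory, allowlist and port list are assumed values, not anything from the article. It periodically checks which TCP ports actually answer on each host and flags anything the policy does not allow.

```python
"""Minimal sketch of one "continuing procedure": a periodic audit that flags
unexpected open TCP ports on infrastructure hosts. Hosts, ports and the
allowlist are illustrative assumptions."""

import socket

# Hypothetical inventory: host -> set of ports that are *supposed* to be open.
ALLOWED_PORTS = {
    "10.0.0.10": {22, 443},   # e.g. a management host
    "10.0.0.20": {443},       # e.g. a web front end
}

COMMON_PORTS = [21, 22, 23, 25, 80, 110, 139, 443, 445, 3389, 8080]


def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found


def audit(inventory):
    """Compare what is actually reachable with what policy allows."""
    for host, allowed in inventory.items():
        exposed = set(open_ports(host, COMMON_PORTS))
        unexpected = exposed - allowed
        if unexpected:
            print(f"[ALERT] {host}: unexpected open ports {sorted(unexpected)}")
        else:
            print(f"[OK]    {host}: only approved ports are reachable")


if __name__ == "__main__":
    audit(ALLOWED_PORTS)
```

In practice a check like this would feed a SIEM or ticketing system rather than print to the console, but the shape of the ongoing process is the same: define the policy, verify reality against it, and investigate the differences.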

Hyper-Converged Infrastructure

How NSPs Prepare to Thrive in the 5G Era

Article | September 14, 2023

In my last blog in this series, we looked at the present state of 5G. Although it’s still early and it’s impossible to fully comprehend the potential impact of 5G use cases that haven’t been built yet, opportunities to monetize 5G with little additional investment are out there for network service providers (NSPs) who know where to look. Now, it’s time to look toward the future. Anyone who’s been paying attention knows that 5G technology will be revolutionary across many industry use cases, but I’m not sure everyone understands just how revolutionary it will be, and how quickly it will happen. According to Gartner®, “While 10% of CSPs in 2020 provided commercializable 5G services, which could achieve multiregional availability, this number will increase to 60% by 2024”.[i] With so many recognizing the value of 5G and acting to capitalize on it, NSPs that fail to prepare for future 5G opportunities today are doing themselves and their enterprise customers a serious disservice. Preparing for a 5G future may seem daunting, but working with a trusted interconnection partner like Equinix can help make it easier. 5G is so challenging for NSPs and their customers precisely because it is so revolutionary. Mobile radio networks were built with consumer use cases in mind, which means the traffic from those networks is generally dumped straight to the internet. 5G is the first generation of wireless technology capable of supporting enterprise-class business applications, which means it’s also forcing many NSPs to consider alternatives to the public internet to support those applications.

User plane function breakout helps put traffic near the app

In my last article, I mentioned that one of the key steps mobile network operators (MNOs) could take to enable 5G monetization in the short term would be to bypass the public internet by enabling user traffic functions in the data center. This is certainly a step in the right direction, but to prepare themselves for future 5G and multicloud opportunities, they must go further by enabling user plane function (UPF) breakout. The 5G opportunities of tomorrow will rely on wireless traffic residing as close as possible to business applications, to reduce the distance data must travel and keep latency as low as possible. This is a similar challenge to the one NSPs faced in the past with their wireline networks. To address that challenge, they typically deployed virtual network functions (VNFs) on their own equipment. This helped them get the network capabilities they needed, when and where they needed them, but it also required them to buy colocation capacity and figure out how to interconnect their VNFs with the rest of their digital infrastructure. Instead, Equinix customers have the option to do UPF breakout with Equinix Metal®, our automated bare-metal-as-a-service offering, or Network Edge virtual network services on Platform Equinix®. Both options provide a simple, cost-effective way to get the edge infrastructure needed to support 5G business applications. Since both offerings are integrated with Equinix Fabric™, they allow NSPs to create secure software-defined interconnection with a rich ecosystem of partners. This streamlines the process of setting up hybrid deployments. Working with Equinix can help make UPF breakout less daunting.
Instead of investing massive amounts of money to create 5G-ready infrastructure everywhere they need it, they can take advantage of more than 235 Equinix International Business Exchange™ (IBX®) data centers spread across 65 metros in 27 countries on five continents. This allows them to shift from a potentially debilitating up-front CAPEX investment to an OPEX investment spread over time, making the economics around 5G infrastructure much more manageable.

Support MEC with a wide array of partners

Multiaccess edge compute (MEC) will play a key role in enabling advanced 5G use cases, but first enterprises need a digital infrastructure capable of supporting it. This gets more complicated when they need to modernize their infrastructure while maintaining existing application-level partnerships. To put it simply, NSPs and their enterprise customers need an infrastructure provider that can not only partner with them, but also partner with their partners. With Equinix Metal, organizations can deploy the physical infrastructure they need to support MEC at software speed, while also supporting capabilities from a diverse array of partners. For instance, Equinix Metal provides support for Google Anthos, Amazon Elastic Container Service (ECS) Anywhere and Amazon Elastic Kubernetes Service (EKS) Anywhere. These are just a few examples of how Equinix interconnection offerings make it easier to collaborate with leading cloud providers to deploy MEC-driven applications.

Provision reliable network slicing in a matter of minutes

Network slicing is another important 5G capability that can help NSPs differentiate their offerings and unlock new business opportunities. On the surface, it sounds simple: slicing up network traffic into different classes of service, so that the most important traffic is optimized for factors such as high throughput, low latency and security. However, NSPs won’t always know exactly what slices their customers will want to send or where they’ll want to send them, making network slice mapping a serious challenge. Equinix Fabric offers a quicker, more cost-effective way to map network slices, with no need for cross connects to be set up on the fly. With software-defined interconnection, the counterparty that receives the network slice essentially becomes an automated function that NSPs can easily control. This means NSPs can provision network slicing in a matter of minutes, not days, even when they don’t know who the counterparty is going to be. Service automation enabled by Equinix Fabric can be a critical element of an NSP’s multidomain orchestration architecture.

5G use case: Reimagining the live event experience

As part of the MEF 3.0 Proof of Concept showcase, Equinix partnered with Spectrum Enterprise, Adva, and Juniper Networks to create a proof of concept (PoC) for a differentiated live event experience. The PoC showed how event promoters such as minor league sports teams could ingest multiple video feeds into an AI/ML-driven GPU farm that lives in an Equinix facility, and then process those feeds to present fans with custom content on demand. With the help of network slicing and high-performance MEC, fans can build their own unique experience of the event, looking at different camera angles or following a particular player throughout the game.
Event promoters can offer this personalized experience even without access to the on-site data centers that are more common in major league sports venues.

DISH taps Equinix for digital infrastructure services in support of 5G rollout

As DISH looks to build out the first nationwide 5G network in the U.S., they will partner with Equinix to gain access to critical digital infrastructure services in our IBX data centers. This is a great example of how Equinix is equipped to help its NSP partners access the modern digital infrastructure needed to capitalize on 5G—today and into the future.

“DISH is taking the lead in delivering on the promise of 5G in the U.S., and our partnership with Equinix will enable us to secure critical interconnections for a nationwide 5G network. With proximity to large population centers, as well as network and cloud density, Equinix is the right partner to connect our cloud-native 5G network.” - Jeff McSchooler, DISH executive vice president of wireless network operations
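
To make the network-slicing discussion above a little more tangible, here is a minimal, hypothetical sketch of what "provisioning a slice in minutes" through software-defined interconnection might look like from an NSP's automation code. The REST endpoint, payload fields and credential below are invented placeholders for illustration; this is not the actual Equinix Fabric API.

```python
"""Illustrative sketch only: mapping a 5G network slice to an on-demand virtual
connection through a software-defined interconnection API. Endpoint, fields
and token are hypothetical placeholders."""

import requests

API_BASE = "https://api.example-interconnect.net/v1"   # hypothetical endpoint
TOKEN = "<access-token>"                               # placeholder credential


def provision_slice_connection(slice_name, bandwidth_mbps, source_port_id, destination_id):
    """Request a virtual connection for one network slice and return its ID.

    The counterparty (destination) can be chosen at call time, which is the
    point the article makes: slices can be mapped through software in minutes,
    without physical cross connects being set up on the fly.
    """
    payload = {
        "name": f"slice-{slice_name}",
        "bandwidth": bandwidth_mbps,               # e.g. a guaranteed rate for a low-latency slice
        "aSide": {"portId": source_port_id},       # the NSP's own port
        "zSide": {"profileId": destination_id},    # cloud, MEC site or partner
    }
    resp = requests.post(
        f"{API_BASE}/connections",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


if __name__ == "__main__":
    # Example: a latency-sensitive slice for a live-event video workload.
    conn_id = provision_slice_connection("live-event-video", 500, "port-123", "gpu-farm-profile")
    print("provisioned virtual connection:", conn_id)
```

The design point is that the receiving side of the slice is just another software-addressable function, so the same call works whether the counterparty is a cloud on-ramp, a GPU farm or another NSP.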

Hyper-Converged Infrastructure, IT Systems Management

Transforming Data Management by Modernized Storage Solutions Using HCI

Article | September 14, 2023

Revolutionize data management with HCI: Unveil the modernized storage solutions and implementation strategies for enhanced efficiency, scalability, sustainable growth and future-ready performance.

Contents
1. Introduction to Modernized Storage Solutions and HCI
2. Software-Defined Storage in HCI
3. Benefits of Modern Storage HCI in Data Management
3.1 Data Security and Privacy in HCI Storage
3.2 Data Analytics and Business Intelligence Integration
3.3 Hybrid and Multi-Cloud Data Management
4. Implementation Strategies for Modern Storage HCI
4.1 Workload Analysis
4.2 Software-Defined Storage
4.3 Advanced Networking
4.4 Data Tiering and Caching
4.5 Continuous Monitoring and Optimization
5. Future Trends in HCI Storage and Data Management

1. Introduction to Modernized Storage Solutions and HCI

Modern businesses face escalating data volumes, necessitating efficient and scalable storage solutions. Modernized storage solutions, such as HCI, integrate computing, networking, and storage resources into a unified system, streamlining operations and simplifying data management. By embracing modernized storage solutions and HCI, organizations can unlock numerous benefits, including enhanced agility, simplified management, improved performance, robust data protection, and optimized costs. As technology evolves, leveraging these solutions will be instrumental in achieving competitive advantages and future-proofing the organization's IT infrastructure.

2. Software-Defined Storage in HCI

By embracing software-defined storage in HCI, organizations can benefit from simplified storage management, scalability, improved performance, cost efficiency, and seamless integration with hybrid cloud environments. These advantages empower businesses to optimize their storage infrastructure, increase agility, and effectively manage growing data demands, ultimately driving success in the digital era. Software-defined storage in HCI replaces traditional, hardware-based storage arrays with virtualized storage resources managed through software. This centralized approach simplifies data storage management, allowing IT teams to allocate and oversee storage resources efficiently. With software-defined storage, organizations can seamlessly scale their storage infrastructure as needed without the complexities associated with traditional hardware setups. By abstracting storage from physical hardware, software-defined storage brings greater agility and flexibility to the storage infrastructure, enabling organizations to adapt quickly to changing business demands. Software-defined storage in HCI also empowers organizations with seamless data mobility, allowing for the smooth movement of workloads and data across various infrastructure environments, including private and public clouds. This flexibility enables organizations to implement hybrid cloud strategies, leveraging the advantages of both on-premises and cloud environments. With software-defined storage, data migration, replication, and synchronization between different data storage locations become simplified tasks. This simplification enhances data availability and accessibility, facilitating efficient data management across different storage platforms and enabling organizations to make the most of their hybrid cloud deployments.

3. Benefits of Modern Storage HCI in Data Management

Software-defined storage HCI simplifies hybrid and multi-cloud data management.
Its single platform lets enterprises easily move workloads and data between on-premises infrastructure, private clouds, and public clouds. The centralized management interface of software-defined storage HCI ensures comprehensive data governance, unifies control, ensures compliance, and improves visibility across the data management ecosystem, complementing this flexibility and scalability optimization.

3.1 Data Security and Privacy in HCI Storage

Modern software-defined storage HCI solutions provide robust data security measures, including encryption, access controls, and secure replication. By centralizing storage management through software-defined storage, organizations can implement consistent security policies across all storage resources, minimizing the risk of data breaches. HCI platforms offer built-in features such as snapshots, replication, and disaster recovery capabilities, ensuring data integrity, business continuity, and resilience against potential threats.

3.2 Data Analytics and Business Intelligence Integration

These HCI platforms seamlessly integrate with data analytics and business intelligence tools, enabling organizations to gain valuable insights and make informed decisions. By consolidating storage, compute, and analytics capabilities, HCI minimizes data movement and latency, enhancing the efficiency of data analysis processes. The scalable architecture of software-defined storage HCI supports processing large data volumes, accelerating data analytics, predictive modeling, and facilitating data-driven strategies for enhanced operational efficiency and competitiveness.

3.3 Hybrid and Multi-Cloud Data Management

Software-defined storage HCI simplifies hybrid and multi-cloud data management by providing a unified platform for seamless data movement across different environments. Organizations can easily migrate workloads and data between on-premises infrastructure, private clouds, and public clouds, optimizing flexibility and scalability. The centralized management interface of software-defined storage HCI enables consistent data governance, ensuring control, compliance, and visibility across the entire data management ecosystem.

4. Implementation Strategies for Modern Storage Using HCI

4.1 Workload Analysis

A comprehensive workload analysis is essential before embarking on an HCI implementation journey. Start by thoroughly assessing the organization's workloads, delving into factors like application performance requirements, data access patterns, and peak usage times. Prioritize workloads based on their criticality to business operations, ensuring that those directly impacting revenue or customer experiences are addressed first.

4.2 Software-Defined Storage

Software-defined storage (SDS) offers flexibility and abstraction of storage resources from hardware. SDS solutions are often vendor-agnostic, enabling organizations to choose storage hardware that aligns best with their needs. Scalability is a hallmark of SDS, as it can easily adapt to accommodate growing data volumes and evolving performance requirements. Adopt SDS for a wide range of data services, including snapshots, deduplication, compression, and automated tiering, all of which enhance storage efficiency.

4.3 Advanced Networking

Leverage Software-Defined Networking technologies within the HCI environment to enhance agility, optimize network resource utilization, and support dynamic workload migrations.
Implementing network segmentation allows organizations to isolate different workload types or security zones within the HCI infrastructure, bolstering security and compliance. Quality of Service (QoS) controls come into play to prioritize network traffic based on specific application requirements, ensuring optimal performance for critical workloads.

4.4 Data Tiering and Caching

Intelligent data tiering and caching strategies play a pivotal role in optimizing storage within the HCI environment. These strategies automate the movement of data between different storage tiers based on usage patterns, ensuring that frequently accessed data resides on high-performance storage while less-accessed data is placed on lower-cost storage. Caching techniques, such as read and write caching, accelerate data access by storing frequently accessed data on high-speed storage media. Consider hybrid storage configurations, combining solid-state drives (SSDs) for caching and traditional hard disk drives (HDDs) for cost-effective capacity storage.

4.5 Continuous Monitoring and Optimization

Implement real-time monitoring tools to provide visibility into the HCI environment's performance, health, and resource utilization, allowing IT teams to address potential issues proactively. Predictive analytics come into play to forecast future resource requirements and identify potential bottlenecks before they impact performance. Resource balancing mechanisms automatically allocate compute, storage, and network resources to workloads based on demand, ensuring efficient resource utilization. Continuous capacity monitoring and planning help organizations avoid resource shortages in anticipation of future growth.

5. Future Trends in HCI Storage and Data Management

Modernized storage solutions using HCI have transformed data management practices, revolutionizing how organizations store, protect, and utilize their data. HCI offers a centralized and software-defined approach to storage, simplifying management, improving scalability, and enhancing operational efficiency. The abstraction of storage from physical hardware grants organizations greater agility and flexibility in their storage infrastructure, adapting to evolving business needs. With HCI, organizations implement consistent security policies across their storage resources, reducing the risk of data breaches and ensuring data integrity. This flexibility empowers organizations to optimize resource utilization and scale as needed, which drives informed decision-making, improves operational efficiency, and fosters data-driven strategies for organizational growth. The future of Hyper-Converged Infrastructure storage and data management promises exciting advancements that will revolutionize the digital landscape. As edge computing gains momentum, HCI solutions will adapt to support edge deployments, enabling organizations to process and analyze data closer to the source. Composable infrastructure will enable organizations to build flexible and adaptive IT infrastructures, dynamically allocating compute, storage, and networking resources as needed. Data governance and compliance will be paramount, with HCI platforms providing robust data classification, encryption, and auditability features to ensure regulatory compliance. Optimized hybrid and multi-cloud integration will enable seamless data mobility, empowering organizations to leverage the benefits of different cloud environments.
By embracing these, organizations can unlock the full potential of HCI storage and data management, driving innovation and achieving sustainable growth in the ever-evolving digital landscape.
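
As a small illustration of the automated data-tiering idea in section 4.4 above, the sketch below demotes files that have not been accessed recently from a fast tier to a capacity tier. The mount paths and the one-week threshold are assumptions made for illustration; real HCI platforms perform this movement transparently inside the storage layer rather than with a user-level script.

```python
"""Minimal sketch of automated data tiering: files idle for longer than a
threshold are moved from a fast (SSD-backed) tier to a capacity (HDD-backed)
tier. Paths and threshold are illustrative assumptions."""

import shutil
import time
from pathlib import Path

HOT_TIER = Path("/mnt/ssd/hot")      # hypothetical high-performance tier
COLD_TIER = Path("/mnt/hdd/cold")    # hypothetical low-cost capacity tier
MAX_IDLE_SECONDS = 7 * 24 * 3600     # demote after a week without access


def demote_idle_files(hot, cold, max_idle):
    """Move files whose last access time is older than `max_idle` seconds."""
    cold.mkdir(parents=True, exist_ok=True)
    now = time.time()
    moved = 0
    for path in list(hot.rglob("*")):            # snapshot the tree before moving anything
        if path.is_file() and now - path.stat().st_atime > max_idle:
            target = cold / path.relative_to(hot)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))  # preserve the directory layout in the cold tier
            moved += 1
    return moved


if __name__ == "__main__":
    count = demote_idle_files(HOT_TIER, COLD_TIER, MAX_IDLE_SECONDS)
    print(f"demoted {count} idle files to the capacity tier")
```

The same access-pattern signal drives caching in the opposite direction: data that suddenly becomes hot is promoted back to the fast tier or pinned in a read cache.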

Application Infrastructure

Securing the 5G edge

Article | November 11, 2021

The rollout of 5G networks coupled with edge compute introduces new security concerns for both the network and the enterprise. Security at the edge presents a unique set of challenges that differ from those faced by traditional data centers. Today, new concerns emerge from the combination of distributed architectures and a disaggregated network, creating new challenges for service providers. Many mission critical applications enabled by 5G connectivity, such as smart factories, are better off hosted at the edge because it's more economical and delivers better Quality of Service (QoS). However, applications must also be secured; communication service providers need to ensure that applications operate in an environment that is both safe and provides isolation. This means that secure designs and protocols are in place to pre-empt threats, avoid incidents and minimize response time when incidents do occur. As enterprises adopt private 5G networks to drive their Industry 4.0 strategies, these new enterprise 5G trends demand a new approach to security. Companies must find ways to reduce their exposure to cyberattacks that could potentially disrupt mission critical services, compromise industrial assets and threaten the safety of their workforce. Cybersecurity readiness is essential to ensure private network investments are not devalued. The 5G network architecture, particularly at the edge, introduces new levels of service decomposition, now evolving beyond the virtual machine and into the space of orchestrated containers. Such disaggregation requires the operation of a layered technology stack, from the physical infrastructure to resource abstraction, container enablement and orchestration, all of which present attack surfaces that require addressing from a security perspective. So how can CSPs protect their network and services from complex and rapidly growing threats?

Addressing vulnerability points of the network layer by layer

As networks grow and the number of connected nodes at the edge multiplies, so do the vulnerability points. The distributed nature of the 5G edge increases vulnerability threats, just by having network infrastructure scattered across tens of thousands of sites. The arrival of the Internet of Things (IoT) further complicates the picture: with a greater number of connected and mobile devices, potentially creating new network bridging connection points, questions around network security have become more relevant. As the integrity of the physical site cannot be guaranteed in the same way as a supervised data center, additional security measures need to be taken to protect the infrastructure. Transport and application control layers also need to be secured, to enable forms of "isolation" preventing a breach from propagating to other layers and components. Each layer requires specific security measures to ensure overall network security: use of Trusted Platform Module (TPM) chipsets on motherboards, a UEFI secure OS boot process, secure connections in the control plane and more. These measures all contribute to, and are an integral part of, an end-to-end network security design and strategy.

Open RAN for a more secure solution

The latest developments in open RAN and the collaborative standards-setting process related to open interfaces and supply chain diversification are enhancing the security of 5G networks. This is happening for two reasons.
First, traditional networks are built using vendor-proprietary technology: a limited number of vendors dominate the telco equipment market and create vendor lock-in for service providers, forcing them to also rely on those vendors' proprietary security solutions. This in turn prevents the adoption of "best-of-breed" solutions and slows innovation and speed of response, potentially amplifying the impact of a security breach. Second, open RAN standardization initiatives employ a set of open-source, standards-based components. This has a positive effect on security, as the design embedded in components is openly visible and understood; vendors can then contribute to such open-source projects where tighter security requirements need to be addressed. Aside from the inherent security of the open-source components, open RAN defines a number of open interfaces which can be individually assessed in their security aspects. The openness intrinsic to open RAN means that service components can be seamlessly upgraded or swapped, both to introduce more stringent security characteristics and to swiftly address identified vulnerabilities.

Securing network components with AI

Monitoring the status of myriad network components, and particularly spotting a security attack taking place among a multitude of cooperating application functions, requires resources that transcend the capabilities of a finite team of human operators. This is where advances in AI technology can help to augment the abilities of operations teams. AI massively scales the ability to monitor any number of KPIs, learn their characteristic behavior and identify anomalies, which makes it the ideal companion in the secure operation of the 5G edge. The self-learning aspect of AI supports not just the identification of known incident patterns but also the ability to learn about new, unknown and unanticipated threats.

Security by design

Security needs to be integral to the design of the network architecture and its services. The adoption of open standards caters to the definition of security best practices in both the design and operation of the new 5G network edge. The analytics capabilities embedded in edge hyperconverged infrastructure components provide the platform on which to build an effective monitoring and troubleshooting toolkit, ensuring the secure operation of the intelligent edge.
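
The KPI-monitoring idea described above can be illustrated with a deliberately simple anomaly detector: learn the recent behavior of a metric and flag readings that deviate sharply from it. The rolling z-score below is a toy stand-in for the self-learning models the article refers to, and the KPI values are invented for the example.

```python
"""Toy sketch of KPI anomaly detection: flag samples that sit far outside the
recent rolling mean. Real 5G-edge deployments use far richer models; the
values here are invented."""

from collections import deque
from statistics import mean, stdev


def detect_anomalies(samples, window=20, threshold=4.0):
    """Yield (index, value) for samples more than `threshold` standard
    deviations away from the rolling mean of the previous `window` samples."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)


if __name__ == "__main__":
    # Simulated KPI stream (e.g. control-plane requests per second) with one spike.
    kpi = [100 + (i % 5) for i in range(60)] + [450] + [100 + (i % 5) for i in range(20)]
    for idx, val in detect_anomalies(kpi):
        print(f"sample {idx}: value {val} flagged as anomalous")
```

The point of the article stands even in this toy form: once the "normal" behavior of each KPI is learned automatically, the same mechanism scales to thousands of components and surfaces the unknown-unknowns a human team would miss.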

Related News

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise’s data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics. To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, “the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production.” “By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%.”

“Alluxio empowers the world’s leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward,” said Haoyuan Li, Founder and CEO, Alluxio. “Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management.”

With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-gen version of Alluxio Enterprise Edition, and will continue to be the ideal choice for businesses focused primarily on analytic workloads.

Accelerating End-to-End Machine Learning Pipeline

Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. Alluxio Enterprise AI helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment to yield quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet giant’s scale; and maximized return on investments by working with existing tech stack instead of costly specialized storage.
With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure. Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing massive numbers of small files, typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics. Finally, it supports analytics and full machine learning pipelines - from ingestion to ETL, pre-processing, training and serving.

Alluxio Enterprise AI includes the following key features:

Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise’s existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference.

Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI’s distributed caching feature enables AI engines to read and write data through the high performance Alluxio cache instead of slow data lake storage. Alluxio’s intelligent caching strategies are tailored to the I/O patterns of AI engines – large file sequential access, large file random access, and massive small file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.

Seamless Data Access for AI Workloads Across On-prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to manage AI workloads across diverse infrastructure environments easily. Providing a source of truth of data for the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.

New Distributed System Architecture, Battle-tested At Scale - Alluxio Enterprise AI builds on a new innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation to provide infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio’s proven expertise in distributed systems, this new architecture has addressed the ever-increasing challenges of system scalability, metadata management, high availability, and performance.

“Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses,” said Mike Leone, Analyst, Enterprise Strategy Group.
“Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving.”

“We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure,” said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. “Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry.”

“Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement,” said Mengyu Hu, Software Engineer in the data platform team, Zhihu. “In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we’ve significantly enhanced model training performance by 3x and deployment by 10x with GPU utilization doubled. We are excited about Alluxio’s Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave.”

Deploying Alluxio in Machine Learning Pipelines

According to Gartner, data accessibility and data volume/complexity is one of the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to the existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting in the middle of compute and storage, Alluxio can work across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader will load from the Alluxio cache instead. During training, the cached datasets will be used in multiple epochs, so the entire training speed is no longer bottlenecked by retrieving from S3. In this way, Alluxio speeds up training by shortening data loading and eliminates GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.

Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to inference clusters with low latency. As a result, downstream AI applications can start inferencing using the most up-to-date models as soon as they are available.

Platform Integration with Existing Systems

To integrate Alluxio with the existing platform, users can deploy an Alluxio cluster between compute engines and storage systems. On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray. Enterprises can integrate Alluxio with these compute frameworks via REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems or object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio works both on-premises and in the cloud, in either bare-metal or containerized environments.
Supported cloud platforms include AWS, GCP and Azure Cloud.
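
As a rough illustration of the model-training flow described above, the hedged PyTorch sketch below reads its dataset from the Alluxio FUSE mount (/mnt/alluxio_fuse/training_datasets, the path cited in the article) instead of going to S3 directly. The dataset layout (an image-classification folder tree), the transform and the batch size are assumptions made for illustration, not part of the announcement.

```python
"""Hedged sketch: to the PyTorch data loader, the Alluxio FUSE mount is just a
local directory, so cache hits mean epochs after the first are not bottlenecked
by S3 round trips. Dataset layout and hyperparameters are assumptions."""

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

ALLUXIO_PATH = "/mnt/alluxio_fuse/training_datasets"  # virtual local path backed by the Alluxio cache

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: class-labelled image folders under the mount point.
dataset = datasets.ImageFolder(ALLUXIO_PATH, transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=8)

device = "cuda" if torch.cuda.is_available() else "cpu"

for epoch in range(3):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        # ... forward pass, loss and backward pass of the user's own model go here ...
        pass

# After training, model files written under the same mount are persisted back
# to S3 through Alluxio, e.g.:
# torch.save(model.state_dict(), "/mnt/alluxio_fuse/models/latest.pt")
```

The serving side mirrors this: inference instances read the newest model files through the same cache layer instead of each pulling them from S3.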

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to further lead the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms. Intel DCM gives data centers the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces the total cost of ownership, improves sustainability, and elevates performance benchmarks.

“We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management,” says Sanjoy Maity, CEO at AMI. “AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives.”

About AMI

AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.

Data Storage

CoolIT Systems Partners with Switch Datacenters to Launch Advanced Energy-Efficient Data Centers

PRWeb | October 12, 2023

CoolIT Systems, a global leader in advanced cooling technology, and Switch Datacenters, a leading sustainable data center operator and developer, are thrilled to unveil a strategic partnership that will benefit an industry seeking to improve the sustainability of data centers. Following the recent release of the World Economic Forum's Top 10 Emerging Technologies featuring "Sustainable Computing" as the 9th-ranked emerging technology, the collaboration between Switch Datacenters and CoolIT provides data center space and the technology needed to significantly curtail the energy and water consumption inherent in modern data centers. With a history spanning more than a decade, Switch Datacenters has consistently demonstrated a commitment to environmental responsibility and sustainability. Their latest 45MW AMS6 data center near the Schiphol airport area features an HPC/AI-ready design that uses data center heat to warm adjacent greenhouses. Currently under development, their AMS5s is designed to make a significant contribution to the Amsterdam municipal heat grid with green, CO2-neutral heat. For both data centers, there's a marked preference for liquid cooling because it allows heat extraction at temperatures higher than traditional air cooling, offering enhanced economic value. CoolIT Systems is the industry-leading provider of efficient Direct Liquid Cooling (DLC) and Rear Door Heat Exchangers (RDHx) that enable heat reuse and help customers meet key Environmental, Social, and Governance (ESG) targets. CoolIT DLC technology is available as a factory-installed, warranty-approved feature from most major server OEMs.

"CoolIT's DLC and RDHx technologies have been instrumental in various data center heat reuse projects for years, with customers reporting at minimum a savings of 10% on energy bills (OPEX), more than 50% on CAPEX spends, and examples of PUE lowered from 1.30 to 1.02," expressed Peggy Burroughs, Director of CoolIT Next. "Our collaborations with most major server OEMs have cultivated an expansive ecosystem for clients aspiring to achieve both business and ESG goals."

"CoolIT is the right company to help make our vision a reality at an industrial scale. Both CoolIT and Switch Datacenters have shared the same passion for sustainable innovation for years and truly want to elevate the industry's adoption of liquid cooling. We believe liquid cooling will be the game-changer in the next wave of sustainable data center designs, and CoolIT is one of the very few companies that can lead this upcoming demand, thanks to their long history of innovation, reliability, breadth of portfolio, and capabilities to scale with their numerous IT partners worldwide," says Gregor Snip, CEO of Switch Datacenters.

Data centers are projected to account for 8% of global electricity consumption by 2030. Technologies such as Direct Liquid Cooling can significantly reduce data center energy consumption by 25-40% and deliver water savings of 70-97%, depending on local climate and specific implementations. Switch Datacenters is leading the charge in embracing sustainable alternatives for heating by reusing data center-generated heat. With their latest project, Switch Datacenters AMS6, they will revolutionize the way nearby greenhouses are heated by providing high-temperature heat from their data center. This innovative solution will replace traditional fossil fuel-based heating and contribute to a greener future.
By harnessing the power of IT servers to generate green heat for large-scale crop cultivation, Switch Datacenters is driving the transition away from fossil fuels. They strongly advocate for the integration of heat-recapture-enabled data centers in areas with high demand for heat, making it a standard design principle. With the world calling for sustainable IT and data centers, the time is ripe for this much-needed change. With the combined expertise of CoolIT and Switch Datacenters, customers can now harness technologically advanced solutions that not only result in considerable energy and water savings but also contribute significantly to the global drive for reduced environmental impact, aligning with the United Nations Sustainable Development Goals of Affordable and Clean Energy (SDG 7), Industry, Innovation, and Infrastructure (SDG 9), and Climate Action (SDG 13).

About CoolIT Systems

CoolIT Systems is renowned for its scalable liquid cooling solutions tailored for the world's most challenging computing contexts. In both enterprise data centers and high-performance computing domains, CoolIT collaborates with global OEM server design leaders, formulating efficient and trustworthy liquid cooling solutions. In the desktop enthusiast arena, CoolIT delivers unmatched performance for a diverse range of gaming setups. Their modular Direct Liquid Cooling technology, Rack DLC™, empowers dramatic spikes in rack densities, component efficacy, and power savings. Jointly, CoolIT and its allies are pioneering the large-scale adoption of sophisticated cooling techniques.

About Switch Datacenters

Switch Datacenters is a Dutch privately-owned data center operator and developer founded in 2010 by Gregor Snip and his brother. Initially established as a private data center for their successful hosting company, the Amsterdam-based company later expanded into a fully-fledged commercial data center operator. It added several highly efficient and environmentally-friendly data center sites to its portfolio, with a current focus on constructing and managing wholesale data centers for large global customers while also providing tailor-made data center services. Switch Datacenters is an ambitious, 100% Dutch player in the Amsterdam data center sector, experiencing rapid growth by continually partnering with leading and globally recognized industry players and customers. The company maintains a steadfast commitment to innovative and sustainable site development. Currently, Switch Datacenters has over 200MW of new sustainable data center capacity in development. This year, it will launch its flagship sustainable data center, AMS4, with major customers having already pre-leased the 15-18MW facility.
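
A quick back-of-the-envelope check shows why the PUE improvement quoted above (from 1.30 to 1.02) matters. PUE is total facility energy divided by IT energy, so the non-IT overhead is (PUE - 1) times the IT load; the 10 MW IT load below is an invented example figure used only to make the arithmetic concrete.

```python
"""Worked example of the quoted PUE figures (1.30 -> 1.02), assuming the IT
load itself stays constant at an invented 10 MW."""

IT_LOAD_MW = 10.0        # assumed constant IT load
PUE_BEFORE = 1.30
PUE_AFTER = 1.02

total_before = IT_LOAD_MW * PUE_BEFORE          # 13.0 MW drawn from the grid
total_after = IT_LOAD_MW * PUE_AFTER            # 10.2 MW
overhead_before = total_before - IT_LOAD_MW     # 3.0 MW of cooling/power overhead
overhead_after = total_after - IT_LOAD_MW       # 0.2 MW

facility_saving = 1 - total_after / total_before        # ~21.5% of total energy
overhead_saving = 1 - overhead_after / overhead_before  # ~93% of the overhead

print(f"total draw: {total_before:.1f} MW -> {total_after:.1f} MW ({facility_saving:.1%} saved)")
print(f"overhead:   {overhead_before:.1f} MW -> {overhead_after:.1f} MW ({overhead_saving:.1%} saved)")
```

Under these assumptions roughly a fifth of the facility's total draw, and over 90% of its cooling and power-conversion overhead, disappears, before counting any value recovered by reusing the heat.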

Read More

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise’s data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics. To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, “the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production.” “By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in the operationalizing AI models by at least 25%.” Alluxio empowers the world’s leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward, said Haoyuan Li, Founder and CEO, Alluxio. Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management. With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-gen version of Alluxio Enterprise Edition, and will continue to be the ideal choice for businesses focused primarily on analytic workloads. Accelerating End-to-End Machine Learning Pipeline Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. Alluxio Enterprise AI helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment to yield quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet giant’s scale; and maximized return on investments by working with existing tech stack instead of costly specialized storage. 
With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure. Alluxio Enterprise AI has a distributed system architecture with decentralized metadata that eliminates bottlenecks when accessing the massive numbers of small files typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. The distributed cache is tailored to the I/O patterns of AI workloads, unlike traditional analytics. Finally, it supports analytics and full machine learning pipelines, from ingestion to ETL, pre-processing, training and serving.
Alluxio Enterprise AI includes the following key features:
Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise’s existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference.
Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI’s distributed caching feature enables AI engines to read and write data through the high-performance Alluxio cache instead of slow data lake storage. Alluxio’s intelligent caching strategies are tailored to the I/O patterns of AI engines: large file sequential access, large file random access, and massive small file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.
Seamless Data Access for AI Workloads Across On-prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to easily manage AI workloads across diverse infrastructure environments. By providing a single source of truth for data across the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.
New Distributed System Architecture, Battle-tested At Scale - Alluxio Enterprise AI builds on a new, innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation for infinite scale for AI workloads, allowing an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio’s proven expertise in distributed systems, the new architecture addresses the ever-increasing challenges of system scalability, metadata management, high availability, and performance.
“Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses,” said Mike Leone, Analyst, Enterprise Strategy Group. “Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving.”
“We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure,” said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. “Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry.”
“Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement,” said Mengyu Hu, Software Engineer in the data platform team, Zhihu. “In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we’ve significantly enhanced model training performance by 3x and deployment by 10x, with GPU utilization doubled. We are excited about Alluxio’s Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave.”
Deploying Alluxio in Machine Learning Pipelines
According to Gartner, data accessibility and data volume/complexity are among the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to an existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio works across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake (a minimal sketch of this setup follows below):
Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path, /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader reads from the Alluxio cache. During training, the cached datasets are reused across multiple epochs, so training speed is no longer bottlenecked by retrieval from S3. In this way, Alluxio speeds up training by shortening data loading, eliminating GPU idle time and increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.
Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to the inference clusters with low latency. As a result, downstream AI applications can start inferencing with the most up-to-date models as soon as they are available.
Platform Integration with Existing Systems
To integrate Alluxio with an existing platform, users deploy an Alluxio cluster between the compute engines and the storage systems. On the compute side, Alluxio integrates seamlessly with popular machine learning frameworks such as PyTorch, Apache Spark, TensorFlow and Ray; enterprises can connect these frameworks to Alluxio via the REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems and object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio works both on-premises and in the cloud, in either bare-metal or containerized environments. Supported cloud platforms include AWS, GCP and Microsoft Azure.
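To make the training and serving steps above concrete, the following is a minimal sketch of a PyTorch job that reads its datasets through an Alluxio FUSE mount and writes the trained model back through the same mount. It assumes the /mnt/alluxio_fuse mount point from the example above; the dataset format, model, and file names are hypothetical placeholders for illustration, not part of the Alluxio product.

```python
# Minimal sketch: PyTorch training that reads data through an Alluxio FUSE
# mount instead of hitting S3 directly. Paths follow the article's example
# (/mnt/alluxio_fuse/training_datasets); the tensor layout, model, and file
# naming below are hypothetical placeholders.
import glob
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

MOUNT = "/mnt/alluxio_fuse"                       # Alluxio FUSE mount point
TRAIN_DIR = f"{MOUNT}/training_datasets"          # cached view of the S3 data lake
MODEL_OUT = f"{MOUNT}/models/latest.pt"           # written back to S3 via Alluxio


class CachedFileDataset(Dataset):
    """Loads one pre-serialized (features, label) pair per file.

    Reads go through the local FUSE path, so repeated epochs are served from
    the Alluxio cache rather than from S3.
    """

    def __init__(self, root: str):
        self.files = sorted(glob.glob(f"{root}/*.pt"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        sample = torch.load(self.files[idx])      # hits the Alluxio cache, not S3
        return sample["features"], sample["label"]


def train(epochs: int = 3) -> None:
    dataset = CachedFileDataset(TRAIN_DIR)
    # num_workers > 0 keeps the GPU fed while workers read from the cache.
    loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=8)

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):                       # later epochs re-read cached data
        for features, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            optimizer.step()

    # Persist the trained model through the same mount; Alluxio writes it out to
    # S3, where serving instances can pick it up for online inference.
    torch.save(model.state_dict(), MODEL_OUT)


if __name__ == "__main__":
    train()
```

Because the cached datasets are reused across epochs, only the first pass pays the cost of pulling data from S3; on the serving side, a TorchServe model store can simply be pointed at the corresponding path to pick up the newest model files.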

Read More

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to further lead the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms.
Intel DCM empowers data centers with the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces the total cost of ownership, improves sustainability, and elevates performance benchmarks.
“We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management,” says Sanjoy Maity, CEO at AMI. “AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives.”
About AMI
AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.
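As a rough illustration of the kind of real-time insight described above, the sketch below polls a telemetry endpoint for per-server power and inlet-temperature readings and flags servers running hot. The endpoint URL, field names, and threshold are assumptions made for illustration only and do not reflect Intel DCM's actual API.

```python
# Hypothetical sketch of a real-time telemetry loop of the kind a DCM-style
# manageability service enables. The endpoint, JSON fields, and threshold are
# illustrative assumptions, not Intel DCM's actual interface.
import json
import time
from urllib.request import urlopen

TELEMETRY_URL = "http://dcm.example.local/api/v1/servers/telemetry"  # placeholder
INLET_TEMP_LIMIT_C = 27.0   # example inlet-temperature threshold


def poll_once() -> None:
    with urlopen(TELEMETRY_URL, timeout=5) as resp:
        # assumed payload: [{"id": ..., "power_w": ..., "inlet_temp_c": ...}, ...]
        servers = json.load(resp)

    total_power_w = sum(s["power_w"] for s in servers)
    hot_nodes = [s["id"] for s in servers if s["inlet_temp_c"] > INLET_TEMP_LIMIT_C]

    print(f"fleet drawing {total_power_w / 1000:.1f} kW across {len(servers)} servers")
    if hot_nodes:
        print(f"inlet temperature above {INLET_TEMP_LIMIT_C} C on: {', '.join(hot_nodes)}")


if __name__ == "__main__":
    while True:          # simple polling loop; a real deployment would use the
        poll_once()      # product's own dashboards, alerts, and APIs instead
        time.sleep(60)
```

The aggregation shown, turning raw power and thermal readings into a fleet-level view and exception list, is the essence of the real-time insight the announcement describes.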

Read More

Data Storage

CoolIT Systems Partners with Switch Datacenters to Launch Advanced Energy-Efficient Data Centers

PRWeb | October 12, 2023

CoolIT Systems, a global leader in advanced cooling technology, and Switch Datacenters, a leading sustainable data center operator and developer, are thrilled to unveil a strategic partnership that will benefit an industry seeking to improve the sustainability of data centers. Following the recent release of the World Economic Forum's Top 10 Emerging Technologies, which featured "Sustainable Computing" as the 9th-ranked emerging technology, the collaboration between Switch Datacenters and CoolIT provides data center space and the technology needed to significantly curtail the energy and water consumption inherent in modern data centers.
With a history spanning more than a decade, Switch Datacenters has consistently demonstrated a commitment to environmental responsibility and sustainability. Their latest 45MW AMS6 data center near the Schiphol airport area features an HPC/AI-ready design that uses data center heat to warm adjacent greenhouses. Currently under development, their AMS5s is designed to make a significant contribution to the Amsterdam municipal heat grid with green, CO2-neutral heat. For both data centers, there is a marked preference for liquid cooling because it allows heat extraction at temperatures higher than traditional air cooling, offering enhanced economic value.
CoolIT Systems is the industry-leading provider of efficient Direct Liquid Cooling (DLC) and Rear Door Heat Exchangers (RDHx) that enable heat reuse and help customers meet key Environmental, Social, and Governance (ESG) targets. CoolIT DLC technology is available as a factory-installed, warranty-approved option from most major server OEMs.
"CoolIT's DLC and RDHx technologies have been instrumental in various data center heat reuse projects for years, with customers reporting at minimum a savings of 10% on energy bills (OPEX), more than 50% on CAPEX spend, and examples of PUE lowered from 1.30 to 1.02," expressed Peggy Burroughs, Director of CoolIT Next. "Our collaborations with most major server OEMs have cultivated an expansive ecosystem for clients aspiring to achieve both business and ESG goals."
"CoolIT is the right company to help make our vision a reality at an industrial scale. Both CoolIT and Switch Datacenters have shared the same passion for sustainable innovation for years and truly want to elevate the industry's adoption of liquid cooling. We believe liquid cooling will be the game-changer in the next wave of sustainable data center designs, and CoolIT is one of the very few companies that can lead this upcoming demand, thanks to their long history of innovation, reliability, breadth of portfolio, and capabilities to scale with their numerous IT partners worldwide," says Gregor Snip, CEO of Switch Datacenters.
Data centers are projected to account for 8% of global electricity consumption by 2030 [1]. Technologies such as Direct Liquid Cooling can reduce data center energy consumption by 25-40% and deliver water savings of 70-97%, depending on local climate and specific implementations [2].
Switch Datacenters is leading the charge in embracing sustainable alternatives for heating by reusing data center-generated heat. With their latest project, Switch Datacenters AMS6, they will revolutionize the way nearby greenhouses are heated by providing high-temperature heat from their data center. This innovative solution will replace traditional fossil fuel-based heating and contribute to a greener future.
By harnessing the power of IT servers to generate green heat for large-scale crop cultivation, Switch Datacenters is driving the transition away from fossil fuels. They strongly advocate for the integration of heat-recapture-enabled data centers in areas with high demand for heat, making it a standard design principle. With the world calling for sustainable IT and data centers, the time is ripe for this much-needed change.
With the combined expertise of CoolIT and Switch Datacenters, customers can now harness technologically advanced solutions that not only result in considerable energy and water savings but also contribute significantly to the global drive for reduced environmental impact, aligning with the United Nations Sustainable Development Goals of Affordable and Clean Energy (SDG 7); Industry, Innovation, and Infrastructure (SDG 9); and Climate Action (SDG 13).
About CoolIT Systems
CoolIT Systems is renowned for its scalable liquid cooling solutions tailored for the world's most challenging computing contexts. In both enterprise data centers and high-performance computing domains, CoolIT collaborates with global OEM server design leaders, formulating efficient and trustworthy liquid cooling solutions. In the desktop enthusiast arena, CoolIT delivers unmatched performance for a diverse range of gaming setups. Their modular Direct Liquid Cooling technology, Rack DLC™, empowers dramatic spikes in rack densities, component efficacy, and power savings. Jointly, CoolIT and its allies are pioneering the large-scale adoption of sophisticated cooling techniques.
About Switch Datacenters
Switch Datacenters is a Dutch privately-owned data center operator and developer founded in 2010 by Gregor Snip and his brother. Initially established as a private data center for their successful hosting company, the Amsterdam-based company later expanded into a fully fledged commercial data center operator. It added several highly efficient and environmentally friendly data center sites to its portfolio, with a current focus on constructing and managing wholesale data centers for large global customers while also providing tailor-made data center services. Switch Datacenters is an ambitious, 100% Dutch player in the Amsterdam data center sector, experiencing rapid growth by continually partnering with leading and globally recognized industry players and customers. The company maintains a steadfast commitment to innovative and sustainable site development. Currently, Switch Datacenters has over 200MW of new sustainable data center capacity in development. This year, it will launch its flagship sustainable data center, AMS4, with major customers having already pre-leased the 15-18MW facility.
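As a back-of-the-envelope illustration of the PUE figures quoted earlier in this article (PUE is the ratio of total facility energy to IT equipment energy), the sketch below compares the non-IT overhead implied by a PUE of 1.30 versus 1.02. The 10 MW IT load is an assumed figure chosen for illustration, not one from the announcement.

```python
# Back-of-the-envelope sketch of what the quoted PUE figures mean in energy
# terms. PUE = total facility energy / IT equipment energy, so the non-IT
# overhead (cooling, power delivery) is IT_energy * (PUE - 1).
IT_LOAD_MW = 10.0        # assumed IT load for illustration, not from the article
HOURS_PER_YEAR = 8760


def overhead_mwh(pue: float, it_load_mw: float = IT_LOAD_MW) -> float:
    """Annual non-IT energy, in MWh, implied by a given PUE."""
    return it_load_mw * (pue - 1.0) * HOURS_PER_YEAR


before = overhead_mwh(1.30)   # air-cooled baseline quoted in the article
after = overhead_mwh(1.02)    # DLC figure quoted in the article

print(f"overhead before: {before:,.0f} MWh/yr")
print(f"overhead after:  {after:,.0f} MWh/yr")
print(f"overhead eliminated: {(before - after) / before:.0%}")
```

For the assumed 10 MW load, dropping PUE from 1.30 to 1.02 eliminates roughly 93% of the non-IT overhead energy, which is why liquid cooling and heat reuse figure so prominently in the partners' sustainability plans.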

Read More

Events