Micro Data Centers Infrastructure

Micro data centers are beneficial when you need to install IT equipment in harsh or outdoor environments. They offer reduced latency and increased physical security, and they let you place computing capacity in close proximity to a data-intensive application.

Spotlight

Avnet

Avnet guides today’s ideas into tomorrow’s technology. We design and make for start-ups – the technology dreamers poised to be the next big thing. And we supply and deliver for the contract manufacturers and OEMs who need to stock shelves around the globe.

OTHER ARTICLES
Hyper-Converged Infrastructure, Application Infrastructure

Why are Investments in Network Monitoring Necessary for Businesses?

Article | July 19, 2023

Businesses depend more and more on information technology to accomplish daily objectives, and the viability and profitability of a firm are directly affected by whether the right technological processes are in place. The common complaint that "the Internet is down" usually signals poor internal network connectivity, and it shows how crucial network maintenance is: troubleshooting should begin and end with a network expert. In practice, though, an employee will spend part of their day trying to "repair the Internet," and the cost of that time is the result of the company's failure to implement a dependable network monitoring system. The direct financial loss grows as network reliability declines.

Because expanding wide area network (WAN) infrastructure and cloud networking have become significant components of enterprise computing, networks have grown far more virtualized and are no longer tied to a single physical location or to specific hardware. As networks evolve, the need for IT network management grows with them. As organizations modernize their IT infrastructure, there are several reasons to consider purchasing a network management system.

Creating More Effective, Less Redundant Systems

Every network has to manage the flow of information and the transfer of data through major hubs. Over the years, networking engineers have had to route traffic from networking equipment to end devices carefully: avoiding slowdowns in data transfer, conserving IP addresses in the addressing scheme, and preventing routing loops. An effective IT management solution can analyze how your network is operating and provide immediate insight into the changes needed to cut redundancy and improve workflow. Greater efficiency means more productivity and less time spent troubleshooting delayed data transfers.
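To make the monitoring idea concrete, here is a minimal sketch of the kind of reachability and latency check that a network monitoring system automates. The endpoint names and the 500 ms alert threshold are illustrative assumptions, not part of any particular product.

```python
import socket
import time


def check_endpoint(host: str, port: int, timeout: float = 2.0):
    """Attempt a TCP connection and report (reachable, latency_in_seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, None


def summarize(results):
    """Turn per-endpoint results into alerts: down hosts and slow responders."""
    alerts = []
    for name, (up, latency) in results.items():
        if not up:
            alerts.append(f"{name}: DOWN")
        elif latency > 0.5:  # illustrative 500 ms threshold
            alerts.append(f"{name}: SLOW ({latency:.3f}s)")
    return alerts
```

A real monitoring system would run such probes on a schedule, persist the history, and route alerts to an on-call channel; the point here is only the shape of the check-and-report loop.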
Increasing Firewall Defense

Given that more applications are being used for large internal and external data transfers, every network needs properly configured firewalls and access control. Beyond screen sharing and remote desktop services, more companies require team meeting software with live video conferencing. Programs with these features can be highly exposed to attackers, so it is crucial that firewalls prevent intruders from using the software to reach restricted sections of corporate networks. Your network management tools can configure your firewalls and ensure that only secure network connections and programs are used in critical parts of your system.

The bottom line is that your company network will always require security work and development, and the underlying network must be fast and dependable to satisfy demands for both workplace productivity and customer experience. Which IT network management system, then, is best for your company? Effectiveness doesn't require a lot of complexity, and if the system works with well-known network providers, there's a good chance the cost will be justified. Rock-solid security is the most crucial factor, but you should also look for a system that can operate across physical, cloud, and hybrid infrastructure.
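The default-deny access control that such tools enforce can be sketched as follows; the source networks and destination ports in the rule table are hypothetical examples, not a recommended policy.

```python
import ipaddress

# Hypothetical allowlist: (permitted source network, destination port) pairs.
ALLOW_RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), 443),        # internal HTTPS
    (ipaddress.ip_network("192.168.1.0/24"), 3389),   # RDP from the office LAN only
]


def is_allowed(src_ip: str, dst_port: int) -> bool:
    """Default-deny: permit only traffic matching an explicit allow rule."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net and dst_port == port for net, port in ALLOW_RULES)
```

Real firewalls evaluate far richer rules (protocols, state, directions), but the default-deny principle shown here, where anything not explicitly allowed is rejected, is the core of restricting exposed services.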

Hyper-Converged Infrastructure

The Future of Computing: Why IaaS is Leading the Way

Article | October 10, 2023

Firms face challenges managing their resources and ensuring security and cost optimization, which adds complexity to their operations. IaaS answers this need to maintain and manage IT infrastructure.

Contents
1. Infrastructure as a Service: Future of Cloud Computing
2. Upcoming Trends in IaaS
2.1 The Rise of Edge Computing
2.2 Greater Focus on Security
2.3 Enhancement in Serverless Architecture
2.4 Evolution of Green Computing
2.5 Emergence of Containerization
3. Final Thoughts

1. Infrastructure as a Service: Future of Cloud Computing

As digital transformation continues to reshape the business landscape, cloud computing is emerging as a critical enabler for companies of all sizes. With infrastructure-as-a-service (IaaS), businesses can outsource their hardware and data center management to a third-party provider, freeing up resources, allowing them to focus on their core competencies, and reducing operational costs while maintaining the agility to adapt to changing market conditions. With the increasing need for scalable computing solutions, IaaS is set to become a pivotal player in shaping the future of computing. IaaS is already emerging as a prominent solution for organizations looking to modernize their computing capabilities. This article delves into recent trends in IaaS and their potential impact on the computing industry, showing why IaaS matters for emerging businesses.

2. Upcoming Trends in IaaS

2.1 The Rise of Edge Computing

The rise of IoT and mobile computing has strained the amount of data that can be transferred across a network in a given period. Driven by uses such as improving reaction times for self-driving cars and safeguarding confidential health information, the market for edge computing infrastructure is expected to reach a value of $450 billion.
(Source: CB Insights)

Edge computing is a technology that enables data processing to occur closer to its origin, thereby reducing the volume of data that must be transmitted to and from the cloud. IDC describes it as a mesh network of micro data centers, each with a footprint of less than 100 square feet, that process or store critical data locally and push all received data to a central data center or cloud storage repository. (Source: IDC)

Edge computing represents the fourth major paradigm shift in modern computing, following mainframes, client/server models, and the cloud. A hybrid architecture of interconnected IaaS services allows for low latency through edge computing and for high performance, security, and flexibility through a private cloud. Connecting edge devices to an IaaS platform streamlines location management and enables remote work, pointing toward a smoother future for IaaS.

An edge layer (fog computing) is required to optimize this architecture, using high-speed, reliable 5G connectivity to connect edge devices with the cloud. This layer acts as a set of autonomous distributed nodes capable of analyzing and acting on real-time data, sending only the data that is required on to the central infrastructure in an IaaS instance. By combining the data-capture advantages of edge computing with the storage and processing capabilities of the cloud, companies can take full advantage of data analytics to drive innovation and optimization while effectively managing IoT devices at the edge.

IoT devices, also known as edge devices, can analyze data in real time using AI, ML, and other algorithms, even without an internet connection. This yields numerous advantages, including better decision-making, early detection of issues, and heightened efficiency.
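The local-filtering pattern described above, in which an edge node analyzes readings on-site and transmits only a compact summary plus the anomalies, can be sketched as below. The field names and the anomaly threshold are invented for illustration.

```python
# Hypothetical edge-filtering sketch: process sensor readings locally and
# forward only an aggregate plus the anomalous raw values to the central
# IaaS platform, rather than shipping every reading over the network.

def process_locally(readings, anomaly_threshold=90.0):
    """Return the small payload an edge node would actually transmit."""
    anomalies = [r for r in readings if r > anomaly_threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "anomalies": anomalies,  # raw values kept only for outliers
    }
```

If a node collects thousands of readings per minute, the transmitted payload stays a few fields regardless, which is exactly the bandwidth reduction the edge layer is meant to provide.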
However, an IaaS infrastructure with strong computing and storage capabilities is an absolute necessity to analyze that data effectively.

2.2 Greater Focus on Security

Hackers can use cloud-based services to host malware through malware-as-a-service (MaaS) platforms, or to distribute malware payloads via cloud-based apps and services. In addition, organizations often deploy more in their IaaS footprint than they can secure, leading to misconfigurations and vulnerabilities. Recognizing and reacting to an attack is reactive security; anticipating a dangerous event before it happens and intervening to prevent it is predictive security. Predictive security is the future of cloud security.

The cybersecurity mesh involves setting up a distributed network and infrastructure to create a secure perimeter. It allows companies to centrally manage access to their data while enforcing security policies across the distributed network, and it is a critical component of Zero-Trust architecture. Another popular IaaS cloud security trend is the multi-cloud environment, which proves effective when tools like security information and event management (SIEM) and threat intelligence are deployed.

DevSecOps is a methodology that incorporates security protocols at every stage of the software development lifecycle (SDLC), making it convenient to deal with threats during the lifecycle itself. Since the adoption of DevOps, release cycles have shortened for every product release, and DevSecOps is secure and fast only with a fully automated software development lifecycle. DevOps and security teams must collaborate to deliver large-scale digital transformation securely, as digital services and applications demand ever stronger security. The methodology should be enforced in a CI/CD pipeline to make it a continuous process.
Secure access service edge (SASE) is a cloud-based architecture that integrates networking and software-as-a-service (SaaS) functions, providing them as a unified cloud service. The architecture combines a software-defined wide area network (SD-WAN) or other WAN with multiple security capabilities, securing network traffic.

2.3 Enhancement in Serverless Architecture

Serverless apps are launched on demand when an event triggers the app code to run, and the public cloud provider then assigns the resources necessary for the operation. With serverless apps, containers are deployed and launched on demand when needed. This differs from the traditional IaaS cloud computing model, where users must pre-purchase capacity units of always-on server components to run their apps. With a serverless model, the app incurs minimal charges during off-peak hours, and when traffic surges it can scale up seamlessly through the provider without requiring DevOps involvement.

A serverless database operates as a fully managed database-as-a-service (DBaaS) that automatically adjusts its computing and storage resources to match demand. It is a cloud-based service that eliminates the need to manage infrastructure, scaling, and provisioning, letting developers concentrate on building applications or digital products without the burden of managing servers, storage, or backups.

2.4 Evolution of Green Computing

Infrastructure-as-a-service plays a significant role in promoting green computing by letting cloud providers manage the infrastructure, which reduces environmental impact and boosts efficiency by running servers at high utilization rates. Studies show that public cloud infrastructure is typically two to four times more efficient than traditional data centers, a giant leap forward for sustainable computing practices.
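As a concrete illustration of the serverless model described above, an event-triggered function can be sketched as a plain handler: the platform invokes it on demand, allocates resources for the call, and bills only for execution time. The event shape and the doubling logic are invented for illustration and do not follow any specific provider's API.

```python
# Hypothetical function-as-a-service handler: invoked per event, stateless,
# and torn down after the response is returned. No always-on server exists.

def handler(event, context=None):
    """Validate the event payload, do the work, and return a response dict."""
    width = event.get("width")
    if not isinstance(width, int) or width <= 0:
        return {"status": 400, "error": "width must be a positive integer"}
    return {"status": 200, "scaled_width": width * 2}
```

Because nothing runs between invocations, idle periods cost next to nothing, which is the pricing contrast with pre-purchased, always-on IaaS capacity that the section above draws.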
2.5 Emergence of Containerization

Containerization is a form of operating system virtualization in which applications are executed in distinct user spaces called containers. These containers run on the same shared operating system and provide a complete, portable computing environment for virtualized infrastructure. Containers are self-contained software packages that operate in any environment, including private data centers, public clouds, and developer laptops, and they bundle all the components an application needs to run correctly on IaaS-based cloud platforms.

3. Final Thoughts

With the expansion of multi-cloud environments, the emergence of containerization technologies like Docker and Kubernetes, and enhancements in serverless databases, IaaS is poised to become even more powerful and versatile in meeting the diverse computing needs of organizations. These advancements have enabled IaaS providers to offer a wide range of services and capabilities, such as automatic scaling, load balancing, and high availability, making it easier for businesses to build, deploy, and manage their applications swiftly in the cloud.

Application Storage, Data Storage

Attend These Events to Discover the Future of Cloud Infrastructure

Article | July 12, 2023

IT infrastructure is of utmost importance because it enables organizations to manage and deliver data and services to their employees, customers, and partners. The events below cover a range of topics related to cloud infrastructure, including cloud security, hybrid cloud, multi-cloud, cloud automation, and cloud-native applications. They take place in the United States, Denmark, China, and Italy, among other locations, between May 2023 and December 2023. Let's take a closer look at each of these events and what attendees can expect to gain from them.

1. Data Center World
May 8-11, 2023 | Austin, Texas (United States)

Data Center World is an important event focusing on digital infrastructure, aimed at data center professionals, technology leaders, and innovators driving the digital industry forward. It is the longest-running data center conference and expo, blending decades of experience with insight into today's and tomorrow's strategic issues. The conference will give attendees valuable knowledge and strategies on the technologies and concepts necessary for planning, managing, and optimizing data centers. It will feature themes such as edge computing, colocation, and hyperscale, and will offer a platform for experts to share their insights on the latest developments and trends shaping the future of digital infrastructure.

2. Gartner IT Infrastructure, Operations and Cloud Strategies Conference 2023
November 20-21, 2023 | London (England)

The Gartner IT Infrastructure, Operations and Cloud Strategies Conference 2023 is a two-day event covering topics that include the cloud cookbook, disruptive practices, trends, technologies, and more. The conference is designed for attendees who are responsible for servers, storage, and backup/recovery, among other areas.
It will provide a platform for IT professionals to share knowledge, learn from experts in the field, and discuss best practices and emerging trends in IT infrastructure, operations, and cloud strategies. It is an excellent opportunity for attendees to stay up to date with the latest industry developments and gain insights that can help drive their organizations forward.

3. Stackconf: The Open Source Infrastructure Conference
September 13-14, 2023 | Berlin (Germany)

Stackconf focuses on open-source infrastructure solutions in continuous integration, containers, hybrid, and cloud technologies. It will provide a platform for international experts to present their ideas on bridging the gap between development, testing, and operations. The event will offer talks on infrastructure topics spanning the entire DevOps lifecycle, including building, CI/CD, running, and monitoring. Participants can learn about innovative technology mixes and future-oriented designs for large infrastructures, making the event an exciting opportunity to explore the latest advancements in open-source infrastructure solutions.

4. 2023 5th International Conference on Hardware Security and Trust (ICHST 2023)
July 8-10, 2023 | Wuxi (China)

This fifth international conference is a workshop of ICSIP 2023. With the increasing use of computing and communication systems in modern life, the importance of system security has grown significantly. This is especially true for the Internet of Things, which has created new attack surfaces and new requirements for secure system operation. Furthermore, the design, manufacturing, and distribution of microchips, PCBs, and other electronic components have become more complex, opening potential security vulnerabilities. ICHST will promote the growth of hardware-based security research and development, highlighting new hardware and system security results.
The conference will cover topics such as security techniques, design and test methods, and more.

5. Capacity Caucasus and Central Asia 2023
June 21-22, 2023 | Baku (Azerbaijan)

Capacity Media is pleased to announce the launch of a new digital infrastructure event for the Caucasus and Central Asian markets. This event is the only one of its kind in the region and arrives as untapped markets open up and significant investments are made in digital infrastructure, such as the Digital Silk Way project. This presents boundless opportunities for IP transit and content, with growing demand for digital services ranging from e-commerce to e-learning, telemedicine, and telecommuting. The event aims to bring together infrastructure professionals and digital service providers in this emerging digital hub to explore the latest trends, technologies, and opportunities in digital infrastructure.

6. Datacenter Forum Helsinki 2023
June 1, 2023 | Helsinki (Finland)

The eighth annual Datacenter Forum Helsinki is a highly anticipated event that will bring together over 400 professionals from the data center sector in Finland and the Baltics. The conference is free of charge for those involved in managing and operating IT infrastructure, making it accessible to a wide range of professionals. Attendees can expect to network with peers, learn about the latest trends and technologies in data centers, and take part in informative sessions and discussions led by industry experts. The event promises to be an exciting opportunity for professionals in the region to connect, collaborate, and gain insights into the future of data center infrastructure.

7. DICE East
May 24-25, 2023 | Virginia (United States)

DICE East is a highly anticipated two-day national event focused on data centers. This premium event will allow attendees to explore the latest opportunities, challenges, and innovations in the digital infrastructure industry.
Attendees can expect to gain valuable insights into the future of data center technology and to connect with industry experts and peers, making it a must-attend event for anyone involved in the digital infrastructure industry. Key themes include cloud computing, artificial intelligence, edge computing, and more. The event will also include an exhibit hall where attendees can see the latest products and solutions from leading vendors in the digital infrastructure space.

8. International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC-CIE 2023)
August 20-23, 2023 | Massachusetts (United States)

IDETC-CIE 2023 is a significant event in the field of design and related manufacturing. It will comprise a series of sub-conferences, giving researchers, academics, and professionals from around the world the opportunity to present and discuss the latest advancements, trends, and challenges in the field. The conference will feature keynote speeches, paper presentations, panel discussions, and interactive sessions, providing attendees with a comprehensive view of the latest developments in the industry. It is a must-attend event for professionals, researchers, and students involved in design and related manufacturing.

9. International Intelligent Building and Green Technology Expo (IBG 2023)
November 15-17, 2023 | Shanghai (China)

The International Intelligent Building and Green Technology Expo (IBG 2023) is a specialized event dedicated to creating an intelligent and energy-efficient building ecosystem.
The expo will present the latest products and services related to fire and safety systems, intelligent building equipment and management, intelligent building management systems, building information systems, and information application systems. IBG 2023 will attract attendees from various industries, who can gain insights into the latest trends and technologies in intelligent building and green technology and learn about industry developments and advancements. This is a must-attend event for professionals and businesses looking to stay up to date with the latest advancements in intelligent buildings.

10. ConnecTechAsia
June 6, 2023 | Singapore EXPO (Singapore)

ConnecTechAsia is a leading conference focused on the latest advancements in communication, enterprise, and broadcast technologies. The event comprises three separate conferences, BroadcastAsia, CommunicAsia, and NXTAsia, covering a wide range of topics in their respective fields. It will give attendees the opportunity to explore the latest trends and innovations in communication, enterprise, and broadcast technologies, and to connect with industry experts, thought leaders, and peers from around the world. The conference will also feature keynotes, panel discussions, workshops, and exhibitions that showcase the latest products and services in the industry.

11. Data Summit 2023
May 10-11, 2023 | Boston (United States)

Data Summit 2023 is a leading conference on data management and analytics. The event will cover a wide range of topics, including what's next in data and analytics architecture, modern data strategy essentials, AI and machine learning, and data mesh and data fabric, among others. It will also include keynotes, panel discussions, and workshops that showcase the latest products and services in the industry.
One of the main highlights of Data Summit 2023 is the Data Solutions Showcase, which gives attendees the opportunity to explore and see demonstrations of the latest data management and analytics solutions from leading vendors.

12. Advancing Data Center Construction: West 2023
July 17-19, 2023 | Washington (United States)

Advancing Data Center Construction: West 2023 is a three-day event that will bring together professionals from the data center construction industry to discuss the latest trends and strategies in data center construction. The event will feature keynote speeches from industry leaders, panel discussions, and networking opportunities. It covers a variety of topics, such as optimizing prefabrication strategies, managing supply chain disruption, enhancing collaborative project delivery, and the future of data center projects, and it will also address sustainable construction approaches, including strategies for reducing energy consumption and minimizing environmental impact.

13. International Data Center and Cloud Computing Expo (CDCE 2023)
November 15-17, 2023 | Shanghai (China)

The International Data Center and Cloud Computing Expo (CDCE 2023) is a trade show that will offer exhibitors a platform to showcase their latest products and services in data centers and cloud computing. Featuring products such as data center management software, monitoring systems, power generators, air conditioning and cooling systems, and security systems, the event will attract attendees from industries including internet service providers, financial institutions, energy companies, research institutions, hospitals, and manufacturers.
The exhibition will showcase products and services in categories such as data center management, infrastructure solutions, cloud computing services, system integration and development, and advanced construction materials.

14. Datacenter Forum Copenhagen 2023
September 21, 2023 | Copenhagen (Denmark)

The ninth edition of Datacenter Forum Copenhagen is an annual, one-day event focused on the latest trends and developments in the data center industry. It will bring together over 300 professionals from the data center sector in Denmark, including IT infrastructure managers and operators, and is organized by Nordics Events, a company that specializes in industry-specific events in the Nordic region. Topics will include data center design, energy efficiency, security, and more. Attendees can also visit the exhibition area to meet vendors and learn about the latest products and services in the industry. Attendance is free for those involved in managing and operating IT infrastructure.

15. Telco Infrastructure Summit (TIS) 2023
September 21-22, 2023 | Rome (Italy)

CC (Carrier Community) is a global telecom club organizing its fourth specialized annual event, CC-TIS 2023 Rome. The event is a hybrid gathering that brings together leading telco and ICT players to learn, share, network, and shape industry trends related to digital transformation and telecom infrastructure development. During the two-day event, attendees will discuss market-relevant topics such as submarine connectivity and other emerging trends, engaging in lively discussions and gaining a deeper understanding of the opportunities and challenges facing the telecom industry.
Conclusion

These events will help organizations stay ahead of the curve in today's rapidly evolving landscape and capitalize on the opportunities it presents. The events above aim to facilitate collaboration, knowledge exchange, and discussions toward novel solutions for the computing systems of tomorrow.

Application Infrastructure

The importance of location intelligence and big data for 5G growth

Article | December 20, 2021

The pandemic has had a seismic impact on the telecom sector, perhaps most notably because where and how the world goes to work has been redefined, with nearly every business deepening its commitment to mobility. Our homes suddenly became our offices, and workforces went from being centrally managed to widely distributed. This has heightened the need for widespread, secure, high-speed connectivity around the clock. 5G has answered the call, and 5G location intelligence and big data can provide service providers with the information they need to optimize their investments. Case in point: Juniper Research reported in its 5G Monetization study that global revenue from 5G services will reach $73 billion by the end of 2021, rising from just $20 billion last year.

5G flexes as connected devices surge

Market insights firm IoT Analytics estimates there will be more than 30 billion IoT connections by 2025, an average of nearly four IoT devices per person. To help meet the pressure this growth in connectivity puts on telecom providers, the Federal Communications Commission (FCC) is taking action to make additional spectrum available for 5G services and promoting the digital opportunities it provides to Americans. The FCC is urging that investments in 5G infrastructure be prioritized given the "widespread mobility opportunity" it presents, as stated by FCC Chairwoman Jessica Rosenworcel. While that's a good thing, we must also acknowledge that launching a 5G network presents high financial risk, among other challenges. The competitive pressures are significant, and network performance matters greatly for new business acquisition and retention. It's imperative to make wise decisions on network build-out to ensure investments yield the anticipated returns. Telcos need not, and should not, go in blind when considering where to invest.
You don't know what you don't know, which is why 5G location intelligence and big data can provide an incredible amount of clarity (and peace of mind) when it comes to optimizing investments, increasing marketing effectiveness, and improving customer satisfaction.

Removing the blindfold

Location data and analytics provide telcos and communications service providers (CSPs) with highly specific insights for deciding where to invest in 5G. With this information, companies can not only map strategic expansion but also better manage assets, operations, customers, and products. For example, carriers can gain insight into the locations specific populations most desire and how they want to use bandwidth. They can use this data to build a clear understanding of customer location and mobility, mapping existing infrastructure and competitive coverage against market requirements to pinpoint new opportunities. By creating rich customer profiles with demographic information such as age, income, and lifestyle preferences, the guesswork is eliminated from where a telco should or shouldn't deploy new 5G towers.

Further, by mapping the consumers and businesses within a specific region and then aggregating that information by age, income, or business type, a vivid picture of the area's market opportunity comes to life. This type of granular location intelligence adds important context to existing data and is a key pillar of data integrity, which describes the overall quality and completeness of a dataset. When telcos clearly understand factors such as boundaries, movement, and customers' surroundings, they can make predictive insights about demographic changes and future telecom requirements within a given location. This then serves as the basis for a data-backed 5G expansion strategy.
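The region-and-demographic roll-up just described can be sketched in a few lines; the record fields, age bands, and region names below are hypothetical, standing in for the demographic datasets an operator would actually license.

```python
from collections import defaultdict


def market_opportunity(records):
    """Count prospects per (region, age band) to rank 5G build-out candidates.

    Each record is a dict with hypothetical 'region' and 'age' fields;
    the result is sorted so the densest segments come first.
    """
    counts = defaultdict(int)
    for rec in records:
        band = "18-34" if rec["age"] < 35 else "35+"
        counts[(rec["region"], band)] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```

Real pipelines would join in income, lifestyle, and coverage layers and weight segments rather than just counting them, but the group-then-rank shape of the analysis is the same.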
Without it, businesses are burdened by the trial-and-error losses that are all too common in 5G build-outs.

Location precision's myriad benefits

Improved location precision has many benefits for telcos looking to pinpoint where to build, market, and provision 5G. Among them:

Better data: Broadening insights on commercial, residential, and mixed-use locations through easy-to-consume, scalable datasets provides highly accurate, in-depth analyses for marketing and meeting customer demand.

Better serviceability insights: Complete and accurate location insights allow for a comprehensive view of serviceable addresses where products and services can be delivered to current and new customers, improving ROI and ensuring customers are adequately served.

Better subscriber returns: Companies that deploy fixed wireless services often see plan cancellations due to inconsistent signal performance, which typically results from the misalignment of sites with network assets. Location-based data lets operators adapt their networks for signal consistency and serviceability as sites and structures change.

The 5G future

The role of location intelligence in accelerating the development of new broadband services and driving ROI in a 5G world cannot be overstated. It adds a critical element of data integrity that informs network optimization, customer targeting, and service provisioning, so telecom service providers can ensure their investments are not made on blind hope.

Read More


Related News

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise's data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, "the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production." "By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%."

"Alluxio empowers the world's leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward," said Haoyuan Li, Founder and CEO, Alluxio. "Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management."

With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-gen version of Alluxio Enterprise Edition, and will continue to be the ideal choice for businesses focused primarily on analytic workloads.

Accelerating the End-to-End Machine Learning Pipeline

Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. It helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment to yield quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet giants' scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage. With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure.

Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing massive numbers of small files, typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity.
The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics. Finally, it supports analytics and full machine learning pipelines - from ingestion to ETL, pre-processing, training and serving.

Alluxio Enterprise AI includes the following key features:

Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise's existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference.

Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI's distributed caching feature enables AI engines to read and write data through the high-performance Alluxio cache instead of slow data lake storage. Alluxio's intelligent caching strategies are tailored to the I/O patterns of AI engines – large file sequential access, large file random access, and massive small file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.

Seamless Data Access for AI Workloads Across On-prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to manage AI workloads across diverse infrastructure environments easily. Providing a source of truth of data for the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.
New Distributed System Architecture, Battle-tested at Scale - Alluxio Enterprise AI builds on an innovative new decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation for infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio's proven expertise in distributed systems, this new architecture addresses the ever-increasing challenges of system scalability, metadata management, high availability, and performance.

"Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses," said Mike Leone, Analyst, Enterprise Strategy Group. "Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving."

"We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure," said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. "Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry."

"Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement," said Mengyu Hu, Software Engineer in the data platform team, Zhihu. "In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we've significantly enhanced model training performance by 3x and deployment by 10x, with GPU utilization doubled. We are excited about Alluxio's Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave."

Deploying Alluxio in Machine Learning Pipelines

According to Gartner, data accessibility and data volume/complexity are among the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio can work across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets are reused across multiple epochs, so training speed is no longer bottlenecked by retrieving data from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.

Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to inference clusters with low latency. As a result, downstream AI applications can start inferencing with the most up-to-date models as soon as they are available.

Platform Integration with Existing Systems

To integrate Alluxio with an existing platform, users can deploy an Alluxio cluster between compute engines and storage systems.
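As a rough illustration of the training path described above, the sketch below simulates a data loader reading dataset files through a local mount directory, the way a PyTorch loader would read from /mnt/alluxio_fuse/training_datasets. The directory, file names, and sample format here are stand-ins created for the example; a real deployment would use torch.utils.data against the actual Alluxio FUSE mount, with Alluxio serving repeated epochs from its cache rather than from S3:

```python
import os
import tempfile

def load_training_samples(mount_path):
    """Read every sample file under the (simulated) FUSE mount.

    In an Alluxio deployment, repeated epochs over the same path hit the
    local cache instead of S3, so only the first epoch pays the remote-read
    cost; this helper just models the read loop itself.
    """
    samples = []
    for name in sorted(os.listdir(mount_path)):
        with open(os.path.join(mount_path, name), "rb") as f:
            samples.append(f.read())
    return samples

# Simulate the mounted dataset directory with three small sample files.
mount = tempfile.mkdtemp(prefix="alluxio_fuse_sim_")
for i in range(3):
    with open(os.path.join(mount, f"sample_{i}.bin"), "wb") as f:
        f.write(bytes([i]) * 4)

epoch_data = load_training_samples(mount)
print(len(epoch_data))  # 3
```

The point of the pattern is that the training code only ever sees ordinary file paths; whether those reads are satisfied from cache or from the data lake is decided underneath the mount.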
On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray. Enterprises can integrate Alluxio with these compute frameworks via the REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems or object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio runs both on-premises and in the cloud, in bare-metal or containerized environments. Supported cloud platforms include AWS, GCP and Azure Cloud.
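Because the platform exposes an S3-compatible API, existing S3-based client code can often be redirected by changing only the endpoint it talks to. The minimal sketch below shows that idea as a plain configuration helper; the proxy hostname, port, and path are assumptions made for illustration, not verified defaults, so a real setup should take the endpoint from its own deployment configuration:

```python
# Illustrative endpoint swap for S3-compatible access. The Alluxio proxy
# address below is a placeholder, not a documented default.
AWS_S3_ENDPOINT = "https://s3.amazonaws.com"
ALLUXIO_S3_ENDPOINT = "http://alluxio-proxy.example:39999/api/v1/s3"  # assumed

def s3_client_config(use_alluxio: bool) -> dict:
    """Build keyword arguments for an S3 client, e.g. boto3.client("s3", **cfg).

    Only the endpoint changes; bucket names and object keys stay the same,
    which is what lets an existing S3 pipeline adopt a cache layer
    incrementally, one client at a time.
    """
    return {"endpoint_url": ALLUXIO_S3_ENDPOINT if use_alluxio else AWS_S3_ENDPOINT}

cfg = s3_client_config(use_alluxio=True)
print(cfg["endpoint_url"])
```

The same swap-the-endpoint approach applies to any S3-compatible store; the training and serving code above it is unchanged.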

Read More

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to further lead the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms.

Intel DCM empowers data centers with the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces the total cost of ownership, improves sustainability, and elevates performance benchmarks.

"We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management," says Sanjoy Maity, CEO at AMI. "AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives."

About AMI

AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.

Read More

Data Storage

CoolIT Systems Partners with Switch Datacenters to Launch Advanced Energy-Efficient Data Centers

PRWeb | October 12, 2023

CoolIT Systems, a global leader in advanced cooling technology, and Switch Datacenters, a leading sustainable data center operator and developer, are thrilled to unveil a strategic partnership that will benefit an industry seeking to improve the sustainability of data centers. Following the recent release of the World Economic Forum's Top 10 Emerging Technologies, which featured "Sustainable Computing" as the 9th-ranked emerging technology, the collaboration between Switch Datacenters and CoolIT provides data center space and the technology needed to significantly curtail the energy and water consumption inherent in modern data centers.

With a history spanning more than a decade, Switch Datacenters has consistently demonstrated a commitment to environmental responsibility and sustainability. Their latest 45MW AMS6 data center near the Schiphol airport area features an HPC/AI-ready design that uses data center heat to warm adjacent greenhouses. Currently under development, their AMS5s is designed to make a significant contribution to the Amsterdam municipal heat grid with green, CO2-neutral heat. For both data centers, there's a marked preference for liquid cooling because it allows heat extraction at temperatures higher than traditional air cooling, offering enhanced economic value.

CoolIT Systems is the industry-leading provider of efficient Direct Liquid Cooling (DLC) and Rear Door Heat Exchangers (RDHx) that enable heat reuse and help customers meet key Environmental, Social, and Governance (ESG) targets. CoolIT DLC technology is featured as a factory-installed, warranty-approved option from most major server OEMs.

"CoolIT's DLC and RDHx technologies have been instrumental in various data center heat reuse projects for years, with customers reporting at minimum a savings of 10% on energy bills (OPEX), more than 50% on CAPEX spend, and examples of PUE lowered from 1.30 to 1.02," said Peggy Burroughs, Director of CoolIT Next. "Our collaborations with most major server OEMs have cultivated an expansive ecosystem for clients aspiring to achieve both business and ESG goals."

"CoolIT is the right company to help make our vision a reality at an industrial scale. Both CoolIT and Switch Datacenters have shared the same passion for sustainable innovation for years and truly want to elevate the industry's adoption of liquid cooling. We believe liquid cooling will be the game-changer in the next wave of sustainable data center designs, and CoolIT is one of the very few companies that can lead this upcoming demand, thanks to their long history of innovation, reliability, breadth of portfolio, and capabilities to scale with their numerous IT partners worldwide," says Gregor Snip, CEO of Switch Datacenters.

Data centers are projected to account for 8% of global electricity consumption by 2030. Technologies such as Direct Liquid Cooling can reduce data center energy consumption by 25-40% and deliver water savings of 70-97%, depending on local climate and specific implementations.

Switch Datacenters is leading the charge in embracing sustainable alternatives for heating by reusing data center-generated heat. With their latest project, Switch Datacenters AMS6, they will revolutionize the way nearby greenhouses are heated by providing high-temperature heat from their data center. This innovative solution will replace traditional fossil fuel-based heating and contribute to a greener future. By harnessing the power of IT servers to generate green heat for large-scale crop cultivation, Switch Datacenters is driving the transition away from fossil fuels. They strongly advocate for the integration of heat-recapture-enabled data centers in areas with high demand for heat, making it a standard design principle. With the world calling for sustainable IT and data centers, the time is ripe for this much-needed change.

With the combined expertise of CoolIT and Switch Datacenters, customers can now harness technologically advanced solutions that deliver considerable energy and water savings while contributing to the global drive for reduced environmental impact, aligning with the United Nations Sustainable Development Goals of Affordable and Clean Energy (SDG 7); Industry, Innovation, and Infrastructure (SDG 9); and Climate Action (SDG 13).

About CoolIT Systems

CoolIT Systems is renowned for its scalable liquid cooling solutions tailored for the world's most challenging computing contexts. In both enterprise data centers and high-performance computing domains, CoolIT collaborates with global OEM server design leaders, formulating efficient and trustworthy liquid cooling solutions. In the desktop enthusiast arena, CoolIT delivers unmatched performance for a diverse range of gaming setups. Their modular Direct Liquid Cooling technology, Rack DLC™, enables dramatic increases in rack density, component efficacy, and power savings. Jointly, CoolIT and its partners are pioneering the large-scale adoption of sophisticated cooling techniques.

About Switch Datacenters

Switch Datacenters is a Dutch privately-owned data center operator and developer founded in 2010 by Gregor Snip and his brother. Initially established as a private data center for their successful hosting company, the Amsterdam-based company later expanded into a fully-fledged commercial data center operator. It added several highly efficient and environmentally-friendly data center sites to its portfolio, with a current focus on constructing and managing wholesale data centers for large global customers while also providing tailor-made data center services. Switch Datacenters is an ambitious, 100% Dutch player in the Amsterdam data center sector, experiencing rapid growth by continually partnering with leading and globally recognized industry players and customers. The company maintains a steadfast commitment to innovative and sustainable site development. Currently, Switch Datacenters has over 200MW of new sustainable data center capacity in development. This year, it will launch its flagship sustainable data center, AMS4, with major customers having already pre-leased the 15-18MW facility.

Read More

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise’s data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics. To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, “the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production.” “By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in the operationalizing AI models by at least 25%.” Alluxio empowers the world’s leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward, said Haoyuan Li, Founder and CEO, Alluxio. Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. 
The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management. With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-gen version of Alluxio Enterprise Edition, and will continue to be the ideal choice for businesses focused primarily on analytic workloads. Accelerating End-to-End Machine Learning Pipeline Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. Alluxio Enterprise AI helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment to yield quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet giant’s scale; and maximized return on investments by working with existing tech stack instead of costly specialized storage. With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure. Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing massive numbers of small files, typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. 
The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics. Finally, it supports analytics and full machine learning pipelines - from ingestion to ETL, pre-processing, training and serving. Alluxio Enterprise AI includes the following key features: Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise’s existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference. Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI’s distributed caching feature enables AI engines to read and write data through the high performance Alluxio cache instead of slow data lake storage. Alluxio’s intelligent caching strategies are tailored to the I/O patterns of AI engines – large file sequential access, large file random access, and massive small file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization. Seamless Data Access for AI Workloads Across On-prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to manage AI workloads across diverse infrastructure environments easily. Providing a source of truth of data for the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform. 
New Distributed System Architecture, Battle-tested At Scale - Alluxio Enterprise AI builds on a new innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation to provide infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio’s proven expertise in distributed systems, this new architecture has addressed the ever-increasing challenges of system scalability, metadata management, high availability, and performance. “Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses,” said Mike Leone, Analyst, Enterprise Strategy Group. “Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving.” “We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure,” said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. “Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry.” “Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement,” said Mengyu Hu, Software Engineer in the data platform team, Zhihu. “In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we’ve significantly enhanced model training performance by 3x and deployment by 10x with GPU utilization doubled. We are excited about Alluxio’s Enterprise AI and its new DORA architecture supporting access to massive small files. 
This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave.” Deploying Alluxio in Machine Learning Pipelines According to Gartner, data accessibility and data volume/complexity is one the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to the existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting in the middle of compute and storage, Alluxio can work across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake: Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader will load from the Alluxio cache instead. During training, the cached datasets will be used in multiple epochs, so the entire training speed is no longer bottlenecked by retrieving from S3. In this way, Alluxio speeds up training by shortening data loading and eliminates GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio. Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to inference clusters with low latency. As a result, downstream AI applications can start inferencing using the most up-to-date models as soon as they are available. Platform Integration with Existing Systems To integrate Alluxio with the existing platform, users can deploy an Alluxio cluster between compute engines and storage systems. 
On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray. Enterprises can integrate Alluxio with these compute frameworks via REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems or object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio works on both on-premise and cloud, either bare-metal or containerized environments. Supported cloud platforms include AWS, GCP and Azure Cloud.

Read More

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to lead further the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms. Intel DCM empowers data centers with the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces the total cost of ownership, improves sustainability, and elevates performance benchmarks. We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of the data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management, says Sanjoy Maity, CEO at AMI. AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives. About AMI AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.

Read More

Data Storage

CoolIT Systems Partners with Switch Datacenters to Launch Advanced Energy-Efficient Data Centers

PRWeb | October 12, 2023

CoolIT Systems, a global leader in advanced cooling technology, and Switch Datacenters, a leading sustainable data center operator and developer, are thrilled to unveil a strategic partnership that will benefit an industry seeking to improve the sustainability of data centers. Following the recent release of the World Economic Forum's Top 10 Emerging Technologies featuring "Sustainable Computing" as the 9th-ranked emerging technology, the collaboration between Switch Datacenters and CoolIT facilitates data center space and the necessary technology to significantly curtail energy and water consumption inherent in modern data centers. With a history spanning more than a decade, Switch Datacenters has consistently demonstrated a commitment to environmental responsibility and sustainability. Their latest 45MW AMS6 data center near the Schiphol airport area features an HPC/AI-ready design that uses data center heat to warm adjacent greenhouses. Currently under development, their AMS5s is designed to make a significant contribution to the Amsterdam municipal heat grid with green, CO2-neutral heat. For both data centers, there's a marked preference for liquid cooling because it allows heat extraction at temperatures higher than traditional air cooling, offering enhanced economic value. CoolIT Systems is the industry-leading provider of efficient Direct Liquid Cooling (DLC) and Rear Door Heat Exchangers (RDHx) that enable heat reuse and help customers meet key Environmental, Social, and Governance (ESG) targets. CoolIT DLC technology is featured as a factory-installed, warranty-approved feature from most major servers OEMs. "CoolIT's DLC and RDHx technologies have been instrumental in various data center heat reuse projects for years, with customers reporting at minimum a savings of 10% on energy bills (OPEX), more than 50% on CAPEX spends, and examples of PUE lowered from 1.30 to 1.02," expressed Peggy Burroughs, Director of CoolIT Next. 
"Our collaborations with most major server OEMs have cultivated an expansive ecosystem for clients aspiring to achieve both business and ESG goals."

"CoolIT is the right company to help make our vision a reality at an industrial scale. Both CoolIT and Switch Datacenters have shared the same passion for sustainable innovation for years and truly want to elevate the industry's adoption of liquid cooling. We believe liquid cooling will be the game-changer in the next wave of sustainable data center designs, and CoolIT is one of the very few companies that can lead this upcoming demand, thanks to their long history of innovation, reliability, breadth of portfolio, and capabilities to scale with their numerous IT partners worldwide," says Gregor Snip, CEO of Switch Datacenters.

Data centers are projected to account for 8% of global electricity consumption by 2030 [1]. Technologies such as Direct Liquid Cooling can reduce data center energy consumption by 25-40% and deliver water savings of 70-97%, depending on local climate and specific implementations [2].

Switch Datacenters is leading the charge in embracing sustainable alternatives for heating by reusing data center-generated heat. With its latest project, AMS6, it will revolutionize the way nearby greenhouses are heated by supplying high-temperature heat from the data center. This innovative solution will replace traditional fossil fuel-based heating and contribute to a greener future. By harnessing the power of IT servers to generate green heat for large-scale crop cultivation, Switch Datacenters is driving the transition away from fossil fuels. The company strongly advocates for integrating heat-recapture-enabled data centers in areas with high demand for heat, making it a standard design principle. With the world calling for sustainable IT and data centers, the time is ripe for this much-needed change.
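To put the quoted PUE figures in perspective, here is a minimal illustrative sketch (the 1 GWh annual IT load is an assumption for illustration, not a figure from the announcement): PUE is total facility energy divided by IT equipment energy, so lowering PUE from 1.30 to 1.02 shrinks non-IT overhead (cooling, power distribution) from 30% to 2% of the IT load.

```python
# Illustrative sketch: energy impact of a PUE improvement from 1.30 to 1.02.
# The IT load below is an assumed example value, not from the announcement.

def facility_energy_kwh(it_load_kwh: float, pue: float) -> float:
    """Total facility energy for a given IT load, since PUE = facility / IT."""
    return it_load_kwh * pue

it_load = 1_000_000  # assumed: 1 GWh of annual IT equipment energy

before = facility_energy_kwh(it_load, 1.30)
after = facility_energy_kwh(it_load, 1.02)

overhead_before = before - it_load   # ~300,000 kWh of non-IT overhead
overhead_after = after - it_load     # ~20,000 kWh of non-IT overhead
savings_pct = (before - after) / before * 100

print(f"Total facility energy saved: {before - after:,.0f} kWh ({savings_pct:.1f}%)")
```

On these assumed numbers the facility-level saving is roughly 21.5%, which is consistent in magnitude with the 25-40% energy reductions cited for Direct Liquid Cooling.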
With the combined expertise of CoolIT and Switch Datacenters, customers can now harness technologically advanced solutions that deliver considerable energy and water savings and contribute significantly to the global drive for reduced environmental impact, aligning with the United Nations Sustainable Development Goals of Affordable and Clean Energy (SDG 7); Industry, Innovation, and Infrastructure (SDG 9); and Climate Action (SDG 13).

About CoolIT Systems

CoolIT Systems is renowned for its scalable liquid cooling solutions tailored to the world's most challenging computing environments. In both enterprise data centers and high-performance computing, CoolIT collaborates with global OEM server design leaders to develop efficient and reliable liquid cooling solutions. In the desktop enthusiast arena, CoolIT delivers unmatched performance for a diverse range of gaming setups. Its modular Direct Liquid Cooling technology, Rack DLC™, enables dramatic increases in rack density, component performance, and power savings. Together, CoolIT and its partners are pioneering the large-scale adoption of advanced cooling techniques.

About Switch Datacenters

Switch Datacenters is a privately owned Dutch data center operator and developer founded in 2010 by Gregor Snip and his brother. Initially established as a private data center for their successful hosting company, the Amsterdam-based company later expanded into a fully fledged commercial data center operator. It added several highly efficient and environmentally friendly data center sites to its portfolio, with a current focus on constructing and managing wholesale data centers for large global customers while also providing tailor-made data center services. Switch Datacenters is an ambitious, 100% Dutch player in the Amsterdam data center sector, experiencing rapid growth by continually partnering with leading, globally recognized industry players and customers.
The company maintains a steadfast commitment to innovative and sustainable site development. Currently, Switch Datacenters has over 200MW of new sustainable data center capacity in development. This year, it will launch its flagship sustainable data center, AMS4, with major customers having already pre-leased the 15-18MW facility.
