Are Data Centres Able to Operate in Tropical Environments?

Given that a large percentage of the power used to run the average data centre goes directly to cooling, builders and designers do their best to locate new facilities in locales with cooler climates and lower humidity. The idea is to save money by reducing the amount of power used for temperature and humidity control. Still, the curious among us want to know whether a data centre could operate at peak performance under conditions roughly twice the current norms. We are about to find out, thanks to a test set to get under way shortly in Singapore. News reports say the world's first tropical data centre is now in the planning stages and involves a number of big-name partners, including Dell, Hewlett-Packard Enterprise, Intel, ERS and Fujitsu, among others. The consortium will set up a controlled test environment within an existing Keppel data centre.

Spotlight

Neusoft

Neusoft provides innovative information technology-enabled solutions and services to meet the demands arising from social transformation, to shape new lifestyles for individuals and to create value for society. Focusing on software technology, Neusoft provides industrial solutions, smart connected products, platform products, and cloud and data services.

OTHER ARTICLES
Hyper-Converged Infrastructure

Cartesi creates Linux infrastructure for blockchain DApps

Article | October 10, 2023

DApps (sometimes called Dapps) are from the blockchain universe and so, logically, the apps part stands for application (obviously) and the D part stands for decentralised (only obvious once you know that we’re talking distributed immutable language here). According to the guides section at blockgeeks, DApps are open source in terms of code base, incentivised in terms of who validates them, and essentially decentralised, meaning that all records of the application’s operation must be stored on a public and decentralised blockchain to avoid the pitfalls of centralisation. So then, Cartesi is a DApp infrastructure that runs an operating system (OS) on top of blockchains. The company has now launched a more complete ‘platform-level’ offering, which is described as a layer-2 solution.

Read More
Hyper-Converged Infrastructure

Infrastructure Lifecycle Management Best Practices

Article | October 3, 2023

As your organization scales, inevitably, so too will its infrastructure needs. From physical spaces to personnel, devices to applications, physical security to cybersecurity – all these resources will continue to grow to meet the changing needs of your business operations. To manage your changing infrastructure throughout its entire lifecycle, your organization needs to implement a robust infrastructure lifecycle management program that’s designed to meet your particular business needs. In particular, IT asset lifecycle management (ITALM) is becoming increasingly important for organizations across industries. As threats to organizations’ cybersecurity become more sophisticated and successful cyberattacks become more common, your business needs (now, more than ever) to implement an infrastructure lifecycle management strategy that emphasizes the security of your IT infrastructure. In this article, we’ll explain why infrastructure management is important. Then we’ll outline steps your organization can take to design and implement a program and provide you with some of the most important infrastructure lifecycle management best practices for your business. What Is the Purpose of Infrastructure Lifecycle Management? No matter the size or industry of your organization, infrastructure lifecycle management is a critical process. The purpose of an infrastructure lifecycle management program is to protect your business and its infrastructure assets against risk. Today, protecting your organization and its customer data from malicious actors means taking a more active approach to cybersecurity. Simply put, recovering from a cyberattack is more difficult and expensive than protecting yourself from one. If 2020 and 2021 have taught us anything about cybersecurity, it’s that cybercrime is on the rise and it’s not slowing down anytime soon. As risks to cybersecurity continue to grow in number and severity, infrastructure lifecycle management and IT asset management are becoming almost unavoidable. In addition to protecting your organization from potential cyberattacks, infrastructure lifecycle management makes for a more efficient enterprise, delivers a better end-user experience for consumers, and identifies where your organization needs to expand its infrastructure. Some of the other benefits that come with a comprehensive infrastructure lifecycle management program include: more accurate planning; centralized and cost-effective procurement; streamlined provisioning of technology to users; more efficient maintenance; and secure and timely disposal. A robust infrastructure lifecycle management program helps your organization keep track of all the assets running on (or attached to) your corporate networks. That allows you to catalog, identify and track these assets wherever they are, physically and digitally. While this might seem simple enough, infrastructure lifecycle management, and particularly ITALM, has become more complex as the diversity of IT assets has increased. Today, organizations and their IT teams are responsible for managing hardware, software, cloud infrastructure, SaaS, and connected device (IoT) assets. As the number of IT assets under management has soared for most organizations in the past decade, a comprehensive and holistic approach to infrastructure lifecycle management has never been more important. Generally speaking, there are four major stages of asset lifecycle management.
Your organization’s infrastructure lifecycle management program should include specific policies and processes for each of the following steps: Planning. This is arguably the most important step for businesses and should be conducted prior to purchasing any assets. During this stage, you’ll need to identify what asset types are required and in what number; compile and verify the requirements for each asset; and evaluate those assets to make sure they meet your service needs. Acquisition and procurement. Use this stage to identify areas for purchase consolidation with the most cost-effective vendors, and to negotiate warranties and bulk purchases of SaaS and cloud infrastructure assets. This is where a lack of insight into actual asset usage can result in overpaying for assets that aren’t really necessary. For this reason, timely and accurate asset data is crucial for effective acquisition and procurement. Maintenance, upgrades and repair. All assets eventually require maintenance, upgrades and repairs. A holistic approach to infrastructure lifecycle management means tracking these needs and consolidating them into a single platform across all asset types. Disposal. An outdated or broken asset needs to be disposed of properly, especially if it contains sensitive information. For hardware, assets that are older than a few years are often obsolete, and assets that fall out of warranty are typically no longer worth maintaining. Disposal of cloud infrastructure assets is also critical because data stored in the cloud can stay there forever. Now that we’ve outlined the purpose and basic stages of infrastructure lifecycle management, it’s time to look at the steps your organization can take to implement it.
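The four stages above map naturally onto a simple state machine. As a minimal illustrative sketch (not part of the original article; the class and field names are hypothetical), the following Python snippet tracks an asset through planning, acquisition, maintenance and disposal, rejecting transitions that skip a stage:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum, auto


class Stage(Enum):
    PLANNING = auto()
    PROCUREMENT = auto()   # acquisition and procurement
    MAINTENANCE = auto()   # maintenance, upgrades and repair
    DISPOSAL = auto()


# Allowed transitions mirror the four stages described above.
_ALLOWED = {
    Stage.PLANNING: {Stage.PROCUREMENT},
    Stage.PROCUREMENT: {Stage.MAINTENANCE},
    Stage.MAINTENANCE: {Stage.MAINTENANCE, Stage.DISPOSAL},
    Stage.DISPOSAL: set(),
}


@dataclass
class Asset:
    """A single tracked IT asset (hardware, SaaS seat, cloud resource, ...)."""
    asset_id: str
    asset_type: str
    stage: Stage = Stage.PLANNING
    history: list = field(default_factory=list)

    def advance(self, new_stage: Stage, note: str = "") -> None:
        if new_stage not in _ALLOWED[self.stage]:
            raise ValueError(f"{self.asset_id}: cannot move {self.stage.name} -> {new_stage.name}")
        self.history.append((date.today(), self.stage, new_stage, note))
        self.stage = new_stage


if __name__ == "__main__":
    laptop = Asset("LT-0042", "laptop")
    laptop.advance(Stage.PROCUREMENT, "purchased under bulk warranty deal")
    laptop.advance(Stage.MAINTENANCE, "OS image deployed, added to patch cycle")
    laptop.advance(Stage.DISPOSAL, "out of warranty, disk wiped")
    print(laptop.stage.name, len(laptop.history), "transitions recorded")
```

Recording every transition with a date and a note provides the audit trail that the procurement and disposal steps rely on.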

Read More
Hyper-Converged Infrastructure, Windows Systems and Network

Choosing the Right Tools for Hyper-Converged Management and Orchestration

Article | July 11, 2023

Streamlining operations and maximizing efficiency: choosing the right tools for managing and orchestrating hyper-converged infrastructure is the key to unlocking its full potential. Managing and orchestrating hyper-converged infrastructure (HCI) is critical to modern IT operations. With the growing adoption of HCI solutions, choosing the right tools for management and orchestration is essential for organizations to optimize their infrastructure and ensure seamless operations. In this article, we will delve into the factors to consider when selecting Hyper-Converged tools for management and orchestration and explore some of the top options available in the market. 1. Symcloud Orchestrator The Symcloud platform is a webscale solution designed for metal-service automation and orchestration in telecommunications. It enables the automation and management of various network components, including RAN (Radio Access Network), packet core, and MEC (Multi-Access Edge Computing). With Symcloud, businesses can centrally manage large numbers of CNF (Cloud-Native Function) and VNF (Virtual Network Function) capable Kubernetes clusters on a single Kubernetes platform. The platform allows for rapid deployment of the entire solution stack in minutes, supporting edge, far edge, and core data centers. Symcloud provides advanced monitoring, planning, and healing capabilities, enabling users to view hardware, software, services, and connectivity dependencies. The architecture of Symcloud Orchestrator combines app-aware storage, virtual networking, and application workflow automation on Kubernetes. Symcloud Storage provides advanced storage and data management capabilities for Kubernetes distributions, seamlessly integrating with native administrative tooling. Symcloud Platform is a Kubernetes infrastructure that supports containers and virtual machines, offering superior performance, features, and flexibility. 2. Morpheus Morpheus Data is a comprehensive hybrid cloud management platform that empowers enterprises to manage and modernize their applications while reducing costs and improving efficiency. With Morpheus, businesses can quickly enable on-premises private clouds, centralize access to public clouds, and orchestrate changes with advanced features like cost analytics, governance policies, and automation. It provides a unified view of virtual machines, clouds, containers, and applications in a single location, regardless of the private or public cloud environment. Morpheus offers responsive support from an expert team and features an extensible design. It helps centralize platforms, create private clouds, manage public clouds, and streamline Kubernetes deployments. This tool also enables compliance assurance through simplified authentication, access controls, policies, and security management. By automating application lifecycles, running workflows, and simplifying day-to-day operations, Morpheus helps modernize applications. The platform optimizes cloud costs by inventorying existing resources, right-sizing them, tracking cloud spending, and providing centralized visibility. 3. The Kubernetes Database-as-a-Service Platform Portworx Data Services is a Kubernetes Database-as-a-Service (DBaaS) platform that offers a single solution for deploying, operating, and managing various data services without being locked into a specific vendor. It simplifies the deployment and day-to-day operation of heterogeneous databases, eliminating the need for specialized expertise.
With one click, organizations can deploy enterprise-grade data services with built-in capabilities like backup, restore, high availability, data recovery, security, capacity management, and migration. The platform supports a broad catalog of data services, including SQL Server, MySQL, PostgreSQL, MongoDB, Redis, Elasticsearch, Cassandra, Couchbase, Kafka, Consul, RabbitMQ, and ZooKeeper. Portworx Data Services provides a consistent DBaaS experience on any infrastructure, whether on-premises or in the cloud, enabling seamless migration based on evolving business requirements. 4. DCImanager DCImanager, a platform for managing multivendor IT infrastructure, provides a unified interface to oversee and control all equipment types, including racks, servers, network devices, PDUs, and virtual networks. It is suitable for servers and data centers of any size, including distributed environments. DCImanager eliminates the need for additional tools and associated maintenance costs, allowing users to work seamlessly with equipment from popular vendors. With DCImanager, users can efficiently manage servers remotely, automate maintenance tasks, monitor power consumption, configure network settings, track inventory, visualize racks, and receive timely notifications. With over 16 years of experience, DCImanager is a reliable solution trusted by thousands of companies worldwide, backed by professional support. 5. EasyDCIM EasyDCIM, a solution for cloud-like bare metal server provisioning, is a comprehensive and hassle-free data center administration platform that offers an all-in-one system for managing daily tasks without requiring multiple software tools. It provides mobility, allowing remote management of data centers from any location and device. The system is highly expandable and customizable, allowing users to tailor the functionality to their needs. EasyDCIM excels in automated bare metal and dedicated server provisioning, streamlining the process from ordering to service delivery. It features a standalone system with a fully customizable admin control panel and user portal. The platform includes advanced data center asset lifecycle tracking, automated OS installation, network auto-discovery, and integration with billing solutions. EasyDCIM's modular architecture enables the easy extension and modification of system components. 6. Puppet Puppet, which provides infrastructure automation and compliance at enterprise scale, offers an automation solution that allows businesses to manage and automate complex workflows using reusable blocks of self-healing infrastructure as code. With model-driven and task-based configuration management, organizations can quickly deploy infrastructure to meet their evolving needs at any scale. By automating the entire infrastructure lifecycle, Puppet increases operational efficiency, eliminates silos, reduces response time, and streamlines change management. Puppet's automated policy enforcement ensures continuous compliance and a secure posture, enabling the identification, reporting, and resolution of errors while enforcing the desired state across the infrastructure. Leveraging the vibrant Puppet community, users can benefit from pre-built content and workflows, accelerating their deployment. With deep DevOps and enterprise experience, Puppet is a trusted advisor, assisting the largest enterprise customers in rethinking and redefining their IT management practices.
7. Foreman Foreman is a robust lifecycle management tool designed for system administrators to manage physical and virtual servers efficiently. With Foreman, tasks can be automated, applications can be deployed quickly, and server management becomes proactive. It supports a wide range of providers, enabling hybrid cloud management. The tool includes features such as external node classification, Puppet and Salt configuration monitoring, and comprehensive host monitoring. Its CLI, Hammer, offers easy access to API calls for streamlined data center management. With RBAC and LDAP integration, audits, and a pluggable architecture, Foreman provides a powerful solution for server provisioning, configuration management, and monitoring. Conclusion For HCI, choosing the right tools for management and orchestration is paramount for organizations seeking to optimize their operations and achieve greater efficiency. Businesses can make informed decisions and select tools that align with their specific needs by considering factors such as scalability, automation capabilities, integration, and vendor support. Whether leveraging vendor-provided solutions or opting for third-party tools, the key is ensuring that the chosen tools enable effective management and orchestration of the HCI environment, allowing organizations to unlock the full potential of their infrastructure and drive business success. As HCI continues to gain prominence, selecting the appropriate Hyper-Converged tools for management and orchestration becomes crucial for organizations aiming to streamline operations and maximize the benefits of their infrastructure investment. By carefully evaluating the available options, considering key factors, and aligning with business requirements, organizations can make informed decisions that optimize their HCI environment and enable them to adapt to the evolving needs of their digital infrastructure.
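Several of the tools above (Symcloud, Morpheus, Portworx) ultimately manage Kubernetes-based HCI clusters, and most expose programmatic access alongside their consoles. As a minimal, product-agnostic sketch assuming the official kubernetes Python client and a working kubeconfig (not drawn from any vendor's documentation), the following lists each node's readiness and allocatable capacity, the kind of inventory check an orchestration layer performs:

```python
# pip install kubernetes
from kubernetes import client, config


def node_inventory() -> None:
    """Print readiness and allocatable capacity for every node in the cluster."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        # The "Ready" condition reports whether the kubelet on the node is healthy.
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        alloc = node.status.allocatable or {}
        print(f"{node.metadata.name:30} Ready={ready:7} "
              f"cpu={alloc.get('cpu', '?'):6} memory={alloc.get('memory', '?')}")


if __name__ == "__main__":
    node_inventory()
```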

Read More
IT Systems Management

What Is IaaS? A Data Center in the Cloud Packed with Services

Article | August 8, 2022

Consider IaaS (infrastructure as a service) as a virtual version of your traditional data center. IaaS is a branch of cloud computing technology that offers virtualized storage, servers, and networking wrapped together as a self-service platform. It is highly cost-efficient and makes for easier, faster workload deployment. Although incredibly convenient for business, its value largely depends on what your company needs to use it for. What is IaaS, and How Can It Benefit Your Business? IaaS first rose to popularity in the early 2010s. Since then, it has become the standard abstraction model for many types of workloads. With the rise of the microservices application pattern and the arrival of new technologies like containers and serverless computing, IaaS is still a foundational service, but the field is more crowded than ever. The most common household cloud computing names, AWS (Amazon Web Services), Google Cloud and Microsoft Azure, are all IaaS providers. They all maintain giant data centers around the globe, filled with storage systems, physical servers, and networking equipment sitting under a virtualization layer. Cloud customers access these resources to deploy and run applications in a highly automated manner. Developing a cloud adoption strategy is a vital step forward for modern-day business. This subscription-based cloud computing service offers a remote management solution and reduces your purchase costs at the same time. IaaS also provides key capabilities vital for any company’s future plans, such as big-data analysis: it allows businesses like yours to analyze massive data sets and spot trends, patterns, and associations that a human wouldn’t. Understanding the IaaS Architecture In an IaaS service model, your cloud provider takes over infrastructure components such as traditional on-premises data center hardware and hosts them on the internet. This includes virtual computing, servers, networking hardware, and other infrastructure components, as well as the hypervisor layer. IaaS providers also offer a wide array of services to accompany those infrastructure components: monitoring, detailed billing, security, log access, load balancing, clustering, storage resiliency, backup, replication, and disaster recovery. IaaS services are automated and highly policy-driven, so you can implement all your infrastructure tasks effortlessly. How Does It Work? IaaS customers access their resources through a WAN (wide area network). Leveraging the cloud provider’s services, they install the remaining elements of an application stack. For example, you can log in to the IaaS platform to create VMs (virtual machines), install operating systems on each VM, deploy middleware like databases, create storage buckets for workloads and backups, and install the enterprise workload on a VM. Afterward, you can also use the IaaS provider’s services to track costs, balance network traffic, monitor performance, troubleshoot application-related issues and manage disaster recovery. IaaS Use Cases As IaaS provides general-purpose computing resources, it can be used for almost any kind of use case. IaaS is most often used today for development and testing environments; websites and web apps that interact with customers; and data storage, analytics, and data warehousing workloads. It also supports backup and disaster recovery, especially for on-premises workloads. IaaS is also a good way to set up and run common business software and apps like SAP.
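To make that workflow concrete, here is a minimal sketch against AWS, one of the IaaS providers named above, using the boto3 SDK: it provisions a small virtual machine and a storage bucket, roughly the "create VMs" and "create storage buckets for workloads and backups" steps described above. The AMI ID, key pair and bucket name are hypothetical placeholders, and real use needs credentials, region configuration and error handling:

```python
# pip install boto3 -- assumes AWS credentials and a default region are already configured
import boto3

ec2 = boto3.resource("ec2")
s3 = boto3.client("s3")

# 1. Provision a virtual machine (placeholder AMI and key pair).
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical image ID
    InstanceType="t3.micro",
    KeyName="my-keypair",              # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)

# 2. Create a storage bucket for workloads and backups.
#    Bucket names must be globally unique; regions other than us-east-1
#    also require a CreateBucketConfiguration argument.
s3.create_bucket(Bucket="example-workload-backups-0001")

# From here, the rest of the stack (operating system configuration, middleware
# such as a database, and the enterprise workload itself) is installed on the
# instance, as the article describes.
```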
Real-life Examples GE Healthcare: The well-known medical imaging company GE Healthcare adopted Amazon EC2 from AWS to design the GE Health Cloud. The GE Health Cloud platform empowers its customers to collect, store, access, and process information from different types of medical devices worldwide and to extract value from that data. Coca-Cola: The beverage giant Coca-Cola collaborated with SoftLayer, adopting a pay-as-you-go architecture to manage its CRM system effectively during peak seasons. Final Thoughts Before choosing a provider, you will need to think carefully about services, reliability, and costs. First, thoroughly assess the capabilities of your organization’s IT department and determine how well equipped it is to deal with the ongoing demands of an IaaS implementation. That way, you will also be prepared to choose an alternative provider and move to different infrastructure if you need to.

Read More

Related News

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise’s data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics. To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, “the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production.” “By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in the operationalizing AI models by at least 25%.” “Alluxio empowers the world’s leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward,” said Haoyuan Li, Founder and CEO, Alluxio. Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management. With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-gen version of Alluxio Enterprise Edition, and will continue to be the ideal choice for businesses focused primarily on analytic workloads. Accelerating End-to-End Machine Learning Pipeline Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. Alluxio Enterprise AI helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment to yield quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet-giant scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage.
With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure. Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing massive numbers of small files, typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics. Finally, it supports analytics and full machine learning pipelines - from ingestion to ETL, pre-processing, training and serving. Alluxio Enterprise AI includes the following key features: Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise’s existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference. Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI’s distributed caching feature enables AI engines to read and write data through the high performance Alluxio cache instead of slow data lake storage. Alluxio’s intelligent caching strategies are tailored to the I/O patterns of AI engines – large file sequential access, large file random access, and massive small file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization. Seamless Data Access for AI Workloads Across On-prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to manage AI workloads across diverse infrastructure environments easily. Providing a source of truth of data for the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform. New Distributed System Architecture, Battle-tested At Scale - Alluxio Enterprise AI builds on a new innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation to provide infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio’s proven expertise in distributed systems, this new architecture has addressed the ever-increasing challenges of system scalability, metadata management, high availability, and performance. “Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses,” said Mike Leone, Analyst, Enterprise Strategy Group. 
“Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving.” “We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure,” said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. “Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry.” “Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement,” said Mengyu Hu, Software Engineer in the data platform team, Zhihu. “In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we’ve significantly enhanced model training performance by 3x and deployment by 10x with GPU utilization doubled. We are excited about Alluxio’s Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave.” Deploying Alluxio in Machine Learning Pipelines According to Gartner, data accessibility and data volume/complexity is one of the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to the existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting in the middle of compute and storage, Alluxio can work across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake: Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets will be used in multiple epochs, so the entire training speed is no longer bottlenecked by retrieving from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio. Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to inference clusters with low latency. As a result, downstream AI applications can start inferencing using the most up-to-date models as soon as they are available. Platform Integration with Existing Systems To integrate Alluxio with the existing platform, users can deploy an Alluxio cluster between compute engines and storage systems. On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray. Enterprises can integrate Alluxio with these compute frameworks via the REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems or object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio works both on-premises and in the cloud, in either bare-metal or containerized environments.
Supported cloud platforms include AWS, GCP and Azure Cloud.
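To illustrate the training-side integration described above, here is a minimal PyTorch sketch that reads training data through the Alluxio FUSE mount (/mnt/alluxio_fuse/training_datasets, the path given in the announcement) rather than fetching from S3 directly. The use of image data via torchvision's ImageFolder is an assumption made for the example; any Dataset rooted at the mount path works the same way:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# The FUSE mount exposes the Alluxio cache as a local path, so the data loader
# needs no S3-specific code; reads are served from the cache rather than S3.
DATA_ROOT = "/mnt/alluxio_fuse/training_datasets"

dataset = datasets.ImageFolder(
    DATA_ROOT,
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]),
)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=8, pin_memory=True)

for epoch in range(3):
    for images, labels in loader:
        # The forward/backward pass would go here. Repeated epochs re-read the
        # cached dataset, so training is not bottlenecked by S3 retrieval.
        pass

# After training, model files can be written back to S3 through the same mount,
# e.g. torch.save(model.state_dict(), "/mnt/alluxio_fuse/models/latest.pt")
# (the models path is a hypothetical example).
```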

Read More

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to further the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms. Intel DCM empowers data centers with the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces the total cost of ownership, improves sustainability, and elevates performance benchmarks. “We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of the data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management,” says Sanjoy Maity, CEO at AMI. AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives. About AMI AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.

Read More

Data Storage

CoolIT Systems Partners with Switch Datacenters to Launch Advanced Energy-Efficient Data Centers

PRWeb | October 12, 2023

CoolIT Systems, a global leader in advanced cooling technology, and Switch Datacenters, a leading sustainable data center operator and developer, are thrilled to unveil a strategic partnership that will benefit an industry seeking to improve the sustainability of data centers. Following the recent release of the World Economic Forum's Top 10 Emerging Technologies featuring "Sustainable Computing" as the 9th-ranked emerging technology, the collaboration between Switch Datacenters and CoolIT provides data center space and the technology needed to significantly curtail the energy and water consumption inherent in modern data centers. With a history spanning more than a decade, Switch Datacenters has consistently demonstrated a commitment to environmental responsibility and sustainability. Their latest 45MW AMS6 data center near the Schiphol airport area features an HPC/AI-ready design that uses data center heat to warm adjacent greenhouses. Currently under development, their AMS5s is designed to make a significant contribution to the Amsterdam municipal heat grid with green, CO2-neutral heat. For both data centers, there's a marked preference for liquid cooling because it allows heat extraction at temperatures higher than traditional air cooling, offering enhanced economic value. CoolIT Systems is the industry-leading provider of efficient Direct Liquid Cooling (DLC) and Rear Door Heat Exchangers (RDHx) that enable heat reuse and help customers meet key Environmental, Social, and Governance (ESG) targets. CoolIT DLC technology is available as a factory-installed, warranty-approved option from most major server OEMs. "CoolIT's DLC and RDHx technologies have been instrumental in various data center heat reuse projects for years, with customers reporting at minimum a savings of 10% on energy bills (OPEX), more than 50% on CAPEX spends, and examples of PUE lowered from 1.30 to 1.02," expressed Peggy Burroughs, Director of CoolIT Next. "Our collaborations with most major server OEMs have cultivated an expansive ecosystem for clients aspiring to achieve both business and ESG goals." "CoolIT is the right company to help make our vision a reality at an industrial scale. Both CoolIT and Switch Datacenters have shared the same passion for sustainable innovation for years and truly want to elevate the industry's adoption of liquid cooling. We believe liquid cooling will be the game-changer in the next wave of sustainable data center designs, and CoolIT is one of the very few companies that can lead this upcoming demand, thanks to their long history of innovation, reliability, breadth of portfolio, and capabilities to scale with their numerous IT partners worldwide," says Gregor Snip, CEO of Switch Datacenters. Data centers are projected to account for 8% of global electricity consumption by 2030. Technologies such as Direct Liquid Cooling can significantly reduce data center energy consumption by 25-40% and deliver water savings of 70-97%, depending on local climate and specific implementations. Switch Datacenters is leading the charge in embracing sustainable alternatives for heating by reusing data center-generated heat. With their latest project, Switch Datacenters AMS6, they will revolutionize the way nearby greenhouses are heated by providing high-temperature heat from their data center. This innovative solution will replace traditional fossil fuel-based heating and contribute to a greener future.
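As a back-of-the-envelope check on the PUE figures quoted above (a drop from 1.30 to 1.02), the short calculation below shows what that means for facility power. PUE is total facility power divided by IT power; the 1 MW IT load is an assumed example for illustration, not a figure from the announcement:

```python
def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility power implied by a given IT load and PUE (PUE = total / IT)."""
    return it_load_mw * pue

it_load = 1.0                                # assumed 1 MW of IT load, illustration only
before = facility_power_mw(it_load, 1.30)    # 1.30 MW total
after = facility_power_mw(it_load, 1.02)     # 1.02 MW total

overhead_before = before - it_load           # 0.30 MW of cooling and other overhead
overhead_after = after - it_load             # 0.02 MW

print(f"Total facility power: {before:.2f} MW -> {after:.2f} MW "
      f"({(before - after) / before:.1%} reduction)")
print(f"Non-IT overhead:      {overhead_before:.2f} MW -> {overhead_after:.2f} MW "
      f"({(overhead_before - overhead_after) / overhead_before:.1%} reduction)")
```

For the same IT load, moving from a PUE of 1.30 to 1.02 cuts total facility power by roughly 22% and cuts the non-IT overhead (mostly cooling) by more than 90%.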
By harnessing the power of IT servers to generate green heat for large-scale crop cultivation, Switch Datacenters is driving the transition away from fossil fuels. They strongly advocate for the integration of heat-recapture-enabled data centers in areas with high demand for heat, making it a standard design principle. With the world calling for sustainable IT and data centers, the time is ripe for this much-needed change. With the combined expertise of CoolIT and Switch Datacenters, customers can now harness technologically advanced solutions that not only result in considerable energy and water savings but also contribute significantly to the global drive for reduced environmental impact, aligning with the United Nations Sustainable Development Goals of Affordable and Clean Energy (SDG 7), Industry, Innovation, and Infrastructure (SDG 9), and Climate Action (SDG 13). About CoolIT Systems CoolIT Systems is renowned for its scalable liquid cooling solutions tailored for the world's most challenging computing contexts. In both enterprise data centers and high-performance computing domains, CoolIT collaborates with global OEM server design leaders, formulating efficient and trustworthy liquid cooling solutions. In the desktop enthusiast arena, CoolIT delivers unmatched performance for a diverse range of gaming setups. Their modular Direct Liquid Cooling technology, Rack DLC™, empowers dramatic spikes in rack densities, component efficacy, and power savings. Jointly, CoolIT and its allies are pioneering the large-scale adoption of sophisticated cooling techniques. About Switch Datacenters Switch Datacenters is a Dutch privately-owned data center operator and developer founded in 2010 by Gregor Snip and his brother. Initially established as a private data center for their successful hosting company, the Amsterdam-based company later expanded into a fully-fledged commercial data center operator. It added several highly efficient and environmentally-friendly data center sites to its portfolio, with a current focus on constructing and managing wholesale data centers for large global customers while also providing tailor-made data center services. Switch Datacenters is an ambitious, 100% Dutch player in the Amsterdam data center sector, experiencing rapid growth by continually partnering with leading and globally recognized industry players and customers. The company maintains a steadfast commitment to innovative and sustainable site development. Currently, Switch Datacenters has over 200MW of new sustainable data center capacity in development. This year, it will launch its flagship sustainable data center, AMS4, with major customers having already pre-leased the 15-18MW facility.

Read More

