Unprecedented Change Transforming Data Centers

We are in an era of transformational change in the data center industry. Historically, these dedicated buildings and rooms, originally designed to house mainframe computers, saw little change for more than 40 years and were highly regulated by the US and other governments. As Winston Churchill once said, "There is nothing wrong with change, if it is in the right direction." And change is upon us. Over the past few years, several strong forces have built momentum that directly affects data center design and architecture in 2017 and beyond. The first is cloud computing: internet giants, dissatisfied with the slow deployment, high cost, and limited scalability of legacy data centers, began creating their own designs. The second is the Internet of Things (IoT), which is driving data centers to handle massive amounts of data and is pushing computational power and the delivery of high-bandwidth content closer to the user, at the network edge.

Spotlight

Drupal Association

The Drupal Association is a non-profit organization. It supports the Drupal project and community with funding, infrastructure, and events. Funded by individual and organization members, and by generous sponsors, the Association's mission is to unite a global open source community to build and promote Drupal.

OTHER ARTICLES
Hyper-Converged Infrastructure

As Edge Applications Multiply, OpenInfra Community Delivers StarlingX 5.0, Offering Cloud Infrastructure Stack for 5G, IoT

Article | October 10, 2023

StarlingX—the open source edge computing and IoT cloud platform optimized for low-latency and high-performance applications—is available in its 5.0 release today. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them.

Hyper-Converged Infrastructure, Windows Systems and Network

How IaaS Services Help Drive Digital Transformation of Businesses

Article | July 11, 2023

Without IaaS services, businesses face high upfront costs and slower time-to-market, hindering their growth. Embracing IaaS services while complying with regulatory measures fosters digital transformation.

Contents
1. Introduction
2. Regulatory Requirements
2.1. Adhering to Regulations Before Migration
2.2. Conforming to Standards During Migration
2.3. Complying with Requirements After Migration
3. Role of IaaS in Digital Transformation
3.1. Overview of Digital Transformation in Business
3.2. Benefits of IaaS for Digital Transformation Initiation
4. Key IaaS Services for Digital Transformation
4.1. Compute Services
4.2. Storage Services
4.3. Networking Services
4.4. Security Services
5. Use Cases of IaaS in Digital Transformation
5.1. Cloud Migration
5.2. DevOps and Continuous Integration/Continuous Deployment (CI/CD)
5.3. Big Data Analytics
5.4. Internet of Things
6. Leading Providers of IaaS
6.1. Deft
6.2. Virtuozzo
6.3. DigitalOcean
6.4. Vultr
6.5. Linode
7. Conclusion

1. Introduction

This article highlights infrastructure-as-a-service (IaaS) offerings, which are crucial in driving digital transformation for businesses. By delivering scalable computing resources, reducing IT infrastructure costs, and enabling a greater focus on core competencies, IaaS is helping businesses innovate faster and stay competitive in the rapidly evolving digital landscape. The article also elaborates on the three significant categories of regulation to consider.

As businesses continue to embrace digital transformation, IaaS has emerged as a key enabler for organizations looking to achieve their goals. IaaS allows businesses to quickly and easily scale their computing resources up or down while reducing their IT infrastructure costs. This, in turn, enables businesses to focus on their core competencies, innovate faster, and stay competitive in today's fast-paced digital landscape. In this article, we will explore the ways in which IaaS is driving digital transformation, the various services offered by IaaS providers that are helping businesses achieve their objectives, and the use cases that follow.

2. Regulatory Requirements

During cloud adoption and migration to IaaS, organizations must comply with regulatory requirements before, during, and after migration to the cloud.

2.1. Adhering to Regulations Before Migration

Organizations must identify the relevant regulations that apply to their industry and geographic location. These include:

2.1.1. Data Protection Laws
These laws define how personal and sensitive data should be handled and protected. Organizations must comply with them when collecting, storing, processing, and sharing private and sensitive data. Examples include the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States.

2.1.2. Industry-Specific Regulations
These regulations apply to specific industries such as healthcare, finance, and government, and may define particular security and data protection requirements that organizations must meet. Examples are the Health Insurance Portability and Accountability Act (HIPAA) in the healthcare industry and the Payment Card Industry Data Security Standard (PCI DSS) in the finance industry.

2.1.3. International Laws
These laws apply to organizations operating in multiple countries or transferring data across international borders. They may vary based on the countries involved and define specific data protection and privacy requirements. Examples include the GDPR in the European Union and the Cross-Border Privacy Rules (CBPR) in the Asia-Pacific region.

2.2. Conforming to Standards During Migration

Organizations must ensure that they meet regulatory requirements while transferring data and applications to the cloud. This involves:

2.2.1. Access Management
This refers to controlling who can access data and applications in the cloud. Organizations must ensure only authorized personnel can access sensitive data and specific applications during migration. This can be achieved by implementing access controls such as multi-factor authentication and role-based access control.

2.2.2. Data Encryption
This refers to converting data into code to prevent unauthorized access. During migration, organizations must ensure that sensitive data is encrypted both in transit and at rest. This can be achieved with encryption technologies such as Transport Layer Security (TLS) and the Advanced Encryption Standard (AES); a minimal sketch of the at-rest case follows below.

2.2.3. Data Residency
This refers to the legal requirements around where data can be stored and processed. Organizations must comply with these requirements during migration to avoid potential legal and regulatory consequences. This may involve ensuring data is stored and processed within specific geographic locations or complies with industry-specific regulations.
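Picking up the forward reference in 2.2.2, here is a minimal Python sketch of AES-based at-rest encryption using the widely used cryptography package's AES-GCM primitive. The key handling and the sample record are illustrative assumptions, not a production key-management design:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Generate a 256-bit AES key. In practice the key lives in a KMS/HSM,
    # never in source code; this is illustrative only.
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    def encrypt_record(plaintext: bytes) -> bytes:
        nonce = os.urandom(12)   # AES-GCM requires a unique nonce per message
        return nonce + aesgcm.encrypt(nonce, plaintext, None)

    def decrypt_record(blob: bytes) -> bytes:
        return aesgcm.decrypt(blob[:12], blob[12:], None)

    token = encrypt_record(b"customer record headed for the cloud")
    assert decrypt_record(token) == b"customer record headed for the cloud"

In a real migration, the key would be issued and stored by a key management service, and TLS would protect the same data in transit.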
2.3. Complying with Requirements After Migration

Organizations must continue to meet regulatory requirements through ongoing monitoring and reporting. This includes:

2.3.1. Regular Review and Updating of Security Measures
This is the ongoing process of reviewing and improving the security measures in place to protect data and assets from potential threats. It includes identifying vulnerabilities, updating software and hardware, implementing new security policies and procedures, and training employees on best practices.

2.3.2. Data Protection
This refers to the measures taken to safeguard sensitive and confidential data from unauthorized access, use, or disclosure. Proper data protection includes using encryption, access controls, firewalls, and other security technologies to prevent unauthorized access to the data center, and implementing processes and procedures for securely handling and disposing of data.

2.3.3. Audit and Reporting
This refers to businesses' legal and regulatory obligations to regularly audit and report on their security practices and data protection measures. It includes complying with industry-specific standards and regulations, such as PCI DSS or HIPAA, and conducting internal and external audits to ensure compliance.

3. Role of IaaS in Digital Transformation

The role of IaaS in businesses is to configure, deploy, and manage cloud infrastructure environments or applications through cross-technology administration (virtual networks, operating systems, databases), scripting, monitoring automation execution, and managing incidents with a focus on service restoration.

3.1. Overview of Digital Transformation in Business

IaaS provides a flexible, scalable, and customizable infrastructure that can easily be managed and optimized, allowing organizations to focus on their core business objectives and maximize their productivity and efficiency.

IaaS gives businesses access to virtualized computing resources, such as virtual machines, storage, and networking, which can be provisioned and managed through a web-based interface or API. This allows businesses to quickly deploy and scale their infrastructure without worrying about the underlying hardware. IaaS also enables businesses to focus more on their core competencies: by outsourcing IT infrastructure management to IaaS providers, businesses can concentrate on their core business functions and leave the operation of their IT systems to the experts. In addition, by leveraging the cloud, businesses can reduce their capital investment in buying, deploying, and managing physical servers and storage devices.

A report found that companies that have embraced digital transformation are 23 times more likely to acquire new customers, 6 times more likely to retain existing customers, and 19 times more likely to be profitable. (Source: McKinsey & Company) According to another study, the top benefits of digital transformation for businesses include increased efficiency (43%), better customer satisfaction (41%), and increased profitability (36%). (Source: Accenture)

3.2. Benefits of IaaS for Digital Transformation Initiation

Apart from benefits such as improved agility, robust security, quick scalability, better flexibility, and cost savings, IaaS offers the following:

Predictable Costs: IaaS providers typically offer transparent pricing models, which enable businesses to predict their IT costs more accurately and avoid unexpected expenses.

Enhanced Compliance: IaaS providers often hold compliance certifications, such as SOC 2, HIPAA, and PCI DSS, which can help businesses meet their regulatory compliance requirements more efficiently.

Geographic Flexibility: IaaS enables businesses to deploy their IT infrastructure across different geographic regions, serving new markets with low latency and high availability.

Disaster Recovery: IaaS providers typically have built-in disaster recovery capabilities, allowing businesses to recover quickly from infrastructure failures without significant downtime or data loss.

Increased Innovation: By outsourcing infrastructure management to IaaS providers, businesses can focus on innovation and new product development rather than infrastructure maintenance.

4. Key IaaS Services for Digital Transformation

4.1. Compute Services

Compute services provide the processing power and resources needed to run applications in the cloud, including virtual machines, containers, and serverless computing. Compute services are essential for digital transformation, allowing organizations to scale their applications and infrastructure to meet changing demands. According to a report, the global cloud computing market size is expected to grow from USD 371.4 billion in 2020 to USD 832.1 billion by 2025, at a CAGR of 17.5% during the forecast period (2020-25), driven by factors such as the increasing adoption of multi-cloud strategies and the growing demand for scalable, cost-effective computing. (Source: MarketsandMarkets)
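As an illustration of the API-driven provisioning described in section 3.1, the following sketch creates a single virtual machine through DigitalOcean's public v2 droplet endpoint (DigitalOcean is one of the providers profiled in section 6). The droplet name and the region, size, and image slugs are illustrative assumptions, and the token is read from a hypothetical environment variable:

    import os
    import requests

    # POST /v2/droplets creates a VM; the name, region, size and image slugs
    # below are illustrative examples, not recommendations.
    resp = requests.post(
        "https://api.digitalocean.com/v2/droplets",
        headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
        json={
            "name": "web-01",
            "region": "nyc3",
            "size": "s-1vcpu-1gb",
            "image": "ubuntu-22-04-x64",
        },
        timeout=30,
    )
    resp.raise_for_status()
    print("created droplet", resp.json()["droplet"]["id"])

The same call dressed up in loops and templates is essentially what infrastructure-as-code tools do on a team's behalf.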
4.2. Storage Services

Storage services provide the capacity and durability needed to store and manage data in the cloud, including object storage, block storage, and file storage. Solutions such as cloud storage services are essential for digital transformation, as they allow organizations to store and manage large amounts of data and make it easily accessible to users. According to a report, the global datasphere is expected to grow from 33 zettabytes (ZB) in 2018 to 175 ZB by 2025, at a CAGR of 61%, driven by the increasing use of digital technologies and the growing amount of data generated by connected devices. (Source: IDC)

4.3. Networking Services

Networking services provide the connectivity and performance needed to access and use cloud resources, including virtual networks, load balancers, and content delivery networks. Networking services are essential for digital transformation, allowing organizations to connect their applications and infrastructure across different regions and providers. According to a research report, the global multi-cloud networking market will grow from USD 2.7 billion in 2022 to USD 7.6 billion by 2027, at a compound annual growth rate (CAGR) of 22.5% during the forecast period (2022-27). (Source: MarketsandMarkets)

4.4. Security Services

Cloud security services provide the protection and compliance needed to secure cloud resources and data, including identity and access management (IAM), encryption, and threat detection and response. Security services are essential for digital transformation, as they allow organizations to secure their applications and data against cyber threats and comply with regulatory requirements. The global cloud access security broker market is expected to reach $18 billion by 2028, rising at a CAGR of 17.8% during the forecast period (2022-28). (Source: ReportLinker)

5. Use Cases of IaaS in Digital Transformation

5.1. Cloud Migration

One of the primary use cases for IaaS is cloud migration, where organizations move their existing applications and infrastructure to a cloud platform. This can help organizations reduce their IT costs, improve scalability, and increase flexibility. IaaS providers offer tools and cloud services to make the migration process easier and more efficient. For example, Accenture helped a global manufacturing company migrate its IT infrastructure to the Microsoft Azure IaaS platform. One of the migrations involved moving more than 1,200 virtual machines and 150 TB of data to the cloud. As a result, the company was able to reduce its IT infrastructure costs by 40% and improve scalability and flexibility. (Source: Accenture)
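Bulk data movement like the 150 TB example above is typically scripted against the target cloud's storage API. As a minimal sketch of one transfer step, here is an upload to Amazon S3 using boto3 (shown with S3 rather than Azure purely for illustration; the bucket and file names are hypothetical, and a real migration would use parallel, resumable tooling):

    import boto3

    s3 = boto3.client("s3")  # credentials come from the environment or instance role

    # Hypothetical bucket and object names; real migrations batch thousands
    # of such transfers.
    s3.upload_file(
        Filename="legacy_export.tar.gz",
        Bucket="example-migration-bucket",
        Key="backups/legacy_export.tar.gz",
        ExtraArgs={"ServerSideEncryption": "AES256"},  # encrypt at rest on arrival
    )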
5.2. DevOps and Continuous Integration/Continuous Deployment (CI/CD)

IaaS provides the infrastructure needed to support DevOps and CI/CD processes, allowing organizations to deliver software faster and more reliably. IaaS providers offer tools and services to automate deployment, testing, and monitoring, as well as to manage infrastructure as code. For example, GE Digital used the Amazon Web Services (AWS) IaaS platform to implement DevOps and CI/CD processes for its Predix Industrial Internet of Things (IIoT) platform. As a result, GE Digital reduced its mean acknowledgment time from one day to less than one hour and its mean remediation time from three days to 80 minutes, and moved from zero to 100 percent real-time visibility. (Source: Amazon)

5.3. Big Data Analytics

IaaS provides the processing power and storage needed to support big data analytics, allowing organizations to extract insights from large amounts of data. IaaS providers offer tools and services to manage and process data, as well as to enable real-time analytics and machine learning. For example, Netflix uses the AWS IaaS platform to support its big data analytics needs, processing over one billion events daily with services such as Amazon Kinesis, Amazon S3, and Amazon EMR. As a result, Netflix is able to scale rapidly, operate securely, and meet capacity needs worldwide on the compute, storage, and infrastructure AWS provides. (Source: Amazon)

5.4. The Internet of Things

IaaS provides the infrastructure needed to support IoT devices and applications, allowing organizations to collect and analyze data from connected devices. IaaS providers offer tools and cloud services to manage and secure IoT devices, as well as to enable real-time data processing and analysis. For example, Siemens uses the Microsoft Azure IaaS platform to support its IoT initiatives, relying on Azure services such as Azure IoT Hub, Azure Stream Analytics, and Azure Cosmos DB to collect and process data from over one million IoT devices. This allows Siemens to optimize its industrial processes and improve efficiency and productivity. (Source: Siemens)

6. Leading Providers of IaaS

6.1. Deft

Deft is a trusted provider of managed IT services for SMBs and the Fortune 500. Deft's cloud services offer flexible, scalable, and cost-effective solutions for organizations looking to move their IT infrastructure to the cloud. Customers can choose from a range of cloud options, including public, private, and hybrid clouds, all hosted in Deft's secure data centers worldwide. Deft's cloud experts can also help customers design and implement custom solutions that meet their business requirements.

6.2. Virtuozzo

Virtuozzo is a leading provider of hyperconverged cloud software and services for cloud service providers (CSPs), with the aim of making cloud computing easy, accessible, and affordable for all. The company's offerings include infrastructure-as-a-service built on its production-ready OpenStack cloud platform, a key component of its IaaS portfolio. The platform is designed to reduce costs and improve margins for CSPs by providing a highly efficient and scalable cloud infrastructure.

6.3. DigitalOcean

DigitalOcean is a cloud computing provider offering a range of solutions that simplify infrastructure management for developers and businesses. One of the key benefits of working with DigitalOcean is its simplicity: the company's solutions are designed to be easy to use and accessible to developers of all skill levels, with an intuitive user interface and straightforward pricing plans. This allows businesses to focus on building innovative applications rather than spending time managing their infrastructure.

6.4. Vultr

Vultr is a leading provider of cloud computing solutions designed to simplify infrastructure deployment for developers and businesses. The company's infrastructure is built on the latest technology, with state-of-the-art data centers and advanced networking capabilities. Vultr's cloud platform provides frictionless provisioning of public cloud, storage, and single-tenant bare metal services, allowing businesses to quickly deploy infrastructure wherever it is needed, with fast network speeds and low latency.

6.5. Linode

Linode is a leading cloud computing provider that makes innovation easy, accessible, and affordable for individuals and businesses of all sizes. Linode's cloud infrastructure is open source, making it highly flexible and adaptable, and its services are designed to be simple to use. The company offers virtual private servers (VPS), object storage, load balancing, managed Kubernetes, and more. These solutions are fully scalable and can be customized to meet each customer's specific needs.

7. Conclusion

IaaS services are expected to continue playing a critical role in the digital transformation of businesses, with significant growth anticipated in artificial intelligence and machine learning. With the rise of big data and the increasing importance of data-driven decision-making, IaaS providers will be critical in supplying the scalable computing power required for advanced analytics and machine learning workloads. IaaS is also expected to support the growing demand for edge computing: with the proliferation of IoT devices and the rise of real-time applications, providers will supply the infrastructure and tools organizations need to process and analyze data close to where it is generated. Many organizations have already turned to IaaS to support their digital transformation efforts, leveraging cloud computing services to implement new technologies that enable them to serve customers better, improve operational efficiency, and drive revenue growth. The future of IaaS looks promising, and it will continue to be a critical enabler of digital transformation for businesses of all sizes and industries.

Hyper-Converged Infrastructure

Ensuring Long-Term Reliability of Technology Partners using HCI

Article | September 14, 2023

Building trust through HCI: unveiling strategies to ensure the long-term reliability of technology partnerships, cementing lasting collaborations in a dynamic business landscape through vendor stability.

Contents
1. Introduction
2. How HCI Overcomes Infrastructural Challenges
3. Evaluation Criteria for Enterprise HCI
3.1. Distributed Storage Layer
3.2. Data Security
3.3. Data Reduction
4. Assessing Vendor Stability: Ensuring Long-Term Reliability of Partners
4.1. Vendor Track Record
4.2. Financial Stability
4.3. Customer Base and References
4.4. Product Roadmap and Innovation
4.5. Support and Maintenance
4.6. Partnerships and Ecosystem
4.7. Industry Recognition and Analyst Reports
4.8. Contracts and SLAs
5. Final Takeaway

1. Introduction

When collaborating with a vendor, it is essential to evaluate their financial stability. This ensures that they are able to fulfill their obligations and deliver the promised services or goods. Before making contractual commitments, it is necessary to conduct due diligence to determine a vendor's financial health. This article examines when a vendor's financial viability must be evaluated, why to do so, and how vendor and contract management software can assist businesses.

IT organizations of all sizes face numerous infrastructure difficulties. On one hand, they frequently receive urgent demands from the business to keep the organization agile and proactive while implementing new digital transformation initiatives. On the other, they struggle to keep their budget under control, provision new resources swiftly, and manage increasing complexity while maintaining a reasonable level of efficiency. For many organizations, a cloud-only IT strategy is not a viable option; as a result, there is growing interest in hybrid scenarios that offer the best of both worlds. Yet in combining cloud and traditional IT infrastructures, there is a real danger of creating silos, heading in the wrong direction, and further complicating the overall infrastructure, thereby introducing inefficiencies.

2. How HCI Overcomes Infrastructural Challenges

Hyper-converged infrastructure (HCI) surpasses conventional infrastructure in simplicity and adaptability. HCI enables organizations to conceal the complexity of their IT infrastructure while reaping the benefits of a cloud-like environment; it simplifies operations and facilitates the migration of on-premises data and applications to the cloud.

HCI is a software-defined solution that abstracts and organizes CPU, memory, networking, and storage devices as resource pools, typically utilizing commodity x86-based hardware and virtualization software. It enables the administrator to rapidly combine and provision these resources as virtual machines and, more recently, as independent storage resources such as network-attached storage (NAS) filers and object stores. Management operations are also simplified, increasing infrastructure productivity while reducing the number of operators and system administrators needed per managed virtual machine.

The HCI market and its solutions can be categorized into three groups:

Enterprise Solutions: These have an extensive feature set, high scalability, core-to-cloud integrations, and tools that extend beyond traditional virtualization platform management and up the application stack.

Small/Medium Enterprise Solutions: Comparable to the previous category, but simplified and more affordable. The emphasis remains on simplifying the IT infrastructure for virtualized environments, with limited core-to-cloud integrations and a limited ecosystem of solutions.

Vertical Solutions: Designed for particular use cases or vertical markets, these are highly competitive in edge-cloud or edge-core deployments but typically have a limited ecosystem of solutions. They incorporate open-source hypervisors, such as KVM, to provide end-to-end support at lower cost. They are typically not very scalable, but they are efficient from a resource-consumption standpoint.

3. Evaluation Criteria for Enterprise HCI

3.1. Distributed Storage Layer

The distributed storage layer provides the primary data storage service for virtual machines and is a crucial component of every HCI solution. Depending on the exposed protocol, it is typically presented as a virtual network-attached storage (NAS) or storage area network (SAN) and contains all of the data. There are three distributed storage layer approaches for HCI:

Virtual storage appliance (VSA): A virtual machine administered by the same hypervisor as the other virtual machines in the node. A VSA is more flexible and can typically support multiple hypervisors, but this method may result in increased latency.

Integrated within the hypervisor or the operating system (OS): The storage layer is an extension of the hypervisor and does not require the components of the preceding approach (a VM and guest OS). The tight integration boosts overall performance, enhances workload telemetry, and fully exploits hypervisor characteristics, but the storage layer is not portable.

Specialized storage nodes: The distributed storage layer is composed of specialized nodes to achieve optimal performance consistency and scalability for both internal and external storage consumption. This strategy is typically more expensive than the alternatives for smaller configurations.

3.2. Data Security

Currently, all vendors offer sophisticated data protection against multiple failures, such as full-node, single-component, and multiple-component issues. Distributed erasure coding safeguards information while balancing performance and data footprint efficiency. This balance is made possible by modern CPUs with sophisticated instruction sets, new hardware such as NVMe and storage-class memory (SCM) devices, and data path optimizations.
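To make the erasure-coding idea concrete, here is a toy Python sketch of single parity, the simplest erasure code (RAID-5 style). Production HCI systems use more general codes, such as Reed-Solomon, that survive multiple simultaneous failures; this is an illustration of the principle, not any vendor's implementation:

    def xor_parity(chunks: list[bytes]) -> bytes:
        """Parity chunk = byte-wise XOR of all data chunks (RAID-5 style)."""
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                parity[i] ^= byte
        return bytes(parity)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # three equal-size data chunks
    parity = xor_parity(data)

    # Lose any single chunk; XOR of the survivors plus parity rebuilds it.
    rebuilt = xor_parity([data[0], data[2], parity])
    assert rebuilt == data[1]

The appeal over plain replication is the footprint: here three chunks are protected by one extra chunk rather than three full copies.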
The evolution of storage technologies has also played a pivotal role in enhancing data protection strategies. The introduction of high-capacity solid-state drives (SSDs) and advancements in storage virtualization have further strengthened the ability to withstand failures and ensure uninterrupted data availability. These innovations, combined with the relentless pursuit of redundancy and fault tolerance, have elevated the resilience of modern data storage systems.

Furthermore, for data protection and security, compliance with rules, regulations, and laws is paramount. Governments and regulatory bodies across the globe have established stringent frameworks to safeguard sensitive information and ensure privacy. Adherence to laws such as the General Data Protection Regulation (GDPR) in Europe, the Health Insurance Portability and Accountability Act (HIPAA) in the United States, and various industry-specific regulations is non-negotiable. Organizations must fortify their data against technical vulnerabilities and align their practices with legal requirements to prevent costly fines, legal repercussions, and reputational damage.

3.3. Data Reduction

Optimization of the data footprint is a crucial aspect of hyper-converged infrastructure. Deduplication, compression, and other techniques, such as thin provisioning, can significantly improve capacity utilization in virtualized environments, particularly for virtual desktop infrastructure (VDI) use cases. Moreover, to optimize rack space utilization and achieve server balance, the number of storage devices that can be deployed on a single HCI node is restricted.
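As a sketch of how block-level deduplication delivers those capacity savings, here is a toy content-addressed store. It assumes fixed-size blocks and an in-memory index, whereas production systems typically use content-defined chunking and persistent indexes:

    import hashlib

    BLOCK = 4096                      # fixed-size blocks; real systems often
    store: dict[str, bytes] = {}      # use variable-size, content-defined chunks

    def dedup_write(data: bytes) -> list[str]:
        """Store each unique block once; return the hash 'recipe' for the file."""
        recipe = []
        for off in range(0, len(data), BLOCK):
            block = data[off:off + BLOCK]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)   # duplicate blocks cost no extra space
            recipe.append(digest)
        return recipe

    def dedup_read(recipe: list[str]) -> bytes:
        return b"".join(store[d] for d in recipe)

    golden_image = b"\x00" * 4 * BLOCK        # highly redundant, like cloned VDI disks
    recipe = dedup_write(golden_image)
    assert dedup_read(recipe) == golden_image
    assert len(store) == 1                    # four blocks stored as one

Identical blocks, which are common across cloned VDI desktops, are stored exactly once, which is why the technique pays off most in virtualized environments.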
4. Assessing Vendor Stability: Ensuring Long-Term Reliability of Partners

Here are some key factors that contribute to ensuring long-term reliability:

4.1. Vendor Track Record
Assessing the vendor's track record and reputation in the industry is crucial. Look for established vendors with a history of delivering reliable products and services. A vendor that has operated in the market for a significant period and has a strong customer base indicates stability.

4.2. Financial Stability
Consider factors such as the vendor's profitability, revenue growth, and ability to invest in research and development. Financial stability ensures the vendor's ability to support its products and services over the long term.

4.3. Customer Base and References
Look at the size and diversity of the vendor's customer base. A large and satisfied customer base indicates that the vendor's solutions have been adopted successfully by organizations. Request references from existing customers to gain insight into their experience with the vendor's stability and support.

4.4. Product Roadmap and Innovation
Assess the vendor's product roadmap and commitment to ongoing innovation. A vendor that actively invests in research and development, regularly updates its products, and introduces new features and enhancements demonstrates a long-term commitment to its solution's reliability and advancement.

4.5. Support and Maintenance
Evaluate the vendor's support and maintenance services. Look for comprehensive support offerings, including timely bug fixes, security patches, and firmware updates. Understand the vendor's service-level agreements (SLAs), response times, and availability of technical support to ensure they can address any issues that may arise.

4.6. Partnerships and Ecosystem
Consider the vendor's partnerships and ecosystem. A strong network of partners, including technology alliances and integrations with other industry-leading vendors, can contribute to long-term reliability. Partnerships demonstrate collaboration, interoperability, and a wider ecosystem that enhances the vendor's solution.

4.7. Industry Recognition and Analyst Reports
Assess the vendor's industry recognition and performance in analyst reports. Look for accolades, awards, and positive evaluations from reputable industry analysts. These assessments provide independent validation of the vendor's stability and the reliability of its HCI solution.

4.8. Contracts and SLAs
Review the vendor's contracts, service-level agreements, and warranties carefully. Ensure they provide appropriate guarantees for support, maintenance, and ongoing product updates throughout the expected lifecycle of the HCI solution.

5. Final Takeaway

Evaluating a vendor's financial stability before entering into contractual commitments is crucial to ensuring they can fulfill their obligations. Hyper-converged infrastructure overcomes infrastructural challenges by simplifying operations, enabling cloud-like environments, and facilitating data and application migration. The HCI market offers enterprise, small/medium enterprise, and vertical solutions, each catering to different needs and requirements. Evaluating enterprise HCI solutions requires careful consideration of the criteria above: each storage approach has its own trade-offs in flexibility, performance, and cost, and the data reduction techniques described can significantly shrink the data footprint, particularly in use cases like VDI, while maintaining performance and efficiency. By weighing these evaluation criteria, organizations can make decisions that align with their specific storage, security, and efficiency requirements, choose a vendor with a strong foundation of reliability, stability, and long-term commitment, and thereby ensure the durability of their HCI infrastructure while minimizing the risks associated with vendor instability.

Application Infrastructure

Infrastructure Lifecycle Management Best Practices

Article | June 10, 2022

As your organization scales, inevitably, so too will its infrastructure needs. From physical spaces to personnel, devices to applications, physical security to cybersecurity, all these resources will continue to grow to meet the changing needs of your business operations. To manage your changing infrastructure throughout its entire lifecycle, your organization needs to implement a robust infrastructure lifecycle management program designed to meet your particular business needs.

In particular, IT asset lifecycle management (ITALM) is becoming increasingly important for organizations across industries. As threats to organizations' cybersecurity become more sophisticated and successful cyberattacks become more common, your business needs, now more than ever, to implement an infrastructure lifecycle management strategy that emphasizes the security of your IT infrastructure. In this article, we'll explain why infrastructure management is important. Then we'll outline steps your organization can take to design and implement a program, and provide some of the most important infrastructure lifecycle management best practices for your business.

What Is the Purpose of Infrastructure Lifecycle Management?

No matter the size or industry of your organization, infrastructure lifecycle management is a critical process. The purpose of an infrastructure lifecycle management program is to protect your business and its infrastructure assets against risk. Today, protecting your organization and its customer data from malicious actors means taking a more active approach to cybersecurity. Simply put, recovering from a cyberattack is more difficult and expensive than protecting yourself from one. If 2020 and 2021 have taught us anything about cybersecurity, it's that cybercrime is on the rise and not slowing down anytime soon. As risks to cybersecurity continue to grow in number and in harm, infrastructure lifecycle management and IT asset management are becoming almost unavoidable.

In addition to protecting your organization from potential cyberattacks, infrastructure lifecycle management makes for a more efficient enterprise, delivers a better end-user experience for consumers, and identifies where your organization needs to expand its infrastructure. Other benefits of a comprehensive infrastructure lifecycle management program include: more accurate planning; centralized and cost-effective procurement; streamlined provisioning of technology to users; more efficient maintenance; and secure and timely disposal.

A robust infrastructure lifecycle management program helps your organization keep track of all the assets running on (or attached to) your corporate networks. That allows you to catalog, identify, and track these assets wherever they are, physically and digitally. While this might seem simple enough, infrastructure lifecycle management, and particularly ITALM, has become more complex as the diversity of IT assets has increased. Today, organizations and their IT teams are responsible for managing hardware, software, cloud infrastructure, SaaS, and connected-device or IoT assets. As the number of IT assets under management has soared for most organizations in the past decade, a comprehensive and holistic approach to infrastructure lifecycle management has never been more important.

Generally speaking, there are four major stages of asset lifecycle management. Your organization's infrastructure lifecycle management program should include specific policies and processes for each of the following steps:

Planning. This is arguably the most important step for businesses and should be conducted prior to purchasing any assets. During this stage, you'll need to identify what asset types are required and in what number; compile and verify the requirements for each asset; and evaluate those assets to make sure they meet your service needs.

Acquisition and procurement. Use this stage to identify areas for purchase consolidation with the most cost-effective vendors, and to negotiate warranties and bulk purchases of SaaS and cloud infrastructure assets. This is where a lack of insight into actual asset usage can result in overpaying for assets that aren't really necessary. For this reason, timely and accurate asset data is crucial for effective acquisition and procurement.

Maintenance, upgrades and repair. All assets eventually require maintenance, upgrades, and repairs. A holistic approach to infrastructure lifecycle management means tracking these needs and consolidating them into a single platform across all asset types.

Disposal. An outdated or broken asset needs to be disposed of properly, especially if it contains sensitive information. For hardware, assets that are older than a few years are often obsolete, and assets that fall out of warranty are typically no longer worth maintaining. Disposal of cloud infrastructure assets is also critical because data stored in the cloud can stay there forever.

Now that we've outlined the purpose and basic stages of infrastructure lifecycle management (sketched in code below), it's time to look at the steps your organization can take to implement it.
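To make the four-stage model concrete, here is a minimal Python sketch of an asset record that enforces the lifecycle transitions described above. The stage names and transition rules are an illustrative reading of the stages, not a standard ITALM API:

    from enum import Enum, auto

    class Stage(Enum):
        PLANNING = auto()
        ACQUISITION = auto()
        MAINTENANCE = auto()
        DISPOSAL = auto()

    # Legal transitions mirror the four stages described above.
    NEXT = {
        Stage.PLANNING: {Stage.ACQUISITION},
        Stage.ACQUISITION: {Stage.MAINTENANCE},
        Stage.MAINTENANCE: {Stage.MAINTENANCE, Stage.DISPOSAL},  # repeated upkeep
        Stage.DISPOSAL: set(),                                   # terminal
    }

    class Asset:
        def __init__(self, tag: str) -> None:
            self.tag, self.stage = tag, Stage.PLANNING

        def advance(self, to: Stage) -> None:
            if to not in NEXT[self.stage]:
                raise ValueError(f"{self.tag}: {self.stage.name} -> {to.name} not allowed")
            self.stage = to

    laptop = Asset("LAP-0042")   # hypothetical asset tag
    for step in (Stage.ACQUISITION, Stage.MAINTENANCE, Stage.DISPOSAL):
        laptop.advance(step)

Modeling the stages explicitly, even this simply, is what lets an inventory system flag assets that skip steps, such as hardware disposed of without a maintenance record.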


Related News

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise's data infrastructure. Alluxio Enterprise AI brings performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, "the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production," and "by 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%."

"Alluxio empowers the world's leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward," said Haoyuan Li, Founder and CEO, Alluxio. "Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management."

With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-gen version of Alluxio Enterprise Edition and will continue to be the ideal choice for businesses focused primarily on analytic workloads.

Accelerating the End-to-End Machine Learning Pipeline

Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. It helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment to yield quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet-giant scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage.
With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure. Alluxio Enterprise AI has a distributed system architecture with decentralized metadata that eliminates bottlenecks when accessing the massive numbers of small files typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics caches. Finally, it supports analytics and full machine learning pipelines - from ingestion to ETL, pre-processing, training and serving.

Alluxio Enterprise AI includes the following key features:

Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements for model training and serving on an enterprise's existing data lakes. The enhanced set of APIs for model training can deliver up to 20x the performance of commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference.

Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI's distributed caching enables AI engines to read and write data through the high-performance Alluxio cache instead of slow data lake storage. Alluxio's intelligent caching strategies are tailored to the I/O patterns of AI engines: large-file sequential access, large-file random access, and massive small-file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.

Seamless Data Access for AI Workloads Across On-Prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to easily manage AI workloads across diverse infrastructure environments. By providing a source of truth for data in the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographic locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.

New Distributed System Architecture, Battle-Tested at Scale - Alluxio Enterprise AI builds on an innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation for infinite scale for AI workloads, allowing an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio's proven expertise in distributed systems, the new architecture addresses the ever-increasing challenges of system scalability, metadata management, high availability, and performance.

"Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses," said Mike Leone, Analyst, Enterprise Strategy Group. "Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving."

"We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure," said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. "Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry."

"Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement," said Mengyu Hu, Software Engineer in the data platform team, Zhihu. "In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we've significantly enhanced model training performance by 3x and deployment by 10x, with GPU utilization doubled. We are excited about Alluxio's Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave."

Deploying Alluxio in Machine Learning Pipelines

According to Gartner, data accessibility and data volume/complexity is one of the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio works across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path, /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets are used across multiple epochs, so training speed is no longer bottlenecked by retrieval from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.

Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches the latest model files from S3 and serves them to the inference cluster with low latency. As a result, downstream AI applications can start inferencing with the most up-to-date models as soon as they are available.

Platform Integration with Existing Systems

To integrate Alluxio with an existing platform, users can deploy an Alluxio cluster between the compute engines and the storage systems. On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray. Enterprises can integrate Alluxio with these compute frameworks via the REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems and object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio works both on-premises and in the cloud, in bare-metal or containerized environments.
Supported cloud platforms include AWS, GCP and Azure Cloud.
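To illustrate the training-side flow described above, here is a minimal PyTorch sketch that loads training data from the FUSE-mounted path named in the article (/mnt/alluxio_fuse/training_datasets). The .pt file layout is an illustrative assumption, and nothing here is Alluxio-specific code, since the cache is deliberately transparent to the framework:

    from pathlib import Path

    import torch
    from torch.utils.data import DataLoader, Dataset

    class CachedPathDataset(Dataset):
        """Reads samples from a local mount; the FUSE layer underneath
        serves cache hits instead of fetching every file from S3."""

        def __init__(self, root: str = "/mnt/alluxio_fuse/training_datasets"):
            self.files = sorted(Path(root).glob("*.pt"))   # assumed tensor files

        def __len__(self) -> int:
            return len(self.files)

        def __getitem__(self, idx: int):
            # An ordinary local read; caching is transparent to PyTorch.
            return torch.load(self.files[idx])

    loader = DataLoader(CachedPathDataset(), batch_size=32, num_workers=4)

    for epoch in range(3):          # later epochs hit the warmed cache
        for batch in loader:
            pass                    # forward/backward pass would go here

Because the data loader sees only a local path, swapping slow object storage for a cache layer requires no changes to the training loop itself.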


Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to further lead the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms.

Intel DCM empowers data centers with the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces the total cost of ownership, improves sustainability, and elevates performance benchmarks.

"We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management," says Sanjoy Maity, CEO at AMI. "AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives."

About AMI

AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.


Data Storage

CoolIT Systems Partners with Switch Datacenters to Launch Advanced Energy-Efficient Data Centers

PRWeb | October 12, 2023

CoolIT Systems, a global leader in advanced cooling technology, and Switch Datacenters, a leading sustainable data center operator and developer, are thrilled to unveil a strategic partnership that will benefit an industry seeking to improve the sustainability of data centers. Following the recent release of the World Economic Forum's Top 10 Emerging Technologies, which featured "Sustainable Computing" as the 9th-ranked emerging technology, the collaboration between Switch Datacenters and CoolIT provides data center space and the technology needed to significantly curtail the energy and water consumption inherent in modern data centers.

With a history spanning more than a decade, Switch Datacenters has consistently demonstrated a commitment to environmental responsibility and sustainability. Its latest 45MW AMS6 data center near the Schiphol airport area features an HPC/AI-ready design that uses data center heat to warm adjacent greenhouses. Currently under development, its AMS5s is designed to make a significant contribution to the Amsterdam municipal heat grid with green, CO2-neutral heat. For both data centers, there is a marked preference for liquid cooling because it allows heat extraction at temperatures higher than traditional air cooling, offering enhanced economic value.

CoolIT Systems is the industry-leading provider of efficient Direct Liquid Cooling (DLC) and Rear Door Heat Exchangers (RDHx) that enable heat reuse and help customers meet key Environmental, Social, and Governance (ESG) targets. CoolIT DLC technology is featured as a factory-installed, warranty-approved option from most major server OEMs.

"CoolIT's DLC and RDHx technologies have been instrumental in various data center heat reuse projects for years, with customers reporting at minimum a savings of 10% on energy bills (OPEX), more than 50% on CAPEX spends, and examples of PUE lowered from 1.30 to 1.02," expressed Peggy Burroughs, Director of CoolIT Next. "Our collaborations with most major server OEMs have cultivated an expansive ecosystem for clients aspiring to achieve both business and ESG goals."

"CoolIT is the right company to help make our vision a reality at an industrial scale. Both CoolIT and Switch Datacenters have shared the same passion for sustainable innovation for years and truly want to elevate the industry's adoption of liquid cooling. We believe liquid cooling will be the game-changer in the next wave of sustainable data center designs, and CoolIT is one of the very few companies that can lead this upcoming demand, thanks to their long history of innovation, reliability, breadth of portfolio, and capabilities to scale with their numerous IT partners worldwide," says Gregor Snip, CEO of Switch Datacenters.

Data centers are projected to account for 8% of global electricity consumption by 2030. Technologies such as Direct Liquid Cooling can reduce data center energy consumption by 25-40% and deliver water savings of 70-97%, depending on local climate and specific implementations.

Switch Datacenters is leading the charge in embracing sustainable alternatives for heating by reusing data center-generated heat. With its latest project, Switch Datacenters AMS6, it will revolutionize the way nearby greenhouses are heated by providing high-temperature heat from the data center. This innovative solution will replace traditional fossil fuel-based heating and contribute to a greener future. By harnessing the power of IT servers to generate green heat for large-scale crop cultivation, Switch Datacenters is driving the transition away from fossil fuels. The company strongly advocates for the integration of heat-recapture-enabled data centers in areas with high demand for heat, making it a standard design principle. With the world calling for sustainable IT and data centers, the time is ripe for this much-needed change.

With the combined expertise of CoolIT and Switch Datacenters, customers can now harness technologically advanced solutions that deliver considerable energy and water savings and contribute significantly to the global drive for reduced environmental impact, aligning with the United Nations Sustainable Development Goals of Affordable and Clean Energy (SDG 7); Industry, Innovation, and Infrastructure (SDG 9); and Climate Action (SDG 13).

About CoolIT Systems

CoolIT Systems is renowned for its scalable liquid cooling solutions tailored for the world's most challenging computing contexts. In both enterprise data centers and high-performance computing domains, CoolIT collaborates with global OEM server design leaders, formulating efficient and trustworthy liquid cooling solutions. In the desktop enthusiast arena, CoolIT delivers unmatched performance for a diverse range of gaming setups. Its modular Direct Liquid Cooling technology, Rack DLC™, empowers dramatic increases in rack densities, component efficacy, and power savings. Jointly, CoolIT and its allies are pioneering the large-scale adoption of sophisticated cooling techniques.

About Switch Datacenters

Switch Datacenters is a Dutch privately-owned data center operator and developer founded in 2010 by Gregor Snip and his brother. Initially established as a private data center for their successful hosting company, the Amsterdam-based company later expanded into a fully-fledged commercial data center operator. It added several highly efficient and environmentally friendly data center sites to its portfolio, with a current focus on constructing and managing wholesale data centers for large global customers while also providing tailor-made data center services. Switch Datacenters is an ambitious, 100% Dutch player in the Amsterdam data center sector, experiencing rapid growth by continually partnering with leading and globally recognized industry players and customers. The company maintains a steadfast commitment to innovative and sustainable site development. Currently, Switch Datacenters has over 200MW of new sustainable data center capacity in development. This year, it will launch its flagship sustainable data center, AMS4, with major customers having already pre-leased the 15-18MW facility.

Read More

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands that Artificial Intelligence (AI) and machine learning (ML) workloads place on an enterprise's data infrastructure. Alluxio Enterprise AI brings performance, data accessibility, scalability, and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications such as generative AI, computer vision, natural language processing, large language models, and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are racing to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation, data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, "the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production," and "by 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%."

"Alluxio empowers the world's leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward," said Haoyuan Li, Founder and CEO, Alluxio. "Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management."

With this announcement, Alluxio expands from a one-product portfolio to two product offerings: Alluxio Enterprise AI and Alluxio Enterprise Data, catering to the distinct needs of AI and analytics. Alluxio Enterprise AI is a new product that builds on the years of distributed-systems experience accumulated in previous Alluxio Enterprise Editions, combined with a new architecture optimized for AI/ML workloads. Alluxio Enterprise Data is the next generation of Alluxio Enterprise Edition and will remain the ideal choice for businesses focused primarily on analytics workloads.

Accelerating the End-to-End Machine Learning Pipeline

Alluxio Enterprise AI makes enterprise AI infrastructure performant, seamless, scalable, and cost-effective on existing data lakes. It helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment that yields quick business results; seamless data access for workloads across regions and clouds; infinite scale, battle-tested at internet-giant scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage.
With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared with commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure.

Alluxio Enterprise AI has a distributed system architecture with decentralized metadata that eliminates bottlenecks when accessing the massive numbers of small files typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. The distributed cache is tailored to AI workload I/O patterns, unlike caches built for traditional analytics. Finally, it supports both analytics and full machine learning pipelines, from ingestion to ETL, pre-processing, training, and serving.

Alluxio Enterprise AI includes the following key features:

Epic Performance for Model Training and Model Serving: Alluxio Enterprise AI offers significant performance improvements for model training and serving on an enterprise's existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration when serving models from offline training clusters for online inference.

Intelligent Distributed Caching Tailored to the I/O Patterns of AI Workloads: Alluxio Enterprise AI's distributed caching enables AI engines to read and write data through the high-performance Alluxio cache instead of slow data lake storage. Alluxio's intelligent caching strategies are tailored to the I/O patterns of AI engines: large-file sequential access, large-file random access, and massive small-file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.

Seamless Data Access for AI Workloads Across On-prem and Cloud Environments: Alluxio Enterprise AI provides a single pane of glass for enterprises to easily manage AI workloads across diverse infrastructure environments. By providing a single source of truth for data across the machine learning pipeline, the product removes the bottleneck of data lake silos in large enterprises. Sharing data between business units and geographic locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.

New Distributed System Architecture, Battle-Tested at Scale: Alluxio Enterprise AI builds on a new decentralized architecture, DORA (Decentralized Object Repository Architecture), which sets the foundation for infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage such as Amazon S3. Leveraging Alluxio's proven expertise in distributed systems, this architecture addresses the ever-increasing challenges of system scalability, metadata management, high availability, and performance.

"Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses," said Mike Leone, Analyst, Enterprise Strategy Group. "Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving."

"We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure," said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. "Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry."

"Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement," said Mengyu Hu, Software Engineer in the data platform team, Zhihu. "In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we've significantly enhanced model training performance by 3x and deployment by 10x, with GPU utilization doubled. We are excited about Alluxio's Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave."

Deploying Alluxio in Machine Learning Pipelines

According to Gartner, data accessibility and data volume/complexity rank among the top three barriers to implementing AI techniques within an organization. Alluxio Enterprise AI can be added to existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio spans model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user trains models, the PyTorch data loader loads datasets from a virtual local path such as /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets are reused across multiple epochs, so training speed is no longer bottlenecked by retrieval from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio. (A minimal code sketch of this loading path follows at the end of this article.)

Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches the latest model files from S3 and serves them to the inference cluster with low latency. As a result, downstream AI applications can start inferencing with the most up-to-date models as soon as they are available.

Platform Integration with Existing Systems

To integrate Alluxio with an existing platform, users deploy an Alluxio cluster between the compute engines and the storage systems. On the compute side, Alluxio integrates seamlessly with popular machine learning frameworks such as PyTorch, Apache Spark, TensorFlow, and Ray; enterprises can connect these frameworks to Alluxio via its REST, POSIX, or S3 APIs. On the storage side, Alluxio connects with all types of file systems and object storage in any location, whether on-premises, in the cloud, or both; supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio runs both on-premises and in the cloud, in bare-metal or containerized environments; supported cloud platforms include AWS, GCP, and Azure.
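To illustrate the model-training path described above, here is a minimal PyTorch sketch. The FUSE mount point comes from the article; the ImageFolder layout, transforms, batch size, and epoch count are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: a PyTorch data loader reading training data through
# an Alluxio FUSE mount instead of directly from S3. The mount point
# is the one named in the article; the dataset layout and parameters
# below are hypothetical.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# POSIX path served by Alluxio; reads hit the distributed cache,
# and only cache misses fall through to the S3 data lake.
ALLUXIO_PATH = "/mnt/alluxio_fuse/training_datasets"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# From PyTorch's point of view this is just a local directory.
dataset = datasets.ImageFolder(ALLUXIO_PATH, transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=8)

for epoch in range(3):
    for images, labels in loader:
        # Repeated epochs are served from the Alluxio cache, so
        # training is no longer bottlenecked on S3 round trips.
        pass  # the forward/backward pass would go here
```

Checkpoints written back to the same mount propagate to S3 through Alluxio, matching the write path described above.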
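The integration section also mentions an S3 API as one way for compute frameworks to reach Alluxio. A hedged sketch with boto3 follows; the endpoint URL, port, credentials, and bucket/key names are assumptions for illustration and should be taken from your actual Alluxio deployment, not from this announcement.

```python
# Sketch: accessing data through an S3-compatible Alluxio endpoint
# with boto3. Endpoint, credentials, and bucket/key names are
# illustrative assumptions; consult your Alluxio deployment docs
# for the real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://alluxio-proxy.example.internal:39999/api/v1/s3",
    aws_access_key_id="placeholder",      # assumed; auth depends on setup
    aws_secret_access_key="placeholder",
)

# Reads are served from the Alluxio cache when possible, falling
# back to the underlying data lake storage on a miss.
obj = s3.get_object(Bucket="training-data", Key="features/part-0000.parquet")
payload = obj["Body"].read()
print(f"fetched {len(payload)} bytes through Alluxio")
```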

Read More

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM), taking over the development, sales, and support of DCM under an agreement with Intel. This strategic transition positions AMI to lead the continued innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms.

Intel DCM gives data centers the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces total cost of ownership, improves sustainability, and elevates performance benchmarks.

"We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management," says Sanjoy Maity, CEO at AMI. AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives.

About AMI

AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms, from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.

Read More

Events