DATA CENTER MODERNIZATION: THE TECH THAT SPEEDS TRANSFORMATION

As a Hewlett Packard Enterprise Platinum Partner, the data center specialists at Works Computing can help you strategically leverage these technologies to achieve your unique IT and business objectives. Rely on our deep pre- and post-sales technical expertise to speed transformation and maximize results over the long run.

Spotlight

Quantexa

Quantexa empowers organisations to drive better decisions from their data. Using the latest advancements in big data and AI, Quantexa uncovers hidden customer connections and behaviours to solve major challenges in financial crime, customer insight and data analytics.

OTHER ARTICLES
Hyper-Converged Infrastructure, Windows Systems and Network

Why Should Your Business Utilize Infrastructure as a Service (IaaS)?

Article | July 11, 2023

Flexible data access, enhanced disaster recovery, and a reduced infrastructure staff burden are some of the biggest reasons businesses migrate to innovative and reliable cloud technologies. Infrastructure as a Service (IaaS) is one such cloud computing model that has simplified life for enterprises and developers by reducing their infrastructure burden. IaaS gives you access to servers, networking, storage, and virtualization features, and it is fast becoming one of the biggest trends in cloud computing. According to Technavio's latest report, the IaaS market is projected to grow by USD 141.77 billion, registering a CAGR of 28.2% from 2021 to 2026.

"So many systems end up as a big dreaded ball of mud (which is totally preventable) when designing an enforceable architecture model." - Alexander von Zitzewitz, CEO, hello2morrow Inc.

But how can IaaS technology help you grow and advance your business? Here are some key advantages of switching to IaaS:

Better Performance

One of the best-known benefits of IaaS is achieving a higher level of performance from your infrastructure. Rather than worrying about the latest hardware, your in-house IT team can focus on advancing your business goals and objectives through technology. The service level agreement (SLA) with your IaaS provider ensures you get the best performance from the provider's infrastructure and holds the provider accountable for continuous upgrades and the best possible service for your business.

Decreased CapEx

With IaaS, you can choose the cloud service provider that suits you. A cloud provider typically runs a more reliable, robust, and redundant infrastructure than would be feasible or financially realistic in an office environment. This means you save on the purchase, maintenance, and operation of hardware.
It also decreases your overall IT-related capital expenditure (CapEx).

Increased Flexibility

IaaS increases your scalability and flexibility exponentially: your business can scale up and down on demand. For example, say your business is running a short-term campaign to drive more traffic to your website. IaaS will provision additional resources automatically so your infrastructure can handle the sudden traffic boost.

Scale Up Your Business

IaaS also gives a growing business the flexibility it needs from its IT infrastructure. If you are considering opening an office in a new location, you don't need to spend extra on new hardware; you can connect to your existing infrastructure virtually. This means you don't need to continually invest in additional infrastructure as the business expands.

Managed-Task Virtualization

Because IaaS supports the virtualization of management tasks, your IT team is free to concentrate on other, more thought-intensive work. This drives efficiency and helps boost ROI.

Disaster Recovery

During disasters such as earthquakes or floods, IaaS keeps business operations running smoothly. Disaster Recovery as a Service (DRaaS) stores and replicates data across multiple data centers in different geographic locations, so even if a disaster causes significant damage to one data center, your IaaS provider can quickly restore the data from another.

Conclusion

IaaS lets your business use the cloud to achieve its IT goals. It is flexible, scalable, reliable, and cost-effective, and it provides seamless access that maximizes business continuity. Choose a reliable IaaS cloud provider who can deliver a variety of cloud infrastructure solutions.
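The on-demand scaling described above can be sketched in a few lines. This is an illustrative simulation only: the `ScalingPolicy` class, its thresholds, and the proportional-scaling rule are invented for the example, not any specific IaaS provider's API.

```python
import math


class ScalingPolicy:
    """Toy model of IaaS-style elastic scaling: adjust instance count
    so average utilization stays near a target, within fixed bounds."""

    def __init__(self, min_instances=2, max_instances=10, target_load=0.7):
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.target_load = target_load  # desired average utilization (0..1)

    def desired_count(self, current_count, avg_load):
        # Proportional rule: total work = current_count * avg_load;
        # divide by target load to find how many instances are needed.
        needed = math.ceil(current_count * avg_load / self.target_load)
        return max(self.min_instances, min(self.max_instances, needed))


policy = ScalingPolicy()
print(policy.desired_count(current_count=4, avg_load=0.9))  # traffic spike -> 6
print(policy.desired_count(current_count=4, avg_load=0.2))  # quiet period -> 2
```

The point of the sketch is the elasticity itself: capacity follows demand in both directions, which is what makes the short-term-campaign scenario affordable compared to buying hardware for peak load.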

Read More
Hyper-Converged Infrastructure, IT Systems Management

Infrastructure as code vs. platform as code

Article | September 14, 2023

With infrastructure as code (IaC), you write declarative instructions about the compute, storage, and network requirements of your infrastructure and execute them. How does this compare to platform as code (PaC), and what problems did these two concepts develop in response to? In its simplest form, the tech stack of any application has three layers: the infra layer, containing bare-metal instances, virtual machines, networking, firewalls, security, and so on; the platform layer, with the OS, runtime environment, development tools, and the like; and the application layer, which, of course, contains your application code and data. A typical operations team works on the provisioning, monitoring, and management of the infra and platform layers, in addition to enabling the deployment of code.
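The declarative model behind IaC can be illustrated with a toy reconciler: you state the desired resources, and the tool computes the actions needed to make reality match, much like a plan step in IaC tools. The resource names and the dict-based "spec" format here are invented for the example.

```python
# Desired state: what the declarative instructions ask for.
desired = {
    "vm-web-1":  {"type": "vm", "size": "medium"},
    "vm-web-2":  {"type": "vm", "size": "medium"},
    "fw-public": {"type": "firewall", "allow": [80, 443]},
}

# Actual state: what currently exists in the infra layer.
actual = {
    "vm-web-1": {"type": "vm", "size": "small"},  # drifted from the spec
    "vm-old":   {"type": "vm", "size": "large"},  # no longer declared
}


def plan(desired, actual):
    """Return the create/update/delete actions that reconcile actual with desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions


print(plan(desired, actual))
# [('update', 'vm-web-1'), ('create', 'vm-web-2'), ('create', 'fw-public'), ('delete', 'vm-old')]
```

Because the instructions describe an end state rather than a sequence of steps, running the same plan twice is harmless: once actual matches desired, the plan is empty. PaC applies the same idea one layer up, to the OS, runtime, and tooling.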

Read More
Application Storage, Data Storage

Mastering Infrastructure: Hyperconvergence Courses and Certifications

Article | July 12, 2023

Unlock courses and HCI certifications focused on hyperconvergence, providing individuals with the knowledge and skills necessary to design, deploy, and manage these advanced infrastructure solutions.

Hyperconvergence has become essential for professionals and beginners seeking to stay ahead in their careers and grow in the infrastructure sector, and hyperconvergence courses and certifications offer valuable opportunities to enhance knowledge and skills in this transformative technology. This article explores the significance of hyperconvergence courses and certifications and how they enable professionals to become experts in designing, implementing, and managing hyperconverged infrastructure solutions.

1. Cloud Infrastructure and Services Version 4.0 (DCA-CIS)

The Dell Technologies Proven Professional Cloud Infrastructure and Services Associate (DCA-CIS) certification is an associate-level certification designed to give participants a comprehensive understanding of the technologies, processes, and mechanisms required to build cloud infrastructure. By following a cloud computing reference model, participants can make informed decisions when building cloud infrastructure and prepare for advanced topics in cloud solutions. The certification involves completing the recommended training and passing the DEA-2TT4 exam. Exam retake policies are in place, and exam security measures ensure the integrity and validity of certifications. Candidates receive provisional exam score reports immediately, with final scores available in their CertTracker accounts after a statistical analysis. This certification equips professionals with the expertise to excel in cloud infrastructure and services.

2. DCS-SA: Systems Administrator, VxRail

The Specialist - Systems Administrator, VxRail Version 2.0 (DCS-SA) certification is for individuals who want to validate their expertise in administering VxRail systems effectively.
VxRail clusters provide hyper-converged solutions that simplify IT operations and reduce business operational costs. This HCI certification introduces participants to the VxRail product, including the hardware and software components within a VxRail cluster. Key topics include cluster management, provisioning, monitoring, expansion, REST API usage, and standard maintenance activities. To attain this certification, individuals must hold a prescribed associate-level certification, complete the recommended training options, and pass the DES-6332 exam. This certification empowers professionals to administer VxRail systems and optimize data center operations efficiently.

3. Certified and Supported SAP HANA Hardware

The Certified and Supported SAP HANA Hardware program provides a directory of hardware options powered by SAP HANA, accelerating implementation processes. The directory includes certified appliances, enterprise storage solutions, IaaS platforms, hyper-converged infrastructure (HCI) solutions, supported Intel systems, and supported Power systems. These hardware options have been tested by hardware partners in collaboration with SAP LinuxLab and are supported for SAP HANA certification. Valid certifications are required at purchase, and support is provided until the end of maintenance. SAP SE delivers the directory for informational purposes, and improvements or corrections may be made at its discretion.

4. Google Cloud Fundamentals: Core Infrastructure

Google Cloud Fundamentals: Core Infrastructure is a comprehensive course introducing essential concepts and terminology for working with Google Cloud. It provides an overview of Google Cloud's computing and storage services as well as its resource and policy management tools.
Through videos and hands-on labs, learners gain the knowledge and skills to interact with Google Cloud services; choose and deploy applications using App Engine, Google Kubernetes Engine, and Compute Engine; and utilize storage options such as Cloud Storage, Cloud SQL, Cloud Bigtable, and Firestore. This beginner-level course is part of multiple specialization and professional certificate programs, including networking in Google Cloud and developing applications with Google Cloud. Upon completion, learners receive a shareable certificate. The course is offered by Google Cloud, a trusted provider of innovative cloud technologies designed for security, reliability, and scalability.

5. Infrastructure and Application Modernization with Google Cloud

The 'Modernizing Legacy Systems and Infrastructure with Google Cloud' course addresses the challenges faced by businesses with outdated IT infrastructure and explores how cloud technology can enable modernization. It covers the computing options available in the cloud and their benefits, as well as application modernization and API management, highlighting Google Cloud solutions such as Compute Engine, App Engine, and Apigee that assist in system development and management. By completing this beginner-level course, learners will understand the benefits of infrastructure and app modernization using cloud technology; the distinctions between virtual machines, containers, and Kubernetes; and how Google Cloud solutions support app modernization and simplify API management. Upon completion, learners receive a shareable certificate.

6. Oracle Cloud Infrastructure Foundations

The 'OCI Foundations Course' is designed to prepare learners for the Oracle Cloud Infrastructure Foundations Associate certification.
The course introduces the OCI platform and covers core topics such as compute, storage, networking, identity, databases, and security. By completing it, learners gain knowledge and skills in architecting solutions, understanding autonomous database concepts, and working with networking and observability tools. The course is offered by Oracle, a leading provider of integrated application suites and secure cloud infrastructure. Learners have access to flexible deadlines and receive a shareable certificate upon completion. Oracle's partnership with Coursera aims to increase access to cloud skills training and empower individuals and enterprises to gain expertise in Oracle Cloud solutions.

7. Designing Cisco Data Center Infrastructure (DCID)

The 'Designing Cisco Data Center Infrastructure (DCID) v7.0' training helps learners master the design and deployment options for Cisco data center solutions. The course covers network, compute, virtualization, storage area networks, automation, and security, and participants learn design practices for the Cisco Unified Computing System, network management technologies, and various Cisco data center solutions. The training combines theoretical content with design-oriented case studies and activities. By completing it, learners can earn 40 Continuing Education credits and prepare for the 300-610 Designing Cisco Data Center Infrastructure (DCID) exam. The certification equips professionals to design scalable and reliable data center environments using Cisco technologies, making them eligible for professional-level roles in enterprise-class data centers. Prerequisites include foundational knowledge of data center networking, storage, virtualization, and Cisco UCS.
Final Thoughts

Mastering infrastructure in the realm of hyperconvergence is essential for IT professionals seeking to excel in their careers and drive successful deployments. Courses and HCI certifications focused on hyperconvergence provide the knowledge and skills necessary to design, deploy, and manage these infrastructure modernization solutions. By acquiring these credentials, professionals can validate their expertise, stay current with industry best practices, and position themselves as valuable assets in the rapidly evolving landscape of IT infrastructure. By investing in these educational resources, individuals can enhance their skill set, broaden their career prospects, and contribute to the successful implementation and management of hyperconverged infrastructure solutions.

Read More
Application Infrastructure, Application Storage

Upcoming Events for Driving Business Success in the HCI Industry

Article | June 23, 2023

Hyperconverged infrastructure (HCI) is a critical part of the ever-evolving infrastructure industry, helping ensure efficient and secure operation through the continuous monitoring, analysis, and management of virtualized compute, storage, and networking resources. To stay ahead of the latest advancements in this industry, executives and managers should attend the conferences scheduled for 2023 below. These events provide a crucial platform for professionals to gain in-depth insights into emerging trends, innovative technologies, and best practices.

Basics and Operation of Hyperconverged Infrastructure
November 21, 2023 | Online

Attend this training session on hyper-converged infrastructure led by Dr. Markus Ermes. The session will address key hyper-converged infrastructure questions: appliances or software, central data centers or smaller locations, established manufacturers or challengers. Participants will gain insights into software-defined storage, the critical properties of storage technologies, changes in backup and recovery scenarios, and considerations for data center network planning, enabling them to evaluate the merits and drawbacks of HCI in a nuanced and informed manner. The session accommodates participants of all skill levels, from beginners to advanced practitioners, and will equip attendees to navigate the complexities of hyper-converged infrastructures effectively.

TechMentor Redmond 2023
July 17-21, 2023 | Washington (US)

TechMentor Redmond 2023 is an anticipated technology conference that brings together IT professionals, industry experts, and thought leaders for an immersive learning experience.
Set in Redmond, Washington, in the heart of the tech industry, it offers a unique opportunity for participants to engage with leading experts from Microsoft and other prominent technology companies. Sessions will cover a wide range of topics, including cloud computing, cybersecurity, artificial intelligence, machine learning, DevOps, data analytics, IoT, and more. With a focus on practical implementation and real-world scenarios, TechMentor Redmond will equip attendees with the skills and knowledge needed to tackle the challenges of today's IT landscape. One of the highlights is the opportunity to learn directly from industry experts and Microsoft MVPs.

Advancing Data Center Construction: West 2023
July 17-19, 2023 | Washington (US)

West 2023: Advancing Data Center Construction brings together clients, contractors, and designers from Washington to Arizona to discuss industry issues. The event provides a rare opportunity to collaborate on project delivery issues caused by tougher restrictions, difficult geographical conditions, supply chain interruptions, and workforce shortages, with a focus on hyper-converged infrastructure solutions. Keynote speakers include David McCall, Michelle Stuart, Chad Labucki, and Micah Piippo. Attendees can learn from over 25 hours of world-class content, more than 12 hours of networking, and industry leaders such as Google, Yondr, Clayco, Microsoft, and McKinstry. Participants will learn to overcome supply chain interruption, streamline approval processes, and enhance efficiency through case studies of breakthrough technology and energy-efficient, sustainable data centers.

CIO Cloud Summit
July 17, 2023 | Online

One of the leading hyperconverged infrastructure events, this distinguished summit caters to CIOs and IT executives strategically evaluating cloud computing solutions for their organizations.
With a dedicated focus on crucial cloud computing issues, including data governance, security, private versus public cloud, and data availability, the summit offers a platform for in-depth discussions and knowledge sharing. CDM Media Summits is renowned for its ability to bring together industry leaders, analysts, and solution providers, and attendees can anticipate a curated agenda of interactive sessions and analyst-led presentations in an exclusive environment with an average attendance of 50 C-level executives. The event hosts renowned speakers such as Chris Mattmann, Steve Rubinow, Jason Spencer, and Robert DeVito. It is an exceptional opportunity for networking, debating, and gaining insights from the latest industry research.

Gartner IT Infrastructure, Operations & Cloud Strategies Conference
December 5-7, 2023 | Las Vegas (Nevada)

The Gartner IT Infrastructure, Operations & Cloud Strategies Conference 2023 brings together global technology leaders to explore the latest trends, gain objective insights, and exchange best practices. Attendees will have access to nine tracks and seven spotlight tracks, each covering specific focus areas to help I&O leaders create effective pathways for the future while networking with peers. Topics include innovation, cloud value acceleration, engineering platforms, enhancing operations, evolving at the edge, embedding security, developing skills, transforming leadership and the organization, optimizing costs and value, and more. The event features guest speakers such as Daniel Betts, Arun Chandrasekaran, and Hassan Ennaciri, along with Gartner Magic Quadrant sessions, solution provider sessions, workshops, and facilitated sessions, providing attendees with valuable inspiration, insights, and collaborative problem-solving opportunities.
stackconf 2023
September 13-14, 2023 | Berlin (Germany)

One of the best HCI events, stackconf is a prominent open-source infrastructure conference covering CI/CD, containers, hybrid environments, and cloud solutions. It addresses the challenges businesses face in a rapidly evolving digital landscape where virtual infrastructures and multi-channel platforms have become the norm. The conference aims to bridge the gap between development, testing, and operations, offering insights and solutions from multiple perspectives. Attendees can stay informed about current and future trends, think creatively, and explore innovative approaches to optimizing their IT infrastructure. With a diverse international audience of IT infrastructure specialists, CTOs, CIOs, SREs, system administrators, IT architects, and DevOps engineers, the event stands out for speaker talks that offer practical insights instead of vendor pitches and for its emphasis on fostering meaningful discussion and collaboration among participants.

DatacenterDynamics (DCD) Connect | London
October 2-3, 2023 | London (United Kingdom)

DCD Connect | London is a highly anticipated event that brings together leaders and professionals from the data center and cloud infrastructure communities. It features an exhibition floor where leading technology vendors and service providers showcase their latest products, services, and solutions, allowing attendees to explore and evaluate the latest advancements in hardware, software, infrastructure, cooling systems, power management, and other critical aspects of data center operations. Beyond the knowledge sharing, the event promotes thought-provoking talks by Dame Dawn Childs, Val Walsh, Michael Winterson, and others.
Attendees can earn continuing professional development (CPD) credits by attending educational sessions and workshops, enhancing their industry credentials and demonstrating a commitment to ongoing learning. The event also provides a platform for career growth, with potential job openings and networking connections within the data center and cloud infrastructure sectors.

Key Takeaway

These conferences bring together industry experts, IT professionals, engineers, and decision-makers in the network industry. Attendees can expect comprehensive programs of keynote presentations, panel discussions, case studies, and interactive workshops covering a wide range of topics, including the latest trends in data center design, energy efficiency, modular construction, and emerging technologies. Participating in these events also offers ample networking opportunities, allowing attendees to connect with peers, share experiences, and establish valuable business connections. Leaders can stay at the forefront of the evolving data center landscape and gain a competitive edge in their respective organizations.

Read More


Related News

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of artificial intelligence (AI) and machine learning (ML) workloads on an enterprise's data infrastructure. Alluxio Enterprise AI brings performance, data accessibility, scalability, and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications such as generative AI, computer vision, natural language processing, large language models, and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, "the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production," and "by 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%."

"Alluxio empowers the world's leading organizations with the most modern data and AI platforms, and today we take another significant leap forward," said Haoyuan Li, Founder and CEO, Alluxio. "Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data.
The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access, and ease of management."

With this announcement, Alluxio expands from a one-product portfolio to two product offerings, Alluxio Enterprise AI and Alluxio Enterprise Data, catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture optimized for AI/ML workloads. Alluxio Enterprise Data is the next-generation version of Alluxio Enterprise Edition and will continue to be the ideal choice for businesses focused primarily on analytic workloads.

Accelerating the End-to-End Machine Learning Pipeline

Alluxio Enterprise AI makes enterprise AI infrastructure performant, seamless, scalable, and cost-effective on existing data lakes. It helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment that yields quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet-giant scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage. With Alluxio Enterprise AI, enterprises can expect up to 20x faster training compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure.

Alluxio Enterprise AI has a distributed system architecture with decentralized metadata that eliminates bottlenecks when accessing the massive numbers of small files typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity.
The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics caches. Finally, the platform supports analytics and full machine learning pipelines, from ingestion to ETL, pre-processing, training, and serving.

Alluxio Enterprise AI includes the following key features:

Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements for model training and serving on an enterprise's existing data lakes. The enhanced set of APIs for model training can deliver up to 20x the performance of commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration when serving models from offline training clusters for online inference.

Intelligent Distributed Caching Tailored to the I/O Patterns of AI Workloads - Alluxio Enterprise AI's distributed caching feature enables AI engines to read and write data through the high-performance Alluxio cache instead of slow data lake storage. Alluxio's intelligent caching strategies are tailored to the I/O patterns of AI engines: large-file sequential access, large-file random access, and massive small-file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.

Seamless Data Access for AI Workloads Across On-Prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to easily manage AI workloads across diverse infrastructure environments. By providing a source of truth for data in the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.
New Distributed System Architecture, Battle-Tested at Scale - Alluxio Enterprise AI builds on an innovative new decentralized architecture, DORA (Decentralized Object Repository Architecture), which sets the foundation for infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio's proven expertise in distributed systems, the new architecture addresses ever-increasing challenges in system scalability, metadata management, high availability, and performance.

"Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses," said Mike Leone, Analyst, Enterprise Strategy Group. "Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving."

"We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure," said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. "Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry."

"Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement," said Mengyu Hu, Software Engineer in the data platform team, Zhihu. "In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we've significantly enhanced model training performance by 3x and deployment by 10x, with GPU utilization doubled. We are excited about Alluxio's Enterprise AI and its new DORA architecture supporting access to massive small files.
This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave."

Deploying Alluxio in Machine Learning Pipelines

According to Gartner, data accessibility and data volume/complexity is one of the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to an existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio works across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user trains models, the PyTorch data loader loads datasets from a virtual local path such as /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets are used across multiple epochs, so training speed is no longer bottlenecked by retrieval from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.

Model Serving: The latest trained models need to be deployed to the inference cluster, where multiple TorchServe instances read the model files concurrently from S3. Alluxio caches the latest model files from S3 and serves them to the inference cluster with low latency. As a result, downstream AI applications can start inferencing with the most up-to-date models as soon as they are available.

Platform Integration with Existing Systems

To integrate Alluxio with an existing platform, users deploy an Alluxio cluster between the compute engines and the storage systems.
On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray. Enterprises can integrate Alluxio with these compute frameworks via REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems or object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio works on both on-premise and cloud, either bare-metal or containerized environments. Supported cloud platforms include AWS, GCP and Azure Cloud.

Read More

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to further drive the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms. Intel DCM empowers data centers with the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces the total cost of ownership, improves sustainability, and elevates performance benchmarks. "We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management," says Sanjoy Maity, CEO at AMI. AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives. About AMI AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.

Read More

Data Storage

CoolIT Systems Partners with Switch Datacenters to Launch Advanced Energy-Efficient Data Centers

PRWeb | October 12, 2023

CoolIT Systems, a global leader in advanced cooling technology, and Switch Datacenters, a leading sustainable data center operator and developer, are thrilled to unveil a strategic partnership that will benefit an industry seeking to improve the sustainability of data centers. Following the recent release of the World Economic Forum's Top 10 Emerging Technologies featuring "Sustainable Computing" as the 9th-ranked emerging technology, the collaboration between Switch Datacenters and CoolIT provides data center space and the technology needed to significantly curtail the energy and water consumption inherent in modern data centers. With a history spanning more than a decade, Switch Datacenters has consistently demonstrated a commitment to environmental responsibility and sustainability. Their latest 45MW AMS6 data center near the Schiphol airport area features an HPC/AI-ready design that uses data center heat to warm adjacent greenhouses. Currently under development, their AMS5s is designed to make a significant contribution to the Amsterdam municipal heat grid with green, CO2-neutral heat. For both data centers, there's a marked preference for liquid cooling because it allows heat extraction at temperatures higher than traditional air cooling, offering enhanced economic value. CoolIT Systems is the industry-leading provider of efficient Direct Liquid Cooling (DLC) and Rear Door Heat Exchangers (RDHx) that enable heat reuse and help customers meet key Environmental, Social, and Governance (ESG) targets. CoolIT DLC technology is featured as a factory-installed, warranty-approved option from most major server OEMs. "CoolIT's DLC and RDHx technologies have been instrumental in various data center heat reuse projects for years, with customers reporting savings of at least 10% on energy bills (OPEX), more than 50% on CAPEX, and examples of PUE lowered from 1.30 to 1.02," said Peggy Burroughs, Director of CoolIT Next. 
"Our collaborations with most major server OEMs have cultivated an expansive ecosystem for clients aspiring to achieve both business and ESG goals." CoolIT is the right company to help make our vision a reality at an industrial scale. Both CoolIT and Switch Datacenters have shared the same passion for sustainable innovation for years and truly want to elevate the industry's adoption of liquid cooling. We believe liquid cooling will be the game-changer in the next wave of sustainable data center designs, and CoolIT is one of the very few companies that can lead this upcoming demand, thanks to their long history of innovation, reliability, breadth of portfolio, and capabilities to scale with their numerous IT partners worldwide, says Gregor Snip, CEO of Switch Datacenters. Data centers are projected to account for 8% of the global electricity consumption by 20301. Technologies such as Direct Liquid Cooling can significantly reduce data center energy consumption by 25-40% and deliver water savings of 70-97%, depending on local climate and specific implementations2. Switch Datacenters is leading the charge in embracing sustainable alternatives for heating by reusing data center-generated heat. With their latest project, Switch Datacenters AMS6, they will revolutionize the way nearby greenhouses are heated by providing high-temperature heat from their data center. This innovative solution will replace traditional fossil fuel-based heating and contribute to a greener future. By harnessing the power of IT servers to generate green heat for large-scale crop cultivation, Switch Datacenters is driving the transition away from fossil fuels. They strongly advocate for the integration of heat-recapture-enabled data centers in areas with high demand for heat, making it a standard design principle. With the world calling for sustainable IT and data centers, the time is ripe for this much-needed change. 
With the combined expertise of CoolIT and Switch Datacenters, customers can now harness technologically advanced solutions that not only result in considerable energy and water savings but also contribute significantly to the global drive for reduced environmental impact, aligning with the United Nations Sustainable Development Goals of Affordable and Clean Energy (SDG 7), Industry, Innovation, and Infrastructure (SDG 9), and Climate Action (SDG 13). About CoolIT Systems CoolIT Systems is renowned for its scalable liquid cooling solutions tailored for the world's most challenging computing contexts. In both enterprise data centers and high-performance computing domains, CoolIT collaborates with global OEM server design leaders, formulating efficient and trustworthy liquid cooling solutions. In the desktop enthusiast arena, CoolIT delivers unmatched performance for a diverse range of gaming setups. Their modular Direct Liquid Cooling technology, Rack DLC™, empowers dramatic spikes in rack densities, component efficacy, and power savings. Jointly, CoolIT and its allies are pioneering the large-scale adoption of sophisticated cooling techniques. About Switch Datacenters Switch Datacenters is a Dutch privately-owned data center operator and developer founded in 2010 by Gregor Snip and his brother. Initially established as a private data center for their successful hosting company, the Amsterdam-based company later expanded into a fully-fledged commercial data center operator. It added several highly efficient and environmentally-friendly data center sites to its portfolio, with a current focus on constructing and managing wholesale data centers for large global customers while also providing tailor-made data center services. Switch Datacenters is an ambitious, 100% Dutch player in the Amsterdam data center sector, experiencing rapid growth by continually partnering with leading and globally recognized industry players and customers. 
The company maintains a steadfast commitment to innovative and sustainable site development. Currently, Switch Datacenters has over 200MW of new sustainable data center capacity in development. This year, it will launch its flagship sustainable data center, AMS4, with major customers having already pre-leased the 15-18MW facility.

Read More

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise's data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency for enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics. To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, "the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production." "By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%." "Alluxio empowers the world's leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward," said Haoyuan Li, Founder and CEO, Alluxio. "Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management." With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-gen version of Alluxio Enterprise Edition, and will continue to be the ideal choice for businesses focused primarily on analytics workloads. Accelerating the End-to-End Machine Learning Pipeline Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. Alluxio Enterprise AI helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment that yields quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet-giant scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage. With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure. Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing massive numbers of small files, typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. 
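The announcement does not detail how the decentralized metadata layer works internally, but systems that avoid a central metadata server commonly place objects with consistent or rendezvous hashing, so that every node can compute an object's owner independently. A hedged illustration of that general idea (this is a textbook technique, not DORA's actual algorithm; all names are hypothetical):

```python
import hashlib

def owner_worker(object_key: str, workers: list[str]) -> str:
    """Rendezvous (highest-random-weight) hashing: each node independently
    computes the same owner for a key, with no central lookup table."""
    def weight(worker: str) -> str:
        # Hash the (worker, key) pair; the worker with the highest digest wins.
        return hashlib.sha256(f"{worker}|{object_key}".encode()).hexdigest()
    return max(workers, key=weight)
```

A useful property of this scheme is that removing a worker only remaps the keys that worker owned; every other key keeps its owner, which keeps cache churn low as a cluster scales.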
The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics. Finally, it supports analytics and full machine learning pipelines - from ingestion to ETL, pre-processing, training and serving. Alluxio Enterprise AI includes the following key features: Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise’s existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference. Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI’s distributed caching feature enables AI engines to read and write data through the high performance Alluxio cache instead of slow data lake storage. Alluxio’s intelligent caching strategies are tailored to the I/O patterns of AI engines – large file sequential access, large file random access, and massive small file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization. Seamless Data Access for AI Workloads Across On-prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to manage AI workloads across diverse infrastructure environments easily. Providing a source of truth of data for the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform. 
New Distributed System Architecture, Battle-tested At Scale - Alluxio Enterprise AI builds on a new innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation to provide infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio’s proven expertise in distributed systems, this new architecture has addressed the ever-increasing challenges of system scalability, metadata management, high availability, and performance. “Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses,” said Mike Leone, Analyst, Enterprise Strategy Group. “Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving.” “We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure,” said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. “Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry.” “Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement,” said Mengyu Hu, Software Engineer in the data platform team, Zhihu. “In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we’ve significantly enhanced model training performance by 3x and deployment by 10x with GPU utilization doubled. We are excited about Alluxio’s Enterprise AI and its new DORA architecture supporting access to massive small files. 
This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave." Deploying Alluxio in Machine Learning Pipelines According to Gartner, data accessibility and data volume/complexity are among the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio can work across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake: Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader reads from the Alluxio cache. During training, the cached datasets are reused across multiple epochs, so overall training speed is no longer bottlenecked by retrieval from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio. Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to inference clusters with low latency. As a result, downstream AI applications can start inferencing using the most up-to-date models as soon as they are available. Platform Integration with Existing Systems To integrate Alluxio with an existing platform, users can deploy an Alluxio cluster between compute engines and storage systems. 
On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray. Enterprises can integrate Alluxio with these compute frameworks via the REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems and object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio runs both on-premises and in the cloud, in either bare-metal or containerized environments. Supported cloud platforms include AWS, GCP and Azure.

Read More


Events