Modernizing data infrastructure: Hortonworks and Syncsort partner to unlock legacy system data

It is a year and a half into the partnership between Hortonworks Inc. and Syncsort Inc., which centers on modernizing legacy data stores and infrastructure. The results so far? According to the chief technology officers of both companies, the synergy between Syncsort’s high-performance data integration and Hortonworks’ modern data architecture solutions is creating a seamless experience for their shared customers.

Spotlight

Telarus

Telarus is a leading Australian provider of business-grade solutions, incorporating managed VPNs (IP VPNs), private cloud computing, and managed security for multi-site customers. Telarus has built partnerships with SME and enterprise businesses since 2001 through its Partner and Account Management teams, with a focus on dedicated local service and support.

OTHER ARTICLES
Application Infrastructure, Application Storage

A Look at Trends in IT Infrastructure and Operations for 2022

Article | July 19, 2023

We’re all hoping that 2022 will finally end the unprecedented challenges brought by the global pandemic and that things will return to a new normal. For IT infrastructure and operations organizations, the trends we see today will likely continue, but a few areas will need special attention from IT leaders over the next 12 to 18 months. In no particular order, they include:

The New Edge

Edge computing is now at the forefront. Two factors make it business-critical: the prevalence of remote and hybrid workplace models, in which employees continue working remotely from home or a branch office, and the resulting increase in adoption of cloud-based business and communications services. With the rising focus on remote and hybrid work, Zoom, Microsoft Teams, and Google Meet have continued to expand their solutions and add new features. As people move back to the office, they will expect the same experience they had at home. In a typical enterprise setup, branch office traffic is backhauled all the way to the data center. This architecture severely impacts the user experience, so enterprises will have to review their network architectures and build a roadmap that accommodates local egress at branch offices and headquarters. That is where the edge can help, bringing services closer to the workforce. It also brings an opportunity to optimize costs by migrating from expensive multi-protocol label switching (MPLS) or private circuits to relatively low-cost direct internet circuits, a shift addressed by the secure access service edge (SASE) architectures now offered by many established vendors. I anticipate that the SASE components related to software-defined wide area networking (SD-WAN), local egress, and virtual private networks (VPNs) will drive a lot of conversation this year.

Holistic Cloud Strategy

Cloud adoption will continue to grow, and along with software as a service (SaaS) there will be renewed interest in infrastructure as a service (IaaS), albeit for specific workloads. For a medium-to-large enterprise with a substantial development environment, it is still cost-prohibitive to move everything to the cloud, so any cloud strategy needs to be holistic and forward-looking to maximize business value. Another pandemic-induced shift is from virtual machines (VMs) as the unit of compute consumption to containers as the unit of software consumption. For on-premises or private cloud deployment architectures that require sustainable management, organizations will have to orchestrate containers and deploy efficient container security and management tools.

Automation

Now that cloud adoption, migration, and edge computing architectures are becoming more prevalent, legacy methods of infrastructure provisioning and management will not scale. By increasing infrastructure automation, enterprises can optimize costs and become more flexible and efficient, but only if they succeed at developing new skills. Achieving "infrastructure as code" requires a shift in perspective on infrastructure automation, toward developing and sustaining the skills and roles that improve efficiency and agility across on-premises, cloud, and edge infrastructures, as the sketch below illustrates.
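To make the infrastructure-as-code idea concrete, here is a minimal, self-contained sketch of the declarative pattern behind it: desired state lives as version-controlled data, and a reconciler drives the environment toward it. All resource names and the provision/deprovision calls are hypothetical stand-ins, not any particular vendor's API.

```python
"""Minimal illustration of declarative infrastructure as code:
desired state is data (normally a YAML/JSON file in git), and a
reconciler converges the environment toward it. All names are
hypothetical stand-ins for real provider API calls."""

# Desired state: what should exist.
desired = {
    "web-01": {"type": "vm", "size": "small"},
    "web-02": {"type": "vm", "size": "small"},
    "cache-01": {"type": "vm", "size": "medium"},
}

# Actual state: what currently exists (normally queried from an API).
actual = {
    "web-01": {"type": "vm", "size": "small"},
    "db-legacy": {"type": "vm", "size": "large"},
}

def provision(name, spec):
    print(f"creating {name}: {spec}")   # stand-in for a real API call

def deprovision(name):
    print(f"deleting {name}")           # stand-in for a real API call

def reconcile(desired, actual):
    # Create or update anything declared but missing or drifted;
    # remove anything that exists but is no longer declared.
    for name, spec in desired.items():
        if actual.get(name) != spec:
            provision(name, spec)
    for name in set(actual) - set(desired):
        deprovision(name)

reconcile(desired, actual)
```

Real tools such as Terraform and Ansible implement this same compare-and-converge loop against actual provider APIs; the point of the sketch is only the declarative shape of the workflow.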
Defining the roles of designers and architects to support automation is equally essential, to ensure that automation works as expected, avoids significant errors, and complements other technologies.

AIOps (Artificial Intelligence for IT Operations)

Complementing the automation trend, implementing AIOps to automate IT operations processes such as event correlation, anomaly detection, and causality determination will also be important (a minimal anomaly-detection sketch appears at the end of this piece). AIOps eliminates data silos in IT by bringing all types of operational data under one roof, where machine learning (ML) methods can generate insights for responsive enhancements and corrections. AIOps can also help with probable-cause analytics by focusing attention on the most likely source of a problem. The concept of site reliability engineering (SRE) is being increasingly adopted by SaaS providers and will gain importance in enterprise IT environments because of the trends listed above. AIOps is a key component that will enable site reliability engineers (SREs) to respond more quickly, and even proactively, by resolving issues without manual intervention.

These focus areas are by no means an exhaustive list. A variety of trends will be more prevalent in specific industries, but a common theme of the post-pandemic era is going to be superior delivery of IT services. That is also at the heart of the Autonomous Digital Enterprise, a forward-focused business framework designed to help companies make technology investments for the future.
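As promised above, here is a minimal sketch of the anomaly-detection side of AIOps: flag metric points that deviate sharply from a trailing window. The window size, threshold, and latency series are illustrative assumptions, not a production detector.

```python
"""Toy anomaly detector of the kind an AIOps pipeline might run over
operational metrics; window size and threshold are illustrative."""
from statistics import mean, stdev

def anomalies(series, window=10, threshold=3.0):
    """Flag points more than `threshold` std devs from the trailing mean."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append((i, series[i]))
    return flagged

# A steady latency metric with one spike at index 15.
latency_ms = [20, 21, 19, 20, 22, 20, 21, 19, 20, 21,
              20, 22, 21, 20, 19, 95, 21, 20]
print(anomalies(latency_ms))  # -> [(15, 95)]
```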

Read More
Hyper-Converged Infrastructure, Windows Systems and Network

The Drive with Direction: The Path of Enterprise IT Infrastructure

Article | July 11, 2023

Introduction

It is hard to manage a modern firm without a convenient and adaptable IT infrastructure. When properly set up and networked, technology can improve back-office processes, increase efficiency, and simplify communication. IT infrastructure can be used to supply services or resources both within and outside of a company, as well as to its customers, and when adequately deployed it helps organizations achieve their objectives and increase profits. IT infrastructure is made up of numerous components that must be integrated for a company's infrastructure to be coherent and functional; these components work in unison to ensure that systems, and the business as a whole, run smoothly.

Enterprise IT Infrastructure Trends

Consumption-based pricing models are becoming more popular among enterprise buyers, a trend that began with software and has now spread to hardware. This transition from capital to operational spending lowers risk, frees up capital, and improves flexibility. As a result, infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) revenues grew 53% from 2015 to 2016, making them the fastest-growing cloud and infrastructure services segments. The shift to as-a-service models is significant given that a unit of computing or storage in the cloud can be considerably cheaper, in total cost of ownership, than a unit on-premises. While businesses have been migrating workloads to the public cloud for years, there has been a new shift among large corporations: many companies, including Capital One, GE, Netflix, Time Inc., and others, have downsized or eliminated their private data centers in favor of moving operations to the cloud. Cybersecurity remains a high priority for the C-suite and the board of directors. Attacks are increasing in number and complexity across all industries, with 80% of technology executives indicating that their companies are unable to mount a robust response. Because of the shortage of cybersecurity experts, many companies cannot build the skills they need in-house and turn to managed security services instead.

Future of Enterprise IT Infrastructure

Companies can adopt the as-a-service model to lower entry barriers and begin testing future innovations on a cloud foundation. Domain specialists in areas like healthcare and manufacturing can harness AI's potential to solve some of their businesses' most pressing problems. Whether in a single cloud or across several clouds, businesses want an architecture that can expand to support the rapid evolution of their apps and industry for decades. For enterprise-class visibility and control across all clouds, the architecture must provide a common control plane that supports native cloud application programming interfaces (APIs) as well as enhanced networking and security features.

Conclusion

The scale of disruption in the IT infrastructure sector is unparalleled, presenting enormous opportunities and hazards for industry stakeholders and their customers. Technology infrastructure executives must restructure their portfolios and rethink their go-to-market strategies to drive growth. They should also invest in the foundational competencies required for long-term success, such as digitization, analytics, and agile development. Data center companies that can solve the industry's challenges, and service providers that can scale quickly without limits and deliver intelligent, outcome-based models that help their clients achieve their business objectives through a portfolio of as-a-service offerings, will have a bright future.

Read More
Hyper-Converged Infrastructure

Cartesi creates Linux infrastructure for blockchain DApps

Article | July 13, 2023

DApps (sometimes written Dapps) come from the blockchain universe: the "App" part stands for application, and the "D" stands for decentralised (obvious only once you know we are talking about distributed, immutable ledgers). According to the guides section at Blockgeeks, DApps are open source in terms of code base, incentivised in terms of who validates them, and essentially decentralised, so that all records of the application's operation are stored on a public, decentralised blockchain to avoid the pitfalls of centralisation. Cartesi, then, is a DApp infrastructure that runs an operating system (OS) on top of blockchains. The company has now launched a more complete "platform-level" offering, described as a layer-2 solution.

Read More
Storage Management

Data Center as a Service Is the Way of the Future

Article | July 11, 2022

Data Center as a Service (DCaaS) is a hosting service that gives clients access to physical data center infrastructure and amenities. Through a wide-area network (WAN), DCaaS enables clients to remotely access the provider's storage, server, and networking capabilities. Businesses can tackle the logistical and financial issues of an on-site data center by outsourcing to a service provider. Many enterprises rely on DCaaS to overcome the physical constraints of their on-site infrastructure or to offload the hosting and management of non-mission-critical applications. Businesses that require robust data management solutions but lack the necessary internal resources can adopt DCaaS; it is the perfect answer for companies struggling with a lack of IT help or a lack of funding for system maintenance.

Data Center as a Service allows businesses to be independent of their physical infrastructure, with added benefits such as:

- A single-provider API
- Data centers without staff
- Effortless handling of the influx of data
- Data centers in regions with more stable climates

Data Center as a Service helps democratize the data center itself, allowing companies that could never afford the huge investments that have gotten us this far to benefit from these developments. This is perhaps the most important benefit, as Infrastructure as a Service enables smaller companies to get started without a huge investment.

Conclusion

Data Center as a Service (DCaaS) enables clients to access a data center and its features remotely, whereas data center services might include complete management of an organization's on-premises infrastructure resources. IT can be outsourced using data center services to manage an organization's network, storage, computing, cloud, and maintenance. Many businesses outsource their infrastructure to increase operational effectiveness, scale, and cost-effectiveness. It can be challenging to manage existing infrastructure while keeping up with the pace of innovation, but staying on the cutting edge of technology is critical. Organizations can stay future-ready by working with a vendor that can supply both DCaaS and data center services.

Read More

Related News

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of artificial intelligence (AI) and machine learning (ML) workloads on an enterprise’s data infrastructure. Alluxio Enterprise AI brings performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, “the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production,” and “by 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%.”

“Alluxio empowers the world’s leading organizations with the most modern data and AI platforms, and today we take another significant leap forward,” said Haoyuan Li, Founder and CEO, Alluxio. “Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management.”

With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed-systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture optimized for AI/ML workloads. Alluxio Enterprise Data is the next-generation version of Alluxio Enterprise Edition and will continue to be the ideal choice for businesses focused primarily on analytic workloads.

Accelerating the End-to-End Machine Learning Pipeline

Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. It helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment that yields quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet giants' scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage.
With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure. Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing the massive numbers of small files typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. Unlike a traditional analytics cache, the distributed cache is tailored to AI workload I/O patterns. Finally, it supports analytics and full machine learning pipelines, from ingestion to ETL, pre-processing, training and serving.

Alluxio Enterprise AI includes the following key features:

- Epic Performance for Model Training and Model Serving: Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise’s existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference.
- Intelligent Distributed Caching Tailored to the I/O Patterns of AI Workloads: Alluxio Enterprise AI’s distributed caching enables AI engines to read and write data through the high-performance Alluxio cache instead of slow data lake storage. Alluxio’s intelligent caching strategies are tailored to the I/O patterns of AI engines: large-file sequential access, large-file random access, and massive small-file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.
- Seamless Data Access for AI Workloads Across On-Premises and Cloud Environments: Alluxio Enterprise AI provides a single pane of glass for enterprises to easily manage AI workloads across diverse infrastructure environments. Providing a source of truth for data in the machine learning pipeline, the product removes the bottleneck of data lake silos in large enterprises. Sharing data between business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.
- New Distributed System Architecture, Battle-Tested at Scale: Alluxio Enterprise AI builds on an innovative decentralized architecture, DORA (Decentralized Object Repository Architecture), which sets the foundation for infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio’s proven expertise in distributed systems, the new architecture addresses the ever-increasing challenges of system scalability, metadata management, high availability, and performance.

“Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses,” said Mike Leone, Analyst, Enterprise Strategy Group.
“Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving.”

“We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure,” said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. “Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry.”

“Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement,” said Mengyu Hu, Software Engineer in the data platform team, Zhihu. “In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we’ve significantly enhanced model training performance by 3x and deployment by 10x, with GPU utilization doubled. We are excited about Alluxio’s Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave.”

Deploying Alluxio in Machine Learning Pipelines

According to Gartner, data accessibility and data volume/complexity are among the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio works across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path, /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets are used across multiple epochs, so training speed is no longer bottlenecked by retrieval from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.

Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to inference clusters with low latency. As a result, downstream AI applications can start inferencing using the most up-to-date models as soon as they are available.
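The training read path just described can be sketched in a few lines of PyTorch. The FUSE mount point /mnt/alluxio_fuse/training_datasets comes from the article; the .pt file layout, the CachedTensorDataset class, and the batch size are hypothetical illustration, not Alluxio's API:

```python
"""Sketch of a PyTorch input pipeline reading through an Alluxio FUSE
mount instead of directly from S3. The mount point is cited in the
article; the tensor-file layout is a hypothetical example."""
import os
import torch
from torch.utils.data import Dataset, DataLoader

MOUNT = "/mnt/alluxio_fuse/training_datasets"  # cached view of the data lake

class CachedTensorDataset(Dataset):
    """Reads .pt sample files via the local FUSE path; Alluxio serves
    hot files from its cache, so repeated epochs avoid S3 round-trips."""
    def __init__(self, root):
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root) if f.endswith(".pt")
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        sample = torch.load(self.paths[idx])   # ordinary local-file read
        return sample["features"], sample["label"]

loader = DataLoader(CachedTensorDataset(MOUNT), batch_size=32,
                    shuffle=True, num_workers=4)
for features, labels in loader:
    pass  # forward/backward pass would go here
```

Because the mount behaves like an ordinary local filesystem, no training-code changes are needed; repeated epochs hit the Alluxio cache rather than S3, which is the mechanism behind the GPU-utilization claims above.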
Platform Integration with Existing Systems

To integrate Alluxio with an existing platform, users can deploy an Alluxio cluster between compute engines and storage systems. On the compute side, Alluxio integrates seamlessly with popular machine learning frameworks like PyTorch, Apache Spark, TensorFlow and Ray; enterprises can integrate Alluxio with these frameworks via the REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems and object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio runs both on-premises and in the cloud, in bare-metal or containerized environments; supported cloud platforms include AWS, GCP and Azure.
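Of the three integration paths named above, the S3 API is the easiest to sketch, since any S3 client can simply be pointed at a different endpoint. Here is a hedged example using boto3; the endpoint URL, port, bucket name, and prefix are assumptions for illustration (consult the Alluxio documentation for the actual proxy address):

```python
"""Hypothetical sketch of reaching Alluxio through its S3-compatible
API with boto3. Endpoint, bucket, and prefix are illustrative."""
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://alluxio-proxy.example.internal:39999",  # hypothetical
    aws_access_key_id="unused",      # placeholder credentials
    aws_secret_access_key="unused",
)

# List objects under a (hypothetical) training-data prefix.
resp = s3.list_objects_v2(Bucket="training-datasets", Prefix="images/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```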

Read More

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to further the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms. Intel DCM gives data centers the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces total cost of ownership, improves sustainability, and elevates performance benchmarks.

“We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management,” says Sanjoy Maity, CEO at AMI. “AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives.”

About AMI

AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.

Read More

Data Storage

CoolIT Systems Partners with Switch Datacenters to Launch Advanced Energy-Efficient Data Centers

PRWeb | October 12, 2023

CoolIT Systems, a global leader in advanced cooling technology, and Switch Datacenters, a leading sustainable data center operator and developer, are thrilled to unveil a strategic partnership that will benefit an industry seeking to improve the sustainability of data centers. Following the recent release of the World Economic Forum's Top 10 Emerging Technologies, which featured "Sustainable Computing" as the 9th-ranked emerging technology, the collaboration between Switch Datacenters and CoolIT provides data center space and the technology needed to significantly curtail the energy and water consumption inherent in modern data centers.

With a history spanning more than a decade, Switch Datacenters has consistently demonstrated a commitment to environmental responsibility and sustainability. Its latest 45MW AMS6 data center near the Schiphol airport area features an HPC/AI-ready design that uses data center heat to warm adjacent greenhouses. Currently under development, its AMS5s is designed to make a significant contribution to the Amsterdam municipal heat grid with green, CO2-neutral heat. For both data centers, there is a marked preference for liquid cooling because it allows heat extraction at higher temperatures than traditional air cooling, offering enhanced economic value.

CoolIT Systems is the industry-leading provider of efficient Direct Liquid Cooling (DLC) and Rear Door Heat Exchangers (RDHx) that enable heat reuse and help customers meet key Environmental, Social, and Governance (ESG) targets. CoolIT DLC technology is available as a factory-installed, warranty-approved feature from most major server OEMs.

"CoolIT's DLC and RDHx technologies have been instrumental in various data center heat reuse projects for years, with customers reporting at minimum a savings of 10% on energy bills (OPEX), more than 50% on CAPEX spend, and examples of PUE lowered from 1.30 to 1.02," said Peggy Burroughs, Director of CoolIT Next. "Our collaborations with most major server OEMs have cultivated an expansive ecosystem for clients aspiring to achieve both business and ESG goals."

"CoolIT is the right company to help make our vision a reality at an industrial scale. Both CoolIT and Switch Datacenters have shared the same passion for sustainable innovation for years and truly want to elevate the industry's adoption of liquid cooling. We believe liquid cooling will be the game-changer in the next wave of sustainable data center designs, and CoolIT is one of the very few companies that can lead this upcoming demand, thanks to their long history of innovation, reliability, breadth of portfolio, and capability to scale with their numerous IT partners worldwide," says Gregor Snip, CEO of Switch Datacenters.

Data centers are projected to account for 8% of global electricity consumption by 2030 [1]. Technologies such as Direct Liquid Cooling can reduce data center energy consumption by 25-40% and deliver water savings of 70-97%, depending on local climate and specific implementations [2]. Switch Datacenters is leading the charge in embracing sustainable alternatives for heating by reusing data center-generated heat. With its latest project, Switch Datacenters AMS6, it will revolutionize the way nearby greenhouses are heated by providing high-temperature heat from the data center. This innovative solution will replace traditional fossil-fuel-based heating and contribute to a greener future.
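For context on the PUE figures in the quote above: PUE is total facility energy divided by IT equipment energy, so a drop from 1.30 to 1.02 shrinks the non-IT overhead dramatically. A quick back-of-the-envelope calculation, assuming a hypothetical 1 MW IT load:

```python
# PUE = total facility energy / IT equipment energy, so overhead = PUE - 1.
it_load_kw = 1000          # hypothetical 1 MW of IT equipment
for pue in (1.30, 1.02):
    total = it_load_kw * pue
    print(f"PUE {pue}: total {total:.0f} kW, overhead {total - it_load_kw:.0f} kW")
# Overhead falls from 300 kW to 20 kW (about 93% less cooling/power loss),
# and total facility energy per unit of IT load drops roughly 21.5%.
```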
By harnessing the power of IT servers to generate green heat for large-scale crop cultivation, Switch Datacenters is driving the transition away from fossil fuels. It strongly advocates making heat-recapture-enabled data centers a standard design principle in areas with high demand for heat. With the world calling for sustainable IT and data centers, the time is ripe for this much-needed change. With the combined expertise of CoolIT and Switch Datacenters, customers can now harness technologically advanced solutions that deliver considerable energy and water savings and contribute significantly to the global drive for reduced environmental impact, aligning with the United Nations Sustainable Development Goals of Affordable and Clean Energy (SDG 7); Industry, Innovation, and Infrastructure (SDG 9); and Climate Action (SDG 13).

About CoolIT Systems

CoolIT Systems is renowned for scalable liquid cooling solutions tailored to the world's most demanding computing environments. In both enterprise data centers and high-performance computing domains, CoolIT collaborates with global OEM server design leaders to formulate efficient and reliable liquid cooling solutions. In the desktop enthusiast arena, CoolIT delivers unmatched performance for a diverse range of gaming setups. Its modular Direct Liquid Cooling technology, Rack DLC™, enables dramatic increases in rack density, component efficiency, and power savings. Together, CoolIT and its partners are pioneering the large-scale adoption of advanced cooling techniques.

About Switch Datacenters

Switch Datacenters is a Dutch, privately owned data center operator and developer founded in 2010 by Gregor Snip and his brother. Initially established as a private data center for their successful hosting company, the Amsterdam-based company later expanded into a fully fledged commercial data center operator. It added several highly efficient, environmentally friendly data center sites to its portfolio, with a current focus on constructing and managing wholesale data centers for large global customers while also providing tailor-made data center services. Switch Datacenters is an ambitious, 100% Dutch player in the Amsterdam data center sector, growing rapidly by partnering with leading, globally recognized industry players and customers. The company maintains a steadfast commitment to innovative and sustainable site development. Currently, Switch Datacenters has over 200MW of new sustainable data center capacity in development. This year, it will launch its flagship sustainable data center, AMS4, with major customers having already pre-leased the 15-18MW facility.

Read More


Events