Data Center Infrastructure Management (DCIM) Explained.

February 9, 2015

Data center infrastructure management (DCIM) is the convergence of IT and building facilities functions within an organization. The goal of a DCIM initiative is to give administrators a holistic view of a data center's performance so that energy, equipment and floor space are used as efficiently as possible. DCIM started out as a component of building information modeling (BIM) software, which facilities managers use to create digital schematic diagrams of buildings. DCIM tools bring the same capabilities to data centers, allowing administrators to collate, store and analyze data related to power and cooling in real time. Most tools also permit diagrams to be printed out -- a useful feature when maintenance is required or administrators need to install new equipment.
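The collate-store-analyze loop described above can be illustrated with a minimal sketch. Everything here (the class name, the 27 °C inlet alert threshold, the sample readings) is hypothetical rather than taken from any particular DCIM product; it only shows the pattern in miniature:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class RackTelemetry:
    """Collates real-time power and cooling readings for one rack."""
    rack_id: str
    power_kw: list = field(default_factory=list)      # per-interval power draw
    inlet_temp_c: list = field(default_factory=list)  # cold-aisle inlet temps

    def record(self, power_kw: float, inlet_temp_c: float) -> None:
        """Store one sensor sample (the 'collate and store' half of the loop)."""
        self.power_kw.append(power_kw)
        self.inlet_temp_c.append(inlet_temp_c)

    def summary(self) -> dict:
        """Analyze stored samples; the 27 C ceiling is an illustrative threshold."""
        return {
            "rack": self.rack_id,
            "avg_power_kw": round(mean(self.power_kw), 2),
            "max_inlet_c": max(self.inlet_temp_c),
            "cooling_alert": max(self.inlet_temp_c) > 27.0,
        }

rack = RackTelemetry("A-01")
rack.record(4.2, 24.5)
rack.record(4.8, 28.1)
print(rack.summary())
```

A real DCIM platform would feed such a structure from PDU and environmental sensors and drive dashboards and alerts from the summaries; the sketch only shows the shape of the data flow.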

Spotlight

Rahi Systems

Rahi Systems delivers solutions and services that maximize the performance, scalability and efficiency of today's integrated environment. Our team has deep expertise in data center infrastructure, compute, storage, networking and security technologies, as well as end-user computing, A/V and cloud solutions. The company was founded in 2012 by entrepreneurs with a deep understanding of the needs of global enterprises.

OTHER ARTICLES
IT SYSTEMS MANAGEMENT

Data Center as a Service Is the Way of the Future

Article | August 8, 2022

Data Center as a Service (DCaaS) is a hosting service that gives clients access to physical data center infrastructure and amenities. DCaaS enables clients to remotely access the provider's storage, server and networking capabilities over a wide-area network (WAN). By outsourcing to a service provider, businesses can address the logistical and financial challenges of an on-site data center. Many enterprises rely on DCaaS to overcome the physical constraints of their on-site infrastructure or to offload the hosting and management of non-mission-critical applications. Businesses that require robust data management solutions but lack the necessary internal resources can adopt DCaaS. DCaaS is also a strong fit for companies struggling with a shortage of IT staff or a lack of funding for system maintenance.

Added Benefits

Data Center as a Service allows businesses to be independent of their physical infrastructure:

A single-provider API
Data centers without staff
Effortless handling of the influx of data
Data centers in regions with more stable climates

Data Center as a Service helps democratize the data center itself, allowing companies that could never afford the huge investments that have gotten us this far to benefit from these developments. This is perhaps its most important effect, as Infrastructure as a Service enables smaller companies to get started without a huge investment.

Conclusion

Data Center as a Service (DCaaS) enables clients to remotely access a data center and its features, whereas data center services might include complete management of an organization's on-premises infrastructure resources. IT can be outsourced using data center services to manage an organization's network, storage, computing, cloud and maintenance. Many businesses outsource their infrastructure to improve operational effectiveness, scale and cost-effectiveness.
It might be challenging to manage your existing infrastructure while keeping up with the pace of innovation, but it's critical to be on the cutting edge of technology. Organizations may stay future-ready by working with a vendor that can supply DCaaS and data center services.

Read More
IT SYSTEMS MANAGEMENT

Enhancing Rack-Level Security to Enable Rapid Innovation

Article | July 27, 2022

IT and data center administrators are under pressure to foster faster innovation. Giving workers and customers access to digital experiences means deploying more devices and managing larger enterprise-to-edge networks. This rapid growth, however, has come at the expense of distributed network security. Because compliance standards and security needs vary by application, some colocation providers can install custom locks on your cabinet if necessary. Physical security measures remain of utmost importance, because theft and social engineering can affect hardware as well as data.

Risks Companies Face

Remote IT work will continue over the long run
Attacking users is the easiest way into networks
IT may be deploying devices with weak controls

When determining whether rack-level security is required, there are essentially two critical criteria to take into account: the sensitivity of the data stored, and the importance of the equipment in a particular rack to the facility's continued operation. Due to the nature of the data being handled and kept, some processes will always have a higher risk profile than others.

Conclusion

Data centers must rely on a physically secure perimeter that can be trusted. Clients, in particular, require unwavering assurance that security can be put in place to limit user access and guarantee that safety regulations are followed. Rack-level security locks that enforce physical access limitations are crucial to maintaining the security of data center space. Compared to their mechanical predecessors, electronic rack locks or "smart locks" offer a much more comprehensive range of feature-rich capabilities.
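The access-limiting role of an electronic rack lock can be sketched in a few lines. The class, badge IDs and log format below are invented for illustration; real smart locks expose vendor-specific interfaces, but the core pattern (authorize, then record every attempt) is the same:

```python
from datetime import datetime, timezone

class SmartRackLock:
    """Minimal model of an electronic rack lock: per-user access plus an audit trail."""

    def __init__(self, rack_id: str, authorized: set):
        self.rack_id = rack_id
        self.authorized = authorized  # badge IDs permitted to open this rack
        self.audit_log = []           # every attempt is recorded, granted or not

    def request_open(self, badge_id: str) -> bool:
        """Check the badge against the access list and log the attempt."""
        granted = badge_id in self.authorized
        self.audit_log.append({
            "rack": self.rack_id,
            "badge": badge_id,
            "granted": granted,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return granted

lock = SmartRackLock("B-12", authorized={"badge-1001"})
print(lock.request_open("badge-1001"))  # True: authorized technician
print(lock.request_open("badge-9999"))  # False: denied, but still logged
```

The audit trail is what distinguishes a smart lock from its mechanical predecessor: denied attempts are evidence, not silence, which supports the compliance assurances discussed above.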

Read More
APPLICATION INFRASTRUCTURE

Infrastructure Lifecycle Management Best Practices

Article | August 8, 2022

As your organization scales, inevitably, so too will its infrastructure needs. From physical spaces to personnel, devices to applications, physical security to cybersecurity – all these resources will continue to grow to meet the changing needs of your business operations. To manage your changing infrastructure throughout its entire lifecycle, your organization needs to implement a robust infrastructure lifecycle management program that's designed to meet your particular business needs.

In particular, IT asset lifecycle management (ITALM) is becoming increasingly important for organizations across industries. As threats to organizations' cybersecurity become more sophisticated and successful cyberattacks become more common, your business needs (now, more than ever) to implement an infrastructure lifecycle management strategy that emphasizes the security of your IT infrastructure. In this article, we'll explain why infrastructure management is important. Then we'll outline steps your organization can take to design and implement a program and provide you with some of the most important infrastructure lifecycle management best practices for your business.

What Is the Purpose of Infrastructure Lifecycle Management?

No matter the size or industry of your organization, infrastructure lifecycle management is a critical process. The purpose of an infrastructure lifecycle management program is to protect your business and its infrastructure assets against risk. Today, protecting your organization and its customer data from malicious actors means taking a more active approach to cybersecurity. Simply put, recovering from a cyberattack is more difficult and expensive than protecting yourself from one. If 2020 and 2021 have taught us anything about cybersecurity, it's that cybercrime is on the rise and it's not slowing down anytime soon.
As risks to cybersecurity continue to grow in number and in harm, infrastructure lifecycle management and IT asset management are becoming almost unavoidable. In addition to protecting your organization from potential cyberattacks, infrastructure lifecycle management makes for a more efficient enterprise, delivers a better end-user experience for consumers, and identifies where your organization needs to expand its infrastructure. Some of the other benefits that come along with a comprehensive infrastructure lifecycle management program include:

More accurate planning
Centralized and cost-effective procurement
Streamlined provisioning of technology to users
More efficient maintenance
Secure and timely disposal

A robust infrastructure lifecycle management program helps your organization keep track of all the assets running on (or attached to) your corporate networks, allowing you to catalog, identify and track these assets wherever they are, physically and digitally. While this might seem simple enough, infrastructure lifecycle management, and particularly ITALM, has become more complex as the diversity of IT assets has increased. Today, organizations and their IT teams are responsible for managing hardware, software, cloud infrastructure, SaaS, and connected-device or IoT assets. As the number of IT assets under management has soared for most organizations in the past decade, a comprehensive and holistic approach to infrastructure lifecycle management has never been more important.

Generally speaking, there are four major stages of asset lifecycle management. Your organization's infrastructure lifecycle management program should include specific policies and processes for each of the following steps:

Planning. This is arguably the most important step for businesses and should be conducted prior to purchasing any assets. During this stage, you'll need to identify what asset types are required and in what number; compile and verify the requirements for each asset; and evaluate those assets to make sure they meet your service needs.

Acquisition and procurement. Use this stage to identify areas for purchase consolidation with the most cost-effective vendors, and to negotiate warranties and bulk purchases of SaaS and cloud infrastructure assets. This is where a lack of insight into actual asset usage can result in overpaying for assets that aren't really necessary. For this reason, timely and accurate asset data is crucial for effective acquisition and procurement.

Maintenance, upgrades and repair. All assets eventually require maintenance, upgrades and repairs. A holistic approach to infrastructure lifecycle management means tracking these needs and consolidating them into a single platform across all asset types.

Disposal. An outdated or broken asset needs to be disposed of properly, especially if it contains sensitive information. For hardware, assets older than a few years are often obsolete, and assets that fall out of warranty are typically no longer worth maintaining. Disposal of cloud infrastructure assets is also critical because data stored in the cloud can stay there forever.

Now that we've outlined the purpose and basic stages of infrastructure lifecycle management, it's time to look at the steps your organization can take to implement it.
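The four stages above map naturally onto a small state machine. The sketch below is our own illustration rather than a standard model; it simply enforces that an asset moves through the lifecycle in order and cannot be revived after disposal:

```python
from enum import Enum

class Stage(Enum):
    """The four major asset lifecycle stages, in order."""
    PLANNING = 1
    ACQUISITION = 2
    MAINTENANCE = 3
    DISPOSAL = 4

class Asset:
    """Tracks one IT asset through the lifecycle stages."""

    def __init__(self, name: str):
        self.name = name
        self.stage = Stage.PLANNING  # every asset starts in planning

    def advance(self) -> Stage:
        """Move forward exactly one stage; disposal is terminal."""
        if self.stage is Stage.DISPOSAL:
            raise ValueError(f"{self.name} has already been disposed of")
        self.stage = Stage(self.stage.value + 1)
        return self.stage

server = Asset("rack-server-01")
server.advance()          # PLANNING -> ACQUISITION
print(server.stage.name)  # ACQUISITION
```

In a real ITALM platform each transition would also carry policy hooks (procurement approval, maintenance scheduling, secure-wipe verification); the state machine just makes the ordering explicit.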

Read More
APPLICATION INFRASTRUCTURE

The Drive with Direction: The Path of Enterprise IT Infrastructure

Article | June 6, 2022

Introduction

It is hard to manage a modern firm without a convenient and adaptable IT infrastructure. When properly set up and networked, technology can improve back-office processes, increase efficiency, and simplify communication. IT infrastructure can be used to supply services or resources both within and outside of a company, as well as to its customers. When adequately deployed, IT infrastructure helps organizations achieve their objectives and increase profits. IT infrastructure is made up of numerous components that must be integrated for your company's infrastructure to be coherent and functional. These components work in unison to ensure that your systems, and your business as a whole, run smoothly.

Enterprise IT Infrastructure Trends

Consumption-based pricing models are becoming more popular among enterprise purchasers, a trend that began with software and has now spread to hardware. This transition from capital to operational spending lowers risk, frees up capital, and improves flexibility. As a result, infrastructure as a service (IaaS) and platform as a service (PaaS) revenues increased by 53% from 2015 to 2016, making them the fastest-growing cloud and infrastructure services segments. The transition to as-a-service models is significant given that a unit of computing or storage in the cloud can have a considerably lower total cost of ownership than a unit on-premises. While businesses have been migrating their workloads to the public cloud for years, there has been a new shift among large corporations. Many companies, including Capital One, GE, Netflix, Time Inc., and others, have downsized or eliminated their private data centers in favor of shifting their operations to the cloud. Cybersecurity remains a high priority for the C-suite and the board of directors. Attacks are increasing in number and complexity across all industries, with 80% of technology executives indicating that their companies are unable to construct a robust response.
Due to a lack of cybersecurity experts, many companies can't source the skills they need internally, so they turn to managed security services.

Future of Enterprise IT Infrastructure

Companies can adopt the 'as-a-service' model to lower entry barriers and begin testing future innovations on the foundation of the cloud. Domain specialists in areas like healthcare and manufacturing may harness AI's potential to solve some of their businesses' most pressing problems. Whether in a single cloud or across several clouds, businesses want an architecture that can expand to support the rapid evolution of their apps and industry for decades. For enterprise-class visibility and control across all clouds, the architecture must provide a common control plane that supports native cloud application programming interfaces (APIs) as well as enhanced networking and security features.

Conclusion

The scale of disruption in the IT infrastructure sector is unparalleled, presenting enormous opportunities and hazards for industry stakeholders and their customers. Technology infrastructure executives must restructure their portfolios and rethink their go-to-market strategies to drive growth. They should also invest in the foundational competencies required for long-term success, such as digitization, analytics, and agile development. Data center companies that can solve the industry's challenges, and service providers that can scale quickly and deliver intelligent, outcome-based models that help clients achieve their business objectives through a portfolio of 'as-a-service' offerings, will have a bright future.

Read More


Related News

APPLICATION INFRASTRUCTURE

Organized by Inspur Information, OCP China Day 2022 Is Driving Sustainable Data Center Development with Open Compute

Inspur Information | August 16, 2022

On August 10th, OCP China Day 2022 was held in Beijing, hosted by the Open Compute Project Foundation (OCP) and organized by Inspur Information, a leading IT infrastructure solutions provider. Using an innovative approach of global collaboration, and addressing major issues of data center infrastructure sustainability, open compute is becoming an innovation anchor for data centers. OCP China Day is Asia's largest annual technology summit, offering the widest open computing coverage. It is celebrating its 4th anniversary, with nearly 1,000 IT engineers and data center practitioners in attendance. Themed "Open Forward: Green, Convergence, Empowering", this year's summit brings together an array of experts and professionals from more than 30 world-renowned companies, universities and research institutions, including the OCP Foundation, Inspur Information, Intel, Meta, Samsung, Western Digital, Enflame, NVIDIA, Microsoft, Alibaba Cloud, Baidu, Tencent Cloud and Tsinghua University, to discuss topics such as data center infrastructure innovation, sustainable development and the industrial ecosystem.

Driving data center sustainability with green technology

"The confidence that our fellow members and external companies have in OCP is at the root of the community's growing influence," said Steve Helvie, OCP's Vice President of Channels. "Open source hardware designed and validated by a wide range of experts breeds confidence for the companies that purchase and deploy these devices; and efficient hardware designs within the community that can reduce carbon emissions are helping to build confidence for data center sustainability. In the future, the community's research projects in thermal reuse, cooling environments, and other areas will inspire even greater confidence in data center infrastructure innovation."
As data centers become more visible as a new type of infrastructure, there is growing concern over data center sustainability, including utilizing renewable energy, recycling, thermal reuse, and the use of liquid-cooling technologies to reduce water consumption. The resulting greener carbon footprint is one of OCP's top research priorities. The newly established Cooling Environments Project has become OCP's largest cross-industry collaboration to date, with representatives from multiple companies and industries putting the spotlight on innovations in data center liquid-cooling technologies. The project integrates five sub-projects, including Advanced Cooling Solutions (ACS) and Advanced Cooling Facilities (ACF); examples include the ACS Cold Plate Sub-Project, ACS Door Heat Exchanger Sub-Project, ACS Immersion Sub-Project and Waste Heat Reuse Sub-Project. The goal is to standardize these sub-projects and their physical interfaces through cross-project coordination between different cooling methods in data centers in order to accelerate data center innovation.

According to William Chen, Server Department Product Planning Director at Inspur Information, the rapidly growing scale of data centers is putting new pressure on global sustainability. Consequently, data centers must adopt and promote new technologies to reduce environmental impact, as sustainability has become absolutely essential. A variety of solutions, whether liquid-cooling innovations, improved data center layouts, or clean energy usage, will help reduce energy consumption and overall environmental impact. In addition to taking an active part in OCP's Cooling Environments Project, many community members have also contributed to data center sustainability. For example, Inspur Information has put forward the company-level strategy of "All in Liquid Cooling" and built the largest liquid-cooled data center production and R&D base in Asia.
Its four product series, comprising general-purpose servers, high-density servers, rack servers and AI servers, all support cold plate cooling.

Accelerating data center innovation with global collaboration

The Open Compute Project has created a new global collaboration model that eliminates technical barriers and makes hardware innovation faster than ever before. Hou Zhenyu, Corporate Vice President, Baidu ABC Cloud Business Group, points out that as data centers move toward centralization and scale, IT infrastructure is encountering bigger challenges in terms of performance, power consumption, and deployment. Open compute is committed to transforming the design standards of data center equipment from closed source to open source, accelerating the implementation of new technologies and facilitating the construction and efficient development of green data centers through the sharing of IT infrastructure, including products, specifications and intellectual property. With over 10 years of development, OCP's innovations now cover all aspects of data center design, development and management, including heterogeneous computing, edge computing and other forward-looking technologies. The newly launched Open Rack 3.0 specification delivers further improvements in space usage, load bearing, power supply, and liquid-cooling support. The design of ORv3 connectors enables blind insertion, so servers added to a rack can be inserted directly into the liquid-cooling manifold. In the field of high-speed network communications, the OCP Mezz (NIC) specification has become the industry standard for I/O options, and SONiC/SAI has been deployed commercially in large volumes in the Internet, communications and other industries.
The OAM specification for Domain-Specific Architecture (DSA) design, which supports standardized access to multiple AI chips, can meet the explosive growth in demand for AI accelerators worldwide, while the BoW specification for chiplet interconnect allows chip manufacturers to mix and match chips built with different manufacturing technologies, enabling high-performance chip design across a variety of process nodes. The DC-SCM (Datacenter Secure Control Module) standard defines a security control management module that is decoupled from the motherboard, separating the computing and security management units and allowing further simplification of motherboard design.

Dr. Weifeng Zhang, Chief Scientist of Heterogeneous Computing at Alibaba Cloud, noted that in recent years there has been a clear trend toward decoupling computing system architectures to offset the slowing of Moore's Law. With ongoing advances in chip and interconnect technologies, interoperability between computing devices has become key to the sustainable development of future computing. Open hardware, open software, and hardware-software layered decoupling have emerged as prominent trends in data center development. This has also prompted vendors to shift from a closed, proprietary mentality to one that emphasizes open source and collaboration. This openness gives more companies the opportunity to contribute to data center infrastructure innovation and inspires more innovative ideas through global collaboration.

Traditional industries embrace open compute for ecological empowerment

Open compute promotes standardization and ecosystem building by forming consensus via open collaboration and enabling the delivery of infrastructure in line with open source specifications. This facilitates the rapid application of more innovative technologies.
This industrial ecosystem allows hyper-scale data centers to apply open compute technologies on a large scale, and also encourages industry users and even SMEs to start deploying cutting-edge solutions based on open compute. Open compute has been rapidly expanding from the Internet to other industries, such as telecommunications, finance, gaming, healthcare and auto manufacturing. Omdia predicts that the market share of non-Internet industries in open compute will grow from 10.5% in 2020 to 21.9% in 2025. The unique technical edge, subtle design thinking, and ecosystem collaboration of open compute are breaking boundaries in data center innovation and enabling the convergence of more technologies. In the future, global collaboration and co-innovation revolving around open compute will drive further data center advancement while addressing worldwide issues such as carbon emissions.

About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions. It is the world's 2nd largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges.

Read More

IT SYSTEMS MANAGEMENT

Juniper Networks Chosen by Jazz to Build Fully-Automated Data Center Infrastructure to Support Data, Music and Video Services

Juniper Networks | August 09, 2022

Juniper Networks, a leader in secure, AI-driven networks, today announced that it has been selected by Jazz, Pakistan's number one 4G operator and the largest internet and leading digital service provider, to create a transformative, expanded and upgraded data center network to underpin Jazz's services delivery platform for its 74.9 million subscribers. Jazz's objective was to reimagine its architectural approach by leveraging continuous automation, assurance and data-driven insights to deliver a superior network user experience at scale while simplifying its operations. Jazz offers the broadest portfolio of value-added digital services to enterprises and subscribers in Pakistan and has built a reputation for cutting-edge innovation with the ability to scale cloud-based services quickly and reliably. Following a rigorous vendor-agnostic technology appraisal, focused on the operational and cost efficiencies made possible by network automation, Jazz selected Juniper's technology and expertise to underpin this latest project. Juniper's advanced automation capabilities, transforming the entire network management lifecycle process within a single system, were a standout in the market.

News Highlights

The new network will support a wide range of customer-facing services that demand reliability and fast data throughput to support a consistently strong user experience. These include cloud-based enterprise data services, mobile banking, music and video download/streaming services, as well as professional services such as an agricultural application for four million farmers who rely on it for information, advice and guidance in remote areas. Jazz will also use the network to power key internal workloads such as CRM and billing. Jazz will deploy the Juniper Apstra System to deliver true intent-based networking (IBN) capabilities.
This enables Jazz to design and operate its data center network based on outcomes, with the entire data center lifecycle automated, from Day 0 (design) through Day 1 (configuration and deployment) to Day 2+ (ongoing operations). The network’s initial design is tied to day-to-day operations, enabling a single source of truth throughout its lifecycle. Automation provides a continuous feedback loop of real-time data insights, validation and root cause identification to minimize mean-time-to-repair (MTTR). This approach will enable Jazz to operate a much more efficient, reliable and agile network. It will help to deploy new service features, optimizing user experience for both network teams and customers. The new data center infrastructure includes a spine-and-leaf architecture built with the Juniper Networks QFX Series Switches and fully integrated with the Juniper Apstra System. Jazz has previously deployed MX Series Universal Routing Platforms from Juniper for 400G-ready connectivity for its metro and internet gateway infrastructure. The new QFXs leverage the same Junos® OS operating system, providing a consistent networking estate for Jazz to manage and operate. “In common with all service providers globally, Jazz faces relentless data demand and heightened expectations for seamless digital services. As a result, we wanted to completely rethink our data center operations, using ground-breaking automation to create the best possible user experiences for our enterprise customers and subscribers. Operational simplicity was another important goal, to deliver cost reductions and improved ease of use for our technical teams in the face of massive demand at scale. 
We evaluated multiple vendors, but Juniper's ability to deliver the exact networking outcomes we needed meant that a highly strategic decision was very straightforward to make."

Abdul Rehman Usmani, Vice President of Technology at Jazz

"The power of automation, bound within a single operational framework thanks to intent-based networking, enables Jazz to address the relevant operational questions, find the right answers quickly and make the best decisions. This means its network becomes a strategic business tool that leverages data to deliver robust deployment and operational efficiencies and eliminates traditional network constraints that force choices between speed and reliability. Based on data from other Juniper customers, the result will be dramatic savings on downstream costs and tremendous returns on networking investments."

Mike Bushong, Vice President, Cloud Ready Data Center at Juniper Networks

About Juniper Networks

Juniper Networks is dedicated to dramatically simplifying network operations and driving superior experiences for end users. Our solutions deliver industry-leading insight, automation, security and AI to drive real business results. We believe that powering connections will bring us closer together while empowering us all to solve the world's greatest challenges of well-being, sustainability and equality.

Read More

DATA STORAGE

Portworx by Pure Storage Recognized as the Leader in Kubernetes Storage for Three Consecutive Years by GigaOm

Pure Storage | August 08, 2022

Pure Storage®, the IT pioneer that delivers the world's most advanced data storage technology and services, today announced it was named the leader for the third consecutive year in the GigaOm Radar Report for Enterprise Kubernetes Storage, which analyzed enterprise storage systems with support for Kubernetes-based workloads, and its companion report for Cloud-Native Kubernetes Data Storage, which analyzed Kubernetes-native storage solutions built specifically to support stateful containers with scalable, distributed architectures. According to the GigaOm Radar Report for Cloud-Native Kubernetes Storage, Portworx® by Pure Storage "is one of the most advanced solutions for enterprise Kubernetes storage" and "remains the gold standard in cloud-native Kubernetes storage for the enterprise" as "a complete enterprise-grade solution with outstanding data management capabilities, unmatched deployment possibilities, and superior management features." Across criteria and evaluation metrics, Portworx was ranked by GigaOm as a "strong focus and perfect fit" in advanced data services, advanced CSI integration, deployment models, control plane architecture, developer experience, visibility and insights, as well as architecture, scalability, flexibility, manageability, and performance. Portworx continues to advance the innovation of its Kubernetes Data Platform to bring databases such as Kafka, Cassandra, and Postgres under one platform in the most simple and reliable manner with Portworx Data Services. The GigaOm Radar Report for Enterprise Kubernetes Storage claimed "the integration of Portworx Essentials on Pure Storage controller-based architectures significantly enhances data efficiency because users benefit from the data reduction capabilities offered by the storage arrays."
The report also highlights that this powerful integration "allows organizations to seamlessly deploy cloud-native workloads on a proven Kubernetes storage solution, and as their needs grow, they can effortlessly migrate those workloads to the full Portworx solution if they decide to adopt it." Once again, Pure Storage received the highest scores among all market segments, deployment models, and evaluation metrics in the analysis.

"For three consecutive years, we've been recognized as a Leader and Outperformer by GigaOm Radar. Customers running containers and databases at scale in production use Portworx to ensure highly reliable, available and secure Kubernetes data storage capabilities. I'm incredibly proud of our Portworx engineering team's recognition by GigaOm as we continue on our mission to help enterprises unleash the power of data."

Murli Thirumale, VP & GM, Cloud Native Business Unit, Pure Storage

In addition to the GigaOm Radar Reports for Cloud-Native Kubernetes Data Storage and Enterprise Kubernetes Data Storage, Pure Storage has been consistently recognized as a leader across the other GigaOm reports for which it qualifies, including High-Performance Object Storage, Kubernetes Data Protection, and Enterprise General-Purpose Storage Systems.

About Pure Storage

Pure Storage uncomplicates data storage, forever. Pure Storage delivers a cloud experience that empowers every organization to get the most from their data while reducing the complexity and expense of managing the infrastructure behind it. Pure Storage's commitment to providing true storage as-a-service gives customers the agility to meet changing data needs at speed and scale, whether they are deploying traditional workloads, modern applications, containers, or more.
Pure Storage believes it can make a significant impact in reducing data center emissions worldwide through its environmental sustainability efforts, including designing products and solutions that enable customers to reduce their carbon and energy footprint. And with a certified customer satisfaction score in the top one percent of B2B companies, Pure Storage's ever-expanding list of customers are among the happiest in the world.

Read More

APPLICATION INFRASTRUCTURE

Organized by Inspur Information, OCP China Day 2022 Is Driving Sustainable Data Center Development with Open Compute

Inspur Information | August 16, 2022

On August 10th, OCP China Day 2022 was held in Beijing, hosted by the Open Compute Project Foundation (OCP) and organized by Inspur Information, a leading IT infrastructure solutions provider. With its innovative approach to global collaboration and its focus on major issues of data center infrastructure sustainability, open compute is becoming an innovation anchor for data centers. OCP China Day, Asia's largest annual open computing technology summit, celebrated its 4th anniversary with nearly 1,000 IT engineers and data center practitioners in attendance. Themed "Open Forward: Green, Convergence, Empowering", this year's summit brought together experts and professionals from more than 30 world-renowned companies, universities and research institutions, including the OCP Foundation, Inspur Information, Intel, Meta, Samsung, Western Digital, Enflame, NVIDIA, Microsoft, Alibaba Cloud, Baidu, Tencent Cloud and Tsinghua University, to discuss topics such as data center infrastructure innovation, sustainable development and the industrial ecosystem.

Driving data center sustainability with green technology

"The confidence that our fellow members and external companies have in OCP is at the root of the community's growing influence," said Steve Helvie, OCP's Vice President of Channels. "Open source hardware designed and validated by a wide range of experts breeds confidence for the companies that purchase and deploy these devices; and efficient hardware designs within the community that can reduce carbon emissions are helping to build confidence for data center sustainability. In the future, the community's research projects in thermal reuse, cooling environments, and other areas will inspire even greater confidence in data center infrastructure innovation."
As data centers become more prominent as a new type of infrastructure, sustainability measures such as renewable energy, recycling, thermal reuse, and liquid-cooling technologies that reduce water consumption are drawing growing attention. The resulting greener carbon footprint is one of OCP's top research priorities. The newly established Cooling Environments Project has become OCP's largest cross-industry collaboration to date, with representatives from multiple companies and industries putting the spotlight on innovations in data center liquid-cooling technologies. The project integrates five sub-projects, including Advanced Cooling Solutions (ACS) and Advanced Cooling Facilities (ACF); examples include the ACS Cold Plate, ACS Door Heat Exchanger, ACS Immersion and Waste Heat Reuse sub-projects. The goal is to standardize these sub-projects and their physical interfaces through cross-project coordination between different cooling methods in data centers, in order to accelerate data center innovation.

According to William Chen, Server Department Product Planning Director, Inspur Information, the rapidly growing scale of data centers is putting new pressure on global sustainability. Consequently, data centers must adopt and promote new technologies to reduce environmental impact, as sustainability has become absolutely essential. A variety of solutions, whether liquid-cooling innovations, improved data center layouts or clean energy usage, will help reduce energy consumption and overall environmental impact. In addition to taking an active part in OCP's Cooling Environments Project, many community members have also contributed to data center sustainability. For example, Inspur Information has put forward the company-level strategy of "All in Liquid Cooling" and built the largest liquid-cooled data center production and R&D base in Asia.
Its four-product series includes general purpose servers, high density servers, rack servers and AI servers, all of which support cold plate cooling.

Accelerating data center innovation with global collaboration

The Open Compute Project has created a new global collaboration model that eliminates technical barriers and makes hardware innovation faster than ever before. Hou Zhenyu, Corporate Vice President, Baidu ABC Cloud Business Group, points out that as data centers move toward centralization and scale, IT infrastructure is encountering bigger challenges in terms of performance, power consumption and deployment. Open compute is committed to transforming data center equipment design standards from closed to open source, accelerating the implementation of new technologies and facilitating the construction and efficient development of green data centers through shared IT infrastructure, including products, specifications and intellectual property. Over more than 10 years of development, OCP's innovations have come to cover all aspects of data center design, development and management, including heterogeneous computing, edge computing and other forward-looking technologies. The newly launched Open Rack 3.0 specification delivers further improvements in space usage, load bearing, power supply and liquid-cooling support; the design of ORv3 connectors enables blind insertion, so servers added to a rack can be plugged directly into the liquid-cooling manifold. In the field of high-speed network communications, the OCP Mezz (NIC) specification has become the industry standard for I/O options, and SONiC/SAI has been deployed commercially at large scale in the Internet, communications and other industries.
The OAM specification for Domain-Specific Architecture (DSA) design, which supports standardized access to multiple AI chips, can meet the explosive growth in demand for AI accelerators worldwide, while the BoW specification for chiplet interconnect allows chip manufacturers to mix and match chips built on different process nodes, enabling high-performance chip design across a variety of manufacturing technologies. The DC-SCM (Datacenter Secure Control Module) standard defines a security control management module that is decoupled from the motherboard, separating the computing and security management units and allowing further simplification of motherboard design.

Dr. Weifeng Zhang, Chief Scientist of Heterogeneous Computing at Alibaba Cloud, noted that in recent years there has been a clear trend toward decoupling computing system architectures to offset the slowing of Moore's Law. With ongoing advances in chip and interconnect technologies, interoperability between computing devices has become key to the sustainable development of future computing. Open hardware, open software, and hardware-software layered decoupling have emerged as prominent trends in data center development. This has also prompted vendors to shift from a closed, proprietary mentality to one that emphasizes open source and collaboration, giving more companies the opportunity to contribute to data center infrastructure innovation and inspiring more innovative ideas through global collaboration.

Traditional industries embrace open compute for ecological empowerment

Open compute promotes standardization and ecosystem building by forming consensus via open collaboration and enabling the delivery of infrastructure in line with open source specifications. This facilitates the rapid application of more innovative technologies.
This industrial ecosystem allows hyperscale data centers to apply open compute technologies at scale, and also encourages industry users and even SMEs to begin deploying cutting-edge solutions based on open compute. Open compute is rapidly expanding from the Internet sector to other industries such as telecommunications, finance, gaming, healthcare and auto manufacturing. Omdia predicts that the share of non-Internet industries in the open compute market will grow from 10.5% in 2020 to 21.9% in 2025. The unique technical edge, subtle design thinking, and ecosystem collaboration of open compute are breaking boundaries in data center innovation and enabling the convergence of more technologies. In the future, global collaboration and co-innovation around open compute will drive further data center advancement while addressing worldwide issues such as carbon emissions.

About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions, and the world's 2nd largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data centers, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges.
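To give a sense of why cold-plate liquid cooling is so attractive for dense racks, the basic sizing arithmetic is the sensible-heat equation Q = ṁ·cp·ΔT: the coolant flow needed scales with the heat load and inversely with the allowed temperature rise. A minimal sketch, where the rack load, temperature rise, and coolant properties are illustrative assumptions rather than figures from the article:

```python
# Back-of-the-envelope sketch of cold-plate liquid cooling capacity.
# All rack loads, temperatures, and coolant properties below are
# illustrative assumptions, not figures from the article.

def coolant_flow_rate(heat_load_kw: float,
                      delta_t_c: float,
                      specific_heat_kj_per_kg_c: float = 4.18,  # water
                      density_kg_per_l: float = 1.0) -> float:
    """Liters per second of coolant needed to absorb heat_load_kw
    with a coolant temperature rise of delta_t_c (Q = m_dot * cp * dT)."""
    mass_flow_kg_s = heat_load_kw / (specific_heat_kj_per_kg_c * delta_t_c)
    return mass_flow_kg_s / density_kg_per_l

# A hypothetical 40 kW rack with a 10 degree C coolant temperature rise
# needs roughly one liter of water per second through its cold plates.
print(f"{coolant_flow_rate(40.0, 10.0):.2f} L/s")
```

The same arithmetic explains the appeal of waste-heat reuse: the entire rack load leaves the facility as warm water at a usable temperature rather than as low-grade exhaust air.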

Read More

IT SYSTEMS MANAGEMENT

Juniper Networks Chosen by Jazz to Build Fully-Automated Data Center Infrastructure to Support Data, Music and Video Services

Juniper Networks | August 09, 2022

Juniper Networks, a leader in secure, AI-driven networks, today announced that it has been selected by Jazz, Pakistan’s number one 4G operator and the largest internet and leading digital service provider, to create a transformative, expanded and upgraded data center network to underpin Jazz’s services delivery platform for its 74.9 million subscribers. Jazz’s objective was to reimagine its architectural approach by leveraging continuous automation, assurance and data-driven insights to deliver a superior network user experience at scale while simplifying its operations. Jazz offers the broadest portfolio of value-added digital services to enterprises and subscribers in Pakistan and has built a reputation for cutting-edge innovation with the ability to scale cloud-based services quickly and reliably. Following a rigorous vendor-agnostic technology appraisal focused on the operational and cost efficiencies made possible by network automation, Jazz selected Juniper’s technology and expertise to underpin this latest project. Juniper’s advanced automation capabilities, which transform the entire network management lifecycle within a single system, were a standout in the market.

News Highlights

The new network will support a wide range of customer-facing services that demand reliability and fast data throughput to deliver a consistently strong user experience. These include cloud-based enterprise data services, mobile banking, music and video download/streaming services, as well as professional services such as an agricultural application that four million farmers rely on for information, advice and guidance in remote areas. Jazz will also use the network to power key internal workloads such as CRM and billing. Jazz will deploy the Juniper Apstra System to deliver true intent-based networking (IBN) capabilities.
This enables Jazz to design and operate its data center network based on outcomes, with the entire data center lifecycle automated, from Day 0 (design) through Day 1 (configuration and deployment) to Day 2+ (ongoing operations). The network’s initial design is tied to day-to-day operations, providing a single source of truth throughout its lifecycle. Automation provides a continuous feedback loop of real-time data insights, validation and root cause identification to minimize mean time to repair (MTTR). This approach will enable Jazz to operate a much more efficient, reliable and agile network, and will help it deploy new service features, optimizing the experience for both network teams and customers. The new data center infrastructure is a spine-and-leaf architecture built with Juniper Networks QFX Series Switches and fully integrated with the Juniper Apstra System. Jazz has previously deployed Juniper MX Series Universal Routing Platforms for 400G-ready connectivity in its metro and internet gateway infrastructure. The new QFX switches run the same Junos® OS operating system, giving Jazz a consistent networking estate to manage and operate.

“In common with all service providers globally, Jazz faces relentless data demand and heightened expectations for seamless digital services. As a result, we wanted to completely rethink our data center operations, using ground-breaking automation to create the best possible user experiences for our enterprise customers and subscribers. Operational simplicity was another important goal, to deliver cost reductions and improved ease of use for our technical teams in the face of massive demand at scale.
We evaluated multiple vendors, but Juniper’s ability to deliver the exact networking outcomes we needed made a highly strategic decision very straightforward.” Abdul Rehman Usmani, Vice President of Technology at Jazz

“The power of automation, bound within a single operational framework thanks to intent-based networking, enables Jazz to address the relevant operational questions, find the right answers quickly and make the best decisions. This means its network becomes a strategic business tool, leveraging data to deliver robust deployment and operational efficiencies and to eliminate traditional network constraints that force choices between speed and reliability. Based on data from other Juniper customers, the result will be dramatic savings on downstream costs and tremendous returns on networking investments.” Mike Bushong, Vice President, Cloud Ready Data Center at Juniper Networks

About Juniper Networks

Juniper Networks is dedicated to dramatically simplifying network operations and driving superior experiences for end users. Our solutions deliver industry-leading insight, automation, security and AI to drive real business results. We believe that powering connections will bring us closer together while empowering us all to solve the world’s greatest challenges of well-being, sustainability and equality.
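The spine-and-leaf fabric mentioned in the Jazz deployment scales in a predictable way: every leaf switch uplinks to every spine, so the link count and the oversubscription ratio fall out of simple arithmetic. A minimal sketch of that sizing math, where all switch counts and port speeds are hypothetical illustrations, not details of Jazz's actual design:

```python
# Rough sizing sketch for a two-tier spine-and-leaf fabric.
# Switch counts, port counts, and speeds are illustrative
# assumptions, not details of Jazz's deployment.

def fabric_links(spines: int, leaves: int) -> int:
    """Every leaf uplinks to every spine, so a full mesh between
    tiers needs spines * leaves point-to-point links."""
    return spines * leaves

def oversubscription(server_ports_per_leaf: int, server_gbps: int,
                     spines: int, uplink_gbps: int) -> float:
    """Ratio of southbound (server-facing) bandwidth to northbound
    (spine uplink) bandwidth on one leaf; 1.0 means non-blocking."""
    return (server_ports_per_leaf * server_gbps) / (spines * uplink_gbps)

# Hypothetical fabric: 4 spines, 16 leaves, 48 x 25G server ports
# per leaf, and one 100G uplink from each leaf to each spine.
print(fabric_links(4, 16))               # leaf-spine links in the mesh
print(oversubscription(48, 25, 4, 100))  # southbound:northbound ratio
```

Because any two servers are at most one spine hop apart, latency is uniform, and adding a leaf never requires recabling existing switches, which is part of what makes this topology amenable to the Day 0 to Day 2+ lifecycle automation described above.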

Read More

DATA STORAGE

Portworx by Pure Storage Recognized as the Leader in Kubernetes Storage for Three Consecutive Years by GigaOm

Pure Storage | August 08, 2022

Pure Storage®, the IT pioneer that delivers the world's most advanced data storage technology and services, today announced it was named the leader for the third consecutive year in the GigaOm Radar Report for Enterprise Kubernetes Storage, which analyzed enterprise storage systems with support for Kubernetes-based workloads, and in its companion report for Cloud-Native Kubernetes Data Storage, which analyzed Kubernetes-native storage solutions built specifically to support stateful containers with scalable, distributed architectures.

According to the GigaOm Radar Report for Cloud-Native Kubernetes Storage, Portworx® by Pure Storage "is one of the most advanced solutions for enterprise Kubernetes storage" and "remains the gold standard in cloud-native Kubernetes storage for the enterprise" as "a complete enterprise-grade solution with outstanding data management capabilities, unmatched deployment possibilities, and superior management features." Across criteria and evaluation metrics, GigaOm ranked Portworx a "strong focus and perfect fit" in advanced data services, advanced CSI integration, deployment models, control plane architecture, developer experience, visibility and insights, as well as architecture, scalability, flexibility, manageability, and performance. Portworx continues to advance its Kubernetes Data Platform, bringing databases such as Kafka, Cassandra, and Postgres under one platform simply and reliably with Portworx Data Services. The GigaOm Radar Report for Enterprise Kubernetes Storage noted that "the integration of Portworx Essentials on Pure Storage controller-based architectures significantly enhances data efficiency because users benefit from the data reduction capabilities offered by the storage arrays."
The report also highlights that this integration "allows organizations to seamlessly deploy cloud-native workloads on a proven Kubernetes storage solution, and as their needs grow, they can effortlessly migrate those workloads to the full Portworx solution if they decide to adopt it." Once again, Pure Storage received the highest scores across all market segments, deployment models, and evaluation metrics in the analysis.

"For three consecutive years, we've been recognized as a Leader and Outperformer by GigaOm Radar. Customers running containers and databases at scale in production use Portworx to ensure highly reliable, available and secure Kubernetes data storage capabilities. I'm incredibly proud of our Portworx engineering team's recognition by GigaOm as we continue on our mission to help enterprises unleash the power of data." Murli Thirumale, VP, GM Cloud Native Business Unit, Pure Storage

In addition to the GigaOm Radar Reports for Cloud-Native Kubernetes Data Storage and Enterprise Kubernetes Data Storage, Pure Storage has been consistently recognized as a leader across the other GigaOm reports for which it qualifies, including High-Performance Object Storage, Kubernetes Data Protection, and Enterprise General-Purpose Storage Systems.

About Pure Storage

Pure Storage uncomplicates data storage, forever. Pure Storage delivers a cloud experience that empowers every organization to get the most from its data while reducing the complexity and expense of managing the infrastructure behind it. Pure Storage's commitment to providing true storage as-a-service gives customers the agility to meet changing data needs at speed and scale, whether they are deploying traditional workloads, modern applications, containers, or more.
Pure Storage believes it can make a significant impact in reducing data center emissions worldwide through its environmental sustainability efforts, including designing products and solutions that enable customers to reduce their carbon and energy footprint. And with a certified customer satisfaction score in the top one percent of B2B companies, Pure Storage's ever-expanding roster of customers is among the happiest in the world.
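For context on what "Kubernetes-native storage for stateful containers" looks like in practice: a stateful workload such as a Postgres database requests storage through a PersistentVolumeClaim, which references a StorageClass backed by a CSI driver. A minimal sketch in Python of those two objects, where the class name, provisioner string, and parameters are hypothetical placeholders, not taken from Portworx documentation:

```python
# Sketch of the two Kubernetes objects a stateful database needs from a
# CSI-backed storage layer: a StorageClass (how volumes are provisioned)
# and a PersistentVolumeClaim (a request for one such volume).
# The provisioner name and parameters below are hypothetical placeholders.
import json

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "px-replicated"},      # hypothetical class name
    "provisioner": "example.csi.driver",        # placeholder CSI driver
    "parameters": {"repl": "3"},                # e.g. 3-way replication
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "postgres-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "px-replicated",    # binds PVC to the class
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

# A StatefulSet would mount the claim as a volume; here we just emit
# the manifests as JSON (a strict subset of YAML).
print(json.dumps([storage_class, pvc], indent=2))
```

The "advanced CSI integration" criterion in the GigaOm reports refers to how much a vendor layers on top of this basic flow, such as snapshots, replication, and migration of the volumes behind such claims.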

Read More

Events