Cisco and Pure Storage Announce Epic Infrastructure Solution

Boosted by the 2009 HITECH Act, EHR adoption by hospitals is now near-universal at 96%, according to an HHS report. Digital medical records have helped streamline workflows, reduce errors, and improve coordination of care. Thanks to technology, we’ve said goodbye to thick manila folders, unsecured paper, and cumbersome filing cabinets.

Spotlight

Shiftboard

Shiftboard provides online scheduling software (SaaS) to a broad range of business services and staffing companies, municipal governments, educational institutions, and non-profits. Shiftboard's customers range in size from 25 to 10,000 users and conduct scheduling and people management operations around the globe. Shiftboard's software is web-based, can be launched on short timelines, and is easy for workers to use.

OTHER ARTICLES
Application Infrastructure, Application Storage

Advancing 5G with cloud-native networking and intelligent infrastructure

Article | July 19, 2023

The success of 5G technology is a function of both the infrastructure that supports it and the ecosystems that enable it. Today, the definitive focus in the 5G space is on enterprise use cases, ranging from dedicated private 5G networks to accessing edge compute infrastructure and public or private clouds from the public 5G network. As a result, vendor-neutral multitenant data center providers and their rich interconnection capabilities are pivotal in helping make 5G a reality. This is true both in terms of the physical infrastructure needed to support 5G and the ability to effectively connect enterprises to 5G.

Industry experts expect 5G to enable emerging applications such as virtual and augmented reality (AR/VR), industrial robotics/controls as part of the industrial internet of things (IIoT), interactive gaming, autonomous driving, and remote medical procedures. These applications need a modern, cloud-based infrastructure to meet requirements around latency, cost, availability and scalability. This infrastructure must be able to provide real-time, high-bandwidth, low-latency access to latency-dependent applications distributed at the edge of the network.

How Equinix thinks about network slicing

Network slicing refers to the ability to provision and connect functions within a common physical network to provide the resources necessary to deliver service functionality under specific performance constraints (such as latency, throughput, capacity and reliability) and functional constraints (such as security and applications/services). With network slicing, enterprises can use 5G networks and services for a wide variety of use cases on the same infrastructure.

Providing continuity of network slices with optimal UPF placement and intelligent interconnection

Mobile traffic originates in the mobile network, but it is not confined to the mobile network domain, because it runs between the user app on a device and the server workload on multi-access edge compute (MEC) or on the cloud. Therefore, to preserve intended characteristics, the slice must be extended all the way to where the traffic wants to go. This is why we like to say “the slicing must go on.” The placement of network functions within the slice must be optimized relative to the intended traffic flow, so that performance can be ensured end-to-end. As a result, organizations must place or activate the user plane function (UPF) in optimal locations relative to the end-to-end user plane traffic flow.

We expect that hybrid and multicloud connectivity will remain a key requirement for enterprises using 5G access. In this case, hybrid refers to private edge computing resources (what we loosely call “MEC”) located in data centers—such as Equinix International Business Exchange™ (IBX®) data centers—and multicloud refers to accessing multiple cloud providers from 5G devices. To ensure both hybrid and multicloud connectivity, enterprises need to make the UPF part of the multidomain virtual Layer 2/Layer 3 interconnection fabric. Because a slice must span multiple domains, automation of UPF activation, provisioning and virtual interconnection to edge compute and multicloud environments is critical.

Implementing network slicing for interconnection of core and edge technology

Equinix partnered with Kaloom to develop network slicing for interconnection of core and edge (NICE) technology within our 5G and Edge Technology Development Center (5G ETDC) in Dallas. NICE technology is built using cloud-native network fabric and high-performance 5G UPF from Kaloom. This is a production-ready software solution, running on white boxes built with P4 programmable application-specific integrated circuits (ASICs), allowing for deep network slicing and support for high-performance 5G UPF with extremely fast data transfer rates.

With NICE technology in the 5G ETDC, Equinix demonstrates:
- 5G UPF deployment/activation and traffic breakout at Equinix for multiple slices.
- Software-defined interconnection between the 5G core and MEC resources from multiple providers.
- Software-defined interconnection between the 5G core and multiple cloud service providers.
- Orchestration of provisioning and automation of interconnection across the 5G core, MEC and cloud resources.

Architecture of NICE technology in the Equinix 5G ETDC

The image above shows (from left to right):
- The mobile domain with radio access network (RAN), devices (simulated) and mobile backhaul connected to Equinix.
- The Equinix domain with:
  - Equinix Metal® supporting edge computing servers and a fabric controller from Kaloom.
  - Network slicing fabric providing interconnection and Layer 2/Layer 3 cloud-native networking to dynamically activate UPF instances/interfaces connected with MEC environments and clouds, forming two slices (shown above in blue and red).
  - Equinix Fabric™ and multicloud connectivity.

This demonstrates the benefit of having the UPF as a feature of the interconnection fabric, effectively allowing UPF activation as part of the virtual fabric configuration. This ultimately enables high-performance UPF that’s suitable for use cases such as high-speed 5G fixed wireless access. Combining UPF instances and MEC environments into an interconnection fabric makes it possible to create continuity for the slices and influence performance and functionality. Equinix Fabric adds multicloud connectivity to slices, enabling organizations to directly integrate network slicing with their mobile hybrid multicloud architectures.

“Successful private 5G edge deployments deliver value in several ways. Primarily, they offer immediate access to locally provisioned elastic compute, storage and networking resources that deliver the best user and application experiences. In addition, they help businesses access a rich ecosystem of partners to unlock new technologies at the edge. Secure, reliable connectivity and scalable resources are essential at the edge. A multivendor strategy with best-of-breed components complemented by telemetry, advanced analytics with management and orchestration—as demonstrated with NICE in Equinix data centers—is a most effective way to meet those requirements. With Equinix’s global footprint of secure, well-equipped facilities, customers can maximize benefits.” - Suresh Krishnan, CTO, Kaloom

Equinix and its partners are building the future of 5G

NICE technology is just one example of how the Equinix 5G and Edge Technology Development Center enables the innovation and development of real-world capabilities that underpin the edge computing and interconnection infrastructure required to successfully implement 5G use cases. A key benefit of the 5G ETDC is the ability to combine cutting-edge innovations from our partners like Kaloom with proven solutions from Equinix that already serve a large ecosystem of customers actively utilizing hybrid multicloud architectures.
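The slice-continuity idea above — carrying a slice's latency and throughput constraints through to the site where the UPF is activated — can be illustrated with a small, hypothetical sketch. The data model and the placement rule below are illustrative assumptions only, not Equinix's or Kaloom's actual APIs or orchestration logic.

```python
# Illustrative sketch only: a toy model of choosing where to activate a UPF
# for a network slice so that end-to-end constraints are preserved.
# The classes and the placement rule are hypothetical, not a vendor API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SliceSpec:
    name: str
    max_latency_ms: float        # end-to-end latency budget for the slice
    min_throughput_mbps: float   # required user-plane throughput

@dataclass
class EdgeSite:
    name: str
    latency_to_workload_ms: float   # measured latency to the MEC/cloud workload
    free_capacity_mbps: float       # spare user-plane capacity at the site

def place_upf(slice_spec: SliceSpec, sites: list[EdgeSite]) -> Optional[EdgeSite]:
    """Pick the lowest-latency site that satisfies the slice's constraints."""
    feasible = [
        s for s in sites
        if s.latency_to_workload_ms <= slice_spec.max_latency_ms
        and s.free_capacity_mbps >= slice_spec.min_throughput_mbps
    ]
    return min(feasible, key=lambda s: s.latency_to_workload_ms, default=None)

if __name__ == "__main__":
    sites = [
        EdgeSite("metro-dc-1", latency_to_workload_ms=4.0, free_capacity_mbps=8000),
        EdgeSite("regional-dc", latency_to_workload_ms=18.0, free_capacity_mbps=40000),
    ]
    ar_slice = SliceSpec("ar-vr", max_latency_ms=10.0, min_throughput_mbps=2000)
    fwa_slice = SliceSpec("fixed-wireless", max_latency_ms=30.0, min_throughput_mbps=20000)
    for sl in (ar_slice, fwa_slice):
        chosen = place_upf(sl, sites)
        print(sl.name, "->", chosen.name if chosen else "no feasible site")
```

In a real deployment this decision would be made by the orchestration layer described above, which also provisions the virtual interconnection to the chosen MEC or cloud endpoint; the sketch only shows the constraint-matching step.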

Read More
Application Storage, Data Storage

Accelerating DevOps and Continuous Delivery with IaaS Virtualization

Article | July 12, 2023

Adopting DevOps and CD in IaaS environments is a strategic imperative for organizations seeking to achieve agility, competitiveness, and customer satisfaction in their software delivery processes.

Contents
1. Introduction
2. What is IaaS Virtualization?
3. Virtualization Techniques for DevOps and Continuous Delivery
4. Integration of IaaS with CI/CD Pipelines
5. Considerations in IaaS Virtualized Environments
5.1. CPU Swap Wait
5.2. CPU System/Wait Time for VKernel
5.3. Memory Balloon
5.4. Memory Swap Rate
5.5. Memory Usage
5.6. Disk/Network Latency
6. Industry Tips for IaaS Virtualization Implementation
6.1. Infrastructure Testing
6.2. Application Testing
6.3. Security Monitoring
6.4. Performance Monitoring
6.5. Cost Optimization
7. Conclusion

1. Introduction

Infrastructure as a Service (IaaS) virtualization presents significant advantages for organizations seeking to enhance their agility, flexibility, and speed to market within the DevOps and continuous delivery frameworks. Addressing the associated risks and challenges is crucial and can be achieved by employing the appropriate monitoring and testing techniques, described further in this blog. IaaS virtualization allows organizations to provision and de-provision resources as needed, eliminating the need for long-term investments in hardware and data centers. Furthermore, IaaS virtualization offers the ability to operate with multiple operating systems, databases, and programming languages, empowering teams to select the tools and technologies that best suit their requirements. However, organizations must implement comprehensive testing and monitoring strategies, ensure proper security and compliance controls, and adopt the best resource optimization and management practices to leverage the full potential of virtualized IaaS. To help achieve high availability, fault tolerance, and the advanced networking that enables complex application architectures in virtualized IaaS, the blog offers five industry tips.

2. What is IaaS Virtualization?

IaaS virtualization involves simultaneously running multiple operating systems with different configurations. To run virtual machines on a system, a software layer known as the virtual machine monitor (VMM) or hypervisor is required. Virtualization in IaaS handles website hosting, application development and testing, disaster recovery, and data storage and backup. Startups and small businesses with limited IT resources and budgets can benefit greatly from virtualized IaaS, enabling them to provide the necessary infrastructure resources quickly and without significant capital expenditures. Virtualized IaaS is a potent tool for businesses and organizations of all sizes, enabling greater infrastructure resource flexibility, scalability, and efficiency.

3. Virtualization Techniques for DevOps and Continuous Delivery

Virtualization is a vital part of the DevOps software stack. Virtualization in the DevOps process allows teams to create, test, and implement code in simulated environments without wasting valuable computing resources. DevOps teams can use virtual services for thorough testing, preventing bottlenecks that could slow down release time. DevOps also relies heavily on virtualization for building intricate cloud, API, and SOA systems. In addition, virtual machines benefit test-driven development (TDD) teams that prefer to begin their troubleshooting at the API level.

4. Integration of IaaS with CI/CD Pipelines

Continuous integration is a coding practice that frequently implements small code changes and checks them into a version control repository. This process not only packages software and database components but also automatically executes unit tests and other tests to provide developers with vital feedback on any potential breakages caused by code changes. Continuous testing integrates automated tests into the CI/CD pipeline. For example, unit and functionality tests identify issues during continuous integration, while performance and security tests are executed after a build is delivered in continuous delivery. Continuous delivery is the process of automating the deployment of applications to one or more delivery environments.

IaaS provides access to computing resources through a virtual server instance, which replicates the capabilities of an on-premise data center. It also offers various services, including server space, security, load balancing, and additional bandwidth. In modern software development and deployment, it is common to integrate IaaS with CI/CD pipelines. This helps automate the creation and management of infrastructure using infrastructure-as-code (IaC) tools. Templates can be created to provision resources on the IaaS platform, ensuring consistency and meeting software requirements. Additionally, containerization technologies like Docker and Kubernetes can deploy applications on IaaS platforms.

5. Considerations in IaaS Virtualized Environments

5.1. CPU Swap Wait

CPU swap wait is the time the virtual system spends waiting while the hypervisor swaps parts of the VM memory back in from disk. This happens when the hypervisor needs to swap, which can be due to a lack of balloon drivers or a memory shortage, and it can affect the application's response time. Installing the balloon driver and/or reducing the number of VMs on the physical machine can resolve this issue.

5.2. CPU System/Wait Time for VKernel

Virtualization systems often report CPU system/wait time for the virtualization kernel used by each virtual machine to measure CPU resource overhead. While this metric can't be directly linked to response time, it can impact both ready and swap times if it increases significantly. If this occurs, it could indicate that the system is either misconfigured or overloaded, and reducing the number of VMs on the machine may be necessary.

5.3. Memory Balloon

Memory ballooning is a memory management technique used in virtualized IaaS environments. It works through a balloon driver installed inside the VM: when the host system is running low on memory, the hypervisor instructs the balloon driver to inflate, consuming memory within the guest so that the underlying physical pages can be reclaimed for other workloads. This effectively takes memory away from the guest and can negatively affect its performance, causing swapping, reduced file-system buffers, and smaller system caches.

5.4. Memory Swap Rate

Memory swap rate is a performance metric used in virtualized IaaS environments to measure the amount of memory being swapped to disk. When the swap rate is high, it leads to longer CPU swap times and negatively affects application performance. In addition, when a VM is running, it may require more memory than is physically available on the server; in such cases, the hypervisor may use disk space as a temporary storage area for excess memory. To optimize, it is therefore important to ensure that VMs have sufficient memory resources allocated.

5.5. Memory Usage

Memory usage refers to the amount of memory being used by a VM at any given time. Memory usage is assessed by analyzing the host level, VM level, and granted memory. When memory usage exceeds the available physical memory on the server, the hypervisor may use disk space as a temporary storage area for excess memory, leading to performance issues. The disparity between used and granted memory indicates the overcommitment rate, which can be adjusted through ballooning.

5.6. Disk/Network Latency

Some virtualization providers offer integrated utilities for assessing the latency of the disks and network interfaces used by a virtual machine. Since latency directly affects response time, increased latency at the hypervisor level will also impact the application. Excessive latency indicates the system is overloaded and requires reconfiguration. These metrics make it possible to monitor and detect any negative impact a virtualized system might have on an application.

6. Industry Tips for IaaS Virtualization Implementation

Testing, compliance management, and security are critical aspects of managing virtualized IaaS environments. By implementing a comprehensive strategy, organizations ensure the reliability, security, and performance of their infrastructure and applications.

6.1. Infrastructure Testing

This involves testing the infrastructure components of the IaaS environment, such as the virtual machines, networks, and storage, aiming to ensure the infrastructure is functioning correctly and that there are no performance bottlenecks, security vulnerabilities, or configuration issues. Testing the virtualized environment, storage testing (testing data replication and backup and recovery processes), and network testing are some of the techniques to be performed.

6.2. Application Testing

Applications running on the IaaS virtual environment should be thoroughly tested to ensure they perform as expected. This includes functional testing to ensure that the application meets its requirements and performance testing to ensure that the application can handle anticipated user loads.

6.3. Security Monitoring

Security monitoring is critical in IaaS environments, owing to the increased risks and threats. This involves monitoring the infrastructure and applications for potential security threats, vulnerabilities, or breaches. In addition, regular vulnerability assessments and penetration testing help identify and address potential security issues before they become significant problems.

6.4. Performance Monitoring

Performance monitoring is essential to ensuring that the underlying infrastructure meets performance expectations and has no performance bottlenecks. This comprises monitoring metrics such as CPU usage, memory usage, network traffic, and disk utilization. This information is used to identify performance issues and optimize resource usage.

6.5. Cost Optimization

Cost optimization is a critical aspect of running a virtualized IaaS environment efficiently. Organizations reduce costs and optimize resource usage by identifying and monitoring usage patterns and optimizing elastic and scalable resources. It involves right-sizing resources, utilizing infrastructure automation, reserved instances, spot instances (unused compute capacity purchased at a discount), and optimizing storage usage.

7. Conclusion

IaaS virtualization has become a critical component of DevOps and continuous delivery practices. By providing DevOps teams with on-demand access to scalable infrastructure resources, it enables them to rapidly develop, test, and deploy applications with greater agility and efficiency. As DevOps teams continue to seek ways to streamline processes and improve efficiency, automation will play an increasingly important role. Automated deployment, testing, and monitoring processes will help reduce manual intervention and increase the speed and accuracy of development cycles. In addition, containers will offer a lightweight and flexible alternative to traditional virtualization, allowing DevOps teams to package applications and their dependencies into portable, self-contained units that can be easily moved between different environments. This can reduce the complexity of managing virtualized infrastructure environments and enable greater flexibility and scalability. By embracing these technologies and integrating them into their workflows, DevOps teams can achieve greater efficiency and accelerate their delivery of high-quality software products.
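As a rough illustration of the monitoring considerations in section 5, the sketch below checks hypothetical per-VM hypervisor metrics (CPU swap wait, balloon size, swap rate, memory usage versus granted memory) against simple thresholds. The metric names, threshold values, and the way samples are obtained are assumptions for illustration only, not the counters or API of any specific hypervisor or monitoring product.

```python
# Illustrative only: toy threshold checks for the virtualization metrics
# discussed in section 5. Metric names and thresholds are assumed values,
# not the counters exposed by any specific hypervisor.
from dataclasses import dataclass

@dataclass
class VMetrics:
    vm_name: str
    cpu_swap_wait_ms: float      # time spent waiting on hypervisor swap-in
    balloon_mb: float            # memory currently claimed by the balloon driver
    swap_rate_mb_s: float        # guest memory swapped to disk per second
    mem_used_mb: float
    mem_granted_mb: float

def check(m: VMetrics, max_swap_wait_ms=50.0, max_balloon_mb=512.0,
          max_swap_rate=1.0) -> list[str]:
    """Return human-readable warnings when a metric crosses its threshold."""
    warnings = []
    if m.cpu_swap_wait_ms > max_swap_wait_ms:
        warnings.append(f"{m.vm_name}: high CPU swap wait ({m.cpu_swap_wait_ms} ms)")
    if m.balloon_mb > max_balloon_mb:
        warnings.append(f"{m.vm_name}: balloon inflated to {m.balloon_mb} MB")
    if m.swap_rate_mb_s > max_swap_rate:
        warnings.append(f"{m.vm_name}: swapping at {m.swap_rate_mb_s} MB/s")
    overcommit = m.mem_used_mb / m.mem_granted_mb if m.mem_granted_mb else 0.0
    if overcommit > 0.9:
        warnings.append(f"{m.vm_name}: memory usage at {overcommit:.0%} of granted")
    return warnings

if __name__ == "__main__":
    sample = VMetrics("build-agent-01", cpu_swap_wait_ms=120.0, balloon_mb=768.0,
                      swap_rate_mb_s=2.5, mem_used_mb=7400, mem_granted_mb=8192)
    for w in check(sample):
        print("WARN:", w)
```

In practice these checks would be fed by the hypervisor's own monitoring interface and wired into the CI/CD pipeline's alerting, but the thresholds-and-warnings pattern is the same.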

Read More
Hyper-Converged Infrastructure

Infrastructure as code vs. platform as code

Article | October 3, 2023

With infrastructure as code (IaC), you write declarative instructions about the compute, storage and network requirements for the infra and execute them. How does this compare to platform as code (PaC), and what did these two concepts develop in response to? In its simplest form, the tech stack of any application has three layers — the infra layer containing bare metal instances, virtual machines, networking, firewalls, security and so on; the platform layer with the OS, runtime environment, development tools and so on; and the application layer which, of course, contains your application code and data. A typical operations team works on the provisioning, monitoring and management of the infra and platform layers, in addition to enabling the deployment of code.
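As a rough sketch of the "declarative instructions" idea, the snippet below describes desired compute, storage, and network resources as plain data and hands them to a step that computes the actions needed to reach that state. The spec schema and the apply() function are hypothetical stand-ins for what real IaC tools such as Terraform or Pulumi do, not their actual syntax or APIs.

```python
# Illustrative only: a declarative infrastructure spec expressed as data.
# The schema and apply() are hypothetical stand-ins for a real IaC tool.
desired_infra = {
    "compute": [
        {"name": "web", "instance_type": "small", "count": 2},
        {"name": "db",  "instance_type": "large", "count": 1},
    ],
    "storage": [{"name": "db-data", "size_gb": 100}],
    "network": {"vpc_cidr": "10.0.0.0/16", "open_ports": [80, 443]},
}

def apply(spec: dict, current_state: dict) -> list[str]:
    """Diff desired state against current state and return a plan of actions.
    A real IaC engine would then execute this plan against the cloud API."""
    plan = []
    for vm in spec["compute"]:
        have = current_state.get("compute", {}).get(vm["name"], 0)
        if have < vm["count"]:
            plan.append(f"create {vm['count'] - have} x {vm['instance_type']} '{vm['name']}'")
        elif have > vm["count"]:
            plan.append(f"destroy {have - vm['count']} '{vm['name']}' instance(s)")
    return plan

if __name__ == "__main__":
    # Pretend one 'web' instance already exists; the plan converges to the spec.
    print(apply(desired_infra, {"compute": {"web": 1}}))
```

The key point the article makes holds here: you declare what the infra layer should look like, and the tooling works out how to get there, rather than scripting each provisioning step imperatively.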

Read More
Hyper-Converged Infrastructure

Adapting to Changing Landscape: Challenges and Solutions in HCI

Article | October 3, 2023

Navigating the complex terrain of Hyper-Converged Infrastructure: unveiling the best practices and innovative strategies to harness the maximum benefits of HCI for business transformation.

Contents
1. Introduction to Hyper-Converged Infrastructure
1.1 Evolution and Adoption of HCI
1.2 Importance of Adapting to the Changing HCI Environment
2. Challenges in HCI
2.1 Integration & Compatibility: Legacy System Integration
2.2 Efficient Lifecycle: Firmware & Software Management
2.3 Resource Forecasting: Scalability Planning
2.4 Workload Segregation: Performance Optimization
2.5 Latency Optimization: Data Access Efficiency
3. Solutions for Adapting to the Changing HCI Landscape
3.1 Interoperability
3.2 Lifecycle Management
3.3 Capacity Planning
3.4 Performance Isolation
3.5 Data Locality
4. Importance of Ongoing Adaptation in the HCI Domain
4.1 Evolving Technology
4.2 Performance Optimization
4.3 Scalability and Flexibility
4.4 Security and Compliance
4.5 Business Transformation
5. Key Takeaways from the Challenges and Solutions Discussed

1. Introduction to Hyper-Converged Infrastructure

1.1 Evolution and Adoption of HCI

Hyper-Converged Infrastructure has transformed data centers by providing a consolidated and software-defined approach to infrastructure. HCI combines virtualization, storage, and networking into a single integrated system, simplifying management and improving scalability. It has gained widespread adoption due to its ability to address the challenges of data center consolidation, virtualization, and resource efficiency. HCI solutions have evolved to offer advanced features like hybrid and multi-cloud support, data deduplication, and disaster recovery, making them suitable for various workloads. The HCI market has experienced significant growth, with a diverse ecosystem of vendors offering turnkey appliances and software-defined solutions. It has become the preferred infrastructure for running workloads like VDI, databases, and edge computing. HCI's ability to simplify operations, improve resource utilization, and support diverse workloads ensures its continued relevance.

1.2 Importance of Adapting to the Changing HCI Environment

Adapting to the changing Hyper-Converged Infrastructure landscape is of utmost importance for businesses, as HCI offers a consolidated and software-defined approach to IT infrastructure, enabling streamlined management, improved scalability, and cost-effectiveness. Staying up to date with evolving HCI technologies and trends enables businesses to leverage the latest advancements for optimizing their operations. Embracing HCI enables organizations to enhance resource utilization, accelerate deployment times, and support a wide range of workloads. In addition, it facilitates seamless integration with emerging technologies like hybrid and multi-cloud environments, containerization, and data analytics. Businesses can stay competitive, enhance their agility, and unlock the full potential of their IT infrastructure.

2. Challenges in HCI

2.1 Integration and Compatibility: Legacy System Integration

Integrating Hyper-Converged Infrastructure with legacy systems can be challenging due to differences in architecture, protocols, and compatibility issues. Existing legacy systems may not seamlessly integrate with HCI solutions, leading to potential disruptions, data silos, and operational inefficiencies. This may hinder the organization's ability to fully leverage the benefits of HCI and limit its potential for streamlined operations and cost savings.

2.2 Efficient Lifecycle: Firmware and Software Management

Managing firmware and software updates across the HCI infrastructure can be complex and time-consuming. Ensuring that all components within the HCI stack, including compute, storage, and networking, are running the latest firmware and software versions is crucial for security, performance, and stability. However, coordinating and applying updates across the entire infrastructure can pose challenges, resulting in potential vulnerabilities, compatibility issues, and suboptimal system performance.

2.3 Resource Forecasting: Scalability Planning

Forecasting resource requirements and planning for scalability in an HCI environment is as crucial as efficiently implementing HCI systems. As workloads grow or change, accurately predicting the necessary computing, storage, and networking resources becomes essential. Without proper resource forecasting and scalability planning, organizations may face underutilization or overprovisioning of resources, leading to increased costs, performance bottlenecks, or inefficient resource allocation.

2.4 Workload Segregation: Performance Optimization

In an HCI environment, effectively segregating workloads to optimize performance can be challenging. Workloads with varying resource requirements and performance characteristics may coexist within the HCI infrastructure. Ensuring that high-performance workloads receive the necessary resources and do not impact other workloads' performance is critical. Failure to segregate workloads properly can result in resource contention, degraded performance, and potential bottlenecks, affecting the overall efficiency and user experience.

2.5 Latency Optimization: Data Access Efficiency

Optimizing data access latency in an HCI environment is a rising challenge. HCI integrates computing and storage into a unified system, and data access latency can significantly impact performance. Inefficient data retrieval and processing can lead to increased response times, reduced user satisfaction, and potential productivity losses. Such latency typically results from failing to tune data access patterns, caching mechanisms, and network configurations to minimize latency and maximize data access efficiency within the HCI infrastructure.

3. Solutions for Adapting to the Changing HCI Landscape

3.1 Interoperability

Achieved by: Standards-based Integration and APIs

HCI solutions should prioritize adherence to industry standards and provide robust support for APIs. By leveraging standardized protocols and APIs, HCI can seamlessly integrate with legacy systems, ensuring compatibility and smooth data flow between different components. This promotes interoperability, eliminates data silos, and enables organizations to leverage their existing infrastructure investments while benefiting from the advantages of HCI.

3.2 Lifecycle Management

Achieved by: Centralized Firmware and Software Management

Efficient lifecycle management in Hyper-Converged Infrastructure can be achieved by implementing a centralized management system that automates firmware and software updates across the HCI infrastructure. This solution streamlines the process of identifying, scheduling, and deploying updates, ensuring that all components are running the latest versions. Centralized management reduces manual efforts, minimizes the risk of compatibility issues, and enhances security, stability, and overall system performance.

3.3 Capacity Planning

Achieved by: Analytics-driven Resource Forecasting

HCI solutions should incorporate analytics-driven capacity planning capabilities. By analyzing historical and real-time data, HCI systems can accurately predict resource requirements and assist organizations in scaling their infrastructure proactively (a minimal forecasting sketch appears at the end of this article). This solution enables efficient resource utilization, avoids underprovisioning or overprovisioning, and optimizes cost savings while ensuring that performance demands are met.

3.4 Performance Isolation

Achieved by: Quality of Service and Resource Allocation Policies

To achieve effective workload segregation and performance optimization, HCI solutions should provide robust Quality of Service (QoS) mechanisms and flexible resource allocation policies. QoS settings allow organizations to prioritize critical workloads, allocate resources based on predefined policies, and enforce performance guarantees for specific applications or users. This solution ensures that high-performance workloads receive the necessary resources while preventing resource contention and performance degradation for other workloads.

3.5 Data Locality

Achieved by: Data Tiering and Caching Mechanisms

To address latency optimization and data access efficiency, HCI solutions must incorporate data tiering and caching mechanisms. By intelligently placing frequently accessed data closer to the compute resources, such as utilizing flash storage or caching algorithms, HCI systems can minimize data access latency and improve overall performance. This solution enhances data locality, reduces network latency, and ensures faster data retrieval, resulting in optimized application response times and improved user experience.

4. Importance of Ongoing Adaptation in the HCI Domain

Continuous adaptation is of the utmost importance in the HCI domain. HCI is a swiftly advancing technology that continues to provide new capabilities. Organizations are able to maximize the benefits of HCI and maintain a competitive advantage if they stay apprised of the most recent advancements and adapt to the changing environment. Here are key reasons highlighting the significance of ongoing adaptation in the HCI domain:

4.1 Evolving Technology

HCI is constantly changing, with new features, functionalities, and enhancements being introduced regularly. Ongoing adaptation allows organizations to take advantage of these advancements and incorporate them into their infrastructure. It ensures that businesses stay up to date with the latest technological trends and can make informed decisions to optimize their HCI deployments.

4.2 Performance Optimization

Continuous adaptation enables organizations to fine-tune their HCI environments for optimal performance. By staying informed about performance best practices and emerging optimization techniques, businesses can make necessary adjustments to maximize resource utilization, improve workload performance, and enhance overall system efficiency. Ongoing adaptation ensures that HCI deployments are continuously optimized to meet evolving business requirements.

4.3 Scalability and Flexibility

Adapting to the changing HCI landscape facilitates scalability and flexibility. As business needs evolve, organizations may require the ability to scale their infrastructure, accommodate new workloads, or adopt hybrid or multi-cloud environments. Ongoing adaptation allows businesses to assess and implement the necessary changes to their HCI deployments, ensuring they can seamlessly scale and adapt to evolving demands.

4.4 Security and Compliance

The HCI domain is not immune to security threats and compliance requirements. Ongoing adaptation helps organizations stay vigilant and up to date with the latest security practices, threat landscapes, and regulatory changes. It enables businesses to implement robust security measures, proactively address vulnerabilities, and maintain compliance with industry standards and regulations. Ongoing adaptation ensures that HCI deployments remain secure and compliant in the face of evolving cybersecurity challenges.

4.5 Business Transformation

Ongoing adaptation in the HCI domain supports broader business transformation initiatives. Organizations undergoing digital transformation may need to adopt new technologies, integrate with cloud services, or embrace emerging trends like edge computing. Adapting the HCI infrastructure allows businesses to align their IT infrastructure with strategic objectives, enabling seamless integration, improved agility, and the ability to capitalize on emerging opportunities.

Adaptation is thus crucial in the HCI domain, as it enables organizations to stay current with technological advancements, optimize performance, scale infrastructure, enhance security, and align with business transformation initiatives. By continuously adapting to the evolving HCI landscape, businesses can maximize the value and benefits derived from their HCI investments.

5. Key Takeaways from the Challenges and Solutions Discussed

Hyper-Converged Infrastructure poses several challenges during the implementation and execution of systems that organizations need to address for optimal performance. Integration and compatibility issues arise when integrating HCI with legacy systems, requiring standards-based integration and API support. Efficient lifecycle management is crucial, involving centralized firmware and software management to automate updates and enhance security and stability. Accurate resource forecasting is vital for capacity planning, enabling organizations to scale their HCI infrastructure effectively. Workload segregation demands QoS mechanisms and flexible resource allocation policies to optimize performance. Apart from these, latency optimization requires data tiering and caching mechanisms to minimize data access latency and improve application response times. By tackling these challenges and implementing appropriate solutions, businesses can harness the full potential of HCI, streamlining operations, maximizing resource utilization, and ensuring exceptional performance and user experience.
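As referenced in section 3.3, here is a minimal sketch of analytics-driven capacity forecasting: a least-squares trend fitted over historical utilization samples is extrapolated to estimate when a cluster would cross a capacity threshold. The sample data, the 80% threshold, and the daily sampling interval are assumptions for illustration; production HCI analytics engines are considerably more sophisticated.

```python
# Illustrative only: project when cluster utilization crosses a capacity
# threshold, using a simple least-squares trend over daily samples.
# The data points and the 80% threshold are assumed values.

def linear_fit(ys: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept for y sampled at x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def days_until(ys: list[float], threshold: float):
    """Days from the last sample until the fitted trend reaches threshold."""
    slope, intercept = linear_fit(ys)
    if slope <= 0:
        return None  # utilization is flat or shrinking; no crossing predicted
    crossing_x = (threshold - intercept) / slope
    return max(0.0, crossing_x - (len(ys) - 1))

if __name__ == "__main__":
    daily_cpu_util = [0.52, 0.54, 0.55, 0.58, 0.60, 0.63, 0.64]  # fraction of capacity
    eta = days_until(daily_cpu_util, threshold=0.80)
    if eta is None:
        print("No growth trend detected")
    else:
        print(f"Estimated days until 80% CPU capacity: {eta:.1f}")
```

The same pattern applies to storage and network headroom; feeding the projection into procurement or node-expansion planning is what turns the forecast into proactive scaling.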

Read More

Related News

Application Infrastructure

dxFeed Launches Market Data IaaS Project for Tradu, Assumes Infrastructure and Data Provision Responsibilities

PR Newswire | January 25, 2024

dxFeed, a global leader in data solutions and index management for the financial industry, announces the launch of an Infrastructure as a Service (IaaS) project for Tradu, an advanced multi-asset trading platform catering to active traders and investors. In this venture, dxFeed manages the crucial aspects of infrastructure and data provision for Tradu.

As an award-winning IaaS provider (named Best Infrastructure Provider at the Sell-Side Technology Awards 2023), dxFeed is poised to address all technical challenges related to market data delivery to hundreds of thousands of end users, allowing Tradu to focus on its core business objectives. Users worldwide can seamlessly connect to Tradu's platform, receiving authorization tokens for access to high-quality market data from the EU, US, Hong Kong, and Australian exchanges. This approach eliminates the complexities and bottlenecks associated with building, maintaining, and scaling the infrastructure required for such extensive global data access.

dxFeed's scalable, low-latency infrastructure ensures the delivery of consolidated, top-notch market data from diverse sources to clients located in Asia, the Americas, and Europe. With the ability to rapidly reconfigure and accommodate growing performance demands, dxFeed is equipped to serve hundreds of thousands of concurrent clients, with the potential to scale the solution even further to meet constantly growing demand while providing a seamless and reliable experience.

One of the highlights of this collaboration is the introduction of brand-new data feed services exclusively for Tradu's Stocks platform. This proprietary solution enhances Tradu's offerings and demonstrates dxFeed's commitment to delivering tailored and innovative solutions. Tradu also benefits from dxFeed's Stocks Radar—a comprehensive technical and fundamental market analysis solution. This Software as a Service (SaaS) seamlessly integrates with the infrastructure, offering added value to traders and investors by simplifying complex analytical tasks. Moreover, Tradu leverages the advantages of dxFeed's composite feed (the winner at The Technical Analyst Awards). This accolade reinforces dxFeed's commitment to delivering excellence in data provision, further solidifying Tradu's position as a global leader in online foreign exchange.

"When we were thinking of our new sophisticated multi-asset trading platform for the active trader and investors we met with the necessity of expanding instrument and user numbers. We realized we needed a highly competent, professional team to deploy the infrastructure, taking into account the peculiarities of our processes and services," said Brendan Callan, CEO of Tradu. "On the one hand, it allows our clients to receive quality consolidating data from multiple sources. On the other hand, as a leading global provider of online foreign exchange, we can dispose of dxFeed's geo-scalable infrastructure and perform rapid reconfiguration to meet growing performance demands to provide data to hundreds of thousands of our clients around the globe."

"The range of businesses finding the Market Data IaaS (Infrastructure as a Service) model appealing continues to expand. This approach is gaining traction among various enterprises, from agile startups seeking rapid development to established, prominent brands acknowledging the strategic benefits of delegating market data infrastructure to specialized firms," said Oleg Solodukhin, CEO of dxFeed.

By taking on the responsibilities of infrastructure and data provision, dxFeed empowers Tradu to focus on innovation and client satisfaction, setting the stage for a transformative journey in the dynamic world of financial trading.

About dxFeed

dxFeed is a leading market data and services provider and calculation agent for the capital markets industry. According to the WatersTechnology 2022 IMD & IRD awards honors, it's the "Most Innovative Market Data Project." dxFeed focuses primarily on delivering financial information and services to buy- and sell-side institutions in global markets, both traditional and crypto. That includes brokerages, prop traders, exchanges, individuals (traders, quants, and portfolio managers), and academia (educational institutions and researchers). Follow us on Twitter, Facebook, and LinkedIn. Contact dxFeed: pr@dxfeed.com

About Tradu

Tradu is headquartered in London with offices around the world. The global Tradu team speaks more than two dozen languages and prides itself on its responsive and helpful client support. Stratos also operates FXCM, an FX and CFD platform founded in 1999. Stratos will continue to offer FXCM services alongside Tradu's multi-asset platform.

Read More

IT Systems Management

ICANN ANNOUNCES GRANT PROGRAM TO SPUR INNOVATION

PR Newswire | January 16, 2024

The Internet Corporation for Assigned Names and Numbers (ICANN), the nonprofit organization that coordinates the Domain Name System (DNS), announced today the ICANN Grant Program, which will make millions of dollars in funding available to develop projects that support the growth of a single, open and globally interoperable Internet. ICANN is opening an application cycle for the first $10 million in grants in March 2024.

Internet connectivity continues to increase worldwide, particularly in developing countries. According to the International Telecommunication Union (ITU), an estimated 5.3 billion people used the Internet as of 2022, a growth rate of 6.1% over 2021. The Grant Program will support this next phase of global Internet growth by fostering an inclusive and transparent approach to developing stable, secure Internet infrastructure solutions that support the Internet's unique identifier systems.

"With the rapid evolution of emerging technologies, businesses and security models, it is critical that the Internet's unique identifier systems continue to evolve," said Sally Costerton, Interim President and CEO, ICANN. "The ICANN Grant Program offers a new avenue to further those efforts by investing in projects that are committed to and support ICANN's vision of a single, open and globally interoperable Internet that fosters inclusion amongst a broad, global community of users."

ICANN expects to begin accepting grant applications on 25 March 2024. The application window will remain open until 24 May 2024. A complete list of eligibility criteria can be found at: https://icann.org/grant-program. Once the application window closes, all applications are subject to admissibility and eligibility checks. An Independent Application Assessment Panel will review admissible and eligible applications, and the tentative timeline for announcing the grantees of the first cycle is January 2025.

Potential applicants will have several opportunities to learn more about the Call for Proposals and ask ICANN Grant Program staff members questions through question-and-answer webinar sessions in the coming months. For more information on the program, including eligibility and submission requirements, the ICANN Grant Program Applicant Guide is available at https://icann.org/grant-program.

About ICANN

ICANN's mission is to help ensure a stable, secure and unified global Internet. To reach another person on the Internet, you need to type an address – a name or a number – into your computer or other device. That address must be unique so computers know where to find each other. ICANN helps coordinate and support these unique identifiers across the world.

Read More

Application Infrastructure

Legrand Acquires Data Center, Branch, and Edge Management Infrastructure Market Leader ZPE Systems, Inc.

Legrand | January 15, 2024

Legrand, a global specialist in electrical and digital building infrastructures, including data center solutions, has announced the completion of its acquisition of ZPE Systems, Inc., a Fremont, California-based company that offers critical solutions and services to deliver resilience and security for customers' business-critical infrastructure. This includes serial console servers, sensors, and services routers that enable remote access and management of network IT equipment from data centers to the edge.

The acquisition brings ZPE's secure and open management infrastructure and services delivery platform for data center, branch, and edge environments together with Legrand's comprehensive data center solutions of overhead busway, custom cabinets, intelligent PDUs, KVM switches, and advanced fiber solutions. ZPE Systems will become a business unit of Legrand's Data, Power, and Control (DPC) Division. Arnaldo Zimmermann will continue to serve as Vice President and General Manager of ZPE Systems, reporting to Brian DiBella, President of Legrand's DPC Division.

"ZPE Systems leads the fast growing and profitable data center and edge management infrastructure market. This acquisition allows Legrand to enter a promising new segment whose strong growth is expected to accelerate further with the development of artificial intelligence and associated needs," said John Selldorff, President and CEO, Legrand, North and Central America. "Edge computing, AI and operational technology will require more complex data centers and edge infrastructure with intelligent IT needs to be built in disparate remote geographies. This makes remote management and operation a critical requirement. ZPE Systems is well positioned to address this need through high performance automation infrastructure solutions, which are complementary to our current data center offerings."

"By joining forces with Legrand, ZPE Systems is advancing our leadership position in management infrastructure and propelling our technology and solutions to further support existing and new market opportunities," said Zimmermann.

About Legrand and Legrand, North and Central America

Legrand is the global specialist in electrical and digital building infrastructures. Its comprehensive offering of solutions for commercial, industrial, and residential markets makes it a benchmark for customers worldwide. The Group harnesses technological and societal trends with lasting impacts on buildings, with the purpose of improving lives by transforming the spaces where people live, work, and meet with electrical and digital infrastructures and connected solutions that are simple, innovative, and sustainable. Drawing on an approach that involves all teams and stakeholders, Legrand is pursuing its strategy of profitable and responsible growth driven by acquisitions and innovation, with a steady flow of new offerings—including products with enhanced value in use (faster-expanding segments: data centers, connected offerings and energy efficiency programs). Legrand reported sales of €8.0 billion in 2022. The company is listed on Euronext Paris and is notably a component stock of the CAC 40 and CAC 40 ESG indexes.

Read More

Events