Cisco to acquire hyperconverged infrastructure startup Skyport Systems

Cisco Systems Inc. today announced plans to acquire Skyport Systems Inc., a five-year-old hyper-converged infrastructure provider backed by $67 million in funding. Investors in the startup include big names such as Intel Capital, Alphabet Inc.’s GV and Index Ventures; Cisco itself also backed Skyport through a $30 million round completed in 2016. Much of that venture interest stems from the startup’s flagship SkySecure system, a specialized appliance designed to run companies’ most sensitive applications.


OTHER ARTICLES
Application Infrastructure, Application Storage

Top Books to Consider for Adoption of Hyper-Converged Infrastructure

Article | July 19, 2023

Discover the best hyperconverged infrastructure books, learn about the latest advancements in HCI and process design, and explore areas for HCI improvement in the infrastructure domain.

This comprehensive guide presents a curated selection of top books to consider for adopting Hyper-Converged Infrastructure (HCI) in IT infrastructure. Organizations increasingly recognize HCI as a transformative solution that streamlines data center management, enhances scalability, and optimizes resource utilization. To navigate this technology effectively, businesses must equip themselves with the proper knowledge and insights from authoritative sources. The carefully compiled list of books featured here offers valuable information, providing IT professionals and decision-makers with a solid foundation to make informed choices and successfully implement HCI within their IT infrastructure.

1. Hyperconverged Infrastructure Data Centers: Demystifying HCI
Author: Sam Halabi

Hyperconverged Infrastructure Data Centers: Demystifying HCI is a highly informative and authoritative guide that provides a clear understanding of hyperconverged infrastructure technology. Written for technical professionals and IT managers, the book offers a vendor-neutral perspective on HCI, covering its use cases and comparing leading hyperconvergence solutions in the market. Halabi effectively explains HCI's benefits, combining storage, computing, and networking into a single system that offers simplicity, scalability, and flexibility without sacrificing control. The book explores advancements in computing, virtualization, and software-defined storage, highlighting the improvements they bring to data center designs. The author guides readers through the HCI lifecycle, including evaluation, planning, implementation, and management, and delves into HCI applications such as DevOps, virtual desktops, and disaster recovery, presenting a new model for application deployment and management.

2. Hyperconverged Infrastructure: A Complete Guide
Author: The Art of Service - Hyperconverged Infrastructure Publishing

This book is a valuable resource for individuals and organizations seeking to understand and leverage the potential of hyperconverged infrastructure. The guide takes a question-based approach, empowering readers to uncover challenges and develop effective solutions, and provides a comprehensive self-assessment tool covering seven core HCI maturity levels. With updated case-based questions, readers can diagnose their HCI projects, initiatives, organizations, and processes against accepted diagnostic standards and practices. The guide helps readers identify areas where HCI improvements can be made, gives a clear picture of the attention those areas require, and empowers them to make their HCI investments work better by asking the right questions and seeking innovative perspectives.

3. Hyperconverged Infrastructure: Practical Tools for Self-Assessment
Author: Gerardus Blokdyk

This book is a valuable resource for individuals in diverse business roles seeking to optimize their hyperconverged infrastructure investments. The guide emphasizes integrating HCI with other business initiatives, monitoring the effectiveness of HCI activities, and leveraging HCI data and information to support organizational decision-making and foster innovation. Its self-assessment tool helps identify areas for improvement, with case-based questions organized into seven core areas of process design. Overall, the guide equips readers with the tools and insights needed to maximize the value of HCI investments, align them with business objectives, and foster a culture of continuous improvement and innovation.

4. Hyper-converged Infrastructure Standard Requirements
Author: Gerardus Blokdyk

This book is aimed at individuals in various business roles who are considering or exploring hyper-converged infrastructure implementation. The guide emphasizes the importance of asking the right questions and understanding the challenges and hyperconvergence solutions related to HCI. It provides a set of organized, case-based questions, enabling readers to diagnose their HCI projects and identify areas for improvement. The self-assessment tool helps organizations implement evidence-based best practices and integrate the latest advancements in HCI and process design. With the Hyper-Converged Infrastructure Scorecard, readers gain a clear understanding of the areas that require attention and can prioritize their efforts accordingly. The digital components accompanying the book provide additional resources to support organizations on their HCI journey.

5. The Gorilla Guide to Hyperconverged Infrastructure Implementation Strategies
Author: Scott D. Lowe

The Gorilla Guide to Hyperconverged Infrastructure Implementation Strategies is written for readers in a variety of business roles who are exploring HCI implementation. It starts with the architecture of hyper-converged infrastructure, then explores the intersection of software-defined networking and HCI. It delves into common pain points and storage performance in HCI, with relevant use cases as practical examples, and covers data-center consolidation, test and development environments, and HCI economics and its impact on the IT budget.

6. The 2022 Report on Hyper-Converged Infrastructure: World Market Segmentation by City
Author: Prof Philip M. Parker

The 2022 Report on Hyper-Converged Infrastructure: World Market Segmentation by City is designed for global strategic planners seeking innovative segmentation methods. The report covers over 2,000 cities across 200 countries, providing insights into the estimated market size (latent demand) for hyper-converged infrastructure in each significant city worldwide, and ranks these cities by their market size relative to their respective countries, geographic regions, and the global market. Sales of hyper-converged infrastructure encompass a wide range of products, including hypervisors such as VMware, KVM, and Hyper-V, used for purposes like virtual desktop infrastructure, server virtualization, data protection, and cloud solutions. Prominent companies in the industry, including VMware, Nutanix, Maxta, and others, are covered in the report. The information presented is gathered from public sources, including news, press releases, and industry players, and is reported in U.S. dollars without adjusting for inflation.

7. The 2020-2025 World Outlook for Hyper-Converged Infrastructure
Author: Prof Philip M. Parker

The World Outlook for Hyper-Converged Infrastructure study comprehensively analyzes the global market across more than 190 countries. It offers estimates of latent demand, or potential industry earnings (P.I.E.), for each country, expressed in millions of U.S. dollars, and presents each country's share as a percentage of its region and of the global market, enabling readers to assess its relative position. The study generates latent demand estimates using econometric models that project economic dynamics within and between countries. While it does not delve into specific market players or product details, it takes a strategic, long-term perspective, disregarding short-term cyclical fluctuations and focusing on aggregated trends. A multi-stage methodology, often taught in graduate business courses on international strategic planning, was employed to formulate these estimates.

Wrap-up

The adoption of Hyper-Converged Infrastructure represents a significant opportunity for businesses to revolutionize their IT infrastructure, improve operational efficiency, and unlock new levels of agility and scalability. The books recommended in this listicle serve as indispensable resources for IT professionals and decision-makers seeking to embark on an HCI journey. By investing in the knowledge imparted by these authoritative texts, you empower yourself and your organization to leverage the full potential of HCI and stay at the forefront of technological advancements. Remember, success in adopting HCI lies not only in the technology itself but also in the understanding and expertise gained through continuous learning and exploration.

Read More
Hyper-Converged Infrastructure

A Look at Trends in IT infrastructure and Operations for 2022

Article | September 14, 2023

We’re all hoping that 2022 will finally end the unprecedented challenges brought by the global pandemic and that things will return to a new normalcy. For IT infrastructure and operations organizations, the rising trends that we are seeing today will likely continue, but there are still a few areas that will need special attention from IT leaders over the next 12 to 18 months. In no particular order, they include:

The New Edge

Edge computing is now at the forefront. Two primary factors make it business-critical: the increased prevalence of remote and hybrid workplace models, in which employees continue working remotely from home or a branch office, and the resulting increase in adoption of cloud-based business and communications services. With the rising focus on remote and hybrid workplace cultures, Zoom, Microsoft Teams, and Google Meet have continued to expand their solutions and add new features. As people start moving back to the office, they are likely to want the same experience they had from home. In a typical enterprise setup, branch office traffic is usually backhauled all the way to the data center. This architecture severely impacts the user experience, so enterprises will have to review their network architectures and come up with a roadmap to accommodate local egress between branch offices and headquarters. That's where the edge can help, bringing compute and services closer to the workforce. This also brings an opportunity to optimize costs by migrating from expensive multi-protocol label switching (MPLS) or private circuits to relatively low-cost direct internet circuits, a shift addressed by the new secure access service edge (SASE) architecture now offered by many established vendors. I anticipate some components of SASE, specifically those related to software-defined wide area network (SD-WAN), local egress, and virtual private network (VPN), will drive a lot of conversation this year.

Holistic Cloud Strategy

Cloud adoption will continue to grow, and along with software as a service (SaaS), there will be renewed interest in infrastructure as a service (IaaS), albeit for specific workloads. For a medium-to-large-sized enterprise with a substantial development environment, it will still be cost-prohibitive to move everything to the cloud, so any cloud strategy will need to be holistic and forward-looking to maximize its business value. Another pandemic-induced shift is from using virtual machines (VMs) as a consumption unit of compute to using containers as a consumption unit of software. For on-premises or private cloud deployment architectures that require sustainable management, organizations will have to orchestrate containers and deploy efficient container security and management tools.

Automation

Now that cloud adoption, migration, and edge computing architectures are becoming more prevalent, legacy methods of infrastructure provisioning and management will not scale. By increasing infrastructure automation, enterprises can optimize costs and become more flexible and efficient, but only if they are successful at developing new skills. Achieving the goal of "infrastructure as code" will require a shift in perspective on infrastructure automation to one that focuses on developing and sustaining the skills and roles that improve efficiency and agility across on-premises, cloud, and edge infrastructures. Defining the roles of the designers and architects who support automation is essential to ensure that automation works as expected, avoids significant errors, and complements other technologies.
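To make the "infrastructure as code" idea above concrete, here is a minimal, illustrative Python sketch of desired-state provisioning: the target infrastructure is described as data, and a reconciler works out what to create or remove. The resource names and the provision/deprovision helpers are hypothetical placeholders, not any particular vendor's API.

```python
# Minimal illustration of declarative "infrastructure as code":
# the desired state is data, and a reconciler computes the actions
# needed to move the current environment toward it.
# All resource names and provisioning helpers are hypothetical.

desired_state = {
    "web-01": {"cpus": 4, "memory_gb": 16},
    "web-02": {"cpus": 4, "memory_gb": 16},
    "cache-01": {"cpus": 2, "memory_gb": 8},
}

current_state = {
    "web-01": {"cpus": 4, "memory_gb": 16},
    "db-legacy": {"cpus": 8, "memory_gb": 64},
}


def provision(name: str, spec: dict) -> None:
    # Placeholder for a call to a real provisioning API or tool.
    print(f"create {name} with {spec}")


def deprovision(name: str) -> None:
    # Placeholder for a call that tears a resource down.
    print(f"delete {name}")


def reconcile(desired: dict, current: dict) -> None:
    """Create anything missing or changed, remove anything unmanaged."""
    for name, spec in desired.items():
        if current.get(name) != spec:
            provision(name, spec)
    for name in current:
        if name not in desired:
            deprovision(name)


if __name__ == "__main__":
    reconcile(desired_state, current_state)
```

Real tooling such as Terraform, Pulumi, or Ansible applies the same declarative pattern at much larger scale, with versioned templates standing in for the dictionaries above.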
AIOps (Artificial Intelligence for IT Operations)

Complementing these automation trends, the implementation of AIOps to automate IT operations processes such as event correlation, anomaly detection, and causality determination will also be important. AIOps eliminates data silos in IT by bringing all types of data under one roof so it can be used to run machine learning (ML)-based methods that develop insights for responsive enhancements and corrections. AIOps can also help with probable-cause analytics by focusing on the most likely source of a problem. The concept of site reliability engineering (SRE) is being increasingly adopted by SaaS providers and will gain importance in enterprise IT environments due to the trends listed above. AIOps is a key component that will enable site reliability engineers (SREs) to respond more quickly, and even proactively, by resolving issues without manual intervention.

These focus areas are by no means an exhaustive list. There are a variety of trends that will be more prevalent in specific industry areas, but a common theme in the post-pandemic era is going to be superior delivery of IT services. That's also at the heart of the Autonomous Digital Enterprise, a forward-focused business framework designed to help companies make technology investments for the future.

Read More
Hyper-Converged Infrastructure, Application Infrastructure

Accelerating DevOps and Continuous Delivery with IaaS Virtualization

Article | July 19, 2023

Adopting DevOps and CD in IaaS environments is a strategic imperative for organizations seeking to achieve agility, competitiveness, and customer satisfaction in their software delivery processes.

Contents
1. Introduction
2. What is IaaS Virtualization?
3. Virtualization Techniques for DevOps and Continuous Delivery
4. Integration of IaaS with CI/CD Pipelines
5. Considerations in IaaS Virtualized Environments
5.1. CPU Swap Wait
5.2. CPU System/Wait Time for VKernel
5.3. Memory Balloon
5.4. Memory Swap Rate
5.5. Memory Usage
5.6. Disk/Network Latency
6. Industry Tips for IaaS Virtualization Implementation
6.1. Infrastructure Testing
6.2. Application Testing
6.3. Security Monitoring
6.4. Performance Monitoring
6.5. Cost Optimization
7. Conclusion

1. Introduction

Infrastructure as a Service (IaaS) virtualization presents significant advantages for organizations seeking to enhance their agility, flexibility, and speed to market within the DevOps and continuous delivery frameworks. Addressing the associated risks and challenges is crucial and can be achieved by employing the appropriate monitoring and testing techniques, listed later in this blog. IaaS virtualization allows organizations to provision and de-provision resources as needed, eliminating the need for long-term investments in hardware and data centers. Furthermore, IaaS virtualization offers the ability to operate with multiple operating systems, databases, and programming languages, empowering teams to select the tools and technologies that best suit their requirements. However, organizations must implement comprehensive testing and monitoring strategies, ensure proper security and compliance controls, and adopt the best resource optimization and management practices to leverage the full potential of virtualized IaaS. To achieve high availability and fault tolerance, along with the advanced networking that enables complex application architectures in IaaS virtualization, the blog offers five industry tips.

2. What is IaaS Virtualization?

IaaS virtualization involves simultaneously running multiple operating systems with different configurations. To run virtual machines on a system, a software layer known as the virtual machine monitor (VMM), or hypervisor, is required. Virtualization in IaaS handles website hosting, application development and testing, disaster recovery, and data storage and backup. Startups and small businesses with limited IT resources and budgets can benefit greatly from virtualized IaaS, which enables them to obtain the necessary infrastructure resources quickly and without significant capital expenditures. Virtualized IaaS is a potent tool for businesses and organizations of all sizes, enabling greater infrastructure resource flexibility, scalability, and efficiency.

3. Virtualization Techniques for DevOps and Continuous Delivery

Virtualization is a vital part of the DevOps software stack. Virtualization in the DevOps process allows teams to create, test, and implement code in simulated environments without wasting valuable computing resources. DevOps teams can use virtual services for thorough testing, preventing bottlenecks that could slow down release time. DevOps also relies heavily on virtualization for building intricate cloud, API, and SOA systems. In addition, virtual machines benefit test-driven development (TDD) teams that prefer to begin their troubleshooting at the API level.

4. Integration of IaaS with CI/CD Pipelines

Continuous integration is a coding practice in which small code changes are implemented frequently and checked into a version control repository. This process not only packages software and database components but also automatically executes unit tests and other tests to provide developers with vital feedback on any potential breakages caused by code changes. Continuous testing integrates automated tests into the CI/CD pipeline: for example, unit and functionality tests identify issues during continuous integration, while performance and security tests are executed after a build is delivered in continuous delivery. Continuous delivery is the process of automating the deployment of applications to one or more delivery environments. IaaS provides access to computing resources through a virtual server instance, which replicates the capabilities of an on-premises data center, and offers various services including server space, security, load balancing, and additional bandwidth. In modern software development and deployment, it is common to integrate IaaS with CI/CD pipelines. This helps automate the creation and management of infrastructure using infrastructure-as-code (IaC) tools: templates can be created to provision resources on the IaaS platform, ensuring consistency and meeting software requirements. Additionally, containerization technologies like Docker and Kubernetes can be used to deploy applications on IaaS platforms.
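As a rough illustration of the pipeline flow described in section 4, the Python sketch below provisions a temporary test environment from a versioned template, runs test stages in order, and tears the environment down regardless of the outcome. The provision_environment and teardown helpers, the template path, and the test directories are assumptions standing in for real IaC tooling and test suites, not a specific product's API.

```python
# Hypothetical sketch of a CI/CD run on IaaS: provision a disposable
# environment, run test stages, promote only if everything passes.
import subprocess
import sys


def provision_environment(template: str) -> str:
    # In practice this would invoke an IaC tool to create VMs,
    # networks, and storage from a versioned template.
    print(f"provisioning environment from {template}")
    return "test-env-001"  # identifier of the created environment


def run_stage(name: str, command: list[str]) -> bool:
    # Each stage runs a command and reports pass/fail to the pipeline.
    print(f"running stage: {name}")
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout)
    return result.returncode == 0


def teardown(env_id: str) -> None:
    # De-provision the temporary environment so it is not billed further.
    print(f"tearing down {env_id}")


def pipeline() -> int:
    env_id = provision_environment("infra/test-env.tmpl")
    try:
        stages = [
            ("unit tests", [sys.executable, "-m", "pytest", "tests/unit"]),
            ("integration tests", [sys.executable, "-m", "pytest", "tests/integration"]),
        ]
        for name, command in stages:
            if not run_stage(name, command):
                print(f"stage failed: {name}; stopping the pipeline")
                return 1
        print("all stages passed; promoting build to delivery environment")
        return 0
    finally:
        teardown(env_id)


if __name__ == "__main__":
    raise SystemExit(pipeline())
```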
5. Considerations in IaaS Virtualized Environments

5.1. CPU Swap Wait

CPU swap wait is the time the virtual system spends waiting while the hypervisor swaps parts of the VM's memory back in from disk. This happens when the hypervisor needs to swap, which can be due to a lack of balloon drivers or a memory shortage, and it can affect the application's response time. Installing the balloon driver and/or reducing the number of VMs on the physical machine can resolve this issue.

5.2. CPU System/Wait Time for VKernel

Virtualization systems often report CPU or wait time for the virtualization kernel used by each virtual machine to measure CPU resource overhead. While this metric can't be directly linked to response time, it can impact both ready and swap times if it increases significantly. If this occurs, it could indicate that the system is either misconfigured or overloaded, and reducing the number of VMs on the machine may be necessary.

5.3. Memory Balloon

Memory ballooning is a memory management technique used in virtualized IaaS environments. It works by injecting a software balloon into the VM's memory space. The balloon driver allocates memory inside the VM, allowing the hypervisor to reclaim the underlying physical pages. As a result, if the host system is experiencing memory pressure, it will take memory from its virtual machines, negatively affecting guest performance and causing swapping, reduced file-system buffers, and smaller system caches.

5.4. Memory Swap Rate

Memory swap rate is a performance metric used in virtualized IaaS environments to measure the amount of memory being swapped to disk. When the swap rate is high, it leads to longer CPU swap times and negatively affects application performance. When a running VM requires more memory than is physically available on the server, the hypervisor may use disk space as a temporary storage area for the excess memory. To optimize, it is therefore important to ensure that VMs have sufficient memory resources allocated.

5.5. Memory Usage

Memory usage refers to the amount of memory being used by a VM at any given time. It is assessed by analyzing host-level memory, VM-level memory, and granted memory. When memory usage exceeds the available physical memory on the server, the hypervisor may use disk space as a temporary storage area for the excess memory, leading to performance issues. The disparity between used and granted memory indicates the overcommitment rate, which can be adjusted through ballooning.

5.6. Disk/Network Latency

Some virtualization providers offer integrated utilities for assessing the latency of the disks and network interfaces used by a virtual machine. Since latency directly affects response time, increased latency at the hypervisor level will also impact the application. Excessive latency indicates the system is overloaded and requires reconfiguration.

These metrics enable teams to monitor and detect any negative impact a virtualized system might have on an application.
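To show how the metrics above might be tracked in practice, here is a small, illustrative Python check that compares sampled per-VM values against simple thresholds and flags a high ratio of used to granted memory. The metric names, sample values, and limits are assumptions chosen for demonstration; real hypervisors expose equivalent counters through their own monitoring interfaces.

```python
# Illustrative threshold check for the VM-level metrics discussed above.
# All metric names, sample values, and limits are hypothetical.

# One sample per VM: values a monitoring agent might collect.
samples = {
    "app-vm-01": {
        "cpu_swap_wait_ms": 120,       # time spent waiting on swapped-in memory
        "ballooned_memory_mb": 512,    # memory reclaimed via the balloon driver
        "memory_swap_rate_mbps": 4.0,  # memory actively swapped to disk
        "memory_used_mb": 7200,
        "memory_granted_mb": 8192,
        "disk_latency_ms": 18,
    },
}

thresholds = {
    "cpu_swap_wait_ms": 100,
    "ballooned_memory_mb": 256,
    "memory_swap_rate_mbps": 1.0,
    "disk_latency_ms": 25,
}


def check_vm(name: str, metrics: dict) -> list[str]:
    """Return human-readable warnings for any metric above its threshold."""
    warnings = []
    for metric, limit in thresholds.items():
        if metrics.get(metric, 0) > limit:
            warnings.append(f"{name}: {metric}={metrics[metric]} exceeds {limit}")
    # Overcommitment indicator: how much of the granted memory is in use.
    usage_ratio = metrics["memory_used_mb"] / metrics["memory_granted_mb"]
    if usage_ratio > 0.9:
        warnings.append(f"{name}: memory usage at {usage_ratio:.0%} of granted memory")
    return warnings


for vm, metrics in samples.items():
    for warning in check_vm(vm, metrics):
        print(warning)
```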
6. Industry Tips for IaaS Virtualization Implementation

Testing, compliance management, and security are critical aspects of managing virtualized IaaS environments. By implementing a comprehensive strategy, organizations can ensure the reliability, security, and performance of their infrastructure and applications.

6.1. Infrastructure Testing

This involves testing the infrastructure components of the IaaS environment, such as the virtual machines, networks, and storage, to ensure the infrastructure is functioning correctly and that there are no performance bottlenecks, security vulnerabilities, or configuration issues. Testing the virtualized environment, storage testing (covering data replication and backup and recovery processes), and network testing are some of the techniques to perform.

6.2. Application Testing

Applications running in the IaaS virtual environment should be thoroughly tested to ensure they perform as expected. This includes functional testing to ensure the application meets its requirements and performance testing to ensure it can handle anticipated user loads.

6.3. Security Monitoring

Security monitoring is critical in IaaS environments owing to the increased risks and threats. It involves monitoring the infrastructure and applications for potential security threats, vulnerabilities, or breaches. In addition, regular vulnerability assessments and penetration testing help identify and address potential security issues before they become significant problems.

6.4. Performance Monitoring

Performance monitoring is essential to ensure that the underlying infrastructure meets performance expectations and has no bottlenecks. It comprises monitoring metrics such as CPU usage, memory usage, network traffic, and disk utilization, and the resulting information is used to identify performance issues and optimize resource usage.

6.5. Cost Optimization

Cost optimization is a critical aspect of running a virtualized IaaS environment with efficient resource allocation. Organizations reduce costs and optimize resource usage by identifying and monitoring usage patterns and taking advantage of elastic, scalable resources. It involves right-sizing resources, utilizing infrastructure automation, reserved instances, spot instances (unused compute capacity purchased at a discount), and optimizing storage usage.
7. Conclusion

IaaS virtualization has become a critical component of DevOps and continuous delivery practices, giving DevOps teams on-demand access to scalable infrastructure resources so they can rapidly develop, test, and deploy applications with greater agility and efficiency. As DevOps teams continue to seek ways to streamline processes and improve efficiency, automation will play an increasingly important role: automated deployment, testing, and monitoring processes will help reduce manual intervention and increase the speed and accuracy of development cycles. In addition, containers offer a lightweight and flexible alternative to traditional virtualization, allowing DevOps teams to package applications and their dependencies into portable, self-contained units that can easily be moved between environments. This reduces the complexity of managing virtualized infrastructure environments and enables greater flexibility and scalability. By embracing these technologies and integrating them into their workflows, DevOps teams can achieve greater efficiency and accelerate the delivery of high-quality software products.

Read More
Application Infrastructure

Securing the 5G edge

Article | November 11, 2021

The rollout of 5G networks coupled with edge compute introduces new security concerns for both the network and the enterprise. Security at the edge presents a unique set of challenges that differ from those faced by traditional data centers. New concerns now emerge from the combination of distributed architectures and a disaggregated network, creating fresh challenges for service providers.

Many mission-critical applications enabled by 5G connectivity, such as smart factories, are better off hosted at the edge because it is more economical and delivers better Quality of Service (QoS). However, applications must also be secured; communication service providers (CSPs) need to ensure that applications operate in an environment that is both safe and provides isolation. This means that secure designs and protocols are in place to pre-empt threats, avoid incidents, and minimize response time when incidents do occur. As enterprises adopt private 5G networks to drive their Industry 4.0 strategies, these new enterprise 5G trends demand a new approach to security. Companies must find ways to reduce their exposure to cyberattacks that could potentially disrupt mission-critical services, compromise industrial assets, and threaten the safety of their workforce. Cybersecurity readiness is essential to ensure private network investments are not devalued.

The 5G network architecture, particularly at the edge, introduces new levels of service decomposition, now evolving beyond the virtual machine and into the space of orchestrated containers. Such disaggregation requires the operation of a layered technology stack, from the physical infrastructure to resource abstraction, container enablement, and orchestration, all of which present attack surfaces that must be addressed from a security perspective. So how can CSPs protect their network and services from complex and rapidly growing threats?

Addressing vulnerability points of the network layer by layer

As networks grow and the number of connected nodes at the edge multiplies, so do the vulnerability points. The distributed nature of the 5G edge increases vulnerability threats simply by having network infrastructure scattered across tens of thousands of sites. The arrival of the Internet of Things (IoT) further complicates the picture: a greater number of connected and mobile devices, potentially creating new network bridging points, makes questions around network security even more relevant. Because the integrity of a physical site cannot be guaranteed in the same way as a supervised data center, additional security measures need to be taken to protect the infrastructure. The transport and application control layers also need to be secured to enable forms of "isolation" that prevent a breach from propagating to other layers and components. Each layer requires specific security measures to ensure overall network security: use of Trusted Platform Module (TPM) chipsets on motherboards, a UEFI secure OS boot process, secure connections in the control plane, and more. These measures all contribute to, and are an integral part of, an end-to-end network security design and strategy.

Open RAN for a more secure solution

The latest developments in open RAN and the collaborative standards-setting process around open interfaces and supply chain diversification are enhancing the security of 5G networks. This is happening for two reasons. First, traditional networks are built using vendor-proprietary technology: a limited number of vendors dominate the telco equipment market and create vendor lock-in for service providers, forcing them to also rely on vendors' proprietary security solutions. This in turn prevents the adoption of best-of-breed solutions and slows innovation and speed of response, potentially amplifying the impact of a security breach. Second, open RAN standardization initiatives employ a set of open-source, standards-based components. This has a positive effect on security, as the design embedded in components is openly visible and understood; vendors can then contribute to such open-source projects where tighter security requirements need to be addressed. Aside from the inherent security of the open-source components, open RAN defines a number of open interfaces whose security aspects can be individually assessed. The openness intrinsic to open RAN means that service components can be seamlessly upgraded or swapped to introduce more stringent security characteristics or to swiftly address identified vulnerabilities.

Securing network components with AI

Monitoring the status of myriad network components, and in particular spotting a security attack taking place among a multitude of cooperating application functions, requires resources that exceed the capabilities of a finite team of human operators. This is where advances in AI technology can help augment the abilities of operations teams. AI massively scales the ability to monitor any number of KPIs, learn their characteristic behavior, and identify anomalies, which makes it an ideal companion in the secure operation of the 5G edge. The self-learning aspect of AI supports not just the identification of known incident patterns but also the ability to learn about new, unknown, and unanticipated threats.

Security by design

Security needs to be integral to the design of the network architecture and its services. The adoption of open standards supports the definition of security best practices in both the design and operation of the new 5G network edge. The analytics capabilities embedded in edge hyperconverged infrastructure components provide the platform on which to build an effective monitoring and troubleshooting toolkit, ensuring the secure operation of the intelligent edge.
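As a toy example of the KPI monitoring described in the section on securing network components with AI, the Python sketch below learns a baseline for a single KPI and flags new samples that deviate strongly from it. The KPI values and the z-score threshold are invented for illustration; production AIOps systems would track many KPIs with far richer, continuously updated models.

```python
# Minimal anomaly detection on a single KPI using a z-score against a
# learned baseline. Values and thresholds are invented for illustration.
from statistics import mean, stdev


def detect_anomalies(history: list[float], new_samples: list[float],
                     z_threshold: float = 3.0) -> list[tuple[int, float]]:
    """Flag samples whose z-score against the baseline exceeds the threshold."""
    baseline_mean = mean(history)
    baseline_std = stdev(history) or 1e-9  # guard against a flat baseline
    anomalies = []
    for i, value in enumerate(new_samples):
        z = abs(value - baseline_mean) / baseline_std
        if z > z_threshold:
            anomalies.append((i, value))
    return anomalies


# Example: requests per second observed on an edge node.
baseline = [980, 1010, 995, 1005, 990, 1000, 1015, 985]
incoming = [1002, 998, 4750, 1001]  # the spike could indicate a flood or a breach

for index, value in detect_anomalies(baseline, incoming):
    print(f"sample {index} looks anomalous: {value}")
```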

Read More


Related News

Application Infrastructure

dxFeed Launches Market Data IaaS Project for Tradu, Assumes Infrastructure and Data Provision Responsibilities

PR Newswire | January 25, 2024

dxFeed, a global leader in data solutions and index management for the financial industry, announces the launch of an Infrastructure as a Service (IaaS) project for Tradu, an advanced multi-asset trading platform catering to active traders and investors. In this venture, dxFeed manages the crucial aspects of infrastructure and data provision for Tradu. As an award-winning IaaS provider (the Best Infrastructure Provider by the Sell-Side Technology Awards 2023), dxFeed is poised to address all technical challenges related to market data delivery to hundreds of thousands of end users, allowing Tradu to focus on its core business objectives. Users worldwide can seamlessly connect to Tradu's platform, receiving authorization tokens for access to high-quality market data from the EU, US, Hong Kong, and Australian exchanges. This approach eliminates the complexities and bottlenecks associated with building, maintaining, and scaling the infrastructure required for such extensive global data access. dxFeed's scalable, low-latency infrastructure ensures the delivery of consolidated, top-notch market data from diverse sources to clients located in Asia, the Americas, and Europe. With the ability to rapidly reconfigure and accommodate growing performance demands, dxFeed is equipped to serve hundreds of thousands of concurrent clients and can scale the solution further to meet constantly growing demand while providing a seamless and reliable experience. One of the highlights of this collaboration is the introduction of brand-new data feed services exclusively for Tradu's Stocks platform. This proprietary solution enhances Tradu's offerings and demonstrates dxFeed's commitment to delivering tailored and innovative solutions. Tradu also benefits from dxFeed's Stocks Radar, a comprehensive technical and fundamental market analysis solution. This Software as a Service (SaaS) offering seamlessly integrates with the infrastructure, adding value for traders and investors by simplifying complex analytical tasks. Moreover, Tradu leverages the advantages of dxFeed's composite feed (the winner at The Technical Analyst Awards). This accolade reinforces dxFeed's commitment to delivering excellence in data provision, further solidifying Tradu's position as a global leader in online foreign exchange. "When we were thinking of our new sophisticated multi-asset trading platform for the active trader and investors we met with the necessity of expanding instrument and user numbers. We realized we needed a highly competent, professional team to deploy the infrastructure, taking into account the peculiarities of our processes and services," said Brendan Callan, CEO of Tradu. "On the one hand, it allows our clients to receive quality consolidating data from multiple sources. On the other hand, as a leading global provider of online foreign exchange, we can dispose of dxFeed's geo-scalable infrastructure and perform rapid reconfiguration to meet growing performance demands to provide data to hundreds of thousands of our clients around the globe." "The range of businesses finding the Market Data IaaS (Infrastructure as a Service) model appealing continues to expand. This approach is gaining traction among various enterprises, from agile startups seeking rapid development to established, prominent brands acknowledging the strategic benefits of delegating market data infrastructure to specialized firms," said Oleg Solodukhin, CEO of dxFeed. 
By taking on the responsibilities of infrastructure and data provision, dxFeed empowers Tradu to focus on innovation and client satisfaction, setting the stage for a transformative journey in the dynamic world of financial trading. About dxFeed dxFeed is a leading market data and services provider and calculation agent for the capital markets industry, named "Most Innovative Market Data Project" at the WatersTechnology 2022 IMD & IRD awards. dxFeed focuses primarily on delivering financial information and services to buy- and sell-side institutions in global markets, both traditional and crypto. That includes brokerages, prop traders, exchanges, individuals (traders, quants, and portfolio managers), and academia (educational institutions and researchers). Follow us on Twitter, Facebook, and LinkedIn. Contact dxFeed: pr@dxfeed.com About Tradu Tradu is headquartered in London with offices around the world. The global Tradu team speaks more than two dozen languages and prides itself on its responsive and helpful client support. Stratos also operates FXCM, an FX and CFD platform founded in 1999. Stratos will continue to offer FXCM services alongside Tradu's multi-asset platform.

Read More

IT Systems Management

ICANN ANNOUNCES GRANT PROGRAM TO SPUR INNOVATION

PR Newswire | January 16, 2024

The Internet Corporation for Assigned Names and Numbers (ICANN), the nonprofit organization that coordinates the Domain Name System (DNS), announced today the ICANN Grant Program, which will make millions of dollars in funding available to develop projects that support the growth of a single, open and globally interoperable Internet. ICANN is opening an application cycle for the first $10 million in grants in March 2024. Internet connectivity continues to increase worldwide, particularly in developing countries. According to the International Telecommunication Union (ITU), an estimated 5.3 billion people worldwide used the Internet as of 2022, a growth rate of 6.1% over 2021. The Grant Program will support this next phase of global Internet growth by fostering an inclusive and transparent approach to developing stable, secure Internet infrastructure solutions that support the Internet's unique identifier systems. "With the rapid evolution of emerging technologies, businesses and security models, it is critical that the Internet's unique identifier systems continue to evolve," said Sally Costerton, Interim President and CEO, ICANN. "The ICANN Grant Program offers a new avenue to further those efforts by investing in projects that are committed to and support ICANN's vision of a single, open and globally interoperable Internet that fosters inclusion amongst a broad, global community of users." ICANN expects to begin accepting grant applications on 25 March 2024. The application window will remain open until 24 May 2024. A complete list of eligibility criteria can be found at: https://icann.org/grant-program. Once the application window closes, all applications are subject to admissibility and eligibility checks. An Independent Application Assessment Panel will review admissible and eligible applications, and the grantees of the first cycle are tentatively expected to be announced in January 2025. Potential applicants will have several opportunities to learn more about the Call for Proposals and ask ICANN Grant Program staff members questions through question-and-answer webinar sessions in the coming months. For more information on the program, including eligibility and submission requirements, the ICANN Grant Program Applicant Guide is available at https://icann.org/grant-program. About ICANN ICANN's mission is to help ensure a stable, secure, and unified global Internet. To reach another person on the Internet, you need to type an address – a name or a number – into your computer or other device. That address must be unique so computers know where to find each other. ICANN helps coordinate and support these unique identifiers across the world.

Read More

Application Infrastructure

Legrand Acquires Data Center, Branch, and Edge Management Infrastructure Market Leader ZPE Systems, Inc.

Legrand | January 15, 2024

Legrand, a global specialist in electrical and digital building infrastructures, including data center solutions, has announced the completion of its acquisition of ZPE Systems, Inc., a Fremont, California-based company that offers critical solutions and services to deliver resilience and security for customers' business-critical infrastructure. This includes serial console servers, sensors, and services routers that enable remote access and management of network IT equipment from data centers to the edge. The acquisition brings together ZPE's secure and open management infrastructure and services delivery platform for data center, branch, and edge environments with Legrand's comprehensive data center solutions of overhead busway, custom cabinets, intelligent PDUs, KVM switches, and advanced fiber solutions. ZPE Systems will become a business unit of Legrand's Data, Power, and Control (DPC) Division. Arnaldo Zimmermann will continue to serve as Vice President and General Manager of ZPE Systems, reporting to Brian DiBella, President of Legrand's DPC Division. "ZPE Systems leads the fast growing and profitable data center and edge management infrastructure market. This acquisition allows Legrand to enter a promising new segment whose strong growth is expected to accelerate further with the development of artificial intelligence and associated needs," said John Selldorff, President and CEO, Legrand, North and Central America. "Edge computing, AI and operational technology will require more complex data centers and edge infrastructure with intelligent IT needs to be built in disparate remote geographies. This makes remote management and operation a critical requirement. ZPE Systems is well positioned to address this need through high performance automation infrastructure solutions, which are complementary to our current data center offerings." "By joining forces with Legrand, ZPE Systems is advancing our leadership position in management infrastructure and propelling our technology and solutions to further support existing and new market opportunities," said Zimmermann. About Legrand and Legrand, North and Central America Legrand is the global specialist in electrical and digital building infrastructures. Its comprehensive offering of solutions for commercial, industrial, and residential markets makes it a benchmark for customers worldwide. The Group harnesses technological and societal trends with lasting impacts on buildings, with the purpose of improving lives by transforming the spaces where people live, work, and meet with electrical and digital infrastructures and connected solutions that are simple, innovative, and sustainable. Drawing on an approach that involves all teams and stakeholders, Legrand is pursuing its strategy of profitable and responsible growth driven by acquisitions and innovation, with a steady flow of new offerings, including products with enhanced value in use (faster-expanding segments: data centers, connected offerings and energy efficiency programs). Legrand reported sales of €8.0 billion in 2022. The company is listed on Euronext Paris and is notably a component stock of the CAC 40 and CAC 40 ESG indexes.

Read More

