Data Security in the Cloud Best Practices

Data security in the cloud best practices include: understanding and implementing security fundamentals, securing cloud infrastructure in line with the shared responsibility model, encrypting data in the cloud, and ensuring compliance with applicable regulations. Data security fundamentals often come back to the CIA Triad: data confidentiality, data integrity, and data availability. The shared responsibility model refers to the idea that both the cloud provider and the organization using the cloud are responsible for the overall security of the organization's cloud infrastructure, including the data housed there.
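Of the three legs of the CIA Triad, integrity is the easiest to demonstrate concretely: pair each stored object with a keyed digest so any tampering is detectable on read. A minimal sketch using Python's standard library (the key and object contents are invented for illustration; in practice the key would come from a managed key management service):

```python
import hmac
import hashlib

def seal(key: bytes, data: bytes) -> bytes:
    """Compute a keyed digest (HMAC-SHA256) to store alongside the object."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, tag: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(seal(key, data), tag)

key = b"example-secret-key"   # illustrative only; use a KMS-managed key in practice
blob = b"customer-record-v1"
tag = seal(key, blob)

assert verify(key, blob, tag)             # untouched object passes
assert not verify(key, blob + b"x", tag)  # any modification is detected
```

The same pattern underlies object-store integrity checks: the digest travels with the data, and a mismatch on retrieval signals corruption or tampering.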

Spotlight

Backblaze Cloud Storage & Backup

Backblaze provides the lowest cost cloud storage and cloud backup services worldwide. $5/month for unlimited backup of your Mac or PC for individuals and businesses. $0.005/GB/month for cloud storage; ideal for developers and IT.

OTHER ARTICLES
Hyper-Converged Infrastructure, Application Infrastructure

Accelerating DevOps and Continuous Delivery with IaaS Virtualization

Article | July 19, 2023

Adopting DevOps and continuous delivery (CD) in IaaS environments is a strategic imperative for organizations seeking agility, competitiveness, and customer satisfaction in their software delivery processes.

Contents
1. Introduction
2. What is IaaS Virtualization?
3. Virtualization Techniques for DevOps and Continuous Delivery
4. Integration of IaaS with CI/CD Pipelines
5. Considerations in IaaS Virtualized Environments
5.1. CPU Swap Wait
5.2. CPU System/Wait Time for VKernel
5.3. Memory Balloon
5.4. Memory Swap Rate
5.5. Memory Usage
5.6. Disk/Network Latency
6. Industry Tips for IaaS Virtualization Implementation
6.1. Infrastructure Testing
6.2. Application Testing
6.3. Security Monitoring
6.4. Performance Monitoring
6.5. Cost Optimization
7. Conclusion

1. Introduction

Infrastructure as a Service (IaaS) virtualization presents significant advantages for organizations seeking to enhance their agility, flexibility, and speed to market within DevOps and continuous delivery frameworks. Addressing the associated risks and challenges is crucial, and can be achieved with the monitoring and testing techniques outlined in this blog. IaaS virtualization allows organizations to provision and de-provision resources as needed, eliminating the need for long-term investments in hardware and data centers. It also supports multiple operating systems, databases, and programming languages, empowering teams to select the tools and technologies that best suit their requirements. However, to leverage the full potential of virtualized IaaS, organizations must implement comprehensive testing and monitoring strategies, ensure proper security and compliance controls, and adopt best practices for resource optimization and management. To help achieve high availability and fault tolerance, along with the advanced networking that enables complex application architectures in IaaS virtualization, this blog offers five industry tips.
2. What is IaaS Virtualization?

IaaS virtualization involves running multiple operating systems with different configurations simultaneously on shared hardware. Running virtual machines on a system requires a software layer known as the virtual machine monitor (VMM), or hypervisor. Virtualization in IaaS supports website hosting, application development and testing, disaster recovery, and data storage and backup. Startups and small businesses with limited IT resources and budgets benefit greatly from virtualized IaaS, which lets them provision the necessary infrastructure resources quickly and without significant capital expenditures. Virtualized IaaS is a potent tool for businesses and organizations of all sizes, enabling greater infrastructure resource flexibility, scalability, and efficiency.

3. Virtualization Techniques for DevOps and Continuous Delivery

Virtualization is a vital part of the DevOps software stack. Virtualization in the DevOps process allows teams to create, test, and implement code in simulated environments without wasting valuable computing resources. DevOps teams can use virtual services for thorough testing, preventing bottlenecks that could slow down release time. DevOps also relies heavily on virtualization for building intricate cloud, API, and SOA systems. In addition, virtual machines benefit test-driven development (TDD) teams that prefer to begin their troubleshooting at the API level.

4. Integration of IaaS with CI/CD Pipelines

Continuous integration is a coding practice in which small code changes are frequently implemented and checked into a version control repository. This process not only packages software and database components but also automatically executes unit tests and other tests, giving developers vital feedback on any breakages caused by code changes. Continuous testing integrates automated tests into the CI/CD pipeline.
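The core CI loop just described, run the automated checks on every change and stop the pipeline on the first failure, can be sketched in a few lines. The stage names and checks below are illustrative stand-ins, not tied to any specific CI system:

```python
def run_pipeline(stages):
    """Run CI stages in order; stop at the first failure, as a CI server would."""
    results = []
    for name, check in stages:
        ok = check()
        results.append((name, ok))
        if not ok:
            break  # later stages (e.g. deploy) never run on a broken build
    return results

# Illustrative checks standing in for real unit/integration test suites.
stages = [
    ("unit-tests", lambda: (2 + 2) == 4),
    ("integration-tests", lambda: "iaas".upper() == "IAAS"),
    ("deploy", lambda: True),
]

print(run_pipeline(stages))
```

The key property is the early exit: a failing unit test prevents every downstream stage, which is exactly the feedback loop continuous integration is meant to provide.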
For example, unit and functionality tests identify issues during continuous integration, while performance and security tests are executed after a build is delivered in continuous delivery. Continuous delivery is the process of automating the deployment of applications to one or more delivery environments. IaaS provides access to computing resources through virtual server instances, replicating the capabilities of an on-premise data center, and offers services including server space, security, load balancing, and additional bandwidth. In modern software development and deployment, it is common to integrate IaaS with CI/CD pipelines. This helps automate the creation and management of infrastructure using infrastructure-as-code (IaC) tools: templates can be created to provision resources on the IaaS platform, ensuring consistency and meeting software requirements. Additionally, containerization technologies like Docker and Kubernetes can deploy applications on IaaS platforms.

5. Considerations in IaaS Virtualized Environments

5.1. CPU Swap Wait

CPU swap wait is the time a virtual machine spends waiting while the hypervisor swaps parts of the VM's memory back in from disk. Swapping occurs when the hypervisor runs short of memory, often due to missing balloon drivers or an overall memory shortage, and it can degrade the application's response time. Installing the balloon driver and/or reducing the number of VMs on the physical machine can resolve this issue.

5.2. CPU System/Wait Time for VKernel

Virtualization systems often report the CPU or wait time of the virtualization kernel for each virtual machine to measure CPU resource overhead. While this metric cannot be directly linked to response time, a significant increase can affect both ready and swap times. If this occurs, the system may be misconfigured or overloaded, and reducing the number of VMs on the machine may be necessary.

5.3. Memory Balloon

Memory ballooning is a memory management technique used in virtualized IaaS environments. The hypervisor inflates a software balloon inside the VM's memory space; the balloon consumes memory within the guest, forcing the guest operating system to release pages that the hypervisor can then reclaim for other VMs. As a result, when the host system is low on memory, it takes memory back from its virtual machines this way, which can hurt guest performance through swapping, reduced file-system buffers, and smaller system caches.

5.4. Memory Swap Rate

Memory swap rate is a performance metric in virtualized IaaS environments that measures the amount of memory being swapped to disk. A high swap rate leads to longer CPU swap times and degrades application performance. A running VM may require more memory than is physically available on the server, in which case the hypervisor may use disk space as temporary storage for the excess. To optimize, it is important to ensure that VMs have sufficient memory resources allocated.

5.5. Memory Usage

Memory usage is the amount of memory a VM is using at any given time, assessed at the host level and VM level alongside granted memory. When memory usage exceeds the available physical memory on the server, the hypervisor may spill excess memory to disk, leading to performance issues. The disparity between used and granted memory indicates the overcommitment rate, which can be adjusted through ballooning.

5.6. Disk/Network Latency

Some virtualization providers offer integrated utilities for measuring the latency of the disks and network interfaces a virtual machine uses. Since latency directly affects response time, increased latency at the hypervisor level will also affect the application. Excessive latency indicates the system is overloaded and requires reconfiguration.
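In practice, metrics like the ones above are watched against warning thresholds and any breach is flagged for investigation. A minimal monitoring sketch (the metric names and threshold values here are illustrative, not vendor defaults):

```python
# Illustrative warning thresholds; real values depend on the hypervisor and workload.
THRESHOLDS = {
    "cpu_swap_wait_ms": 100,      # time waiting on memory swapped back from disk
    "memory_swap_rate_mbps": 10,  # sustained swapping indicates memory pressure
    "disk_latency_ms": 20,        # hypervisor-level latency feeds into response time
}

def flag_vm(metrics: dict) -> list:
    """Return the names of metrics that exceed their warning threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

vm = {"cpu_swap_wait_ms": 250, "memory_swap_rate_mbps": 2, "disk_latency_ms": 35}
print(flag_vm(vm))  # → ['cpu_swap_wait_ms', 'disk_latency_ms']
```

A VM that trips the swap-wait and latency thresholds but not the swap-rate one, as in this example, suggests an overloaded host rather than a single memory-starved guest, matching the remediation advice above (reduce VMs per machine, then revisit memory allocation).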
These metrics enable us to monitor and detect any negative impact a virtualized system might have on our application.

6. Industry Tips for IaaS Virtualization Implementation

Testing, compliance management, and security are critical aspects of managing virtualized IaaS environments. By implementing a comprehensive strategy, organizations ensure the reliability, security, and performance of their infrastructure and applications.

6.1. Infrastructure Testing

This involves testing the infrastructure components of the IaaS environment, such as virtual machines, networks, and storage, to ensure the infrastructure is functioning correctly and free of performance bottlenecks, security vulnerabilities, and configuration issues. Techniques include testing the virtualized environment, storage testing (data replication and backup/recovery processes), and network testing.

6.2. Application Testing

Applications running on the IaaS virtual environment should be thoroughly tested to ensure they perform as expected. This includes functional testing to verify that the application meets its requirements and performance testing to confirm that it can handle anticipated user loads.

6.3. Security Monitoring

Security monitoring is critical in IaaS environments, owing to the increased risks and threats. It involves monitoring the infrastructure and applications for potential security threats, vulnerabilities, or breaches. In addition, regular vulnerability assessments and penetration testing help identify and address potential security issues before they become significant problems.

6.4. Performance Monitoring

Performance monitoring is essential to ensure that the underlying infrastructure meets performance expectations and has no bottlenecks. It comprises monitoring metrics such as CPU usage, memory usage, network traffic, and disk utilization.
This information is used to identify performance issues and optimize resource usage.

6.5. Cost Optimization

Cost optimization keeps a virtualized IaaS environment efficient in both spending and resource allocation. Organizations reduce costs by identifying and monitoring usage patterns and by optimizing elastic, scalable resources. This involves right-sizing resources, using infrastructure automation, reserved instances, spot instances (unused compute capacity purchased at a discount), and optimizing storage usage.

7. Conclusion

IaaS virtualization has become a critical component of DevOps and continuous delivery practices, giving DevOps teams on-demand access to scalable infrastructure resources so they can develop, test, and deploy applications with greater agility and efficiency. As DevOps teams continue to seek ways to streamline processes and improve efficiency, automation will play an increasingly important role: automated deployment, testing, and monitoring will reduce manual intervention and increase the speed and accuracy of development cycles. In addition, containers offer a lightweight, flexible alternative to traditional virtualization, allowing DevOps teams to package applications and their dependencies into portable, self-contained units that can be easily moved between environments. This reduces the complexity of managing virtualized infrastructure and enables greater flexibility and scalability. By embracing these technologies and integrating them into their workflows, DevOps teams can achieve greater efficiency and accelerate their delivery of high-quality software products.

Hyper-Converged Infrastructure

A Cloudy Future: Data Center Demand and COVID-19

Article | September 14, 2023

COVID-19 has altered our world. In this series of stories, Data Center Frontier explores the strategic challenges the pandemic presents for the data center and cloud computing sectors as we navigate this complex new landscape. We begin with a look at how COVID-19 is impacting demand for digital infrastructure. The COVID-19 Coronavirus pandemic has reinforced the importance of data centers and cloud computing for our society. In the early days of the crisis, the data center

Hyper-Converged Infrastructure

The importance of location intelligence and big data for 5G growth

Article | October 3, 2023

The pandemic has had a seismic impact on the telecom sector, most notably because where and how the world goes to work has been redefined, with nearly every business deepening its commitment to mobility. Our homes suddenly became our offices, and workforces went from being centrally managed to widely distributed. This has created a heightened need for widespread, secure, high-speed connectivity around the clock. 5G has answered the call, and 5G location intelligence and big data can provide service providers with the information they need to optimize their investments. Case in point: Juniper Research reported in its 5G Monetization study that global revenue from 5G services will reach $73 billion by the end of 2021, rising from just $20 billion last year.

5G flexes as connected devices surge

Market insights firm IoT Analytics estimates there will be more than 30 billion IoT connections by 2025. That's an average of nearly four IoT devices per person. To help meet the pressure this growth in connectivity is putting on telecom providers, the Federal Communications Commission (FCC) is taking action to make additional spectrum available for 5G services and promoting the digital opportunities it provides to Americans. The FCC is urging that investments in 5G infrastructure be prioritized given the "widespread mobility opportunity" it presents, as stated by FCC Chairwoman Jessica Rosenworcel. While that's a good thing, we must also acknowledge that launching a 5G network presents high financial risk, among other challenges. The competitive pressures are significant, and network performance matters greatly when it comes to new business acquisition and retention. It's imperative to make wise decisions on network build-out to ensure investments yield the anticipated returns. Thus, telcos need not – and should not – go it blindly when considering where to invest.
You don't know what you don't know, which is why 5G location intelligence and big data can provide an incredible amount of clarity (and peace of mind) when it comes to optimizing investments, increasing marketing effectiveness, and improving customer satisfaction.

Removing the blindfold

Location data and analytics provide telcos and communications service providers (CSPs) with highly specific insights for making informed decisions on where to invest in 5G. With this information, companies can not only map strategic expansion but also better manage assets, operations, customers, and products. For example, with this intelligence, carriers can gain insight into the most desired locations of specific populations and how they want to use bandwidth. They can use this data to arm themselves with a clear understanding of customer location and mobility, mapping existing infrastructure and competitive coverage against market requirements to pinpoint new opportunities. By creating complex customer profiles rich with demographic information like age, income, and lifestyle preferences, the guesswork is eliminated for where the telco should or shouldn't deploy new 5G towers. Further, by mapping a population of consumers and businesses within a specific region and then aggregating that information by age, income, or business type, for example, a vivid picture comes to life of the market opportunity for that area. This type of granular location intelligence adds important context to existing data and is a key pillar of data integrity, which describes the overall quality and completeness of a dataset. When telcos can clearly understand factors such as boundaries, movement, and the customers' surroundings, predictive insights can be made regarding demographic changes and future telecom requirements within a certain location. This then serves as the basis for a data-backed 5G expansion strategy.
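The aggregation described above, grouping the consumers in a candidate coverage area by an attribute such as age band or income, is simple to express in code. A sketch with invented sample records (the segment labels and records are illustrative only):

```python
from collections import Counter

# Invented sample of consumers in one candidate coverage area.
population = [
    {"segment": "18-34", "income": "mid"},
    {"segment": "18-34", "income": "high"},
    {"segment": "35-54", "income": "high"},
    {"segment": "55+",   "income": "mid"},
    {"segment": "18-34", "income": "mid"},
]

def opportunity_by(attribute: str) -> Counter:
    """Aggregate the area's population by one demographic attribute."""
    return Counter(person[attribute] for person in population)

print(opportunity_by("segment"))  # which age bands dominate this area
print(opportunity_by("income"))   # income mix of the same area
```

Repeating this per region and comparing the resulting profiles against existing coverage is the essence of the data-backed build-out decision the article describes, just at a far larger scale and with far richer attributes.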
Without it, businesses are burdened by the trial-and-error losses that are all too common with 5G build-outs.

Location precision's myriad benefits

Improved location precision has many benefits for telcos looking to pinpoint where to build, market, and provision 5G. Among them are:

Better data: Broadening insights on commercial, residential, and mixed-use locations through easy-to-consume, scalable datasets provides highly accurate, in-depth analyses for marketing and meeting customer demand.

Better serviceability insights: Complete and accurate location insights allow for a comprehensive view of serviceable addresses where products and services can be delivered to current and new customers, improving ROI and ensuring customers are adequately served.

Better subscriber returns: Companies that deploy fixed wireless services often experience plan cancellations due to inconsistent signal performance, which typically results from the misalignment of sites with network assets. Location-based data gives operators the ability to adapt their networks for signal consistency and serviceability as sites and structures change.

The 5G future

The role of location intelligence in accelerating development of new broadband services and driving ROI in a 5G world cannot be overstated. It adds a critical element of data integrity that informs network optimization, customer targeting, and service provisioning so telecom service providers can ensure their investments are not made with blind hope.

Hyper-Converged Infrastructure, Application Infrastructure

The Future of Computing: Why IaaS is Leading the Way

Article | May 17, 2023

Firms face challenges managing their resources, ensuring security, and optimizing costs, all of which add complexity to their operations. IaaS addresses this by removing the need to maintain and manage IT infrastructure in-house.

Contents
1. Infrastructure as a Service: Future of Cloud Computing
2. Upcoming Trends in IaaS
2.1 The Rise of Edge Computing
2.2 Greater Focus on Security
2.3 Enhancement in Serverless Architecture
2.4 Evolution of Green Computing
2.5 Emergence of Containerization
3. Final Thoughts

1. Infrastructure as a Service: Future of Cloud Computing

As digital transformation continues to reshape the business landscape, cloud computing is emerging as a critical enabler for companies of all sizes. With infrastructure-as-a-service (IaaS), businesses can outsource their hardware and data center management to a third-party provider, freeing up resources and allowing them to focus on their core competencies, reducing operational costs while maintaining the agility to adapt to changing market conditions. With the increasing need for scalable computing solutions, IaaS is set to become a pivotal player in shaping the future of computing. IaaS is already emerging as a prominent solution for organizations looking to modernize their computing capabilities. This article delves into recent trends in IaaS and their potential impact on the computing industry, showing why IaaS matters for emerging businesses.

2. Upcoming Trends in IaaS

2.1 The Rise of Edge Computing

The rise of IoT and mobile computing has strained the amount of data that can be transferred across a network in a given period. Owing to its many uses, such as improving reaction times for self-driving cars and safeguarding confidential health information, the market for edge computing infrastructure is expected to reach a value of $450 billion.
(Source: CB Insights)

Edge computing is a technology that enables data processing to occur closer to its origin, thereby reducing the volume of data that needs to be transmitted to and from the cloud. IDC describes it as a mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet. (Source: IDC) Edge computing represents the fourth major paradigm shift in modern computing, following mainframes, client/server models, and the cloud. A hybrid architecture of interconnected IaaS services allows for low latency through edge computing and high performance, security, and flexibility through a private cloud. Connecting edge devices to an IaaS platform streamlines location management and enables remote work, pointing toward a smoother future for IaaS. An edge layer (fog computing) is required to optimize this architecture, using high-speed, reliable 5G connectivity to connect edge devices with the cloud. This layer acts as a set of autonomous distributed nodes capable of analyzing and acting on real-time data, sending only the data that is required on to the central infrastructure in an IaaS instance. By combining the advantages of edge computing in data capture with the storage and processing capabilities of the cloud, companies can take full advantage of data analytics to drive innovation and optimization while effectively managing IoT devices on the edge. IoT devices, also known as edge devices, can analyze data in real time using AI, ML, and algorithms, even in the absence of an internet connection. This yields numerous advantages, including superior decision-making, early detection of issues, and heightened efficiency.
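The fog-layer behavior described above, analyze every reading locally but forward only what the central infrastructure actually needs, can be sketched as a simple filter. The reading format and the anomaly threshold are invented for illustration:

```python
def edge_filter(readings, threshold=75.0):
    """Act on every reading locally; forward only anomalies to the cloud."""
    forwarded = []
    for reading in readings:
        if reading["value"] > threshold:  # only out-of-range data leaves the edge
            forwarded.append(reading)
    return forwarded

# Invented sensor readings processed at the edge node.
readings = [
    {"sensor": "temp-1", "value": 21.5},
    {"sensor": "temp-2", "value": 88.0},  # anomaly: sent upstream
    {"sensor": "temp-3", "value": 19.9},
]

print(edge_filter(readings))  # one record reaches the central IaaS instance
```

Three readings arrive at the edge node, but only one crosses the network to the central IaaS instance, which is the bandwidth saving the section describes.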
However, an IaaS infrastructure with top-notch computing and storage capabilities is an absolute necessity to analyze the data effectively.

2.2 Greater Focus on Security

Hackers might use cloud-based services to host malware through malware-as-a-service (MaaS) platforms or to distribute malware payloads using cloud-based apps and services. In addition, organizations often deploy more than they can secure in their IaaS footprint, leading to increased misconfigurations and vulnerabilities. Recognizing and reacting to an attack is reactive security, whereas anticipating a dangerous event before it happens and intervening to prevent it is predictive security. Predictive security is the future of cloud security. The cybersecurity mesh involves setting up a distributed network and infrastructure to create a secure perimeter, allowing companies to centrally manage access to their data while enforcing security policies across the distributed network; it is a critical component of Zero-Trust architecture. A popular IaaS cloud security trend is the multi-cloud environment, which proves effective when tools like security information and event management (SIEM) and threat intelligence are deployed. DevSecOps is a methodology that incorporates security protocols at every stage of the software development lifecycle (SDLC), making it convenient to deal with threats during the lifecycle itself. Since the adoption of DevOps, release cycles have shortened for every product release, and DevSecOps remains secure and fast only with a fully automated software development lifecycle. The DevOps and security teams must collaborate to deliver large-scale digital transformation and security, as digital services and applications need exponentially stronger protection. This methodology must be enforced in a CI/CD pipeline to make it a continuous process.
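Enforcing DevSecOps in a CI/CD pipeline ultimately means that security findings can fail a build just like any other test. A minimal sketch of such a gate (the scanner output format and severity policy are invented for illustration, not any real tool's API):

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, fail_at="high"):
    """Fail the pipeline stage if any finding meets or exceeds the cutoff severity."""
    cutoff = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= cutoff]
    return {"passed": not blocking, "blocking": blocking}

# Invented scanner output, standing in for SAST or dependency-scan results.
findings = [
    {"id": "DEP-101", "severity": "medium"},
    {"id": "SAST-7",  "severity": "critical"},
]

print(gate(findings))  # the critical finding blocks the release
```

Because the gate runs on every commit, threats surface during the lifecycle itself rather than after deployment, which is the core DevSecOps claim made above.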
Secure access service edge (SASE) is a cloud-based architecture that integrates networking and software-as-a-service (SaaS) functions, providing them as a unified cloud service. The architecture combines a software-defined wide area network (SD-WAN) or other WAN with multiple security capabilities, securing network traffic.

2.3 Enhancement in Serverless Architecture

Serverless applications are launched on demand, when an event triggers the app code to run; the public cloud provider then assigns the resources necessary for the operation. With serverless apps, containers are deployed and launched on demand as needed, unlike the traditional IaaS cloud computing model, where users must pre-purchase capacity units for always-on server components. The app incurs minimal charges during off-peak hours, and when traffic surges it can scale up seamlessly through the provider without requiring DevOps involvement. A serverless database operates as a fully managed database-as-a-service (DBaaS): it automatically adjusts its computing and storage resources to match demand and eliminates the need to manage infrastructure, scaling, and provisioning, allowing developers to concentrate on building applications and digital products without the burden of managing servers, storage, or backups.

2.4 Evolution of Green Computing

In promoting green computing, infrastructure-as-a-service plays a significant role by allowing cloud providers to manage the infrastructure. This reduces environmental impact and boosts efficiency by running servers at high utilization rates. Studies show that public cloud infrastructure is typically 2-4 times more efficient than traditional data centers, a giant leap forward for sustainable computing practices.
2.5 Emergence of Containerization

Containerization is a form of operating system virtualization in which applications run in distinct user spaces called containers. Containers operate on the same shared operating system yet provide a complete, portable computing environment for virtualized infrastructure. They are self-contained software packages that run in any environment, including private data centers, public clouds, and developer laptops, and they comprise all the components necessary for applications to function correctly on IaaS-based cloud computing.

3. Final Thoughts

With the expansion of multi-cloud environments, the emergence of containerization technologies like Docker and Kubernetes, and enhancements in serverless databases, IaaS is poised to become even more powerful and versatile in meeting the diverse computing needs of organizations. These advancements have enabled IaaS providers to offer a wide range of services and capabilities, such as automatic scaling, load balancing, and high availability, making it easier for businesses to build, deploy, and manage their applications swiftly in the cloud.


Related News

Application Infrastructure

dxFeed Launches Market Data IaaS Project for Tradu, Assumes Infrastructure and Data Provision Responsibilities

PR Newswire | January 25, 2024

dxFeed, a global leader in data solutions and index management for the financial industry, announces the launch of an Infrastructure as a Service (IaaS) project for Tradu, an advanced multi-asset trading platform catering to active traders and investors. In this venture, dxFeed manages the crucial aspects of infrastructure and data provision for Tradu. As an award-winning IaaS provider (named Best Infrastructure Provider at the Sell-Side Technology Awards 2023), dxFeed is poised to address all technical challenges related to market data delivery to hundreds of thousands of end users, allowing Tradu to focus on its core business objectives. Users worldwide can seamlessly connect to Tradu's platform, receiving authorization tokens for access to high-quality market data from the EU, US, Hong Kong, and Australian exchanges. This approach eliminates the complexities and bottlenecks associated with building, maintaining, and scaling the infrastructure required for such extensive global data access. dxFeed's scalable, low-latency infrastructure ensures the delivery of consolidated, top-notch market data from diverse sources to clients located in Asia, the Americas, and Europe. With the ability to rapidly reconfigure and accommodate growing performance demands, dxFeed is equipped to serve hundreds of thousands of concurrent clients, with the potential to scale the solution even further to meet constantly growing demand while providing a seamless and reliable experience. One of the highlights of this collaboration is the introduction of brand-new data feed services exclusively for Tradu's Stocks platform. This proprietary solution enhances Tradu's offerings and demonstrates dxFeed's commitment to delivering tailored and innovative solutions. Tradu also benefits from dxFeed's Stocks Radar, a comprehensive technical and fundamental market analysis solution.
This Software as a Service (SaaS) seamlessly integrates with the infrastructure, offering added value to traders and investors by simplifying complex analytical tasks. Moreover, Tradu leverages the advantages of dxFeed's composite feed (the winner at The Technical Analyst Awards). This accolade reinforces dxFeed's commitment to delivering excellence in data provision, further solidifying Tradu's position as a global leader in online foreign exchange.

"When we were thinking of our new sophisticated multi-asset trading platform for the active trader and investors we met with the necessity of expanding instrument and user numbers. We realized we needed a highly competent, professional team to deploy the infrastructure, taking into account the peculiarities of our processes and services," said Brendan Callan, CEO of Tradu. "On the one hand, it allows our clients to receive quality consolidating data from multiple sources. On the other hand, as a leading global provider of online foreign exchange, we can dispose of dxFeed's geo-scalable infrastructure and perform rapid reconfiguration to meet growing performance demands to provide data to hundreds of thousands of our clients around the globe."

"The range of businesses finding the Market Data IaaS (Infrastructure as a Service) model appealing continues to expand. This approach is gaining traction among various enterprises, from agile startups seeking rapid development to established, prominent brands acknowledging the strategic benefits of delegating market data infrastructure to specialized firms," said Oleg Solodukhin, CEO of dxFeed.

By taking on the responsibilities of infrastructure and data provision, dxFeed empowers Tradu to focus on innovation and client satisfaction, setting the stage for a transformative journey in the dynamic world of financial trading.

About dxFeed

dxFeed is a leading market data and services provider and calculation agent for the capital markets industry.
dxFeed was honored as the "Most Innovative Market Data Project" at the WatersTechnology 2022 IMD & IRD awards. dxFeed focuses primarily on delivering financial information and services to buy- and sell-side institutions in global markets, both traditional and crypto. That includes brokerages, prop traders, exchanges, individuals (traders, quants, and portfolio managers), and academia (educational institutions and researchers). Follow us on Twitter, Facebook, and LinkedIn. Contact dxFeed: pr@dxfeed.com

About Tradu

Tradu is headquartered in London with offices around the world. The global Tradu team speaks more than two dozen languages and prides itself on its responsive and helpful client support. Stratos also operates FXCM, an FX and CFD platform founded in 1999. Stratos will continue to offer FXCM services alongside Tradu's multi-asset platform.

Read More

IT Systems Management

ICANN ANNOUNCES GRANT PROGRAM TO SPUR INNOVATION

PR Newswire | January 16, 2024

The Internet Corporation for Assigned Names and Numbers (ICANN), the nonprofit organization that coordinates the Domain Name System (DNS), announced today the ICANN Grant Program, which will make millions of dollars in funding available to develop projects that support the growth of a single, open and globally interoperable Internet. ICANN is opening an application cycle for the first $10 million in grants in March 2024. Internet connectivity continues to increase worldwide, particularly in developing countries. According to the International Telecommunication Union (ITU), an estimated 5.3 billion of the world's population use the Internet as of 2022, a growth rate of 6.1% over 2021. The Grant Program will support this next phase of global Internet growth by fostering an inclusive and transparent approach to developing stable, secure Internet infrastructure solutions that support the Internet's unique identifier systems. "With the rapid evolution of emerging technologies, businesses and security models, it is critical that the Internet's unique identifier systems continue to evolve," said Sally Costerton, Interim President and CEO, ICANN. "The ICANN Grant Program offers a new avenue to further those efforts by investing in projects that are committed to and support ICANN's vision of a single, open and globally interoperable Internet that fosters inclusion amongst a broad, global community of users." ICANN expects to begin accepting grant applications on 25 March 2024. The application window will remain open until 24 May 2024. A complete list of eligibility criteria can be found at: https://icann.org/grant-program. Once the application window closes, all applications are subject to admissibility and eligibility checks. An Independent Application Assessment Panel will review admissible and eligible applications and the tentative timeline to announce the grantees of the first cycle is in January of 2025. 
Potential applicants will have several opportunities to learn more about the Call for Proposals and ask ICANN Grant Program staff members questions through question-and-answer webinar sessions in the coming months. For more information on the program, including eligibility and submission requirements, the ICANN Grant Program Applicant Guide is available at https://icann.org/grant-program. About ICANN ICANN's mission is to help ensure a stable, secured and unified global Internet. To reach another person on the Internet, you need to type an address – a name or a number – into your computer or other device. That address must be unique so computers know where to find each other. ICANN helps coordinate and support these unique identifiers across the world.

Read More

Application Infrastructure

Legrand Acquires Data Center, Branch, and Edge Management Infrastructure Market Leader ZPE Systems, Inc.

Legrand | January 15, 2024

Legrand, a global specialist in electrical and digital building infrastructures, including data center solutions, has announced its acquisition is complete of ZPE Systems, Inc., a Fremont, California-based company that offers critical solutions and services to deliver resilience and security for customers' business critical infrastructure. This includes serial console servers, sensors, and services routers that enable remote access and management of network IT equipment from data centers to the edge. The acquisition brings together ZPE's secure and open management infrastructure and services delivery platform for data center, branch, and edge environments to Legrand's comprehensive data center solutions of overhead busway, custom cabinets, intelligent PDUs, KVM switches, and advanced fiber solutions. ZPE Systems will become a business unit of Legrand's Data, Power, and Control (DPC) Division. Arnaldo Zimmermann will continue to serve as Vice President and General Manager of ZPE Systems, reporting to Brian DiBella, President of Legrand's DPC Division. "ZPE Systems leads the fast growing and profitable data center and edge management infrastructure market. This acquisition allows Legrand to enter a promising new segment whose strong growth is expected to accelerate further with the development of artificial intelligence and associated needs," said John Selldorff, President and CEO, Legrand, North and Central America. "Edge computing, AI and operational technology will require more complex data centers and edge infrastructure with intelligent IT needs to be built in disparate remote geographies. This makes remote management and operation a critical requirement. ZPE Systems is well positioned to address this need through high performance automation infrastructure solutions, which are complementary to our current data center offerings." 
"By joining forces with Legrand, ZPE Systems is advancing our leadership position in management infrastructure and propelling our technology and solutions to further support existing and new market opportunities," said Zimmermann. About Legrand and Legrand, North and Central America Legrand is the global specialist in electrical and digital building infrastructures. Its comprehensive offering of solutions for commercial, industrial, and residential markets makes it a benchmark for customers worldwide. The Group harnesses technological and societal trends with lasting impacts on buildings with the purpose of improving lives by transforming the spaces where people live, work, and meet with electrical, digital infrastructures and connected solutions that are simple, innovative, and sustainable. Drawing on an approach that involves all teams and stakeholders, Legrand is pursuing its strategy of profitable and responsible growth driven by acquisitions and innovation, with a steady flow of new offerings—including products with enhanced value in use (faster expanding segments: data centers, connected offerings and energy efficiency programs). Legrand reported sales of €8.0 billion in 2022. The company is listed on Euronext Paris and is notably a component stock of the CAC 40 and CAC 40 ESG indexes.

Read More

Application Infrastructure

dxFeed Launches Market Data IaaS Project for Tradu, Assumes Infrastructure and Data Provision Responsibilities

PR Newswire | January 25, 2024

dxFeed, a global leader in data solutions and index management for the financial industry, announces the launch of an Infrastructure as a Service (IaaS) project for Tradu, an advanced multi-asset trading platform catering to active traders and investors. In this venture, dxFeed manages the crucial aspects of infrastructure and data provision for Tradu. As an award-winning IaaS provider (named Best Infrastructure Provider at the Sell-Side Technology Awards 2023), dxFeed is poised to address all technical challenges related to market data delivery to hundreds of thousands of end users, allowing Tradu to focus on its core business objectives.

Users worldwide can seamlessly connect to Tradu's platform, receiving authorization tokens for access to high-quality market data from the EU, US, Hong Kong, and Australian exchanges. This approach eliminates the complexities and bottlenecks associated with building, maintaining, and scaling the infrastructure required for such extensive global data access. dxFeed's scalable, low-latency infrastructure ensures the delivery of consolidated, top-quality market data from diverse sources to clients located in Asia, the Americas, and Europe. With the ability to rapidly reconfigure and accommodate growing performance demands, dxFeed is equipped to serve hundreds of thousands of concurrent clients, with the potential to scale the solution even further to meet constantly growing demand while providing a seamless and reliable experience.

One of the highlights of this collaboration is the introduction of brand-new data feed services exclusively for Tradu's Stocks platform. This proprietary solution enhances Tradu's offerings and demonstrates dxFeed's commitment to delivering tailored and innovative solutions. Tradu also benefits from dxFeed's Stocks Radar, a comprehensive technical and fundamental market analysis solution.
This Software as a Service (SaaS) offering integrates seamlessly with the infrastructure, adding value for traders and investors by simplifying complex analytical tasks. Moreover, Tradu leverages the advantages of dxFeed's composite feed, a winner at The Technical Analyst Awards. This accolade reinforces dxFeed's commitment to delivering excellence in data provision, further solidifying Tradu's position as a global leader in online foreign exchange.

"When we were thinking of our new sophisticated multi-asset trading platform for the active trader and investors we met with the necessity of expanding instrument and user numbers. We realized we needed a highly competent, professional team to deploy the infrastructure, taking into account the peculiarities of our processes and services," said Brendan Callan, CEO of Tradu. "On the one hand, it allows our clients to receive quality consolidating data from multiple sources. On the other hand, as a leading global provider of online foreign exchange, we can dispose of dxFeed's geo-scalable infrastructure and perform rapid reconfiguration to meet growing performance demands to provide data to hundreds of thousands of our clients around the globe."

"The range of businesses finding the Market Data IaaS (Infrastructure as a Service) model appealing continues to expand. This approach is gaining traction among various enterprises, from agile startups seeking rapid development to established, prominent brands acknowledging the strategic benefits of delegating market data infrastructure to specialized firms," said Oleg Solodukhin, CEO of dxFeed.

By taking on the responsibilities of infrastructure and data provision, dxFeed empowers Tradu to focus on innovation and client satisfaction, setting the stage for a transformative journey in the dynamic world of financial trading.

About dxFeed

dxFeed is a leading market data and services provider and calculation agent for the capital markets industry. It was named the "Most Innovative Market Data Project" at the WatersTechnology 2022 IMD & IRD awards. dxFeed focuses primarily on delivering financial information and services to buy- and sell-side institutions in global markets, both traditional and crypto. That includes brokerages, prop traders, exchanges, individuals (traders, quants, and portfolio managers), and academia (educational institutions and researchers). Follow us on Twitter, Facebook, and LinkedIn. Contact dxFeed: pr@dxfeed.com

About Tradu

Tradu is headquartered in London with offices around the world. The global Tradu team speaks more than two dozen languages and prides itself on its responsive and helpful client support. Stratos also operates FXCM, an FX and CFD platform founded in 1999. Stratos will continue to offer FXCM services alongside Tradu's multi-asset platform.

Read More

IT Systems Management

ICANN ANNOUNCES GRANT PROGRAM TO SPUR INNOVATION

PR Newswire | January 16, 2024

The Internet Corporation for Assigned Names and Numbers (ICANN), the nonprofit organization that coordinates the Domain Name System (DNS), announced today the ICANN Grant Program, which will make millions of dollars in funding available to develop projects that support the growth of a single, open, and globally interoperable Internet. ICANN is opening an application cycle for the first $10 million in grants in March 2024.

Internet connectivity continues to increase worldwide, particularly in developing countries. According to the International Telecommunication Union (ITU), an estimated 5.3 billion people worldwide used the Internet as of 2022, a growth rate of 6.1% over 2021. The Grant Program will support this next phase of global Internet growth by fostering an inclusive and transparent approach to developing stable, secure Internet infrastructure solutions that support the Internet's unique identifier systems.

"With the rapid evolution of emerging technologies, businesses and security models, it is critical that the Internet's unique identifier systems continue to evolve," said Sally Costerton, Interim President and CEO, ICANN. "The ICANN Grant Program offers a new avenue to further those efforts by investing in projects that are committed to and support ICANN's vision of a single, open and globally interoperable Internet that fosters inclusion amongst a broad, global community of users."

ICANN expects to begin accepting grant applications on 25 March 2024. The application window will remain open until 24 May 2024. A complete list of eligibility criteria can be found at: https://icann.org/grant-program. Once the application window closes, all applications are subject to admissibility and eligibility checks. An Independent Application Assessment Panel will review admissible and eligible applications; the tentative timeline is to announce the grantees of the first cycle in January 2025.

Potential applicants will have several opportunities to learn more about the Call for Proposals and ask ICANN Grant Program staff members questions through question-and-answer webinar sessions in the coming months. For more information on the program, including eligibility and submission requirements, the ICANN Grant Program Applicant Guide is available at https://icann.org/grant-program.

About ICANN

ICANN's mission is to help ensure a stable, secure, and unified global Internet. To reach another person on the Internet, you need to type an address, a name or a number, into your computer or other device. That address must be unique so computers know where to find each other. ICANN helps coordinate and support these unique identifiers across the world.

Read More

Application Infrastructure

Legrand Acquires Data Center, Branch, and Edge Management Infrastructure Market Leader ZPE Systems, Inc.

Legrand | January 15, 2024

Legrand, a global specialist in electrical and digital building infrastructures, including data center solutions, has announced the completion of its acquisition of ZPE Systems, Inc., a Fremont, California-based company that offers critical solutions and services to deliver resilience and security for customers' business-critical infrastructure. This includes serial console servers, sensors, and services routers that enable remote access and management of network IT equipment from data centers to the edge.

The acquisition brings together ZPE's secure and open management infrastructure and services delivery platform for data center, branch, and edge environments with Legrand's comprehensive data center solutions of overhead busway, custom cabinets, intelligent PDUs, KVM switches, and advanced fiber solutions. ZPE Systems will become a business unit of Legrand's Data, Power, and Control (DPC) Division. Arnaldo Zimmermann will continue to serve as Vice President and General Manager of ZPE Systems, reporting to Brian DiBella, President of Legrand's DPC Division.

"ZPE Systems leads the fast growing and profitable data center and edge management infrastructure market. This acquisition allows Legrand to enter a promising new segment whose strong growth is expected to accelerate further with the development of artificial intelligence and associated needs," said John Selldorff, President and CEO, Legrand, North and Central America. "Edge computing, AI and operational technology will require more complex data centers and edge infrastructure with intelligent IT needs to be built in disparate remote geographies. This makes remote management and operation a critical requirement. ZPE Systems is well positioned to address this need through high performance automation infrastructure solutions, which are complementary to our current data center offerings."

"By joining forces with Legrand, ZPE Systems is advancing our leadership position in management infrastructure and propelling our technology and solutions to further support existing and new market opportunities," said Zimmermann.

About Legrand and Legrand, North and Central America

Legrand is the global specialist in electrical and digital building infrastructures. Its comprehensive offering of solutions for commercial, industrial, and residential markets makes it a benchmark for customers worldwide. The Group harnesses technological and societal trends with lasting impacts on buildings, with the purpose of improving lives by transforming the spaces where people live, work, and meet with electrical and digital infrastructures and connected solutions that are simple, innovative, and sustainable. Drawing on an approach that involves all teams and stakeholders, Legrand is pursuing its strategy of profitable and responsible growth driven by acquisitions and innovation, with a steady flow of new offerings, including products with enhanced value in use (faster-expanding segments: data centers, connected offerings, and energy efficiency programs). Legrand reported sales of €8.0 billion in 2022. The company is listed on Euronext Paris and is notably a component stock of the CAC 40 and CAC 40 ESG indexes.

Read More

Events