Storage for Composable Infrastructure session at HPE Discover

How does storage fit into Composable Infrastructure? This is a theater session from HPE Discover in which Brad Parks, Go-to-Market Strategy Manager, and Gary Thunquest, R&D Architect, discuss it.

Spotlight

PingCAP

Founded in 2015, PingCAP was born when three talented software engineers decided to build a new database system that was more scalable and reliable than MySQL.

OTHER ARTICLES
Hyper-Converged Infrastructure

Implementation of IaaS Container Security for Confidentiality and Integrity

Article | September 14, 2023

Containers have emerged as a popular choice for deploying and scaling applications, owing to their lightweight, isolated, and portable nature. However, the absence of robust security measures may expose containers to diverse threats, compromising the confidentiality and integrity of data and applications.

Contents
1. Introduction
2. IaaS Container Security Techniques
2.1 Container Image Security
2.2 Host Security
2.3 Network Security
2.4 Data Security
2.5 Identity and Access Management (IAM)
2.6 Runtime Container Security
2.7 Compliance and Auditing
3. Conclusion

1. Introduction
Infrastructure as a Service (IaaS) has become an increasingly popular way of deploying and managing applications, and containerization has emerged as a leading technology for packaging and deploying those applications. Containers are software packages that include all the components necessary to run in any environment. While containers offer numerous benefits, such as portability, scalability, and speed, they also introduce new security challenges that must be addressed. Implementing adequate IaaS container security requires a comprehensive approach encompassing multiple layers and techniques. This blog explores the critical components of IaaS container security and provides an overview of the techniques and best practices for ensuring the confidentiality and integrity of containerized applications. By following these practices, organizations can leverage the benefits of IaaS and containerization while mitigating the accompanying security risks.

2. IaaS Container Security Techniques
IaaS security risks are growing, and the security issues associated with IaaS deployments increasingly lead to data breaches. With those concerns in mind, seven key techniques are described below.

2.1 Container Image Security
Container images are the building blocks of containerized applications.
Ensuring the security of these images is essential to preventing security threats. The following measures are used for container image security:
Using secure registries: The registry is where container images are stored and distributed. With centrally managed registries, an information security office can scan images for security issues and system managers can easily assess package gaps.
Signing images: Container images can be signed with digital signatures to ensure their authenticity. Signed images can be verified before deployment to confirm they have not been tampered with.
Scanning images: Standard AppSec tools such as Software Composition Analysis (SCA) can check container images for vulnerabilities in software packages and dependencies, but extra dependencies can be introduced during development or even at runtime, so scanning must be ongoing.

2.2 Host Security
Host security is a collection of capabilities that provide a framework for implementing security solutions on hosts to prevent attacks. The underlying host infrastructure where containers are deployed must be secured. The following measures are used for host security:
Using secure operating systems: The host operating system must be kept safe and up to date, with high-severity security patches applied promptly (for example, within 7 days of release, and lower-severity patches within 30 days) to prevent vulnerabilities and security issues.
Applying security patches: Security patches must be applied to the host operating system and other software packages to fix vulnerabilities and prevent security threats.
Hardening the host environment: The host environment must be hardened by disabling unnecessary services, limiting access to the host, and applying security policies that prevent unauthorized access.

2.3 Network Security
Network security involves securing the network traffic between containers and the outside world.
The following measures are used for network security:
Using microsegmentation and firewalls: Microsegmentation tools combined with next-generation firewalls provide container network security. Microsegmentation software leverages network virtualization to build highly granular security zones in data centers and cloud applications, isolating and safeguarding each workload.
Encryption: Encryption protects network traffic and prevents eavesdropping and interception of data.
Access control measures: Access control can restrict access to containerized applications based on user roles and responsibilities.

2.4 Data Security
Data stored in containers must be secured to ensure its confidentiality and integrity. The following measures are used for data security:
Using encryption: Data should be encrypted in transit using Transport Layer Security version 1.2 (TLS 1.2) or higher (TLS 1.0 and 1.1 are deprecated) to protect it from unauthorized access and prevent data leaks. All outbound traffic from a private cloud should be encrypted at the transport layer.
Access control measures: Access control can restrict access to sensitive data in containers based on user roles and responsibilities.
Not storing sensitive data in clear text: Sensitive data must never be stored in clear text within containers, to prevent unauthorized access and data breaches. Application data should also be backed up at least weekly.

2.5 Identity and Access Management (IAM)
IAM involves managing access to the container infrastructure and its resources based on the roles and responsibilities of users. The following measures are used for IAM:
Implementing IAM solutions: IAM solutions can manage user identities, assign roles and responsibilities, authenticate users, and enforce access control policies.
Multi-factor authentication: Multi-factor authentication adds an extra layer of security to the login process.
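As a concrete illustration of the transport-encryption guidance above, the sketch below (minimal, and not tied to any particular container platform) builds a client-side TLS context that refuses anything older than TLS 1.2 while keeping certificate and hostname verification enabled:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a client TLS context that rejects deprecated protocol versions."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Refuse TLS 1.0/1.1; require 1.2 or newer.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Certificate validation and hostname checking stay on (the defaults).
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    assert ctx.check_hostname
    return ctx

ctx = strict_client_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Any socket wrapped with this context will fail the handshake against an endpoint that only offers deprecated protocol versions, which is exactly the behavior the policy above asks for.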
Auditing capabilities: Auditing can monitor user activity and detect potential security threats.

2.6 Runtime Container Security
To keep its containers safe, a business should employ a defense-in-depth strategy as part of runtime protection. Runtime container security can detect and block malicious processes, files, and network activity that deviate from a baseline. Container runtime protection adds an extra layer of defense against malicious code on top of the network security provided by containerized next-generation firewalls. In addition, embedded web application and API security can prevent HTTP layer 7 threats such as the OWASP Top 10, denial of service (DoS), and bots.

2.7 Compliance and Auditing
Compliance and auditing ensure that the container infrastructure complies with relevant regulatory and industry standards. The following measures are used for compliance and auditing:
Monitoring and auditing capabilities: Monitoring and auditing can detect and report cloud security incidents and violations. Enabling data access logs on AWS S3 buckets that contain high-risk confidential data is one such example.
Compliance frameworks: Compliance frameworks can be used to ensure that the container infrastructure complies with relevant regulatory and industry standards, such as HIPAA, PCI DSS, and GDPR.

3. Conclusion
IaaS container security is critical for organizations that rely on containerization for deploying and managing their applications. Going forward, expect an increased use of AI and ML to detect and respond to security incidents in real time, the adoption of more advanced encryption techniques to protect data, and the integration of security measures into the entire application development lifecycle.
IaaS container security is an ongoing process that requires continuous attention and improvement to stay ahead of evolving challenges. By prioritizing security and implementing effective measures, organizations can confidently leverage the benefits of containerization while maintaining the confidentiality and integrity of their applications and data.
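The image-signing and verification measures in section 2.1 ultimately reduce to comparing a cryptographic digest of the pulled artifact against a reference pinned at signing time. The sketch below illustrates that core check in a registry-agnostic way; the blob contents are invented for illustration:

```python
import hashlib

def image_digest(image_bytes: bytes) -> str:
    """Compute the sha256 digest of an image blob, in registry notation."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def verify_image(image_bytes: bytes, pinned_digest: str) -> bool:
    """Deploy only if the content matches the digest pinned at signing time."""
    return image_digest(image_bytes) == pinned_digest

blob = b"example image contents"   # stand-in for a pulled image layer
pinned = image_digest(blob)        # recorded when the image was signed
print(verify_image(blob, pinned))                 # True
print(verify_image(blob + b"tampered", pinned))   # False
```

Real signing tools add a signature over the digest so the pinned value itself can be trusted, but the tamper-detection property shown here is the same.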

Hyper-Converged Infrastructure, IT Systems Management

Wireless Data Centers and Cloud Computing

Article | September 14, 2023

One of the most exciting areas of Vubiq Network’s innovative millimeter wave technology is in the application of ultra high-speed, short-range communications as applied to solving the scaling constraints and costs for internal data center connectivity and switching. Today’s limits of cabled and centralized switching architectures are eliminated by leveraging the wide bandwidths of the millimeter wave spectrum for the high-density communications requirements inside the modern data center. Our patented technology has the ability to provide more than one terabit per second of wireless uplink capacity from a single server rack through an innovative approach to create a millimeter wave massive mesh network. The elimination of all inter-rack cabling – as well as the elimination of all aggregation and core switches – is combined with higher throughput, lower latency, lower power, higher reliability, and lower cost by using millimeter wave wireless connectivity.

Hyper-Converged Infrastructure

Why are Investments in Network Monitoring Necessary for Businesses?

Article | October 10, 2023

Businesses depend more and more on information technology to accomplish daily objectives. The viability and profitability of a firm are directly affected by whether the right technological processes are in place. The common complaint that "the Internet is down," usually a symptom of poor connectivity, shows how crucial network maintenance is, since troubleshooting should begin and end with a network expert. In practice, though, an employee spends part of the day trying to "repair the Internet," and the money spent on that time is the cost of the company's failure to implement a dependable network monitoring system. The direct financial loss grows with network unreliability. Because expanding wide area network (WAN) infrastructure and cloud networking have become significant components of enterprise computing, networks have grown far more virtualized and are no longer restricted to a physical location or to particular hardware. As networks evolve, the need for IT network management grows with them. As organizations modernize their IT infrastructure, there are several reasons to consider purchasing a network management system.

Creating More Effective, Less Redundant Systems
Every network has to manage the flow of information and the transfer of data through significant hubs. Over the years, networking engineers have had to route traffic from networking equipment to end devices carefully, to avoid slowing data transfer, consuming more IP addresses than a network scheme requires, or creating dead loops. An effective IT management solution can analyze how your network is operating and provide immediate insight into the changes needed to cut redundancy and improve workflow. Greater efficiency means more productivity and less time spent troubleshooting delayed data transfers.
Increasing Firewall Defense
Given that more applications are being used for large internal and external data transfers, every network must have adequate firewalls and access control in place. Beyond screen sharing and remote desktop services, more companies require team meeting software with live video conferencing. Programs with these features can be highly attractive to attackers, so it is crucial that firewalls stop attackers from using the software to reach restricted sections of corporate networks. Network management tools can configure your firewalls and help guarantee that only secure network connections and programs are used in critical parts of your system.

The bottom line is that your company network will constantly require security and development, and the underlying network must be fast and dependable to satisfy demands for both workplace productivity and customer experience. Which IT network management system, then, is best for your company? Effectiveness doesn't require great complexity, and if a system works with well-known network providers, there is a good chance its cost will be justified. Rock-solid security will be the most crucial factor, but you should also look for a system that can operate across physical, cloud, and hybrid infrastructure.
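At its core, the monitoring system described above samples a metric, compares it to a baseline, and alerts on deviation. The toy sketch below shows that loop for link latency; the sample values, baseline, and tolerance are invented for illustration:

```python
from statistics import mean

def check_latency(samples_ms, baseline_ms, tolerance=2.0):
    """Flag a link whose average latency drifts past tolerance x baseline."""
    avg = mean(samples_ms)
    return {"avg_ms": avg, "alert": avg > baseline_ms * tolerance}

# Hypothetical round-trip-time samples (ms) for a healthy and a degraded link.
healthy = check_latency([1.1, 0.9, 1.3], baseline_ms=1.0)
degraded = check_latency([5.2, 6.1, 7.4], baseline_ms=1.0)
print(healthy["alert"], degraded["alert"])  # False True
```

A production system layers scheduling, persistence, and notification on top, but the decision at the center, observed value versus baseline times tolerance, is this simple.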

DevOps

Accelerating DevOps and Continuous Delivery with IaaS Virtualization

Article | May 5, 2023

Adopting DevOps and continuous delivery (CD) in IaaS environments is a strategic imperative for organizations seeking agility, competitiveness, and customer satisfaction in their software delivery processes.

Contents
1. Introduction
2. What is IaaS Virtualization?
3. Virtualization Techniques for DevOps and Continuous Delivery
4. Integration of IaaS with CI/CD Pipelines
5. Considerations in IaaS Virtualized Environments
5.1 CPU Swap Wait
5.2 CPU System/Wait Time for VKernel
5.3 Memory Balloon
5.4 Memory Swap Rate
5.5 Memory Usage
5.6 Disk/Network Latency
6. Industry Tips for IaaS Virtualization Implementation
6.1 Infrastructure Testing
6.2 Application Testing
6.3 Security Monitoring
6.4 Performance Monitoring
6.5 Cost Optimization
7. Conclusion

1. Introduction
Infrastructure as a Service (IaaS) virtualization presents significant advantages for organizations seeking to enhance their agility, flexibility, and speed to market within DevOps and continuous delivery frameworks. Addressing the associated risks and challenges is crucial, and can be achieved with the appropriate monitoring and testing techniques listed later in this blog. IaaS virtualization allows organizations to provision and de-provision resources as needed, eliminating the need for long-term investments in hardware and data centers. It also supports multiple operating systems, databases, and programming languages, letting teams select the tools and technologies that best suit their requirements. However, to leverage the full potential of virtualized IaaS, organizations must implement comprehensive testing and monitoring strategies, ensure proper security and compliance controls, and adopt best practices for resource optimization and management. To achieve high availability, fault tolerance, and the advanced networking that enables complex application architectures in IaaS virtualization, the blog offers five industry tips.
2. What is IaaS Virtualization?
IaaS virtualization involves running multiple operating systems with different configurations simultaneously on shared hardware. Running virtual machines requires a software layer known as the virtual machine monitor (VMM), or hypervisor. Virtualization in IaaS supports website hosting, application development and testing, disaster recovery, and data storage and backup. Startups and small businesses with limited IT resources and budgets benefit greatly from virtualized IaaS, which provides the necessary infrastructure resources quickly and without significant capital expenditure. Virtualized IaaS is a potent tool for businesses and organizations of all sizes, enabling greater flexibility, scalability, and efficiency in infrastructure resources.

3. Virtualization Techniques for DevOps and Continuous Delivery
Virtualization is a vital part of the DevOps software stack. Virtualization in the DevOps process allows teams to create, test, and deploy code in simulated environments without wasting valuable computing resources. DevOps teams can use virtual services for thorough testing, preventing bottlenecks that could slow release times. DevOps relies heavily on virtualization for building intricate cloud, API, and SOA systems. In addition, virtual machines benefit test-driven development (TDD) teams that prefer to begin their troubleshooting at the API level.

4. Integration of IaaS with CI/CD Pipelines
Continuous integration is a coding practice in which small code changes are implemented frequently and checked into a version control repository. The CI process not only packages software and database components but also automatically executes unit tests and other tests, giving developers vital feedback on any breakage caused by code changes. Continuous testing integrates automated tests into the CI/CD pipeline.
For example, unit and functional tests identify issues during continuous integration, while performance and security tests run after a build is delivered in continuous delivery. Continuous delivery is the practice of automating the deployment of applications to one or more delivery environments. IaaS provides access to computing resources through virtual server instances, which replicate the capabilities of an on-premises data center, and offers services including server space, security, load balancing, and additional bandwidth. In modern software development and deployment, it is common to integrate IaaS with CI/CD pipelines. Infrastructure-as-code (IaC) tools help automate the creation and management of infrastructure; templates can provision resources on the IaaS platform consistently and in line with software requirements; and containerization technologies such as Docker and Kubernetes can deploy applications onto IaaS platforms.

5. Considerations in IaaS Virtualized Environments
5.1 CPU Swap Wait
CPU swap wait is the time a virtual system spends waiting while the hypervisor swaps parts of the VM's memory back in from disk. Swapping happens when the hypervisor is short of memory, which can be due to missing balloon drivers or a memory shortage, and it can lengthen the application's response time. Installing the balloon driver and/or reducing the number of VMs on the physical machine can resolve this issue.

5.2 CPU System/Wait Time for VKernel
Virtualization systems often report CPU system or wait time for the virtualization kernel on behalf of each virtual machine, as a measure of CPU resource overhead. While this metric cannot be linked directly to response time, a significant increase can drive up both ready and swap times. If that occurs, the system may be misconfigured or overloaded, and reducing the number of VMs on the machine may be necessary.

5.3 Memory Balloon
Memory ballooning is a memory management technique used in virtualized IaaS environments. A balloon driver inside the VM is instructed to consume memory within the guest, allowing the hypervisor to reclaim the backing pages for other uses. When the host system is low on memory, it reclaims memory from its virtual machines this way, which can degrade guest performance, causing swapping, reduced file-system buffers, and smaller system caches.

5.4 Memory Swap Rate
Memory swap rate is a performance metric used in virtualized IaaS environments to measure the amount of memory being swapped to disk. A high swap rate leads to longer CPU swap times and degrades application performance. When a running VM requires more memory than is physically available on the server, the hypervisor may use disk space as temporary storage for the excess. To optimize, ensure that VMs have sufficient memory resources allocated.

5.5 Memory Usage
Memory usage is the amount of memory a VM consumes at a given time, assessed at the host level, at the VM level, and against granted memory. When usage exceeds the physical memory available on the server, the hypervisor may spill excess memory to disk, causing performance issues. The disparity between consumed and granted memory indicates the overcommitment rate, which can be adjusted through ballooning.

5.6 Disk/Network Latency
Some virtualization providers include integrated utilities for measuring the latency of the disks and network interfaces a virtual machine uses. Since latency feeds directly into response time, increased latency at the hypervisor level also affects the application. Excessive latency indicates the system is overloaded and requires reconfiguration.
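The memory metrics in sections 5.3 to 5.5 can be related with a little arithmetic: granted memory over physical memory gives the overcommitment ratio, and the gap between granted and consumed memory is what a balloon driver could reclaim without swapping. A sketch with invented host and VM sizes:

```python
def overcommit_ratio(granted_mb, physical_mb):
    """Granted VM memory divided by physical host memory; >1 means overcommitted."""
    return granted_mb / physical_mb

def reclaimable_mb(granted_mb, consumed_mb):
    """Memory the balloon driver could reclaim without forcing swap."""
    return max(granted_mb - consumed_mb, 0)

# Hypothetical host: 64 GB physical memory, three VMs granted 32 GB each.
granted = 3 * 32 * 1024
physical = 64 * 1024
print(overcommit_ratio(granted, physical))  # 1.5
# One VM granted 32 GB but consuming only 20 GB has 12 GB reclaimable.
print(reclaimable_mb(32 * 1024, 20 * 1024))  # 12288
```

When the reclaimable slack across VMs is smaller than the overcommitted amount, the hypervisor has to swap, which is exactly when the swap-rate and swap-wait metrics above start to climb.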
These metrics make it possible to monitor and detect any negative impact a virtualized system might have on an application.

6. Industry Tips for IaaS Virtualization Implementation
Testing, compliance management, and security are critical aspects of managing virtualized IaaS environments. By implementing a comprehensive strategy, organizations can ensure the reliability, security, and performance of their infrastructure and applications.

6.1 Infrastructure Testing
This involves testing the infrastructure components of the IaaS environment, such as the virtual machines, networks, and storage, to ensure the infrastructure functions correctly and is free of performance bottlenecks, security vulnerabilities, and configuration issues. Techniques include testing the virtualized environment itself, storage testing (data replication, backup, and recovery processes), and network testing.

6.2 Application Testing
Applications running in the IaaS virtual environment should be thoroughly tested to ensure they perform as expected. This includes functional testing, to verify that the application meets its requirements, and performance testing, to verify that it can handle anticipated user loads.

6.3 Security Monitoring
Security monitoring is critical in IaaS environments, owing to the increased risks and threats. It involves monitoring the infrastructure and applications for potential security threats, vulnerabilities, or breaches. In addition, regular vulnerability assessments and penetration testing help identify and address potential security issues before they become significant problems.

6.4 Performance Monitoring
Performance monitoring is essential to ensuring that the underlying infrastructure meets performance expectations and has no bottlenecks. It comprises monitoring metrics such as CPU usage, memory usage, network traffic, and disk utilization, and using that information to identify performance issues and optimize resource usage.

6.5 Cost Optimization
Cost optimization keeps a virtualized IaaS environment efficient in its resource allocation. Organizations reduce costs and optimize resource usage by identifying and monitoring usage patterns and by optimizing elastic, scalable resources. This involves right-sizing resources, using infrastructure automation, reserved instances, and spot instances (unused compute capacity purchased at a discount), and optimizing storage usage.

7. Conclusion
IaaS virtualization has become a critical component of DevOps and continuous delivery practices, giving DevOps teams on-demand access to scalable infrastructure resources so they can develop, test, and deploy applications with greater agility and efficiency. As DevOps teams continue to streamline processes, automation will play an increasingly important role: automated deployment, testing, and monitoring reduce manual intervention and increase the speed and accuracy of development cycles. In addition, containers offer a lightweight and flexible alternative to traditional virtualization, allowing teams to package applications and their dependencies into portable, self-contained units that move easily between environments. This reduces the complexity of managing virtualized infrastructure and enables greater flexibility and scalability. By embracing these technologies and integrating them into their workflows, DevOps teams can achieve greater efficiency and accelerate their delivery of high-quality software products.
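The right-sizing step in section 6.5 is, at its core, matching provisioned capacity to observed peak usage plus headroom. A minimal sketch of that selection, where the instance sizes (in vCPUs), headroom factor, and usage figures are all invented for illustration:

```python
def right_size(peak_usage, headroom=0.2, sizes=(1, 2, 4, 8, 16, 32)):
    """Pick the smallest instance size covering peak usage plus headroom."""
    needed = peak_usage * (1 + headroom)
    for size in sizes:
        if size >= needed:
            return size
    return sizes[-1]  # cap at the largest offered size

# A VM peaking at 5.1 vCPUs needs 5.1 * 1.2 = 6.12, so the 8-vCPU size fits.
print(right_size(5.1))  # 8
print(right_size(1.5))  # 2
```

Real cost optimizers apply the same logic per metric (CPU, memory, disk) against a provider's price list, then layer reserved or spot purchasing on top of the chosen size.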



Related News

Converged Architecture and Server Consolidation

April 20, 2016

Generally speaking, the cost of maintaining a higher number of individual hardware chassis exceeds the cost of maintaining a lower number of individual hardware chassis. Among other things, there is the cost of allocating space within the data center to a particular hardware chassis, as well as power consumption and cooling requirements. There are also costs involved in maintaining the hardware chassis and its components. Organizations are always looking for new ways to reduce their overall expenditure on hardware by using existing systems as efficiently as possible. This means making the most of every megabyte of RAM and every spare processor cycle, rather than having that hardware sit idle. For example, a server that on average uses 60% of its available RAM and processor capacity isn’t being used as efficiently as a server that on average uses 80% of its available RAM and processor capacity.
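The utilization comparison above can be made concrete: consolidation reduces chassis count by packing workloads so that fewer hosts run at a higher average utilization. A toy calculation, with made-up workload and host sizes:

```python
import math

def hosts_needed(workloads, host_capacity, target_util):
    """Minimum hosts required if each may be filled to target_util of capacity."""
    usable = host_capacity * target_util
    return math.ceil(sum(workloads) / usable)

# Twenty workloads of 12 capacity-units each, on hosts of 100 units.
workloads = [12] * 20
print(hosts_needed(workloads, 100, 0.60))  # 4 hosts at 60% target utilization
print(hosts_needed(workloads, 100, 0.80))  # 3 hosts at 80% target utilization
```

Raising the target from 60% to 80% drops the host count from four to three, which is exactly the space, power, cooling, and maintenance saving the article describes.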


Hewlett Packard Enterprise and Microsoft announce plans to deliver integrated hybrid IT infrastructure

Hewlett Packard Enterprise | December 01, 2015

Today at Hewlett Packard Enterprise Discover, HPE and Microsoft Corp. announced new innovation in Hybrid Cloud computing through Microsoft Azure, HPE infrastructure and services, and new program offerings. The extended partnership appoints Microsoft Azure as a preferred public cloud partner for HPE customers while HPE will serve as a preferred partner in providing infrastructure and services for Microsoft’s hybrid cloud offerings. “Hewlett Packard Enterprise is committed to helping businesses transform to hybrid cloud environments in order to drive growth and value,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise. “Public cloud services, like those Azure provides, are an important aspect of a hybrid cloud strategy and Microsoft Azure blends perfectly with HPE solutions to deliver what our customers need most.” The partnering companies will collaborate across engineering and services to integrate innovative compute platforms that help customers optimize their IT environment, leverage new consumption models and accelerate their business further, faster.

