IT Systems Management

Logpoint poll highlights extent of insecure and unmonitored business-critical systems

Logpoint
Logpoint has today announced findings from a recent poll to uncover the security and cost implications enterprises face with their existing IT infrastructure. The poll, issued on Twitter, was targeted at cybersecurity and IT professionals in both the US and UK.

The poll revealed the extent of insecure and unmonitored business-critical systems: 40 percent of respondents said they do not include business-critical systems such as SAP in their cybersecurity monitoring, and a further 27 percent were unsure whether such systems were included at all. This is concerning given that SAP serves as the core system behind every aspect of business operations. Excluding it from centralized security monitoring leaves organizations exposed to cyber threats.

"Considering that 77 percent of global transactions touch an SAP system, protecting it against cyber-attacks is vital. Organizations store their most critical assets within SAP, and this data must be protected. SAP systems require extensive protection and security monitoring, and businesses need to ensure they have an integrated security operations platform that monitors all IT infrastructure to ensure they have complete visibility into their SAP system" said Andrew Lintell, Logpoint VP for EMEA.

Furthermore, when asked how they currently review SAP logs for cybersecurity events or cyber threat activity, almost 30 percent of respondents admitted to not reviewing SAP logs in any way, and nearly another 30 percent said they did not know whether the logs were being monitored. Failing to review these logs creates a blind spot for businesses and makes it harder to detect and respond quickly to fraud and threats within SAP.

In addition, only 23 percent said the process of reviewing SAP logs for cybersecurity events or threat activity was automated through a SIEM, while almost 19 percent still do so manually.

"Bringing SAP systems under the remit of cybersecurity solutions can massively reduce the security risks and provide logs to aid any audit processes. Accommodating it within the SIEM, for example, can enable these applications to benefit from automation and continuous monitoring, as well as coordinated threat detection and response with log storage and log management, to assist in subsequent investigations," commented Lintell.

"The problem though, is that businesses are trying to fill the gaps in their cybersecurity stacks by devoting more spend to a growing litany of cloud security products, with many toolsets and features going unused or resulting in configuration failure and, ultimately, data breaches that could be avoided," Lintell added.

For those businesses looking to invest in cloud security, nearly 40 percent of respondents regarded software licensing in the cloud as too expensive, and 24 percent said it led to unknown future costs. Lock-in or lack of control over software licensing was flagged as an issue by 22 percent, and a lack of user-based licensing options by 14 percent, as the predominant charging model is based on data usage.

Lintell commented: "Businesses must continue to build out their cloud presence, and the market is seeing some natural consolidation as complementary technologies such as SIEM and SOAR converge. There are cost-effective options available, and a SaaS all-in-one solution can limit the costs associated with licensing, particularly if it is based on the number of devices sending data rather than on the volume of data, which is where businesses are seeing costs escalate."

Other News
Storage Management

SoftIron Recognized as a Sample Vendor in Gartner Hype Cycle for Edge Computing

GlobeNewswire | October 25, 2023

SoftIron, the worldwide leader in private cloud infrastructure, today announced it has been named as a Sample Vendor in the “Gartner Hype Cycle for Edge Computing, 2023.” A Gartner Hype Cycle provides a view of how a technology or application will evolve over time, providing a sound source of insight to manage its deployment within the context of your specific business goals. The five phases of a Hype Cycle are the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity.

SoftIron is recognized in the Gartner report as a Sample Vendor for Edge Storage. The report defines edge storage as technologies that enable the creation, analysis, processing and delivery of data services at, or close to, the location where the data is generated or consumed, rather than in a centralized environment. Gartner predicts that infrastructure and operations (I&O) leaders are beginning the process of laying out a strategy for how they intend to manage data at the edge. Although I&O leaders embrace infrastructure as a service (IaaS) cloud providers, they also realize that a significant part of their infrastructure services will remain on-premises and will require edge storage data services.

Gartner Hype Cycles provide a graphic representation of the maturity and adoption of technologies and applications, and how they are potentially relevant to solving real business problems and exploiting new opportunities. The latest Hype Cycle analyzed 31 emerging technologies and included a Priority Matrix that provides perspective on which edge computing innovations will have a bigger impact and which might take longer to fully mature.

“We are excited to be recognized in the 2023 Gartner Hype Cycle for Edge Computing,” said Jason Van der Schyff, COO at SoftIron. “We believe SoftIron is well positioned to help our customers address and take advantage of the latest trends and developments in Edge Computing as reported in Gartner’s Hype Cycle.”

Read More

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of artificial intelligence (AI) and machine learning (ML) workloads on an enterprise’s data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency for enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications such as generative AI, computer vision, natural language processing, large language models and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, “the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production.” “By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%.”

“Alluxio empowers the world’s leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward,” said Haoyuan Li, Founder and CEO, Alluxio. “Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management.”

With this announcement, Alluxio expands from a one-product portfolio to two product offerings - Alluxio Enterprise AI and Alluxio Enterprise Data - catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-generation version of Alluxio Enterprise Edition and will continue to be the ideal choice for businesses focused primarily on analytics workloads.

Accelerating the End-to-End Machine Learning Pipeline

Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable and cost-effective on existing data lakes. It helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment to yield quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet-giant scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage.
With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure.

Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing the massive numbers of small files typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity. The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics caches. Finally, it supports analytics and full machine learning pipelines - from ingestion to ETL, pre-processing, training and serving.

Alluxio Enterprise AI includes the following key features:

Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise’s existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference.

Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI’s distributed caching feature enables AI engines to read and write data through the high-performance Alluxio cache instead of slow data lake storage. Alluxio’s intelligent caching strategies are tailored to the I/O patterns of AI engines – large file sequential access, large file random access, and massive small file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.

Seamless Data Access for AI Workloads Across On-prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to manage AI workloads easily across diverse infrastructure environments. Providing a source of truth of data for the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.

New Distributed System Architecture, Battle-tested at Scale - Alluxio Enterprise AI builds on a new, innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation to provide infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio’s proven expertise in distributed systems, this new architecture addresses the ever-increasing challenges of system scalability, metadata management, high availability, and performance.

“Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses,” said Mike Leone, Analyst, Enterprise Strategy Group.
“Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving.”

“We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure,” said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. “Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry.”

“Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement,” said Mengyu Hu, Software Engineer in the data platform team, Zhihu. “In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we’ve significantly enhanced model training performance by 3x and deployment by 10x, with GPU utilization doubled. We are excited about Alluxio’s Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave.”

Deploying Alluxio in Machine Learning Pipelines

According to Gartner, data accessibility and data volume/complexity is one of the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio works across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, consider PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path, /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets are reused across multiple epochs, so training speed is no longer bottlenecked by retrieval from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.

Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to the inference cluster with low latency. As a result, downstream AI applications can start inferencing using the most up-to-date models as soon as they are available.
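As a rough sketch of the training flow just described, the following example shows a standard PyTorch data loader reading from the virtual local path mentioned above. It assumes the Alluxio FUSE mount is already in place; the dataset layout, model, checkpoint path, and hyperparameters are placeholders for illustration and are not part of Alluxio's API.

```python
# Illustrative sketch only: assumes an Alluxio FUSE mount already exposes the
# data lake under /mnt/alluxio_fuse (path taken from the description above).
# Dataset layout, transforms, and model are placeholders, not Alluxio APIs.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

ALLUXIO_FUSE_PATH = "/mnt/alluxio_fuse/training_datasets"  # virtual local path backed by the Alluxio cache

# PyTorch reads through the local mount; repeated epochs hit the distributed
# cache instead of going back to S3 for every read.
train_data = datasets.ImageFolder(
    root=ALLUXIO_FUSE_PATH,
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]),
)
loader = DataLoader(train_data, batch_size=64, shuffle=True, num_workers=8)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.LazyLinear(10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# After training, the checkpoint is written back through the same mount so the
# caching layer can persist it to the underlying S3 bucket (path illustrative).
torch.save(model.state_dict(), "/mnt/alluxio_fuse/models/model.pt")
```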
Platform Integration with Existing Systems

To integrate Alluxio with an existing platform, users deploy an Alluxio cluster between the compute engines and the storage systems. On the compute side, Alluxio integrates seamlessly with popular machine learning frameworks such as PyTorch, Apache Spark, TensorFlow and Ray; enterprises can connect these frameworks to Alluxio via its REST API, POSIX API or S3 API. On the storage side, Alluxio connects with all types of filesystems or object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio works both on-premises and in the cloud, in bare-metal or containerized environments. Supported cloud platforms include AWS, GCP and Azure.
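As a sketch of one of the integration paths mentioned above, the following example shows the general pattern of pointing a standard S3 client (here, boto3) at an S3-compatible endpoint. The endpoint URL, port, bucket names, credentials and object keys are placeholders rather than documented Alluxio values; the actual endpoint depends on how the cluster is deployed.

```python
# Illustrative sketch only: shows the general pattern of pointing an S3 client
# at an S3-compatible endpoint. The endpoint URL, port, bucket, and credentials
# below are placeholders, not documented Alluxio values; consult the deployment
# documentation for the real S3 API endpoint of your cluster.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://alluxio-proxy.example.internal:39999",  # hypothetical S3-compatible endpoint
    aws_access_key_id="placeholder",
    aws_secret_access_key="placeholder",
)

# List cached training data and fetch a model artifact through the same API an
# application would otherwise use against the data lake directly.
for obj in s3.list_objects_v2(Bucket="training-datasets").get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("models", "model.pt", "/tmp/model.pt")
```

Because the client-side API is unchanged, an application written against the data lake can be redirected through the caching layer by swapping the endpoint configuration rather than rewriting data access code.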

Read More

Hyper-Converged Infrastructure

Colohouse Launches Dedicated Server and Hosting Offering for Data Center and Cloud Customers

Business Wire | October 05, 2023

Colohouse, a prominent data center colocation, cloud, dedicated server and services provider, is merging TurnKey Internet’s hosting and dedicated server offering into the Colohouse brand and services portfolio. This strategic move follows the acquisition of TurnKey Internet in 2021 and aligns with Colohouse’s broader compute, connectivity and cloud strategy. With the integration of dedicated servers and hosting services into its core brand portfolio, Colohouse aims to enhance its ability to meet the diverse needs of its growing customer base. Including TurnKey Internet’s servers and services is a testament to Colohouse’s dedication to delivering comprehensive and impactful solutions for its customers and prospects in key markets and edge locations.

Colohouse will begin offering hosting services immediately, available on www.colohouse.com.

Products: dedicated bare metal servers, enterprise series dedicated servers, cloud VPS servers, control panel offerings and licensing.

Data centers: Colohouse’s dedicated servers will be available in Miami, FL; Colorado Springs, CO; Chicago, IL; Orangeburg, NY; Albany, NY; and Amsterdam, The Netherlands.

Client Center: The support team will be available to assist customers 24/7/365 through a single support portal online, via email and phone, and through Live Chat on colohouse.com.

Compliance and security are a top priority for Colohouse’s customers. In the fall of 2023, Colohouse will have its first combined SOC audit for all of its data center locations, including dedicated servers and hosting. The report will be available for request on its website upon completion of the audit.

“When I accepted the job of CEO at Colohouse, my vision was, and still is, to build a single-platform company that provides core infrastructure but also extends past just colocation, cloud, or bare metal. We recognize that businesses today require flexible options to address their IT infrastructure needs. This is a step for us to create an ecosystem within Colohouse that gives our customers room to test their applications instantly or have a solution for backups and migrations with the same provider. The same provider that knows the nuances of a customer's IT infrastructure, like colocation or cloud, can also advise or assist that same customer with alternative solutions that enhance their overall IT infrastructure,” shared Jeremy Pease, CEO of Colohouse.

Jeremy further added, “The customer journey and experience is our top priority. Consolidating the brands into Colohouse removes confusion about the breadth of our offerings. Our capability to provide colocation, cloud, and hosting services supports our customers’ growing demand for infrastructure that can be optimized for cost, performance and security. This move also consolidates our internal functions, which will continue to improve the customer experience at all levels.”

All products are currently available on colohouse.com. TurnKey Internet customers will not be impacted by the transition from TurnKey Internet to Colohouse, and all Colohouse and TurnKey Internet customers will continue to receive the industry's best service and support.

Colohouse will also be launching its first-ever “Black Friday Sale” for all dedicated servers and hosting solutions. TurnKey Internet’s customers have incorporated this annual sale into their project planning and budget cycles to take advantage of the price breaks. The sale will begin in mid-November on colohouse.com.
About Colohouse

Colohouse provides a digital foundation that connects our customers with impactful technology solutions and services. Our managed data center and cloud infrastructure, paired with key edge locations and reliable connectivity, allow our customers to confidently scale their applications and data while optimizing for cost, performance, and security. To learn more about Colohouse, please visit: https://colohouse.com/.

Read More

Application Infrastructure

Penguin Solutions Certified as NVIDIA DGX-Ready Managed Services Partner

Business Wire | September 28, 2023

Penguin Solutions™, an SGH™ brand (Nasdaq: SGH) that designs, builds, deploys, and manages AI and accelerated computing infrastructures at scale, today announced that it has been certified by NVIDIA to support enterprises deploying NVIDIA DGX™ AI computing platforms under the NVIDIA DGX-Ready Managed Services program.

NVIDIA DGX systems are an advanced supercomputing platform for large-scale AI development. The NVIDIA DGX-Ready Managed Services program gives customers the option to outsource management of DGX systems deployed in corporate data centers, including the implementation and monitoring of the server, storage, and networking resources required to support DGX platforms.

“Generative AI requires a completely new computing infrastructure compared to traditional IT,” said Troy Kaster, vice president, commercial sales at Penguin Solutions. “These new computing infrastructures require services skills, which Penguin is uniquely qualified to support given our extensive experience partnering with some of the largest companies in AI.”

As a full-service integration and services provider, Penguin has the capabilities to design at scale, deploy at speed, and provide managed services for NVIDIA DGX SuperPOD solutions. Penguin has designed, built, deployed, and managed some of the largest AI training clusters in the world. Penguin currently manages over 50,000 NVIDIA GPUs for Fortune 100 customers, including Meta’s AI Research SuperCluster – with 2,000 NVIDIA DGX systems and 16,000 NVIDIA A100 Tensor Core GPUs – one of the most powerful AI training clusters in the world.

“AI is transforming organizations around the world, and many businesses are looking to deploy the technology without the complexities of managing infrastructure,” said Tony Paikeday, senior director, DGX platform at NVIDIA. “With DGX-Ready Managed Services offered by Penguin Solutions, our customers can deploy the world’s leading platform for enterprise AI development with a simplified operations model that lets them tap into the leadership-class performance of DGX and focus on innovating with AI.”

Advantages of Penguin Solutions powered by NVIDIA DGX include:

Design large-scale AI infrastructure combining the most recent DGX systems, ultra-high-speed networking solutions, and cutting-edge storage options for clusters tailored to customer requirements.

Manage AI infrastructure making the most of multiple layers of recent hardware and software, such as acceleration libraries, job scheduling and orchestration.

Reduce risk associated with investments in computing infrastructure.

Optimize efficiency of AI infrastructure with best-in-class return on investment.

About Penguin Solutions

The Penguin Solutions™ portfolio, which includes Penguin Computing™, accelerates customers’ digital transformation with the power of emerging technologies in HPC, AI, and IoT, with solutions and services that span the continuum of edge, core, and cloud. By designing highly advanced infrastructure, machines, and networked systems, we enable the world’s most innovative enterprises and government institutions to build the autonomous future, drive discovery and amplify human potential.

Read More