APPLICATION INFRASTRUCTURE

SolidRun Accelerates V2X Infrastructure Development with New Mini SOM based on NXP's i.MX 8XLite

SolidRun | October 29, 2021

SolidRun, a leading developer and manufacturer of high-performance System on Module (SOM) solutions, Single Board Computers (SBC) and network edge solutions, today announces its SOM line based on the NXP i.MX 8XLite applications processor and engineered for a variety of V2X applications. Available in single- or dual-core configurations, these new SOMs pack all the essential components required to quickly develop V2X, V2I and industrial IoT applications into a 30 x 47mm form factor.

"In order to accelerate the adoption of autonomous vehicle technology, we need to build a digital infrastructure that supports it. The i.MX 8XLite applications processor was designed specifically for that purpose," said Andres Lopez de Vegara Lemos, Product Manager, Edge Processing business, NXP Semiconductors. "Working with SolidRun helps us jumpstart and reduce the development time of V2X hardware solutions by providing engineers a turn-key development tool based on our SoC that serves variety of applications."

Targeting vehicle telematics, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, road infrastructure connectivity and industrial equipment, SolidRun's Mini SOM based on NXP's i.MX 8XLite provides a foundation for secure V2X applications. It delivers the real-time synchronization and control needed for a variety of smart-city applications, combining the high-performance application processing of NXP's i.MX 8X processor with V2X acceleration and NXP's RoadLINK SAF5400 single-chip DSRC modem for next-generation telematics. The SOM based on NXP's i.MX 8XLite also features an array of high-speed interfaces, including Ethernet, PCIe Gen 3, USB 2.0, and CAN-FD. As part of the NXP Product Longevity program, NXP guarantees the i.MX 8XLite SoC will be manufactured for 15 years. Similarly, SolidRun guarantees this SOM will be manufactured for at least 15 years, making it well suited for long-term vehicle-based communications infrastructure applications.

Beyond serving as a building block for V2X infrastructure, the SOM based on NXP's i.MX 8XLite is also well suited for industrial IoT, building control and robotics applications that require time-sensitive networking (TSN) Ethernet or controller area network (CAN) connectivity. For advanced industrial processes that demand reliable, accurate synchronization and real-time control, the SoC's Arm Cortex-A35 cores and CAN-FD interface provide low-latency data transmission.
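For developers evaluating the CAN-FD path on a Linux-based SOM like this one, the minimal sketch below shows how an application might send and receive CAN FD frames through SocketCAN using the python-can library. It assumes the board support package exposes the controller as a standard SocketCAN interface named can0 and that the link has already been brought up with ip link; the interface name and bitrates are illustrative assumptions, not taken from SolidRun's documentation.

```python
# Minimal CAN FD send/receive sketch using python-can over SocketCAN.
# Assumes the BSP exposes the CAN controller as "can0" and the link is
# already up, e.g.:
#   ip link set can0 up type can bitrate 500000 dbitrate 2000000 fd on
# (interface name and bitrates are illustrative).
import can

def main() -> None:
    # Open the SocketCAN interface with CAN FD framing enabled.
    with can.Bus(channel="can0", interface="socketcan", fd=True) as bus:
        # Send a CAN FD frame with bit-rate switching for the data phase.
        msg = can.Message(
            arbitration_id=0x123,
            data=bytes(range(16)),   # CAN FD allows payloads up to 64 bytes
            is_extended_id=False,
            is_fd=True,
            bitrate_switch=True,
        )
        bus.send(msg)

        # Block for up to one second waiting for a reply.
        reply = bus.recv(timeout=1.0)
        if reply is not None:
            print(f"received id=0x{reply.arbitration_id:X} data={reply.data.hex()}")
        else:
            print("no frame received within timeout")

if __name__ == "__main__":
    main()
```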

Engineered to serve a variety of application environments, ranging from commercial and industrial vehicles to roadside communications hubs and even robotics, the SOM supports a wide operating temperature range of -40°C to 85°C. Its efficient design maintains a low operating temperature without a fan, reducing the potential for heat- and dust-related failures and enabling reliable long-term operation and performance.

"Smart cities and V2X communications will not only dramatically improve the efficiency of our roadways, but it will also play a significant role in reducing collision-related traffic deaths and make it easier for emergency vehicles to cut through congested areas,However, none of this can take shape without reliable hardware, like our SOM connecting the infrastructure and vehicles. We look forward to working closely with NXP to ensure our SOMs reliably power V2X communications for years to come."

Dr. Atai Ziv, CEO at SolidRun

SolidRun also offers a HummingBoard carrier board that is well suited for prototyping with the i.MX 8XLite-based SOM. While not much larger than the SOM at just 30 x 55mm, the HummingBoard carrier supports up to 2GB of LPDDR4 memory and provides expansion and communications options, including 100BASE-T1 automotive Ethernet, USB 2.0 ports, UART, SPI, SDIO, and I2C interfaces, and I/O pins.

The SOMs and HummingBoard carrier boards are available through SolidRun. To help expedite the development process, customers will be provided with an optimized board support package, stable long-term support for select software distributions, access to SolidRun's support tools and sample source code.

Spotlight

The FUJITSU servers in the BS2000 SE series are unique on the mainframe market. As flexibly configurable hybrid systems, their openness, ease of integration and manageability open up completely new possibilities for multi-server operation. To meet the high demands of ever-increasing networking and the explosive growth of transactions, mainframes are more suited than ever to serve as central high-performance systems.


Related News

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT

CoreWeave Among First Cloud Providers to Offer NVIDIA HGX H100 Supercomputers Set to Transform AI Landscape

CoreWeave | November 07, 2022

CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform. CoreWeave was the first Elite Cloud Service Provider for Compute in the NVIDIA Partner Network (NPN) and is also among the NPN's Elite Cloud Service Providers for Visualization.

"This validates what we're building and where we're heading," said Michael Intrator, CoreWeave co-founder and CEO. "CoreWeave's success will continue to be driven by our commitment to making GPU-accelerated compute available to startup and enterprise clients alike. Investing in the NVIDIA HGX H100 platform allows us to expand that commitment, and our pricing model makes us the ideal partner for any company looking to run large-scale, GPU-accelerated AI workloads."

NVIDIA's ecosystem and platform are the industry standard for AI, and the NVIDIA HGX H100 platform enables a leap forward in the breadth and scope of AI work businesses can now tackle. The NVIDIA HGX H100 delivers up to seven times better efficiency in high-performance computing (HPC) applications, up to nine times faster AI training on the largest models and up to 30 times faster AI inference than the NVIDIA HGX A100. That speed, combined with the lowest NVIDIA GPUDirect network latency in the market with the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to "days or hours instead of months." Such technology is critical now that AI has permeated every industry.

"AI and HPC workloads require a powerful infrastructure that delivers cost-effective performance and scale to meet the needs of today's most demanding workloads and applications. CoreWeave's new offering of instances featuring NVIDIA HGX H100 supercomputers will give customers the flexibility and performance needed to power large-scale HPC applications."

Dave Salvator, director of product marketing at NVIDIA

In the same way that drivers of fuel-efficient cars save money on gas, CoreWeave clients spend between 50% and 80% less on compute resources. The company's performance-adjusted cost structure is two-fold. First, clients only pay for the HPC resources they use, and CoreWeave cloud instances are highly configurable. Second, CoreWeave's Kubernetes-native infrastructure and networking architecture produce performance advantages, including industry-leading spin-up times and responsive auto-scaling capabilities that allow clients to use compute more efficiently. CoreWeave's competitors charge for idle compute capacity to maintain access to GPUs and use legacy networking products that degrade performance at scale.

"CoreWeave's infrastructure is purpose-built for large-scale GPU-accelerated workloads; we specialize in serving the most demanding AI and machine learning applications," said Brian Venturo, CoreWeave co-founder and chief technology officer. "We empower our clients to create world-changing technology by delivering practical access to high-performance compute at scale, on top of the industry's fastest and most flexible infrastructure."

CoreWeave leverages a range of open-source Kubernetes projects, integrates with best-in-class technologies such as Determined.AI and offers support for open-source AI models including Stable Diffusion, GPT-NeoX-20B and BLOOM as part of its mission to lead the world in AI and machine learning infrastructure.
Founded in 2017, CoreWeave provides fast, flexible, and highly available GPU compute resources that are up to 35 times faster and 80% less expensive than large, generalized public clouds. An Elite Cloud Service Provider for Compute and Visualization in the NPN, CoreWeave offers cloud services for compute-intensive projects, including AI, machine learning, visual effects and rendering, batch processing and pixel streaming. CoreWeave's infrastructure is purpose-built for burstable workloads, with the ability to scale up or down in seconds.

About CoreWeave

CoreWeave is a specialized cloud provider, delivering a massive scale of GPU compute resources on top of the industry's fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute-intensive use cases (digital assets, VFX and rendering, machine learning and AI, batch processing and pixel streaming) that are up to 35 times faster and 80% less expensive than the large, generalized public clouds.
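The announcement does not describe CoreWeave's internals, but as a generic illustration of what Kubernetes-native burst scaling looks like from a client's side, the sketch below uses the official Kubernetes Python client to scale a GPU-backed Deployment up before a burst of work and back down afterwards. The deployment name, namespace, and replica counts are hypothetical, and this is not CoreWeave-specific code.

```python
# Generic sketch: scale a GPU-backed Deployment with the official Kubernetes
# Python client (the "kubernetes" package). Names are hypothetical and this
# does not represent CoreWeave's API.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the Deployment's scale subresource to the desired replica count."""
    config.load_kube_config()              # or config.load_incluster_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Burst up for a rendering or training job, then scale back down.
    scale_deployment("gpu-render-workers", "default", replicas=8)
    # ... submit work, wait for completion ...
    scale_deployment("gpu-render-workers", "default", replicas=0)
```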

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE

Virtana Launches Kubernetes Support Strategy Across the Portfolio with Container Rightsizing in Infrastructure Monitoring and Cloud Cost Optimization

Virtana | September 08, 2022

Virtana, a leading provider of AI-driven solutions for hybrid cloud management and monitoring, announces a new Kubernetes strategy that will deliver container support across the full portfolio of Virtana Platform solutions. This strategy will deliver actionable infrastructure insights for optimal performance, cost, and capacity of business applications. The first deliverable of this strategy is a rightsizing feature for container environments through Virtana Platform's cloud cost optimization solution. With this capability, Virtana Platform users will have access to container rightsizing recommendations alongside those for traditional compute, all within the Cost Savings Opportunities dashboard of the Virtana Platform.

Enterprises are increasingly adopting Kubernetes to accelerate software development, scale deployment, and enable faster digital transformation. Gartner® estimates that by 2026 more than 90% of global organizations will be running containerized applications in production (an increase from fewer than 40% in 2020), and IDC reports that 80% of new workloads are being developed in containers. Given this rapid spike in Kubernetes usage, AWS and Azure users need a tool that provides cost savings for both traditional compute and containerized/serverless compute resources. As of today, most users do not appropriately constrain, monitor, or manage their Kubernetes containers, leading to excess spend and unpredictable performance.

"There is a huge opportunity to unlock development speed, flexibility, and operational efficiency through container usage. Yet many enterprises leveraging containers forget the basics of rightsizing and monitoring before and during development, leading to huge end-of-month bills and unnecessary cloud spend."

Jon Cyr, Head of Product for Virtana

Jon continued: "Virtana's strategy is to address the challenges that arise through a customer's cloud-native journey. We offer the unique ability to trace the entire data path from container to compute, network, and storage."

Virtana collects performance metrics from customers' container environments through Prometheus, the leading open-source monitoring tool for Kubernetes. Through the Virtana Platform, the company then analyzes the metrics to provide insight and deliver prescriptive rightsizing recommendations for Amazon's Elastic Kubernetes Service (EKS) and Microsoft's Azure Kubernetes Service (AKS) containers. From these recommendations, users can tailor the default rightsizing based on constraints for CPU and memory to meet specific business requirements and risk tolerance.

Key benefits of Virtana's cloud cost optimization Kubernetes feature:

- Rightsizes traditional compute and containers across multiple clouds in one tool
- Tunes sizing to an organization's risk tolerance with what-if analysis that includes CPU and memory
- Automatically optimizes instances with rightsizing recommendations
- Adjusts to real-time changes in cloud service provider offerings to optimize cost structures
- Analyzes performance metrics using open-source collection

"Containers and Kubernetes are becoming popular technologies for cloud-native applications and have significantly grown in adoption during the past five years. Gartner estimates that more than one-third of enterprises are running containerized workloads in production and nearly 10% of total workloads run on containers today. Despite the apparent progress, the container ecosystem continues to be chaotic, fast paced and fragmented," said Arun Chandrasekaran and Wataru Katsurashima of Gartner in The Innovation Leader's Guide to Navigating the Cloud-Native Container Ecosystem.

This new Kubernetes optimization feature will be available for all Virtana Platform customers with a Pro license, with additional configuration. Companies can try Virtana Platform's cloud cost management and optimization for free at virtana.com/optimize-free-tier. Free trial accounts are able to access this Kubernetes feature and view a limited number of recommendations.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

About Virtana

Virtana provides a unified multi-cloud management platform to simplify the optimization, migration, and monitoring of application workloads across public, private, and hybrid cloud environments. The cloud-agnostic SaaS platform allows enterprises to efficiently plan their cloud migrations and then rightsize workloads across their hybrid cloud infrastructure for performance, capacity, and cost; most customers see 25% cloud cost savings or more within the first 10 days of use.
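As a rough illustration of the rightsizing idea described above (not Virtana's algorithm or API), the sketch below compares a container's observed peak CPU and memory usage against its configured Kubernetes requests and suggests new requests with a configurable headroom. In practice the usage figures would come from a metrics source such as Prometheus; here they are passed in directly, and all names and thresholds are hypothetical.

```python
# Illustrative container rightsizing heuristic (hypothetical, not Virtana's
# method): compare observed p95 usage with configured Kubernetes requests
# and suggest new requests with a safety headroom.
from dataclasses import dataclass

@dataclass
class ContainerStats:
    name: str
    cpu_request_millicores: float      # what the pod spec currently requests
    mem_request_mib: float
    cpu_p95_millicores: float          # observed p95 usage, e.g. from Prometheus
    mem_p95_mib: float

def recommend(stats: ContainerStats, headroom: float = 0.25) -> dict:
    """Suggest requests equal to observed p95 usage plus a headroom factor."""
    cpu_suggested = stats.cpu_p95_millicores * (1 + headroom)
    mem_suggested = stats.mem_p95_mib * (1 + headroom)
    return {
        "container": stats.name,
        "cpu_request_millicores": round(cpu_suggested),
        "mem_request_mib": round(mem_suggested),
        "cpu_overprovisioned_pct": round(
            100 * (1 - stats.cpu_p95_millicores / stats.cpu_request_millicores)
        ),
        "mem_overprovisioned_pct": round(
            100 * (1 - stats.mem_p95_mib / stats.mem_request_mib)
        ),
    }

if __name__ == "__main__":
    # Hypothetical container that requests far more than it uses.
    web = ContainerStats(
        name="checkout-api",
        cpu_request_millicores=2000, mem_request_mib=4096,
        cpu_p95_millicores=350, mem_p95_mib=900,
    )
    print(recommend(web))
```

The headroom parameter plays the role of the risk tolerance mentioned in the announcement: a more conservative team would choose a larger value, trading some of the potential savings for a bigger buffer against usage spikes.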

Read More

APPLICATION INFRASTRUCTURE, DATA STORAGE, IT SYSTEMS MANAGEMENT

Infosys to Modernize CIRCOR's IT Infrastructure Landscape for Efficient and Agile Operations

Infosys | October 20, 2022

Infosys, a global leader in next-generation digital services and consulting, today announced its collaboration with CIRCOR International, one of the world's leading providers of mission-critical flow control products and services for the Industrial and Aerospace & Defense markets, to transform its IT infrastructure, service desk, and user support applications.

As part of this strategic engagement, Infosys will work on transforming CIRCOR's IT landscape and modernizing its IT infrastructure. CIRCOR selected Infosys for its strong system integration and automation capabilities, extensive partner network, and ability to effectively address client requirements. Through this collaboration, Infosys will transform IT services for CIRCOR's business users by deploying SLA-based managed IT services, improving processes, bringing agility into operations, and modernizing the local data centers and cloud landscapes. Infosys will additionally provide integrated services and use ServiceNow as the IT service management (ITSM) platform to support CIRCOR's infrastructure, applications, and operations. Further, Infosys will modernize CIRCOR's cybersecurity landscape, leveraging its Cyber Next platform and helping CIRCOR improve its Cybersecurity Maturity Model Certification (CMMC) compliance. The engagement aims to ensure significant cost savings through the duration of the program and enable year-on-year productivity improvements.

"The goal of our alliance with Infosys is to offer all our customers – both internal and external – faster and more reliable service, enhance our cybersecurity, and provide 24x7 monitoring for our global IT environment."

Pete Sattler, Chief Information Officer, CIRCOR

Jasmeet Singh, Executive Vice President and Global Head of Manufacturing, Infosys, said, "We are delighted to collaborate with CIRCOR to fulfill its strategic business goals and accelerate its IT infrastructure transformation journey. With an in-depth understanding of CIRCOR's business priorities and challenges, Infosys will help improve IT service delivery and productivity through analytics, automation, and process maturity."

About Infosys

Infosys is a global leader in next-generation digital services and consulting. Over 300,000 of our people work to amplify human potential and create the next opportunity for people, businesses, and communities. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer clients, in more than 50 countries, as they navigate their digital transformation powered by the cloud. We enable them with an AI-powered core, empower the business with agile digital at scale and drive continuous improvement with always-on learning through the transfer of digital skills, expertise, and ideas from our innovation ecosystem. We are deeply committed to being a well-governed, environmentally sustainable organization where diverse talent thrives in an inclusive workplace.

Read More