APPLICATION INFRASTRUCTURE

CEVA Redefines High Performance AI/ML Processing for Edge AI and Edge Compute Devices

CEVA | January 06, 2022

Consumer Electronics Show – CEVA, Inc., the leading licensor of wireless connectivity and smart sensing technologies and integrated IP solutions, today announced NeuPro-M, its latest-generation processor architecture for artificial intelligence and machine learning (AI/ML) inference workloads. Targeting the broad markets of Edge AI and Edge Compute, NeuPro-M is a self-contained heterogeneous architecture composed of multiple specialized co-processors and configurable hardware accelerators that seamlessly and simultaneously process diverse Deep Neural Network workloads, boosting performance by 5-15X compared to its predecessor. An industry first, NeuPro-M supports both system-on-chip (SoC) and Heterogeneous SoC (HSoC) scalability to achieve up to 1,200 TOPS, and offers optional robust secure boot and end-to-end data privacy.

NeuPro-M compliant processors initially include the following pre-configured cores:

  • NPM11 – single NeuPro-M engine, up to 20 TOPS at 1.25GHz
  • NPM18 – eight NeuPro-M engines, up to 160 TOPS at 1.25GHz

Illustrating its leading-edge performance, a single NPM11 core, when processing a ResNet50 convolutional neural network, achieves a 5X performance increase and 6X memory bandwidth reduction versus its predecessor, which results in exceptional power efficiency of up to 24 TOPS per watt.
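
As a rough back-of-the-envelope check of those figures (using only the throughput and efficiency numbers quoted above; actual power depends on workload, frequency and process node), the quoted peak throughput and efficiency imply a sub-watt power envelope for a single NPM11 core:

```python
# Back-of-the-envelope check of the quoted NPM11 figures.
# Assumptions: 20 TOPS peak throughput (from the core list above) and
# up to 24 TOPS/W efficiency (from the ResNet50 example). Real power
# draw depends on workload, frequency and process node.

peak_tops = 20.0              # NPM11 peak throughput, TOPS
efficiency_tops_per_w = 24.0  # quoted best-case efficiency, TOPS/W

implied_power_w = peak_tops / efficiency_tops_per_w
print(f"Implied power at peak, best-case efficiency: {implied_power_w:.2f} W")
# -> roughly 0.83 W, i.e. a sub-watt budget suitable for edge devices
```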

Built on the success of its predecessors, NeuPro-M can process all known neural network architectures and provides integrated native support for next-generation networks such as transformers, 3D convolution, self-attention, and all types of recurrent neural networks. NeuPro-M has been optimized to process more than 250 neural networks, more than 450 AI kernels, and more than 50 algorithms. The embedded vector processing unit (VPU) ensures future-proof, software-based support for new neural network topologies and new advances in AI workloads. Furthermore, the CDNN offline compression tool can increase the FPS/Watt of NeuPro-M by a factor of 5-10X for common benchmarks, with minimal impact on accuracy.

"The artificial intelligence and machine learning processing requirements of edge AI and edge compute are growing at an incredible rate, as more and more data is generated and sensor-related software workloads continue to migrate to neural networks for better performance and efficiencies. With the power budget remaining the same for these devices, we need to find new and innovative methods of utilizing AI at the edge in these increasingly sophisticated systems. NeuPro-M is designed on the back of our extensive experience deploying AI processors and accelerators in millions of devices, from drones to security cameras, smartphones and automotive systems. Its innovative, distributed architecture and shared memory system controllers reduces bandwidth and latency to an absolute minimum and provides superb overall utilization and power efficiency. With the ability to connect multiple NeuPro-M compliant cores in a SoC or Chiplet to address the most demanding AI workloads, our customers can take their smart edge processor designs to the next level."

Ran Snir, Vice President and General Manager of the Vision Business Unit at CEVA

The NeuPro-M heterogeneous architecture is composed of function-specific co-processors and load-balancing mechanisms that are the main contributors to the huge leap in performance and efficiency compared to its predecessor. By distributing control functions to local controllers and implementing local memory resources in a hierarchical manner, NeuPro-M achieves data flow flexibility that results in more than 90% utilization and protects the different co-processors and accelerators against data starvation at any given time. Optimal load balancing is obtained by the CDNN framework, which applies various data flow schemes adapted to the specific network, the desired bandwidth, the available memory, and the target performance.
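
As a purely conceptual illustration of load balancing of this kind (the layer costs, engine count and greedy policy below are hypothetical; this is not CEVA's CDNN scheduler), a framework can assign per-layer workloads to whichever engine currently has the least work queued, keeping utilization high across engines:

```python
import heapq

# Conceptual sketch of balancing per-layer workloads across parallel engines.
# The layer costs, engine count and greedy policy are hypothetical; this is
# not CEVA's CDNN scheduler.
def balance(layer_costs, num_engines):
    """Greedily assign each layer to the least-loaded engine; return per-engine load."""
    heap = [(0.0, e) for e in range(num_engines)]   # (accumulated load, engine id)
    heapq.heapify(heap)
    loads = [0.0] * num_engines
    for cost in sorted(layer_costs, reverse=True):  # place the largest layers first
        load, engine = heapq.heappop(heap)
        loads[engine] = load + cost
        heapq.heappush(heap, (loads[engine], engine))
    return loads

layer_costs = [9.0, 7.5, 6.0, 4.0, 3.5, 2.0, 1.5, 1.0]  # hypothetical per-layer costs
loads = balance(layer_costs, num_engines=4)
total, peak = sum(loads), max(loads)
print(f"per-engine load: {loads}")
print(f"utilization if engines run in lockstep: {total / (len(loads) * peak):.0%}")
```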

NeuPro-M architecture highlights include:

  • Main grid array consisting of 4K MACs (Multiply And Accumulates), with mixed precision of 2-16 bits
  • Winograd transform engine for weights and activations, reducing convolution time by 2X and allowing 8-bit convolution processing with <0.5% precision degradation (see the illustrative sketch after this list)
  • Sparsity engine to avoid operations with zero-value weights or activations per layer, for up to 4X performance gain, while reducing memory bandwidth and power consumption
  • Fully programmable Vector Processing Unit, for handling new unsupported neural network architectures with all data types, from 32-bit Floating Point down to 2-bit Binary Neural Networks (BNN)
  • Configurable Weight and Data compression down to 2-bits while storing to memory, and real-time decompression upon reading, for reduced memory bandwidth
  • Dynamically configured two level memory architecture to minimize power consumption attributed to data transfers to and from an external SDRAM
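
To make the Winograd item above concrete, here is a minimal sketch of the classic 1-D F(2,3) Winograd transform, which computes two outputs of a 3-tap convolution with 4 element-wise multiplications instead of 6. This is the textbook Lavin-Gray construction for illustration only, not CEVA's engine:

```python
import numpy as np

# Standard 1-D Winograd F(2,3) transform matrices (Lavin & Gray).
# Illustrative only; not CEVA's implementation.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a 3-tap convolution over a 4-sample tile,
    using 4 element-wise multiplications instead of 6."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile
g = np.array([0.5, 1.0, -1.0])       # 3-tap filter

direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
print(winograd_f23(d, g))  # matches the direct 6-multiply computation
```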

To illustrate the benefit of these innovative features in the NeuPro-M architecture, concurrent use of the orthogonal mechanisms of the Winograd transform, the Sparsity engine, and low-resolution 4x4-bit activations delivers more than a 3X reduction in cycle count for networks such as ResNet50 and YOLO v3.
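
As a simple illustration of the sparsity idea (a conceptual sketch only, not a model of CEVA's sparsity engine), skipping multiply-accumulates whose weight or activation is zero reduces the operation count in proportion to the sparsity of each operand:

```python
import numpy as np

# Conceptual illustration of zero-skipping: count the MACs a sparsity-aware
# engine could skip for a given weight/activation sparsity. Not a model of
# CEVA's sparsity engine, just the basic idea.
rng = np.random.default_rng(0)

def sparse_mac_count(weights, activations):
    """MACs still needed when operations with a zero weight or zero activation are skipped."""
    needed = (weights != 0) & (activations != 0)
    return int(needed.sum())

n = 1_000_000
weights = rng.normal(size=n)
activations = rng.normal(size=n)

# Assume ~50% of weights are pruned to zero and ReLU zeroes ~50% of activations.
weights[rng.random(n) < 0.5] = 0.0
activations[rng.random(n) < 0.5] = 0.0

dense_macs = n
sparse_macs = sparse_mac_count(weights, activations)
print(f"dense MACs:  {dense_macs}")
print(f"sparse MACs: {sparse_macs}  (~{dense_macs / sparse_macs:.1f}x fewer)")
# With ~50% sparsity on both operands, roughly 4x fewer MACs are needed,
# consistent with the 'up to 4X performance gain' figure above.
```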

As neural network weights and biases, data sets, and network topologies become key intellectual property of their owners, there is a strong need to protect them from unauthorized use. The NeuPro-M architecture supports secure access in the form of an optional root of trust, authentication, and cryptographic accelerators.

For the automotive market, NeuPro-M cores and the CEVA Deep Neural Network (CDNN) deep learning compiler and software toolkit comply with the automotive ISO 26262 ASIL-B functional safety standard and meet the stringent quality assurance standards IATF 16949 and A-SPICE.

Together with CEVA's multi-award-winning neural network compiler, CDNN, and its robust software development environment, NeuPro-M provides a fully programmable hardware/software AI development environment for customers to maximize their AI performance. CDNN includes innovative software that can fully utilize the customer's customized NeuPro-M hardware to optimize power, performance, and bandwidth. The CDNN software also includes a memory manager for memory reduction, optimal load-balancing algorithms, and wide support of various network formats including ONNX, Caffe, TensorFlow, TensorFlow Lite, PyTorch, and more. CDNN is compatible with common open-source frameworks, including Glow, TVM, Halide, and TensorFlow, and includes model optimization features such as layer fusion and post-training quantization, all while using precision conservation methods.
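
As an illustration of what a post-training quantization step does (a generic sketch; CDNN's actual algorithms, scale selection and precision-conservation methods are not described in this announcement), symmetric linear quantization maps floating-point weights to low-bit integers and back:

```python
import numpy as np

# Generic symmetric post-training quantization of a weight tensor to n bits.
# Illustrative only; CDNN's actual quantization methods are not public here.
def quantize_symmetric(w, bits=8):
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for int8
    scale = np.abs(w).max() / qmax        # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q * scale

rng = np.random.default_rng(42)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)

q8, s8 = quantize_symmetric(w, bits=8)
w_hat = dequantize(q8, s8)
err = np.abs(w - w_hat).max()
print(f"int8 scale = {s8:.6f}, max reconstruction error = {err:.6f}")
```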

NeuPro-M is available for licensing to lead customers today and for general licensing in Q2 this year. NeuPro-M customers can also benefit from Heterogeneous SoC design services from CEVA to help integrate and support system design and chiplet development.


About CEVA, Inc.
CEVA is the leading licensor of wireless connectivity and smart sensing technologies and integrated IP solutions for a smarter, safer, connected world. We provide Digital Signal Processors, AI engines, wireless platforms, cryptography cores and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence. These technologies are offered in combination with our Intrinsix IP integration services, helping our customers address their most complex and time-critical integrated circuit design projects. Leveraging our technologies and chip design skills, many of the world's leading semiconductor companies, system companies and OEMs create power-efficient, intelligent, secure and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial, aerospace & defense and IoT.

Our DSP-based solutions include platforms for 5G baseband processing in mobile, IoT and infrastructure, advanced imaging and computer vision for any camera-enabled device, audio/voice/speech and ultra-low-power always-on/sensing applications for multiple IoT markets. For sensor fusion, our Hillcrest Labs sensor processing technologies provide a broad range of sensor fusion software and inertial measurement unit ("IMU") solutions for markets including hearables, wearables, AR/VR, PC, robotics, remote controls and IoT. For wireless IoT, our platforms for Bluetooth (low energy and dual mode), Wi-Fi 4/5/6/6e (802.11n/ac/ax), Ultra-wideband (UWB), NB-IoT and GNSS are the most broadly licensed connectivity platforms in the industry.

Spotlight

Public cloud service providers (CSPs) purport to offer instantaneous, scalable virtual infrastructure with utility billing. In reality, there is wide variance in cloud performance. While the public cloud IaaS industry streamlines IT through these advantages, a lack of standardization in performance can lead to businesses overspending in order to meet the performance requirements of their applications. Cloud Spectator set out to test four of the largest, most well-known public cloud providers with data centers in North America. This report measures and ranks CSPs using a comprehensive performance and price-performance methodology designed by Cloud Spectator specifically for measuring cloud environments. The study documented in this report examines the performance of vCPU, memory, and block storage, as well as the value (the CloudSpecs™ Score) defined by the relationship between price and performance.
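
The report excerpt above does not spell out the CloudSpecs formula. As a loose illustration of a price-performance metric of this general kind (the providers, numbers and normalization below are hypothetical and are not Cloud Spectator's methodology), one could normalize performance per dollar against the best-scoring provider:

```python
# Hypothetical illustration of a price-performance score: performance per
# dollar, normalized so the best provider scores 100. The providers, numbers
# and normalization are made up; this is not Cloud Spectator's formula.
providers = {
    # name: (benchmark score, hourly price in USD)
    "provider_a": (950.0, 0.192),
    "provider_b": (880.0, 0.166),
    "provider_c": (910.0, 0.210),
    "provider_d": (840.0, 0.150),
}

perf_per_dollar = {name: perf / price for name, (perf, price) in providers.items()}
best = max(perf_per_dollar.values())

for name, value in sorted(perf_per_dollar.items(), key=lambda kv: -kv[1]):
    print(f"{name}: score = {100 * value / best:.1f}")
```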

Related News

WINDOWS SERVER MANAGEMENT

Oracle and Teléfonos de México Partner to Offer Oracle Cloud Infrastructure Services in Mexico

Oracle | October 03, 2022

Oracle and Teléfonos de México have announced an agreement to jointly offer Oracle Cloud Infrastructure (OCI) services to customers across Mexico. Under the partnership, TELMEX-Triara will become the host partner for the second planned Oracle Cloud Region in Mexico. TELMEX-Triara will be able to offer OCI services as part of its portfolio through its cloud center of excellence, comprised of highly specialized infrastructure and Oracle applications professionals, in addition to supporting clients through professional and managed services.

The new region will support the increasing demand for cloud services in Mexico and will provide Mexican enterprises and public sector organizations with a broad set of infrastructure, platform, and application cloud services that meet the most stringent security standards. Customers can use OCI to modernize their applications, innovate with data and analytics, or close their data centers completely. With the Oracle Cloud Querétaro Region already available and a second planned region in Mexico, Oracle will be able to help Mexican organizations with business continuity while enabling them to address their data residency and compliance requirements.

"We're pleased to be working with one of Mexico's largest telecommunications providers to bring OCI to organizations of all sizes and support their digital transformation initiatives. Together we will help boost digital innovation in Mexico and advance the Mexican government's National Digital Strategy, which seeks to increase interoperability, digital identity, connectivity and inclusion, and digital skills," said Maribel Dos Santos, CEO and senior vice president, Oracle Mexico.

With this agreement, TELMEX-Triara will be one of the first telecommunications operators in Latin America and Mexico to offer OCI services to organizations in the region, leveraging its long-standing experience in helping customers in any industry migrate their mission-critical workloads to the cloud.

"One of the main objectives of this alliance is to help our clients in their digital transformation process, offering a complete and differentiated portfolio with the support of leading partners. This agreement with Oracle allows us to expand our cloud services, strengthen our strategic position, and reinforce our value proposition with an industry leader," said Héctor Slim, CEO, Teléfonos de México.

"We are excited to offer our customers, partners, and developers in Mexico access to next-generation cloud services across two planned OCI regions. In partnership with TELMEX-Triara, we will develop new cloud service offerings to jointly help customers successfully move to the cloud," said Rodrigo Galvão, senior vice president, Technology, Oracle Latin America.

The data center stretches close to 800,000 square feet and boasts over 180,000 square feet of certified and specialized rooms. The TELMEX network is made up of more than 198,000 miles of fiber optic cables and is considered one of the most extensive telecommunications networks in Mexico and Latin America. Its public and private redundant connectivity and high-speed bandwidths guarantee the availability and support of solutions and applications anytime, anywhere.

About TELMEX-Triara
TELMEX claims to have invested significantly in developing the country's most robust, extensive, and cutting-edge technology platform, securing a leading position in telecommunications and IT services in Mexico. This strategy allows it to offer the broadest range of innovative and world-class solutions focused on infrastructure and processes, which helps clients make the most of their technology investment. Triara, TELMEX's data center division, provides comprehensive services that guarantee the operational continuity of companies by providing cloud solutions, connectivity, storage, managed IT services, and application management. It also maintains the highest quality standards and international certifications.

About Oracle
Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud.


HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE

DZS Wins 2022 Excellence Award from Cloud Computing Magazine

DZS | October 31, 2022

DZS, a global leader in access and optical edge infrastructure and cloud software solutions, today announced that the company won the 2022 Excellence Award from Cloud Computing Magazine for DZS Cloud, a DZS software platform that provides end-to-end visibility and orchestration, automation, network assurance and WiFi analytics for an amazing subscriber experience and extraordinary operational agility. Cloud Computing Magazine, a subsidiary of TMC, awarded this honor to companies that most effectively leveraged cloud computing in their efforts to bring new, differentiated services and solutions to market.

"We continue to see remarkable progress and innovation in the cloud computing industry within the past twelve months, making this a very competitive process. It's our pleasure to recognize such impressive contributions that have been proven to resonate in the cloud marketplace."

Rich Tehrani, CEO of TMC

"Our DZS Cloud platform leverages the latest in AI, analytics and machine learning capabilities to deliver our service provider customers advanced automation along with the ability to refine network operations and service assurance, while simplifying the deployment of new services across multi-vendor networks," said Rene Tio, VP of Cloud Solutions for DZS. "As more service providers embrace openness, DZS Cloud becomes a powerful strategic asset, allowing them to unify services across their diverse access and transport vendor environment and accelerate their on-boarding and IT (OSS/BSS) integration cycles from months to weeks, significantly reducing integration costs. TMC is a stalwart in the telecommunications industry, so we are extremely proud for our DZS Cloud platform to be recognized for the innovation it is delivering in this prestigious category."

DZS Cloud is being recognized industry-wide for its simple, effortless and efficient design, massive cost-saving ability, and the quality of experience it offers. We believe it is the only orchestration and experience management platform purpose-built to manage services across access, mobile and NFV domains. Typical DZS Cloud deployments by service providers have unlocked the following expected benefits:

  • 3 to 4-fold improvement in the speed of delivery of new features and services onboarding
  • Reduction of new vendor application provisioning from 90 days to 3 days
  • 25-35% improvement in network quality-of-experience
  • 30-50% fewer customer service calls and 5-12% reduction in repeat calls
  • 44% reduction in truck dispatches
  • 5-fold reduction in repeated truck rolls
  • 50% increase in remote issue resolution
  • 80% reduction in the number of subscribers experiencing interference
  • 70% reduction in subscriber coverage issues
  • 15-20% improvement in customer retention

When integrated with the three DZS broadband portfolio pillars of Access EDGE, Optical EDGE, and Subscriber EDGE, DZS Cloud produces significant cost savings for service providers, provides complete WiFi connectivity and control for subscribers, and ultimately unlocks the door to transforming today's service provider into tomorrow's experience provider. Further, by expanding the DZS Cloud software suite with Expresse and CloudCheck, DZS strategically rounded out its existing DZS Cloud service orchestration and network automation offerings, distinguishing DZS Cloud as one of the industry's most comprehensive service and consumer-experience-management software platforms for multi-vendor service provider network environments.

All DZS solutions are standards-based, have proven interoperability with leading industry vendors, and can be managed and orchestrated easily alongside other third-party solutions.

About DZS
DZS Inc. is a global leader in access and optical edge infrastructure and cloud edge software solutions. DZS, the DZS logo, and all DZS product names are trademarks of DZS Inc. Other brand and product names are trademarks of their respective holders. Specifications, products, and/or product names are all subject to change.


HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, DATA STORAGE

Quali Simplifies Cloud Infrastructure Management with Latest Evolution of Torque Platform

Quali | October 28, 2022

Quali, a leading provider of Environments as a Service infrastructure automation and management solutions, announced today new capabilities that simplify the management of Infrastructure as Code (IaC), strengthen infrastructure governance, and provide further actionable data on the usage and cost of cloud infrastructure.

Torque delivers on businesses' need to scale with the transparency and controls to ensure governance and accountability without introducing inhibitors to rapid execution. Without any additional effort and with no implementation needed, Torque removes friction and promotes productivity by discovering, analyzing and importing existing IaC assets created by DevOps teams, templatizing those assets into complete application environments, and allowing governed self-service access with unprecedented visibility and control. Torque allows teams to set policies that enforce governance, manage costs, and mitigate risks associated with cloud infrastructure, which enables organizations to respond to business requirements and deliver change faster and with greater agility. Torque operates across all major cloud providers, as well as major infrastructure types like containers, VMs and Kubernetes, on any target infrastructure.

The latest release of Torque delivers key capabilities to simplify complex infrastructure, manage IaC files and integrate with the most widely used technologies to leverage businesses' existing investments. New enhancements to Torque include:

  • Helm drift detection – in addition to the ability to detect infrastructure drift for Terraform files, Torque now adds that capability for Helm Charts, building an additional layer of control to ensure infrastructure consistency throughout the CI/CD pipeline.
  • "BYO" Terraform policies – Torque supports basic Terraform policies, but now allows the import of existing definitions, so users can leverage previous work to define policies.
  • Enhanced cost reporting – cost reporting capabilities have been enhanced to include automatic cost collection for Kubernetes hosts to improve cost visibility and provide business context to resource consumption.
  • Environment view – from a single pane of glass, Torque lists all elements comprising an environment blueprint definition pulled from the user's Git, including visibility into all subcomponents of environment definitions.
  • Audit log integrations – all data collected by Torque can be imported into third-party audit tools such as the ELK (Elasticsearch) stack, promoting greater visibility and accountability, and further strengthening IT teams' ability to enforce compliance.

"The rate at which technology is evolving has created a level of complexity that businesses are struggling to manage. As a result, many are turning to IaC for automation, but environments now consist of a larger number of technologies that need to be governed. Torque is the control plane that manages those technologies, so organizations can operate with more speed, greater scale, lower costs and less risk."

Lior Koriat, CEO of Quali

With Torque, IT leaders understand what infrastructure is being used, when, why and by whom in a consistent, measurable way without any negative impact on development practices and tooling. This ensures that freedoms for software development teams are maintained, while accelerating infrastructure delivery speed and accountability and mitigating risk to support the business' needs to plan, optimize and understand the value delivered by software and infrastructure.
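
To illustrate the general idea behind the drift detection mentioned above (a generic conceptual sketch, not Torque's implementation or API), a tool compares the desired state declared in IaC against the state actually observed in the live environment and reports any divergence:

```python
# Generic illustration of infrastructure drift detection: compare the desired
# state declared in IaC against the observed live state and report divergences.
# This is a conceptual sketch, not Quali Torque's implementation or API.
from typing import Any

def detect_drift(desired: dict[str, Any], observed: dict[str, Any]) -> dict[str, tuple]:
    """Return {key: (desired_value, observed_value)} for every drifted setting."""
    drift = {}
    for key in desired.keys() | observed.keys():
        if desired.get(key) != observed.get(key):
            drift[key] = (desired.get(key), observed.get(key))
    return drift

# Hypothetical Helm-style release values, declared vs. actually deployed.
desired = {"replicaCount": 3, "image.tag": "1.4.2", "resources.limits.cpu": "500m"}
observed = {"replicaCount": 5, "image.tag": "1.4.2", "resources.limits.cpu": "500m",
            "extraEnv": "DEBUG=1"}

for key, (want, have) in detect_drift(desired, observed).items():
    print(f"drift on {key!r}: declared {want!r}, observed {have!r}")
```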
Quali will be demonstrating its Torque platform at KubeCon North America, October 26th through the 28th in Detroit, Michigan. Stop by booth S6 to learn more.

About Quali
Headquartered in Austin, Texas, Quali provides the leading platform for Environments as a Service. Global 2000 enterprises rely on Quali's infrastructure automation and control plane platform to support the continuous delivery of application software at scale. Quali delivers greater control and visibility over infrastructure, so businesses can increase engineering productivity and velocity, understand and manage cloud costs, optimize infrastructure utilization and mitigate risk.
