HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT

Napatech Extends Collaboration with AMD to Data Center Networking with FPGA-based SmartNIC Solutions

Napatech | September 05, 2022 | Read time: 02:50 min

Napatech™, the leading provider of programmable Smart Network Interface Cards (SmartNICs) used for Data Processing Unit (DPU) and Infrastructure Processing Unit (IPU) services in telecom, cloud, enterprise, cybersecurity and financial applications worldwide, today announced an extension of its sales and marketing collaboration initiatives with AMD that will make Napatech's hardware-plus-software SmartNIC solutions available to AMD customers worldwide through both direct engagements and global channel partners.

OEMs, enterprises and data center operators across a wide range of industries are adopting SmartNIC solutions to achieve levels of performance, security, latency and energy efficiency that are unachievable with servers or appliances equipped with traditional "foundational" NICs. As the leading provider of field programmable gate array (FPGA)-based SmartNICs, Napatech leverages the extensive portfolio of AMD Xilinx FPGAs, in combination with its own commercial-grade software suites, to accelerate and offload a variety of workloads in cybersecurity, financial systems, mobile infrastructure, data centers, network appliances and monitoring solutions.

Many companies that can benefit from Napatech's SmartNIC solutions are already customers of the extensive AMD processor portfolio, leveraging EPYC™ processors in servers or Ryzen™ processors in workstations. The expanded collaboration between AMD and Napatech is designed to enable AMD experts and channel partners to propose and architect end-to-end solutions for these companies, spanning their network infrastructure as well as their processor subsystem. Fine-tuning the balance of workloads between the processor and the SmartNIC through true system-level design results in optimized performance, system cost and energy efficiency for the target applications.

End-users who adopt Napatech's SmartNIC solutions benefit from a true "IT" experience, whereby they simply install a card, load a driver, and achieve seamless acceleration of common applications, both commercial and open source, with no need for custom programming at either the application or FPGA level.

As one example of the benefits of Napatech SmartNIC solutions based on AMD FPGAs, a tier-one global cybersecurity OEM that needed to scale the performance of its security appliance leveraged Napatech's Link-Capture™ software running on the NT200 SmartNIC with the AMD Virtex® UltraScale+™ VU9P FPGA. The company achieved industry-leading performance measured in terms of packet rate, lossless throughput, low latency and millions of simultaneous flows. Similarly, an edge data center operator increased the performance of virtual switching for traffic between Virtual Machines (VMs) via the Link-Virtualization™ software running on the NT100 (Virtex UltraScale+ VU5P). This resulted in a doubling of VM density and a 49% reduction in data center OPEX calculated over a five-year period.

"Programmable SmartNICs, including IPUs and DPUs, are a critical part of modern network infrastructure and data center designs. The demand for these innovative technologies is rapidly increasing as services must be distributed as close as possible to the applications they support, without impacting CPU performance," said Bob Laliberte, Senior Analyst and Practice Director at Enterprise Strategy Group (ESG). "We expect that this expanded collaboration between AMD and Napatech will accelerate SmartNIC adoption across a wide range of use cases, given the global reach of the AMD sales team and their channel partners."

"Napatech is the leading merchant supplier of FPGA-based programmable SmartNICs, with an extensive AMD FPGA-based product portfolio targeting networking and data center applications. We are excited to make their solutions available to our customers and prospects via our global sales, marketing and channel organizations."

Sina Soltani, Corporate Vice President of Worldwide Sales at AMD

"GL Communications Inc, a leading telecom test equipment provider, is a proud user of Napatech SmartNICs based on AMD FPGAs to drive feature-rich Protocol Emulators and Analyzers at up to 100 Gbps, permitting wire-speed capture and analysis for 100,000+ simultaneous sessions," said Vijay Kulkarni, CEO at GL Communications. "As an established Napatech customer, we have found that their solutions provide the performance and feature velocity required for our markets, so we are delighted to see the announcement of their new partnership with AMD."

"FPGA-based SmartNICs comprise the majority of SmartNIC ports currently deployed in network infrastructure and data centers," said Napatech CEO Ray Smets. "We are excited to be working closely with AMD as a long-term supplier of the FPGAs used in our SmartNICs, and we look forward to working with them to increase the adoption of these solutions worldwide."

About Napatech
Napatech is the leading supplier of programmable FPGA-based SmartNIC solutions used in telecom, cloud, enterprise, cybersecurity and financial applications worldwide. Through commercial-grade software suites integrated with robust, high-performance hardware, Napatech accelerates telecom, networking and security workloads to deliver best-in-class system-level performance while maximizing the availability of server compute resources for running applications and services.


Related News

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE

NTT DATA Business Solutions Deploys Aviatrix’s Business-Critical Cloud Networking Infrastructure

Aviatrix | January 12, 2023

Aviatrix, a pioneer of Intelligent Cloud Networking, recently announced that NTT DATA Business Solutions has implemented the Aviatrix Cloud Networking Platform as the core technology for its Cloud Network-as-a-Service (NaaS). Given the multicloud constraints of native service providers regarding security, visibility and networking, NTT DATA Business Solutions uses Aviatrix to provide business-critical SAP application services for its enterprise customers.

As part of the NTT DATA Group and a global strategic partner of SAP, NTT DATA Business Solutions provides turnkey implementation, from advisory to managed services, to enhance SAP solutions for enterprises. Its partnership with Aviatrix will help it deliver cloud NaaS to support enterprise SAP deployments.

The Aviatrix Cloud Networking Platform provides service providers with the cutting-edge networking, security, and operational visibility they need to support mission-critical applications, whether private, managing multiple internal corporate lines of business, or public, managing external customers. Cloud consumers frequently view cloud networking as transparent, but infrastructure operations teams know that delivering the application uptime, response times, and agility that business units and customers demand from cloud services requires specialized knowledge and technology.

"Multicloud networking is fundamental to enterprise cloud infrastructure today," said Nauman Mustafa, Vice President of Business Development and Global Service Provider Partners at Aviatrix. "NTT DATA Business Solutions is a globally recognized solution provider, delivering proven solutions for business-critical enterprise SAP applications in the cloud. It has been a pleasure working with our partners at NTT DATA Business Solutions to architect, deploy, and operate their cloud NaaS infrastructure in support of their Intelligent Enterprise SAP services."
(Source: PR Newswire)

About Aviatrix
Aviatrix, a pioneer of Intelligent Cloud Networking, provides multicloud networking software that delivers a simplified, enterprise-grade model for cloud service providers. Its solutions optimize business-critical application performance, availability, cost and security. Combined with the Aviatrix Certified Engineer (ACE) program, the industry's first and only multicloud networking certification, they enable innovative companies to transform their business by upgrading their cloud networking.

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT

CoreWeave Among First Cloud Providers to Offer NVIDIA HGX H100 Supercomputers Set to Transform AI Landscape

CoreWeave | November 07, 2022

CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform. CoreWeave was the first Elite Cloud Service Provider for Compute in the NVIDIA Partner Network (NPN) and is also among the NPN's Elite Cloud Service Providers for Visualization.

"This validates what we're building and where we're heading," said Michael Intrator, CoreWeave co-founder and CEO. "CoreWeave's success will continue to be driven by our commitment to making GPU-accelerated compute available to startup and enterprise clients alike. Investing in the NVIDIA HGX H100 platform allows us to expand that commitment, and our pricing model makes us the ideal partner for any company looking to run large-scale, GPU-accelerated AI workloads."

NVIDIA's ecosystem and platform are the industry standard for AI. The NVIDIA HGX H100 platform enables a leap forward in the breadth and scope of AI work businesses can now tackle: up to seven times better efficiency in high-performance computing (HPC) applications, up to nine times faster AI training on the largest models and up to 30 times faster AI inference than the NVIDIA HGX A100. That speed, combined with the lowest NVIDIA GPUDirect network latency on the market via the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to "days or hours instead of months." Such technology is critical now that AI has permeated every industry.

"AI and HPC workloads require a powerful infrastructure that delivers cost-effective performance and scale to meet the needs of today's most demanding workloads and applications. CoreWeave's new offering of instances featuring NVIDIA HGX H100 supercomputers will give customers the flexibility and performance needed to power large-scale HPC applications."

Dave Salvator, Director of Product Marketing at NVIDIA

In the same way that drivers of fuel-efficient cars save money on gas, CoreWeave clients spend between 50% and 80% less on compute resources. The company's performance-adjusted cost structure is two-fold. First, clients only pay for the HPC resources they use, and CoreWeave cloud instances are highly configurable. Second, CoreWeave's Kubernetes-native infrastructure and networking architecture produce performance advantages, including industry-leading spin-up times and responsive auto-scaling capabilities that allow clients to use compute more efficiently. CoreWeave competitors charge for idle compute capacity to maintain access to GPUs and use legacy networking products that degrade performance at scale.

"CoreWeave's infrastructure is purpose-built for large-scale GPU-accelerated workloads — we specialize in serving the most demanding AI and machine learning applications," said Brian Venturo, CoreWeave co-founder and chief technology officer. "We empower our clients to create world-changing technology by delivering practical access to high-performance compute at scale, on top of the industry's fastest and most flexible infrastructure."

CoreWeave leverages a range of open-source Kubernetes projects, integrates with best-in-class technologies such as Determined.AI and offers support for open-source AI models including Stable Diffusion, GPT-NeoX-20B and BLOOM as part of its mission to lead the world in AI and machine learning infrastructure. Founded in 2017, CoreWeave provides fast, flexible, and highly available GPU compute resources that are up to 35 times faster and 80% less expensive than large, generalized public clouds. An Elite Cloud Service Provider for Compute and Visualization in the NPN, CoreWeave offers cloud services for compute-intensive projects, including AI, machine learning, visual effects and rendering, batch processing and pixel streaming. CoreWeave's infrastructure is purpose-built for burstable workloads, with the ability to scale up or down in seconds.

About CoreWeave
CoreWeave is a specialized cloud provider, delivering massive-scale GPU compute resources on top of the industry's fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute-intensive use cases — digital assets, VFX and rendering, machine learning and AI, batch processing and pixel streaming — that are up to 35 times faster and 80% less expensive than the large, generalized public clouds.

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION STORAGE

Rambus Launches New 6400 MT/s DDR5 Registering Clock Driver

Rambus | February 02, 2023

On February 1, 2023, Rambus Inc., a provider of chips and silicon IP that make data faster and safer, announced that its new 6400 MT/s DDR5 Registering Clock Driver (RCD) is now available and sampling to major DDR5 memory module (RDIMM) manufacturers.

The Rambus Gen3 6400 MT/s DDR5 RCD provides a 33% increase in data rate and bandwidth over Gen1 4800 MT/s solutions, enabling data center servers to benefit from a new level of main-memory performance. In addition, it offers streamlined timing parameters for increased RDIMM margins while providing industry-leading latency and power.

Rambus DDR5 memory interface chips, consisting of the RCD, Temperature Sensors, and Serial Presence Detect (SPD) Hub, are essential for cutting-edge servers to achieve a new level of performance. DDR5 memory incorporates more intelligence into the RDIMMs, allowing over 2x the data rate and 4x the capacity of DDR4 RDIMMs while simultaneously enhancing memory and power efficiency.

With over 30 years of experience in high-performance memory, Rambus is highly regarded for its signal integrity (SI) and power integrity (PI) expertise. This expertise enables its DDR5 memory interface chips to transmit command/address and clock signals from the host memory controller to the RDIMMs with superior signal integrity.

Sean Fan, Chief Operating Officer at Rambus, explained, "Data center workloads have an insatiable thirst for greater memory bandwidth and capacity, and our mission is to advance the performance of server memory solutions that meet this need for each new server platform generation." He added, "We were first in the industry to 5600 MT/s, and now we have raised the bar with our Gen3 DDR5 RCD capable of 6400 MT/s to support a new generation of RDIMMs for server main memory."
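The quoted uplift can be sanity-checked with simple arithmetic. The sketch below assumes the standard 64-bit (8-byte) data path of a DDR5 DIMM, which is a JEDEC convention rather than a figure from the announcement:

```python
# Sanity-check the quoted DDR5 RCD data-rate uplift.
# Assumption (not from the announcement): a DDR5 RDIMM carries
# 64 data bits (two 32-bit subchannels), i.e. 8 bytes per transfer.

BYTES_PER_TRANSFER = 8  # 64-bit data path, ECC bits excluded

def peak_bandwidth_gbps(mega_transfers_per_s: float) -> float:
    """Peak module bandwidth in GB/s for a given MT/s transfer rate."""
    return mega_transfers_per_s * BYTES_PER_TRANSFER / 1000

gen1 = peak_bandwidth_gbps(4800)   # Gen1 RCD: 38.4 GB/s
gen3 = peak_bandwidth_gbps(6400)   # Gen3 RCD: 51.2 GB/s
uplift = (6400 / 4800 - 1) * 100   # ~33.3%, matching the "33% increase" claim

print(f"Gen1: {gen1:.1f} GB/s, Gen3: {gen3:.1f} GB/s, uplift: {uplift:.0f}%")
```

The 33% figure in the release is simply the ratio of the two transfer rates; the absolute bandwidth numbers scale by the same factor.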
(Source – Business Wire)

About Rambus
Based in San Jose, California, Rambus provides innovative software, hardware, and services that drive technological advancements from the mobile edge to the data center. The company's architecture licenses, chips, IP cores, software, and services positively affect the modern world, from memory and interfaces to emerging technologies. Its partners include foundries, prominent chip and system designers, and industry service providers. Integrated into countless devices and systems, the firm's products and technologies secure and power applications such as Internet of Things (IoT) security, Big Data, and many others.

Read More