APPLICATION INFRASTRUCTURE

Clever Cloud Selects French Kalray for Its High-performance Storage Solutions to Optimize Its New Data Center

Clever Cloud | July 06, 2022 | Read time: 3 min

Clever Cloud, a European provider of automation and optimization solutions for website and application hosting, and Kalray, a leading provider of a new generation of processors and acceleration cards specialized in Intelligent Data Processing from Cloud to Edge, announce a partnership to deploy next-generation storage solutions.

Clever Cloud, which is currently setting up in a third Parisian data center and strengthening its technical platform, has chosen to partner with French company Kalray, whose Flashbox™, a new-generation NVMe storage array that can accommodate up to 24 PCI Express (NVMe) SSDs, will serve as its high-performance storage solution.

PERFORMANCE, RESILIENCE AND OPENNESS

Operating without a host CPU (Central Processing Unit), the Flashbox™ leverages K200-LP™ accelerator cards featuring the Coolidge™ DPU (Data Processing Unit), which is built on the unique, patented MPPA® (Massively Parallel Processor Array) architecture developed by Kalray. The chip harnesses 80 parallel cores, enabling data centers adopting NVMe SSD storage to remove bottlenecks and improve performance while consuming minimal power and resources.

A concept unique on the market today, Kalray's Flashbox™ combines ultra-fast interfaces with low power consumption. It can deliver up to 12 million IOPS through a 2x 100 Gb/s network interface and builds on market standards such as the Storage Performance Development Kit (SPDK), NVMe/TCP and RDMA over Converged Ethernet (RoCE v1/v2). It will eventually be offered by storage vendors using Kalray cards.
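For context on how such an array is consumed, the sketch below shows how a Linux host would typically discover and attach an NVMe/TCP target using the standard nvme-cli tool, driven here from Python. The address, port, and NQN are placeholders for illustration, not values published by Kalray.

```python
# Minimal sketch: attaching an NVMe/TCP target from a Linux initiator.
# Requires the nvme-cli package and root privileges; all target
# parameters below are placeholders, not Kalray's actual values.
import subprocess

TARGET_ADDR = "192.0.2.10"                    # hypothetical array address
TARGET_PORT = "4420"                          # default NVMe/TCP port
TARGET_NQN = "nqn.2022-07.example:flashbox"   # hypothetical subsystem NQN

def discover_and_connect() -> None:
    # List the NVMe subsystems the target exposes over TCP.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )
    # Attach one subsystem; it then shows up as a local /dev/nvmeXnY device.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-n", TARGET_NQN,
         "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )

if __name__ == "__main__":
    discover_and_connect()
```

Once connected, the remote SSDs behave like local NVMe namespaces, which is what lets an array of this kind serve block storage to many hosts over a standard Ethernet fabric.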

"Kalray's solution meets all our needs and the criteria that are important to us, whether in terms of sovereignty, openness, resilience, energy efficiency or performance," said Quentin Adam, CEO of Clever Cloud.

The Flashbox™ is currently being tested in Clever Cloud's infrastructure and will join the company's new Paris data center, where it will work alongside the latest generation of compute servers Clever Cloud is installing there. It will enable Clever Cloud to offer its customers greater flexibility in managing storage and to support the launch of new offerings.

This is part of the development of Clever Cloud's Storage API, which allows various storage sources, whether local, remote or provided by third-party cloud services, to be simply allocated to the virtual machines that host users' applications. This open, multi-cloud approach is central to Clever Cloud's success.
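As a purely hypothetical illustration (Clever Cloud has not published this exact interface), a storage API of this kind might expose a call along the following lines; the endpoint, token, and field names are invented for the sketch.

```python
# Hypothetical sketch only: the endpoint, payload, and response shape are
# invented to illustrate allocating a storage source to an application VM,
# not Clever Cloud's actual Storage API.
import requests

API_BASE = "https://api.example.com/v1"   # placeholder endpoint
API_TOKEN = "..."                         # credential (elided)

def attach_volume(app_id: str, source: str, size_gb: int) -> dict:
    """Request a volume from a given source ('local', 'remote', or a
    third-party cloud provider) and attach it to the VM running app_id."""
    resp = requests.post(
        f"{API_BASE}/applications/{app_id}/volumes",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"source": source, "size_gb": size_gb},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example call: attach a 100 GB remote volume to an application VM.
# attach_volume("my-app", source="remote", size_gb=100)
```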

CLEVER CLOUD AND KALRAY: FOR A “MADE IN FRANCE” R&D

Clever Cloud and Kalray are also collaborating closely on several projects that use Kalray's K200-LP™ cards as accelerators, offloading Clever Cloud's network-layer functions as well as the computations that protect and add redundancy to data stored on Clever Cloud's servers.
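As a generic illustration of the kind of redundancy computation that benefits from such offload (this is not Kalray's or Clever Cloud's implementation), the sketch below shows single-parity protection, the simplest member of the RAID/erasure-coding family, in Python.

```python
# Generic single-parity (RAID-5-style) sketch: XOR the data blocks to get
# a parity block; any one lost block can be rebuilt from the others.
def xor_parity(blocks: list[bytes]) -> bytes:
    parity = bytearray(len(blocks[0]))  # all blocks assumed equal length
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
parity = xor_parity(data)

# Simulate losing data[1] and rebuilding it from the parity and survivors.
rebuilt = xor_parity([parity, data[0], data[2]])
assert rebuilt == data[1]
```

Offloading this per-byte arithmetic to an accelerator card frees the server's CPUs for application work, which is the point of the offload projects described above.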

"The collaboration with Kalray is not just about two French companies that have an interest in working together, but about two projects that have, at their heart, the creation of value through innovation, where the partnership is also between the technical teams."

Steven Le Roux, CTO of Clever Cloud

“We are very pleased to be working with Clever Cloud, with whom we share common values, both in terms of innovation and the philosophy of our solutions, which are designed to be easy to use, flexible, scalable and open.

“Kalray has a unique position in Europe in this market of very high-performance, low-power processors. We are proud to be able to build innovative French storage solutions with Clever Cloud and thus contribute to the technological sovereignty of France and Europe,” said Eric Baissus, President and CEO of Kalray.

The Flashbox™ will be showcased at the upcoming Flash Memory Summit, a major event in the field, from August 2 to 4, 2022, at the Santa Clara Convention Center in Santa Clara, California, USA.

ABOUT CLEVER CLOUD
Founded in 2010, Clever Cloud is a company based in Nantes, France, specializing in IT automation. It creates and provides the software building blocks necessary for the flexible deployment of applications on self-service PaaS architectures. Its clients include major names such as Airbus, AXA, Caisse d'Épargne, Cegid, Docaposte, MAIF, McDonald's, Solocal, SNCF, and TBWA.

ABOUT KALRAY
Kalray is a fabless semiconductor company and a leading provider of a new class of processors specialized in Intelligent Data Processing from Cloud to Edge. Kalray’s team has created and developed its leading-edge technology and products to help its clients maximize the market possibilities presented by a world dominated by massive, disparate and pervasive data.

Spotlight

The forces of cloud, mobile and the Internet of Things are combining to transform enterprise networks. Information technology departments are now faced with the challenge of providing connectivity to people, places and things anywhere, while ensuring visibility, security and control. Existing network management paradigms, developed decades ago, were designed around fixed branch networks accessing applications within private data centers.


Other News
HYPER-CONVERGED INFRASTRUCTURE

Inspur Announces MLPerf v2.0 Results for AI Servers

Inspur | July 04, 2022

The open engineering consortium MLCommons released the latest MLPerf Training v2.0 results, with Inspur AI servers leading in closed division single-node performance. MLPerf is the world’s most influential benchmark for AI performance. It is managed by MLCommons, whose members include more than 50 global leading AI companies and top academic institutions, including Inspur Information, Google, Facebook, NVIDIA, Intel, Harvard University, Stanford University, and the University of California, Berkeley. MLPerf AI Training benchmarks are held twice a year to track improvements in computing performance and provide authoritative data guidance for users.

The latest MLPerf Training v2.0 attracted 21 global manufacturers and research institutions, including Inspur Information, Google, NVIDIA, Baidu, Intel-Habana, and Graphcore. There were 264 submissions, a 50% increase over the previous round. The eight AI benchmarks cover current mainstream AI usage scenarios: image classification with ResNet, medical image segmentation with 3D U-Net, light-weight object detection with RetinaNet, heavy-weight object detection with Mask R-CNN, speech recognition with RNN-T, natural language processing with BERT, recommendation with DLRM, and reinforcement learning with MiniGo.

Among the closed division benchmarks for single-node systems, Inspur Information with its high-end AI servers was the top performer in natural language processing with BERT, recommendation with DLRM, and speech recognition with RNN-T, winning the most titles among single-node system submitters. For mainstream high-end AI servers equipped with eight NVIDIA A100 Tensor Core GPUs, Inspur Information AI servers were top ranked in five tasks (BERT, DLRM, RNN-T, ResNet and Mask R-CNN).

Continuing to lead in AI computing performance

Inspur AI servers continue to achieve AI performance breakthroughs through comprehensive software and hardware optimization. Compared to the MLPerf v0.5 results in 2018, Inspur AI servers showed significant performance improvements of up to 789% for typical 8-GPU server models.

The leading performance of Inspur AI servers in MLPerf is a result of its outstanding design innovation and full-stack optimization capabilities for AI. Addressing the bottleneck of intensive I/O transmission in AI training, the PCIe retimer-free design of Inspur AI servers allows high-speed interconnection between CPUs and GPUs to reduce communication delays. For high-load, multi-GPU collaborative task scheduling, data transmission between NUMA nodes and GPUs is optimized so that data I/O in training tasks stays at peak performance. In terms of heat dissipation, Inspur Information leads in deploying eight 500W high-end NVIDIA A100 Tensor Core GPUs in a 4U space, with support for both air cooling and liquid cooling. Meanwhile, Inspur AI servers continue to optimize pre-training data processing performance and adopt combined optimization strategies such as hyperparameter and NCCL parameter tuning, along with the many enhancements provided by the NVIDIA AI software stack, to maximize AI model training performance (a generic sketch of such NCCL tuning follows this article).

Greatly improving Transformer training performance

Pre-trained massive models based on the Transformer neural network architecture have led to the development of a new generation of AI algorithms. The BERT model in the MLPerf benchmarks is based on the Transformer architecture, whose concise and stackable design makes it possible to train massive models with huge parameter counts. This has driven major advances in large-model algorithms, but it also imposes higher requirements on AI systems for processing performance, communication interconnection, I/O performance, parallel extension, topology and heat dissipation.

In the BERT benchmark, Inspur AI servers further improved BERT training performance by optimizing data preprocessing, improving dense-parameter communication between NVIDIA GPUs, and automatically optimizing hyperparameters. Inspur Information AI servers can complete training of the roughly 330-million-parameter BERT model in just 15.869 minutes using 2,850,176 samples from the Wikipedia data set, a performance improvement of 309% over the top result of 49.01 minutes in Training v0.7. With this, Inspur AI servers have won the MLPerf Training BERT benchmark for the third consecutive time.

Inspur Information’s two AI servers with top scores in MLPerf Training v2.0 are the NF5488A5 and the NF5688M6. The NF5488A5 is one of the first servers in the world to support eight NVIDIA A100 Tensor Core GPUs with NVIDIA NVLink technology and two AMD Milan CPUs in a 4U space; it supports both liquid cooling and air cooling and has won a total of 40 MLPerf titles. The NF5688M6 is a scalable AI server designed for large-scale data center optimization; it supports eight NVIDIA A100 Tensor Core GPUs and two Intel Ice Lake CPUs, with up to 13 PCIe Gen4 I/O slots, and has won a total of 25 MLPerf titles.

About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions, and the world’s second-largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges.
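As a generic aside on the NCCL parameter tuning mentioned above, the sketch below shows how such knobs are typically applied to a PyTorch DistributedDataParallel job. The network interface name and algorithm choice are arbitrary examples, not Inspur's actual settings.

```python
# Generic sketch of NCCL tuning for a multi-GPU PyTorch job; values are
# illustrative, not Inspur's configuration. Launch with:
#   torchrun --nproc_per_node=8 this_script.py
import os
import torch
import torch.distributed as dist

# NCCL is configured through environment variables set before init.
os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")  # NIC to use (assumption)
os.environ.setdefault("NCCL_ALGO", "Ring")           # collective algorithm

def main() -> None:
    dist.init_process_group(backend="nccl")  # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = torch.nn.parallel.DistributedDataParallel(
        model, device_ids=[local_rank]
    )
    # ... training loop elided ...
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```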

Read More

DATA STORAGE

DartPoints® to Provide the University of South Carolina with Custom Software-Defined Data Center Solution

DartPoints | July 14, 2022

DartPoints®, the leading edge digital infrastructure provider, announces today that it has formed an innovative technology partnership with the University of South Carolina. DartPoints will provide a custom Software-Defined Data Center (SDDC) solution, which replaces the university's current data center.

DartPoints' custom SDDC cloud solution will significantly improve the university's IT agility. It adheres to UofSC's compliance requirements while providing the multi-tenancy of a public cloud infrastructure, and it enables UofSC to reduce capital expenditures while improving functionality, reliability, and security.

"We needed a reputable provider that was readily available to ensure our team always has access to the critical data that keeps our campuses running across the state," said Dan Schumacher, executive director of infrastructure services at UofSC. "DartPoints is the ideal partner for our university and its solution is easy to use, highly configurable, and provides the comprehensive services we require."

Schumacher said moving information into a cloud-based data center will improve the university's disaster recovery capabilities and protect critical applications in the event of a catastrophic event. In addition, hosting compute and file share services in the cloud improves efficiency and resilience: response times shrink because there is no need to wait on shipping or contend with the equipment shortages that have persisted since COVID-19.

The University of South Carolina is leading the way for cloud-based data centers, as few universities have fully adopted the model. Doug Foster, vice president for information technology and chief information officer for UofSC, said, "We are committed to the continuous improvement of our services to best meet the needs of our Gamecock community. This is one example of how we offer cutting-edge IT services that evolve with the ever-changing landscape. I am proud to be a part of this adventure with this great university and a talented group of employees."

An SDDC architecture helps organizations accelerate delivery of technology services while retaining control over IT, minimizing complexity, and reducing costs. It is an ideal solution for government agencies, hospitals, higher education institutions, and any organization that needs to respond quickly to demands for IT resources.

"The university had a number of factors that needed to be addressed, including latency, data location, cost, and technical expertise. We were able to work with UofSC's team to develop a customized solution that addresses all of their needs, and we believe that similar solutions can help other large institutions."

Brad Alexander, CTO of DartPoints

DartPoints has been providing multi-tenant cloud, network connectivity, and managed services in South Carolina for over a decade from its four active data centers in the state, located in Columbia, Greenville, North Charleston, and Spartanburg. DartPoints offers unmatched support and technical expertise backed by tenured and continuously upskilled technicians.

About The University of South Carolina

The University of South Carolina is the flagship institution in the state. The public university has seven satellite campuses in addition to the main campus, which is located in Columbia, the state capital. Founded in 1801, the university is a Research 1 institution and offers more than 300 programs of study, from bachelor's to doctorate. The university enrolls approximately 35,300 students and awards more than 9,000 degrees each year. The Division of Information Technology works to help fulfill the academic mission of the University of South Carolina by providing technology services that maximize productivity, increase collaboration, and improve service. We strive to provide repeatable, reliable, and consistent IT services to our constituents, who span the eight-campus system. More than 170 highly skilled individuals are employed by the division.

About DartPoints

DartPoints is the leading digital infrastructure provider enabling next-generation applications at the edge. By weaving together cloud, interconnection, colocation, and managed services, DartPoints enables edge ecosystems for enterprises, carriers, and cloud and content providers. DartPoints is building tomorrow's distributed digital infrastructure while serving today's cloud and colocation needs, helping to bridge the digital divide.

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT

CEVA Accelerates 5G Infrastructure Rollout with Industry's First Baseband Platform IP for 5G RAN ASICs

CEVA, Inc. | September 21, 2022

CEVA, Inc., the leading licensor of wireless connectivity and smart sensing technologies and co-creation solutions, today introduced PentaG-RAN™, the industry's first 5G baseband platform IP for ASICs targeting cellular infrastructure in both base station and radio configurations, including distributed units (DU) and remote radio units (RRU), from small cell to Massive Multiple-Input, Multiple-Output (mMIMO). This heterogeneous baseband compute platform has been designed to significantly reduce the entry barriers for companies wishing to break into the new market opportunities available in Open RAN (O-RAN) equipment.

The burgeoning market for 5G base station and radio ASICs is fueled by a digital transformation that continuously calls for higher cellular bandwidth at lower latency. More recently, network disaggregation driven by the Open RAN initiative and the push for mMIMO have drawn the attention of silicon vendors worldwide to the cellular infrastructure market. Global technology intelligence firm ABI Research forecasts that shipments of 5G outdoor mobile infrastructure equipment, including mMIMO radios, O-RAN radios, small cells and virtual basebands, will grow at a CAGR of 17% from 2022 to 2027, surpassing 23 million units annually by 2027. The inability of current platforms to scale effectively to mMIMO dimensions or to support new O-RAN use cases where power and cost are significant factors has disrupted the market and opened the door for new entrants.

With this in mind, PentaG-RAN provides a groundbreaking platform for a complete L1 PHY (physical layer 1) solution with optimal hardware/software partitioning, incorporating powerful vector DSPs, PHY control DSPs, flexible 5G hardware accelerators and other specialized components required for modem processing chains. It delivers up to 10X savings in power and area compared to available FPGA and commercial-off-the-shelf (COTS) CPU-based alternatives. To further reduce design risk and expedite ASIC design, CEVA's co-creation services are available to PentaG-RAN customers for the development of their entire PHY subsystem, up to and including the design of the complete chip.

Guy Keshet, Vice President and General Manager of the Mobile Broadband Business Unit at CEVA, stated: "CEVA has been at the leading edge of DSP technology in cellular infrastructure for the last decade, collaborating with leading Tier-1 OEMs to deliver the most advanced DSP architectures deployed in 5G RAN today. Our PentaG-RAN platform was formed out of this extensive experience and brings the next level platform solution to help drive the next generation of 5G chipsets for mMIMO systems and new O-RAN use cases."

The PentaG-RAN platform addresses both base station and radio compute configurations:

Base station, supporting Macro DU/vDU and Small Cell: a scalable compute platform for L1 inline DU/vDU acceleration. This configuration handles the complete acceleration of the main processing chains (data and control), for both symbol-to-bit domains (including FEC) and frequency processing (including FFT and equalization). Advanced algorithms including channel estimation and MMSE calculation are mapped to the CEVA-XC DSP for optimal processing and power efficiency (see the sketch at the end of this article). It includes a powerful resource pool for accelerating COTS platforms and supports high-PHY and low-PHY 7.2x split partitioning based on Open RAN specifications.

Radio, supporting Open RAN Low-PHY, Massive MIMO and Beamformer: a scalable compute platform for Massive MIMO beamforming processing on the RRU side, including the beamformer and beamforming weight calculation. This configuration offers unmatched compute and PPA efficiency, giving customers a competitive path to cost reduction and integration options with transceivers compared to FPGA and other COTS solutions. It supports a range of use cases from Small Cell and Macro to Massive MIMO 32TR, 64TR and beyond, for both Sub-6 and mmWave, and supports O-RAN 7.2x splits.

To further expedite time-to-market for PentaG-RAN licensees, the platform is supported by CEVA's new Virtual Platform Simulator (VPS), a unified System-C modeling environment that allows pre-silicon software development, solution dimensioning, architecture proof-of-concept, and modeling of all platform components. The VPS also includes reference software implementations for the main processing chains, as well as beamforming use cases.

About CEVA, Inc.

CEVA is the leading licensor of wireless connectivity and smart sensing technologies and co-creation solutions for a smarter, safer, connected world. We provide Digital Signal Processors, AI engines, wireless platforms, cryptography cores and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence. These technologies are offered in combination with our Intrinsix IP integration services, helping our customers address their most complex and time-critical integrated circuit design projects. Leveraging our technologies and chip design skills, many of the world's leading semiconductor companies, system companies and OEMs create power-efficient, intelligent, secure and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial, aerospace & defense and IoT.
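As a side note on the MMSE calculation mapped to the CEVA-XC DSP above, the NumPy sketch below shows the underlying math only; the dimensions are arbitrary examples and this is not CEVA's implementation.

```python
# MMSE equalization sketch: recover transmit symbols x from y = H x + n
# using x_hat = (H^H H + sigma^2 I)^-1 H^H y. Dimensions are illustrative.
import numpy as np

def mmse_equalize(H: np.ndarray, y: np.ndarray, noise_var: float) -> np.ndarray:
    n_tx = H.shape[1]
    # Solve (H^H H + sigma^2 I) G = H^H rather than forming an explicit inverse.
    G = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T)
    return G @ y

rng = np.random.default_rng(0)
n_rx, n_tx = 64, 8                      # e.g. 64 antennas, 8 spatial layers
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n_tx)  # QPSK
noise = 0.05 * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ x + noise
x_hat = mmse_equalize(H, y, noise_var=2 * 0.05**2)  # variance of complex noise
```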

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE

Web3 Decentralized Storage Company W3 Storage Lab Changes Name to Fog Works

Fog Works | September 23, 2022

W3 Storage Lab announced today it has changed its name to Fog Works. The new name better reflects the company’s positioning, has greater brand-building potential, and is more indicative of the company’s vision of being a key builder of Web3 infrastructure, applications, and devices.

The name Fog Works is derived from the term fog computing, which was coined by Cisco. Fog computing is an extension of cloud computing: a network architecture in which computing and storage are mostly decentralized and pushed to the edge of the network, while a cloud still exists in the center. Web3 is a fully decentralized, blockchain-enabled iteration of the internet. By being entirely decentralized, Web3 is essentially the ultimate fog computing architecture, with no cloud in the center.

“Our goal is to make Web3 a reality for everyday consumers. Because we’re making Web3 work for everyone, the name Fog Works really encapsulates our vision. We’re excited to build a brand around it.”

Xinglu Lin, CEO of Fog Works

Fog Works has co-developed a next-generation distributed storage ecosystem based on the public blockchain CYFS and the Datamall Coin. CYFS is a next-generation protocol that re-invents basic Web protocols (TCP/IP, DNS, and HTTP) to create the infrastructure necessary for the complete decentralization of Web3. It has been in development for over seven years, practically eliminates latency in file retrieval (a major problem with current decentralized storage solutions), and has infinite scalability. Fog Works is developing a series of killer applications for both consumers and enterprises that will use both CYFS and the Datamall Coin, which facilitates a more efficient market for decentralized storage.

To further the development of decentralized applications (dApps) on CYFS, Fog Works is co-sponsoring the CodeDAO Web3 Hackathon. CodeDAO is the world’s first fully decentralized code hosting platform. During the hackathon, developers will compete for prizes by building dApps on CYFS; teams will have seven days to develop their projects. The CodeDAO Hackathon runs October 15, 2022, to October 21, 2022. For more information, please visit https://codedao.ai/hackathon.html.

About Fog Works

Fog Works, formerly known as W3 Storage Lab, is a Web3 decentralized application company headquartered in Sunnyvale, CA with operations around the world. Its mission is to leverage the power of Web3 to help people manage, protect, and control their own data. Fog Works is led by an executive team with a highly unique blend of P2P networking experience, blockchain expertise, and entrepreneurship. It is funded by Draper Dragon Fund, OKX Blockdream Ventures, Lingfeng Capital, and other investors.
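As a generic illustration of content addressing, the idea underlying decentralized storage networks of this kind (this is not the CYFS protocol itself), consider the minimal Python sketch below.

```python
# Content addressing in miniature: data is stored and retrieved by the
# hash of its content, so any peer can serve it and the ID verifies it.
# The in-memory dict stands in for a network of peers.
import hashlib

store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()  # content ID derived from the data
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    # The ID doubles as an integrity check on whatever peer returned the data.
    assert hashlib.sha256(data).hexdigest() == cid
    return data

cid = put(b"hello, web3")
assert get(cid) == b"hello, web3"
```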

Read More
