IT SYSTEMS MANAGEMENT
Oak9 | July 04, 2022
According to Oak9, a developer-first infrastructure-as-code (IaC) security company, businesses are starting to treat security the same way they treat code. For instance, technologies that express governance or policy concepts as code, such as HashiCorp Sentinel, are already available. Oak9's platform is powered by its patented Security as Code (SaC) technology, which applies the appropriate security controls, in accordance with SaC blueprints, to secure a cloud application's architecture in a risk-appropriate way. SaC is designed to assess changes to cloud-native infrastructure.
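In general, security- or policy-as-code means expressing security requirements as machine-checkable rules that are evaluated against infrastructure definitions before deployment. The sketch below is a minimal illustration of that idea only; the resource schema and the rule are assumptions for the example, not Oak9's actual blueprint format or engine.

```python
# Minimal security-as-code sketch: evaluate a declarative rule against
# IaC-style resource definitions. The resource schema and the rule are
# illustrative assumptions, not Oak9's actual blueprint format.

def check_public_access(resource: dict) -> list[str]:
    """Return a list of findings for a single resource definition."""
    findings = []
    if resource.get("type") == "storage_bucket":
        acl = resource.get("properties", {}).get("acl", "private")
        if acl in ("public-read", "public-read-write"):
            findings.append(
                f"{resource['name']}: bucket ACL '{acl}' allows public access"
            )
    return findings

resources = [
    {"type": "storage_bucket", "name": "logs", "properties": {"acl": "private"}},
    {"type": "storage_bucket", "name": "assets", "properties": {"acl": "public-read"}},
]

for r in resources:
    for finding in check_public_access(r):
        print(finding)
```

Because the rule is data-driven rather than hand-audited, the same check can run automatically on every change to the infrastructure definition, which is the property tools in this space rely on.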
According to the company, organizations today use a wide variety of tools and technologies, which makes multicloud and multi-IaC-language environments increasingly common. Oak9's technology-agnostic approach reduces the need to manage security separately across each of these technologies.
To let developers keep using their preferred IaC languages, clouds, multi-cloud setups, and workflows, the company says it integrates with integrated development environments (IDEs), code repositories, continuous integration and continuous deployment (CI/CD) pipelines, and ChatOps tools.
The market's adoption of IaC has accelerated, making the security of cloud applications a critical need that Oak9 can address, according to Alex Brown of venture capital firm HPA, an existing Oak9 investor.
Oak9 says its technology accelerates the delivery of cloud-native applications while providing the security to identify and remediate weaknesses. The platform is designed to tell users where security flaws exist in a company's cloud, how critical they are, why they exist, and how to fix them. Organizations can then use the tool to apply the fix across their entire cloud infrastructure.
To strengthen security in IaC and cloud environments, Oak9 recently announced $8 million in new funding. The company, which recently introduced an IaC remediation capability, will use part of the money to expand its free community version and launch a new generation of Security as Code offerings.
Oak9 has now raised $14 million within the last 15 months. Menlo Ventures led the most recent round, with HPA increasing its existing investment in Oak9.
Inspur | July 04, 2022
The open engineering consortium MLCommons released the latest MLPerf Training v2.0 results, with Inspur AI servers leading in closed division single-node performance.
MLPerf is the world’s most influential benchmark for AI performance. It is managed by MLCommons, with members from more than 50 global leading AI companies and top academic institutions, including Inspur Information, Google, Facebook, NVIDIA, Intel, Harvard University, Stanford University, and the University of California, Berkeley. MLPerf AI Training benchmarks are held twice a year to track improvements in computing performance and provide authoritative data guidance for users.
The latest MLPerf Training v2.0 attracted 21 global manufacturers and research institutions, including Inspur Information, Google, NVIDIA, Baidu, Intel-Habana, and Graphcore. There were 264 submissions, a 50% increase over the previous round. The eight AI benchmarks cover current mainstream AI usage scenarios, including image classification with ResNet, medical image segmentation with 3D U-Net, light-weight object detection with RetinaNet, heavy-weight object detection with Mask R-CNN, speech recognition with RNN-T, natural language processing with BERT, recommendation with DLRM, and reinforcement learning with MiniGo.
Among the closed division benchmarks for single-node systems, Inspur Information with its high-end AI servers was the top performer in natural language processing with BERT, recommendation with DLRM, and speech recognition with RNN-T. It won the most titles among single-node system submitters. For mainstream high-end AI servers equipped with eight NVIDIA A100 Tensor Core GPUs, Inspur Information AI servers were top ranked in five tasks (BERT, DLRM, RNN-T, ResNet and Mask R-CNN).
Continuing to lead in AI computing performance
Inspur AI servers continue to achieve AI performance breakthroughs through comprehensive software and hardware optimization. Compared to the MLPerf v0.5 results in 2018, Inspur AI servers showed significant performance improvements of up to 789% for typical 8-GPU server models.
The leading performance of Inspur AI servers in MLPerf is a result of its outstanding design innovation and full-stack optimization capabilities for AI. Focusing on the bottleneck of intensive I/O transmission in AI training, the PCIe retimer-free design of Inspur AI servers allows for high-speed interconnection between CPUs and GPUs with reduced communication delays. For high-load, multi-GPU collaborative task scheduling, data transmission between NUMA nodes and GPUs is optimized to ensure that data I/O in training tasks runs at peak performance. In terms of heat dissipation, Inspur Information takes the lead in deploying eight 500W high-end NVIDIA A100 Tensor Core GPUs in a 4U space, with support for both air cooling and liquid cooling. Meanwhile, Inspur AI servers continue to optimize pre-training data processing performance, and adopt combined optimization strategies such as hyperparameter and NCCL parameter tuning, along with the many enhancements provided by the NVIDIA AI software stack, to maximize AI model training performance.
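NCCL parameter tuning of the kind described above is typically applied through environment variables that NCCL reads when the training job starts. As a minimal sketch: the variable names below are real NCCL knobs, but the values are illustrative assumptions, not Inspur's published configuration.

```python
# Sketch: setting NCCL tuning parameters via environment variables before
# launching a multi-GPU training job. The variable names are real NCCL
# knobs; the values are illustrative, not Inspur's actual configuration.
import os

nccl_tuning = {
    "NCCL_SOCKET_NTHREADS": "4",    # CPU threads per network socket
    "NCCL_NSOCKS_PERTHREAD": "4",   # sockets opened per thread
    "NCCL_MIN_NCHANNELS": "16",     # lower bound on communication channels
    "NCCL_IB_DISABLE": "0",         # keep InfiniBand enabled if present
}
os.environ.update(nccl_tuning)

# A launcher started from this process (e.g. torchrun) inherits these
# variables, so every worker sees the same NCCL configuration.
for key, value in nccl_tuning.items():
    print(f"{key}={value}")
```

Setting the variables in the launching process, rather than inside the model code, keeps the communication tuning separate from the training logic and identical across all workers.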
Greatly improving Transformer training performance
Pre-trained massive models based on the Transformer neural network architecture have led to the development of a new generation of AI algorithms. The BERT model in the MLPerf benchmarks is based on the Transformer architecture. Transformer’s concise and stackable architecture makes the training of massive models with huge parameters possible. This has led to a huge improvement in large model algorithms, but necessitates higher requirements for processing performance, communication interconnection, I/O performance, parallel extensions, topology and heat dissipation for AI systems.
In the BERT benchmark, Inspur AI servers further improved BERT training performance through methods including optimized data preprocessing, improved dense-parameter communication between NVIDIA GPUs, and automatic hyperparameter optimization. Inspur Information AI servers can complete training of the roughly 330-million-parameter BERT model in just 15.869 minutes using 2,850,176 samples from the Wikipedia dataset, a performance improvement of 309% compared to the top result of 49.01 minutes in Training v0.7. With this, Inspur AI servers have won the MLPerf Training BERT benchmark for the third consecutive time.
Inspur Information’s two AI servers with top scores in MLPerf Training v2.0 are the NF5488A5 and NF5688M6. The NF5488A5 is one of the first servers in the world to support eight NVIDIA A100 Tensor Core GPUs with NVIDIA NVLink technology and two AMD Milan CPUs in a 4U space. It supports both liquid cooling and air cooling, and has won a total of 40 MLPerf titles. The NF5688M6 is a scalable AI server designed for large-scale data center optimization. It supports eight NVIDIA A100 Tensor Core GPUs and two Intel Ice Lake CPUs, offers up to 13 PCIe Gen4 expansion slots, and has won a total of 25 MLPerf titles.
About Inspur Information
Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions. It is the world’s 2nd largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges.
Zeeve | June 30, 2022
Zeeve, an enterprise-grade no-code platform for automating blockchain infrastructure, has raised $2.65 million in seed funding from Leo Capital and Blu Ventures. The funds will be used to strengthen product development, expand the technical team, and broaden the company's reach among DApp developers and multinational organizations.
Zeeve's no-code platform makes it simple to deploy blockchain nodes and decentralized apps (DApps) on enterprise-grade infrastructure. Stakeholders can manage their nodes and networks with powerful analytics and real-time notifications, and nodes can be deployed in a matter of minutes. Zeeve's solution supports most of the major permissioned blockchain protocols, such as Hyperledger Fabric, R3 Corda, Fluree, and Hyperledger Sawtooth, as well as public blockchain protocols such as Bitcoin, Ethereum, Polygon, Binance Smart Chain, Tron, Avalanche, and Fantom.
Zeeve was founded in 2021 by Dr. Ravi Chamria, a serial entrepreneur and tech evangelist, together with co-founders Ghan Vashistha and Sankalp Sharma. It has since become a leader in easy-to-deploy web3 infrastructure, trusted by more than 10,000 developers, blockchain startups, and businesses.
"The Internet has come a long way - from the simple web pages of web1.0 to the decentralized web3.0. Lots of exciting innovations have happened in the web3.0 space like DeFi, NFTs, Decentralized Insurance, Prediction Markets, etc. We should expect to see a lot more innovation over the next five years, revolutionizing how we use the internet. With further advancements in blockchain technology, we may soon see web3 utilized for everything from online commerce to voting and governance."
Dr. Ravi Chamria, CEO, Zeeve
Harvard Business Review has reportedly hailed Web3 as the internet of the future. Web3 has the potential to broaden everyone's access to the internet. New enterprises can use Web3 infrastructure to build communities around their brands and product concepts more quickly than in previous web iterations. Even existing platforms may take advantage of these prospects by linking to blockchain-powered content networks and granting users some level of data governance. All of this suggests that the web of the future will look very different, and far more open, than it does today.
"In this new era of the internet, companies like Zeeve play a pivotal part in making it easy for enterprises and Blockchain startups to deploy blockchain nodes and consume APIs to connect with Blockchains. Zeeve's offering helps DevOps teams ease their operational, security, and performance challenges while deploying and managing Blockchain nodes and networks," says Tarun Upaday, Partner, Blu Ventures.