APPLICATION INFRASTRUCTURE

Cigniti Enhances its 5G Assurance Focus With innovate5G Partnership

Cigniti | December 06, 2021

Cigniti Technologies, a global leader in independent quality engineering and software testing services, has expanded its portfolio of innovative digital assurance and experience solutions for next-generation 5G technologies by forming a strategic partnership with innovate5G.

As companies move to 5G, they will be challenged to optimize for the plethora of possibilities that 5G has to offer, and customer expectations and demand for new services will increase at a rapid pace. Whether it is enhanced video, telehealth, adaptive manufacturing, AR/VR, gaming, consumer IoT services, or connected vehicles, to name just a few, the pivot to the 5G paradigm will be significant, as will the manner in which these new solutions are rolled out.

Embracing this new horizon of transformation and the realization of 5G networks – massive machine-type communication (mMTC), enhanced mobile broadband (eMBB), and ultra-reliable low-latency communication (URLLC) – creates the mandate for a new paradigm of digital assurance. innovate5G's in5Genius platform, combined with Cigniti's 5G Assurance capabilities, creates an end-to-end assurance model for organizations that are leveraging 5G as the backbone for their business and consumer applications, IoT, and edge computing.

“The promised benefits of 5G – high-speed networks, increased capacity, and minimal latency – create a new dimension for companies to exploit across business and consumer applications, as well as with IoT. What comes next will be new architectures supporting the intrinsic relationship between the applications/IoT devices and the network. Our partnership with innovate5G further enables Cigniti’s Digital Assurance services to best support our clients’ 5G initiatives with speed, reliability, and predictability.”

Srikanth Chakkilam

“Our in5Genius cloud platform allows enterprises to leverage the vast capabilities of 5G bandwidth. Through this partnership with Cigniti, we are now able to offer best-in-class, enterprise-grade quality engineering services to expedite the development and mainstream roll-out of the 5G-centric digital experience,” said Chris Stark.

About Cigniti: Cigniti Technologies Limited, a global leader in providing IP-led, strategic digital assurance, software quality engineering, testing, and consulting services, is headquartered in Hyderabad, India, with offices in the USA, UK, UAE, Australia, Czech Republic, and Singapore. Leading global enterprises, including Fortune 500 and Global 2000 companies, trust us to accelerate their digital transformation, continuously expand their digital horizons, and assure their digital next. We bring the power of AI into Agile and DevOps and offer digital services encompassing intelligent automation, big data analytics, cloud migration assurance, 5G assurance, customer experience assurance, and much more. Our IP-led, next-gen quality engineering platform, BlueSwan, helps assure digital next by predicting and preventing unanticipated application failures, thereby assisting our clients in accelerating their adoption of digital.

About innovate5G: innovate5G's in5Genius platform is meant for application developers to test their 5G applications in a secure and pressure-free environment, at their own pace. Whether it's an idea for a gaming, industrial, or enterprise application, innovate5G eliminates barriers that keep developers from testing out their ideas, such as having to partner with a carrier or large OEM. We are working with universities and innovation zones to build 5G networks and are developing a lab-sharing model in a similar vein to Airbnb. We also design and build bespoke private CBRS LTE and 5G networks for enterprises.

Spotlight

According to Gartner,1 hyperconverged infrastructure is growing at a 50% compound annual growth rate. By 2017, 50% of enterprises will be deploying it for webscale IT. The reason: “The cost of traditional infrastructure has become oppressive.” Traditional infrastructure in today’s data center has become a series of complex technology silos that require deep technology skills for different layers of the infrastructure stack, plus large up-front investments. When outgrown, they require rip-and-replace upgrades that are risky, disruptive, and expensive. The larger these environments become, the greater the complexity grows, and the higher the risk and operational cost become.


Other News
IT SYSTEMS MANAGEMENT

Hudson Interxchange Leverages the Carma Network & Digital Infrastructure (NDI) Platform

Hudson Interxchange | August 12, 2022

Hudson Interxchange partners with Carma’s Network and Digital Infrastructure (NDI) Platform to form the fully integrated core of its operations, engineering, customer service, and customer-facing portal. Carma uniquely spans every traditional industry vertical, offering functionality to enable seamless delivery of interconnection in Meet Me Rooms and usage-based billing for large power feeds, which Hudson Interxchange will utilize in a single platform.

“Hudson Interxchange required a solution that could rapidly deploy, integrate all existing data, and deliver new functionality. We selected Carma’s platform because its holistic approach links ordering, service delivery, and billing into a single solution for a seamless customer experience.”

Tom Brown, President & CEO, Hudson Interxchange

“Carma combines management of 15MW of power, Meet Me Room interconnection to 300 different carriers, and cloud services in a single platform for Hudson Interxchange so they can focus on their core business, while we deliver a turnkey IT platform to enable it,” said Frank McDermott, CEO, Carma.

Carma addresses the challenges of today’s telecommunications businesses with an industry-focused solution that aggregates over two dozen functions into one system. Traditionally disjointed silos such as sales, order entry, contract management, workflow, ticketing, space, power, interconnection, conduit, outside plant fiber, capacity management, expense management, revenue assurance, billing, reporting, analytics, and customer-facing portals come together in the most capable, ubiquitous, and scalable platform available.

About Hudson Interxchange: Hudson Interxchange offers unparalleled infrastructure and capacity strategically located at key aggregation points across a global platform of existing and emerging markets, enabling seamless connectivity and dense power with scalable offerings that maximize operational and capital expenditure. The flexibility of the Hudson IX platform enables infrastructure solutions that are tailored to a customer's specific needs. The Hudson IX team is hyperfocused on providing an exceptional client experience.

About Carma: Carma delivers the world’s first Network & Digital Infrastructure (NDI) platform, providing a fully integrated sales, operations, service, and finance solution for any vertical in the telecommunications industry. Carma aggregates over two dozen functions into one platform for a simpler, more robust, and more secure ecosystem with a dramatically lower total cost of ownership. Carma links the physical assets of the network and data center to every customer, order, service, and invoice line item for complete visibility into every transaction. Carma is a Microsoft for Startups member, CSP Direct Partner, and ISV Cloud Embed Partner.


HYPER-CONVERGED INFRASTRUCTURE

Inspur Announces MLPerf v2.0 Results for AI Servers

Inspur | July 04, 2022

The open engineering consortium MLCommons released the latest MLPerf Training v2.0 results, with Inspur AI servers leading in closed division single-node performance.

MLPerf is the world’s most influential benchmark for AI performance. It is managed by MLCommons, with members from more than 50 global leading AI companies and top academic institutions, including Inspur Information, Google, Facebook, NVIDIA, Intel, Harvard University, Stanford University, and the University of California, Berkeley. MLPerf AI Training benchmarks are held twice a year to track improvements in computing performance and provide authoritative data guidance for users.

The latest MLPerf Training v2.0 attracted 21 global manufacturers and research institutions, including Inspur Information, Google, NVIDIA, Baidu, Intel-Habana, and Graphcore. There were 264 submissions, a 50% increase over the previous round. The eight AI benchmarks cover current mainstream AI usage scenarios: image classification with ResNet, medical image segmentation with 3D U-Net, lightweight object detection with RetinaNet, heavyweight object detection with Mask R-CNN, speech recognition with RNN-T, natural language processing with BERT, recommendation with DLRM, and reinforcement learning with MiniGo.

Among the closed division benchmarks for single-node systems, Inspur Information with its high-end AI servers was the top performer in natural language processing with BERT, recommendation with DLRM, and speech recognition with RNN-T, winning the most titles among single-node system submitters. For mainstream high-end AI servers equipped with eight NVIDIA A100 Tensor Core GPUs, Inspur AI servers were top ranked in five tasks (BERT, DLRM, RNN-T, ResNet, and Mask R-CNN).

Continuing to lead in AI computing performance

Inspur AI servers continue to achieve AI performance breakthroughs through comprehensive software and hardware optimization. Compared to the MLPerf v0.5 results in 2018, Inspur AI servers showed significant performance improvements of up to 789% for typical 8-GPU server models.

The leading performance of Inspur AI servers in MLPerf is a result of outstanding design innovation and full-stack optimization capabilities for AI. To address the bottleneck of intensive I/O transmission in AI training, the PCIe retimer-free design of Inspur AI servers allows high-speed interconnection between CPUs and GPUs with reduced communication delays. For high-load, multi-GPU collaborative task scheduling, data transmission between NUMA nodes and GPUs is optimized to ensure that data I/O in training tasks runs at peak performance. In terms of heat dissipation, Inspur Information takes the lead in deploying eight 500W high-end NVIDIA A100 Tensor Core GPUs in a 4U space, with support for both air cooling and liquid cooling. Meanwhile, Inspur AI servers continue to optimize pre-training data processing performance and adopt combined optimization strategies, such as hyperparameter and NCCL parameter tuning along with the many enhancements provided by the NVIDIA AI software stack, to maximize AI model training performance.

Greatly improving Transformer training performance

Pre-trained massive models based on the Transformer neural network architecture have led to the development of a new generation of AI algorithms. The BERT model in the MLPerf benchmarks is based on the Transformer architecture. Transformer’s concise and stackable architecture makes the training of massive models with huge parameter counts possible. This has driven a huge improvement in large-model algorithms, but it imposes higher requirements on processing performance, communication interconnection, I/O performance, parallel extension, topology, and heat dissipation for AI systems.

In the BERT benchmark, Inspur AI servers further improved BERT training performance by optimizing data preprocessing, improving dense parameter communication between NVIDIA GPUs, and automatically tuning hyperparameters. Inspur Information AI servers can complete BERT model training of approximately 330 million parameters in just 15.869 minutes using 2,850,176 samples from the Wikipedia dataset, a performance improvement of 309% over the top result of 49.01 minutes in Training v0.7. With this, Inspur AI servers have won the MLPerf Training BERT benchmark for the third consecutive time.

Inspur Information’s two AI servers with top scores in MLPerf Training v2.0 are the NF5488A5 and NF5688M6. The NF5488A5 is one of the first servers in the world to support eight NVIDIA A100 Tensor Core GPUs with NVIDIA NVLink technology and two AMD Milan CPUs in a 4U space; it supports both liquid cooling and air cooling, and has won a total of 40 MLPerf titles. The NF5688M6 is a scalable AI server designed for large-scale data center optimization; it supports eight NVIDIA A100 Tensor Core GPUs and two Intel Ice Lake CPUs, with up to 13 PCIe Gen4 IO slots, and has won a total of 25 MLPerf titles.

About Inspur Information: Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions, and the world’s second-largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, its world-class solutions empower customers to tackle specific workloads and real-world challenges.
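As a quick sanity check, the reported 309% improvement corresponds to the ratio of the v0.7 and v2.0 BERT training times published above (a minimal sketch using only figures from the article; variable names are illustrative):

```python
# Reported MLPerf Training BERT results cited in the article.
v0_7_minutes = 49.01    # top BERT time in MLPerf Training v0.7
v2_0_minutes = 15.869   # Inspur's BERT time in MLPerf Training v2.0
samples = 2_850_176     # Wikipedia training samples processed

# "Performance improvement of 309%" reads as the ratio of the two times.
speedup = v0_7_minutes / v2_0_minutes
throughput = samples / (v2_0_minutes * 60)  # samples per second

print(f"speedup: {speedup:.2f}x")            # ~3.09x, i.e. 309%
print(f"throughput: {throughput:.0f} samples/s")
```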


IT SYSTEMS MANAGEMENT

Cortex Gives Global Enterprises Autodiscovery for Cloud Infrastructure

Cortex | July 27, 2022

Cortex today announced new innovations designed to give engineering teams the same levels of visibility into and control over cloud infrastructure that the platform has provided, since its inception, over microservices. The company’s industry-leading System of Record for Engineering, which has given engineers and SREs comprehensive microservices visibility and control, now provides a Resource Catalog that extends to the entire cloud environment, including S3 buckets, databases, caches, load balancers, and data pipelines.

“We’ve now extended the platform to say, ‘Here's all the infrastructure we have, here's who owns it, here's what they do, and here's how they tie to the services,’” said Ganesh Datta, co-founder and CTO of Cortex. “We found that many customer infrastructure teams were already using the platform for tracking infrastructure migrations for microservices with Cortex scorecards, and that they wanted to expand that to include all of their assets. The platform now provides a central repository for all of that information.”

Cortex Resource Catalog

The new Cortex Resource Catalog enables customers to define their own resources in addition to those predefined by the platform. For example, customers wanting to represent a certain path within an S3 bucket as a first-class resource owned by a certain team, or who want to represent Kafka topics as resources along with relationships to their consumer/producer microservices, can now do so using Cortex.

“Giving developers observability of their infrastructure gives them much-needed contextual information that improves and speeds development of the applications and services they create,” said Paul Nashawaty, Senior Analyst at Enterprise Strategy Group. “The ability to share this information across teams helps them stay aligned in their workflows and outcomes, and greatly benefits their organizations and their customers.”

Cortex’s fast-growing customer base, which includes Adobe, Brex, Grammarly, Palo Alto Networks, and SoFi, has found great flexibility in the platform, enabling customers to develop a multitude of creative new use cases. The ability to systematically add items that are not microservices to a catalog, track them by owner, and apply scorecards to their performance has been the company’s most-requested capability in the last 12 months.

“These new capabilities provide significantly deeper visibility into what cloud resources are being used, by whom, and to what effect, than any single platform has had before. These new levels of visibility and control give companies using Cortex greater ability to optimize a broader set of resources to enhance cross-functional collaboration and improve their own performance, which is especially important to engineering teams as they work to optimize resources to align with their business goals.”

Anish Dhar, co-founder and CEO, Cortex

About Cortex: Cortex is designed to give engineers and SREs comprehensive visibility and control over microservices and cloud infrastructure. It does this by providing a single pane of glass for visualization of service and infrastructure ownership, documentation, and performance history, replacing institutional knowledge and spreadsheets. This gives engineering and SRE teams the visibility and control they need, even as teams shift, people move, platforms change, and microservices and infrastructure continue to grow. Cortex is a Y Combinator company backed by Sequoia Capital and Tiger Global.
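To make the Kafka-topic example concrete, a catalog entry of this kind can be pictured as an ownership record plus service relationships. The sketch below is purely illustrative and is not Cortex's actual schema or API; every field and name in it is invented:

```python
# Hypothetical sketch of a custom resource entry like the ones described
# above. This is NOT Cortex's real schema; all field names are invented
# for illustration only.
kafka_topic_resource = {
    "type": "kafka-topic",                  # customer-defined resource type
    "name": "orders-events",                # illustrative topic name
    "owner": "team-payments",               # team accountable for the resource
    "relationships": {
        "producers": ["checkout-service"],  # services writing to the topic
        "consumers": ["billing-service", "analytics-pipeline"],
    },
}

# A catalog keyed by (type, name) gives the central-repository lookup the
# article describes: who owns a resource and how it ties to services.
catalog = {(r["type"], r["name"]): r for r in [kafka_topic_resource]}
entry = catalog[("kafka-topic", "orders-events")]
print(entry["owner"])                       # team-payments
print(entry["relationships"]["consumers"])
```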


DATA STORAGE

SANBlaze Enters New Markets in the Storage Testing Industry

SANBlaze | August 01, 2022

SANBlaze Technology Inc., a leading worldwide provider of advanced storage test and validation technologies, today announced the expansion of its industry-first NVMe® over PCIe® 5.0 validation and compliance testing system from traditional SSD manufacturers to new markets comprising data center storage and large cloud vendors.

“Customer confidence has grown beyond our traditional walls of satisfying requirements from major SSD manufacturers to supporting large data centers and cloud storage organizations. This evolution stems from our first-to-market leadership for early adoption and development of NVMe PCIe Gen5. Early availability was a critical factor in enabling our key strategic customers to meet their internal development schedules for Gen5 SSDs and FCS releases.”

Rick Walsh, VP of Sales and Marketing

“SANBlaze partnered with WD to get our Gen5 validation infrastructure ready in time, including SRIS/SRNS clocking features, which helped fast-track our overall Gen5 bring-up,” said Anuj Awasthi, Senior Director, System Design and Firmware Verification Engineering, Western Digital Corp.

In addition to SSD manufacturers such as Western Digital, SANBlaze is onboarding major cloud and data center storage providers as they recognize the capabilities and value of the Certified by SANBlaze test suites as a first-level SSD validation criterion. Certified by SANBlaze is an instant benchmarking tool that saves on CapEx overhead for SSD compute applications.

SANBlaze Suite of Products

SANBlaze solutions include the SBExpress-RM5™ rackmount appliance, the SBExpress-DT5™ desktop appliance, and the industry-benchmark SBExpress Certified by SANBlaze software, which provides over 900 ready-to-go tests and scripts. These latest PCIe 5.0 platforms provide broad test capabilities for development, QA, validation, and manufacturing teams in data centers large and small.

SBExpress-RM5

The SBExpress-RM5 is a 16-bay enterprise-class NVMe test appliance supporting hot-plug and all link speeds up through PCIe 5.0. The system features a unique modular “riser” design that enables user-configurable variable slot support, as well as field-upgradable support for all PCIe 5.0 connector form factors, including U.2, M.2, EDSFF, and the new E3/EDSFF. The ability to margin and measure power, glitch signals, and test spread-spectrum clocking (SSC) or conventional clocking in both common and SRIS/SRNS modes sets the SBExpress-RM5 apart from all others in the NVMe SSD testing space. Data integrity is verified with a comprehensive suite of read/write/compare tests, including exception cases such as power glitching while running IO, and built-in "Write Atomicity" testing as part of the Certified by SANBlaze test suite.

Testing is accessible through a web interface to the appliance or via Python, XML, and REST APIs, which come standard with the system. The SBExpress™ Gen5 software includes over nine hundred test scripts to enable IOL testing in the customer’s lab before undergoing official testing, as well as ZNS, VDM, and TCG Opal verification.

SBExpress-DT5

The SBExpress-DT5 is the sixth-generation SBExpress system and is both evolutionary, growing from its successful family of predecessors, and revolutionary, with advanced test capabilities such as Vendor Defined Messaging (VDM) testing, in-band MI (Management Interface) testing, and SMBus testing at 1MHz. All features of the enterprise test suite Certified by SANBlaze are supported by the DT5 at PCIe 5.0 speed.

SANBlaze at Flash Memory Summit

Flash Memory Summit 2022 takes place August 2-4 at the Santa Clara Convention Center, Santa Clara, CA, USA. SANBlaze, a member of the Symbiosys Alliance, will be present in booth #219. The Symbiosys Alliance will be present in booth #119.

About SANBlaze: SANBlaze is a pioneer in storage testing and validation technologies. SANBlaze systems are deployed in the test and development labs of most major storage hardware and software vendors worldwide. SANBlaze is revolutionizing the NVMe Storage Area Network (SAN) and PCIe device qualification markets by offering end-to-end NVMe testing. We are first to market with a solution that tests native NVMe and NVMe over Fabrics (NVMe-oF™) for complete end-to-end testing of your entire system using single-port or dual-port drives.

About the Symbiosys Alliance: The Symbiosys Alliance is an I/O interconnect technology group chartered to create value for its membership and their respective customers by strategically and collaboratively aligning member products and services with current and upcoming market opportunities. These synergized solutions can provide developers with the state-of-the-art resources they need to roll out highly competitive offerings efficiently and confidently to their respective marketplaces. The alliance addresses a range of verticals increasingly characterized by hyper-fast innovation cycles, including semiconductors, data storage, IoT, cloud computing, consumer electronics, automotive, aerospace, medical, and more. Members leverage alliance partnerships to anticipate and address these innovation cycles by delivering high-quality solutions that resonate with the latest technological advances.
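The article notes that RM5 testing can be driven via Python, XML, and REST APIs. A hedged sketch of what scripting a test run over a generic REST API could look like follows; the host, endpoint path, and JSON fields are entirely hypothetical and are not SANBlaze's documented interface:

```python
import json
from urllib import request

# Hypothetical sketch of driving an NVMe test appliance over REST.
# The host, endpoint, and payload fields below are invented for
# illustration; consult the vendor's API documentation for the real API.
APPLIANCE = "http://appliance.example.local"

def build_test_request(slot: int, test_name: str) -> request.Request:
    """Build a POST request that would start one test on one drive bay."""
    payload = {
        "slot": slot,              # drive bay to exercise (the RM5 has 16 bays)
        "test": test_name,         # e.g. a read/write/compare data-integrity test
        "link_speed": "gen5",      # PCIe 5.0, per the article
    }
    return request.Request(
        f"{APPLIANCE}/api/tests",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_test_request(slot=3, test_name="write-atomicity")
print(req.get_method(), req.full_url)
```

The request is only constructed here, not sent; a real script would pass it to `urllib.request.urlopen` (or any HTTP client) and poll for results.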


