ThoughtSpot Brings Search-Based Analytics to Google Cloud

SDxCentral | November 13, 2018

Search and analytics company ThoughtSpot has extended its partnership with Google Cloud and is now certified to run its in-memory calculation engine on Google Cloud Platform (GCP). ThoughtSpot also added new integration between its products and the Google Cloud Machine Learning Engine. While Google Cloud has a number of certified partners in analytics and big data, ThoughtSpot's Director of Content and Communications Ryan Mattison said it is the first to bring search and artificial intelligence (AI)-driven analytics to GCP. This includes running its in-memory calculation engine, Falcon, on the cloud platform and leveraging the GCP machine learning engine to make predictions. Palo Alto, California-based ThoughtSpot was founded in 2012 by engineers from Google, Oracle, Microsoft, Yahoo, and other Silicon Valley companies. Its CEO joined the company from Nutanix in July 2018. ThoughtSpot was co-founded by Ajeet Singh, who also co-founded Nutanix and now serves as ThoughtSpot's executive chairman. ThoughtSpot's core technology is a next-generation analytics platform that enables search functionality so users can analyze enterprise data.

Spotlight

Converged infrastructure solutions such as FlexPod accelerate application deployments, speed up time to market, lower costs and simplify IT operational management. With application and data growth accelerating across enterprise data center environments, it is critically important for businesses adopting converged infrastructure systems to modernize their application availability capabilities at the same time. By leveraging availability software with FlexPod, businesses can accelerate their digital transformation initiatives, increase IT automation, mitigate risk and improve the return on their IT investments.


Other News
HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE

Nasuni Extends Its File Data Platform to Deliver Secure High-Performance File Access Anywhere

Nasuni | August 19, 2022

Nasuni Corporation, the leader in file data services, today announced the availability of Nasuni Access Anywhere, a secure, high-performance file solution for hybrid and remote workers. The new add-on service makes Nasuni the first file data platform to address the security and performance needs of the distributed workforce. With Nasuni Access Anywhere, enterprises can extend the Nasuni File Data Platform to deliver high-performance file access for remote and hybrid users, along with productivity tools that let them manage files from anywhere, on any device. The new service delivers a secure, VPN-less solution that provides resilient access to files from any location, even in high-latency environments. It also enables employees to share files and folders with clients, contractors and partners with enforced file security. In addition, the popularity of Microsoft 365, Microsoft Teams and Slack is creating more demand for workflow integration with an organization's file shares. Nasuni Access Anywhere ensures that corporate file shares can be accessed directly within Microsoft Teams, Microsoft 365 and Slack, facilitating frictionless collaboration through a single platform. Key capabilities include:

- VPN-less access to corporate file shares: Provides secure, reliable connectivity via the Internet, supporting local performance and visibility to all files without a VPN.
- Offline access with desktop sync: Employees can use the desktop file system for working offline, and the solution will automatically sync all files to the corporate file share when connectivity is restored.
- Microsoft 365, Microsoft Teams and Slack integration: Store, search, browse and edit files from within key collaboration applications while being connected to corporate files on a single data platform.
- Secure external file collaboration: Users can securely share and receive content with external parties without compromising control, performance or security.
- File transfer acceleration: Employees can upload and download files quickly, even from home and remote locations.

Hybrid and remote work are here to stay. Nearly one-quarter (24%) of employees expect to work fully remote, while more than half (53%) anticipate a hybrid work arrangement, according to a recent Gallup poll. Unfortunately, remote workers and employees working in offices with limited infrastructure face significant productivity challenges. Network connectivity may be limited and connections may have high latency, which makes sustained access and sharing of files difficult. Using a VPN to gain a secure connection may add to performance problems, and as a result, employees often save critical file data offline, putting the data at risk and making shared file access even more challenging.

"The world of work has changed forever, and business runs on file data. Enterprises can no longer rely on hardware-based network-attached storage (NAS), a 20-year-old technology. They need a cloud-based approach that makes data accessible from anywhere and secures it from any threat. Nasuni is a single, secure file data platform that transforms file infrastructure into data services. Now, with Nasuni Access Anywhere, our platform provides secure access to file data from anywhere," said David Grant, president of Nasuni.

"Nasuni backed by Azure object storage is our corporate file data platform, Microsoft Teams is our project collaboration platform and [this solution] is our bridge that connects the two," said customer Todd Dughman, director of information technology, associate, at U.S. civil engineering, surveying and landscape architecture firm Teague Nall and Perkins, Inc.

About Nasuni
Nasuni Corporation is a leading file data services company that helps organizations create a secure file data cloud for digital transformation, global growth and information insight. The Nasuni File Data Platform is a cloud-native suite of services that simplifies file data infrastructure, enhances file data protection and ensures fast file access globally at the lowest cost. By consolidating file data in easily expandable cloud object storage from Azure, AWS, Google Cloud and others, Nasuni becomes the cloud-native replacement for traditional network-attached storage (NAS) and file server infrastructure, as well as complex legacy file backup, disaster recovery, remote access and file synchronization technologies. Organizations worldwide rely on Nasuni to easily access and share file data globally from the office, home or on the road. Sectors served by Nasuni include manufacturing, construction, creative services, technology, pharmaceuticals, consumer goods, oil and gas, financial services and public sector agencies. Nasuni's corporate headquarters is in Boston, Massachusetts, USA, and it delivers services in over 70 countries around the globe.


IT SYSTEMS MANAGEMENT

Cortex Gives Global Enterprises Autodiscovery for Cloud Infrastructure

Cortex | July 27, 2022

Cortex today announced new innovations designed to give engineering teams the same levels of visibility into and control over cloud infrastructure that the platform has provided over microservices since its inception. The company’s industry-leading System of Record for Engineering, which has given engineers and SREs comprehensive microservices visibility and control, now provides a Resource Catalog that extends to the entire cloud environment, including S3 buckets, databases, caches, load balancers and data pipelines. “We’ve now extended the platform to say, ‘Here's all the infrastructure we have, here's who owns it, here's what they do, and here's how they tie to the services,’” said Ganesh Datta, co-founder and CTO of Cortex. “We found that many customer infrastructure teams were already using the platform for tracking infrastructure migrations for microservices with Cortex scorecards, and that they wanted to expand that to include all of their assets. The platform now provides a central repository for all of that information.”

Cortex Resource Catalog
The new Cortex Resource Catalog enables customers to define their own resources in addition to those predefined by the platform. For example, customers wanting to represent a certain path within an S3 bucket as a first-class resource owned by a certain team, or who want to represent Kafka topics as resources along with relationships to their consumer/producer microservices, can now do so using Cortex. “Giving developers observability of their infrastructure gives them much-needed contextual information that improves and speeds development of the applications and services they create,” said Paul Nashawaty, Senior Analyst at Enterprise Strategy Group. “The ability to share this information across teams helps them stay aligned in their workflows and outcomes, and greatly benefits their organizations and their customers.”

Cortex’s fast-growing customer base, which includes Adobe, Brex, Grammarly, Palo Alto Networks and SoFi, has found great flexibility in the platform, enabling customers to develop a multitude of creative new use cases. The ability to systematically add items that are not microservices to a catalog, track them by owner and apply scorecards to their performance has been the company’s most-requested capability in the last 12 months. “These new capabilities provide significantly deeper visibility into what cloud resources are being used, by whom, and to what effect, than any single platform has had before. These new levels of visibility and control give companies using Cortex greater ability to optimize a broader set of resources to enhance cross-functional collaboration and improve their own performance, which is especially important to engineering teams as they work to optimize resources to align with their business goals,” said Cortex co-founder and CEO Anish Dhar.

About Cortex
Cortex is designed to give engineers and SREs comprehensive visibility and control over microservices and cloud infrastructure. It does this by providing a single pane of glass for visualization of service and infrastructure ownership, documentation and performance history, replacing institutional knowledge and spreadsheets. This gives engineering and SRE teams the visibility and control they need, even as teams shift, people move, platforms change and microservices and infrastructure continue to grow. Cortex is a Y Combinator company backed by Sequoia Capital and Tiger Global.
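The catalog idea described above, custom resource types with named owners and links back to the services that use them, can be sketched as a small data model. This is a purely illustrative sketch: the class and field names below are invented for the example and are not Cortex's actual API or schema.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A catalog entry: any infrastructure asset, not just a microservice."""
    name: str
    kind: str                       # e.g. "s3-path", "kafka-topic", "database"
    owner: str                      # owning team
    linked_services: list = field(default_factory=list)  # related microservices

# Represent an S3 path and a Kafka topic as first-class, owned resources,
# mirroring the examples from the announcement above.
catalog = [
    Resource("s3://data-lake/exports/", "s3-path", "data-platform"),
    Resource("orders.events", "kafka-topic", "checkout",
             linked_services=["order-service", "billing-service"]),
]

# Answer "who owns it, and how does it tie to the services?" from one place.
for r in catalog:
    print(f"{r.kind}: {r.name} -> owned by {r.owner}, used by {r.linked_services}")
```

The point of the sketch is the shape of the data, not the code itself: once every asset, microservice or not, is an entry with an owner and service relationships, questions of ownership and impact can be answered from a single repository.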


HYPER-CONVERGED INFRASTRUCTURE

Inspur Announces MLPerf v2.0 Results for AI Servers

Inspur | July 04, 2022

The open engineering consortium MLCommons released the latest MLPerf Training v2.0 results, with Inspur AI servers leading in closed division single-node performance. MLPerf is the world’s most influential benchmark for AI performance. It is managed by MLCommons, with members from more than 50 global leading AI companies and top academic institutions, including Inspur Information, Google, Facebook, NVIDIA, Intel, Harvard University, Stanford University, and the University of California, Berkeley. MLPerf AI Training benchmarks are held twice a year to track improvements in computing performance and provide authoritative data guidance for users. The latest MLPerf Training v2.0 attracted 21 global manufacturers and research institutions, including Inspur Information, Google, NVIDIA, Baidu, Intel-Habana, and Graphcore. There were 264 submissions, a 50% increase over the previous round. The eight AI benchmarks cover current mainstream AI usage scenarios, including image classification with ResNet, medical image segmentation with 3D U-Net, light-weight object detection with RetinaNet, heavy-weight object detection with Mask R-CNN, speech recognition with RNN-T, natural language processing with BERT, recommendation with DLRM, and reinforcement learning with MiniGo. Among the closed division benchmarks for single-node systems, Inspur Information with its high-end AI servers was the top performer in natural language processing with BERT, recommendation with DLRM, and speech recognition with RNN-T, winning the most titles among single-node system submitters. For mainstream high-end AI servers equipped with eight NVIDIA A100 Tensor Core GPUs, Inspur Information AI servers were top ranked in five tasks (BERT, DLRM, RNN-T, ResNet and Mask R-CNN).

Continuing to lead in AI computing performance
Inspur AI servers continue to achieve AI performance breakthroughs through comprehensive software and hardware optimization. Compared to the MLPerf v0.5 results in 2018, Inspur AI servers showed significant performance improvements of up to 789% for typical 8-GPU server models. The leading performance of Inspur AI servers in MLPerf is a result of outstanding design innovation and full-stack optimization capabilities for AI. To address the bottleneck of intensive I/O transmission in AI training, the PCIe retimer-free design of Inspur AI servers allows for high-speed interconnection between CPUs and GPUs with reduced communication delays. For high-load, multi-GPU collaborative task scheduling, data transmission between NUMA nodes and GPUs is optimized to ensure that data I/O in training tasks is at the highest performance state. In terms of heat dissipation, Inspur Information takes the lead in deploying eight 500W high-end NVIDIA A100 Tensor Core GPUs in a 4U space, with support for both air cooling and liquid cooling. Meanwhile, Inspur AI servers continue to optimize pre-training data processing performance, and adopt combined optimization strategies such as hyperparameter and NCCL parameter tuning, as well as the many enhancements provided by the NVIDIA AI software stack, to maximize AI model training performance.

Greatly improving Transformer training performance
Pre-trained massive models based on the Transformer neural network architecture have led to the development of a new generation of AI algorithms. The BERT model in the MLPerf benchmarks is based on the Transformer architecture. Transformer’s concise and stackable architecture makes the training of massive models with huge parameter counts possible. This has led to a huge improvement in large model algorithms, but imposes higher requirements for processing performance, communication interconnection, I/O performance, parallel extension, topology and heat dissipation on AI systems. In the BERT benchmark, Inspur AI servers further improved BERT training performance by using methods including optimized data preprocessing, improved dense parameter communication between NVIDIA GPUs, and automatic optimization of hyperparameters. Inspur Information AI servers can complete BERT model training of approximately 330 million parameters in just 15.869 minutes using 2,850,176 pieces of data from the Wikipedia data set, a performance improvement of 309% compared to the top performance of 49.01 minutes in Training v0.7. With this result, Inspur AI servers have won the MLPerf Training BERT benchmark for the third consecuttime time. Inspur Information’s two AI servers with top scores in MLPerf Training v2.0 are the NF5488A5 and NF5688M6. The NF5488A5 is one of the first servers in the world to support eight NVIDIA A100 Tensor Core GPUs with NVIDIA NVLink technology and two AMD Milan CPUs in a 4U space. It supports both liquid cooling and air cooling, and has won a total of 40 MLPerf titles. The NF5688M6 is a scalable AI server designed for large-scale data center optimization. It supports eight NVIDIA A100 Tensor Core GPUs and two Intel Ice Lake CPUs, up to 13 PCIe Gen4 IO, and has won a total of 25 MLPerf titles.

About Inspur Information
Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions, and the world’s second-largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, its world-class solutions empower customers to tackle specific workloads and real-world challenges.
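As a quick arithmetic check on the BERT result quoted above, the "309%" figure corresponds to the ratio of the two reported training times, i.e. roughly a 3.09x speedup over the v0.7 result. The times below are taken directly from the article:

```python
# Sanity check of the BERT training speedup reported above.
# Both times are in minutes, as quoted in the article.
v0_7_top_time = 49.01      # top BERT performance in MLPerf Training v0.7
v2_0_inspur_time = 15.869  # Inspur result in MLPerf Training v2.0

speedup = v0_7_top_time / v2_0_inspur_time
print(f"{speedup:.2f}x")   # prints 3.09x, matching the "309%" figure
```

Note that the article expresses the 3.09x ratio as "309%"; read strictly as a percentage gain it would be about 209%, but the ratio reading matches the quoted numbers.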


HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE

Linux and Open Source Veterans Sign On to Form CIQ Leadership Team

CIQ | September 02, 2022

Nine cloud and Linux veterans have signed on to form the leadership team for CIQ, the company building the next generation of software infrastructure for enterprises running data-intensive workloads atop the Rocky Linux enterprise Linux distribution. This follows a successful funding round in May and a teaming with Google Cloud in July. CIQ secured $26 million in Series A funding in May, led by Two Bear Capital, with the goal of building a suite of enterprise workflow orchestration and hybrid cloud solutions designed to support enterprises as compute-intensive workloads like high-performance computing (HPC) grow to include non-traditional workloads like high-end data analytics and AI/ML models, both on premises and in the cloud. In July, Google Cloud announced that it had made its optimized version of Rocky Linux generally available to users of its cloud infrastructure service.

“The next generation of software infrastructure that we’re building at CIQ will help enterprises and organizations tackle data-intensive workflows, from big data analytics, to HPC for modeling and simulation, to training sophisticated machine learning models. The CIQ team has a legendary degree of experience building and running Linux-based infrastructure at scale for some of the most demanding applications. From the base operating system, through cloud, VMs, containers, all the way up to the top of a unified application stack, the CIQ team uniquely understands the point of view of customers across a diversity of industries. Together we’re developing a game plan that will show enterprises how to embrace the future of software infrastructure with greater speed, less risk and a shorter path to success,” said Gregory Kurtzer, founder and CEO of CIQ.

The senior leadership team at CIQ now includes:

- Robert Adolph, co-founder and chief product officer: Adolph has introduced disruptive innovations in technology and business strategy for the past 25 years. He has integrated engineering and research and development experience with large partners and customers specialized in engineering, computing, digital and project management. Adolph has been an advisory board member and angel investor in many start-ups and is a premium partner for customers on innovative project development.
- Rob Dufalo, SVP engineering: Dufalo has worked on both ends of the computing spectrum, delivering solutions for the second-largest search engine in the world and working on IoT and embedded computing solutions for over 20 years. He has extensive experience in development, quality management and operating cloud-native solutions.
- John Frey, CTO: Frey has spent the last 20 years of his career creating partnerships, mergers and acquisitions, and operating cost efficiency. His primary focus is using his research and engineering experience to apply machine learning and data-driven insights to the HPC and cloud computing solutions in CIQ’s current product stack.
- Gregory M. Kurtzer, co-founder and CEO: Kurtzer is a 20+ year veteran of Linux, open source and HPC. He is well known in the HPC space for designing scalable, easy-to-manage, secure architectures for innovative performance-intensive computing while working for the U.S. Department of Energy, with a joint appointment at UC Berkeley. Kurtzer founded and led several large open source projects such as CentOS Linux, the Warewulf and PERCEUS cluster toolkits, the container system Singularity and, most recently, the successor to CentOS, Rocky Linux.
- David LaDuke, VP marketing: LaDuke is a former marketing executive at Apple and NeXT, having started his career in open source as a founder and CMO at Linuxcare. He went on to found Sputnik, the world’s first cloud solution for managing guest Wi-Fi networks. Sputnik merged with Lokket in 2017 to form what is today a growing wireless internet service provider (WISP) focused on providing affordable broadband to schools and underserved communities across the U.S.
- Stephen Moody, SVP support and technology: Moody is a technology executive, formerly at MIW for over 15 years, where he developed patented products that focus on utilizing green technologies. His experience with IBM, Boeing, Microsoft, Rackable Systems and ZT Systems has given him the opportunity to build high-performing technology teams and solutions.
- Marlin Prager, CFO: Prager has over 26 years of experience in the media and entertainment industries, where he has been directly involved in equity, debt, and merger and acquisition deals totaling more than $10 billion. He was formerly with Legendary Entertainment, which was acquired by Wanda in 2016, and then worked on several start-up projects including Open Drives and Illuscio before joining CIQ. Prager was formerly a partner at W2 Films and also founded Digital Cinema Ventures, supporting the rollout of digital projectors in theaters. He began his career with Price Waterhouse, where he earned his CPA.
- Brock Taylor, VP HPC and strategic partners: Taylor brings over 20 years of experience in the silicon industry at Intel Corporation and, most recently, AMD. He has held positions of solutions architect, manager and director for HPC systems, roles in which he focused on abstracting away the complexities of HPC systems from users to help expand access to these technologies across scientific and engineering domains.
- Art Tyde, VP business development: Tyde is a 30-year veteran of open source with significant technology sales and engineering experience in both start-ups and the enterprise. Tyde is credited as the founder of the enterprise Linux services and support industry, having founded the San Francisco (and Silicon Valley) Bay Area Linux Users Group in 1994 and co-founded Linuxcare with CIQ VP marketing David LaDuke.

The CIQ suite of tools is designed to leverage the capabilities of Rocky Linux, the successor to CentOS. Red Hat is moving CentOS to end-of-life status and will eventually no longer support it. Kurtzer, founder and CEO of CIQ, is also the founder of Rocky Linux and an original founder of CentOS. Rocky Linux is a community-led project managed by the Rocky Enterprise Software Foundation.

About CIQ
CIQ powers the next generation of software infrastructure, leveraging capabilities from enterprise, cloud, hyperscale and HPC. From the base operating system, through containers, orchestration, provisioning, computing, and up to cloud applications, CIQ works with every part of the technology stack to drive solutions for customers and communities with stable, scalable, secure production environments. CIQ is the founding support and services partner of Rocky Linux, and the creator of the next generation federated computing stack.


