Application Infrastructure

Telecom Infra Project Welcomes Minim and Evaluates Its Technology as a Standard Element in OpenWiFi Architecture

Minim, Inc., the creator of intelligent networking products, today announced that it has joined the Telecom Infra Project (TIP) Open Converged Wireless (OCW) Project Group and will support the open-source TIP OpenWiFi Project. Minim and TIP OpenWiFi members are evaluating the use of Minim's intelligent connectivity software as a standard element in the Project Group's OpenWiFi framework. This move would allow ISPs and OEMs to activate Minim, or any other Wi-Fi management software that supports OpenWiFi's new subscriber management framework, on TIP OpenWiFi compliant devices.

TIP OpenWiFi, unveiled in May 2021, is a community-developed, fully disaggregated Wi-Fi system. It includes Access Point (AP) hardware, an open-source AP network operating system (NOS), and an SDK for building cloud-native Wi-Fi controller and management software for Wi-Fi service providers and enterprises. This makes it possible for access points, cloud controllers, and over-the-top analytics solutions from different vendors to interoperate and work together on the same Wi-Fi network, with confidence.

"OpenWiFi has the potential to unlock extensive opportunities for both Minim and our customers. We have seen first-hand how fragmented and proprietary router firmware can slow down new product innovation. OpenWiFi is a promising Wi-Fi platform for us to standardize a Minim solution across our own hardware portfolio while contributing value alongside a vibrant ecosystem of solution providers. We look forward to continued research and development with the global TIP community."

Alec Rooney, CTO, Minim

Minim is currently working with OpenWiFi developers on a joint roadmap, targeting full integration and platform data exchange in the first half of 2022. Upon successful integration and testing, Minim plans to leverage its OpenWiFi-based MinimOS in its own product portfolio, as well as offer the managed firmware to third parties.

“TIP OpenWiFi promises to lower WiFi infrastructure cost and accelerate innovation,” stated Adlane Fellah, Senior Analyst at Maravedis. “At a time when WiFi is playing a key role in our lives, OpenWiFi is set to disrupt the WiFi vendor-lock model by bringing a common set of features that are open for everyone to leverage. It also enables a potential marketplace for WiFi vendors like Minim to be inherently interoperable and bring unique and extensive value to home WiFi service delivery.”

About the Telecom Infra Project (TIP)
The Telecom Infra Project (TIP) is a global community of companies and organizations that are driving infrastructure solutions to advance global connectivity. Half of the world’s population is still not connected to the internet, and for those who are, connectivity is often insufficient. This limits access to the multitude of consumer and commercial benefits provided by the internet, thereby impacting GDP growth globally. However, a lack of flexibility in the current solutions - exacerbated by a limited choice in technology providers - makes it challenging for operators to efficiently build and upgrade networks. Founded in 2016, TIP is a community of diverse participants that includes hundreds of companies - from service providers and technology partners, to systems integrators and other connectivity stakeholders. We are working together to develop, test and deploy open, disaggregated, and standards-based solutions that deliver the high quality connectivity that the world needs - now and in the decades to come.

About Minim
Minim, Inc. is the creator of intelligent networking products that dependably connect people to the information they need and the people they love. Headquartered in Manchester, NH, the company delivers smart software-driven communications products under the globally recognized Motorola brand and Minim® trademark. Minim end users benefit from a personalized and secure WiFi experience, leading to happy and safe homes where things just work.


Related News

Data Storage

AMI to Drive Intel DCM's Future and Broaden Manageability Solutions for Sustainable Data Centers

Cision Canada | October 17, 2023

AMI, the leader in foundational technology for sustainable, scalable, and secure global computing, is set to drive the future of Intel Data Center Manager (DCM) as it takes over the development, sales, and support of DCM under an agreement with Intel. This strategic transition empowers AMI to further lead the innovation and expansion of the Intel DCM product. With a unique position in the industry, AMI plays a pivotal role in enabling the cloud and data center ecosystem for all compute platforms. Intel DCM empowers data centers with the capability to manage and fine-tune server performance, energy consumption, and cooling efficiency. This operational optimization reduces the total cost of ownership, improves sustainability, and elevates performance benchmarks.

"We thank Intel for trusting AMI to lead Intel DCM into the future. This solution for efficient data center management will play a crucial role in enhancing the operational eco-efficiency of data centers. It empowers data center managers with real-time insights into energy usage, thermal status, device health, and asset management," says Sanjoy Maity, CEO at AMI. "AMI remains steadfast in aiding data center operators in achieving their manageability and sustainability objectives."

About AMI
AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. For more information, visit ami.com.

Read More

Hyper-Converged Infrastructure

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

GlobeNewswire | October 19, 2023

Alluxio, the data platform company for all data-driven workloads, today introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of artificial intelligence (AI) and machine learning (ML) workloads on an enterprise's data infrastructure. Alluxio Enterprise AI brings performance, data accessibility, scalability, and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models, and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises' ability to extract value from their AI initiatives. According to Gartner®, "the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production." "By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%."

"Alluxio empowers the world's leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward," said Haoyuan Li, Founder and CEO, Alluxio. "Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management."

With this announcement, Alluxio expands from a one-product portfolio to two product offerings, Alluxio Enterprise AI and Alluxio Enterprise Data, catering to the diverse needs of analytics and AI. Alluxio Enterprise AI is a new product that builds on the years of distributed systems experience accumulated from the previous Alluxio Enterprise Editions, combined with a new architecture that is optimized for AI/ML workloads. Alluxio Enterprise Data is the next-generation version of Alluxio Enterprise Edition and will continue to be the ideal choice for businesses focused primarily on analytic workloads.

Accelerating the End-to-End Machine Learning Pipeline
Alluxio Enterprise AI enables enterprise AI infrastructure to be performant, seamless, scalable, and cost-effective on existing data lakes. It helps data and AI leaders and practitioners achieve four key objectives in their AI initiatives: high-performance model training and deployment that yields quick business results; seamless data access for workloads across regions and clouds; infinite scale that has been battle-tested at internet-giant scale; and maximized return on investment by working with the existing tech stack instead of costly specialized storage. With Alluxio Enterprise AI, enterprises can expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, over 90% GPU utilization, and up to 90% lower costs for AI infrastructure.

Alluxio Enterprise AI has a distributed system architecture with decentralized metadata to eliminate bottlenecks when accessing the massive numbers of small files typical of AI workloads. This provides unlimited scalability beyond legacy architectures, regardless of file size or quantity.
The distributed cache is tailored to AI workload I/O patterns, unlike traditional analytics caches. Finally, it supports analytics and full machine learning pipelines, from ingestion to ETL, pre-processing, training, and serving. Alluxio Enterprise AI includes the following key features:

Epic Performance for Model Training and Model Serving - Alluxio Enterprise AI offers significant performance improvements to model training and serving on an enterprise's existing data lakes. The enhanced set of APIs for model training can deliver up to 20x performance over commodity storage. For model serving, Alluxio provides extreme concurrency and up to 10x acceleration for serving models from offline training clusters for online inference.

Intelligent Distributed Caching Tailored to I/O Patterns of AI Workloads - Alluxio Enterprise AI's distributed caching feature enables AI engines to read and write data through the high-performance Alluxio cache instead of slow data lake storage. Alluxio's intelligent caching strategies are tailored to the I/O patterns of AI engines: large-file sequential access, large-file random access, and massive small-file access. This optimization delivers high throughput and low latency for data-hungry GPUs. Training clusters are continuously fed data from the high-performance distributed cache, achieving over 90% GPU utilization.

Seamless Data Access for AI Workloads Across On-Prem and Cloud Environments - Alluxio Enterprise AI provides a single pane of glass for enterprises to easily manage AI workloads across diverse infrastructure environments. Providing a source of truth of data for the machine learning pipeline, the product fundamentally removes the bottleneck of data lake silos in large enterprises. Sharing data between different business units and geographical locations becomes seamless with a standard data access layer via the Alluxio Enterprise AI platform.
New Distributed System Architecture, Battle-Tested at Scale - Alluxio Enterprise AI builds on a new, innovative decentralized architecture, DORA (Decentralized Object Repository Architecture). This architecture sets the foundation to provide infinite scale for AI workloads. It allows an AI platform to handle up to 100 billion objects with commodity storage like Amazon S3. Leveraging Alluxio's proven expertise in distributed systems, this new architecture addresses the ever-increasing challenges of system scalability, metadata management, high availability, and performance.

"Performance, cost optimization and GPU utilization are critical for optimizing next-generation workloads as organizations seek to scale AI throughout their businesses," said Mike Leone, Analyst, Enterprise Strategy Group. "Alluxio has a compelling offering that can truly help data and AI teams achieve higher performance, seamless data access, and ease of management for model training and model serving."

"We've collaborated closely with Alluxio and consider their platform essential to our data infrastructure," said Rob Collins, Analytics Cloud Engineering Director, Aunalytics. "Aunalytics is enthusiastic about Alluxio's new distributed system for Enterprise AI, recognizing its immense potential in the ever-evolving AI industry."

"Our in-house-trained large language model powers our Q&A application and recommendation engines, greatly enhancing user experience and engagement," said Mengyu Hu, Software Engineer in the data platform team, Zhihu. "In our AI infrastructure, Alluxio is at the core and center. Using Alluxio as the data access layer, we've significantly enhanced model training performance by 3x and deployment by 10x, with GPU utilization doubled. We are excited about Alluxio's Enterprise AI and its new DORA architecture supporting access to massive small files. This offering gives us confidence in supporting AI applications facing the upcoming artificial intelligence wave."

Deploying Alluxio in Machine Learning Pipelines
According to Gartner, data accessibility and data volume/complexity is one of the top three barriers to the implementation of AI techniques within an organization. Alluxio Enterprise AI can be added to existing AI infrastructure consisting of AI compute engines and data lake storage. Sitting between compute and storage, Alluxio can work across model training and model serving in the machine learning pipeline to achieve optimal speed and cost. For example, using PyTorch as the engine for training and serving, and Amazon S3 as the existing data lake:

Model Training: When a user is training models, the PyTorch data loader loads datasets from a virtual local path /mnt/alluxio_fuse/training_datasets. Instead of loading directly from S3, the data loader loads from the Alluxio cache. During training, the cached datasets are reused across multiple epochs, so overall training speed is no longer bottlenecked by retrieval from S3. In this way, Alluxio speeds up training by shortening data loading and eliminating GPU idle time, increasing GPU utilization. After the models are trained, PyTorch writes the model files to S3 through Alluxio.

Model Serving: The latest trained models need to be deployed to the inference cluster. Multiple TorchServe instances read the model files concurrently from S3. Alluxio caches these latest model files from S3 and serves them to inference clusters with low latency. As a result, downstream AI applications can start inferencing with the most up-to-date models as soon as they are available.

Platform Integration with Existing Systems
To integrate Alluxio with an existing platform, users can deploy an Alluxio cluster between compute engines and storage systems.
On the compute engine side, Alluxio integrates seamlessly with popular machine learning frameworks such as PyTorch, Apache Spark, TensorFlow, and Ray. Enterprises can integrate Alluxio with these compute frameworks via the REST API, POSIX API, or S3 API. On the storage side, Alluxio connects with all types of filesystems and object storage in any location, whether on-premises, in the cloud, or both. Supported storage systems include Amazon S3, Google GCS, Azure Blob Storage, MinIO, Ceph, HDFS, and more. Alluxio runs both on-premises and in the cloud, in bare-metal or containerized environments. Supported cloud platforms include AWS, GCP, and Azure.
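The training flow described above can be sketched in a few lines: because the POSIX (FUSE) integration exposes cached data lake files as an ordinary local directory, a training job needs no S3 client code at all. This is an illustrative sketch, not Alluxio's documented API; the mount path is the example path from the article, and the helper names and ".bin" shard layout are assumptions for the example.

```python
# Sketch: reading training data through an Alluxio FUSE mount with plain
# POSIX file I/O. After the first access, reads are served from the
# Alluxio cache rather than a fresh S3 round trip (per the article's
# description); nothing in this code is S3-specific.
import os

# Example virtual local path from the article, backed by the data lake.
ALLUXIO_FUSE_ROOT = "/mnt/alluxio_fuse/training_datasets"

def list_dataset_files(root: str, suffix: str = ".bin") -> list[str]:
    """Enumerate dataset shards under the mount with a plain directory listing."""
    return sorted(
        os.path.join(root, name)
        for name in os.listdir(root)
        if name.endswith(suffix)
    )

def read_shard(path: str) -> bytes:
    """An ordinary open()/read(); the FUSE layer handles caching transparently."""
    with open(path, "rb") as f:
        return f.read()

if __name__ == "__main__":
    # A framework data loader (e.g. a PyTorch Dataset) would wrap these
    # same calls; repeated epochs then hit the warm cache.
    for shard_path in list_dataset_files(ALLUXIO_FUSE_ROOT):
        data = read_shard(shard_path)
```

The design point the sketch illustrates is that the POSIX API keeps the training code storage-agnostic: swapping the data lake behind the mount requires no application changes.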

Read More

Windows Systems and Network

Zayo Bolsters Global Network Infrastructure, Increases Capacity to Meet Rapid Bandwidth Demand

Business Wire | October 10, 2023

Zayo Group Holdings, Inc., a leading global communications infrastructure platform, today announced its latest series of infrastructure investments to extend global capacity to support rapidly increasing bandwidth demand, including deployment of 400G across the globe, long-haul capacity growth, and enhancements to its global IP infrastructure.

"Next-generation technology is being deployed at never-before-seen rates. This has placed the communications infrastructure industry at a unique inflection point as all digital businesses (enterprises, carriers and hyperscalers alike) scramble to ensure they have enough capacity to support these technologies," said Bill Long, Chief Product Officer at Zayo. "As this trend plays out, it will be a strong tailwind for those providers who can capitalize on the moment. As one of the newest and most modern networks on the market, Zayo is uniquely positioned to support this growing demand for global bandwidth."

Deploying 400G Globally
For today's digital businesses, 400G is essential to ensure the speed and scalability to support increasingly complex and data-intensive applications. Zayo recently completed upgrades of its European network to be fully 400G-enabled, with plans for its Tier-1 backbone in North America to be fully 400G-enabled by the end of 2024. In Q3, Zayo added nine new 400G-enabled routes to its North American network to provide high-bandwidth options between key cities, including:
Atlanta to DC
Denver to Dallas (Direct)
Montreal to Quebec City
Clinton to Ponchatoula
Indianapolis to Columbus
Ashburn to Baltimore
Bend to Umatilla
Laurel to Denver

Additional 400G routes in progress include:
Houston to New Orleans
St. Louis to Chicago
Buffalo to Albany
Winnipeg to Toronto
Toronto to Montreal
Buffalo to Toronto
Columbus to Ashburn
Cleveland to Buffalo
Houston to Ponchatoula
Umatilla to Quincy

What this means: The enablement of Zayo's global network with 400G will allow customers to continue scaling their bandwidth with Zayo on existing routes, opening up high-capacity access on new routes, improving network stability, and providing an overall better customer experience through quicker delivery and optimal routing. The enhanced capacity from these routes will support customers with exponential growth needs driven by emerging technologies such as 5G, cloud adoption, IoT, AI, edge computing, and automation.

Expanding the Global Low-Latency Network
Zayo has also been working to expand capacity in other key economic centers across the globe. In October 2022, Zayo announced its global low-latency route connecting the U.S. to South America's financial hub of São Paulo. In Q3 2023, the company completed expansions to its connectivity infrastructure in São Paulo, including a new key terrestrial route that will provide connectivity throughout the metro ring and to four key data centers.

New São Paulo Points of Presence (PoPs):
Alameda Araguaia, 3641, Alphaville, Barueri, SP, 06455-000, Brazil
Av. Marcos Penteado de Ulhoa Rodrigues, 249, Santana de Parnaíba, SP, 06543-001, Brazil
Avenida Ceci, 1900, Tamboré, Barueri, SP, 06460-120, Brazil
Rua Ricardo Prudente de Aquino, 85, Santana de Parnaíba, Brazil

What this means: As Latin America's center of innovation and commerce, São Paulo has seen increased demand for connectivity from the U.S. To meet the growing needs of customers, Zayo is establishing diverse, high-bandwidth connectivity from its first-class North American fiber network directly into the largest economic center in the Southern Hemisphere.

IP Infrastructure Growth
IP demand continues to be a driver for capacity increases.
Zayo continues to bolster its IP infrastructure with new PoPs in key markets and data centers across the globe. Zayo added eight new IP PoPs to its North American network in Q3, including:
45 Parliament St, Toronto, ON
250 Williams St NW, Atlanta, GA
6477 W Wells Park Rd, Salt Lake City, UT
2335 S Ellis St, Chandler, AZ
375 Pearl St, New York, NY
626 Wilshire Blvd, Los Angeles, CA
431 Horner Ave, Etobicoke, ON
1100 White St SW, Atlanta, GA

Zayo's IP backbone, which runs on Zayo's wholly owned fiber infrastructure, makes up nearly 10% of the world's Internet capacity. Zayo currently manages 96 Tb of core capacity and 34 Tb of peering capacity, and adds 1-2 Tb of peering capacity every quarter.

Upgrading Long-Haul Capacity
As one of the few providers actively investing in its long-haul infrastructure, Zayo is continuing to overbuild its routes in high-demand areas to enable enhanced fiber capacity. In Q3 2023, Zayo completed the overbuild of its Omaha to Denver route, providing increased capacity on this highly sought-after route. Zayo also has three new long-haul route builds and two additional route overbuilds in progress, with scheduled completion by the end of 2023.

What this means: The enhancements to Zayo's long-haul dark fiber routes provide customers with diverse routing options and the ability to customize and enhance their networks to meet the unique needs of their businesses while maximizing resiliency and the ability to scale. Zayo will continue to invest in future-proofing its network and services to connect what's next for its customers.

About Zayo
For more than 15 years, Zayo has empowered some of the world's largest and most innovative companies to connect what's next for their business. Zayo's future-ready network spans over 17 million fiber miles and 142,000 route miles. Zayo's tailored connectivity and edge solutions enable carriers, cloud providers, data centers, schools, and enterprises to deliver exceptional experiences, from edge to core to cloud.
Discover how Zayo connects what’s next at www.zayo.com and follow us on LinkedIn and Twitter.

Read More