Getting Your IT Infrastructure Ready for Edge Computing

Microsoft | May 18, 2020

  • Like many IT infrastructure innovations, edge computing began with engineers as a natural extension of technology to address a growing need.

  • The hype amplified edge’s promise – reduced latency, software-defined deployment, lower cloud networking costs – while glossing over its rough spots and the additional operations complexity it adds.

  • It completely changed the architecture of the data center, the frameworks for security, and end-users’ expectations around data access and manipulation.


Like many IT innovations, edge computing began with engineers as a natural extension of technology to address a growing need. The concept isn’t new; distributed computing has been around for decades. But as standards began to converge and edge hardware started making the rounds at trade shows, the hype machine saw an opportunity. It amplified edge’s considerable promise in reducing latency, offering software-defined deployment, decreasing cloud networking costs and more. But as is too often the case, the bold feature bullets ignored the production concerns businesses must address, including edge computing’s rough spots and the additional operations complexity it adds.


Of course, edge computing will survive a little overexcited promotion, just like many once-improbable technologies before it. People used to say, “What? Abstract all my data center applications away from the hardware as virtual servers? Impossible!” A decade later, we can’t imagine how we’d deliver traditional enterprise services, cloud computing, online retail, media streaming and everything in between without exactly this. Virtualization survived its awkward hype adolescence, and edge computing will, too. The needs edge computing addresses are only growing.



Learn more: HOW DISTRIBUTED CLOUD WILL AFFECT DATA CENTER INFRASTRUCTURES IN 2020 AND BEYOND.

“It completely changed the architecture of the data center, the frameworks for security, and end-users’ expectations around data access and manipulation.”

~ Microsoft.


Thanks to engineers and operations teams, the edge distributed model is moving toward practical use. It’s proving itself capable of meeting requirements for new levels of network performance through reduced latency, scalability and, more importantly, manageability. For some businesses, it’s even reducing costs over the long haul. With the proliferation of connected devices and a growing focus on 5G-enabled technology, tech pros should set aside their natural reluctance to wade through the edge hype and consider it a genuine possibility.

“Edge computing is much the same. High latency, poor application performance, low bandwidth – these are simply unacceptable to end-users today. With expectations set, IT will need to deliver this to users across the business.”


Its adoption is following the rise of emerging technologies and the applications that take best advantage of it: 5G, augmented reality, autonomous vehicles, IoT and smart manufacturing. These environments require not only low upstream latency, but high-performance compute and timely result data. Light only travels so fast, which pushes infrastructure closer and closer to consumers for faster, more seamless processing in the form of brand-differentiating user experiences. The rise of cloud computing and the efficiency of large, remotely located data centers demand a new compute model. To lower latency and raise capacity, edge computing will augment the data center, bringing compute and storage much closer to the user.
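
To put “light only travels so fast” in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The distances and the fiber propagation speed (roughly 200,000 km/s, about two-thirds the speed of light in a vacuum) are illustrative assumptions, not figures from this article:

```python
# Back-of-the-envelope propagation delay: distance vs. latency.
# Assumes signals travel through optical fiber at ~200,000 km/s;
# real-world latency is higher once routing, queuing and processing
# are added, so these figures are a physical floor, not a forecast.

FIBER_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over fiber."""
    return (2 * distance_km / FIBER_SPEED_KM_PER_S) * 1000

# Hypothetical distances: a remote cloud region vs. a metro edge node.
for label, km in [("Remote cloud region", 2_000), ("Metro edge node", 50)]:
    print(f"{label:20s} {km:6,} km -> {round_trip_ms(km):6.2f} ms round trip")

# Remote cloud region   2,000 km ->  20.00 ms round trip
# Metro edge node          50 km ->   0.50 ms round trip
```

Before routing, queuing or processing even enter the picture, physics alone gives the nearby edge node a roughly 40x head start.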


Edge computing is the epitome of agility. While traditional data centers are strategic, large multi-story facilities that support thousands of applications, edge data centers – which could be a ½ rack in a cabinet – can go anywhere and meet more specific, if smaller, demands. It is not an either-or, though. We used the word “augment” purposefully. Edge computing provides an add-on capability that will modernize the traditional data center as the digital transformation sweeping the globe makes new demands to deliver performance and experience to end-users. Broadly speaking, edge computing moves some computational needs away from the centralized data center to nodes at the edge of the network, improving application performance and decreasing bandwidth requirements. In fact, a recent report showed substantial potential improvements in latency and reductions in data transferred to the cloud.
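
As a sketch of how an edge node can decrease bandwidth requirements, the following Python example aggregates a window of raw sensor readings locally and forwards only a compact summary upstream. All names, window sizes and thresholds are hypothetical; a production system would add buffering, retries and time-based windows:

```python
# Illustrative edge aggregation: instead of streaming every raw sensor
# reading to the central data center, an edge node summarizes a window
# of readings locally and ships only the summary upstream.
# All names, window sizes and thresholds here are hypothetical.

from statistics import mean

WINDOW_SIZE = 1_000        # raw readings per summary
ALERT_THRESHOLD = 90.0     # forward individual readings above this

def summarize_window(readings: list[float]) -> dict:
    """Reduce a window of raw readings to one compact record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
        # Only anomalous readings travel upstream in full.
        "alerts": [r for r in readings if r > ALERT_THRESHOLD],
    }

# 1,000 raw readings collapse to a single summary record: the
# upstream link carries one message per window instead of 1,000.
raw = [20.0 + (i % 75) for i in range(WINDOW_SIZE)]
print(summarize_window(raw))
```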


Learn more: LOOKING TO BUILD THE INFRASTRUCTURE TO CONNECT THE WORLD’S GAMING PLATFORMS.

Related News

APPLICATION INFRASTRUCTURE, DATA STORAGE

Virtual-Q Selects Juniper Networks to Provide Scalable, Automated Data Center Infrastructure

Juniper Networks | October 11, 2022

Juniper Networks, a leader in secure, AI-driven networks, today announced that Virtual-Q, a provider of IT Services and IT Consulting, has selected Juniper Apstra data center solutions to modernize and automate its network infrastructure to provide a scalable and seamless customer experience.

Based in Houston, Texas, Virtual-Q specializes in IT-as-a-Service through its hosted desktop solution, streamlining the costs associated with remote IT solutions. The company delivers enterprise-class security, computing, support and disaster recovery solutions to businesses across all sectors and sizes. Virtual-Q had been operating with a network that lacked scalability and struggled to meet the increasing hybrid and virtual customer demands associated with the pandemic. Needing to accommodate large-scale growth while remaining easy to manage and operate, Virtual-Q turned to Juniper Networks to help design and build its new network, with support from Juniper’s partner GDT.

Apstra was deployed to simplify and automate data center operations management from design to deployment through everyday operations and assurance. Additionally, Apstra delivers a high level of visibility into the network fabric, allowing for faster resolution times and increased operational efficiencies. With an approach to data center operations based on the insight that a reliability-focused strategy results in speed and efficiency, Apstra enables Virtual-Q to transform its operations. By also deploying Juniper’s QFX switches, EX switches and MX Series universal routing platforms, Virtual-Q is well-positioned to expand its capacity with 400G bandwidth, develop a cloud-ready network infrastructure that can grow alongside its evolving data center needs and meet its 1,082 percent annual growth rate. The company also utilizes Juniper professional services.

Supporting Quotes:

“Juniper Apstra allows us to seamlessly manage and automate our data center infrastructure without compromising our ability to serve our customers. With Apstra’s intent-based design, operators can focus on what needs to be accomplished in the data center instead of how it should be done. As one of the most user-friendly products on the market, we are excited to see the transformation Apstra will bring to our network operations.” – Victor J. Quinones, Jr., Founder and CEO, Virtual-Q

“In addition to simplifying data center management, Apstra allows its customers to automate each aspect of the design, deployment and operation of their data center infrastructure. Apstra enables Virtual-Q to lay a strong foundation for reliable and flexible operations regardless of vendor.” – Mansour Karam, VP of Products, Juniper Networks

About Juniper Networks

Juniper Networks is dedicated to dramatically simplifying network operations and driving superior experiences for end users. Our solutions deliver industry-leading insight, automation, security and AI to drive real business results. We believe that powering connections will bring us closer together while empowering us all to solve the world’s greatest challenges of well-being, sustainability and equality.

Read More

HYPER-CONVERGED INFRASTRUCTURE, DATA STORAGE, IT SYSTEMS MANAGEMENT

Kyndryl and Elastic Announce Expanded Partnership to Enable Data Observability, Search and Insights Across Cloud and Edge Computing Environments

Kyndryl | September 23, 2022

Kyndryl, the world’s largest IT infrastructure services provider, and Elastic (NYSE: ESTC), the company behind Elasticsearch, today announced an expanded global partnership to provide customers full-stack observability, enabling them to accelerate their ability to search, analyze and act on machine data (IT data and business data) stored across hybrid cloud, multi-cloud and edge computing environments.

Under the partnership, Kyndryl and Elastic will collaborate on creating joint solutions and delivery capabilities designed to provide deep, frictionless observability at all levels of applications, services and infrastructure to address customer data, analytics and IT operations management challenges. The companies will focus on delivering large-scale IT operations and AIOps capabilities to joint customers by leveraging Kyndryl’s data framework and toolkits and Elastic’s Enterprise Search, Observability, and Security solutions, enabling streamlined migrations, modernized infrastructure and tenant management, and AI development for efficient and proactive IT management.

As part of the partnership, Kyndryl and Elastic plan to collaborate to support customer needs and requirements via joint offerings and solutions across the following areas:

  • IT Data Modernization – Helping organizations manage exponential storage growth and giving them the capability to search for data wherever it resides.

  • IT Data Management Services for Elastic – Providing flexibility to users of Elastic by letting Kyndryl manage the entire stack infrastructure and analytics workloads for IT operations.

  • Intelligent IT Analytics – Enabling actionable observability through AI/ML capabilities that deliver unified insights for proactive and efficient IT operations with technology domain-specific insights.

  • Data Migration Services for Elastic – Delivering the capability to streamline migrations and deploy self-managed Elastic workloads to the hyperscalers of a customer’s choice.

Kyndryl’s global team of data management experts will also participate in the global Elastic certification program to expand their expertise in advising, implementing and managing Elastic solutions across critical IT projects and environments.

“Customers in all industries are seeking to improve their capacity to search and analyze the data stored in the cloud and on edge computing environments. We are happy to partner with Elastic to create and bring forward a unified approach that will help customers overcome hurdles and improve their ability to access and gain insights at scale from their business data.” – Nicolas Sekkaki, Applications, Data & AI global practice leader for Kyndryl

“Enabling customers to gain actionable insights from their data is a key enabler of data-driven digital transformation,” said Scott Musson, Vice President, Worldwide Channel and Alliances at Elastic. “The combination of Kyndryl’s global expertise in managing mission-critical information systems and the proven scale and flexibility of the Elastic Search Platform provides the critical foundation to help organizations drive speed, scale and productivity, and address their observability needs across hybrid cloud, multi-cloud and edge computing environments.”

For more information about the Kyndryl and Elastic partnership, please visit: https://www.kyndryl.com/us/en/about-us/alliances

About Kyndryl

Kyndryl is the world’s largest IT infrastructure services provider serving thousands of enterprise customers in more than 60 countries. The Company designs, builds, manages and modernizes the complex, mission-critical information systems that the world depends on every day.

About Elastic

Elastic is a leading platform for search-powered solutions. We help organizations, their employees, and their customers accelerate the results that matter. With solutions in Enterprise Search, Observability, and Security, we enhance customer and employee search experiences, keep mission-critical applications running smoothly, and protect against cyber threats. Delivered wherever data lives, in one cloud, across multiple clouds, or on-premise, Elastic enables 18,000+ customers and more than half of the Fortune 500 to achieve new levels of success at scale and on a single platform.
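
As a rough illustration of the machine-data search capability the partnership centers on, here is a minimal sketch using the official Elasticsearch Python client. The endpoint, index name and document fields are assumptions for illustration, not details from the announcement:

```python
# Minimal sketch: index a machine-data event and search it back with
# the official Elasticsearch Python client (elasticsearch-py 8.x).
# The endpoint, index name and document fields are illustrative only.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Index one IT-operations event into a hypothetical "machine-data" index.
es.index(
    index="machine-data",
    document={
        "host": "edge-node-01",
        "service": "payments-api",
        "level": "error",
        "message": "upstream timeout after 30s",
        "@timestamp": "2022-09-23T10:15:00Z",
    },
)
es.indices.refresh(index="machine-data")  # make the event searchable

# Full-text search across the indexed events.
resp = es.search(index="machine-data", query={"match": {"message": "timeout"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["host"], hit["_source"]["message"])
```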

Read More

HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE, IT SYSTEMS MANAGEMENT

Fluree and ZettaLabs Announce Merger to Serve Enterprises Seeking Data-Centric Architecture and Legacy Data Infrastructure Modernization

Fluree | September 22, 2022

Fluree, a company headquartered in Winston-Salem, North Carolina, which has developed a distributed ledger graph database platform, and New Jersey-based ZettaLabs, a business that uses artificial intelligence and machine learning to prepare raw data for analytics use, today announced the merger of the two companies.

The combination of Fluree and ZettaLabs will enable Fluree to expand its offerings beyond its established expertise in “green-field,” new data-centric initiatives that encompass bleeding-edge innovation. With ZettaLabs now part of Fluree, the company possesses the prowess to tackle enterprise legacy data architectures and take the first steps toward modernization. All ZettaLabs employees will integrate into the Fluree ecosystem, bringing Fluree’s total headcount to 50.

“At Fluree, we are building the data infrastructure for the future,” said Brian Platz, Fluree co-founder and CEO. “While many of our customers enjoy the unique benefits of our semantic graph distributed ledger database technology, we recognize that organizations first need a way out of their entrenched silos in order to build their end-goal infrastructures. Dealing with legacy infrastructure is one of the biggest challenges for modern businesses, but nearly 74% of organizations are failing to complete legacy data migration projects today due to inefficient tooling and a lack of interoperability. By adding the ZettaLabs team and product suite to our own, Fluree is poised to help organizations on their data infrastructure transformation journeys by uniquely addressing all major aspects of migration and integration: security, governance and semantic interoperability.”

ZettaSense has been rebranded as Fluree Sense, a data pipeline that uses AI and machine learning, as well as ontologies, to normalize, cleanse and harmonize data from disparate sources in a way that eliminates any requirement for additional data governance, master data management or data quality software. Fluree Sense makes data in existing legacy databases, data warehouses and data lakes ready for downstream enterprise consumption and sharing, whether in analytic repositories like Snowflake or Databricks, or Fluree’s immutable knowledge graph database.

“We developed our flagship product, ZettaSense, to ingest, classify, resolve and cleanse big data coming from a variety of sources,” said Eliud Polanco, co-founder and CEO of ZettaLabs, who will become Fluree’s president. “The problem is that the underlying data technical architecture – with multiple operational data stores, warehouses and lakes, now spreading out across multiple clouds – is continuing to grow in complexity. Now with Fluree, our shared customer base and any new customers can evolve to a modern and elegant data-centric infrastructure that will allow them to more efficiently and effectively share cleansed data both inside and outside their organizational borders.”

The merger, the first in Fluree’s history, makes Fluree a go-to company for the roughly 90% of businesses hindered by legacy infrastructure and database systems that do not have the toolset or talent to undergo an effective transformation. It also augments Fluree’s customer base, which now includes large, enterprise financial-services customers.

Use cases for Fluree Sense include:

  • Legacy data migrations that cleanse and harmonize data from multiple sources to enable migration from a legacy enterprise business platform to a target digital platform;

  • Customer data integrations that integrate customer, account, product and transaction data from across multiple data sources into a single golden 360-degree customer record;

  • Consent management that enables active customer consent and control of how data is shared across products, regions and business functions within an organization; and,

  • Cross-border data residency that allows secure sharing of information across borders adhering to the various national data-privacy regulations using multi-party computation.

“We don’t have a lack of data today — we have a lack of high-quality data,” said Peter Serenita, retired Chief Data Officer and current chairman of the New York City-headquartered nonprofit organization Enterprise Data Management Council. “This is why it is essential for enterprises to take a data-centric approach to their modernization initiatives in order to truly transform their legacy infrastructure and eliminate their data silos for good. Joining forces with the ZettaLabs team and product will allow Fluree to continue its mission of turning big data into better data for sustainable business outcomes.”

While Fluree currently serves the existing enterprise data management market as an innovative database solution, it is mostly for new data projects that have identified a specific requirement for data trust, integrity, sharing or security. The merger with ZettaLabs enables Fluree to provide value to all enterprise data teams looking to get a handle on their legacy infrastructure and modernize their platforms to satisfy increasingly complex business goals. Fluree now has a full spectrum of data management capabilities for organizations – from the first step of integrating and migrating legacy system data infrastructure with ZettaLabs’ technology to building modernized operational and analytical data infrastructure atop Fluree’s database system.

“Fluree’s merger with ZettaLabs is directly in line with Fluree’s vision to deliver data-centric capabilities to modernize enterprise data abilities,” said Dan Malven, managing director of 4490 Ventures, a Madison-based venture capital firm and Fluree lead investor. “Enterprises seeking data-centric architectures now not only have a landing place with Fluree’s core ledger graph database technology, but also a starting point for their legacy infrastructure to onboard their data management into data centricity.”

About Fluree

Co-founded in 2016 by CEO Brian Platz and Executive Chairman Flip Filipowski, Fluree PBC is headquartered in Winston-Salem, North Carolina. Fluree is pioneering a data-first technology approach with its data management platform. It guarantees data integrity, facilitates secure data sharing and powers data-driven insights. The Fluree platform organizes blockchain-secured data in a scalable semantic graph database – establishing a foundational layer of trusted data for connected and secure data ecosystems. The company’s foundation is a set of W3C semantic web standards that facilitate trusted data interoperability. Fluree currently employs 50.
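
To make the “normalize, cleanse and harmonize” step concrete, here is a simplified, generic Python sketch that maps records from two differently shaped legacy sources onto one common schema. It illustrates the general technique only, not Fluree Sense’s actual implementation; all field names and mappings are invented:

```python
# Generic illustration of data harmonization: two legacy systems store
# the same customer in different shapes; a mapping layer normalizes
# both onto one common schema. Field names and mappings are invented
# for illustration; this is not Fluree Sense's actual implementation.

# Per-source field mappings onto a shared target schema.
MAPPINGS = {
    "crm": {"cust_name": "name", "cust_email": "email", "acct": "account_id"},
    "billing": {"fullName": "name", "emailAddr": "email", "accountNo": "account_id"},
}

def harmonize(source: str, record: dict) -> dict:
    """Rename a source record's fields to the common schema and cleanse values."""
    mapping = MAPPINGS[source]
    out = {target: record[field] for field, target in mapping.items() if field in record}
    # Simple cleansing: collapse whitespace and normalize e-mail casing.
    if "name" in out:
        out["name"] = " ".join(out["name"].split())
    if "email" in out:
        out["email"] = out["email"].strip().lower()
    return out

# The same customer from two systems converges on one golden-record shape.
print(harmonize("crm", {"cust_name": "Ada  Lovelace", "cust_email": "ADA@example.com", "acct": "A-100"}))
print(harmonize("billing", {"fullName": "Ada Lovelace", "emailAddr": "ada@example.com", "accountNo": "A-100"}))
```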

Read More