IT SYSTEMS MANAGEMENT

Cortex Gives Global Enterprises Autodiscovery for Cloud Infrastructure

Cortex | July 27, 2022 | Read time: 03:00 min

Cortex today announced new innovations designed to give engineering teams the same visibility into and control over cloud infrastructure that the platform has provided for microservices since its inception. The company’s industry-leading System of Record for Engineering, which has given engineers and SREs comprehensive microservices visibility and control, now provides a Resource Catalog that extends to the entire cloud environment, including S3 buckets, databases, caches, load balancers, and data pipelines.

“We’ve now extended the platform to say, ‘Here's all the infrastructure we have, here's who owns it, here's what they do, and here's how they tie to the services,’” said Ganesh Datta, co-founder and CTO of Cortex. “We found that many customer infrastructure teams were already using the platform for tracking infrastructure migrations for microservices with Cortex scorecards, and that they wanted to expand that to include all of their assets. The platform now provides a central repository for all of that information.”

Cortex Resource Catalog

The new Cortex Resource Catalog enables customers to define their own resources in addition to those predefined by the platform. For example, customers who want to represent a certain path within an S3 bucket as a first-class resource owned by a specific team, or who want to represent Kafka topics as resources with relationships to their consumer and producer microservices, can now do so in Cortex.
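
As a rough illustration of how such a custom resource might be registered programmatically, the Python sketch below submits a Kafka-topic descriptor to the catalog. The endpoint path, descriptor fields, and resource type names are illustrative assumptions, not Cortex's documented API.

import requests

CORTEX_API = "https://api.getcortexapp.com/api/v1/catalog"  # assumed endpoint
API_TOKEN = "..."  # a real Cortex API token would go here

# Hypothetical descriptor for a Kafka topic as a first-class catalog resource
descriptor = {
    "tag": "orders-events",                 # unique identifier for the resource
    "title": "orders.events Kafka topic",
    "type": "kafka-topic",                  # user-defined resource type
    "owners": [{"type": "team", "tag": "payments-team"}],
    # Relationships linking the topic to its producer/consumer services
    "dependencies": ["orders-service", "billing-service"],
}

response = requests.post(
    CORTEX_API,
    json=descriptor,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Registered resource:", descriptor["tag"])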

“Giving developers observability of their infrastructure gives them much-needed contextual information that improves and speeds development of the applications and services they create,” said Paul Nashawaty, Senior Analyst at Enterprise Strategy Group. “The ability to share this information across teams helps them stay aligned in their workflows and outcomes, and greatly benefits their organizations and their customers.”

Cortex’s fast-growing customer base, which includes Adobe, Brex, Grammarly, Palo Alto Networks, and SoFi, has found great flexibility in the platform, enabling customers to develop a multitude of creative new use cases. The ability to systematically add items that are not microservices to a catalog, track them by owner, and apply scorecards to their performance has been the company’s most-requested capability over the last 12 months.

“These new capabilities provide significantly deeper visibility into what cloud resources are being used, by whom, and to what effect, than any single platform has had before,” said Anish Dhar, co-founder and CEO of Cortex. “These new levels of visibility and control give companies using Cortex greater ability to optimize a broader set of resources to enhance cross-functional collaboration and improve their own performance, which is especially important to engineering teams as they work to align resources with their business goals.”

About Cortex
Cortex is designed to give engineers and SREs comprehensive visibility and control over microservices and cloud infrastructure. It does this by providing a single pane of glass for visualizing service and infrastructure ownership, documentation, and performance history, replacing institutional knowledge and spreadsheets. This gives engineering and SRE teams the visibility and control they need, even as teams shift, people move, platforms change, and microservices and infrastructure continue to grow. Cortex is a Y Combinator company backed by Sequoia Capital and Tiger Global.

Spotlight

The next wave of technology innovation is already here, with new applications transforming the way we live, work, and travel. The huge adoption of these new services drives exponential growth in the total demand for data. We must provide more data capacity and higher computing speeds if we hope to keep up. The sheer scale and scope of the gap we face demand that we rethink the way we have traditionally organized the design and deployment of networks and data centers. As many hands make light work, deploying many smaller distributed data centers seems the most viable solution. These facilities are often called “Edge Data Centers” (EDCs).


Other News
DATA STORAGE

DartPoints® to Provide the University of South Carolina with Custom Software-Defined Data Center Solution

DartPoints | July 14, 2022

DartPoints®, the leading edge digital infrastructure provider, announces today that it has formed an innovative technology partnership with the University of South Carolina. DartPoints will provide a custom Software-Defined Data Center (SDDC) solution, which replaces the university's current data center. DartPoints' custom SDDC cloud solution will significantly improve the university's IT agility. It adheres to UofSC's compliance requirements while providing the multi-tenancy of a public cloud infrastructure, enabling UofSC to reduce capital expenditures while improving functionality, reliability, and security.

"We needed a reputable provider that was readily available to ensure our team always has access to the critical data that keeps our campuses running across the state," said Dan Schumacher, executive director of infrastructure services at UofSC. "DartPoints is the ideal partner for our university, and its solution is easy to use, highly configurable, and provides the comprehensive services we require."

Schumacher said moving information into a cloud-based data center will improve the university's disaster recovery capabilities and protect critical applications in the event of a catastrophe. In addition, hosting compute and file-share services in the cloud improves efficiency and resilience: response times are significantly reduced because there is no need to wait on shipping or face the equipment shortages that have occurred since COVID-19.

The University of South Carolina is leading the way for cloud-based data centers, as few universities have fully adopted the model. Doug Foster, vice president for information technology and chief information officer for UofSC, said, "We are committed to the continuous improvement of our services to best meet the needs of our Gamecock community. This is one example of how we offer cutting-edge IT services that evolve with the ever-changing landscape. I am proud to be a part of this adventure with this great university and a talented group of employees."

An SDDC architecture helps organizations accelerate delivery of technology services while retaining control over IT, minimizing complexity, and reducing costs. It is an ideal solution for government agencies, hospitals, higher education institutions, and any organization that needs to respond quickly to demands for IT resources.

"The university had a number of factors that needed to be addressed, including latency, data location, cost, and technical expertise," said Brad Alexander, DartPoints' CTO. "We were able to work with UofSC's team to develop a customized solution that addresses all of their needs, and we believe that similar solutions can help other large institutions."

DartPoints has been providing multi-tenant cloud, network connectivity, and managed services in South Carolina for over a decade from its four active data centers in the state, located in Columbia, Greenville, North Charleston, and Spartanburg. DartPoints offers unmatched support and technical expertise backed by tenured and continuously upskilled technicians.

About The University of South Carolina
The University of South Carolina is the flagship institution in the state. The public university has seven satellite campuses in addition to the main campus, which is located in Columbia, the state capital. Founded in 1801, the university is a Research 1 institution and offers more than 300 programs of study, from bachelor's to doctorate. The university has an approximate enrollment of 35,000 students and awards more than 9,000 degrees each year. The Division of Information Technology works to help fulfill the academic mission of the University of South Carolina by providing technology services that maximize productivity, increase collaboration, and improve service. The division strives to provide repeatable, reliable, and consistent IT services to constituents across the eight-campus system and employs more than 170 highly skilled individuals.

About DartPoints
DartPoints is the leading digital infrastructure provider enabling next-generation applications at the edge. By weaving together cloud, interconnection, colocation, and managed services, DartPoints enables edge ecosystems for enterprises, carriers, and cloud and content providers. DartPoints is building tomorrow's distributed digital infrastructure while serving today's cloud and colocation needs, and helping to bridge the digital divide.


APPLICATION INFRASTRUCTURE, STORAGE MANAGEMENT, IT SYSTEMS MANAGEMENT

StorPool Storage Adds NVMe/TCP, StorPool on AWS, and NFS File Storage

StorPool Storage | August 24, 2022

StorPool Storage announced today the official release of the 20th major version of StorPool Storage, the primary storage platform for large-scale cloud infrastructure running diverse, mission-critical workloads. This is a major milestone in the evolution of StorPool, adding several new capabilities that future-proof the leading storage software and increase its potential applications.

StorPool Storage is designed for workloads that demand extreme reliability and low latency. It enables deploying high-performance, linearly scalable primary storage systems on commodity hardware to serve the data storage and data management needs of large-scale clouds. With StorPool, businesses streamline their IT operations by connecting a single storage system to all their cloud platforms while benefiting from an utterly hands-off approach to storage infrastructure: the StorPool team architects, deploys, tunes, monitors, and maintains each storage system, so end users experience fast and reliable services while customers' tech teams dedicate their time to the projects that grow their business.

StorPool Storage v20 offers important new capabilities.

NVMe/TCP Support
StorPool Storage now supports NVMe/TCP (NVMe over Fabrics, TCP transport), the next-generation block storage protocol that leverages TCP/IP, the most common set of communication protocols, extensively used in data centers with standard Ethernet networking and controllers. With NVMe/TCP, customers get high-performance, low-latency access to standalone NVMe SSD-based StorPool storage systems using the standard NVMe/TCP initiators available in VMware vSphere, Linux-based hypervisors, container nodes, and bare-metal hosts. The StorPool NVMe/TCP implementation is software-only and does not require specialized hardware to deliver the high throughput and fast response times required by modern workloads. NVMe/TCP targets are highly available: in the event of a node failure, StorPool fails over the targets on the failed storage node to a running node in the cluster.

StorPool on AWS
StorPool Storage can now be deployed on sets of three or more i3en.metal instances in AWS, delivering extremely low latency and high IOPS to single-instance workloads such as large transactional databases, monolithic SaaS applications, and heavily loaded e-commerce websites. The solution delivers blazing-fast 1.3M+ balanced random read/write IOPS to EC2 r5n and other compatible compute instances (m5n, c6i, r6i, etc.). StorPool frees customers from per-instance storage limitations and can deliver this level of performance to any compatible instance type with sufficient network bandwidth, while using less than a fifth of client CPU resources for storage operations, leaving more than 80% for the user applications and databases. StorPool customers on AWS get the same white-glove service provided to on-premises customers, giving them peace of mind that their application's foundation is running optimally in the cloud. StorPool's expert team designs, deploys, tunes, monitors, and maintains each StorPool storage system on AWS. The complete solution enables anyone to easily and economically deploy heavy enterprise applications to AWS, which was previously not achievable at the cost/performance ratio offered by StorPool.
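
For a concrete sense of the client side of the NVMe/TCP support described above, the sketch below attaches a target from a Linux host using the standard nvme-cli initiator. The target address and NQN are placeholders, not values from an actual StorPool deployment.

import subprocess

TARGET_ADDR = "10.0.0.5"                          # storage node IP (placeholder)
TARGET_PORT = "4420"                              # default NVMe/TCP port
TARGET_NQN = "nqn.2022-08.example:storpool-vol1"  # placeholder subsystem NQN

# Equivalent to: nvme connect -t tcp -a 10.0.0.5 -s 4420 -n <nqn>
# (requires root and the nvme-tcp kernel module)
subprocess.run(
    ["nvme", "connect",
     "--transport", "tcp",
     "--traddr", TARGET_ADDR,
     "--trsvcid", TARGET_PORT,
     "--nqn", TARGET_NQN],
    check=True,
)

# The attached namespace then appears as an ordinary block device (e.g. /dev/nvme1n1)
subprocess.run(["nvme", "list"], check=True)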

NFS File Storage on StorPool
Last but definitely not least, StorPool is introducing support for running highly available NFS servers inside StorPool storage clusters for specific use cases. NFS services delivered with StorPool are suitable for throughput-intensive file workloads shared among internal and external end users (video rendering, video editing, heavily loaded web applications). They can also address moderate-load use cases (configuration files, scripts, images, email hosting) and support cloud platform operations (secondary storage for Apache CloudStack, NFS storage for OpenStack Glance). NFS file storage on StorPool is not suitable for IOPS-intensive file workloads like virtual disks for virtual machines.

Deploying NFS servers on StorPool storage nodes leverages the ability of StorPool Storage to run hyper-converged with other workloads and the low resource consumption of StorPool software components. Running in virtual machines backed by StorPool volumes and managed by the StorPool operations team, NFS servers can host multiple file shares. The cumulative provisioned storage of all shares exposed from each NFS server can be up to 50 TB. StorPool ensures the high availability of each NFS service with the proven resilience of the StorPool block storage layer: NFS service data is distributed across all nodes in the cluster, and StorPool maintains data and service integrity in case of hardware failures.

"With each iteration of StorPool Storage, we build more ways for users to maximize the value and productivity of their data," said Boyan Ivanov, CEO of StorPool Storage. "These upgrades offer substantial advantages to customers dealing with large data volumes and high-performance applications, especially in complex hybrid and multi-cloud environments."

"High-performance access to data is essential wherever applications live," said Scott Sinclair, ESG Practice Director. "Adding NVMe/TCP support, StorPool on AWS, and NFS file storage to an already robust storage platform enables StorPool to better help their customers achieve a high level of productivity with their primary workloads."

StorPool storage systems are ideal for storing and managing the data of demanding primary workloads such as databases, web servers, virtual desktops, real-time analytics solutions, and other mission-critical software. In addition to the new capabilities, in 2022 alone StorPool has added or improved many features for data protection, availability, and integration with popular cloud infrastructure tools.

About StorPool Storage
StorPool Storage is a primary storage platform designed for large-scale cloud infrastructure. It is the easiest way to convert sets of standard servers into primary storage systems. The StorPool team has experience working with various clients: Managed Service Providers, Hosting Service Providers, Cloud Service Providers, enterprises, and SaaS vendors. StorPool Storage comes as software plus a fully managed service that transforms standard hardware into fast, highly available, and scalable storage systems.
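
On the file side, a client would consume a StorPool-hosted NFS share like any other NFS export. The following minimal sketch uses placeholder server and export names rather than StorPool-provided values.

import subprocess

NFS_SERVER = "nfs.storpool.example"   # placeholder NFS server address
EXPORT = "/shares/render-assets"      # placeholder export path
MOUNTPOINT = "/mnt/render-assets"

subprocess.run(["mkdir", "-p", MOUNTPOINT], check=True)

# Equivalent to: mount -t nfs -o vers=4,hard <server>:<export> <mountpoint>
# (requires root; "hard" keeps retrying through transient server outages)
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4,hard",
     f"{NFS_SERVER}:{EXPORT}", MOUNTPOINT],
    check=True,
)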


IT SYSTEMS MANAGEMENT

HGC Unveils Edge Digital Infrastructure Suite EdgeX by HGC® for the Engines of the Metaverse

HGC Global Communications Limited | July 07, 2022

HGC Global Communications Limited (HGC), a fully-fledged ICT service provider and network operator with extensive global coverage, is announcing the launch of EdgeX by HGC®, a first-of-its-kind edge digital infrastructure platform ecosystem built to support OTTs' global expansion of the metaverse and the next-generation internet.

With the metaverse expected to generate US$1.54 trillion in revenues by the end of the decade, up from today's US$206.5 billion, according to PwC's Seeing is Believing report, companies across the globe are moving fast into this new segment. However, many lack the knowledge, experience, or scale to build and manage the digital backbones needed to succeed, which require high levels of performance, redundant systems, and seamless integration across the different segments of digital infrastructure. The newly introduced EdgeX by HGC® set of services solves this problem by bringing together five fields of digital infrastructure and solutions under one umbrella for easy deployment, management, and scale. These include:

Connectivity: EdgeX by HGC® allows users to drastically reduce latency, speed up market entry, and enjoy faster connectivity by gaining access to HGC's ready-to-go platform Eyeball-as-a-Service® (EaaS) and to HGC's long-standing partnership with AMS-IX (Amsterdam Internet Exchange). It also grants access to HGC's IP Transit (IPTx) family, which delivers direct Internet connectivity via HGC's own international IP backbone network.

Cybersecurity: As a comprehensive cybersecurity solution provider, HGC ensures connections and assets are protected through a diversified 360-degree cybersecurity portfolio included in EdgeX by HGC®. The suite contains flagship solutions like Anti-DDoS, data security design, phishing assessment, bot detection, penetration testing, and more.

Direct Cloud Connect: Building on strong foundations deployed with the world's leading cloud providers, subscribers of EdgeX by HGC® automatically gain access to enterprise-grade private network connectivity with the dominant public cloud providers' direct connect services and applications, which ease deployment and expansion and bypass public Internet congestion while delivering on high-uptime SLAs.

Data Center & Managed Services: For data center operators in particular, EdgeX by HGC® offers an expressway that enhances the connectivity ecosystem and greatly reduces latency, with essential interconnections easily deployed to HGC's global and regional hubs, which include several points of presence (PoPs) for OTTs. It also provides robust and easily manageable edge hosting and computing capabilities, including public cloud direct connections and various edge compute resources.

System Integration: With much of the world's data being created at the edge, EdgeX by HGC® also combines HGC's edge resources to enable rapid and widespread deployments through planning, design, and integration.

EdgeX by HGC® is scalable and customizable, and can be used by companies of any size in any vertical, from gaming to telecoms, OTTs, and any other enterprise that wishes to win in the digital space, through a one-stop-shop solution that combines the horsepower of digital infrastructure into one platform designed to address the booming data demand sparked by the metaverse.
Commenting on the launch, Cliff Tam, HGC's International Business Senior Vice President for Global Data Strategy & Operations, said: "EdgeX by HGC® not only offers faster and smoother service delivery, but it also expands OTT services with increased edge agility to capitalize on the enormous growth potential around this sector. By allowing OTTs to offload their cloud infrastructure and colocation services while increasing network speeds, we are enabling users to focus on pushing the boundaries of innovation, which is so crucial as the metaverse begins to take shape."

About HGC Global Communications Limited
HGC Global Communications Limited (HGC) is a leading Hong Kong and international telecom operator and ICT solution provider. The company owns an extensive network and infrastructure in Hong Kong and overseas and provides a wide range of services. HGC has 23 overseas offices, with business across five continents. It provides telecom infrastructure services to other operators and serves as a service provider to corporations and households.


HYPER-CONVERGED INFRASTRUCTURE, APPLICATION INFRASTRUCTURE

DJIB Launches First Ever Enterprise Grade Decentralised Data Storage Drive

DJIB | September 06, 2022

Today DJIB launched the first ever end-to-end encrypted, enterprise-grade decentralised data storage drive with embedded multi-chain non-fungible token functionality, enabling the widespread adoption of NFTs in business applications.

Cloud data storage is dominated by services such as Amazon AWS, Google Cloud, and Microsoft Azure. However, in the age of blockchains, users find traditional storage limiting because it is centralised in the hands of individual corporations; user data can potentially be accessed without their knowledge by employees of such providers. The currently missing ability to save objects as NFTs will be increasingly required in business applications.

This is why, while being AWS S3 compatible and blazingly fast, the DJIB data storage drive for the first time addresses all of these concerns by being end-to-end encrypted, censorship resistant, and equipped with built-in NFT functionality. It reimagines the concept of NFTs, treating them as a new type of file format whereby users can "Save as NFT" any file stored on the drive, demystifying the creation of NFTs. Files can be up to 5 TB in size, which removes existing technical constraints. Users can either attach custom business logic to their NFTs or use pre-defined templates from a library without knowing how to code. For example, a musician can publish a song with pre-defined licensing rights, or a pharmaceutical company can allow patients to share and profit from their medical data with very granular permissions and usage rights, all without any intermediaries or specialist software.

Any asset can now be tokenised, and any financial director can issue share certificates in NFT format. Such NFTs are immediately interoperable with all the blockchains with which DJIB has a connector, starting with Solana, Ethereum, and BSC, and soon covering all key networks. DJIB is already working on connectors with teams from major blockchains, starting with those that are enterprise focused and see this as an opportunity to foster the development of applications within their ecosystems.

Moe Sayadi, DJIB CEO and formerly a solutions architect at Microsoft and Avaloq, says: "Making our decentralised drive available to enterprise customers and removing the mystery behind the creation of NFTs opens an unimaginable trove of opportunities. It puts a powerful tool into the hands of non-technical domain experts. They can focus on the business logic attached to any document, and potentially any physical item, and move entire business processes to the cloud. This enables Object Oriented Business Process Management and many other exciting innovations which are in our pipeline and will be announced soon. We are discussing some very interesting use cases with corporate CTOs, and I can confidently say that the NFT evolution has finally passed the apes stage."
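
Since the drive is described as AWS S3 compatible, an existing S3 client should in principle be able to use it by overriding the service endpoint. The sketch below illustrates that idea with boto3; the endpoint URL and credentials are placeholders, not published DJIB values.

import boto3

# Point a standard S3 client at the (placeholder) DJIB-compatible endpoint
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.djib.example",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Ordinary S3 operations then target the decentralised drive
s3.create_bucket(Bucket="research-data")
s3.upload_file("results.csv", "research-data", "results.csv")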


