businesswire | July 27, 2023
Tachyum™ announced today the release of the white paper, “Tachyum Prodigy – Solution for Data Centers that are Hungry for Energy,” explaining how Prodigy®, the world’s first Universal Processor, is ideally suited to help overcome increasingly excessive energy use in data centers.
Data centers are among the most energy-demanding facilities in the world. A report by the International Energy Agency (IEA) indicates that 3% of global electricity use comes from data centers and data transmission networks. Data centers are expected to consume 20% of the world's energy supply by 2025 as the demand for digital services continues to grow.
Tachyum’s white paper details how Prodigy delivers disruptive innovation for data centers with unprecedented power efficiency by unifying the CPU, GPGPU and TPU into a single processor die, enabling groundbreaking performance while removing the need for power-hungry, costly accelerators. The white paper also compares Prodigy to other data center solutions, showing higher performance per watt than both CPUs and GPUs: up to 4.4x higher performance/W than the Intel Xeon 8490H for cloud computing, and up to 13.4x higher performance/W than the Nvidia H100 for generative AI.
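The "x-times higher performance/W" headlines above are ratios of power-normalized benchmark scores. A minimal sketch of that arithmetic, using placeholder throughput and power figures (not Tachyum's measured numbers; only the ratio calculation is illustrated):

```python
# Sketch of a performance-per-watt comparison. All figures below are
# hypothetical placeholders chosen so the ratio lands at 4.4x.

def perf_per_watt(throughput: float, power_watts: float) -> float:
    """Normalize a benchmark score by the power drawn while producing it."""
    return throughput / power_watts

# Hypothetical figures for two systems running the same workload:
system_a_ppw = perf_per_watt(throughput=8800.0, power_watts=950.0)
system_b_ppw = perf_per_watt(throughput=2000.0, power_watts=950.0)

# The headline multiplier is the ratio of the two normalized scores:
advantage = system_a_ppw / system_b_ppw
print(f"{advantage:.1f}x higher performance/W")
```

Note that such comparisons are only meaningful when both systems run the same workload under comparable conditions; a perf/W ratio folds together both throughput and power differences.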
“Keeping pace with the energy demands of today’s hyperscale data centers is simply not attainable,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “Additionally, the massive growth driven by the rapid adoption of AI can only be accommodated by a single hardware platform, like Prodigy, that can handle multiple workloads and avoids the time and energy required by separate, custom-built solutions. Readers of the white paper will see why this approach makes the most sense, a case that will only become more evident as power demand increases.”
Prodigy provides the high performance required for both cloud and HPC/AI workloads within a single architecture. As a Universal Processor offering utility for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power efficiency, and economics. Prodigy integrates 128 high-performance, custom-designed 64-bit compute cores to deliver up to 4x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and up to 6x for AI applications.
About Tachyum
Tachyum is transforming the economics of AI, HPC, public and private cloud workloads with Prodigy, the world’s first Universal Processor. Prodigy unifies the functionality of a CPU, a GPGPU, and a TPU in a single processor that delivers industry-leading performance, cost, and power efficiency for both specialty and general-purpose computing. When hyperscale data centers are provisioned with Prodigy, all AI, HPC, and general-purpose applications can run on the same infrastructure, saving companies billions of dollars in hardware, footprint, and operational expenses. As global data centers contribute to a changing climate and consume more than four percent of the world’s electricity (projected to be 10 percent by 2030), the ultra-low power Prodigy Universal Processor is a potential breakthrough for satisfying the world’s appetite for computing at a lower environmental cost. Prodigy, now in its final stages of testing and integration before volume manufacturing, is being adopted in prototype form by a rapidly growing customer base, and robust purchase orders signal a likely IPO in late 2024. Tachyum has offices in the United States and Slovakia.
businesswire | August 03, 2023
Ermetic, a leading cloud infrastructure security company, today announced CNAPPgoat, an open source project that allows organizations to safely test their cloud security skills, processes, tools and posture in interactive sandbox environments that are easy to deploy and destroy. CNAPPgoat supports AWS, Azure and GCP platforms for assessing the security capabilities included in Cloud Native Application Protection Platforms (CNAPP).
The CNAPPgoat project will be officially presented at DEF CON Demo Labs in Las Vegas on Friday, August 11 from 12:00pm-1:55pm by Noam Dahan, Research Lead, and Igal Gofman, Head of Research, at Ermetic. On Wednesday, August 16 at 10am PST/1pm EST, Ermetic will present a webinar on using CNAPPgoat; to register, visit this link.
Unlike projects that illustrate possible attack paths, CNAPPgoat provides a large and expanding library of scenarios that security teams can execute to create a customized cloud environment for simulating unsecured and vulnerable assets and validating their defenses. The ability to easily provision a vulnerable environment with a broad range of risk scenarios provides the following benefits:
Create a sandbox for testing an organization’s security posture by assessing security team capabilities, procedures and protocols
Use vulnerable environments for hands-on workshops to train team members on new skills and techniques
Provision a “shooting range” for pentesters to test their skills at exploiting the scenarios and developing relevant capabilities
Benchmark CNAPP tools against known environments to evaluate their capabilities
“Compared to existing open-source projects that create ‘capture the flag’ scenarios where participants are expected to follow a certain path, CNAPPgoat spans the leading cloud provider platforms and CNAPP capabilities while providing a modular and granular approach for provisioning specific categories of risks and vulnerabilities,” said Igal Gofman, Director of Research for Ermetic.
“This breadth and depth allows pentesters and defenders to precisely isolate the elements they want to explore for training, new skills acquisition, prevention and security posture assessments,” added Noam Dahan, Research Lead.
CNAPPgoat enables security teams, trainers and pentesters to provision and run vulnerable scenarios from the following modules that make up the CNAPP specification defined by Gartner:
Cloud Infrastructure Entitlement Management (CIEM) - covers risks associated with identities and entitlements, such as the unintended ability of an identity to escalate its privileges
Cloud Workload Protection Platform (CWPP) - includes the exposure of workloads to vulnerabilities, such as running vulnerable or end-of-life software or OS versions
Cloud Security Posture Management (CSPM) - spans the misconfiguration of cloud infrastructure components, such as publicly exposed storage resources
Infrastructure as Code (IaC) scanning - will be added soon for finding misconfigurations directly in the code
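As an illustration of the CIEM risk category above (this is a generic example, not a scenario taken from CNAPPgoat itself), consider an AWS identity granted the following IAM policy. It looks narrow, but `iam:CreatePolicyVersion` allows the identity to publish a new, fully permissive version of any customer-managed policy, including one attached to itself, and `iam:SetDefaultPolicyVersion` makes that version take effect, an "unintended ability to escalate privileges" in exactly the sense described:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LooksNarrowButAllowsEscalation",
      "Effect": "Allow",
      "Action": [
        "iam:CreatePolicyVersion",
        "iam:SetDefaultPolicyVersion"
      ],
      "Resource": "*"
    }
  ]
}
```

A CSPM-category analogue would be a storage bucket whose policy grants public read access. Tools like CNAPPgoat exist to provision this kind of deliberately weak configuration in a sandbox account so teams can verify that their scanners and processes actually flag it.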
CNAPPgoat is an open community initiative designed to be used by anyone for commercial, technical and educational purposes. See today’s blog for implementation details. Additional artifacts, including deeper technical dives and guides, will be released soon. Contributions are encouraged, including new scenarios, scenario proposals, issues, suggestions, feature requests or simply sharing feedback. To learn more and access CNAPPgoat, visit this link.
About Ermetic
Ermetic reveals and prioritizes security gaps in AWS, Azure and GCP and enables organizations to remediate them immediately. The Ermetic cloud native application protection platform (CNAPP) uses an identity-first approach to unify and automate cloud infrastructure entitlement management (CIEM), cloud security posture management (CSPM), cloud workload protection and Kubernetes security posture management (KSPM). It unifies full asset discovery, deep risk analysis, runtime threat detection and compliance reporting, combined with pinpoint visualization and step-by-step guidance. The company is one of America’s Best Startup Employers according to Forbes and is led by proven technology entrepreneurs whose previous companies have been acquired by Microsoft, Palo Alto Networks and others. Ermetic has received funding from Accel, Forgepoint, Glilot Capital Partners, Norwest Venture Partners, Qumra Capital and Target Global.
prnewswire | July 26, 2023
CoreWeave, a specialized cloud provider of large-scale GPU-accelerated workloads, today announced a new data center facility in Plano, Texas, to be fully operational by December 31, 2023. The $1.6 billion data center is CoreWeave's first facility in Texas and will support economic activity and job growth in the area.
"We are pleased to partner with Plano and the local community to open this cutting-edge data center and create new jobs," said CoreWeave CEO and Co-founder Michael Intrator. "The 450,000 square foot facility will help meet the unprecedented demand for high-performance cloud solutions for artificial intelligence, machine learning, pixel streaming and other emerging technologies that CoreWeave is uniquely positioned to deliver," said Intrator.
This news comes on the heels of continued growth for the company. Recently, CoreWeave announced the opening of a modern data center in New York City that provides ultra-low latency to over 20 million inhabitants across the metropolitan area. In April, CoreWeave announced a $221 million Series B round, followed by $200 million in Series B extension funding for a total of $421 million in capital raised for the round.
"With the demand for machine learning, AI and visual effects/rendering sharply rising, we are thrilled to partner with CoreWeave as the company invests in its first data center in Texas, capable of high-computing solutions for such specialized needs," said Mayor of Plano, John B. Muns.
About CoreWeave
Founded in 2017, CoreWeave is a specialized cloud provider, delivering a massive scale of GPU compute resources on top of the industry's fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute-intensive use cases — machine learning and AI, VFX and rendering, batch processing and pixel streaming — that are up to 35 times faster and 80% less expensive than the large, generalized public clouds.