Q&A with Sangram Vajre, Co-founder & Chief Evangelist at Terminus

MEDIA 7 | January 9, 2020

Sangram Vajre, Co-Founder & Chief Evangelist at Terminus, is also an author and the host of the podcast FlipMyFunnel. He is one of the leading minds in B2B marketing.

MEDIA 7: What are you passionate about?
SANGRAM VAJRE:
Three things: Lead professionally. Grow personally. Love family.

M7: Terminus has been recognized as one of Georgia’s 40 fastest-growing companies by ACG Atlanta. What factors contribute to this pace?
SV: 
One of our core values is #OneTeam, which means we think and act as one team, and we know that if we treat our team right, they will treat our customers amazingly well. There are no great companies, only great people who make those companies great.


"Marketers who say they are going to transform their organization without support from Sales and buy-in from management fail to see success."

M7: Terminus is the leader of the account-based movement. What, in your view, are the most common mistakes marketers make with ABM?
SV:
Marketers who say they are going to transform their organization without support from Sales and buy-in from management fail to see success. The marketers who make a few salespeople wildly successful and join campaigns to help them win deals are the ones who win their hearts and minds.

M7: Could you tell us a little bit about your podcast FlipMyFunnel? How did that idea come about?
SV:
I was already talking to customers and amazing people in the industry, and every time I would walk away thinking, "I wish I had recorded that conversation." So I simply started doing that, and it turned into a podcast series that now has over 500 episodes and continues to rank among the top 50 business podcasts.


"Terminus can alert a sales rep as soon as a target account is on the website, which helps them prioritize which target accounts to spend more time with."

M7: What are some of the best indicators that a prospect is really engaged with your brand?
SV:
Visits to your website, and their frequency. Technologies like Terminus can alert a sales rep as soon as a target account is on the website, which helps them prioritize which target accounts to spend more time with. This could become one of the most important indicators of early success for companies in 2020.
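The mechanism described above, matching website visits against a target-account list and flagging accounts whose visit frequency crosses a threshold, can be sketched in a few lines. This is a hypothetical illustration, not Terminus's actual implementation or API; the domain-to-account mapping and the visit log are assumed inputs (real platforms typically derive the account from IP-to-company enrichment).

```python
# Hypothetical sketch: surface target accounts that are actively visiting the
# website, so a rep can be alerted. The TARGET_ACCOUNTS mapping and visit log
# are assumed data, not a real product's schema.

from collections import Counter

TARGET_ACCOUNTS = {"acme.com": "Acme Corp", "globex.com": "Globex"}

def engaged_accounts(visit_log, min_visits=3):
    """Return target accounts whose visit count meets the alert threshold."""
    counts = Counter(
        TARGET_ACCOUNTS[domain]
        for domain, _page in visit_log
        if domain in TARGET_ACCOUNTS  # ignore traffic from non-target domains
    )
    return {account: n for account, n in counts.items() if n >= min_visits}

visits = [("acme.com", "/pricing"), ("acme.com", "/docs"),
          ("other.io", "/blog"), ("acme.com", "/pricing")]
print(engaged_accounts(visits))  # Acme Corp crossed the threshold
```

Frequency alone is a crude proxy; a production system would also weight which pages were visited (pricing vs. blog) and how recently.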


M7: What marketing channels do you use and which ones do you see as the most promising given your target customers?
SV:
It’s always the combination that works, since everyone is different. The goal is to surround prospects with your message on their channels, so that when they are ready, they think of you.


"Mature, forward-thinking CMOs are starting to help their sales teams win more, and faster, by focusing on pipeline velocity and expansion deals."

M7: What aspects of ABM do you think might change in the future?
SV:
I believe ABM is B2B. Most companies are still focused on the top of the funnel. Mature, forward-thinking CMOs are starting to help their sales teams win more, and faster, by focusing on pipeline velocity and expansion deals.


M7: What is your favorite quote?
SV:
"Selling is essentially a transfer of feelings." – Zig Ziglar

ABOUT TERMINUS

Terminus is the leader of the account-based movement and the crucial link that connects B2B marketing and sales teams with their ideal customers. The Terminus solution arms marketing teams with an account-centric platform that delivers the intelligence and automation needed to scale ABM and revolutionize the way B2B marketing is done. Hundreds of organizations worldwide, including Salesforce, GE, Verizon, 3M and CA Technologies, turn to Terminus to more effectively target, engage and grow their best-fit accounts. Terminus offers savvy marketers the technology and proven expertise to radically improve ABM strategies and campaigns, increasing ROI and producing exceptional results. For more information, visit Terminus.


Related News

APPLICATION INFRASTRUCTURE

Web3 Infrastructure Startup Zeeve Raises $2.65M in Seed Funding Led by Leo Capital

Zeeve | June 30, 2022

Leo Capital and Blu Ventures have contributed $2.65 million to the seed round of Zeeve, an enterprise-grade no-code platform for automating blockchain infrastructure. The funds will be used to strengthen product development, expand the technical team, and broaden the company's appeal to DApp developers and multinational organizations.

Zeeve's no-code platform makes it simple to deploy blockchain nodes and decentralized apps on enterprise-grade infrastructure. Stakeholders can manage their nodes and networks with powerful analytics and real-time notifications, and nodes can be deployed in a matter of minutes. Zeeve's solution supports most of the major permissioned blockchain protocols, such as Hyperledger Fabric, R3 Corda, Fluree, and Hyperledger Sawtooth, as well as public blockchain protocols such as Bitcoin, Ethereum, Polygon, Binance Smart Chain, Tron, Avalanche, and Fantom.

Zeeve was established in 2021 by Dr. Ravi Chamria, a serial entrepreneur and tech evangelist, together with co-founders Ghan Vashistha and Sankalp Sharma. It has since become a leader in simple-to-deploy web3 infrastructure, trusted by more than 10,000 developers, blockchain startups, and businesses.

"The Internet has come a long way - from the simple web pages of web1.0 to the decentralized web3.0. Lots of exciting innovations have happened in the web3.0 space like DeFi, NFTs, Decentralized Insurance, Prediction Markets, etc. We should expect to see a lot more innovation over the next five years, revolutionizing how we use the internet. With further advancements in blockchain technology, we may soon see web3 utilized for everything from online commerce to voting and governance," said Dr. Ravi Chamria, CEO, Zeeve.

Web3 has reportedly been hailed by Harvard Business Review as the internet of the future, with the power to broaden access to the internet for everyone. New enterprises may use web3 infrastructure to build communities around their brands and product concepts more quickly than in previous web iterations, and even existing platforms may pursue such prospects by linking to blockchain-powered content networks and granting users some degree of data governance. All of this suggests that the web of the future will look very different, and far more open, than it does today.

"In this new era of the internet, companies like Zeeve play a pivotal part in making it easy for enterprises and Blockchain startups to deploy blockchain nodes and consume APIs to connect with Blockchains. Zeeve's offering helps DevOps teams ease their operational, security, and performance challenges while deploying and managing Blockchain nodes and networks," says Tarun Upaday, Partner, Blu Ventures.


HYPER-CONVERGED INFRASTRUCTURE

Inspur Announces MLPerf v2.0 Results for AI Servers

Inspur | July 04, 2022

The open engineering consortium MLCommons has released the latest MLPerf Training v2.0 results, with Inspur AI servers leading in closed-division single-node performance. MLPerf is the world’s most influential benchmark for AI performance. It is managed by MLCommons, whose members include more than 50 leading global AI companies and top academic institutions, including Inspur Information, Google, Facebook, NVIDIA, Intel, Harvard University, Stanford University, and the University of California, Berkeley. MLPerf AI Training benchmarks are held twice a year to track improvements in computing performance and provide authoritative data guidance for users.

The MLPerf Training v2.0 round attracted 21 global manufacturers and research institutions, including Inspur Information, Google, NVIDIA, Baidu, Intel-Habana, and Graphcore. There were 264 submissions, a 50% increase over the previous round. The eight AI benchmarks cover today’s mainstream AI scenarios: image classification with ResNet, medical image segmentation with 3D U-Net, lightweight object detection with RetinaNet, heavyweight object detection with Mask R-CNN, speech recognition with RNN-T, natural language processing with BERT, recommendation with DLRM, and reinforcement learning with MiniGo.

Among the closed-division benchmarks for single-node systems, Inspur Information’s high-end AI servers were the top performers in natural language processing with BERT, recommendation with DLRM, and speech recognition with RNN-T, winning the most titles among single-node system submitters. For mainstream high-end AI servers equipped with eight NVIDIA A100 Tensor Core GPUs, Inspur AI servers were top ranked in five tasks (BERT, DLRM, RNN-T, ResNet, and Mask R-CNN).

Continuing to lead in AI computing performance

Inspur AI servers continue to achieve AI performance breakthroughs through comprehensive software and hardware optimization. Compared to the MLPerf v0.5 results in 2018, Inspur AI servers showed performance improvements of up to 789% for typical 8-GPU server models. This leading performance is the result of outstanding design innovation and full-stack optimization capabilities for AI. To address the bottleneck of intensive I/O transmission in AI training, the PCIe retimer-free design of Inspur AI servers allows high-speed interconnection between CPUs and GPUs with reduced communication delays. For high-load, multi-GPU collaborative task scheduling, data transmission between NUMA nodes and GPUs is optimized so that data I/O in training tasks runs at peak performance. For heat dissipation, Inspur Information was the first to deploy eight 500W high-end NVIDIA A100 Tensor Core GPUs in a 4U space, with support for both air cooling and liquid cooling. Meanwhile, Inspur AI servers continue to optimize pre-training data processing performance and adopt combined optimization strategies, such as hyperparameter and NCCL parameter tuning along with the many enhancements provided by the NVIDIA AI software stack, to maximize AI model training performance.

Greatly improving Transformer training performance

Pre-trained massive models based on the Transformer neural network architecture have led to a new generation of AI algorithms. The BERT model in the MLPerf benchmarks is based on the Transformer architecture, whose concise, stackable design makes it possible to train massive models with huge parameter counts. This has driven a major improvement in large-model algorithms, but it imposes higher requirements on processing performance, communication interconnection, I/O performance, parallel extension, topology, and heat dissipation for AI systems.

In the BERT benchmark, Inspur AI servers further improved training performance through methods including optimized data preprocessing, improved dense-parameter communication between NVIDIA GPUs, and automatic hyperparameter optimization. Inspur AI servers can complete training of the roughly 330-million-parameter BERT model in just 15.869 minutes on 2,850,176 samples from the Wikipedia data set, a performance improvement of 309% over the top result of 49.01 minutes in Training v0.7. With this, Inspur AI servers have won the MLPerf Training BERT benchmark for the third consecutive time.

Inspur Information’s two top-scoring AI servers in MLPerf Training v2.0 are the NF5488A5 and the NF5688M6. The NF5488A5 is one of the first servers in the world to support eight NVIDIA A100 Tensor Core GPUs with NVIDIA NVLink technology and two AMD Milan CPUs in a 4U space; it supports both liquid cooling and air cooling and has won a total of 40 MLPerf titles. The NF5688M6 is a scalable AI server designed for large-scale data center optimization; it supports eight NVIDIA A100 Tensor Core GPUs, two Intel Ice Lake CPUs, and up to 13 PCIe Gen4 IO, and has won a total of 25 MLPerf titles.

About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions, and the world’s second-largest server manufacturer. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology sectors such as open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, its world-class solutions empower customers to tackle specific workloads and real-world challenges.
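The percentage figures quoted above appear to express improvement as a ratio of old to new training time (a throughput ratio), rather than as a relative reduction. Under that assumption, the BERT numbers check out:

```python
# Sanity-check the quoted BERT training speedup, assuming "improvement of 309%"
# means (old_time / new_time) * 100, i.e. how many times faster the new run is.

old_minutes = 49.01    # top BERT result in MLPerf Training v0.7
new_minutes = 15.869   # Inspur's BERT result in MLPerf Training v2.0

speedup = old_minutes / new_minutes      # ratio of training times, ~3.09x
improvement_pct = round(speedup * 100)   # 309, matching the quoted figure
print(f"{speedup:.2f}x faster ({improvement_pct}%)")
```

Read the 789% figure the same way: roughly a 7.9x speedup over the 2018 v0.5 results, not a 789-point relative reduction.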


HYPER-CONVERGED INFRASTRUCTURE

Metallic Data Management as a Service on Oracle Cloud Infrastructure Will Accelerate Enterprise Hybrid Cloud Adoption

Commvault | July 01, 2022

Metallic DMaaS on Oracle Cloud is now part of Commvault's strategic relationship with Oracle. Commvault, a leader in intelligent data services across on-premises, cloud, and SaaS environments, will make Metallic's market-leading services available on Oracle Cloud Infrastructure (OCI), accessible in all commercial OCI regions worldwide as part of Commvault's multi-cloud strategy.

For business customers looking to accelerate their move to OCI, Metallic and OCI will offer improved price-performance, built-in enhanced security, and streamlined recovery and management. In addition, Oracle users can now safeguard crucial data assets in the cloud or on-premises by utilizing OCI Storage for superior air-gapped ransomware protection, while preserving flexibility across customer-controlled storage or a SaaS-delivered data protection service that includes managed cloud storage.

In the fight against ransomware and cyberattacks, Metallic DMaaS helps protect data against corruption, unauthorized access, and other threats across critical business sectors such as insurance, financial services, manufacturing, and defense. Customers can quickly back up their digital footprint in any consumption model, across cloud-native and on-premises workloads, including databases, virtual machines, Kubernetes, and file and object storage.

"The combination of Metallic DMaaS and OCI is a big win for customers looking for data mobility, agility, and security as they link on-premises Oracle solutions to OCI and evolve their data management capabilities," said Vinny Choinski, senior analyst, Enterprise Strategy Group.

With the addition of support for safeguarding OCI workloads and writing to OCI Storage, Metallic's data protection now covers OCI VMs, Oracle Databases, and Oracle Container Engine. It is also accessible to over 400,000 Oracle enterprise customers, as well as the more than 100,000 clients who have previously relied on Commvault technology and now wish to use Oracle Cloud Infrastructure to protect their mission-critical data. As a member of the Oracle PartnerNetwork, Commvault will promote and sell Metallic DMaaS alongside Oracle, a partnership that will accelerate Metallic's global expansion. Metallic DMaaS is available in the Oracle Cloud Marketplace.

"We're excited to partner with Commvault and enable our customers to restore and recover their most mission-critical cloud data. Data protection and compliance requirements are necessities in today's business environment, which is why we're confident that OCI's built-in, always-on security features combined with Metallic DMaaS will provide additional peace of mind for our joint customers," said Clay Magouyrk, executive vice president, Oracle Cloud Infrastructure.

