https://www.datacenterknowledge.com/machine-learning/what-s-best-computing-infrastructure-ai
blog article
More than ever, the most consequential question an IT organization must answer about every new data center workload is where to run it. The newest enterprise computing workloads are variants of machine learning, or AI, whether deep learning model training or inference (putting the trained model to use), and there are already so many options for AI infrastructure that finding the best one is hardly straightforward for an enterprise. There is a variety of AI hardware on the market, a wide and quickly growing range of AI cloud services, and various data center options for hosting AI hardware.

One company in the thick of this entire machine learning infrastructure ecosystem is Nvidia, which not only makes and sells most of the processors for the world's AI workloads (its GPUs) but also builds much of the software that runs on those chips, sells its own AI supercomputers, and, more recently, prescreens data center providers to help customers find ones able to host their AI machines.