Seemingly overnight, machine learning (ML) has moved out of the world of aspirational technology and into the mainstream. Organizations of every size and across nearly every industry want in on the action, and thanks to the cloud, ML is realistically within reach for all of them. The cloud brings together data, low-cost storage, security, and ML services, along with high-performance CPU- and GPU-based instances for model training and deployment. Organizations can now elastically store as much data and provision as much high-performance compute as they need, so they can realize the value of ML much faster.
But how do you choose the right infrastructure for your ML workloads? For the answer to that question and more, we turned to Dr. Bratin Saha, vice president of machine learning services at Amazon AI. Read on to discover guidance and best practices for evaluating the infrastructure requirements of your ML workloads, and for ensuring you make the right choices to meet those needs.