Machine learning is being deployed in a growing number of applications that demand real-time, accurate, and robust predictions under heavy serving loads. However, most machine learning frameworks and systems address only model training, not deployment. Clipper is an open source, general-purpose model-serving system designed to fill this gap. Interposing between the applications that consume predictions and the machine learning models that produce them, Clipper simplifies the deployment process by adopting a modular serving architecture and isolating each model in its own container, so that models are evaluated in the same runtime environment used during training. Clipper's modular architecture provides simple mechanisms for scaling out models to meet increased throughput demands and for performing fine-grained physical resource allocation per model. Further, by abstracting models behind a uniform serving interface, Clipper lets developers compose many machine learning models within a single application to support increasingly common techniques such as ensemble methods, multi-armed bandit algorithms, and prediction cascades.

Dan Crankshaw offers an overview of the Clipper serving system and explains how to use it to serve Apache Spark and TensorFlow models on Kubernetes. Dan concludes by discussing recent work on statistical performance monitoring for machine learning models.
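To make the idea of a uniform serving interface concrete, here is a minimal sketch in Python. The class and function names below are hypothetical illustrations, not Clipper's actual API: the point is only that once every model, whatever framework produced it, exposes the same predict method, composing models into an ensemble becomes trivial.

```python
from abc import ABC, abstractmethod
from statistics import mean
from typing import List


class Model(ABC):
    """Hypothetical uniform serving interface: every model, regardless of
    the framework it was trained in, exposes the same predict() method."""

    @abstractmethod
    def predict(self, inputs: List[float]) -> float:
        ...


class LinearModel(Model):
    """Stand-in for a model exported from one framework (e.g. Spark MLlib)."""

    def __init__(self, weight: float, bias: float):
        self.weight = weight
        self.bias = bias

    def predict(self, inputs: List[float]) -> float:
        return self.weight * sum(inputs) + self.bias


class ThresholdModel(Model):
    """Stand-in for a model from a different framework (e.g. TensorFlow)."""

    def predict(self, inputs: List[float]) -> float:
        return 1.0 if mean(inputs) > 0.5 else 0.0


def ensemble_predict(models: List[Model], inputs: List[float]) -> float:
    """Because all models share one interface, an ensemble is just a
    function over their outputs (here, the average)."""
    return mean(m.predict(inputs) for m in models)


models = [LinearModel(weight=0.5, bias=0.1), ThresholdModel()]
print(ensemble_predict(models, [0.2, 0.9, 0.7]))
```

In a real deployment, each concrete model would run in its own container and the serving layer would dispatch requests over RPC, but the composition logic stays this simple precisely because the interface is uniform.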