Model Serving Event
April 7, 2022 · 8 AM PT / 11 AM ET / 5 PM CET
Learn how to serve models, sunset old ones, run complex shadow and blue/green deployments, monitor different aspects of a model, and more.
See who’s talking below with our two-minute intro videos.
- Step 1. Check out the short videos below.
- Step 2. Vote for your favorite at the bottom of the page.
- Step 3. Done!
Valohai walks you through the key differences between batch and online inference, why you would pick one over the other, and the trade-offs of each approach.
Seldon talks you through deploying various kinds of ML models to production, starting with a simple text generation model and moving to more complex use cases.
Modzy gives you a breakdown of Chassis, an open-source project that turns models into containers, making them easy to serve in production.
Iguazio skips the basics of serving, since most folks are probably already familiar with them, and moves on to more complex online serving and how GPUs can accelerate those kinds of workloads.