At my keynote for the Red Hat OpenShift Commons AI Conference, I talked about building an AI Red Team whose job is to fix AI when it goes wrong. With algorithms making more and more decisions in our lives, from who gets hired and fired to who goes to jail, it's more critical than ever to get our intelligent systems talking to us so people can step in when things go wrong. In the coming decade, organizations will face incredible pressure from regulators and the general public, and that means every team needs a plan in place to find and fix mistakes fast, or risk a PR nightmare and financial disaster.
- Why We Started the AIIA and What It Means for the Rapid Evolution of the Canonical Stack of Machine Learning
- The New Landscape of Machine Learning for the New Year
- How to Monitor Machine Learning Models in Production like a Pro
- Strategies that Deliver a Big Boost to Your Machine Learning Computational Efficiency
- Seldon and Pachyderm – Two Foundational Pieces of the Machine Learning Loop Come Together to Take Your Model from Training to Production