At my keynote for the Red Hat OpenShift Commons AI Conference, I talked about building an AI Red Team whose job is to fix AI when it goes wrong. With algorithms making more and more decisions in our lives, from who gets hired and fired to who goes to jail, it’s more critical than ever to get our intelligent systems talking to us so people can step in when things go wrong. In the coming decade, organizations will face incredible pressure from regulators and the general public, and that means every team needs a plan in place to find and fix mistakes fast, or risk a PR nightmare and financial disaster.