Learn how a novel approach to explainability based on adversarial machine learning can be used to explain the predictions of deep neural networks, running on a single NVIDIA Tesla V100-SXM2 GPU to produce better results faster than LIME and SHAP. This talk covers our approach, which identifies the relative importance of input features to a prediction from the behavior of an adversarial attack on the DNN, and uses this information to produce explanations. We include examples that demonstrate how explainability complements AI security, and compare the speed and performance of this new approach against other leading explainability techniques.
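To give a rough sense of the idea, here is a minimal, hypothetical sketch (not the speakers' actual method) of how an adversarial attack can yield feature importances: for a gradient-based attack such as FGSM, features with larger input gradients need smaller perturbations to change the model's output, so the gradient magnitude can serve as an importance score. The tiny network, its weights, and the scoring rule below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical tiny "DNN": one hidden layer with fixed random weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # 4 input features -> 8 hidden units
W2 = rng.normal(size=(8, 1))   # hidden units -> single logit

def forward(x):
    """Forward pass: tanh hidden layer, linear output logit."""
    h = np.tanh(x @ W1)
    return (h @ W2).item()

def input_gradient(x):
    """Backprop by hand: d(logit)/dx via the chain rule through tanh."""
    h = np.tanh(x @ W1)
    dh = (1 - h ** 2) * W2.ravel()  # gradient at the hidden layer
    return dh @ W1.T                # gradient w.r.t. each input feature

x = np.array([0.5, -1.0, 2.0, 0.1])
grad = input_gradient(x)

# FGSM-style attack step: nudge every feature along the gradient sign.
eps = 0.1
x_adv = x + eps * np.sign(grad)

# Illustrative importance score: features with larger gradients move the
# logit more per unit of perturbation, i.e. the attack "relies" on them more.
importance = np.abs(grad)
ranking = np.argsort(-importance)
print("feature importance ranking:", ranking)
```

Unlike LIME and SHAP, which probe the model with many perturbed samples, a gradient-based score like this needs only one backward pass, which is one intuition for why an attack-driven approach can be faster.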
This blog has been republished by AIIA. To view the original video, please click HERE.