X-Europe Webinar #10 Tools for Explainable Artificial Intelligence
An unverified black-box model is a path to failure. Opaqueness leads to distrust. Distrust leads to neglect. Neglect leads to rejection.
############### Speaker
Hubert Baniecki is a Data Science student at the Faculty of Mathematics and Information Science, Warsaw University of Technology. He works as a Research Software Engineer at MI2 DataLab (a research group led by Przemyslaw Biecek), developing tools for Explainable AI and contributing to the open-source community (R & Python packages). His research covers ML interpretability, adversarial attacks, and interactive model exploration.
LinkedIn https://www.linkedin.com/in/hbaniecki/
############### Description
DrWhy.AI [1] is a collection of tools for Explainable AI (XAI). It is built on shared principles and a simple grammar for the exploration, explanation, and examination of predictive models [2]. The core idea is to use model-agnostic post hoc explanations to visualize the complex behaviour of black-box models.
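To give a flavour of what "model-agnostic post hoc" means, here is a minimal sketch of permutation feature importance, one classic explanation of this kind: it treats the model purely as a predict function and measures how much the loss grows when one feature is shuffled. The `black_box` model and all names here are hypothetical illustrations, not the DrWhy.AI API.

```python
import random

# Hypothetical "black box": for a model-agnostic explanation we only
# need its predict function, not its internals.
def black_box(rows):
    # this toy model depends on feature 0 only
    return [2.0 * r[0] for r in rows]

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(predict, X, y, feature, seed=0):
    """Post hoc importance: loss increase after shuffling one feature column."""
    base = mse(y, predict(X))
    rng = random.Random(seed)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return mse(y, predict(X_perm)) - base

X = [[float(i), float(i % 3)] for i in range(20)]
y = black_box(X)
imp0 = permutation_importance(black_box, X, y, 0)  # large: feature 0 drives predictions
imp1 = permutation_importance(black_box, X, y, 1)  # zero: feature 1 is unused
```

Because the procedure only calls `predict`, the same code explains a linear model, a random forest, or a neural network, which is exactly why model-agnostic explanations compose into a shared grammar across tools.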
In the talk, I will provide background on why XAI has become an integral part of the model development process. I will then briefly showcase several tools implementing interfaces that enhance the model explanation process in Python and R, focusing mainly on DALEX [3] (a tool with >650 stars on GitHub). Listeners will get familiar with the basics of model explanation and bias detection through a practical use case.
[1] https://github.com/ModelOriented/DrWhy