The need for interpretable artificial intelligence systems grows along with the prevalence of AI applications in everyday life. These systems can now predict and decide on many cases more accurately and quickly than a human.
Despite these successes, such systems have their own limitations and drawbacks. The most significant is their lack of transparency and interpretability, which leaves users with little understanding of how these models reach particular decisions. Explainable AI systems are intended to explain the reasoning behind their decisions and predictions, and researchers from different disciplines are working together to define, design, and evaluate them.
Beyond explaining individual predictions, providing a global perspective on a model's behavior is important for establishing trust in it; most current research, however, has been devoted to explaining individual predictions. In this talk, I survey the existing literature and contributions in the field of XAI and outline what remains to be achieved. I also offer newcomers to the field an overview that can serve as reference material, both to stimulate future research advances and to encourage professionals from other disciplines to embrace the benefits of AI in their own sectors, without prior bias against it for its perceived lack of interpretability.