Explainable AI refers to methods and techniques that enable humans (including non-experts in machine learning) to understand the decisions of AI models. As intelligent systems are increasingly deployed in application fields such as autonomous driving, it is essential, for practical and socio-legal reasons, to provide explanations to stakeholders such as users, developers, and regulators when automated systems make decisions or recommendations.
At the current level of technology, the decision-making of driverless cars cannot be universally understood, and this deficiency prevents the technology from being socially accepted. Public acceptance of autonomous vehicles depends heavily on transparency, credibility, and regulatory compliance. Interpretability is therefore an important requirement: autonomous vehicles should be able to explain what they have perceived, what they have done, and what they are going to do.
In future studies, we will focus on the following research questions: 1) How can we generate user-specific, adaptive explanations without overwhelming users with too many or overly detailed explanations? 2) Humans do not explain their understanding through probabilities or attention maps; instead, they explain through high-level semantic concepts. Is it possible to produce such human-level explanations? 3) Multiple explanation modalities exist, such as natural language, perception visualization, and audio, to justify an autonomous vehicle's underlying reasoning process. Which explanation mode, or combination of modes, yields better prediction accuracy? Which provides a better user experience?
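To make the contrast in question 2 concrete, the sketch below computes the kind of low-level attribution map that current systems produce, via occlusion sensitivity: a patch of the input is masked out and the resulting drop in the model's score is taken as that region's importance. The `toy_model` here is a hypothetical stand-in for a trained perception network, not any system from the text; a real pipeline would apply the same idea to camera frames and a detection or control score.

```python
import numpy as np

def toy_model(image):
    # Hypothetical stand-in for a trained perception network: the
    # "obstacle score" depends only on the 4x4 patch at the top-left.
    return float(image[:4, :4].sum())

def occlusion_saliency(model, image, patch=4):
    """Slide a zero-valued patch over the image; the score drop at each
    position is that region's importance (a crude attention-style map)."""
    base = model(image)
    h, w = image.shape
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            saliency[i // patch, j // patch] = base - model(occluded)
    return saliency

rng = np.random.default_rng(0)
img = rng.random((8, 8))
sal = occlusion_saliency(toy_model, img)
# Only the top-left patch drives the toy score, so only that cell of
# the saliency map is non-zero.
```

Such a map highlights *where* the model looked, but not *why* in human terms, which is precisely the gap between pixel-level attention maps and the high-level semantic explanations (e.g. "braking because a pedestrian is crossing") that question 2 asks about.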