AIthena research results on Explainable AI at ML4AD 2023 in New Orleans

Till Beemelmanns from ika recently participated in the Machine Learning for Autonomous Driving Symposium (ML4AD), co-located with NeurIPS in New Orleans (https://ml4ad.github.io/). This was the 8th edition of the event, bringing together researchers and industry leaders from across the globe.

ML4AD 2023 focused on the machine learning challenges and advancements shaping the future of autonomous driving. Discussions spanned a wide range of topics, from perception and explainable AI to end-to-end AV architectures and the role of Large Language Models in automated driving.

Beemelmanns’ contribution to the symposium came in the form of a poster presentation titled “Explainable Multi-Camera 3D Object Detection with Transformer-Based Saliency Maps.” This research explores methods for making complex Vision Transformers (ViTs) more interpretable, particularly for safety-critical applications like self-driving cars.

The ability to understand a model’s decision-making process is vital for building trust in artificial intelligence systems. Beemelmanns’ work proposes a novel method for generating saliency maps for ViTs used in 3D object detection, with the goal of improving transparency regarding the inner workings of these AI models.
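To illustrate the general idea of transformer-based saliency, the sketch below shows a generic attention-rollout computation (Abnar & Zuidema, 2020), which aggregates per-layer attention maps into a single relevance map. This is only an illustrative example of the technique family and not the specific method proposed in the paper; the function name and toy data are assumptions for demonstration purposes.

```python
import numpy as np

def attention_rollout(attentions):
    """Aggregate per-layer attention maps into a single saliency map
    via attention rollout. `attentions` is a list of
    (num_tokens, num_tokens) arrays, one per transformer layer,
    already averaged over attention heads."""
    num_tokens = attentions[0].shape[0]
    rollout = np.eye(num_tokens)
    for attn in attentions:
        # Account for the residual connection and re-normalize each row.
        attn = attn + np.eye(num_tokens)
        attn = attn / attn.sum(axis=-1, keepdims=True)
        # Propagate relevance through successive layers.
        rollout = attn @ rollout
    return rollout

# Toy example: 3 layers, 5 tokens with random attention weights.
rng = np.random.default_rng(0)
layers = [rng.dirichlet(np.ones(5), size=5) for _ in range(3)]
saliency = attention_rollout(layers)
print(saliency.shape)  # (5, 5): relevance of each input token per query token
```

In a multi-camera 3D detection setting, such a relevance map can be projected back onto the camera images to highlight which image regions contributed to a given detection.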

The ML4AD symposium provided a valuable platform to share AIthena's research results with leading researchers in the field, including representatives from companies such as Waymo, NVIDIA, Bosch, and Toyota.

Overall, Beemelmanns’ participation reflects the ongoing efforts to develop safe and reliable autonomous vehicles through the power of machine learning.

The publication “Explainable Multi-Camera 3D Object Detection with Transformer-Based Saliency Maps” (authors: Till Beemelmanns, Wassim Zahr, Lutz Eckstein – RWTH Aachen University) can be found in the Library section.