Get to know AIthena consortium partners – ika

This week we introduce the next AIthena consortium partner: ika – the Institute for Automotive Engineering at RWTH Aachen University.

At ika, we research future-oriented, efficient, sustainable, and safe mobility solutions for a wide range of use cases: in cities and in the countryside, for individual and public transport, always sustainable, barrier-free, and flexible. To identify and meet the complex and often conflicting requirements for such mobility solutions, we have set up our research teams in an interdisciplinary and wide-ranging way.

In the research area Vehicle Intelligence & Automated Driving, the institute bundles the competencies needed to realize connected and automated vehicles and their functions, from conceptual design through system and function development to verification and validation. For this purpose, the research area develops methods and tools both in simulation and for use in real vehicles. The latest artificial intelligence methods play a major role here, as they can solve the complex challenges of automated and connected mobility with a high degree of reliability and in a way that is comprehensible to humans.

What is the role of ika and the team in the project?

Till Beemelmanns leads Task 3.2 “Data and information fusion to reduce conflicting perception” and Use Case 1. The goal is to develop an explainable and trustworthy perception system for urban environments.

Guido Küppers leads Task 3.5 “Explainable and robust decision making” and the associated Use Case 3. Here, the goal is to develop a Hybrid-AI software stack for trustworthy behavior generation of automated vehicles in urban environments.

The ika test vehicle will serve as a demonstrator for both use cases in the AIthena project. It is equipped with a high-performance on-board computer and V2X (vehicle-to-everything) communication interfaces, and its new sensor setup consists of multiple state-of-the-art LiDAR and camera sensors that provide a 360° surround view. In addition, we participate in other tasks of the project, mainly to integrate the developed functionalities into the overall methodology.

What are you currently working on in the project?

In Task 3.2, we are currently researching how to make multi-modal AI perception models and the associated sensor-processing algorithms more explainable. AI models are usually regarded as black boxes, but our goal is to make them more interpretable for users, developers, and authorities. We also investigate how to make these models more robust and reliable. In the future, we aim to integrate our approaches into the test vehicle for live demonstrations.
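To give a flavor of what such explainability methods can look like in practice, the sketch below computes a simple gradient-based saliency map for a camera input, one of the classic techniques for highlighting which pixels drive a model's prediction. The tiny network and the random input frame are placeholders for illustration only; they do not represent the actual AIthena perception models.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a camera-based perception backbone; the real
# AIthena models are not public, so a tiny CNN classifier is used here.
class TinyCameraNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(16 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


def saliency_map(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Per-pixel saliency: |d score(target_class) / d image|, aggregated over channels."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values


if __name__ == "__main__":
    model = TinyCameraNet()
    dummy_frame = torch.rand(3, 64, 64)  # placeholder for a camera frame
    heatmap = saliency_map(model, dummy_frame, target_class=0)
    print("saliency map shape:", tuple(heatmap.shape))
```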

Task 3.5, in turn, focuses on developing a Hybrid-AI software stack for behavior generation. To achieve robust functionality, we want to fuse all available sources of information, such as onboard perception, map data, and V2X messages, and use them to generate the vehicle's behavior. To increase trustworthiness, the system will explain the planned maneuver before it is executed.
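As a rough illustration of this idea, the following sketch fuses the three information sources in a purely rule-based placeholder and attaches a human-readable explanation to every planned maneuver. All class and field names are invented for this example and do not reflect the actual AIthena interfaces or software stack.

```python
from dataclasses import dataclass

# Illustrative input sources; field names are invented for this sketch.
@dataclass
class PerceptionState:
    lead_vehicle_distance_m: float   # from onboard sensors
    lead_vehicle_speed_mps: float

@dataclass
class MapContext:
    speed_limit_mps: float           # from map data

@dataclass
class V2XMessage:
    traffic_light_state: str         # e.g. "red", "green"
    time_to_change_s: float

@dataclass
class Maneuver:
    action: str
    target_speed_mps: float
    explanation: str                 # human-readable rationale, given before execution


def plan_maneuver(perception: PerceptionState, map_ctx: MapContext, v2x: V2XMessage) -> Maneuver:
    """Rule-based placeholder for a hybrid stack: fuse the three sources and
    attach an explanation to the chosen maneuver."""
    if v2x.traffic_light_state == "red":
        return Maneuver("stop", 0.0,
                        f"Stopping: V2X reports a red light changing in {v2x.time_to_change_s:.0f} s.")
    if perception.lead_vehicle_distance_m < 20.0:
        target = min(perception.lead_vehicle_speed_mps, map_ctx.speed_limit_mps)
        return Maneuver("follow", target,
                        "Following the lead vehicle: onboard perception reports it closer than 20 m.")
    return Maneuver("cruise", map_ctx.speed_limit_mps,
                    f"Cruising at the map speed limit of {map_ctx.speed_limit_mps:.1f} m/s; no conflicts detected.")


if __name__ == "__main__":
    maneuver = plan_maneuver(
        PerceptionState(lead_vehicle_distance_m=15.0, lead_vehicle_speed_mps=8.0),
        MapContext(speed_limit_mps=13.9),
        V2XMessage(traffic_light_state="green", time_to_change_s=12.0),
    )
    print(maneuver.action, "-", maneuver.explanation)
```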

From your perspective, how do you see the contribution of the AIthena project to building trustworthy, explainable, and accountable AI-based CCAM?

In AIthena, we will conduct research and develop new methods for explainable AI-based perception and decision-making algorithms. This will contribute to the field of explainable-AI research and provide new impetus for the industry. We hope our contribution will strengthen users' trust in automated mobility systems.

You can find more information about ika at Home – ika (rwth-aachen.de).

Figure 1: The ika test vehicle with its new sensor rack, consisting of multiple state-of-the-art LiDAR and camera sensors. Picture: Till Beemelmanns (the picture may be used for social media if the author is credited).