UC-3: Trustworthy and Human-Understandable Decision-Making

Rationale
Once the system has understood the situation and made its predictions, a decision about path planning and manoeuvre execution can be taken to maximize safety, comfort, eco-driving, or other mission variables. Trustworthiness is required of the AI components that make or support this decision, so that the user understands why, when, and how an automated driving decision is taken.
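As a concrete illustration of this multi-objective trade-off, the sketch below scores candidate trajectories with a weighted sum of safety, comfort, and eco-driving terms and selects the cheapest one. All weights, field names, and cost terms are hypothetical and chosen only for illustration; this is not the project's actual planner.

```python
# Hedged sketch: weighted multi-objective scoring of candidate trajectories.
# All weights and cost terms are hypothetical illustrations.
def trajectory_cost(traj, w_safety=1.0, w_comfort=0.3, w_eco=0.2):
    """Weighted sum of normalized cost terms; lower is better."""
    return (w_safety * traj["collision_risk"]   # e.g. derived from time-to-collision
            + w_comfort * traj["max_jerk"]      # peak jerk, normalized
            + w_eco * traj["energy_use"])       # consumed energy, normalized

candidates = [
    {"id": "keep_lane", "collision_risk": 0.1, "max_jerk": 0.2, "energy_use": 0.3},
    {"id": "overtake",  "collision_risk": 0.4, "max_jerk": 0.6, "energy_use": 0.5},
]
best = min(candidates, key=trajectory_cost)
print(best["id"])  # -> "keep_lane"
```

In practice the weights would presumably reflect the current mission profile, for example prioritizing eco-driving during a highway cruise.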
Objectives
Combine Machine Learning (ML) decision-making with human-understandable definitions of traffic rules encoded in the HD Maps. Decisions are visualized (where possible) so that human occupants get the chance to understand them before the AI executes them.
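A minimal sketch of this idea, assuming a hypothetical rule encoding (none of these names come from the project's actual API): an ML planner proposes a manoeuvre, the proposal is checked against human-readable traffic rules attached to the current HD-map segment, and the resulting explanation can be shown to occupants before execution.

```python
# Hedged sketch: ML proposal checked against human-readable HD-map rules.
# All names and rule encodings are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Manoeuvre:
    name: str            # e.g. "overtake", "keep_lane"
    target_speed: float  # m/s

@dataclass
class TrafficRule:
    description: str                    # human-readable rule text
    check: Callable[[Manoeuvre], bool]  # True if the manoeuvre complies

# Example rules, as they might be encoded alongside an HD-map road segment.
rules = [
    TrafficRule("Speed limit on this segment is 80 km/h",
                lambda m: m.target_speed <= 80 / 3.6),
    TrafficRule("Overtaking is prohibited on this segment",
                lambda m: m.name != "overtake"),
]

def explain_decision(proposal: Manoeuvre) -> tuple[bool, str]:
    """Check an ML proposal against the rules and build an explanation."""
    violated = [r.description for r in rules if not r.check(proposal)]
    if violated:
        return False, f"Rejected '{proposal.name}': violates " + "; ".join(violated)
    return True, f"Executing '{proposal.name}': complies with all mapped rules"

approved, message = explain_decision(Manoeuvre("overtake", target_speed=20.0))
print(message)  # shown to occupants before the manoeuvre is executed
```

Keeping the rule text human-readable means the same data can drive both the compliance check and the explanation presented to occupants.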
Demonstrator

Explainable and robust decision-making (manoeuvre and trajectory)

Aim: Combine ML decision-making with human-understandable traffic rules encoded in the HD Maps, and visualize decisions (where possible) so that occupants can understand them before execution.

Approach towards trustworthy AI:

  • Develop fusion models for decision-making using perception, localisation, HD Maps, and external information via V2X communication (see the sketch after this list)
  • Improve situation awareness using a hybrid AI system (knowledge-based and data-driven AI)
  • Human-aligned agent
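The sketch below illustrates the fusion idea from the first bullet, under stated assumptions: the message formats for perception, localisation, HD Maps, and V2X are hypothetical, and conflicts are resolved with a simple most-restrictive policy (e.g. a V2X-announced temporary speed limit overrides the mapped one).

```python
# Hedged sketch: fuse perception, localisation, HD-map, and V2X inputs into
# one situation record for the decision step. All message formats are
# hypothetical illustrations.
def fuse_situation(perception, localisation, hd_map, v2x):
    """Merge all sources into a single situation dict for decision-making."""
    segment = hd_map[localisation["segment_id"]]
    speed_limit = segment["speed_limit"]
    # V2X may announce a temporary limit (e.g. roadworks); take the stricter one.
    if "temporary_speed_limit" in v2x:
        speed_limit = min(speed_limit, v2x["temporary_speed_limit"])
    return {
        "speed_limit": speed_limit,
        "obstacles": perception["obstacles"] + v2x.get("reported_hazards", []),
        "lane_count": segment["lane_count"],
    }

situation = fuse_situation(
    perception={"obstacles": ["vehicle_ahead"]},
    localisation={"segment_id": "seg_42"},
    hd_map={"seg_42": {"speed_limit": 22.2, "lane_count": 2}},
    v2x={"temporary_speed_limit": 13.9, "reported_hazards": ["roadworks"]},
)
print(situation)  # single record consumed by the decision step
```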

UC-3 Leader: ika | RWTH
Partners involved: TNO, TUE / Eindhoven