Use Case 3 (UC-3): Trustworthy and Human-Understandable Decision-Making
Rationale
Objectives
Demonstrator
Explainable and robust decision-making (manoeuvre and trajectory)
Aim: Combine ML-based decision-making with human-understandable definitions of traffic rules encoded in the HD Maps. Where possible, decisions are visualized so that human occupants can understand them before the AI executes them.
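As a rough illustration of this aim, the sketch below checks an ML-proposed manoeuvre against traffic rules carried alongside an HD map and assembles a human-readable explanation before execution. It is not project code; the data classes, rule wording, and thresholds are invented for the example.

```python
# Illustrative sketch only: the rule names, data classes, and thresholds are
# assumptions made for this example, not the project's actual interfaces.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Manoeuvre:
    """A candidate decision proposed by the ML planner (hypothetical fields)."""
    name: str                 # e.g. "lane_change_left"
    target_speed_kmh: float
    crosses_solid_line: bool


@dataclass
class MapRule:
    """A traffic rule as it might be encoded alongside the HD map."""
    description: str                       # human-readable wording shown to occupants
    check: Callable[[Manoeuvre], bool]     # True if the manoeuvre complies


def explain_decision(candidate: Manoeuvre, rules: List[MapRule]) -> Tuple[bool, List[str]]:
    """Check an ML-proposed manoeuvre against map-encoded rules and build an
    explanation the occupant can read before the manoeuvre is executed."""
    explanation, compliant = [], True
    for rule in rules:
        ok = rule.check(candidate)
        compliant &= ok
        explanation.append(f"{'PASS' if ok else 'VIOLATION'}: {rule.description}")
    return compliant, explanation


if __name__ == "__main__":
    rules = [
        MapRule("Speed limit on this segment is 80 km/h",
                lambda m: m.target_speed_kmh <= 80),
        MapRule("Crossing a solid lane marking is not permitted here",
                lambda m: not m.crosses_solid_line),
    ]
    proposal = Manoeuvre("lane_change_left", target_speed_kmh=75, crosses_solid_line=True)
    ok, reasons = explain_decision(proposal, rules)
    print("Execute" if ok else "Reject", proposal.name)
    for line in reasons:
        print(" ", line)
```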
Approach towards trustworthy AI:
- Develop fusion models for decision-making using perception, localisation, HD Maps, and external information via V2X communication
- Improve situation awareness using a hybrid AI system combining knowledge-based and data-driven AI (see the sketch after this list)
- Develop a human-aligned decision-making agent
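The following sketch illustrates the kind of input fusion and hybrid reasoning described in the bullets above. The field names, the toy risk heuristic standing in for a learned model, and the rule checks are assumptions made purely for illustration.

```python
# Illustrative sketch only: all field names, the hazard score, and the fusion
# logic are assumptions made for this example, not the project's design.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FusedSituation:
    """Minimal world model fused from the input sources named in the list above."""
    ego_speed_kmh: float                                    # from localisation / odometry
    detected_objects: List[str]                             # from the perception stack
    speed_limit_kmh: float                                  # from the HD map
    v2x_warnings: List[str] = field(default_factory=list)   # from V2X messages


def learned_risk_score(situation: FusedSituation) -> float:
    """Stand-in for a data-driven model; here just a toy heuristic in [0, 1]."""
    crowding = min(len(situation.detected_objects) / 10.0, 1.0)
    speeding = max(situation.ego_speed_kmh - situation.speed_limit_kmh, 0.0) / 50.0
    return min(crowding + speeding, 1.0)


def knowledge_based_flags(situation: FusedSituation) -> List[str]:
    """Symbolic, rule-based part of the hybrid system: explicit, auditable checks."""
    flags = []
    if situation.ego_speed_kmh > situation.speed_limit_kmh:
        flags.append("over speed limit (HD map)")
    flags.extend(f"V2X: {w}" for w in situation.v2x_warnings)
    return flags


def assess(situation: FusedSituation) -> Dict[str, object]:
    """Hybrid situation awareness: learned risk score plus human-readable rule flags."""
    return {"risk": learned_risk_score(situation),
            "flags": knowledge_based_flags(situation)}


if __name__ == "__main__":
    s = FusedSituation(ego_speed_kmh=92, detected_objects=["car", "cyclist"],
                       speed_limit_kmh=80, v2x_warnings=["roadworks ahead"])
    print(assess(s))
```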
Decision – Trustworthy and human-understandable decision-making
AI path planning and manoeuvre execution should maximize safety, comfort, and eco-driving. The user should understand why, when, and how a decision is taken.
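One way to read this requirement is as multi-objective trajectory selection with a human-readable justification. The minimal sketch below assumes invented weights, metrics, and candidate trajectories; it is not the project's planner.

```python
# Illustrative sketch only: the weights, metrics, and candidate trajectories are
# assumptions; a real planner would derive them from the vehicle and map state.
from dataclasses import dataclass
from typing import List


@dataclass
class Trajectory:
    name: str
    min_gap_m: float        # smallest predicted gap to other road users (safety)
    max_lat_accel: float    # peak lateral acceleration in m/s^2 (comfort)
    energy_kwh: float       # estimated energy use (eco-driving)


def cost(t: Trajectory, w_safety=1.0, w_comfort=0.3, w_eco=0.2) -> float:
    """Weighted cost, lower is better; safety enters as the inverse of the gap."""
    return w_safety / max(t.min_gap_m, 0.1) + w_comfort * t.max_lat_accel + w_eco * t.energy_kwh


def choose(candidates: List[Trajectory]) -> Trajectory:
    """Pick the lowest-cost trajectory and report why it was chosen."""
    best = min(candidates, key=cost)
    # "Why, when, how": report the selected manoeuvre with its per-objective terms.
    print(f"Selected {best.name}: gap {best.min_gap_m} m, "
          f"lat. accel {best.max_lat_accel} m/s^2, energy {best.energy_kwh} kWh "
          f"(total cost {cost(best):.2f})")
    return best


if __name__ == "__main__":
    choose([
        Trajectory("keep lane", min_gap_m=12.0, max_lat_accel=0.5, energy_kwh=0.18),
        Trajectory("overtake",  min_gap_m=6.0,  max_lat_accel=2.1, energy_kwh=0.25),
    ])
```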