Building trustworthy, explainable, and accountable CCAM technologies
Connected and Cooperative Automated Mobility (CCAM) solutions are increasingly present in vehicle technologies, which benefit from artificial intelligence (AI) in perception, situational awareness, and decision-making components.
But AI can be unfair, biased, and extremely sensitive to unexpected inputs. Building explainable and trustworthy AI is the next mandatory step in technology development, incorporating, among other equally important properties, robustness, privacy, explainability, and accountability.
News
AIthena consortium meeting in Graz
The AIthena project (AI-Based CCAM: Trustworthy, Explainable, and Accountable) is contributing to the creation of Explainable AI (XAI) development and testing frameworks for Cooperative, Connected and Automated Mobility (CCAM) by exploring three main AI pillars: ...
Insights from the AITHENA project’s progress on the human-centric AI approach
In the latter half of this year, the AITHENA project finalised one of its first major outputs from Work Package 1, led by Rupprecht Consult, establishing the methodology for the project's development and validation activities. The core objective is to...
AIthena D3.2 Report on initial AI algorithm development
Development of trustworthy and explainable AI algorithms in CCAM. What AI algorithms are being developed in the AIthena project to advance autonomous vehicle technology and deployment? What are the challenges faced in developing these algorithms? Autonomous vehicles...