Building trustworthy, explainable,
and accountable CCAM technologies
Connected and Cooperative Automated Mobility (CCAM) solutions are increasingly present in vehicle technologies, benefiting from artificial intelligence (AI) through AI-based perception, situational awareness, and decision-making components.
But AI can be unfair, biased, and extremely sensitive to unexpected inputs. Building explainable and trustworthy AI is the next mandatory step in technology development, incorporating robustness, privacy, and explainability, among other equally important properties.
News
Advancing Trustworthy CCAM: Insights from AIthena’s report on the design and development of tools
Deliverable D4.2 of the AIthena project, ‘Report on physical set-up, digital twin and hybrid testing approaches’, offers a thorough overview of the integration of physical testing infrastructures, digital twins, and hybrid methodologies to enhance the...
Final AI algorithms from the AIthena project to advance trustworthy autonomous mobility
Deliverable D3.3 of the AIthena project, ‘Report on final AI algorithm development’, showcases major advancements in artificial intelligence designed to make autonomous vehicles safer, more transparent and more trustworthy. The report presents the final versions of...
Advancing trustworthy AI in CCAM – Key policy recommendations from the AIthena project
As Europe accelerates towards a future of connected, cooperative and automated mobility (CCAM), it becomes a critical policy imperative to ensure that AI-driven systems are trustworthy, transparent, safe and human-centred. AIthena’s deliverable D6.3, ‘Lessons learned...