Insights from the AITHENA project’s progress on the human-centric AI approach

In the second half of 2024, the AITHENA project finalised one of its first major outputs from Work Package 1, led by Rupprecht Consult, establishing the methodology for the project's development and validation activities. The core objective is to incorporate trustworthiness into AI development through a human-centric approach, articulating a detailed structure for the evaluation process across four main pillars: fairness, transparency, accountability, and privacy. The AITHENA project delivers on this through its deliverable D1.1, "Methodology for Assessing the Ethics, Transparency, Accountability, and Privacy of AI-Based Systems in CCAM applications". The document introduces a set of checklists and guidelines tailored for developers and testers of CCAM functions, ensuring that the technologies developed are centred around human needs and adhere to rigorous ethical standards.

Each chapter evaluating fairness, transparency, accountability, and privacy in AI-based CCAM applications begins with a set of checklist questions designed to prompt reflection among developers and testers of AI-based CCAM functions. Many of the questions are binary (yes or no), and each is linked to a paragraph in the chapter providing examples and/or references for further reading, helping readers consider the relevant aspects of fairness, transparency, accountability, and privacy. While a quantitative response may not be possible in every instance, these questions help readers navigate the different aspects of human-centricity, allowing them either to return to the checklist or to continue reading through the document. No "pass or fail" grade is provided, as such a grade would be difficult to quantify and varies from situation to situation.

This format helps users identify points of concern, or issues they may not have previously considered, and directs them to relevant sources of information. Readers can thus move through the document, using the prompts from the questions to learn more about each topic. Each chapter also contains a section identifying challenges or gaps in research or knowledge. Given that both AI and CCAM are at relatively early stages, and their combination is even newer, many open questions remain. The document therefore serves as a snapshot in time (mid-2023 to mid-2024) of the issues currently faced. It should be built upon and further developed; for this purpose, a checklist-based approach is more appropriate than a set of indicators focused solely on algorithms and sensors. A clear and shared understanding of terminology is crucial in this context, so a glossary of terms is provided towards the end of the deliverable.

The research activities within the AITHENA project will focus on developing Explainable AI (XAI) models through practical demonstrations covering perception, situational awareness, decision-making, and traffic management systems. These solutions will affect a diverse range of stakeholders, including autonomous vehicle (AV) users, non-AV users such as pedestrians and cyclists, CCAM solution developers, and policymakers. It is essential to gather requirements from these user groups, focusing on their mobility needs, expectations, and concerns regarding CCAM implementation. Accordingly, AITHENA has defined its technical use cases in terms of strategies, solutions, scenarios, and requirements, taking into account the identified needs and expectations of the user groups. Work Package 1 addresses this by identifying user group needs and defining the project's use cases in its deliverable D1.2, "User group needs report and technical use case definition".

The AITHENA project’s initial deliverables represent a significant step toward embedding trustworthiness in AI development within CCAM applications. The methodology and comprehensive checklists provided in deliverable D1.1 empower developers and testers to critically evaluate and enhance their AI systems, ensuring they align with human needs and ethical standards. Moreover, the focus on user group requirements and technical use case definitions in deliverable D1.2 highlights the project’s commitment to addressing the practical and diverse needs of stakeholders across the CCAM ecosystem.

Looking ahead, the ongoing research and development efforts will further refine and build upon these foundational outputs, driving progress in Explainable AI models and their real-world applications. By fostering a collaborative, informed, and ethically grounded approach, the AITHENA project is poised to make a lasting impact on the future of CCAM solutions, paving the way for safer, more trustworthy, and human-centric autonomous systems.

Author: Dr. Lakshya Pandit, Rupprecht Consult GmbH