AIthena D3.1 Life cycle management framework for ML models

Standardisation of AI machine learning model cards in CCAM

What is a machine learning model? How are AI algorithms standardised? And how does the AIthena project ensure accountability, trustworthiness, and ethical principles and standards in Cooperative, Connected, and Automated Mobility (CCAM)?

Machine learning (ML) is the process by which AI algorithms are developed. This process typically involves four stages: data, training, testing, and deployment. Together, these four stages make up the lifecycle of a machine learning model; each stage is equally important, since any one of them can have a major impact on the others.
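As a minimal, hypothetical sketch of these four stages (a toy scikit-learn pipeline, not an actual AIthena system), the lifecycle can be traced in a few lines of Python:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Stage 1: data: collect the 'ground truth' and split it for training and testing.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Stage 2: training: fit the model on the training split.
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)

    # Stage 3: testing: evaluate on held-out data before release.
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # Stage 4: deployment: the fitted model now serves predictions,
    # and should be continuously monitored while in operation.
    prediction = model.predict(X_test[:1])

In practice, each stage can feed back into the others: poor test results may call for new data or retraining, which is why the stages are treated as one continuous lifecycle rather than a one-off sequence.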

AI models are built using machine learning algorithms, and each model is developed for a specific purpose or action. The training and testing data, known as the ‘ground truth’, directly shape the model, so it is important that these data are fair, unbiased, and consistent with other ethical principles. Machine learning models also need continuous monitoring throughout their lifecycle, both to improve the outcomes they are designed to achieve and to gain trust among all users.

Researchers are currently looking for ways to harmonise the methods used to develop ML models, so that the models meet key principles such as explainability, trustworthiness, and adherence to ethical standards. One approach gaining popularity as a way to promote these goals is the use of machine learning model cards in the development of AI algorithms.

A machine learning model card is a comprehensive description of how the machine learning process is set up. The model card includes information such as the author, the intended use and purpose of the ML model, how the data are collected and used, how ethics, bias, and privacy are handled, how results and performance are evaluated, and the limitations and risks of the ML model.
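As a purely illustrative sketch of what such a card might capture in code (the field names below are hypothetical, not the actual AIthena template), this information can be stored as structured metadata alongside the model:

    from dataclasses import dataclass

    @dataclass
    class ModelCard:
        """Illustrative model card (hypothetical fields, not the AIthena template)."""
        author: str
        intended_use: str
        data_description: str       # how the 'ground truth' was collected and used
        ethics_and_privacy: str     # how bias, ethics, and privacy are handled
        evaluation: dict            # metrics and results on the test data
        limitations_and_risks: str  # known limits and out-of-scope uses

    card = ModelCard(
        author="Example Lab",
        intended_use="Pedestrian detection for an automated-driving perception stack",
        data_description="Annotated urban driving images across varied lighting conditions",
        ethics_and_privacy="Bias audit across pedestrian groups; no personal data retained",
        evaluation={"mAP": 0.87, "false_negative_rate": 0.04},
        limitations_and_risks="Not validated in heavy rain or snow; camera-only input",
    )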

The AIthena project has introduced a user-friendly ML model card for developers working on four different use cases related to AI in the context of CCAM. This model card acts as a detailed checklist covering essential aspects of AI accountability, trustworthiness, and ethical standards.

© AIthena – This image shows how developers of AI systems can communicate relevant information about the system to different users.

The AIthena model card is designed to address issues such as fairness, with built-in bias mitigation techniques, and includes protocols for privacy protection and accountability. These measures help make AI decision-making transparent, providing a clearer explanation of why a system (such as a self-driving car) makes one decision over another. The aim is to demystify the ‘black box’ problem in AI, where decision-making processes lack transparency or a logical explanation.
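To make the fairness aspect concrete, the sketch below (a generic demographic parity check, assumed for illustration and not taken from the AIthena deliverable) shows one simple bias metric a developer could report in a model card's evaluation section:

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rate between groups.

        A gap near 0 suggests the model treats the groups similarly on this
        metric; real audits combine several complementary fairness measures.
        """
        rates = {}
        for g in set(groups):
            member_preds = [p for p, grp in zip(predictions, groups) if grp == g]
            rates[g] = sum(member_preds) / len(member_preds)
        return max(rates.values()) - min(rates.values())

    # Toy usage: binary predictions for samples belonging to groups 'a' and 'b'.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_gap(preds, grps))  # 0.75 - 0.25 = 0.5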

By applying these principles, the AIthena project aims to create user-centred AI systems that prioritise explainability and trustworthiness, giving users greater insight into how and why AI systems make their decisions. This is important in the context of CCAM, where trust in AI-driven decisions, such as those made by automated vehicles, is critical.

“In this deliverable, we outline the initial work in WP3. As previously mentioned, creating an AI algorithm involves multiple stages. It is crucial to approach each stage with a robust framework that adheres to ethical principles. Achieving trustworthiness in AI requires addressing it not only at the end of development but also from the early design phase and throughout the AI’s lifecycle.

This deliverable describes all these stages, along with various tools and methods to enhance transparency, accountability, among others, at each step. Given the unique requirements of AI in CCAM (Connected, Cooperative, and Automated Mobility) contexts, and considering the new EU AI Act classifies this application as high-risk, we developed a model card specifically for CCAM AI-based systems.

A model card is a tool for reporting relevant information to different users of the system. The presented model card addresses the transparency requirements of the AI Act and incorporates the seven key requirements for trustworthy AI as outlined by the High-Level Expert Group on AI (HLEG), all within the context of CCAM applications. This approach guides developers in creating more robust and trustworthy algorithms while clearly communicating limitations and risks to users.”

Paola Natalia Cañas Rodriguez (Vicomtech) – Lead Partner of Deliverable 3.1.

To learn more about the full ML model card of the AIthena project, you can read the deliverable here: AITHENA-D3.1-Life-cycle-management-framework-for-machine-learning-models.pdf

After this deliverable (AIthena D3.1), the research continued; you can read more in a published paper entitled “A Methodology to Enhance Transparency for Trustworthy Artificial Intelligence for Cooperative, Connected, and Automated Mobility”, which includes the model card tailored for CCAM AI-based systems. The paper is available at: 12-08-01-0010: A Methodology to Enhance Transparency for Trustworthy Artificial Intelligence for Cooperative, Connected, and Automated Mobility – Journal Article.