The AIthena project, a collaborative initiative aimed at advancing AI-based solutions for Cooperative, Connected, and Automated Mobility (CCAM), has made significant strides in its Work Package 4 (WP4): Tools and Testing Facilities. Led by IDIADA, WP4 focuses on developing the tools, infrastructure, and testing environments needed to support the validation and deployment of AI-driven CCAM systems. This work package is critical for ensuring that the AI models developed within AIthena are robust, reliable, and ready for real-world applications.
Building the Foundation for CCAM Testing
WP4 pursues several key objectives to advance AI-based CCAM systems. It aims to define a toolchain architecture that supports the development and deployment of these systems, alongside establishing an ICT framework for seamless data and AI management. The work includes preparing both physical and virtual testing environments, such as simulation platforms and cloud infrastructure for MLOps (Machine Learning Operations), to validate AI models. Additionally, WP4 involves equipping vehicles with the sensor configurations needed for efficient data collection and designing infrastructure and software tools for generating real and virtual datasets to train and validate AI models. Together, these efforts create a solid foundation for the successful integration of AI-driven technologies in CCAM systems.

Progress and Outcomes: Milestones Achieved
A significant achievement has been the preparation of physical testing environments, carried out under Task 4.2. Key activities include integrating advanced sensors into vehicles, such as LiDAR, cameras, and GNSS systems, to ensure comprehensive data collection. This data is gathered from real-world driving on public roads and proving grounds, supporting the testing of AI-based CCAM solutions. Additionally, edge computing platforms are used to process live data and execute AI systems, enabling real-time decision-making during tests.
For example, IDIADA’s CAVRide vehicle is equipped with a 360º LiDAR, front and lateral cameras, and a high-precision GNSS system, enabling comprehensive data collection across a variety of testing scenarios, while Siemens has set up a smart recording platform on its vehicle to detect objects in real time during testing.

In addition to physical testing, significant progress has been made on simulation environments: under Task 4.3, partners have been setting up simulation environments for the virtual validation of AI-based CCAM systems.
This includes the generation of synthetic data using physics-based sensor models, such as cameras and LiDAR, to test AI systems under various conditions. The simulation environments have been extended to include adverse weather and lighting conditions, ensuring robust testing of AI models.
Siemens, for instance, has been working on extending its synthetic dataset to include radar sensor models. This synthetic data is crucial for training and validating AI models, especially in scenarios where real-world data may be limited or difficult to obtain.
Together with IDIADA, Siemens is also creating digital twins of real-world scenarios to enable virtual testing and validation (scenario virtualization).

Notably, Vicomtech has worked on decoupling sensors, actions, and conditions for virtual testing, using CARLA and Unreal Engine pedestrian models.
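As a rough illustration of what such decoupling can look like in practice, the minimal sketch below uses the public CARLA Python API to set a weather condition, spawn a pedestrian (an Unreal Engine walker model shipped with CARLA), and attach a camera sensor as three independent steps. The host, port, weather preset, camera pose, and output paths are illustrative assumptions, not the project's actual configuration.

```python
import carla

# Connect to a locally running CARLA server (default RPC port 2000)
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Condition: pick an adverse weather preset, independently of actors and sensors
world.set_weather(carla.WeatherParameters.HardRainNoon)

blueprints = world.get_blueprint_library()

# Actor/action: spawn a pedestrian at a random navigable location
walker_bp = blueprints.filter("walker.pedestrian.*")[0]
walker_location = world.get_random_location_from_navigation()
walker = world.spawn_actor(walker_bp, carla.Transform(walker_location))

# Sensor: place an RGB camera at a fixed roadside pose, decoupled from the walker
camera_bp = blueprints.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")
camera_pose = carla.Transform(carla.Location(x=walker_location.x - 8.0,
                                             y=walker_location.y,
                                             z=2.5))
camera = world.spawn_actor(camera_bp, camera_pose)

# Save every frame to disk; labelling or evaluation tooling would hook in here
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))
```

Because the weather, the pedestrian, and the camera are configured by separate calls, each can be varied on its own to sweep conditions without touching the rest of the scenario.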

Another significant achievement in WP4 has been the development of hybrid testing approaches that combine virtual and physical testing environments.
This includes integrating real drivers into simulation environments to test human-machine interactions and combining physical vehicle components with simulation models to validate AI-based CCAM systems in a controlled yet realistic environment.
These hybrid testing approaches, known as X-in-the-Loop (XiL), are crucial for front-loading the validation of AI-based CCAM systems and ensuring their reliability in real-world applications.

To support the large volumes of data generated during testing, WP4 has also been developing cloud-based infrastructure to enable scalable MLOps.
This includes preparing cloud platforms to facilitate the ingestion of test data at scale, deploying and monitoring the performance of AI models, and enabling automated data labelling, augmentation, and anonymization. These automated processing methodologies support closed-loop iterations on AI models, allowing for continuous improvement and refinement, and are essential for managing the data generated during testing and for training and validating AI models efficiently.
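To give a concrete, if simplified, idea of what one automated processing step might look like, the sketch below blurs detected faces in a recorded camera frame using OpenCV's bundled Haar cascade. The file paths and detection parameters are hypothetical, and the project's actual anonymization pipeline is not described at this level of detail here.

```python
import os

import cv2

# Hypothetical paths; a production pipeline would stream frames from cloud storage
FRAME_IN = "frames/000123.png"
FRAME_OUT = "frames_anon/000123.png"

# OpenCV ships a pre-trained Haar cascade for frontal faces
# (licence plates would need a dedicated detector)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread(FRAME_IN)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and replace each region with a heavy Gaussian blur
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

os.makedirs("frames_anon", exist_ok=True)
cv2.imwrite(FRAME_OUT, image)
```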
In Task 4.5, the partners involved are preparing the AWS resources for data storage, processing, and annotation.
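As an example of what the ingestion side of such a setup could look like, the sketch below uploads one recorded file to an S3 bucket with basic metadata using boto3. The bucket name, key layout, vehicle identifier, and metadata fields are hypothetical illustrations, not the project's actual Task 4.5 resources.

```python
import boto3

# Hypothetical bucket and key layout, not the project's actual resources
BUCKET = "aithena-wp4-test-data"
s3 = boto3.client("s3")

def ingest_recording(local_path: str, vehicle_id: str, session_id: str) -> str:
    """Upload one recorded file and tag it so downstream labelling jobs can locate it."""
    key = f"raw/{vehicle_id}/{session_id}/{local_path.rsplit('/', 1)[-1]}"
    s3.upload_file(
        local_path,
        BUCKET,
        key,
        ExtraArgs={"Metadata": {"vehicle": vehicle_id, "session": session_id}},
    )
    return key

# Example: push one LiDAR capture from a recording vehicle into the shared bucket
print(ingest_recording("captures/lidar_000123.pcd", "cavride-01", "2025-05-14-run3"))
```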
Next Steps: Looking Ahead
WP4 is playing a pivotal role in the AIthena project by providing the necessary tools, infrastructure, and testing environments to validate AI-based CCAM systems. The progress made so far, particularly in physical testing, simulation, and cloud infrastructure, lays a strong foundation for the future development and deployment of trustworthy AI solutions in the mobility sector.
Additionally, WP4 will continue to refine its testing environments, extend synthetic datasets, and further develop XiL infrastructures to support the validation of AI-based CCAM systems. As WP4 moves forward, its contributions will be critical in ensuring that the AI models developed within AIthena are not only reliable but also ready for real-world applications, ultimately paving the way for safer and more efficient autonomous mobility systems.
Author: Nil Munté Guerrero (Applus+ IDIADA)