AIthena D3.2 Report on initial AI algorithm development

Development of trustworthy and explainable AI algorithms in CCAM

What AI algorithms are being developed in the AIthena project to advance autonomous vehicle technology and deployment? What are the challenges faced in developing these algorithms?

Autonomous vehicles need a clear understanding and perception of their environment to make reliable decisions. Using a variety of sensors and cameras built into the vehicle, an automated vehicle can make its own decisions, such as whether to continue driving, maintain its current speed, or brake if another vehicle, pedestrian or cyclist is too close to its path. These are the kinds of decisions that AI makes all the time in automated vehicles.
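
To make this concrete, the short sketch below illustrates a deliberately simplified decision rule of this kind. The function name, thresholds and two-second gap are illustrative assumptions made for this article, not logic taken from the AIthena algorithms, which rely on learned perception and planning models rather than fixed rules.

```python
# Illustrative only: a hypothetical, simplified longitudinal decision rule.
# Real automated-driving stacks use learned perception and planning models,
# not fixed thresholds like these.

def longitudinal_decision(distance_to_obstacle_m: float, ego_speed_mps: float) -> str:
    """Return a high-level action given the distance to the nearest road user."""
    safe_gap_m = 2.0 * ego_speed_mps  # assumed two-second time gap
    if distance_to_obstacle_m < 0.5 * safe_gap_m:
        return "brake"           # another road user is too close to the path
    if distance_to_obstacle_m < safe_gap_m:
        return "maintain_speed"  # hold the current speed and keep monitoring
    return "continue_driving"    # the path ahead is clear

print(longitudinal_decision(distance_to_obstacle_m=8.0, ego_speed_mps=10.0))  # -> brake
```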

As part of the AIthena project, researchers are developing a range of AI algorithms to improve the explainability, trustworthiness and accountability of Cooperative, Connected and Automated Mobility (CCAM) technologies. For example, the Perception Robustness Model is an algorithm that combines multimodal sensors, such as cameras and Light Detection and Ranging (LiDAR), so that vehicles can accurately map their surroundings. One of the challenges faced by automated vehicles is environmental variability, such as changes in the weather (fog or snow) or low-light conditions that affect object detection. To address these challenges, researchers are testing a LiDAR-camera fusion method to improve reliability under these ‘corruption events’, situations that can degrade sensor performance. This cycle of testing, learning, and redeploying helps to ensure a more explainable and accountable response from the AI.
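
As a rough illustration of how a fusion step can respond to corruption events, the sketch below merges camera and LiDAR detections and down-weights a modality when conditions such as fog or low light are known to degrade it. The class names, penalty table and weights are hypothetical assumptions for this example; they do not describe the project's actual Perception Robustness Model.

```python
# A minimal, hypothetical late-fusion sketch: detections from a camera model and a
# LiDAR model are merged, and each sensor's contribution is reduced when a known
# "corruption event" (e.g. fog, snow, low light) degrades that modality.
# Names, weights and the corruption table are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "cyclist", "vehicle"
    confidence: float   # model score in [0, 1]
    sensor: str         # "camera" or "lidar"

# Assumed reliability penalties per modality under different conditions.
CORRUPTION_PENALTY = {
    "clear":     {"camera": 1.0, "lidar": 1.0},
    "fog":       {"camera": 0.6, "lidar": 0.8},
    "low_light": {"camera": 0.4, "lidar": 1.0},
}

def fuse(detections: list[Detection], condition: str) -> list[tuple[str, float]]:
    """Re-weight each detection by its sensor's assumed reliability right now."""
    penalty = CORRUPTION_PENALTY[condition]
    return [(d.label, d.confidence * penalty[d.sensor]) for d in detections]

dets = [Detection("pedestrian", 0.9, "camera"), Detection("pedestrian", 0.7, "lidar")]
print(fuse(dets, "low_light"))  # the camera score is down-weighted, LiDAR dominates
```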

Another algorithm under development is perception model optimisation, which addresses the limitations of on-board computing power in vehicles. Since it is not currently feasible to fit high-end servers in every vehicle, AIthena’s research partners are exploring the use of compressed data computing. This means transferring sensor data from a vehicle to a cloud system, running the AI model in the cloud, and sending the desired actions back to the vehicle. For this approach to be successful, the vehicle must always have access to a fast, secure, and reliable telecommunications network. Specifically, the researchers are building on existing models that use data compression methods and real-time object detection systems (e.g., PeleeNet and MobileNet).
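
The sketch below illustrates the general offloading pattern under stated assumptions: the vehicle compresses a sensor frame, a cloud service decompresses it, runs a detector, and returns only the chosen action. The compression scheme, the stub detector and the action names are placeholders, not the project's compressed-data pipeline or the PeleeNet/MobileNet models it builds on.

```python
# A hypothetical sketch of the offloading idea: the vehicle compresses a sensor
# frame, a cloud service runs a detector on it, and only the resulting action is
# sent back. The compression scheme, the stub detector and the action mapping are
# illustrative assumptions, not AIthena's actual implementation.

import zlib
import numpy as np

def compress_frame(frame: np.ndarray) -> bytes:
    """Vehicle side: compress the raw frame before sending it over the network."""
    return zlib.compress(frame.tobytes())

def cloud_inference(payload: bytes, shape: tuple[int, int, int]) -> str:
    """Cloud side: decompress, run a (stubbed) detector, and return an action."""
    frame = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(shape)
    obstacle_detected = frame.max() > 150  # stand-in for a real object detector
    return "brake" if obstacle_detected else "continue_driving"

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 150:200] = 200  # a bright region standing in for a nearby object
payload = compress_frame(frame)
print(f"compressed {frame.nbytes} bytes down to {len(payload)} bytes")
print("action from cloud:", cloud_inference(payload, frame.shape))
```

In a deployed system, the stub detector would be replaced by a real-time model of the kind mentioned above, and the round trip would run over the fast, secure and reliable network that the approach depends on.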

Building trust in CCAM technologies is essential, and trust can be built through the human interpretability of a vehicle’s decisions and the robustness of its actions. Trust is fostered when humans can understand how a vehicle reaches its decisions, while robustness refers to the reliable and credible actions that a vehicle consistently takes across different environments. By continuously developing and testing AI algorithms in autonomous vehicles, more explainable, trustworthy, and accountable AI-driven CCAM technologies can be deployed.

To learn more about the AI algorithms developed in the AIthena project, read the deliverable AITHENA-D3.2-Report-on-initial-AI-algorithm-development.pdf.