One of the goals of the AIthena project is to demonstrate its methodology in four critical use cases. So, let’s take a closer look at these use cases and why they were chosen.
The software stack of an automated vehicle can be divided into three abstract tasks: perception, understanding, and decision-making. The task of perception is to process raw sensor data and build an initial model of the vehicle environment. The modules associated with the task of understanding enrich this model with additional information, such as map data or information received via external communication. The finalised environment model serves as the information basis for decision-making: based on it, the respective modules generate reasonable decisions that lead to vehicle trajectories, which are then followed by applying the vehicle controls.
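As a rough, purely illustrative sketch (not the AIthena software itself), the three-stage stack can be expressed as a simple Python pipeline; all type names and fields below are assumptions made for the example:

    from dataclasses import dataclass, field

    @dataclass
    class EnvironmentModel:
        """Illustrative environment model handed from stage to stage."""
        detected_objects: list = field(default_factory=list)  # filled by perception
        map_context: dict = field(default_factory=dict)       # added by understanding
        v2x_messages: list = field(default_factory=list)      # added by understanding

    def perceive(raw_sensor_data: dict) -> EnvironmentModel:
        """Perception: process raw sensor data into an initial environment model."""
        model = EnvironmentModel()
        model.detected_objects = raw_sensor_data.get("detections", [])
        return model

    def understand(model: EnvironmentModel, map_data: dict, v2x: list) -> EnvironmentModel:
        """Understanding: enrich the model with map data and external communication."""
        model.map_context = map_data
        model.v2x_messages = v2x
        return model

    def decide(model: EnvironmentModel) -> list:
        """Decision-making: derive a trajectory (here a placeholder waypoint list)."""
        return [(0.0, 0.0), (5.0, 0.0)]  # to be followed by applying the vehicle controls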
Within AIthena, the entire processing pipeline of an automated driving software stack is investigated in three specific use cases covering Perception (UC1), Understanding (UC2), and Decision-Making (UC3), respectively. In addition, Use Case 4 (UC4) will apply the AIthena methodology from a traffic management perspective.
While UC1, UC2 and UC3 operate on the vehicle level, where each use case (or scenario within a use case) determines how a single vehicle behaves in traffic, UC4 focuses on the impact that many such vehicles have on traffic dynamics at the network level.
To provide a clearer understanding of the four use cases, each is briefly introduced below.
UC-1: Trustworthy Perception Systems for CCAM:
The development of trusted AI systems requires a clear understanding of how objects are perceived, how sensor data is used, and how redundancy, fusion, and discrepancy resolution are handled. This use case addresses these critical aspects to foster reliable and explainable pedestrian detection in urban environments for CCAM applications.
The primary objective is to deploy a pedestrian detection system in a use case demonstrator, enhancing safety especially in scenarios where pedestrian detection lies on the safety-critical path.
The use case showcases a multifaceted approach that integrates model-driven, data-driven, and sensor-driven methodologies. This comprehensive strategy is intended to ensure the reliability, explainability, and transparency of the pedestrian detection system and the associated AI functionalities across the software, sensor, and AI stack.
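To make this concrete, here is a minimal, hedged sketch of pedestrian detection using a generic COCO-pretrained detector from torchvision; the model choice, the 0.5 score threshold, and the random placeholder image are assumptions for illustration, not the UC1 detector:

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # Generic COCO-pretrained detector; COCO label 1 corresponds to "person".
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    image = torch.rand(3, 480, 640)  # placeholder for a real RGB camera frame in [0, 1]

    with torch.no_grad():
        output = model([image])[0]

    # Keep confident "person" detections; 0.5 is an illustrative threshold.
    pedestrians = [
        box for box, label, score
        in zip(output["boxes"], output["labels"], output["scores"])
        if label == 1 and score > 0.5
    ]
    print(f"{len(pedestrians)} pedestrian(s) detected")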
UC-2: AI-extended Situational Awareness/Understanding:
Information from the perception layer, external communications, and maps is merged into a Local Dynamic Map (LDM), which interconnects layers ranging from static to highly dynamic in order to reach accurate and complete knowledge of the scene. AI models can learn to predict possible evolutions of the scene, such as collisions or other critical situations. Trusted AI is needed to understand which data was used to train the predictors and which edge cases are covered.
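The layered structure can be sketched roughly as follows; the four layers loosely follow the ETSI LDM layering (from permanent static to highly dynamic), while the field names and the insert helper are illustrative assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class LocalDynamicMap:
        """Illustrative four-layer LDM, loosely following the ETSI layering."""
        permanent_static: dict = field(default_factory=dict)   # e.g., road topology, map data
        transient_static: dict = field(default_factory=dict)   # e.g., roadside infrastructure
        transient_dynamic: dict = field(default_factory=dict)  # e.g., signal phases, congestion
        highly_dynamic: dict = field(default_factory=dict)     # e.g., vehicles, pedestrians

        def insert(self, layer: str, key: str, value) -> None:
            """Merge one piece of information into the named layer."""
            getattr(self, layer)[key] = value

    ldm = LocalDynamicMap()
    ldm.insert("highly_dynamic", "pedestrian_42", {"position": (3.2, 1.1), "speed": 1.4})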
Collision prediction can be learned by AI models that are trained on images or other raw sensor data and produce risk indicators, such as Time-To-Collision (TTC) values, cut-in probabilities, or other equivalent collision-risk estimators, as predictions of the near-future road situation.
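For reference, the classical (non-learned) TTC in a car-following situation is simply the gap divided by the closing speed; a minimal sketch, with illustrative numbers:

    def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
        """Classical TTC: gap divided by closing speed (finite only when closing in)."""
        closing_speed = ego_speed_mps - lead_speed_mps
        if closing_speed <= 0.0:
            return float("inf")  # not closing in, so no collision is predicted
        return gap_m / closing_speed

    # Example: 30 m gap, ego at 20 m/s, lead at 15 m/s -> TTC = 6 s
    print(time_to_collision(30.0, 20.0, 15.0))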
A robust prediction of traffic participants’ intended motion enables the safe and predictable operation of automated vehicles (AVs). In interactive urban traffic environments, vehicles, pedestrians, and other traffic participants navigate highly complex road networks under a variety of environmental conditions while interacting with different kinds of road users. In this context, motion planning can only guarantee the safety of all participants if the characteristics of the scenarios are taken into account.
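As a deliberately simplistic baseline for what “predicting intended motion” means, a constant-velocity model extrapolates each participant’s current state; interactive urban scenarios are exactly where such a baseline breaks down and learned, interaction-aware models are needed:

    import numpy as np

    def constant_velocity_prediction(position: np.ndarray, velocity: np.ndarray,
                                     horizon_s: float, dt: float) -> np.ndarray:
        """Extrapolate a 2D position with constant velocity over the horizon."""
        steps = np.arange(dt, horizon_s + dt, dt)
        return position + np.outer(steps, velocity)

    # Pedestrian at (0, 0) walking 1.4 m/s along x, predicted 3 s ahead in 0.5 s steps.
    print(constant_velocity_prediction(np.array([0.0, 0.0]), np.array([1.4, 0.0]), 3.0, 0.5))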
UC-3: Trustworthy and Human-understandable Decision-Making:
This use case addresses vehicle guidance within an automated driving software stack. The local environment is perceived through the vehicle’s sensors and perception algorithms, and the situation-awareness modules combine this information with additional sources, such as map data or information received through external communication with other traffic participants or digital infrastructure. Based on this combined picture, the vehicle-guidance modules must generate reasonable decisions that lead to vehicle trajectories, which are followed by applying the vehicle controls.
The focus of this use case and its demonstrator is to explain the system’s decisions to the user before, while, or after they are executed. In addition, trustworthiness should increase by providing a robust system that handles situations in an understandable way, even when the vehicle is confronted with unforeseen circumstances.
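As a toy illustration of pairing decisions with human-readable explanations (not the UC3 implementation; the 3-second threshold is an assumption), a rule-based manoeuvre choice can return its own justification:

    def decide_with_explanation(ttc_s: float, ttc_threshold_s: float = 3.0) -> tuple[str, str]:
        """Pick a manoeuvre and attach a human-readable explanation (illustrative thresholds)."""
        if ttc_s < ttc_threshold_s:
            return "brake", (f"Braking because the time-to-collision ({ttc_s:.1f} s) "
                             f"is below the safety threshold of {ttc_threshold_s:.1f} s.")
        return "keep_lane", (f"Keeping the lane because the time-to-collision "
                             f"({ttc_s:.1f} s) indicates a safe gap.")

    action, explanation = decide_with_explanation(2.0)
    print(action, "-", explanation)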
UC-4: AI-based Traffic Management:
AVs currently behave differently from vehicles driven by humans, and this difference has an impact on traffic dynamics. To understand the scope of that impact, the behaviour of AVs needs to be studied at the transport level. Benchmarking the impact of AV behaviour requires a reference: what is considered “good behaviour”, also referred to as “desired behaviour” or “acceptable behaviour”. This is the behaviour that will be applied to non-AVs.
To study the dynamics at the transport level, macroscopic scenarios will be used. These scenarios will be implemented and evaluated using microsimulation software (e.g., SUMO, Vissim, Aimsun).
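As a sketch of how such an evaluation could be scripted with SUMO’s TraCI Python API (the configuration file name, step count, and the mean-speed metric are placeholder assumptions):

    import traci

    # Start SUMO headless with a scenario configuration (placeholder file name).
    traci.start(["sumo", "-c", "scenario.sumocfg"])

    total_speed, samples = 0.0, 0
    for _ in range(3600):  # simulate one hour at 1 s steps
        traci.simulationStep()
        for veh_id in traci.vehicle.getIDList():
            total_speed += traci.vehicle.getSpeed(veh_id)
            samples += 1

    traci.close()
    print("mean network speed [m/s]:", total_speed / max(samples, 1))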
This use case will examine which traffic management information can support the AI model(s) in improving comfort, safety, and/or efficiency.