Artificial Intelligence has become the essential engine of autonomous driving, gradually replacing classical algorithms for complex perception and decision-making tasks. However, the transition from a high-performing AI model in the laboratory to a certifiable Embedded AI on open roads represents an immense technological gap.
For OEMs and Tier 1 suppliers, the challenge no longer lies solely in designing the algorithm, but in its validation. How can one prove that a non-deterministic system will make the right decision in 100% of critical cases?
Context: If you first wish to understand the fundamentals of this technology before tackling its validation, we recommend our article on AI integration in the vehicle.
This article analyzes the technical challenges of embedded AI and the pivotal role of simulation in bridging the gap between learning and road safety.
The “Black Box” Paradox: Performance vs. Explainability
Integrating Deep Neural Networks (DNNs) into ADAS/AD architectures introduces a fundamental break. Unlike traditional software, where every rule is written explicitly by an engineer (“if obstacle detected, then brake”), AI learns by example.
From Deterministic Code to Probabilistic Learning
This approach allows for unmatched flexibility in handling the complexity of the real world. However, it makes the system probabilistic and opaque. This is the “black box” problem: we know the AI works, but it is difficult to explain why it made a specific decision at a specific moment, or to guarantee that a minor modification of the input (a changed pixel, a reflection on a sign) will not radically alter the output.
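The fragility described above can be illustrated with a deliberately tiny sketch. The weights and inputs below are invented for illustration, not taken from any real perception model: a linear classifier sits close to its decision boundary, and shifting a single input component by 0.05 flips the predicted class.

```python
import numpy as np

# Toy illustration (hypothetical weights and inputs): a linear classifier
# whose decision flips when one input component changes slightly.
w = np.array([2.0, -3.0])        # "learned" weights (illustrative)
x = np.array([0.5, 0.35])        # original input "pixels"

def predict(inp):
    # Decision boundary at score = 0: class 1 if score > 0, else class 0
    return int(w @ inp > 0)

x_perturbed = x + np.array([0.0, -0.05])   # one component shifted by 0.05

print(predict(x), predict(x_perturbed))    # prints "0 1": the tiny shift flips the class
```

A real DNN has millions of such boundaries in a high-dimensional space, which is exactly why a changed pixel or a reflection can produce a surprising output.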
The SOTIF Imperative (ISO 21448)
This lack of determinism places AI at the heart of SOTIF (Safety of the Intended Functionality) concerns. Unlike functional safety (ISO 26262), which addresses risks caused by system failures and malfunctions, SOTIF aims to reduce risks arising from functional insufficiencies and unknown scenarios, even when no component has failed. For AI, this means identifying the areas where the model lacks robustness, which are often caused by incomplete training data.
Challenge #1: Data Quality and Representativeness (Data Gap)
The performance of an embedded AI depends directly on the quality of the data it has “seen” during its training.
Overfitting and “Corner Cases”
Kilometers driven on real roads generate petabytes of data, but 99% of this data corresponds to mundane driving situations. Yet the AI must also be trained on corner cases: a pedestrian suddenly emerging from behind a truck, lane markings erased by rain, or sudden glare.
If the AI has never encountered these rare situations, it risks failing when they occur. Collecting such data solely through physical driving is prohibitively expensive and slow.
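A back-of-envelope calculation makes the scale of the problem concrete. The event rate and sample count below are assumptions chosen for illustration, not measured figures:

```python
# Illustrative arithmetic: why collecting rare corner cases purely by
# road driving does not scale. Both figures below are assumptions.
km_per_corner_case = 1_000_000     # assumed: one rare event per million km
examples_needed = 1_000            # assumed training-set requirement

km_required = examples_needed * km_per_corner_case
print(km_required)   # 1 billion km of driving, for a single rare event class
```

And that is for one corner case; a full operational design domain contains thousands of them.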
The Contribution of Synthetic Data
This is where simulation comes in. By generating synthetic data via photorealistic virtual environments (such as those produced by SCANeR with Unreal Engine), engineers can:
- Enrich datasets with thousands of variations of critical scenarios.
- Ensure perfect “Ground Truth,” eliminating manual annotation errors.
- Rebalance datasets to avoid biases.
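The "data farming" idea behind the list above can be sketched as a combinatorial expansion of one base scenario. The parameter names are illustrative and do not correspond to the SCANeR API:

```python
import itertools

# Hypothetical sketch of data farming: one critical base scenario
# (pedestrian behind a truck) expanded into many labeled variations.
weathers   = ["clear", "rain", "fog", "low_sun"]
ped_speeds = [0.8, 1.4, 2.0]     # pedestrian walking speed, m/s
occlusions = [0.0, 0.5, 0.9]     # fraction of pedestrian hidden by the truck

scenarios = [
    {"weather": w, "ped_speed": v, "occlusion": o}
    for w, v, o in itertools.product(weathers, ped_speeds, occlusions)
]

print(len(scenarios))   # prints 36: 4 * 3 * 3 variations from one scenario
```

Because every variation is generated, its ground truth (object positions, classes, distances) is known exactly, with no manual annotation step.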
Challenge #2: Real-time Inference on Embedded Targets
Training an AI on powerful cloud servers is one thing; executing it in real-time in a vehicle is another.
Hardware Constraints and Latency
Autonomous vehicles impose severe constraints in terms of energy consumption, thermal dissipation, and computing power (SWaP – Size, Weight, and Power). Embedded AI must perform inference (detection, classification, trajectory planning) in milliseconds to ensure safety.
Optimizing neural networks (pruning, quantization) for dedicated chips (SoCs like NVIDIA Orin, Qualcomm, Mobileye) risks degrading model accuracy.
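Why quantization costs accuracy can be seen in a minimal sketch of symmetric per-tensor int8 quantization. Real toolchains are far more sophisticated (calibration, per-channel scales); this only shows where the rounding error comes from, using invented weight values:

```python
import numpy as np

# Minimal sketch of post-training int8 quantization (symmetric, per-tensor).
# Weight values are illustrative.
w = np.array([0.021, -0.37, 0.9983, -0.004], dtype=np.float32)

scale = np.abs(w).max() / 127.0               # map the largest weight to int8 range
w_int8 = np.round(w / scale).astype(np.int8)  # quantize to 8-bit integers
w_deq  = w_int8.astype(np.float32) * scale    # dequantize for comparison

err = np.abs(w - w_deq).max()                 # worst-case rounding error
print(err)   # small but nonzero; accumulated over millions of weights, it matters
```

This per-weight error is what validation must show does not translate into missed detections on the target hardware.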
Testing Final Hardware with HIL (Hardware-in-the-Loop)
To validate that the optimized AI reacts correctly on the final target, virtual bench tests are indispensable. They consist of connecting the real computer (ECU) to a simulator that feeds it sensor signals in real-time.
This is precisely where HIL testing is crucial to verify not only the AI’s decision logic but also its actual response time within the vehicle’s electronic architecture, ensuring that no latency compromises safety.
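The timing side of a HIL campaign can be sketched as a deadline check around each inference call. On a real bench the "ECU" below is physical hardware fed by the simulator; here it is a stand-in function, and the deadline value is illustrative:

```python
import time

# Hypothetical sketch of a HIL-style timing check: feed simulated sensor
# frames to a stand-in inference function and verify each response meets
# a hard real-time deadline.
DEADLINE_MS = 50.0    # illustrative per-frame safety budget

def ecu_inference(frame):
    # Placeholder for the embedded AI stack under test
    return {"brake": frame["obstacle_distance_m"] < 10.0}

misses = 0
for i in range(100):
    frame = {"obstacle_distance_m": 5.0 + i}   # simulated sensor input
    t0 = time.perf_counter()
    decision = ecu_inference(frame)
    latency_ms = (time.perf_counter() - t0) * 1000.0
    if latency_ms > DEADLINE_MS:
        misses += 1

print(misses)   # any deadline miss is a validation failure
```

The point of running this on the real ECU rather than a workstation is that optimization artifacts (quantization, scheduler jitter, bus latency) only appear on the target.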
Challenge #3: Robustness Against Degraded Environments
One of the greatest challenges for perception AI is managing adverse weather and lighting conditions. A camera that performs well in broad daylight can become blind when facing low sun or heavy rain.
Physics-based sensor simulation allows for testing the robustness of algorithms against these physical perturbations (thermal noise, diffraction, atmospheric attenuation). This is a prerequisite for guaranteeing reliable smart sensor perception and ensuring the AI will not be fooled by visual artifacts or weather conditions it interprets incorrectly.
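A toy version of such a physical perturbation model: exponential atmospheric attenuation (Beer–Lambert style) plus additive noise applied to a clean sensor return. The attenuation coefficient and noise level are assumed values for illustration:

```python
import numpy as np

# Toy model of weather-induced sensor degradation (illustrative physics):
# exponential atmospheric attenuation plus additive Gaussian noise.
rng = np.random.default_rng(0)      # seeded, so the perturbation is reproducible

signal = np.full(1000, 1.0)         # clean sensor return
rain_attenuation = 0.4              # assumed extinction coefficient x range
noise_sigma = 0.05                  # assumed sensor noise level

degraded = signal * np.exp(-rain_attenuation) + rng.normal(0, noise_sigma, 1000)

snr_clean = 1.0 / noise_sigma
snr_rain  = np.exp(-rain_attenuation) / noise_sigma
print(snr_clean, snr_rain)   # SNR drops from 20 to ~13.4 in rain
```

Sweeping parameters like `rain_attenuation` in simulation is how engineers locate the exact conditions under which a perception stack starts to fail.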
How Simulation Secures AI Deployment
Faced with these challenges, traditional validation by road testing is obsolete. A hybrid approach, centered on massive simulation, is necessary to reach the billions of virtual kilometers required to demonstrate the statistical safety of AI.
Beyond statistical coverage, simulation offers a decisive advantage: the reproducibility of chaos. When an AI adopts an unpredictable or dangerous behavior in simulation (a “hallucination” or a false detection), the scenario can be replayed exactly to understand the root cause and correct the neural network. This is a correction loop impossible to achieve on open roads where the unexpected is, by definition, fleeting.
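The "reproducibility of chaos" rests on seeded randomness: if every stochastic element of a scenario is driven by a seeded generator, the same seed replays the exact same run. A minimal sketch, with illustrative names:

```python
import random

# Sketch of deterministic scenario replay: with the same seed, the
# randomized event sequence is identical run after run, so a failure
# found once can be replayed for root-cause analysis.
def run_scenario(seed):
    rng = random.Random(seed)
    events = []
    for step in range(5):
        # Randomized disturbances (pedestrian jitter, sensor noise, ...)
        events.append(round(rng.uniform(-1.0, 1.0), 6))
    return events

first_run  = run_scenario(seed=42)
replay_run = run_scenario(seed=42)
assert first_run == replay_run   # the exact failure conditions are reproduced
```

A different seed produces a different run, which is what makes the same machinery usable for both massive variation and exact replay.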
SCANeR: An Infinite Training Ground for Neural Networks
The SCANeR ecosystem meets these requirements by offering a unified platform for:
- Data Generation: Massive creation of varied scenarios for training (Data Farming).
- Fault Injection: Testing AI resilience against failing or noisy sensors.
- HIL/VIL Validation: Integrating the AI model into the vehicle’s real control loop.
In Brief: The Pillars of AI Validation
| AI Challenge | Simulation Solution | AVS Benefit |
| --- | --- | --- |
| Lack of critical data | Synthetic Data Generation | Exhaustive coverage of Corner Cases without physical risk. |
| Non-deterministic “Black Box” | Massive Simulation & Variation | Statistical identification of model limits (SOTIF). |
| Real-Time Constraints | HIL Test Benches | Validation of inference on the real target ECU. |
Embedded AI is the key to autonomy, but its reliability can only be guaranteed by a rigorous validation strategy, where the virtual world prepares the AI to face the complexity of the real world.
