CHALLENGE
Develop a simulation system capable of generating realistic driving scenarios and measuring driver performance in scenarios related to drowsiness.
SOLUTION
Develop a night driving simulation within the SCANeR studio environment that can simultaneously track driving performance and multiple physiological signals, including eye movements, brain dynamics via electroencephalogram (EEG), heart rate variability, body temperature, skin conductance, and facial and hand posture, as well as the other sensor data shown in Figure 1.

DESCRIPTION
The proposed solution required the development of two components: a SCANeR studio scenario and an API to connect our sensors to the simulation software.
To implement the first part of this solution, we developed a scenario along with supporting scripts to simulate an autonomous vehicle losing control, as shown in Figures 2 and 3. In this scenario, the vehicle begins in autonomous mode, cruising in its lane at a constant speed of 100 km/h. At a random point, the vehicle experiences an unplanned deviation to the left or right, simulating a faulty response from the autonomous system. At that moment, the driver must take manual control by pressing a button on the dashboard to bring the vehicle back to the center of the lane. The driver’s reaction time is recorded internally within the program, and the outputs are provided to the experiment supervisor for analysis and interpretation.
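As a rough illustration of the reaction-time logging, the C++ sketch below timestamps the injected deviation and the driver’s take-over button press and reports the difference. The class and event hooks are hypothetical stand-ins for the actual scenario scripts, not SCANeR studio API calls.

```cpp
#include <chrono>
#include <iostream>
#include <optional>

// Hypothetical helper: measures the delay between the scripted deviation
// and the driver's take-over button press.
class ReactionTimer {
public:
    using Clock = std::chrono::steady_clock;

    // Called when the unplanned deviation is injected into the vehicle.
    void onDeviationStart() { deviationStart_ = Clock::now(); }

    // Called when the driver presses the take-over button on the dashboard.
    // Returns the reaction time in milliseconds if a deviation was pending.
    std::optional<double> onTakeOverPressed() {
        if (!deviationStart_) return std::nullopt;
        const auto dt =
            std::chrono::duration<double, std::milli>(Clock::now() - *deviationStart_);
        deviationStart_.reset();
        return dt.count();
    }

private:
    std::optional<Clock::time_point> deviationStart_;
};

int main() {
    ReactionTimer timer;
    timer.onDeviationStart();                 // deviation injected at a random point
    // ... the driver notices the lane departure and presses the button ...
    if (auto rt = timer.onTakeOverPressed())  // value logged for the supervisor
        std::cout << "Reaction time: " << *rt << " ms\n";
}
```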

This experiment can be repeated with variations in scenario parameters to assess how different conditions influence driver fatigue and its effect on take-over performance. Specifically, we envision using SCANeR studio’s environmental features to compare daytime and nighttime driving, as well as dry and rainy conditions. In addition, the flexibility of SCANeR’s scripting lets us adapt the existing scenario by modifying the vehicle dynamics, enabling further validation of driving performance in both alert and fatigued states across different driving modes. The collected multimodal signals will then be correlated with driving performance to support advancements in road safety.
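To make the planned variations concrete, the sketch below enumerates an illustrative grid of scenario conditions (time of day, weather, cruising speed). The names and values are assumptions for illustration, not actual SCANeR studio parameters.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical description of one experimental run's environment settings.
struct ScenarioCondition {
    std::string timeOfDay;   // "day" or "night"
    std::string weather;     // "dry" or "rain"
    double cruiseSpeedKmh;   // constant speed in autonomous mode
};

// One run per combination of environment settings at the baseline 100 km/h.
std::vector<ScenarioCondition> buildConditionGrid() {
    std::vector<ScenarioCondition> grid;
    for (const char* tod : {"day", "night"})
        for (const char* wx : {"dry", "rain"})
            grid.push_back({tod, wx, 100.0});
    return grid;
}

int main() {
    for (const auto& c : buildConditionGrid())
        std::cout << c.timeOfDay << ", " << c.weather << ", "
                  << c.cruiseSpeedKmh << " km/h\n";
}
```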
To synchronize data from multiple sources, we built the API on the existing SCANeR API framework, which transmits data packets from external devices to SCANeR studio via the User Datagram Protocol (UDP). Data transfer is handled by a C++ program that lets the simulation’s export channels read live data from our sensor devices. This setup not only ensures data synchronization but also allows external classification models to control the scenario flow.
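A minimal sketch of such a sensor-to-simulation bridge is shown below, assuming the simulation host listens on a UDP port for packed sensor samples. The packet layout, port number, and address are illustrative assumptions; the corresponding export channels still have to be configured inside SCANeR studio.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <iostream>

#pragma pack(push, 1)
struct SensorPacket {            // hypothetical layout for one multimodal sample
    double timestampSec;         // shared clock used for synchronization
    float  heartRateBpm;
    float  skinConductanceUs;
    float  eegAlphaPower;
};
#pragma pack(pop)

int main() {
    const int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5401);                      // port number is an assumption
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);  // simulation host address

    SensorPacket pkt{12.345, 72.0f, 4.2f, 0.31f};     // live values would come from the devices
    const ssize_t sent = sendto(sock, &pkt, sizeof(pkt), 0,
                                reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
    if (sent < 0) perror("sendto");
    else std::cout << "sent " << sent << " bytes\n";

    close(sock);
    return 0;
}
```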

BENEFITS OF SCANeR
First, SCANeR studio enables all data streams in the project to be synchronized with the driving simulation while offering the flexibility to manipulate every parameter of the driving environment. This allows the research team to correlate driving performance with the multimodal signals in subsequent analysis, working toward real-time assessment of fatigue levels.
Second, the driving simulation scenario created with SCANeR studio can immerse the driver in a realistic environment when integrated with the Motion Platform simulator. This ensures that the driver’s responses are consistent with real-world driving behavior.
NEXT STEPS
- Further improve and develop additional API scripts for each of our intended sensor devices.
- Leverage and develop statistical and AI-based approaches for comprehensive data analysis.
- Integrate the results of data analytics into the driving scenario (through API scripts in SCANeR studio) for real-time and closed-loop detection of driver fatigue.
CONTACT
If you want more information about this project, don’t hesitate to contact YuKai.Wang@uts.edu.au