Use case at UTS


Within Australia, it is estimated that 20-30% of car crashes are due to fatigue (Fatigued Driving, 2021). This is particularly the case for 18-24 year olds, who have fewer opportunities to sleep and whose driving-related cognitive functions are not yet fully developed (Shekari Soleimanloo et al., 2017). With such an immense problem facing society at large, there have been many attempts to improve drowsy-driving detection, ranging from the lane-heading measurement of Morris, Pilcher, and Switzer (2015) to computer vision-based approaches such as that of Patel et al. (2010). Although there is some evidence that vehicle automation helps mitigate fatigued driving, there remains major room for improving the response to the issue (Kaduk et al., 2020; Lin et al., 2021).


Develop a simulation system capable of presenting realistic driving scenarios and measuring user performance in relation to drowsiness.


Develop a simulation of a night-driving scenario within the SCANeR Studio environment that can simultaneously track driving performance and multiple physiological signals, including eye movements, brain dynamics via electroencephalogram (EEG), heart rate variability, body temperature, skin conductance, and face/hand posture, captured by the sensor devices shown in Figure 1.

Figure 1: The use case at UTS


First, SCANeR Studio synchronises the data paths within the project while affording the flexibility to manipulate all parameters of the driving environment. The research team can then perform subsequent data analytics to correlate driving performance with the multimodal signals for assessing levels of fatigue in real time. Second, the driving scenario realised in SCANeR Studio can immerse the driver in the desired environment by connecting SCANeR to the motion-platform simulator. This ensures that the feedback from the driver is compatible with the real-world situation.


The solution required two components to be developed: a SCANeR Studio scenario, and an application programming interface (API) connecting our sensors to that scenario.

To fulfil the former part of the solution, we developed a scenario and the accompanying scripts to mimic an autonomous vehicle that loses control, as shown in Figures 2 and 3. Specifically, the attending driver initially cruises in a lane in autonomous mode at a fixed speed of 100 km/h. A left or right deviation in the driving trajectory is then randomly injected into the vehicle, simulating a faulty reaction from the autonomous system. At that point, the driver must activate manual driving via a button on the dashboard and return the vehicle to the centre of the lane. The driver's reaction time is measured internally within the program, and the output is provided to the experimental supervisor to tabulate and draw conclusions from.
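The takeover-timing logic described above can be sketched as follows. This is an illustrative stub, not the actual SCANeR Studio script: the event-handler names (`onDeviationInjected`, `onManualTakeover`) are assumptions for the example, standing in for the real scenario callbacks.

```cpp
#include <chrono>

// Hypothetical sketch of the reaction-time measurement. In the real scenario
// this logic lives in a SCANeR Studio script; here the simulator events are
// stubbed so the timing logic can be shown on its own.
struct TakeoverTimer {
    std::chrono::steady_clock::time_point deviation_start;
    bool deviation_active = false;

    // Called when the scenario injects the random left/right deviation.
    void onDeviationInjected() {
        deviation_start = std::chrono::steady_clock::now();
        deviation_active = true;
    }

    // Called when the driver presses the manual-driving button. Returns the
    // reaction time in milliseconds, or -1 if no deviation is pending.
    long long onManualTakeover() {
        if (!deviation_active) return -1;
        deviation_active = false;
        auto elapsed = std::chrono::steady_clock::now() - deviation_start;
        return std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
    }
};
```

A monotonic clock is used so the measurement is unaffected by system-time adjustments during a long experiment session.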

Figure 2: A snapshot of this scenario developed

This experiment can then be conducted with altered scenario parameters to examine how pronounced the effects of fatigue are under different conditions. Specifically, we envision using SCANeR Studio's environmental features to contrast night and daytime modes, with and without rain. Thanks to the flexibility of scripting in SCANeR, we can also rework the existing scenario to alter the vehicle dynamics themselves, further validating driving performance (awake vs. fatigued) across different driving modes. Moreover, the recorded multimodal signals will be correlated with driving performance to benefit road safety.
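As a minimal sketch of the correlation analysis mentioned above, Pearson's correlation coefficient between a driving-performance series (e.g. per-trial reaction times) and a physiological series (e.g. eye-closure percentage) can be computed directly. The choice of signals here is an assumption for illustration; the study's actual analytics pipeline is not specified in this document.

```cpp
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Pearson correlation between two equally sampled series: +1 means the
// signals rise and fall together, -1 means they move in opposition.
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    const double mx = std::accumulate(x.begin(), x.end(), 0.0) / n;
    const double my = std::accumulate(y.begin(), y.end(), 0.0) / n;
    double num = 0.0, dx2 = 0.0, dy2 = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        num += (x[i] - mx) * (y[i] - my);  // covariance term
        dx2 += (x[i] - mx) * (x[i] - mx);  // variance of x
        dy2 += (y[i] - my) * (y[i] - my);  // variance of y
    }
    return num / std::sqrt(dx2 * dy2);
}
```

A strong positive coefficient between eye closure and reaction time across trials would be evidence that the physiological channel tracks the behavioural fatigue effect.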

Regarding the API developed to synchronise data from multiple sources, we utilised the existing SCANeR API framework, which communicates data packets over the User Datagram Protocol (UDP). The transfer is achieved through a C++ program that allows the export channels established within the simulation to read live data from our sensor devices. This not only lets us synchronise the data, but also allows external classification models to control the scenario flow.
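The UDP side of that bridge can be sketched with plain POSIX sockets. The packet layout, field names, and port below are assumptions for the example; the actual program writes into the export channels defined by the SCANeR API rather than a hand-rolled struct.

```cpp
#include <arpa/inet.h>
#include <cstring>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

// Hypothetical sensor frame; the real channel set is defined in the scenario.
struct SensorSample {
    double timestamp;   // seconds since experiment start
    float heart_rate;   // beats per minute
    float skin_cond;    // skin conductance, microsiemens
};

// Serialise one sample into a flat byte buffer (host byte order for brevity).
std::vector<char> packSample(const SensorSample& s) {
    std::vector<char> buf(sizeof(SensorSample));
    std::memcpy(buf.data(), &s, sizeof(SensorSample));
    return buf;
}

// Send a packed sample to the simulator host over UDP.
// Returns the number of bytes sent, or -1 on error.
ssize_t sendSample(const std::vector<char>& buf, const char* host, int port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    ssize_t sent = sendto(fd, buf.data(), buf.size(), 0,
                          reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(fd);
    return sent;
}
```

UDP suits this role because a dropped sample is preferable to a delayed one: the simulation must never block waiting on a slow sensor.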

Figure 3: First-person view of the driver during night mode.


1 academic key.


  • Further improving and developing more API scripts for each of our intended sensor devices.
  • Leveraging/developing statistical and AI approaches for comprehensive data analytics.
  • Integrating the outcomes of data analytics into the driving scenario (through API scripts on SCANeR Studio) for real-time and closed-loop detection of fatigue driving.


  • Fatigued Driving. (2021, October 11). Australian Automobile Association.
  • Kaduk, S. I., Roberts, A. P. J., & Stanton, N. A. (2020). Driving performance, sleepiness, fatigue, and mental workload throughout the time course of semi‐automated driving—Experimental data from the driving simulator. Human Factors and Ergonomics in Manufacturing & Service Industries, 31(1), 143–154.
  • Shekari Soleimanloo, S., White, M. J., Garcia-Hansen, V., & Smith, S. S. (2017). The effects of sleep loss on young drivers’ performance: A systematic review. PLOS ONE, 12(8).
  • Lin, T., Chuang, C. H., Hung, Y. C., Fang, C. N., Wu, D., & Wang, Y. K. (2021). A driving performance forecasting system based on brain dynamic state analysis using 4-D convolutional neural networks. IEEE Transactions on Cybernetics, 51(10).