
    Driving simulator indices directory

    Directory content: This directory contains 21 .CSV files with 45 driving-based variables (recorded at 125 Hz) for each driver (ID_Number_SimulatorData). It also includes 21 .MP4 files with videos (recorded at 30 Hz) showing the simulator's main central screen (ID_Number_SimulatorScreenVideo; for more details on the simulator configuration, see the Method and instruments section). Finally, a .CSV file reports the data descriptions (Legend_DataSimulator).

    Method and instruments: Participants drove a semi-dynamic driving simulator (Nervtech™, Ljubljana, Slovenia) mounted on a four-degree-of-freedom motion platform and recreating a mid-sized electric automatic vehicle. Interaction with the virtual vehicle takes place via devices typically present in an automatic-transmission car, so the primary controls of the simulator are physical. To control the vehicle, participants used a Škoda Octavia steering wheel (Škoda Auto a.s., Mladá Boleslav, Czech Republic) and gas and brake pedals (Sensodrive GmbH, Weßling, Germany) while seated on a Ford-Max seat installed on a rotating swivel platform (Ford Motor Company, Dearborn, Michigan, US). The system also includes Audi Q7 blinkers (Audi, Ingolstadt, Germany) and a dedicated Samsung Galaxy A tablet (Samsung Group, Seoul, South Korea), which allows drivers to answer questionnaires or phone messages (if requested). Speedometer and analog tachometer gauges are displayed on a dedicated screen placed behind the steering wheel. A Logitech 5.1 surround audio system (Logitech International S.A., Lausanne, Switzerland) reproduces engine sound, traffic noise, vocal warnings (e.g., the driving-modality shift requests), and navigation instructions.

    Participants were seated, without breaks, for about 180 minutes, either driving or supervising the automation (depending on the driving modality) around the same road scenario without traffic. They were instructed to keep the car in the right lane throughout the experiment (posted speed limit: 130 km/h). The driving scenario was a monotonous six-lane highway circuit developed with SCANeR studio software (AVSimulation, Boulogne-Billancourt, France; version DT 2.5). The road was ∼33.5 km long with a 3.5 m wide median; it included guardrails on both sides of the carriageway and was surrounded by an empty, monotonous grassy meadow. The scenario was displayed on three 49″ screens (3840 × 2160 pixels each) set in a panoramic arrangement to simulate the horizon of the virtual world (∼130° field of view); drivers sat about 135 cm from the main central screen. Finally, the video output of the central simulator screen was recorded with Open Broadcaster Software (OBS; OBS Project, available at https://obsproject.com/; version 25.0.8).
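A practical detail when working with these files is aligning the 125 Hz simulator variables with the 30 Hz screen videos. The following is a minimal sketch (function names are our own, not part of the dataset) of how a sample index in a driving-data file maps to elapsed time and to the corresponding video frame:

```python
# Minimal alignment sketch (hypothetical helper names, not from the dataset):
# convert a 125 Hz sample index to elapsed seconds, and find the 30 Hz
# screen-video frame that covers that moment.

SIM_RATE_HZ = 125  # sampling rate of the driving-based variables
VIDEO_FPS = 30     # frame rate of the simulator screen videos

def sample_time_s(sample_index: int) -> float:
    """Elapsed time in seconds for a given 125 Hz sample index."""
    return sample_index / SIM_RATE_HZ

def video_frame_for_sample(sample_index: int) -> int:
    """Index of the 30 Hz video frame covering this 125 Hz sample."""
    return int(sample_time_s(sample_index) * VIDEO_FPS)
```

For example, sample 125 corresponds to t = 1 s and therefore to video frame 30, under the assumption that both recordings start at the same instant.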

    Subjective ratings directory

    Directory content: This directory contains six .CSV files with screening (i.e., IDs ScreeningData) and subjective (i.e., IDs SubjectiveData; MEQr, SSS, BORG, NASA-TLX, MSAQ; for the acronyms, see below) data. Data descriptions are reported in an .xlsx file (Legend_DataSubjective).

    Method and instruments: Throughout the experiment, we asked drivers to fill in seven questionnaires (digital format). First, we asked drivers to fill in a questionnaire collecting sociodemographic data (e.g., age, handedness). Then, we used the reduced version of the Morningness–Eveningness Questionnaire (MEQr; Adan and Almirall, 1991) and the Spanish Driving Behavior Questionnaire (SDBQ; López de Cózar et al., 2006). The MEQr is a 5-item questionnaire assessing preferences in sleep–wake and activity schedules, allowing the classification of individuals into one of the following subtypes: definitely morning type (22–25 points); moderately morning type (18–21); neither type (12–17); moderately evening type (8–11); and definitely evening type (4–7). The SDBQ is a 34-item questionnaire that identifies adaptive and maladaptive driving styles; drivers answered the SDBQ items on a Likert scale ranging from 0 (never) to 10 (always).

    To assess drivers' perceived sleepiness and fatigue at three separate measurement times (i.e., pre-driving, after 90 min of driving, and at the end of the session [after ∼180 min]), we administered the Stanford Sleepiness Scale (SSS; Hoddes et al., 1973) and the Borg Scale of Perceived Exertion (BORG; Borg, 1998). The SSS provides a global measure of how alert a person feels, ranging from 1 to 7. The BORG indicates the level of fatigue through a numerical scale ranging from 6 ("no exertion at all") to 20 ("maximal exertion"). To fill in both questionnaires after 90 minutes of driving, participants used the dedicated tablet inside the simulator (for further details, see Driving simulator indices directory). If the vehicle was in manual driving modality, drivers were instructed to temporarily stop the vehicle.

    At the end of the driving session, to assess the degree of task complexity and the level of motion sickness experienced, we used the NASA Task Load Index (NASA-TLX; Hart, 2006) and the Motion Sickness Assessment Questionnaire (MSAQ; Gianaros et al., 2001). The NASA-TLX assesses task load through six bipolar dimensions (mental, physical, and temporal demand; own performance; effort; and frustration), yielding a total score between 0 and 100 (higher values indicate higher perceived task load). The MSAQ includes 16 brief statements describing the most common motion sickness symptoms (e.g., "I felt sick to my stomach"), rated on a Likert scale ranging from 1 ("not at all") to 9 ("severely").
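The MEQr cut-offs above map directly to a small classification function. The sketch below (our own illustration; the function name is not part of the dataset) encodes the score ranges exactly as described:

```python
# Illustrative sketch of the MEQr score-to-chronotype mapping described
# above (4-25 total score). The function name is hypothetical.

def meqr_chronotype(score: int) -> str:
    """Classify an MEQr total score into one of the five subtypes."""
    if not 4 <= score <= 25:
        raise ValueError("MEQr total score must be between 4 and 25")
    if score >= 22:
        return "definitely morning type"
    if score >= 18:
        return "moderately morning type"
    if score >= 12:
        return "neither type"
    if score >= 8:
        return "moderately evening type"
    return "definitely evening type"
```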

    Physiological indices directory

    Directory content: This directory contains 21 .CSV files with electrodermal activity (EDA), electrocardiogram (ECG), blood volume pulse (BVP), and respiration data (recorded at 400 Hz) for each driver (i.e., ID_Number_BiosignalsData). All files were pre-processed in two steps to ensure the correct sampling rate: (i) removing lines with repeated timestamps, and (ii) resampling to 400 Hz using a regularly spaced grid and linear interpolation. This directory also includes a .CSV file reporting the data description (Legend_DataBiosignal).

    Method and instruments: We used a BiosignalsPlux Research Kit (PLUX Wireless Biosignals, Lisbon, Portugal) to monitor participants' ECG, BVP, EDA, and respiration. The BiosignalsPlux system includes a wearable hub with an 8-channel configuration (analog ports) at 16-bit per channel resolution, using Bluetooth data transmission for synchronization with the driving simulator.

    We used disposable, self-adhesive, pre-gelled Ag/AgCl electrodes (24 mm diameter) for the ECG and EDA measurements. EDA was recorded with a dedicated single-lead local differential bipolar DC sensor (0–3 Hz bandwidth, 0–100 µS range) with two leads (a positive and a negative lead, 5.0 ± 0.5 cm long each), each ending in a dedicated electrode socket. After cleaning the skin with an alcohol-free disinfectant, we placed the electrodes on the thenar (negative electrode) and hypothenar (positive electrode) eminences of the left hand, leaving enough space on the palm between the two electrodes to minimize the risk of signal artifacts due to the pressure of the hand on the steering wheel. The ECG was recorded with a single-lead local differential bipolar sensor (0.5–100 Hz bandwidth, ±1.47 mV range) including a positive, a negative, and a reference cable, each ending in a dedicated electrode socket. After cleaning the skin, we placed the electrodes on the participant's chest (Lead II configuration): one electrode in the depression below each of the shoulder blades (reference on the left side, positive on the right side) and one electrode (negative) on the fifth intercostal space of the left side.

    BVP was measured through an optical, non-invasive ear-clip sensor (0.02–2.1 Hz bandwidth, 535 ± 10 nm centroid wavelength) including a light emitter (LED) and a detector; the sensor was placed at the center of the left ear lobe. Respiration was recorded using an elastic, adjustable chest belt with a piezoelectric sensor (0.059–1 Hz bandwidth, ±1.50 V range). We placed the belt on the participant's chest, over a cotton short-sleeve T-shirt, ∼2 cm below the pectoral muscles, and connected it to the hub using a dedicated cable of about 110 cm total length.
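The two pre-processing steps (dropping repeated timestamps, then resampling onto a regular 400 Hz grid with linear interpolation) can be sketched as follows. This is our own NumPy illustration of the described procedure, not the authors' actual script:

```python
import numpy as np

# Sketch of the two-step pre-processing described above (our own
# implementation, not the dataset's script): (i) drop rows with repeated
# timestamps, (ii) linearly interpolate onto a regular grid (e.g., 400 Hz).

def resample_signal(timestamps, values, rate_hz=400.0):
    """Return (grid, resampled_values) on a regularly spaced time grid."""
    t = np.asarray(timestamps, dtype=float)
    v = np.asarray(values, dtype=float)
    # (i) keep only the first occurrence of each timestamp
    _, first = np.unique(t, return_index=True)
    keep = np.sort(first)
    t, v = t[keep], v[keep]
    # (ii) regularly spaced grid + linear interpolation
    grid = np.arange(t[0], t[-1], 1.0 / rate_hz)
    return grid, np.interp(grid, t, v)
```

The same recipe applies to the 120 Hz eye-movement files (see Eye movement directory) by changing `rate_hz`.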

    Eye movement directory

    Directory content: This directory contains 20 .CSV files including 187 eye-movement parameters (recorded at 120 Hz) and camera calibrations for each driver (ID_Number_EyeMovementSignals). Of the original 21 files, data from one driver (driver ID #69) are missing due to log system failures during the recording session. All files were pre-processed in two steps to ensure the correct sampling rate: we (i) removed lines with repeated timestamps, and (ii) resampled to 120 Hz using a regularly spaced grid and linear interpolation. This directory also includes a .CSV file reporting the data description (Legend_DataEyeMovement) and a .SEW file (3DWorldModelDescription; readable as a .txt file) reporting the 3D World Model definition.

    Method and instrument: We collected drivers' eye movements with the Smart Eye PRO remote eye-tracking system (120 Hz; v9.0; Smart Eye AB, Gothenburg, Sweden). The device consists of four IR cameras (2 central cameras [12 mm lens] and 2 lateral cameras [8 mm lens]) placed ∼115 cm from the driver's face (angled from below, ∼45°) and covering a total field of view of ∼180°. Multiple IR light sources (three for each lateral camera) were employed to enhance the contrast of the driver's face. For each driver, we adjusted the system (cameras' direction, brightness [aperture], and focus) and then performed a 12-point gaze calibration.

    Thermal imaging directory

    Directory content: This directory contains 18 .AVI files with videos (recorded at 30 Hz) reporting drivers' facial and upper-body skin temperature while driving (ID_Number_FlirCameraVideo). Each video has a corresponding metadata .CSV file (ID_Number_FlirCameraMetaData). Of the original 21 files, three .AVI files and three .CSV files are missing due to a log system failure during the recording session (driver ID #74) and the absence of consent to the dissemination of personal images (driver IDs #66 and #78).

    Method and instrument: To collect drivers' facial and upper-body skin temperature, we used a high-resolution, science-grade LWIR camera (A325sc – FLIR; Teledyne FLIR LLC, Wilsonville, Oregon, USA). The camera (resolution of 640 × 480 pixels) was placed on a support ∼140 cm above the moving platform and ∼140 cm from the driver's face (angled from above, ∼26°). Automatic focus was always employed for the video recording. We asked drivers to put on a cotton short-sleeve T-shirt, so all drivers wore the same upper-body clothing.

    RGB and Depth videos directory

    Directory content: This directory contains 38 .MP4 files with videos (recorded at 30 Hz) showing drivers' movements and body position while driving. These videos are provided both in colored/RGB (19; ID_Number_ColorCameraVideoMask) and B&W (19; ID_Number_DepthCameraVideo) versions. The RGB videos were pre-processed with a customized Matlab script (Mathworks Inc., Natick, MA, USA) to insert a blurred mask over participants' faces to preserve anonymity. We also included the metadata .CSV files for both the colored (19; ID_Number_ColorCameraVideoMetaData) and B&W (19; ID_Number_DepthCameraMetaData) videos. Due to the absence of consent to the dissemination of personal images (driver IDs #66 and #78), four .MP4 files and four .CSV files have not been included.

    Method and instruments: To record RGB and depth videos, we used a depth camera (Intel® Realsense D435 model, 1080p, 15-megapixel HD resolution, infrared RGB camera; Intel Corporation, Santa Clara, CA, US). The camera (resolution of 1920 × 1080 pixels) was placed on a support ∼150 cm above the moving platform and ∼130 cm from the driver's face (angled from above, ∼30°). The recorded signal was continuous.
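The anonymisation step was done with a customized Matlab script; as a rough illustration of the idea (not the authors' implementation), a face region of a video frame can be masked by pixelating it, as in this NumPy sketch with a hypothetical function name and region coordinates:

```python
import numpy as np

# Hedged sketch of face anonymisation: replace a rectangular region of a
# frame with a heavily pixelated version. This is an illustration only;
# the dataset's actual masking was done by a customized Matlab script.

def pixelate_region(frame, top, left, height, width, block=16):
    """Return a copy of `frame` with the given region pixelated."""
    out = frame.copy()
    region = out[top:top + height, left:left + width]
    for r in range(0, height, block):
        for c in range(0, width, block):
            tile = region[r:r + block, c:c + block]
            # overwrite each tile with its mean value
            tile[...] = tile.mean(axis=(0, 1), keepdims=True)
    return out
```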

    Annotations directory

    Directory content: This directory contains 21 .CSV files reporting data on 23 specific drivers' mannerisms and behaviors (e.g., rubbing/holding face, yawning) observed during the driving session (ID_Number_LabelData).

    Method and instruments: To collect the annotation data, two trained and independent raters employed a customized video analysis tool (HADRIAN's EYE software; Di Stasi et al., 2023) to identify and annotate drivers' fatigue- and sleepiness-related mannerisms and behaviors. The tool allows the synchronized reproduction of the videos obtained through the RGB camera (for further details, see RGB and Depth videos directory) and the recording of the main central screen of the simulator (for further details, see Driving simulator indices directory). Both videos were automatically divided by the HADRIAN's EYE software into a series of 5-min chunks that were then shuffled and presented to the raters in randomized order to minimize bias (e.g., overestimating the level of fatigue towards the end of the experimental session). A customized Matlab code (Mathworks Inc., Natick, MA, USA) was then used to detect discrepancies between the outputs of the two raters; two types of discrepancies were detected: (i) the type, and (ii) the timing of the detected mannerism/behavior. In case of discrepancies, a third independent rater reviewed the videos to resolve them.
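The two discrepancy checks (type and timing) can be sketched as a simple pairwise comparison of the two raters' annotation lists. This is our own illustration, not the authors' Matlab code; the data layout and the timing tolerance are assumptions:

```python
# Hedged sketch of the two discrepancy checks described above. Each rater's
# annotations are assumed to be (onset_seconds, label) tuples, paired in
# order; the timing tolerance is an arbitrary illustrative value.

def find_discrepancies(rater_a, rater_b, timing_tol_s=1.0):
    """Return a list of ("type"|"timing", onset, label_a, label_b) issues."""
    issues = []
    for (t_a, lab_a), (t_b, lab_b) in zip(rater_a, rater_b):
        if lab_a != lab_b:
            issues.append(("type", t_a, lab_a, lab_b))     # labels disagree
        elif abs(t_a - t_b) > timing_tol_s:
            issues.append(("timing", t_a, lab_a, lab_b))   # onsets too far apart
    return issues
```

Annotations flagged this way would then go to the third rater for resolution, mirroring the procedure above.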