
    Comparing spatially static and dynamic vibrotactile take-over requests in the driver seat

    Vibrotactile stimuli can be effective as warning signals, but their effectiveness as directional take-over requests in automated driving is as yet unknown. This study aimed to investigate the correct response rate, reaction times, and eye and head orientation for static versus dynamic directional take-over requests presented via vibrating motors in the driver seat. In a driving simulator, eighteen participants performed three sessions: 1) a session involving no driving (Baseline), 2) driving a highly automated car without an additional task (HAD), and 3) driving a highly automated car while performing a mentally demanding task (N-Back). Per session, participants received four directional static (in the left or right part of the seat) and four dynamic (moving from one side of the seat towards the opposite left or right side) take-over requests via two 6 × 4 motor matrices embedded in the seat back and bottom. In the Baseline condition, participants reported whether the cue was left or right; in the HAD and N-Back conditions, participants had to change lanes to the left or to the right according to the directional cue. The correct response rate was operationalized as the accuracy of the self-reported direction (Baseline session) and the accuracy of the lane change direction (HAD & N-Back sessions). The results showed that the correct response rate ranged between 94% for static patterns in the Baseline session and 74% for dynamic patterns in the N-Back session, although these effects were not statistically significant. Steering wheel touch and steering input reaction times were approximately 200 ms faster for static patterns than for dynamic ones. Eye tracking results revealed a correspondence between head/eye-gaze direction and lane change direction, and showed that head and eye-gaze movements were initiated faster for static vibrations than for dynamic ones. In conclusion, vibrotactile stimuli presented via the driver seat are effective as warnings, but their effectiveness as directional take-over requests may be limited. The present study may encourage further investigation into how to get drivers safely back into the loop.
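    To make the distinction between the two cue types concrete, the sketch below generates static and dynamic activation sequences for a 6 × 4 motor matrix. It is a hypothetical illustration: the matrix orientation, timing, and intensity values are assumptions, not details taken from the study.

```python
import numpy as np

ROWS, COLS = 6, 4  # one motor matrix; columns assumed to run left-to-right

def static_pattern(side, duration_ms=1000):
    """Static cue: every motor on the cued half vibrates for the whole duration."""
    frame = np.zeros((ROWS, COLS))
    half = range(COLS // 2) if side == "left" else range(COLS // 2, COLS)
    frame[:, list(half)] = 1.0
    return [(frame, duration_ms)]          # a single frame held for the full cue

def dynamic_pattern(side, duration_ms=1000):
    """Dynamic cue: activation sweeps column by column towards the cued side."""
    order = range(COLS - 1, -1, -1) if side == "left" else range(COLS)
    frames = []
    for col in order:
        frame = np.zeros((ROWS, COLS))
        frame[:, col] = 1.0                # only one column vibrates at a time
        frames.append((frame, duration_ms // COLS))
    return frames

# A dynamic "right" cue sweeps from the left-most to the right-most column.
for frame, ms in dynamic_pattern("right"):
    print(frame[0], f"held for {ms} ms")
```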

    Supplementary data for the paper: Rolling out the red (and green) carpet: supporting driver decision making in automation-to-manual transitions

    Supplementary data for the paper: Eriksson, A.*, Petermeijer, S. M.*, Zimmerman, M., De Winter, J. C. F., Bengler, K. J., & Stanton, N. (in press). Rolling out the red (and green) carpet: supporting driver decision making in automation-to-manual transitions. IEEE Transactions on Human-Machine Systems. (* = joint first authors)

    Designing an AI-companion to support the driver in highly autonomous cars

    In this paper, we propose a model for an AI-Companion for conditionally automated cars, able to maintain the driver's awareness of the environment and also able to design take-over requests (TORs) on the fly, with the goal of better supporting the driver in case of a disengagement. Our AI-Companion would interact with the driver in two ways. First, it could provide feedback to the driver in order to raise the driver's Situation Awareness (SA), prevent them from dropping out of the supervision loop, and so improve take-overs during critical situations by decreasing their cognitive workload. Second, in the case of a TOR, it could make a smart choice of modalities for conveying the request to the driver. In particular, the AI-Companion can interact with the driver using many modalities, such as visual messages (warning lights, images, text, etc.), auditory signals (sound, speech, etc.) and haptic technologies (vibrations in different parts of the seat: back, headrest, etc.). The ultimate goal of the proposed approach is to design smart HMIs in semi-autonomous vehicles that are able to 1) understand the user's state and fitness to drive, 2) understand the current external situation (vehicle status and behavior) in order to minimize automation surprise and maximize safety and trust, and 3) leverage AI to provide adaptive TORs and useful feedback to the driver.
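    As an illustration of the "smart choice of modalities" described above, the following sketch maps an assumed driver state and time budget to a set of TOR channels. The class, thresholds, and heuristic rules are hypothetical, not the authors' model.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_on_road: bool     # e.g. from a driver-monitoring camera
    cognitive_load: float  # 0.0 (low) to 1.0 (high), e.g. estimated from a secondary task

def select_tor_modalities(state, time_budget_s):
    """Heuristic sketch: escalate to more intrusive channels when the driver
    is visually or mentally occupied, or when little time remains."""
    modalities = {"visual"}                   # warning lights / text shown by default
    if not state.eyes_on_road:
        modalities.add("auditory")            # sound or speech reaches averted eyes
    if state.cognitive_load > 0.7 or time_budget_s < 5.0:
        modalities |= {"auditory", "haptic"}  # add seat vibrations for urgent cases
    return modalities

print(select_tor_modalities(DriverState(eyes_on_road=False, cognitive_load=0.8), 4.0))
# -> {'visual', 'auditory', 'haptic'} (set order may vary)
```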

    Complementing Haptic Shared Control with visual feedback for obstacle avoidance

    For automated vehicles (SAE Level 2-3), part of the challenge lies in communicating to the driver what control actions the automation is taking and will take, and what its capabilities are. A promising approach is haptic shared control (HSC), which uses continuous torques on the steering wheel to communicate the automation's current control actions. However, torques on the steering wheel cannot communicate future spatiotemporal constraints that might be required to judge appropriate overtaking or obstacle avoidance. A visualisation of the predicted vehicle trajectory, along with velocity-dependent constraints on achievable trajectories, is proposed. The goal of this paper is to experimentally compare obstacle avoidance behaviour while driving with the designed visualisation against driving with a previously designed HSC, as well as with the two support systems combined. It is expected that adding visual feedback improves obstacle avoidance and user acceptance, and reduces control effort with respect to HSC only. In a driving simulator experiment, 26 participants drove three trials with each feedback condition (visual, HSC, and combination) and had to avoid obstacles that appeared with a time to collision of either 1.85 s (critical) or 4.7 s (non-critical). Results showed that, compared to HSC only, the combination of HSC and visual feedback yielded slightly smaller safety margins to the obstacle, a significant reduction of control activity on straights, and increased subjective acceptance ratings. Visual feedback and HSC offered a beneficial synergy: the visual feedback seemed to allow drivers to anticipate the effect of their steering actions on the car's trajectory more accurately, while the HSC reduced the intra-subject variability. Future research should investigate the effects of added visual feedback in more detail, specifically in terms of its effectiveness in communicating automation capabilities and its influence on driver gaze behavior.
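    A minimal sketch of the kind of prediction such a visualisation could build on is given below, assuming a kinematic bicycle model; the function names, wheelbase, and comfort limit are illustrative assumptions rather than the authors' implementation.

```python
import math

def predict_trajectory(v, steering_angle, wheelbase=2.7, horizon_s=2.0, dt=0.1):
    """Kinematic bicycle model: hold the current speed and steering angle
    constant and roll out the path the car would follow."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for _ in range(int(horizon_s / dt)):
        heading += (v / wheelbase) * math.tan(steering_angle) * dt
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        points.append((x, y))
    return points

def max_curvature(v, max_lat_accel=4.0):
    """Velocity-dependent constraint: tightest curvature (1/m) that keeps
    lateral acceleration below a comfort limit (a_lat = v^2 * curvature)."""
    return max_lat_accel / max(v, 0.1) ** 2

path = predict_trajectory(v=25.0, steering_angle=0.02)
print(f"position after 2 s: {path[-1]}, curvature limit at 25 m/s: {max_curvature(25.0):.4f} 1/m")
```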

    Turmoil behind the automated wheel: an embodied perspective on current HMI developments in partially automated vehicles

    Cars that include combinations of automated functions, such as Adaptive Cruise Control (ACC) and Lane Keeping (LK), are becoming more and more available to consumers, and higher levels of automation are under development. In the use of these systems, the role of the driver is changing. This new interaction between the driver and the vehicle may result in several human factors problems if not sufficiently supported. These issues include driver distraction, loss of situational awareness and high workload during mode transitions. A large conceptual gap exists on how we can create safe, efficient and fluent interactions between the car and driver both during automation and mode transitions. This study looks at different HMIs from a new perspective: Embodied Interaction. The results of this study identify design spaces that are currently underutilized and may contribute to safe and fluent driver support systems in partially automated cars.

    Is accommodation a confounder in pupillometry research?

    Much psychological research uses pupil diameter measurements to investigate the cognitive and emotional effects of visual stimuli. A potential problem is that accommodating at a nearby point causes the pupil to constrict. This study examined to what extent accommodation is a confounder in pupillometry research. Participants solved multiplication problems at different distances (Experiment 1) and looked at line drawings with different monocular depth cues (Experiment 2) while their pupil diameter, refraction, and vergence angle were recorded using a photorefractor. Experiment 1 showed that the pupils dilated while performing the multiplications, for all presentation distances. Pupillary constriction due to accommodation was not strong enough to override pupil dilation due to cognitive load. Experiment 2 showed that monocular depth cues caused a small shift in refraction in the expected direction. We conclude that, for the young student sample we used, pupil diameter measurements are not substantially affected by accommodation.
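    The confound question can be framed as a regression problem: if pupil diameter is modelled jointly on refraction (accommodation state) and task load, the relative size of the two coefficients indicates how much accommodation matters. The sketch below uses synthetic data with assumed effect sizes, not the study's measurements.

```python
import numpy as np

# Hypothetical per-trial measurements (the study recorded these with a photorefractor).
rng = np.random.default_rng(0)
refraction_D   = rng.uniform(-3.0, -1.0, 200)   # accommodation state in dioptres
cognitive_load = rng.integers(0, 2, 200)        # 0 = rest, 1 = multiplication task
pupil_mm = 4.0 + 0.1 * refraction_D + 0.8 * cognitive_load + rng.normal(0, 0.3, 200)

# Linear model pupil ~ refraction + load: compare coefficient sizes to judge
# whether accommodation meaningfully confounds the task effect.
X = np.column_stack([np.ones_like(refraction_D), refraction_D, cognitive_load])
beta, *_ = np.linalg.lstsq(X, pupil_mm, rcond=None)
print(f"intercept={beta[0]:.2f}, refraction slope={beta[1]:.2f} mm/D, load effect={beta[2]:.2f} mm")
```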

    Long non-coding RNA MEG3 inhibits adipogenesis and promotes osteogenesis of human adipose-derived mesenchymal stem cells via miR-140-5p

    lncRNAs are an emerging class of regulators involved in multiple biological processes. MEG3, an lncRNA that acts as a tumor suppressor, has been reported to be linked with osteogenic differentiation of MSCs. However, limited knowledge is available concerning the roles of MEG3 in the multilineage differentiation of hASCs. The current study demonstrated that MEG3 was downregulated during adipogenesis and upregulated during osteogenesis of hASCs. Further functional analysis showed that knockdown of MEG3 promoted adipogenic differentiation, whereas it inhibited osteogenic differentiation of hASCs. Mechanistically, MEG3 may exert its role by regulating miR-140-5p. Moreover, miR-140-5p was upregulated during adipogenesis and downregulated during osteogenesis in hASCs, and its expression was negatively correlated with that of MEG3. In conclusion, MEG3 participates in the balance of adipogenic and osteogenic differentiation of hASCs, and the mechanism may involve regulating miR-140-5p.
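    The reported inverse relationship between MEG3 and miR-140-5p expression can be quantified with a simple correlation, as sketched below on made-up expression values (illustrative only; not data from the study).

```python
import numpy as np
from scipy import stats

# Hypothetical relative expression across adipogenic differentiation timepoints.
days      = np.array([0, 3, 7, 14])
meg3      = np.array([1.0, 0.7, 0.4, 0.2])   # MEG3 falls during adipogenesis
mir140_5p = np.array([1.0, 1.6, 2.3, 3.1])   # miR-140-5p rises during adipogenesis

r, p = stats.pearsonr(meg3, mir140_5p)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # strongly negative r, consistent with an inverse relationship
```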