
    Navigation and interaction in a real-scale digital mock-up using natural language and user gesture

    This paper presents a new real-scale 3D system and summarises first results on multi-modal navigation and interaction interfaces. The work is part of the CALLISTO-SARI collaborative project, which aims at building an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) combined with user gesture. A study of the system using a subjective measure (Simulator Sickness Questionnaire, SSQ) and an objective measure (Center of Gravity, COG) shows that the natural-language and gesture-based interfaces induced less cyber-sickness than the device-based interfaces; the gesture-based interfaces are therefore more efficient than the device-based ones. FUI CALLISTO-SAR
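    As a rough illustration (not taken from the paper), the Python sketch below shows how the SSQ measure mentioned above is commonly scored, using the subscale weights usually attributed to Kennedy et al. (1993) and hypothetical ratings.

        # Minimal sketch of standard SSQ scoring; the subscale weights are the
        # commonly published Kennedy et al. (1993) values (assumed here), and the
        # inputs are raw subscale sums (each the sum of 0-3 symptom ratings).
        def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
            nausea = 9.54 * nausea_raw
            oculomotor = 7.58 * oculomotor_raw
            disorientation = 13.92 * disorientation_raw
            total = 3.74 * (nausea_raw + oculomotor_raw + disorientation_raw)
            return nausea, oculomotor, disorientation, total

        # Hypothetical participant with raw subscale sums of 3, 4 and 2.
        print(ssq_scores(3, 4, 2))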

    3D Face tracking and gaze estimation using a monocular camera

    Estimating a user's gaze direction is an emerging user-interaction technology with numerous applications for which current methods are becoming less effective. In this paper, a new method is presented for estimating the gaze direction using Canonical Correlation Analysis (CCA), which finds a linear relationship between two datasets defining the face pose and the corresponding facial appearance changes. Iris tracking is then performed by blob detection using a 4-connected component labeling algorithm. Finally, a gaze vector is calculated from the gathered eye properties. Results obtained on datasets and from real-time input confirm the robustness of this method.
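    As a minimal sketch (not the authors' pipeline), the Python snippet below shows the two building blocks named in the abstract: CCA relating facial-appearance features to head pose, and iris blob detection via 4-connected component labeling. The feature dimensions, synthetic data and darkness threshold are illustrative assumptions.

        import numpy as np
        from sklearn.cross_decomposition import CCA
        from scipy import ndimage

        rng = np.random.default_rng(0)

        # CCA: linear relationship between appearance features and head pose.
        appearance = rng.normal(size=(200, 50))              # e.g. flattened face patches
        pose = appearance[:, :3] @ rng.normal(size=(3, 3))   # synthetic yaw/pitch/roll
        cca = CCA(n_components=3).fit(appearance, pose)
        pose_pred = cca.predict(appearance)                  # linear pose estimate

        # Iris tracking: blob detection with 4-connected component labeling.
        eye_patch = rng.random((40, 60))                     # stand-in grayscale eye image
        dark_mask = eye_patch < 0.05                         # iris/pupil pixels are darkest
        four_conn = np.array([[0, 1, 0],
                              [1, 1, 1],
                              [0, 1, 0]])                    # 4-connectivity structure
        labels, n_blobs = ndimage.label(dark_mask, structure=four_conn)
        centroids = ndimage.center_of_mass(dark_mask, labels, range(1, n_blobs + 1))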

    A dual-cameras-based driver gaze mapping system with an application on non-driving activities monitoring

    Characterisation of the driver's non-driving activities (NDAs) is of great importance to the design of the take-over control strategy in Level 3 automation. Gaze estimation is a typical approach to monitoring the driver's behaviour, since eye gaze is normally engaged with human activity. However, current eye-gaze tracking techniques are either costly or intrusive, which limits their applicability in vehicles. This paper proposes a low-cost, non-intrusive dual-camera gaze-mapping system that visualises the driver's gaze as a heat map. The challenges introduced by complex head movement during NDAs and by camera distortion are addressed with a nonlinear polynomial model that establishes the relationship between face features and eye gaze on the simulated driver's view. In the in-vehicle experiment, the root mean square error of the system is 7.80±5.99 pixels in the X direction and 4.64±3.47 pixels in the Y direction at an image resolution of 1440 x 1080 pixels. The system is successfully demonstrated on three NDAs involving visual attention. Acting as a generic tool to monitor the driver's visual attention, this technique will have wide applications in NDA characterisation for the intelligent design of take-over strategies and in driving-environment awareness for current and future automated vehicles.
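    The Python snippet below is a hedged sketch of the general idea of a polynomial gaze-mapping model with per-axis RMSE evaluation; the polynomial degree, the choice of face features and the synthetic data are assumptions, not the authors' calibration procedure.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        face_feats = rng.normal(size=(500, 4))        # e.g. head yaw/pitch + pupil offsets
        true_gaze = np.column_stack([                 # synthetic gaze point in pixels
            720 + 300 * face_feats[:, 0] + 40 * face_feats[:, 2] ** 2,
            540 + 200 * face_feats[:, 1] + 30 * face_feats[:, 3] ** 2,
        ])

        # Nonlinear polynomial mapping from face features to the gaze point.
        model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        model.fit(face_feats[:400], true_gaze[:400])  # calibration split
        pred = model.predict(face_feats[400:])        # held-out evaluation

        rmse_xy = np.sqrt(np.mean((pred - true_gaze[400:]) ** 2, axis=0))
        print(f"RMSE X: {rmse_xy[0]:.2f} px, RMSE Y: {rmse_xy[1]:.2f} px")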

    Low-cost Geometry-based Eye Gaze Detection using Facial Landmarks Generated through Deep Learning

    Introduction: In the realm of human-computer interaction and behavioral research, accurate real-time gaze estimation is critical. Traditional methods often rely on expensive equipment or large datasets, which are impractical in many scenarios. This paper introduces a novel, geometry-based approach to address these challenges, utilizing consumer-grade hardware for broader applicability. Methods: We leverage recent face-landmark detection neural networks capable of fast inference on consumer-grade chips to generate accurate and stable 3D landmarks of the face and iris. From these, we derive a small set of geometry-based descriptors forming an 8-dimensional manifold that represents the eye and head movements. These descriptors are then used to formulate linear equations for predicting the eye-gaze direction. Results: Our approach predicts gaze with an angular error of less than 1.9 degrees, rivaling state-of-the-art systems while operating in real time and requiring negligible computational resources. Conclusion: The developed method marks a significant step forward in gaze-estimation technology, offering a highly accurate, efficient, and accessible alternative to traditional systems. It opens up new possibilities for real-time applications in diverse fields, from gaming to psychological research.
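    A minimal Python sketch of the linear-prediction step described in the abstract follows: an 8-dimensional geometric descriptor is mapped to gaze angles by least squares, and accuracy is reported as a mean angular error. The descriptor contents and data are assumptions; this is not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        descriptors = rng.normal(size=(300, 8))               # 8-D eye/head geometry descriptors
        gaze_angles = descriptors @ rng.normal(size=(8, 2))   # synthetic (yaw, pitch), radians

        # Linear model with a bias term, fitted by least squares on a calibration split.
        X = np.hstack([descriptors, np.ones((300, 1))])
        W, *_ = np.linalg.lstsq(X[:250], gaze_angles[:250], rcond=None)
        pred = X[250:] @ W

        def to_unit_vec(yaw_pitch):
            # Convert (yaw, pitch) angles to 3-D unit gaze vectors.
            yaw, pitch = yaw_pitch[:, 0], yaw_pitch[:, 1]
            return np.column_stack([np.cos(pitch) * np.sin(yaw),
                                    np.sin(pitch),
                                    np.cos(pitch) * np.cos(yaw)])

        cosines = np.sum(to_unit_vec(pred) * to_unit_vec(gaze_angles[250:]), axis=1)
        angular_err = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
        print(f"mean angular error: {angular_err.mean():.2f} degrees")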

    Spotting Agreement and Disagreement: A Survey of Nonverbal Audiovisual Cues and Tools

    While detecting and interpreting temporal patterns of non-verbal behavioural cues in a given context is a natural and often unconscious process for humans, it remains a rather difficult task for computer systems. Nevertheless, it is an important one to achieve if the goal is to realise naturalistic communication between humans and machines. Machines that are able to sense social attitudes like agreement and disagreement and respond to them in a meaningful way are likely to be welcomed by users, owing to the more natural, efficient and human-centered interaction they are bound to experience. This paper surveys the nonverbal cues that can be present during displays of agreement and disagreement and lists a number of tools that could be useful in detecting them, as well as a few publicly available databases that could be used to train these tools for the analysis of spontaneous, audiovisual instances of agreement and disagreement.
