
    A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms

    This paper presents a review of research on eye gaze estimation techniques and applications, which has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, head-mounted, automotive and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome of this review is the realization of a need to develop standardized methodologies for performance evaluation of gaze tracking systems and to achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed. (25 pages, 13 figures. Accepted for publication in IEEE Access in July 2017.)
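
    As one illustration of the kind of standardized accuracy metric such an evaluation framework would specify, the sketch below converts the on-screen offset between a target and a tracker's gaze estimate into degrees of visual angle, the unit in which gaze-tracking accuracy is conventionally reported. The function name, screen geometry and example values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def angular_error_deg(target_px, gaze_px, px_per_mm, viewing_dist_mm):
    """Angular gaze error (degrees of visual angle) between an on-screen
    target and the tracker's estimate, for a fronto-parallel screen.

    target_px, gaze_px : (x, y) screen coordinates in pixels
    px_per_mm          : screen pixel density (pixels per millimetre)
    viewing_dist_mm    : eye-to-screen distance in millimetres
    """
    # On-screen offset converted from pixels to millimetres
    d_mm = np.linalg.norm((np.asarray(gaze_px, float) -
                           np.asarray(target_px, float)) / px_per_mm)
    # Angle subtended at the eye by that offset
    return np.degrees(np.arctan2(d_mm, viewing_dist_mm))

# Example: 60 cm viewing distance, ~96 dpi screen, 50 px estimation error
print(angular_error_deg((960, 540), (1000, 560), 3.78, 600.0))  # ~1.13 degrees
```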

    New approaches for mixed reality in urban environments: the CINeSPACE project

    The CINeSPACE (www.cinespace.eu) project allows tourists to access the rich cultural heritage of urban environments by literally morphing the user into the past through the use of multimedia archives. Tourists use a device that combines a PDA-type unit, with a GIS interface displayed on a touch screen to help the user navigate and select multimedia content, and video binoculars that create the augmented reality effects. In addition to this mode of interaction, a survey of Mixed Reality user interaction paradigms will be presented. A key feature of Mixed Reality user interfaces is the object identification and annotation methods available to the user; a survey of these, including a review of the GeoConcepts ontology annotation methodology used in the CINeSPACE device, will be presented. (Peer Reviewed)

    EVEN-VE: Eyes Visibility Based Egocentric Navigation for Virtual Environments

    Navigation is one of the 3D interactions often needed to interact with a synthetic world. The latest advancements in image processing have made gesture-based interaction with a virtual world possible. However, a 3D virtual world can respond to a user's gesture far faster than the gesture itself can be posed. To bring faster and more natural postures into the realm of Virtual Environments (VEs), this paper presents a novel eyes-based interaction technique for navigation and panning. Dynamic wavering and positioning of the eyes are interpreted by the system as interaction instructions: opening the eyes after keeping them closed for a distinct time threshold activates forward or backward navigation, while panning over the xy-plane is performed through 2-degree-of-freedom head gestures (rolling and pitching). The proposed technique was implemented in a case-study project, EWI (Eyes Wavering based Interaction). In EWI, real-time detection and tracking of the eyes are performed by OpenCV libraries at the back end, and dynamic mapping is performed in OpenGL to interactively follow the trajectory of both eyes. The technique was evaluated in two separate sessions by a total of 28 users to assess the accuracy, speed and suitability of the system in Virtual Reality (VR). Using an ordinary camera, an average accuracy of 91% was achieved. Assessment with a high-quality camera showed that the accuracy of the system could be raised further, alongside an increase in navigation speed. Results of the unbiased statistical evaluations demonstrate the applicability of the system in the emerging domains of virtual and augmented reality.
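
    The eye-closure trigger described above (navigation activated when the eyes reopen after being closed past a time threshold) can be sketched with OpenCV as follows. This is not the authors' EWI code; the Haar-cascade detectors, camera index and 0.5 s threshold are illustrative assumptions.

```python
import time
import cv2

# Illustrative assumptions: OpenCV's bundled Haar cascades, 0.5 s threshold.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

CLOSE_THRESHOLD_S = 0.5  # distinct time threshold (value assumed)
closed_since = None      # time when the eyes were last seen to disappear

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_visible = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h // 2, x:x + w]  # eyes lie in the upper face half
        if len(eye_cascade.detectMultiScale(roi)) >= 1:
            eyes_visible = True
    if not eyes_visible:
        closed_since = closed_since or time.time()
    else:
        # Opening preceded by a long-enough closure triggers navigation.
        if closed_since and time.time() - closed_since >= CLOSE_THRESHOLD_S:
            print("toggle forward/backward navigation")  # hand off to the VE
        closed_since = None
    cv2.imshow("eye-closure trigger sketch", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```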

    Immersive Teleoperation of the Eye Gaze of Social Robots: Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes

    This paper presents a new teleoperation system, called stereo gaze-contingent steering (SGCS), able to seamlessly control the vergence, yaw and pitch of the eyes of a humanoid robot (here an iCub robot) from the actual gaze direction of a remote pilot. The video streams captured by the cameras embedded in the mobile eyes of the iCub are fed into an HTC Vive head-mounted display equipped with an SMI binocular eye tracker. SGCS achieves an effective coupling between the eye-tracked gaze of the pilot and the robot's eye movements. It both ensures a faithful reproduction of the pilot's eye movements, which is a prerequisite for the readability of the robot's gaze patterns by its interlocutor, and maintains the pilot's oculomotor visual cues, which avoids fatigue and sickness due to sensorimotor conflicts. We assess the precision of this servo-control by asking several pilots to gaze towards known objects positioned in the remote environment. We demonstrate that vergence can be controlled with a precision similar to that of the eyes' azimuth and elevation. This system opens the way for robot-mediated human interactions in the personal space, notably when objects in the shared working space are involved.
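
    A minimal sketch of the geometry behind such a coupling: given the left- and right-eye gaze direction vectors reported by a binocular eye tracker, the shared yaw (azimuth), pitch (elevation) and vergence angles to command the robot's eyes can be derived as below. The function name, coordinate convention and example values are assumptions for illustration, not the SGCS implementation.

```python
import numpy as np

def gaze_to_yaw_pitch_vergence(left_dir, right_dir):
    """Yaw, pitch and vergence (radians) from the gaze directions of the
    two eyes, expressed in a common head frame (x right, y up, z forward)."""
    l = np.asarray(left_dir, dtype=float)
    r = np.asarray(right_dir, dtype=float)
    l /= np.linalg.norm(l)
    r /= np.linalg.norm(r)
    cyclopean = (l + r) / np.linalg.norm(l + r)   # mean "version" direction
    yaw = np.arctan2(cyclopean[0], cyclopean[2])  # azimuth about the y axis
    pitch = np.arcsin(cyclopean[1])               # elevation
    # Vergence: angle between the two visual axes
    vergence = np.arccos(np.clip(np.dot(l, r), -1.0, 1.0))
    return yaw, pitch, vergence

# Example: both eyes converging slightly on a near, centred target
print(gaze_to_yaw_pitch_vergence([0.05, 0.0, 1.0], [-0.05, 0.0, 1.0]))
# yaw ~ 0, pitch ~ 0, vergence ~ 0.1 rad
```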

    Digital technologies in architecture and engineering: Exploring an engaged interaction within curricula

    This chapter focuses on the development and adoption of new Multimedia, Computer Aided Design, and other ICT technologies in both Architecture and Computer Science curricula, and highlights the multidisciplinary work that can be accomplished when these two areas work together. We describe in detail the educational skills addressed and the research developed, and we highlight the contributions towards improving teaching and learning in those areas. We discuss in detail the role of digital technologies within the Architecture curricula, such as Virtual Reality, Augmented Reality, Multimedia and 3D modelling software systems, as well as design processes and their evaluation tools, such as Shape Grammar and Space Syntax.

    How can Extended Reality Help Individuals with Depth Misperception?

    Despite recent real-world uses of Extended Reality (XR) in the treatment of patients, some areas remain under-explored. One gap in the research is how XR can improve depth perception for patients. Accordingly, the depth perception process, both in XR settings and in human vision, is explored, and trackers, visual sensors and displays, as the assistive tools of XR settings, are scrutinized to extract their potential to influence users' depth perception experience. Depth perception enhancement relies not only on depth perception algorithms, but also on visualization algorithms, new display technologies, gains in computational power, and advances in knowledge of the neural mechanisms of the visual apparatus. Finally, it is discussed that XR holds assistive features not only for the improvement of vision impairments but also for their diagnosis, although each patient requires a specific XR setting, since individuals with the same disease can show different neural or cognitive reactions.
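
    One concrete geometric relation underlying the depth cues that XR displays and depth-sensing pipelines work with is the stereo (binocular) disparity model: depth is inversely proportional to disparity, Z = f·B/d. A minimal sketch of that relation follows; the focal length and baseline values are chosen purely for illustration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: depth Z = f * B / d.

    disparity_px : horizontal pixel offset between the two views
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two cameras (or eyes), in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 1000 px focal length, 64 mm inter-ocular baseline
print(depth_from_disparity(32.0, 1000.0, 0.064))  # 2.0 m
```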