16,849 research outputs found

    Multi-user Gaze-based Interaction Techniques on Collaborative Touchscreens

    Eye gaze enables implicit, fast, and hands-free input for a variety of use cases, yet the majority of techniques focus on single-user contexts. In this work, we present an exploration of gaze techniques for users interacting together on the same surface. We explore interaction concepts that exploit two states in an interactive system: 1) users visually attending to the same object in the UI, or 2) users focusing on separate targets. With the increasing availability of eye tracking, interfaces can exploit these states, for example to dynamically personalise content on the UI for each user, and to provide a merged or compromise view of an object when both users' gaze falls upon it. These concepts are explored with a prototype horizontal interface that tracks the gaze of two users facing each other. We build three applications that illustrate different mappings of gaze to multi-user support: an indoor map with gaze-highlighted information, an interactive tree-of-life visualisation that dynamically expands on users' gaze, and a world-map application with gaze-aware fisheye zooming. We conclude with insights from a public deployment of this system, pointing toward the engaging and seamless ways in which eye-based input integrates into collaborative interaction.
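    As a rough illustration of the two states this abstract describes, a minimal sketch that classifies two users' gaze as shared or separate; it assumes gaze points are already mapped to surface coordinates, and the hit_test helper and pixel threshold are hypothetical, not the authors' code:

```python
# Minimal sketch of the two collaboration states described above: both
# users attending to the same UI object vs. focusing on separate targets.
# hit_test() and the distance threshold are hypothetical stand-ins.
import math

SHARED_DIST_PX = 80  # hypothetical proximity threshold on the touchscreen

def gaze_state(gaze_a, gaze_b, hit_test):
    """Classify two users' gaze into 'shared' or 'separate'.

    gaze_a, gaze_b: (x, y) gaze points in surface pixels.
    hit_test: callable mapping an (x, y) point to a UI object id or None.
    """
    obj_a, obj_b = hit_test(gaze_a), hit_test(gaze_b)
    if obj_a is not None and obj_a == obj_b:
        return "shared"            # e.g. show a merged/compromise view
    if math.dist(gaze_a, gaze_b) < SHARED_DIST_PX:
        return "shared"            # near-coincident gaze without an object hit
    return "separate"              # personalise content per user
```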

    Trends and Techniques in Visual Gaze Analysis

    Visualizing gaze data is an effective way to quickly interpret eye-tracking results. This paper presents a study investigating the benefits and limitations of visual gaze analysis among eye-tracking professionals and researchers. The results were used to create a tool for visual gaze analysis within a Master's project.
    Comment: pages 89-93, The 5th Conference on Communication by Gaze Interaction - COGAIN 2009: Gaze Interaction For Those Who Want It Most, ISBN: 978-87-643-0475-

    Estimating Point of Regard with a Consumer Camera at a Distance

    In this work, we have studied the viability of a novel technique to estimate the point of regard (POR) that requires only the video feed from a consumer camera. The system can work under uncontrolled lighting conditions and does not require any complex hardware setup. To that end, we propose a system that uses PCA feature extraction from the eye region followed by non-linear regression. We evaluated three state-of-the-art non-linear regression algorithms. In the study, we also compared performance using a high-quality webcam versus a Kinect sensor. We found that, despite the relatively low quality of the Kinect images, it achieves performance similar to that of the high-quality camera. These results show that the proposed approach could be extended to estimate the POR in a completely non-intrusive way.
    Mansanet Sandin, J.; Albiol Colomer, A.; Paredes Palacios, R.; Mossi García, JM.; Albiol Colomer, AJ. (2013). Estimating Point of Regard with a Consumer Camera at a Distance. In: Pattern Recognition and Image Analysis. Springer Verlag. 7887:881-888. doi:10.1007/978-3-642-38628-2_104
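    The pipeline this abstract outlines (PCA over eye-region crops, then non-linear regression such as support vector regression) can be sketched with scikit-learn; the data shapes and hyperparameters below are assumptions, not the paper's settings:

```python
# Sketch of the described pipeline: PCA features from eye-region crops
# followed by non-linear regression to screen coordinates.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# X: flattened grayscale eye-region crops, y: on-screen points of regard.
X = np.random.rand(500, 32 * 64)   # placeholder for real training crops
y = np.random.rand(500, 2)         # placeholder (x, y) screen coordinates

# SVR stands in for the non-linear regressors evaluated in the paper; the
# RBF kernel and C value here are common defaults, not the paper's choices.
model = make_pipeline(
    PCA(n_components=50),
    MultiOutputRegressor(SVR(kernel="rbf", C=10.0)),
)
model.fit(X, y)
por = model.predict(X[:1])  # estimated point of regard for one crop
```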

    EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays

    While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye-tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study showing that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze-interaction kick-off time to 3.5 seconds -- a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
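    The rail-alignment idea lends itself to a simple follow controller. A toy sketch, assuming a body-tracking input provides the user's lateral position; the gain, step limit, and motor interface are hypothetical, not EyeScout's actual method:

```python
# Toy control loop for the "sweet line" idea: keep a rail-mounted eye
# tracker aligned with the user's lateral position.
def align_step(carriage_x, user_x, gain=0.5, max_step=0.12):
    """Return the carriage's next position (metres along the rail).

    A proportional controller: move a fraction of the current offset,
    clamped so the carriage never exceeds the motor's per-tick limit.
    """
    error = user_x - carriage_x
    step = max(-max_step, min(max_step, gain * error))
    return carriage_x + step

# e.g. called at each body-tracking update:
# carriage_x = align_step(carriage_x, skeleton_head_x)
```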

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables the transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos.
    Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'18
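    The abstract describes a two-stage architecture: tracked source parameters drive a coarse personalized proxy, and a video-based renderer composites the final image. A schematic sketch of the per-frame transfer; the dataclass fields and renderer interface are illustrative assumptions, not the authors' code:

```python
# Schematic of per-frame source-to-target transfer as described above.
# All names (PortraitParams fields, renderer methods) are assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PortraitParams:
    torso_pose: List[float]    # kinematic torso joint parameters
    head_pose: List[float]     # rigid head rotation + translation
    expression: List[float]    # parametric face-expression coefficients
    gaze: Tuple[float, float]  # (yaw, pitch) eye-gaze angles

def reenact_frame(source: PortraitParams, target_proxy, renderer):
    """Drive the target's geometry proxy with the source's tracked motion,
    then composite the output via view- and pose-dependent texturing."""
    coarse = renderer.render_proxy(target_proxy, source)  # coarse geometry pass
    return renderer.composite(coarse, source.gaze)        # video-based refinement
```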

    GazeDrone: Mobile Eye-Based Interaction in Public Space Without Augmenting the User

    Gaze interaction holds a lot of promise for seamless human-computer interaction. At the same time, current wearable mobile eye trackers require user augmentation that negatively impacts natural user behavior, while remote trackers require users to position themselves within a confined tracking range. We present GazeDrone, the first system that combines a camera-equipped aerial drone with a computational method to detect sidelong glances for spontaneous (calibration-free) gaze-based interaction with surrounding pervasive systems (e.g., public displays). GazeDrone does not require augmenting each user with on-body sensors and allows interaction from arbitrary positions, even while moving. We demonstrate that drone-supported gaze interaction is feasible and accurate for certain movement types. It is well perceived by users, in particular while interacting from a fixed position as well as while moving orthogonally or diagonally to a display. We present design implications and discuss opportunities and challenges for drone-supported gaze interaction in public.
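    Since the system detects sidelong glances without calibration, the core test plausibly reduces to comparing eye-in-head direction against head orientation; only the relative angle matters, not an absolute point of regard. A minimal sketch, where the angle convention and threshold are assumptions, not GazeDrone's values:

```python
# Sketch of calibration-free "sidelong glance" detection: a glance is
# flagged when gaze direction deviates strongly from head orientation.
GLANCE_THRESHOLD_DEG = 25.0   # hypothetical horizontal eccentricity cut-off

def is_sidelong_glance(gaze_yaw_deg, head_yaw_deg,
                       threshold=GLANCE_THRESHOLD_DEG):
    """True when the eyes point well off the head's facing direction.
    Works without per-user calibration because it uses only the
    relative eye-in-head angle."""
    return abs(gaze_yaw_deg - head_yaw_deg) > threshold
```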

    Towards a human eye behavior model by applying Data Mining Techniques on Gaze Information from IEC

    In this paper, we first present what Interactive Evolutionary Computation (IEC) is and briefly describe how we have combined this artificial-intelligence technique with an eye tracker for visual optimization. Next, in order to correctly parameterize our application, we present results from applying data-mining techniques to gaze information collected in experiments conducted with about 80 human participants.
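    As one plausible reading of "applying data-mining techniques on gaze information", the sketch below clusters per-participant fixation statistics; the feature set and the choice of three clusters are assumptions, not the paper's actual mining steps:

```python
# Cluster per-participant gaze statistics to find recurring viewing
# behaviours that could inform IEC parameter settings per behaviour class.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: participants (~80 in the study); columns: e.g. mean fixation
# duration (ms), fixations per image, mean saccade amplitude (deg)
features = np.random.rand(80, 3)   # placeholder for real gaze statistics

labels = KMeans(n_clusters=3, n_init=10).fit_predict(
    StandardScaler().fit_transform(features)
)
# 'labels' groups participants with similar gaze behaviour.
```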

    Investigating alterations of social interaction in psychiatric disorders with dual interactive eye tracking and virtual faces

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). Peer reviewed. Publisher PDF.