    Semi-automatic annotation of eye-tracking recordings in terms of human torso, face and hands

    Object recognition and person detection for mobile eye-tracking research. A case study with real-life customer journeys

    Computer vision techniques for automatic analysis of mobile eye-tracking data

    Over the last four decades, eye-tracking research has established itself as a powerful paradigm for studying human visual behaviour. More recently, efforts have been made to extend the application field of eye-tracking research beyond the boundaries of lab-based experiments. Research on marketing or on human-human interaction, for example, clearly benefits from real-life experiments. The concept of mobile eye-tracking was introduced in 1999. A mobile eye-tracker is in essence a sophisticated pair of glasses with a front camera that captures the field of view and a second camera that is directed towards the eyes and records the eye movements. Both recordings are combined to determine at which position in the field of view a person is looking. The popularity of mobile eye-trackers as an instrument for measuring user experience and behaviour in very diverse application areas is increasing rapidly. Unfortunately, this is tempered by the unfavourable property that a mobile eye-tracker produces a large amount of data that needs to be analysed. The analysis of an eye-tracking experiment can be defined as: 'determine for how long and how often a person looks at a relevant object'. Depending on the purpose of each eye-tracking experiment, these relevant objects may vary from products on a shelf in the context of market research to the face of a person in an experiment on human-human interaction.

    In the last decade several attempts have been made to facilitate this analytical challenge. Unfortunately, the existing methods require experimental control and therefore impose restrictions on the concept of real-life mobile eye-tracking. Marker-based analysis, for example, allows for a partially automatic analysis, but it confines the flexibility of mobile eye-tracking. Other solutions, such as automatic semantic analysis, are only applicable to a limited range of eye-tracking applications. Many eye-tracking researchers are therefore forced to analyse the recordings manually, which is a painstaking and time-consuming task.

    To overcome these issues, in this dissertation we propose a computer vision-based framework for the semi-automatic analysis of mobile eye-tracking recordings. The goal of this PhD project was to apply computer vision algorithms to the automatic analysis of mobile eye-tracking recordings. By using computer vision algorithms to automatically detect relevant objects in the images captured by the scene camera of a mobile eye-tracker, we can automatically determine whether or not a person looked at these objects, and how often and for how long. Without doubt, efforts to automate this type of analysis can contribute to the increasing popularity of mobile eye-tracking in a broad range of applications.

    Developing such an analysis framework is not a trivial task, since several challenges need to be tackled. First, it is of vital importance that the accuracy of the analysis is as high as possible. Second, the automatic analysis should be faster than manual analysis and, even more importantly, should significantly decrease the manual workload. Third, the images we process are recorded in unconstrained environments using a wearable device. This results in challenging images in which low illumination and motion blur are often present, making the automatic analysis much more complex. Finally, we aim to analyse visual behaviour with respect to small moving objects, such as the hand gestures of another person, which makes the analysis even more challenging.

    Throughout this PhD project, we focused on four main classes to be recognised: our analysis framework is capable of analysing visual behaviour with respect to objects (such as specific products in a shopping experiment), human bodies and faces, human hands, and gestures. Furthermore, we proposed a semi-automatic analysis approach in which manual intervention and automatic analysis are efficiently intertwined to ensure high accuracy even in challenging conditions. To fully validate the capabilities of our analysis framework, we recorded a broad range of eye-tracking recordings and used our framework to analyse them. This thorough validation revealed the applicability of our approach to various types of eye-tracking experiments.

    De Beugher S., ''Computer vision techniques for automatic analysis of mobile eye-tracking data'', PhD dissertation in industrial engineering sciences, KU Leuven, November 2016, Leuven, Belgium. 177 pages.
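    The core analysis step described above, mapping per-frame gaze coordinates onto detected objects to derive how often and for how long each object was looked at, can be illustrated with a short sketch. The Python fragment below is a minimal illustration under our own assumptions, not the dissertation's implementation: it assumes per-frame input of a gaze point plus labelled bounding boxes, and an assumed fixed scene-camera frame rate.

    from collections import defaultdict

    FRAME_RATE = 30.0  # assumed scene-camera frame rate (frames per second)

    def analyse_gaze(frames):
        """frames: iterable of ((gx, gy), detections) per video frame,
        where detections is a list of (label, x1, y1, x2, y2) boxes."""
        dwell_frames = defaultdict(int)  # frames spent looking at each label
        look_counts = defaultdict(int)   # number of distinct looks per label
        previous = None
        for (gx, gy), detections in frames:
            hit = None
            for label, x1, y1, x2, y2 in detections:
                if x1 <= gx <= x2 and y1 <= gy <= y2:
                    hit = label  # gaze point falls inside this object's box
                    break
            if hit is not None:
                dwell_frames[hit] += 1
                if hit != previous:  # gaze moved onto the object: a new look
                    look_counts[hit] += 1
            previous = hit
        dwell_seconds = {k: v / FRAME_RATE for k, v in dwell_frames.items()}
        return dwell_seconds, look_counts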

    A semi-automatic annotation tool for unobtrusive gesture analysis

    Automatic analysis of in-the-wild mobile eye-tracking experiments using object, face and person detection

    In this paper we present a novel method for the automatic analysis of mobile eye-tracking data in natural environments. Mobile eye-trackers generate large amounts of data, making manual analysis very time-consuming. Available solutions, such as marker-based analysis, minimise the manual labour but require experimental control, making real-life experiments practically infeasible. We present a novel method for processing this mobile eye-tracking data by applying object, face and person detection algorithms. Furthermore, we present a temporal smoothing technique to improve the detection rate, and we trained a new detection model for occluded person and face detections. This enables the analysis to be performed on the object level rather than the traditionally used coordinate level. We present speed and accuracy results of our novel detection scheme on challenging, large-scale real-life experiments.

    De Beugher S., Brône G., Goedemé T., ''Automatic analysis of in-the-wild mobile eye-tracking experiments using object, face and person detection'', 9th International Conference on Computer Vision Theory and Applications - VISAPP 2014, pp. 625-633, January 5-8, 2014, Lisbon, Portugal.
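    The temporal smoothing step mentioned in this abstract can be sketched as follows. This Python fragment is a hedged illustration under assumptions of our own (one tracked object per frame, short detector drop-outs bridged by linear interpolation between the flanking boxes); the paper's exact formulation may differ.

    def smooth_detections(boxes, max_gap=5):
        """boxes: per-frame (x1, y1, x2, y2) tuples, or None on a miss.
        Bridges short runs of misses by linear interpolation."""
        boxes = list(boxes)
        n = len(boxes)
        i = 0
        while i < n:
            if boxes[i] is None:
                j = i
                while j < n and boxes[j] is None:
                    j += 1  # find the end of the run of missed frames
                # fill only short gaps flanked by real detections
                if 0 < i and j < n and (j - i) <= max_gap:
                    a, b = boxes[i - 1], boxes[j]
                    for k in range(i, j):
                        t = (k - i + 1) / (j - i + 1)
                        boxes[k] = tuple((1 - t) * ca + t * cb
                                         for ca, cb in zip(a, b))
                i = j
            else:
                i += 1
        return boxes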

    Semi-automatic Hand Annotation of Egocentric Recordings

    We present a fast and accurate algorithm for the detection of human hands in real-life 2D image sequences. We focus on a specific application of hand detection, viz. the annotation of egocentric recordings. A well-known type of egocentric camera is the mobile eye-tracker, which is often used in research on human-human interaction. Nowadays, this type of data is typically annotated manually for relevant features (e.g. visual fixations on gestures), which is a time-consuming and error-prone task. We present a semi-automatic approach for the detection of human hands in images. Such an approach reduces the amount of manual analysis drastically while guaranteeing high accuracy. In our algorithm we combine several well-known detection techniques with an advanced elimination scheme to reduce false detections. We validate our approach using a challenging dataset containing over 4,300 hand instances. This validation allows us to explore the capabilities and boundaries of our approach.
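    The semi-automatic principle behind this and the related papers, accept automatic detections where the detector is confident and fall back to a human annotator elsewhere, can be sketched as below. The detect and ask_annotator callbacks and the 0.8 threshold are hypothetical stand-ins for illustration, not the tool's actual interface.

    CONF_THRESHOLD = 0.8  # assumed operating point, tuned per experiment

    def annotate(frames, detect, ask_annotator):
        """detect(frame) -> (box, confidence); ask_annotator(frame) -> box."""
        annotations = []
        manual = 0
        for frame in frames:
            box, confidence = detect(frame)
            if confidence >= CONF_THRESHOLD:
                annotations.append(box)  # trust the automatic detection
            else:
                annotations.append(ask_annotator(frame))  # human fallback
                manual += 1
        print(f"manual interventions: {manual}/{len(frames)}")
        return annotations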

    Semi-automatic Hand Detection: A case study on real life mobile eye-tracker data

    In this paper we present a highly accurate algorithm for the detection of human hands in real-life 2D image sequences. Current state-of-the-art algorithms show relatively poor detection accuracy on unconstrained, challenging images. To overcome this, we introduce a detection scheme in which we combine several well-known detection techniques with an advanced elimination mechanism to reduce false detections. Furthermore, we present a novel (semi-)automatic framework achieving detection rates of up to 100% with only minimal manual input. This is a useful tool in supervised applications where an error-free detection result is required at the cost of a limited amount of manual effort. As an application, this paper focuses on the analysis of video data of human-human interaction, collected with the scene camera of mobile eye-tracking glasses. This type of data is typically annotated manually for relevant features (e.g. visual fixations on gestures), which is a time-consuming, tedious and error-prone task. The use of our semi-automatic approach reduces the amount of manual analysis dramatically. We also present a new, fully annotated benchmark dataset for this application, which we have made publicly available.

    De Beugher S., Brône G., Goedemé T., ''Semi-automatic hand detection: A case study on real life mobile eye-tracker data'', Proceedings of the 10th International Conference on Computer Vision Theory and Applications - VISAPP 2015, vol. 2, pp. 121-129, March 11-14, 2015, Berlin, Germany.
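    The abstract does not spell out the elimination mechanism, so the sketch below substitutes a common candidate-elimination step as an illustration only: greedy non-maximum suppression plus a minimum-size sanity check over scored hand candidates. All thresholds are assumed values, not the paper's.

    def box_iou(a, b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def eliminate(candidates, min_area=400, iou_thresh=0.3):
        """candidates: list of (score, (x1, y1, x2, y2)) hand hypotheses."""
        kept = []
        for score, box in sorted(candidates, reverse=True):
            if (box[2] - box[0]) * (box[3] - box[1]) < min_area:
                continue  # implausibly small region for a hand
            if all(box_iou(box, k) < iou_thresh for _, k in kept):
                kept.append((score, box))  # keep highest-scoring survivors
        return kept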