613 research outputs found
Estimation of Driver's Gaze Region from Head Position and Orientation using Probabilistic Confidence Regions
A smart vehicle should be able to understand human behavior and predict
people's actions to avoid hazardous situations. Specific traits in human
behavior can be automatically predicted, which can help the vehicle make
decisions that increase safety. One of the most important aspects of the
driving task is the
driver's visual attention. Predicting the driver's visual attention can help a
vehicle understand the awareness state of the driver, providing important
contextual information. While estimating the exact gaze direction is difficult
in the car environment, a coarse estimation of the visual attention can be
obtained by tracking the position and orientation of the head. Since the
relation between head pose and gaze direction is not one-to-one, this paper
proposes a formulation based on probabilistic models to create salient regions
describing the visual attention of the driver. The area of the predicted
region is small when the model is highly confident in the prediction; this
confidence is learned directly from the data. We use Gaussian process
regression (GPR) to implement the framework, comparing its performance with
alternative regression formulations such as linear regression and
neural-network-based methods. We
evaluate these frameworks by studying the tradeoff between spatial resolution
and accuracy of the probability map using naturalistic recordings collected
with the UTDrive platform. We observe that the GPR method produces the best
results, creating accurate predictions with localized salient regions. For
example, the 95% confidence region covers only 3.77% of the area of a sphere
surrounding the driver.
Comment: 13 pages, 12 figures, 2 tables
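As a rough illustration of the approach the abstract outlines (not the paper's implementation), the sketch below fits a Gaussian process regressor from head yaw to gaze yaw on synthetic data and reads a confidence region off the predictive standard deviation; the kernel choice, the angle mapping, and all data are assumptions.

```python
# A minimal sketch, assuming synthetic data: GPR from head pose to gaze
# angle, where the predictive standard deviation yields the confidence
# region (small area when the model is confident, as the abstract describes).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
head_yaw = rng.uniform(-45, 45, (200, 1))                  # degrees
gaze_yaw = 0.8 * head_yaw.ravel() + rng.normal(0, 3, 200)  # noisy, not one-to-one

gpr = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0), normalize_y=True)
gpr.fit(head_yaw, gaze_yaw)

# Predictive mean and std for a new head pose; the 95% interval spans
# mean +/- 1.96 * std, and its width is learned from the data.
mean, std = gpr.predict(np.array([[20.0]]), return_std=True)
print(f"gaze ~ {mean[0]:.1f} deg, 95% region: "
      f"[{mean[0] - 1.96 * std[0]:.1f}, {mean[0] + 1.96 * std[0]:.1f}]")
```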
Sensing, interpreting, and anticipating human social behaviour in the real world
Low-level nonverbal social signals like glances, utterances, facial expressions, and body language are central to human communicative situations and have been shown to be connected to important high-level constructs, such as emotions, turn-taking, rapport, or leadership. A prerequisite for the creation of social machines that are able to support humans in, e.g., education, psychotherapy, or human resources is the ability to automatically sense, interpret, and anticipate human nonverbal behaviour. While promising results have been shown in controlled settings, automatically analysing unconstrained situations, e.g. in daily-life settings, remains challenging. Furthermore, anticipation of nonverbal behaviour in social situations is still largely unexplored. The goal of this thesis is to move closer to the vision of social machines in the real world. It makes fundamental contributions along the three dimensions of sensing, interpreting, and anticipating nonverbal behaviour in social interactions.

First, robust recognition of low-level nonverbal behaviour lays the groundwork for all further analysis steps. Advancing human visual behaviour sensing is especially relevant, as the current state of the art is still not satisfactory in many daily-life situations. While many social interactions take place in groups, current methods for unsupervised eye contact detection can only handle dyadic interactions. We propose a novel unsupervised method for multi-person eye contact detection by exploiting the connection between gaze and speaking turns. Furthermore, we make use of mobile device engagement to address the problem of calibration drift that occurs in daily-life usage of mobile eye trackers.

Second, we improve the interpretation of social signals in terms of higher-level social behaviours. In particular, we propose the first dataset and method for emotion recognition from bodily expressions of freely moving, unaugmented dyads. Furthermore, we are the first to study low rapport detection in group interactions, as well as to investigate a cross-dataset evaluation setting for the emergent leadership detection task.

Third, human visual behaviour is special because it functions as a social signal and also determines what a person is seeing at a given moment in time. Being able to anticipate human gaze opens up the possibility for machines to more seamlessly share attention with humans, or to intervene in a timely manner if humans are about to overlook important aspects of the environment. We are the first to propose methods for the anticipation of eye contact in dyadic conversations, as well as in the context of mobile device interactions during daily life, thereby paving the way for interfaces that are able to proactively intervene and support interacting humans.

Gaze, facial expressions, body language, and prosody are nonverbal signals that play a central role in human communication. Numerous studies have linked them to important constructs such as emotions, turn-taking, leadership, and the quality of the relationship between two people. For machines to effectively support humans in their daily social lives, automatic methods for sensing, interpreting, and anticipating nonverbal behaviour are necessary. Although previous research has achieved encouraging results in controlled studies, the automatic analysis of nonverbal behaviour in less controlled situations remains a challenge.

Moreover, the anticipation of nonverbal behaviour in social situations has hardly been investigated. The goal of this thesis is to bring the vision of automatically understanding social situations a step closer to reality. The thesis makes important contributions to the automatic sensing of human visual behaviour in everyday situations. Although many social interactions take place in groups, unsupervised methods for eye contact detection have so far only existed for dyadic interactions. We present a new approach to eye contact detection in groups that works without manual annotation by exploiting the statistical connection between gaze and speaking behaviour. Everyday activities are a challenge for mobile eye trackers, since slippage of these devices can degrade their calibration. In this work, we use user interaction with mobile devices to correct the effect of such slippage. Beyond sensing, this work also improves the interpretation of social signals. We publish the first dataset and the first method for emotion recognition in dyadic interactions without specialised equipment. We also present the first study on automatically detecting low rapport in group interactions and conduct the first cross-dataset evaluation for detecting emergent leadership. The thesis closes with the first approaches to anticipating gaze behaviour in social interactions. Gaze has the special property that it serves both as a social signal and to direct visual perception. The ability to anticipate gaze behaviour therefore opens up the possibility for machines to integrate more seamlessly into social interactions and to warn people when they are about to overlook important aspects of their environment. We present methods for anticipating gaze behaviour in the context of interaction with mobile devices during daily activities, as well as during dyadic interactions via video call.
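The multi-person eye contact method above is described only at a high level; the following is a heavily simplified sketch of the weak-supervision idea, under the assumption (beyond what the abstract states) that listeners tend to orient toward the current speaker, so speaking turns can stand in for manual eye contact labels. All features and data are synthetic.

```python
# A minimal sketch: speaking turns as weak labels for eye contact detection.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
speaking = rng.random(n) < 0.5          # is the target person speaking in this frame?
# Assumption: listeners orient toward the current speaker, so speaking frames
# concentrate small angular offsets between listener gaze and speaker direction.
offset = np.where(speaking, rng.normal(0, 8, n), rng.normal(0, 30, n))

# Speaking turns act as the (weak) training signal; no manual annotation used.
clf = LogisticRegression().fit(np.abs(offset).reshape(-1, 1), speaking.astype(int))
print(clf.predict(np.array([[3.0], [40.0]])))   # likely contact vs. likely not
```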
EYEDIAP Database: Data Description and Gaze Tracking Evaluation Benchmarks
The lack of a common benchmark for evaluating gaze estimation from RGB and RGB-D data is a serious limitation for distinguishing the advantages and disadvantages of the many algorithms proposed in the literature. The EYEDIAP database intends to overcome this limitation by providing a common framework for the training and evaluation of gaze estimation approaches. In particular, the database has been designed to enable evaluating the robustness of algorithms with respect to the main challenges associated with this task: i) head pose variations; ii) person variation; iii) changes in ambient and sensing conditions; and iv) types of target: screen or 3D object. This technical report contains an extended description of the database, including the processing methodology for the elements provided along with the raw data, the database organization, and additional benchmarks we consider relevant for evaluating diverse properties of a given gaze estimator.
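The report's evaluation code is not reproduced here, but benchmarks of this kind typically report the angular error between predicted and ground-truth 3D gaze vectors; a minimal version of that standard metric:

```python
# Angular error (degrees) between predicted and ground-truth 3D gaze vectors.
import numpy as np

def angular_error_deg(pred, gt):
    """Angle in degrees between batches of 3D gaze vectors."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Example: a prediction 5 degrees off the true direction
pred = np.array([[0.0, np.sin(np.radians(5.0)), np.cos(np.radians(5.0))]])
print(angular_error_deg(pred, np.array([[0.0, 0.0, 1.0]])))  # ~[5.]
```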
A Bayesian hierarchy for robust gaze estimation in human–robot interaction
In this text, we present a probabilistic solution for robust gaze estimation in the context of human–robot interaction. Gaze estimation, in the sense of continuously assessing the gaze direction of an interlocutor so as to determine his or her focus of visual attention, is important in several computer vision applications, such as the development of non-intrusive gaze-tracking equipment for psychophysical experiments in neuroscience, specialised telecommunication devices, video surveillance, human–computer interfaces (HCI), and artificial cognitive systems for human–robot interaction (HRI), our application of interest. We have developed a robust solution based on a probabilistic approach that inherently deals with the uncertainty of sensor models, but also, and in particular, with uncertainty arising from distance, incomplete data, and scene dynamics. This solution comprises a hierarchical formulation in the form of a mixture model that loosely follows how the geometrical cues provided by facial features are believed to be used by the human perceptual system for gaze estimation. A quantitative analysis of the proposed framework's performance was undertaken through a thorough set of experimental sessions. Results show that the framework meets the demanding requirements of HRI applications, exhibiting correctness, robustness, and adaptiveness.
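The hierarchy itself is not reproduced in the abstract; as a minimal sketch of the underlying idea of a probabilistic mixture over gaze targets, the following computes a Bayes posterior over discrete targets from an observed head yaw. The target directions, uniform prior, and Gaussian noise scale are illustrative assumptions, not the authors' model.

```python
# A minimal sketch: Bayes posterior over discrete gaze targets.
import numpy as np
from scipy.stats import norm

# Assumed targets and the head yaw (deg) each would induce; uniform prior.
targets = {"person A": -30.0, "person B": 0.0, "screen": 25.0}
sigma = 10.0  # assumed observation noise (deg)

def posterior(observed_yaw):
    """P(target | yaw) by Bayes' rule with Gaussian likelihoods."""
    like = {name: norm.pdf(observed_yaw, mu, sigma) for name, mu in targets.items()}
    z = sum(like.values())
    return {name: l / z for name, l in like.items()}

print(posterior(-25.0))  # most probability mass on "person A"
```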
Tracking and modeling focus of attention in meetings [online]
Abstract
This thesis addresses the problem of tracking the focus of attention of people. In particular, a system to track the focus of attention of participants in meetings is developed. Obtaining knowledge about a person's focus of attention is an important step towards a better understanding of what people do, how and with what or whom they interact, and to what they refer. In meetings, focus of attention can be used to disambiguate the addressees of speech acts, to analyze interaction, and to index meeting transcripts. Tracking a user's focus of attention also greatly contributes to the improvement of human-computer interfaces, since it can be used to build interfaces and environments that become aware of what the user is paying attention to or with what or whom he or she is interacting.
The direction in which people look, i.e., their gaze, is closely related to their focus of attention. In this thesis, we estimate a subject's focus of attention based on his or her head orientation. While the direction in which someone looks is determined by head orientation and eye gaze, the relevant literature suggests that head orientation alone is a sufficient cue for detecting someone's direction of attention during social interaction. We present experimental results from a user study and from several recorded meetings that support this hypothesis.
We have developed a Bayesian approach to model at whom or what someone is looking based on his or her head orientation. To estimate head orientations in meetings, the participants' faces are automatically tracked in the view of a panoramic camera, and neural networks are used to estimate their head orientations from preprocessed images of their faces. Using this approach, the focus-of-attention target of subjects was correctly identified 73% of the time in a number of evaluation meetings with four participants.
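The abstract does not detail the Bayesian model; one plausible, heavily simplified realization treats each focus target as a Gaussian over head yaw and fits a mixture to the observed orientations, so that posterior responsibilities identify the most probable target per frame. The target directions and data below are synthetic assumptions.

```python
# A minimal sketch: Gaussian mixture over head yaw as a focus-target model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic head-yaw observations around four assumed target directions (deg)
yaws = np.concatenate([rng.normal(m, 8, 300) for m in (-40, -10, 20, 50)])
gmm = GaussianMixture(n_components=4, random_state=0).fit(yaws.reshape(-1, 1))

# Posterior responsibility of each target (component) for a new observation
print(gmm.predict_proba(np.array([[-35.0]])).round(3))
```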
In addition, we have investigated whether a person's focus of attention can be predicted from other cues. Our results show that focus of attention is correlated with who is speaking in a meeting, and that it is possible to predict a person's focus of attention based on the information of who is talking or was talking before a given moment.
We have trained neural networks to predict at whom a person is looking, based on information about who was speaking. Using this approach, we were able to predict who is looking at whom with 63% accuracy on the evaluation meetings, using only information about who was speaking. We show that by using both head orientation and speaker information to estimate a person's focus, the accuracy of focus detection can be improved compared to using either modality alone.
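A minimal sketch of combining the two modalities, assuming (beyond what the abstract states) a weighted-product fusion of the head-pose and speaker-based posteriors over the same set of targets; the distributions and weight are illustrative.

```python
# A minimal sketch: weighted-product fusion of two focus-target posteriors.
import numpy as np

p_head = np.array([0.6, 0.3, 0.1])     # P(target | head orientation)
p_speak = np.array([0.5, 0.2, 0.3])    # P(target | speaking history)
w = 0.7                                 # assumed reliability weight for head pose

fused = p_head**w * p_speak**(1 - w)    # product of experts, then renormalize
fused /= fused.sum()
print(fused)                            # target 0 gains mass from both cues
```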
To demonstrate the generality of our approach, we have built a prototype system for focus-aware interaction with a household robot and other smart appliances in a room, using the developed components for focus-of-attention tracking. In the demonstration environment, a subject could interact with a simulated household robot, a speech-enabled VCR, or with other people in the room, and the recipient of the subject's speech was disambiguated based on the user's direction of attention.
Summary
This thesis addresses the automatic estimation and tracking of the focus of attention of people in meetings. Determining a person's focus of attention is very important for understanding and automatically analysing meeting records: it can reveal, for example, who addressed whom at a given moment, or who was listening to whom. Automatically determining the focus of attention can furthermore be used to improve human-machine interfaces.
An important cue for the direction in which a person directs his or her attention is the person's head orientation. A method for estimating people's head orientations was therefore developed. Artificial neural networks receive preprocessed images of a person's head as input and compute an estimate of the head orientation as output. On images of new people, i.e., people whose images were not contained in the training set, the trained networks achieved a mean error of nine to ten degrees for estimating the horizontal and vertical head orientation.
Furthermore, a probabilistic approach for determining focus-of-attention targets is presented. A Bayesian approach is used to compute the posterior probabilities of different attention targets given the observed head orientations of a person. The developed approaches were evaluated on several meetings with four to five participants.
A further contribution of this thesis is an investigation of the extent to which the gaze direction of meeting participants can be predicted from who is currently speaking. A method was developed to estimate a person's focus with neural networks based on a short history of speaker constellations.
We show that combining the image-based and the speaker-based estimates of the focus of attention yields a clearly improved estimate.
Overall, this thesis presents, for the first time, a system for automatically tracking the attention of people in a meeting room. The developed approaches and methods can also be used to determine people's attention in other domains, in particular to control computerised, interactive environments. This is demonstrated with an example application.
Towards the Use of Social Interaction Conventions as Prior for Gaze Model Adaptation
Gaze is an important non-verbal cue involved in many facets of social interaction, such as communication, attentiveness, and attitudes. Nevertheless, extracting gaze directions visually and remotely usually suffers from large errors because of low-resolution images, inaccurate eye cropping, or large variations in eye shape across the population, amongst other factors. This paper hypothesizes that these challenges can be addressed by exploiting multimodal social cues for gaze model adaptation on top of a head-pose-independent 3D gaze estimation framework. First, a robust eye cropping refinement is achieved by combining a semantic face model with eye landmark detections, and we investigate whether temporal smoothing can overcome the limitations of instantaneous refinement. Second, to study whether social interaction conventions can be used as priors for adaptation, we exploit speaking status and head pose constraints to derive soft gaze labels and infer person-specific gaze bias using robust statistics. Experimental results on gaze coding in natural interactions from two different settings demonstrate that the two steps of our gaze adaptation method reduce gaze errors by a large margin over the baseline and generalize to several identities in challenging scenarios.
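As a minimal sketch of the bias-inference step, assuming soft labels have already been derived from speaking status and head pose: a robust statistic (here the median, one common choice) over the differences between predicted gaze and soft labels estimates a person-specific bias, which is then subtracted. All numbers are illustrative, not the paper's data.

```python
# A minimal sketch: robust person-specific gaze bias estimation.
import numpy as np

# Predicted gaze (yaw, pitch in deg) and soft labels derived from conventions
pred = np.array([[3.1, -2.0], [4.2, -1.5], [2.8, -2.2], [15.0, 9.0]])
soft = np.array([[0.0, 0.0], [1.0, 0.5], [-0.5, -0.3], [0.5, 0.2]])

bias = np.median(pred - soft, axis=0)   # median is robust to the outlier frame
adapted = pred - bias                   # person-specific bias removed
print("estimated bias:", bias)
```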