13 research outputs found

    A Simple Gaze Tracker for Computer Operation by the Disabled in Education

    A compact gaze tracker was developed, consisting of a headband with electrodes that process the electro-oculogram (EOG) reflecting the wearer's eye movements. We have confirmed that the processed EOG signal correlates well with gaze angle, and we show that the instrument we designed enables a child to move a target on a screen up to 40 degrees left or right of central sight. To achieve this, a signal-processing circuit was designed and placed on the headband to minimize noise. Further processing is based on identifying saccadic eye movements and estimating the gaze angle from the accumulated angle changes in both directions. A 75% success rate was achieved in detecting transitions of eye position in 5° steps from +40° to -40°. First tests with non-disabled children suggest that the device may prove useful for communication by disabled users (e.g., patients with no control over hand movements). In such cases, extensive personal training would draw on neurological plasticity to reach the performance level required to operate a computer mouse in educational games and in interactive applications in general.
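
    The abstract outlines a pipeline of saccade identification followed by accumulation of the estimated angle changes. Purely as an illustration of that idea, and not of the authors' actual circuit or algorithm, the Python sketch below integrates saccadic EOG amplitude changes into a gaze-angle estimate; the sampling rate, gain, and saccade threshold are placeholder assumptions that a per-user calibration would have to supply.

        import numpy as np

        # All parameter values are illustrative assumptions, not values from the paper.
        FS_HZ = 250.0             # EOG sampling rate (assumed)
        DEG_PER_UV = 0.05         # degrees of gaze per microvolt, from calibration (assumed)
        SACCADE_UV_PER_S = 800.0  # derivative threshold that marks a saccade (assumed)

        def gaze_angle_from_eog(eog_uv: np.ndarray) -> np.ndarray:
            """Accumulate saccadic amplitude changes of a horizontal EOG trace
            into an estimated gaze angle, starting from central (0 degree) sight."""
            deriv = np.gradient(eog_uv) * FS_HZ          # signal slope in uV per second
            saccadic = np.abs(deriv) > SACCADE_UV_PER_S  # True inside saccades
            # Integrate amplitude change only during saccades; the slow drift
            # between saccades (a known weakness of raw EOG) is ignored.
            steps = np.where(saccadic, np.diff(eog_uv, prepend=eog_uv[0]), 0.0)
            angle = np.cumsum(steps) * DEG_PER_UV
            return np.clip(angle, -40.0, 40.0)           # working range reported above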

    Eye gaze estimation from a single image of one eye


    A model-based gaze-tracking system


    Visual tracking for multimodal human computer interaction


    An implementation of face-to-face grounding in an embodied conversational agent

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaves 53-55). When people have a face-to-face conversation, they don't just spout information blindly; they work to make sure that both participants understand what has been said. This process of ensuring that what has been said is added to the common ground of the conversation is called grounding. This thesis explores recent research into the verbal and nonverbal means for grounding, and presents an implementation of a face-to-face grounding system in an embodied conversational agent that is based on a model of grounding extracted from the research. This is the first such agent that supports nonverbal grounding, and so this thesis represents both a proof of concept and a guide for future work in this area, showing that it is possible to build a dialogue system that implements face-to-face grounding between a human and an agent based on an empirically derived model. Additionally, this thesis describes a vision system, based on a stereo-camera head-pose tracker and using a recently proposed method for head-nod detection, that can robustly and accurately identify head nods and gaze state. by Gabriel A. Reinstein. M.Eng.
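
    The vision system described above couples head-pose tracking with head-nod detection. The thesis relies on a specific published detector; purely as a generic illustration of how a nod can be read off a head-pitch time series, the sketch below flags a window as a nod when the pitch oscillates, i.e., shows several velocity reversals with sufficient amplitude. All thresholds are assumptions, not values from the thesis.

        import numpy as np

        # Illustrative thresholds only; this is not the detector used in the thesis.
        MIN_REVERSALS = 3        # velocity sign changes within the window (assumed)
        MIN_AMPLITUDE_DEG = 2.0  # minimum peak-to-peak pitch excursion (assumed)
        WINDOW = 30              # samples, roughly 1 s at a 30 Hz tracker (assumed)

        def looks_like_nod(pitch_deg: np.ndarray) -> bool:
            """Flag the most recent window of head-pitch samples as a nod when
            the pitch oscillates up and down rather than drifting one way."""
            if len(pitch_deg) < WINDOW:
                return False
            window = pitch_deg[-WINDOW:]
            velocity = np.diff(window)
            reversals = int(np.count_nonzero(np.diff(np.sign(velocity))))
            amplitude = float(window.max() - window.min())
            return reversals >= MIN_REVERSALS and amplitude >= MIN_AMPLITUDE_DEG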

    Tracking and modeling focus of attention in meetings [online]

    Abstract: This thesis addresses the problem of tracking the focus of attention of people. In particular, a system to track the focus of attention of participants in meetings is developed. Obtaining knowledge about a person's focus of attention is an important step towards a better understanding of what people do, how and with what or whom they interact, or to what they refer. In meetings, focus of attention can be used to disambiguate the addressees of speech acts, to analyze interaction, and for indexing of meeting transcripts. Tracking a user's focus of attention also greatly contributes to the improvement of human-computer interfaces, since it can be used to build interfaces and environments that become aware of what the user is paying attention to or with what or whom he is interacting.

    The direction in which people look, i.e., their gaze, is closely related to their focus of attention. In this thesis, we estimate a subject's focus of attention based on his or her head orientation. While the direction in which someone looks is determined by head orientation and eye gaze, relevant literature suggests that head orientation alone is a sufficient cue for detecting someone's direction of attention during social interaction. We present experimental results from a user study and from several recorded meetings that support this hypothesis. We have developed a Bayesian approach to model at whom or what someone is looking based on his or her head orientation. To estimate head orientations in meetings, the participants' faces are automatically tracked in the view of a panoramic camera, and neural networks are used to estimate their head orientations from pre-processed images of their faces. Using this approach, the focus-of-attention target of subjects could be correctly identified 73% of the time in a number of evaluation meetings with four participants.

    In addition, we have investigated whether a person's focus of attention can be predicted from other cues. Our results show that focus of attention is correlated with who is speaking in a meeting, and that it is possible to predict a person's focus of attention based on who is talking or was talking before a given moment. We have trained neural networks to predict at whom a person is looking, based on information about who was speaking. Using this approach, we were able to predict who is looking at whom with 63% accuracy on the evaluation meetings, using only information about who was speaking. We show that by using both head orientation and speaker information to estimate a person's focus, the accuracy of focus detection can be improved compared to using just one of the modalities. To demonstrate the generality of our approach, we have built a prototype system for focus-aware interaction with a household robot and other smart appliances in a room, using the developed components for focus-of-attention tracking. In the demonstration environment, a subject could interact with a simulated household robot, a speech-enabled VCR, or with other people in the room, and the recipient of the subject's speech was disambiguated based on the user's direction of attention.

    Summary (translated from the German): This thesis addresses the automatic estimation and tracking of people's focus of attention in meetings. Determining a person's focus of attention is very important for understanding and automatically analyzing meeting records; it reveals, for example, who addressed whom at a given moment or who was listening to whom. Automatic estimation of the focus of attention can furthermore be used to improve human-machine interfaces. An important cue for the direction of a person's attention is the person's head orientation, so a method for estimating head orientations was developed: artificial neural networks receive pre-processed images of a person's head as input and compute an estimate of the head orientation as output. With the trained networks, a mean error of nine to ten degrees for horizontal and vertical head orientation was achieved on images of new persons, i.e., persons whose images were not contained in the training set. Furthermore, a probabilistic approach for determining attention targets is presented: a Bayesian approach is used to compute the posterior probabilities of different attention targets given a person's observed head orientations. The developed approaches were evaluated on several meetings with four to five participants. A further contribution of this thesis is an investigation of how well the gaze direction of meeting participants can be predicted from who is currently speaking; a method was developed that uses neural networks to estimate a person's focus from a short history of speaker constellations. We show that combining the image-based and the speaker-based estimates of the focus of attention yields a markedly improved estimate. Overall, this work presents the first system for automatically tracking the attention of people in a meeting room. The developed approaches and methods can also be used to determine people's attention in other settings, in particular for controlling computerized interactive environments, as demonstrated with an example application.
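
    The Bayesian step described above assigns each candidate focus target a posterior probability given the observed head orientation. A minimal sketch of that idea, assuming Gaussian likelihoods centered on each target's direction and a uniform prior; the target directions and spread below are placeholders, not values from the thesis.

        import math

        # Hypothetical seating geometry: head-pan direction (degrees) of each target.
        TARGETS = {"person_A": -60.0, "person_B": -20.0, "person_C": 20.0, "person_D": 60.0}
        SIGMA_DEG = 15.0  # assumed spread of head pan around each target

        def focus_posterior(head_pan_deg: float) -> dict[str, float]:
            """Posterior P(target | head pan) via Bayes' rule with a Gaussian
            likelihood per target and a uniform prior over targets."""
            prior = 1.0 / len(TARGETS)
            likelihood = {
                t: math.exp(-0.5 * ((head_pan_deg - mu) / SIGMA_DEG) ** 2)
                for t, mu in TARGETS.items()
            }
            evidence = sum(l * prior for l in likelihood.values())
            return {t: l * prior / evidence for t, l in likelihood.items()}

        # Example: a pan of -25 degrees puts most of the mass on person_B.
        print(focus_posterior(-25.0))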

    Human-Centric Machine Vision

    Get PDF
    Recently, algorithms for processing visual information have evolved greatly, providing efficient and effective solutions that cope with the variability and complexity of real-world environments. These achievements have led to the development of machine vision systems that go beyond typical industrial applications, where environments are controlled and tasks are very specific, towards innovative solutions for people's everyday needs. Human-centric machine vision can help solve problems raised by the needs of our society, e.g., security and safety, health care, medical imaging, and human-machine interfaces. In such applications it is necessary to handle changing, unpredictable, and complex situations, and to account for the presence of humans.