
    Estimation of Confidence in the Dialogue based on Eye Gaze and Head Movement Information

    In human-robot interaction, human mental states during dialogue have attracted attention for human-friendly robots that support educational use. Although mental-state estimation using speech and visual information has been studied, precisely estimating mental states in educational settings remains challenging. In this paper, we propose a method to estimate a human mental state from participants’ eye gaze and head movement information. As the target mental state, we estimate participants’ confidence in their answers to miscellaneous-knowledge questions. Participants’ non-verbal information, such as eye gaze and head movements during dialogue with a robot, was collected in our experiment using an eye-tracking device. We then collected participants’ confidence levels and analyzed the relationship between mental state and non-verbal information. Furthermore, we applied a machine learning technique to estimate participants’ confidence levels from features extracted from the gaze and head movement information. The machine learning technique using gaze and head movement information achieved over 80% accuracy in estimating confidence levels. Our research provides insight into developing human-friendly robots that consider human mental states in dialogue.
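    The abstract describes a supervised classifier over gaze and head-movement features. Below is a minimal sketch of that kind of pipeline, assuming scikit-learn; the feature set, labels, and data are hypothetical placeholders, not the authors' actual features or experiment.

```python
# Minimal sketch of the kind of confidence classifier the abstract describes:
# predict a binary confidence label from gaze and head-movement features.
# Data, feature layout, and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-answer features, e.g. fixation duration, saccade rate,
# gaze dispersion, head-pitch variance, head-yaw variance.
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)  # 1 = confident answer, 0 = not confident

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.2f}")  # ~0.5 on these random labels
```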

    Recognition of Individual and Text Characteristics Based on Quantitative Analysis of Reading Behavior

    Degree type: Doctorate by coursework (課程博士), University of Tokyo (東京大学)

    Affective Brain-Computer Interfaces


    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149-164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point, as sketched below. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
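    The spoke-shift manipulation is purely geometric: each rectangle's centre is moved radially toward or away from fixation along the line joining it to the fixation point. A minimal sketch, assuming fixation at the origin and positions expressed in degrees of visual angle; the coordinates are illustrative, not the original stimulus parameters.

```python
# Minimal sketch of the spoke-shift manipulation: move a rectangle's centre
# along the imaginary spoke from central fixation through that centre, by
# ±1 degree of visual angle. Coordinates are illustrative placeholders.
import math

def shift_along_spoke(x: float, y: float, shift_deg: float) -> tuple[float, float]:
    """Move a point radially from fixation at (0, 0) by shift_deg degrees."""
    r = math.hypot(x, y)  # eccentricity of the rectangle's centre
    if r == 0:
        return (x, y)  # a point at fixation has no defined spoke
    scale = (r + shift_deg) / r
    return (x * scale, y * scale)

# Example: a rectangle centred 4 deg right and 3 deg above fixation,
# shifted outward by 1 deg along its spoke.
print(shift_along_spoke(4.0, 3.0, +1.0))  # ≈ (4.8, 3.6)
```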

    Modeling driver-vehicle interaction in automated driving

    In automated vehicles, the collaboration of human drivers and automated systems plays a decisive role in road safety, driver comfort, and the acceptance of automated vehicles. A successful interaction requires a precise interpretation and investigation of all influencing factors, such as driver state, system state, and surroundings (e.g., traffic, weather). This contribution discusses the detailed structure of the driver-vehicle interaction, which takes into account the driving situation and the driver state to improve driver performance. The interaction rules are derived from a controller that is fed by the driver state within a loop. The regulation of the driver state continues until the target state is reached or the criticality of the situation is resolved. In addition, a driver model is proposed that represents the driver’s decision-making process during the interaction between driver and vehicle and during the transition of driving tasks. The model includes the sensory perception process, decision-making, and motor response. The decision-making process during the interaction deals with the cognitive and emotional states of the driver. Based on the proposed driver-vehicle interaction loop and the driver model, an experiment with 38 participants is performed in a driving simulator to investigate (1) whether both emotional and cognitive states become active during the decision-making process and (2) what the temporal sequence of the processes is. Finally, the evidence gathered from the experiment is analyzed. The results are consistent with the proposed driver model in terms of the cognitive and emotional state of the driver during the mode change from the automated system to the human driver.
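    The interaction loop the abstract outlines regulates the driver state until a target state is reached or the criticality of the situation is resolved. Below is a minimal sketch of such a loop; the state variable, target threshold, intervention, and criticality check are all illustrative assumptions, not the authors' controller design.

```python
# Minimal sketch of a closed-loop driver-state regulator: the controller
# compares the estimated driver state with a target state and intervenes
# until the target is reached or the situation is no longer critical.
from dataclasses import dataclass

@dataclass
class DriverState:
    alertness: float  # hypothetical scale: 0.0 (inattentive) .. 1.0 (fully alert)

def situation_is_critical(step: int) -> bool:
    # Placeholder: a real system would evaluate traffic, weather, system state.
    return step < 10

def intervene(state: DriverState) -> DriverState:
    # Placeholder intervention, e.g. a warning tone that raises alertness.
    return DriverState(alertness=min(1.0, state.alertness + 0.15))

def regulate(state: DriverState, target: float = 0.9) -> DriverState:
    step = 0
    # Loop until the target driver state is reached or criticality is resolved.
    while state.alertness < target and situation_is_critical(step):
        state = intervene(state)
        step += 1
    return state

print(regulate(DriverState(alertness=0.3)))
```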

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain-computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic indicators of brain activity) into a command to execute an action in a BCI application (e.g., a wheelchair, the cursor on a screen, a spelling device, or a game). These tools have the advantage of real-time access to the ongoing brain activity of the individual, which can provide insight into the user’s emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to “think outside the lab”. The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision ever more applications. The clinical and everyday uses are described with the aim of inviting readers to imagine potential further developments.
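    The signal-to-command translation described here can be summarized in two steps: classify a window of brain-activity features into a mental state, then map that state to an application command. A minimal sketch, assuming scikit-learn and synthetic features; the feature layout, labels, and command mapping are illustrative, not any specific BCI system.

```python
# Minimal sketch of a BCI decoding step: classify a feature window into a
# mental state, then map the state to an application command.
# Training data, labels, and the command mapping are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Synthetic training data, e.g. EEG band-power features per trial.
X_train = rng.normal(size=(100, 8))
y_train = rng.integers(0, 2, size=100)  # 0 = "relax", 1 = "focus"

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

COMMANDS = {0: "cursor_left", 1: "cursor_right"}  # illustrative mapping

def decode(window: np.ndarray) -> str:
    """Translate one feature window into an application command."""
    state = int(clf.predict(window.reshape(1, -1))[0])
    return COMMANDS[state]

print(decode(rng.normal(size=8)))
```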

    Science of Facial Attractiveness
