    Exploring Cognitive States: Methods for Detecting Physiological Temporal Fingerprints

    Cognitive state detection and its relationship to observable physiological telemetry has been utilized for many human-machine and human-cybernetic applications. This paper aims to determine whether there are unique psychophysiological patterns over time, a physiological temporal fingerprint, associated with specific cognitive states. This preliminary work involves commercial airline pilots completing experimental benchmark task inductions of three cognitive states: 1) Channelized Attention (CA); 2) High Workload (HW); and 3) Low Workload (LW). We approach this objective by modeling these "fingerprints" with Hidden Markov Models and entropy analysis to evaluate whether the state transitions over time are complex or rhythmic/predictable in nature. Our results indicate that each cognitive state exhibits a physiological sequence complexity that is statistically distinct from the other states. More specifically, CA has a significantly higher temporal psychophysiological complexity than HW and LW in the EEG and ECG telemetry signals, whereas in the respiration telemetry CA has a lower temporal psychophysiological complexity than HW and LW. This preliminary work can inform whether these underlying dynamics can be used to understand how humans transition between cognitive states and to improve the detection of cognitive states.
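
    As a rough illustration of the approach named above (Hidden Markov Models plus an entropy measure of how predictable the state transitions are), the sketch below fits a Gaussian HMM to a placeholder physiological feature sequence and computes the entropy rate of the learned transition matrix. The data, feature choices, and hmmlearn usage are assumptions for illustration, not the authors' pipeline.

        # Minimal sketch (not the authors' pipeline): fit an HMM to a physiological
        # feature sequence and score how predictable its state transitions are via
        # the entropy rate of the learned transition matrix.
        import numpy as np
        from scipy.stats import entropy
        from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn

        rng = np.random.default_rng(0)
        # Placeholder telemetry: e.g. per-second EEG band power and heart-rate features.
        X = rng.normal(size=(600, 4))

        model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=0)
        model.fit(X)

        # Stationary distribution of the learned Markov chain
        # (left eigenvector of the transition matrix for eigenvalue 1).
        eigvals, eigvecs = np.linalg.eig(model.transmat_.T)
        pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
        pi /= pi.sum()

        # Entropy rate: expected entropy of the next-state distribution.
        # Low values -> rhythmic/predictable transitions; high values -> complex dynamics.
        entropy_rate = float(np.dot(pi, [entropy(row, base=2) for row in model.transmat_]))
        print(f"Transition entropy rate: {entropy_rate:.3f} bits")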

    Temporal pathways to learning: how learning emerges in an open-ended collaborative activity

    The learning process depends on the nature of the learning environment, particularly in the case of open-ended learning environments, where the learning process is considered to be non-linear. In this paper, we report on the findings of a multimodal Hidden Markov Model (HMM) based methodology used to investigate the temporal learning processes of two types of learners that achieve learning gains and one type that does not, in an open-ended collaborative learning activity. Considering log data, speech behavior, affective states and gaze patterns, we find that all learners start from a similar state of non-productivity, but once out of it they are unlikely to fall back into that state, especially in the case of the learners that achieve learning gains. Learners with gains shift between two problem-solving strategies, each characterized by both exploratory and reflective actions, and demonstrate speech and gaze patterns associated with these strategies that differ from those of learners without gains. Further, the teams with learning gains also differ from one another in how they employ the problem-solving strategies over the interaction and in how they express negative emotions while exhibiting a particular strategy. These outcomes contribute to understanding the multiple pathways of learning in an open-ended collaborative learning environment, and provide actionable insights for designing effective interventions.
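
    A minimal sketch of the kind of temporal decoding an HMM-based methodology performs: given toy transition and emission probabilities over hypothetical learner states (the labels below are illustrative, not the paper's actual states), Viterbi decoding recovers the most likely pathway of latent states from a discretized multimodal observation stream.

        # Illustrative sketch only: Viterbi decoding of a latent learner-state pathway.
        # State labels and all probabilities are hypothetical; a real study would
        # estimate them from log, speech, affect, and gaze data.
        import numpy as np

        states = ["non-productive", "exploring", "reflecting"]
        start = np.log(np.array([0.8, 0.1, 0.1]))
        trans = np.log(np.array([[0.6, 0.3, 0.1],     # once out of non-productive,
                                 [0.05, 0.7, 0.25],   # learners rarely return to it
                                 [0.05, 0.35, 0.6]]))
        emit = np.log(np.array([[0.7, 0.2, 0.1],      # P(observation symbol | state)
                                [0.2, 0.6, 0.2],
                                [0.1, 0.2, 0.7]]))

        obs = [0, 0, 1, 1, 2, 1, 2, 2]                # discretized multimodal observations

        # Standard Viterbi recursion in log space.
        delta = start + emit[:, obs[0]]
        back = []
        for o in obs[1:]:
            scores = delta[:, None] + trans           # scores[i, j] = best path ending in i -> j
            back.append(scores.argmax(axis=0))
            delta = scores.max(axis=0) + emit[:, o]

        path = [int(delta.argmax())]
        for bp in reversed(back):
            path.append(int(bp[path[-1]]))
        path.reverse()
        print(" -> ".join(states[s] for s in path))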

    Chapter From the Lab to the Real World: Affect Recognition Using Multiple Cues and Modalities

    The interdisciplinary concept of the dissipative soliton is unfolded in connection with ultrafast fibre lasers. The different mode-locking techniques as well as experimental realizations of dissipative soliton fibre lasers are surveyed briefly, with an emphasis on their energy scalability. Basic topics of dissipative soliton theory are elucidated in connection with the concepts of energy scalability and stability. It is shown that the parametric space of the dissipative soliton has reduced dimension and a comparatively simple structure, which simplifies the analysis and optimization of ultrafast fibre lasers. The main destabilization scenarios are described, and the limits of energy scalability are connected with the impact of optical turbulence and stimulated Raman scattering. The fast and slow dynamics of vector dissipative solitons are also presented.
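
    For orientation only: the master model most commonly used for dissipative solitons in mode-locked fibre lasers is the cubic-quintic complex Ginzburg-Landau equation. A generic form is given below; the notation and normalization are assumptions that vary across the literature and are not reproduced from the chapter itself.

        \frac{\partial \psi}{\partial z}
          = \sigma \psi
          + \left(\beta + \tfrac{i}{2} D\right) \frac{\partial^{2} \psi}{\partial t^{2}}
          + \left(\varepsilon + i \gamma\right) \lvert \psi \rvert^{2} \psi
          + \left(\mu + i \nu\right) \lvert \psi \rvert^{4} \psi

    Here \sigma is the net linear gain, \beta spectral filtering, D group-velocity dispersion, \gamma the Kerr nonlinearity, \varepsilon the cubic nonlinear gain (saturable absorption), and \mu, \nu the quintic gain-saturation and refractive terms; the balance of these dissipative and conservative terms is what fixes the reduced-dimension parametric space mentioned above.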

    Estimation of Confidence in the Dialogue based on Eye Gaze and Head Movement Information

    In human-robot interaction, human mental states in dialogue have attracted attention in the design of human-friendly robots that support educational use. Although mental states have been estimated from speech and visual information, estimating them precisely in educational settings remains challenging. In this paper, we propose a method to estimate a human mental state based on participants' eye gaze and head movement information. As the target mental state, we estimate participants' confidence levels in their answers to miscellaneous-knowledge questions. Participants' non-verbal information, such as eye gaze and head movements during dialogue with a robot, was collected in our experiment using an eye-tracking device. We then collected participants' confidence levels and analyzed the relationship between mental state and non-verbal information. Furthermore, we applied a machine learning technique to estimate participants' confidence levels from features extracted from the gaze and head movement information. The resulting model, using gaze and head movement features, achieved over 80% accuracy in estimating confidence levels. Our research provides insight into developing human-friendly robots that consider human mental states in dialogue.
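
    A hedged sketch of the general recipe described above: train a standard classifier on per-answer gaze and head-movement features and evaluate it with cross-validation. The feature names, synthetic data, and the choice of a random forest are assumptions for illustration, not the paper's actual setup.

        # Sketch only: classify answer confidence (high/low) from gaze and
        # head-movement features with a standard classifier and cross-validation.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 200
        # Hypothetical per-answer features: fixation duration on the robot, gaze
        # aversion ratio, head-pitch variance, head-turn count.
        X = rng.normal(size=(n, 4))
        y = (X[:, 0] - 0.8 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)  # 1 = confident

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
        print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")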

    Recognition of Emotion from Speech: A Review


    Human robot interaction in a crowded environment

    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision-based human-robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we propose a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human-robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them; if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, the robot can analyse further to understand whether any help can be offered to this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
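
    An illustrative sketch (not the thesis implementation) of the core idea of combining multiple visual cues in a Bayesian framework: per-person cue likelihoods are fused naively into a posterior probability that each detected person is the one commanding the robot. All cue names and probabilities below are invented for illustration.

        # Naive Bayesian fusion of per-person visual cues to pick the most probable
        # commanding person in a scene. Cues and probabilities are hypothetical.
        import numpy as np

        cues = {  # P(cue fires | commanding) and P(cue fires | not commanding)
            "face_towards_robot": {"p_cmd": 0.9, "p_not": 0.4},
            "raised_hand":        {"p_cmd": 0.8, "p_not": 0.1},
            "close_to_robot":     {"p_cmd": 0.7, "p_not": 0.3},
        }
        observations = [  # which cues fired for each detected person
            {"face_towards_robot": True,  "raised_hand": True,  "close_to_robot": False},
            {"face_towards_robot": True,  "raised_hand": False, "close_to_robot": True},
            {"face_towards_robot": False, "raised_hand": False, "close_to_robot": True},
        ]
        prior_cmd = 0.2  # prior that any given person is addressing the robot

        posteriors = []
        for obs in observations:
            p_cmd, p_not = prior_cmd, 1.0 - prior_cmd
            for name, fired in obs.items():
                lik = cues[name]
                p_cmd *= lik["p_cmd"] if fired else 1.0 - lik["p_cmd"]
                p_not *= lik["p_not"] if fired else 1.0 - lik["p_not"]
            posteriors.append(p_cmd / (p_cmd + p_not))

        print("P(commanding) per person:", np.round(posteriors, 3))
        print("Most probable commanding person:", int(np.argmax(posteriors)))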

    Multimodal Affect Recognition: Current Approaches and Challenges

    Many factors render multimodal affect recognition approaches appealing. First, humans employ a multimodal approach in emotion recognition. It is only fitting that machines, which attempt to reproduce elements of human emotional intelligence, employ the same approach. Second, the combination of multiple affective signals not only provides a richer collection of data but also helps alleviate the effects of uncertainty in the raw signals. Lastly, multimodal approaches potentially afford the flexibility to classify emotions even when one or more source signals cannot be retrieved. However, the multimodal approach presents challenges pertaining to the fusion of individual signals, the dimensionality of the feature space, and the incompatibility of collected signals in terms of time resolution and format. In this chapter, we explore the aforementioned challenges while presenting the latest scholarship on the topic. Hence, we first discuss the various modalities used in affect classification. Second, we explore the fusion of modalities. Third, we present publicly accessible multimodal datasets designed to expedite work on the topic by eliminating the laborious task of dataset collection. Fourth, we analyze representative works on the topic. Finally, we summarize the current challenges in the field and provide ideas for future research directions.
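
    A small sketch of decision-level (late) fusion, one of the fusion strategies surveyed in this space: per-modality emotion posteriors are combined with reliability weights, and a modality that cannot be retrieved at run time is simply dropped, illustrating the flexibility noted above. The numbers and modality names are illustrative assumptions.

        # Late fusion of per-modality class probabilities with reliability weights;
        # missing modalities are skipped. All values are illustrative.
        import numpy as np

        emotions = ["neutral", "happy", "angry", "sad"]
        modality_posteriors = {
            "face":       np.array([0.10, 0.70, 0.10, 0.10]),
            "voice":      np.array([0.20, 0.50, 0.20, 0.10]),
            "physiology": None,   # e.g. sensor dropout for this sample
        }
        reliability = {"face": 0.5, "voice": 0.3, "physiology": 0.2}

        weights, stacked = [], []
        for name, post in modality_posteriors.items():
            if post is not None:
                weights.append(reliability[name])
                stacked.append(post)

        fused = np.average(stacked, axis=0, weights=weights)
        fused /= fused.sum()
        print("Fused distribution:", dict(zip(emotions, np.round(fused, 3))))
        print("Predicted emotion:", emotions[int(np.argmax(fused))])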

    PRESENCE: A human-inspired architecture for speech-based human-machine interaction

    Recent years have seen steady improvements in the quality and performance of speech-based human-machine interaction, driven by a significant convergence in the methods and techniques employed. However, the quantity of training data required to improve state-of-the-art systems seems to be growing exponentially, and performance appears to be asymptotic to a level that may be inadequate for many real-world applications. This suggests that there may be a fundamental flaw in the underlying architecture of contemporary systems, as well as a failure to capitalize on the combinatorial properties of human spoken language. This paper addresses these issues and presents a novel architecture for speech-based human-machine interaction inspired by recent findings in the neurobiology of living systems. Called PRESENCE ("PREdictive SENsorimotor Control and Emulation"), this new architecture blurs the distinction between the core components of a traditional spoken language dialogue system and instead focuses on a recursive hierarchical feedback control structure. Cooperative and communicative behavior emerges as a by-product of an architecture founded on a model of interaction in which the system has in mind the needs and intentions of the user and the user has in mind the needs and intentions of the system.
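
    The following is a generic illustration of a recursive hierarchical feedback control loop of the kind the paragraph describes, not the PRESENCE implementation: each layer predicts its input, corrects its internal estimate from the prediction error, and passes the residual upward, so higher layers deal only with what lower layers failed to predict. Layer names, gains, and the scalar "sensory" stream are invented for illustration.

        # Generic sketch of a recursive hierarchy of predictive feedback layers.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Layer:
            name: str
            gain: float = 0.5          # how strongly the layer corrects its prediction
            prediction: float = 0.0
            def step(self, observation: float) -> float:
                error = observation - self.prediction    # sensory prediction error
                self.prediction += self.gain * error     # update internal estimate
                return error                             # residual passed upward

        @dataclass
        class Hierarchy:
            layers: List[Layer] = field(default_factory=list)
            def step(self, observation: float) -> List[float]:
                errors, signal = [], observation
                for layer in self.layers:                # lower layers absorb fast detail,
                    signal = layer.step(signal)          # higher layers see only residuals
                    errors.append(signal)
                return errors

        h = Hierarchy([Layer("acoustic"), Layer("lexical", gain=0.3), Layer("intentional", gain=0.1)])
        for obs in [1.0, 1.1, 0.9, 2.0, 2.1]:            # toy scalar "sensory" stream
            print([round(e, 3) for e in h.step(obs)])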

    Applications of Affective Computing in Human-Robot Interaction: state-of-art and challenges for manufacturing

    The introduction of collaborative robots aims to make production more flexible, promoting greater interaction between humans and robots also from a physical point of view. However, working closely with a robot may lead to stressful situations for the operator, which can negatively affect task performance. In Human-Robot Interaction (HRI), robots are expected to be socially intelligent, i.e., capable of understanding and reacting appropriately to human social and affective cues. This ability can be achieved by implementing affective computing, which concerns the development of systems able to recognize, interpret, process, and simulate human affects. Social intelligence is essential for robots to establish a natural interaction with people in several contexts, including the manufacturing sector with the emergence of Industry 5.0. In order to take full advantage of human-robot collaboration, the robotic system should be able to perceive the psycho-emotional and mental state of the operator through different sensing modalities (e.g., facial expressions, body language, voice, or physiological signals) and to adapt its behaviour accordingly. The development of socially intelligent collaborative robots in the manufacturing sector can lead to a symbiotic human-robot collaboration, raising several research challenges that still need to be addressed. The goals of this paper are the following: (i) providing an overview of affective computing implementations in HRI; (ii) analyzing the state of the art on this topic in different application contexts (e.g., healthcare, service applications, and manufacturing); (iii) highlighting research challenges for the manufacturing sector.
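
    A hedged sketch of the adaptation idea described above: an estimated operator stress level (for example derived from physiological signals) is mapped to collaborative-robot behaviour parameters such as speed and separation distance. The thresholds, parameter names, and values are invented for illustration, not taken from the paper.

        # Map an estimated operator stress level to cobot behaviour parameters.
        from dataclasses import dataclass

        @dataclass
        class RobotBehaviour:
            speed_scale: float        # fraction of nominal trajectory speed
            min_separation_m: float   # extra safety distance kept from the operator

        def adapt_behaviour(stress_level: float) -> RobotBehaviour:
            """stress_level in [0, 1], e.g. from heart-rate variability or skin conductance."""
            if stress_level < 0.3:    # operator relaxed: work at nominal pace
                return RobotBehaviour(speed_scale=1.0, min_separation_m=0.3)
            if stress_level < 0.7:    # moderate stress: slow down, keep more distance
                return RobotBehaviour(speed_scale=0.6, min_separation_m=0.5)
            return RobotBehaviour(speed_scale=0.3, min_separation_m=0.8)  # high stress

        print(adapt_behaviour(0.82))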