Automatic emotional state detection using facial expression dynamics in videos
In this paper, an automatic emotion detection system is built that enables a computer or machine to detect a user's emotional state from facial expressions in human-computer communication. First, dynamic motion features are extracted from facial expression videos; advanced machine learning methods for classification and regression are then used to predict the emotional states.
The system is evaluated on two publicly available datasets, GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can automatically read the facial expressions of its user. The technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.
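A minimal sketch of the kind of pipeline the abstract describes, under loose assumptions: frame-difference statistics stand in for the dynamic motion features, and scikit-learn SVMs stand in for the unspecified "advanced machine learning methods"; all data below is randomly generated placeholder data, not GEMEP_FERA or AVEC2013.

    import numpy as np
    from sklearn.svm import SVC, SVR

    def motion_features(frames):
        """frames: (T, H, W) grayscale clip -> fixed-length dynamic motion descriptor."""
        diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))   # frame-to-frame motion
        energy = diffs.reshape(diffs.shape[0], -1)                   # (T-1, H*W) motion energy
        # Temporal statistics of the motion energy act as the dynamic descriptor.
        return np.concatenate([energy.mean(0), energy.std(0), energy.max(0)])

    rng = np.random.default_rng(0)
    clips = rng.random((20, 16, 24, 24))      # 20 placeholder clips, 16 frames of 24x24 pixels
    labels = rng.integers(0, 5, size=20)      # placeholder discrete emotion categories
    scores = rng.random(20)                   # placeholder continuous affect scores

    X = np.stack([motion_features(c) for c in clips])
    clf = SVC().fit(X, labels)                # classification of emotion categories
    reg = SVR().fit(X, scores)                # regression of continuous emotional states
    print(clf.predict(X[:3]), reg.predict(X[:3]))

In practice the feature extractor would operate on tracked facial regions rather than raw pixels, but the two-stage structure (motion features, then learned classifier/regressor) is the point illustrated here.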
Multi-party Interaction in a Virtual Meeting Room
This paper presents an overview of the work carried out at the HMI group of the University of Twente in the domain of multi-party interaction. The process from automatic observations of behavioral aspects, through interpretation, to recognized behavior is discussed for various modalities and levels. We show how a virtual meeting room can be used both for visualization and evaluation of behavioral models and as a research tool for studying the effect of modified stimuli on the perception of behavior.
Flexible human-robot cooperation models for assisted shop-floor tasks
The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots, i.e., robots able to work alongside and together with humans, could bring to the whole production process. In this context, an enabling technology that has not yet been achieved is the design of flexible robots able to deal at all levels with humans' intrinsic variability, which is not only necessary for a comfortable working experience for the person but also a precious capability for efficiently dealing with unexpected events. In this paper, a sensing, representation, planning and control architecture for flexible human-robot cooperation, referred to as FlexHRC, is proposed. FlexHRC relies on wearable sensors for human action recognition, AND/OR graphs for the representation of and reasoning upon cooperation models, and a Task Priority framework to decouple action planning from robot motion planning and control.
Comment: Submitted to Mechatronics (Elsevier).
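To make the AND/OR-graph representation concrete, here is a minimal sketch of such a cooperation model, only to illustrate the semantics FlexHRC reasons over: a node is solved if all children of any one of its hyper-arcs are solved (OR over arcs, AND within an arc). The node names and the assembly task below are invented, not taken from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        # Each hyper-arc is a list of child nodes; the node is solved when ALL
        # children of ANY single arc are solved.
        arcs: list = field(default_factory=list)

    def solved(node, done):
        """A leaf is solved when its action is in `done`; inner nodes use AND/OR semantics."""
        if not node.arcs:
            return node.name in done
        return any(all(solved(child, done) for child in arc) for arc in node.arcs)

    # Hypothetical task: the part is mounted if either the human hands it over and the
    # robot fastens it, or the robot picks it up itself and fastens it.
    handover, pick, fasten = Node("human_handover"), Node("robot_pick"), Node("robot_fasten")
    mounted = Node("part_mounted", arcs=[[handover, fasten], [pick, fasten]])

    print(solved(mounted, done={"human_handover"}))                  # False
    print(solved(mounted, done={"human_handover", "robot_fasten"}))  # True

The alternative hyper-arcs are what give the representation its flexibility: either branch of the graph accommodates a different way the human may choose to contribute to the same goal.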
Motion-Scenario Decoupling for Rat-Aware Video Position Prediction: Strategy and Benchmark
Recently, significant progress has been made in human action recognition and behavior prediction using deep learning techniques, leading to improved vision-based semantic understanding. However, there is still a lack of high-quality motion datasets for small bio-robotics, which present more challenging scenarios for long-term movement prediction and behavior control based on third-person observation. In this study, we introduce RatPose, a bio-robot motion prediction dataset constructed by considering the influence of individual and environmental factors based on predefined annotation rules. To enhance the robustness of motion prediction against these factors, we propose a Dual-stream Motion-Scenario Decoupling (\textit{DMSD}) framework that effectively separates scenario-oriented and motion-oriented features and designs a scenario contrast loss and a motion clustering loss for overall training. With this distinctive architecture, the dual-branch feature flows interact and compensate for each other in a decomposition-then-fusion manner. Moreover, we demonstrate significant performance improvements of the proposed \textit{DMSD} framework on tasks of different difficulty levels. We also implement long-term discretized trajectory prediction tasks to verify the generalization ability of the proposed dataset.
Comment: Rat, Video Position Prediction
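A hedged PyTorch sketch of a dual-stream, decompose-then-fuse predictor in the spirit of DMSD: one branch encodes scenario features, one encodes motion features, and the two are fused before the trajectory head. Layer sizes, feature dimensions and the training objective are illustrative assumptions, not the authors' architecture; in particular the plain MSE loss below stands in for the full objective, which also includes the scenario contrast and motion clustering losses.

    import torch
    import torch.nn as nn

    class DualStreamPredictor(nn.Module):
        def __init__(self, feat_dim=128, horizon=10):
            super().__init__()
            self.horizon = horizon
            self.scenario_enc = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())  # scenario branch
            self.motion_enc = nn.Sequential(nn.Linear(64, feat_dim), nn.ReLU())     # motion branch
            self.head = nn.Linear(2 * feat_dim, horizon * 2)                        # future (x, y) positions

        def forward(self, scene, motion):
            s, m = self.scenario_enc(scene), self.motion_enc(motion)
            fused = torch.cat([s, m], dim=-1)                    # decomposition-then-fusion
            pred = self.head(fused).view(-1, self.horizon, 2)
            return s, m, pred

    model = DualStreamPredictor()
    scene = torch.randn(8, 512)      # placeholder scenario descriptors
    motion = torch.randn(8, 64)      # placeholder past-motion descriptors
    target = torch.randn(8, 10, 2)   # placeholder future trajectories

    s, m, pred = model(scene, motion)
    loss = nn.functional.mse_loss(pred, target)  # stand-in for the full DMSD objective
    loss.backward()

Returning the branch features alongside the prediction is the hook where contrastive and clustering terms would be attached in a fuller implementation.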
- …