
    Automated detection of impulsive movements in HCI

    This paper introduces an algorithm for automatically measuring impulsivity. Impulsivity can serve as a major expressive movement feature in systems for real-time analysis of emotion expression from human full-body movement, a research area that has received increased attention in the affective computing community. In particular, our algorithm is developed in the framework of the EU H2020-ICT Project DANCE, which investigates techniques for sensory substitution for blind people, in order to enable perception of, and participation in, non-verbal, artistic whole-body experiences. The algorithm was tested by applying it to a reference archive of short dance performances. The archive includes a collection of both impulsive and fluid movements. Results show that our algorithm can reliably distinguish impulsive from fluid performances.
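
    The abstract does not spell out the detector itself. As a rough illustration of how impulsivity is often operationalized (a sharp speed rise with no preparatory build-up), here is a minimal sketch; the function name, thresholds, and the preparation heuristic are all assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    def impulsivity_onsets(speed, fs, acc_thresh=2.0, prep_window=0.5, prep_speed=0.3):
        """Flag frames where speed rises sharply without a preceding build-up.

        speed: 1-D array of limb or body speed (m/s) sampled at fs Hz.
        Thresholds are illustrative placeholders, not the paper's values.
        Returns indices of candidate impulsive onsets."""
        acc = np.gradient(speed) * fs              # numerical acceleration, m/s^2
        prep = int(prep_window * fs)               # frames of "preparation" to inspect
        onsets = []
        for i in range(prep, len(speed)):
            # impulsive: strong acceleration now, little movement just before
            if acc[i] > acc_thresh and speed[i - prep:i].mean() < prep_speed:
                onsets.append(i)
        return np.asarray(onsets, dtype=int)
    ```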

    Analysis of intrapersonal synchronization in full-body movements displaying different expressive qualities

    Intrapersonal synchronization of limb movements is a relevant feature for assessing the coordination of motor behavior. In this paper, we show that it can also distinguish between full-body movements performed with different expressive qualities, namely rigidity, fluidity, and impulsivity. For this purpose, we collected a dataset of movements performed by professional dancers and annotated the perceived movement qualities with the help of a group of experts in expressive movement analysis. We computed intrapersonal synchronization by applying the Event Synchronization algorithm to the time series of the speed of the arms and hands. Results show that movements performed with different qualities display significantly different amounts of intrapersonal synchronization: impulsive movements are the most synchronized, fluid ones show the lowest values of synchronization, and rigid ones lie in between.
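
    The Event Synchronization measure the authors apply is documented in the literature (Quian Quiroga et al., 2002). Below is a compact sketch of its fixed-window variant; how the speed peaks that serve as events are extracted, and the choice of the tolerance tau, are assumptions here.

    ```python
    import numpy as np

    def event_synchronization(tx, ty, tau):
        """Event Synchronization measure Q (Quian Quiroga et al., 2002).

        tx, ty: sorted arrays of event times, e.g. speed peaks of two limbs.
        tau: tolerance window; events of the two series closer than tau
        count as synchronized. Q ranges from 0 (no co-occurring events) to 1."""
        def c(a, b):                          # events of a with a partner in b
            total = 0.0
            for t in a:
                d = np.abs(b - t).min()
                if d == 0:
                    total += 0.5              # simultaneous events count half
                elif d < tau:
                    total += 1.0
            return total
        norm = np.sqrt(len(tx) * len(ty))
        return (c(tx, ty) + c(ty, tx)) / norm if norm else 0.0
    ```

    With events taken as, e.g., local maxima of arm and hand speed, Q near 1 would correspond to the highly synchronized impulsive movements reported above.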

    A Learning-based Stochastic MPC Design for Cooperative Adaptive Cruise Control to Handle Interfering Vehicles

    Vehicle-to-Vehicle (V2V) communication has great potential to improve the reaction accuracy of driver assistance systems in critical driving situations. Cooperative Adaptive Cruise Control (CACC), an automated application, provides drivers with extra benefits such as traffic throughput maximization and collision avoidance. CACC systems must be designed to be sufficiently robust against special maneuvers such as interfering vehicles cutting into the CACC platoon or hard braking by leading cars. To address this problem, a Neural-Network (NN)-based cut-in detection and trajectory prediction scheme is proposed in the first part of this paper. Next, a probabilistic framework is developed in which the cut-in probability is calculated from the output of the cut-in prediction block. Finally, a Stochastic Model Predictive Controller (SMPC) is designed that incorporates this cut-in probability to enhance its reaction to a detected dangerous cut-in maneuver. The overall system is implemented and its performance is evaluated using realistic driving scenarios from the Safety Pilot Model Deployment (SPMD).
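
    The abstract describes the architecture (an NN cut-in predictor feeding a probability into an SMPC) without giving the controller's equations. As a heavily reduced illustration of folding a cut-in probability into the control decision, here is a one-step, scenario-weighted sketch; the real controller is a multi-step stochastic MPC, and every name and constant below is an assumption.

    ```python
    import numpy as np

    def scenario_weighted_accel(gap, v_ego, v_lead, v_cutin, p_cutin,
                                dt=0.1, a_grid=np.linspace(-3.0, 2.0, 51)):
        """Toy one-step stand-in for a stochastic MPC: pick the acceleration
        minimizing the expected cost over two scenarios, the current leader
        (prob. 1 - p_cutin) and the predicted cut-in vehicle (prob. p_cutin).
        The spacing policy and cost weights are illustrative."""
        def cost(a, v_front):
            g_next = gap + (v_front - (v_ego + a * dt)) * dt   # predicted gap
            g_ref = 5.0 + 1.5 * v_ego                          # constant-headway policy
            return (g_next - g_ref) ** 2 + 0.5 * a ** 2        # tracking + comfort
        expected = [(1 - p_cutin) * cost(a, v_lead) + p_cutin * cost(a, v_cutin)
                    for a in a_grid]
        return a_grid[int(np.argmin(expected))]
    ```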

    Using the Audio Respiration Signal for Multimodal Discrimination of Expressive Movement Qualities

    In this paper we propose a multimodal approach to distinguishing between movements displaying three different expressive qualities: fluid, fragmented, and impulsive movements. Our approach is based on the Event Synchronization algorithm, which we apply to compute the amount of synchronization between two low-level features extracted from multimodal data. In more detail, we use the energy of the audio respiration signal, captured by a standard microphone placed near the mouth, and the whole-body kinetic energy estimated from motion capture data. The method was evaluated on 90 movement segments performed by 5 dancers. Results show that fragmented movements display higher average synchronization than fluid and impulsive movements.
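
    A minimal sketch of the two low-level features the abstract names, under generic assumptions (frame length, a per-joint mass model); peaks extracted from these two envelopes would then feed an Event Synchronization measure like the one sketched earlier.

    ```python
    import numpy as np

    def audio_energy(samples, sr, frame_s=0.1):
        """RMS energy envelope of the respiration audio, one value per frame."""
        n = int(frame_s * sr)
        frames = samples[: len(samples) // n * n].reshape(-1, n)
        return np.sqrt((frames ** 2).mean(axis=1))

    def body_kinetic_energy(positions, fps, masses):
        """Whole-body kinetic energy from mocap data.

        positions: (T, J, 3) joint trajectories; masses: length-J per-joint
        mass estimates (e.g., from anthropometric tables)."""
        vel = np.gradient(positions, 1.0 / fps, axis=0)    # (T, J, 3) velocities
        speed_sq = (vel ** 2).sum(axis=2)                  # (T, J) squared speeds
        return 0.5 * (speed_sq * masses).sum(axis=1)       # (T,) energy series
    ```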

    Thinking Fast or Slow? Understanding Answering Behavior Using Dual-Process Theory through Mouse Cursor Movements

    Users' underlying cognitive states govern their behavior online. For instance, an extreme cognitive burden during live system use can negatively influence important user behaviors such as continued use of the system or purchase of a product. Inferring the user's cognitive state therefore has practical significance for commercial systems. We use Dual-Process Theory to explain how mouse cursor movements can serve as an effective measure of cognitive load. In an experimental study with 534 subjects, we induced cognitive burden and then monitored mouse cursor movements while participants answered questions in an online survey. We found that participants' mouse cursor movements slow down when they are engaged in cognitively demanding tasks. With the newly derived measures, we were able to infer a state of heightened cognitive load with an overall accuracy of 70.22%. The results enable researchers to measure users' cognitive load with more granularity and present a new, theoretically sound method to assess the user's cognitive state.
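
    The paper derives its own cursor measures; purely as an illustration of the kind of kinematic features involved, here is a sketch that computes basic statistics from a log of (timestamp, x, y) samples. The feature names are hypothetical.

    ```python
    import numpy as np

    def cursor_features(t, x, y):
        """Summarize cursor kinematics from a log of (timestamp, x, y) samples.

        Slower, more hesitant movement is the signature of heightened cognitive
        load the paper reports; the exact feature set is an assumption here."""
        t, x, y = (np.asarray(v, dtype=float) for v in (t, x, y))
        dt = np.diff(t)
        dist = np.hypot(np.diff(x), np.diff(y))
        speed = dist / np.where(dt > 0, dt, np.nan)    # pixels per second
        return {
            "mean_speed": float(np.nanmean(speed)),
            "idle_ratio": float((dist == 0).mean()),   # fraction of motionless steps
            "total_distance": float(dist.sum()),
        }
    ```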

    Analysis of the qualities of human movement in individual action

    The project was organized as a preliminary study for Use Case #1 of the Horizon 2020 Research Project "Dance in the Dark" (H2020 ICT Project n.645553 - http://dance.dibris.unige.it). The main objective of the DANCE project is to study and develop novel techniques and algorithms for the automated measurement of non-verbal bodily expression and of the emotional qualities conveyed by human movement, in order to make non-verbal, artistic whole-body experiences perceivable by visually impaired people. In the framework of the eNTERFACE '15 Workshop we investigated methods for analyzing human movement in terms of expressive qualities. When analyzing an individual action, we concentrated mainly on the quality of motion and on elements suggesting different emotions. We developed a system that automatically extracts several movement features and transfers them to the auditory domain through interactive sonification. We performed an experiment with 26 participants and collected a dataset of video and audio recordings plus accelerometer data. Finally, we conducted a perception study through questionnaires in order to evaluate and validate the system. As a real-time application of our system we developed a game named "Move in the Dark", which was presented at the Mundaneum Museum in Mons, Belgium, and at the Festival della Scienza, Genoa, Italy (27 November 2015).
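
    The sonification mapping itself is not detailed in the abstract. As a toy illustration of transferring movement features to the auditory domain, here is a minimal feature-to-sound mapping; the parameter ranges and the choice of features are assumptions, not the project's actual strategies.

    ```python
    def sonification_params(energy, smoothness, e_max=1.0):
        """Toy feature-to-sound mapping: louder with higher kinetic energy,
        higher-pitched with smoother motion. Illustrative only."""
        volume = max(0.0, min(energy / e_max, 1.0))        # 0..1 loudness
        smooth = max(0.0, min(smoothness, 1.0))            # clamp feature to 0..1
        pitch_hz = 220.0 + 660.0 * smooth                  # 220 Hz up to 880 Hz
        return volume, pitch_hz
    ```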

    A serious games platform for validating sonification of human full-body movement qualities

    In this paper we describe a serious games platform for validating the sonification of human full-body movement qualities. The platform supports the design and development of serious games aimed at validating (i) our techniques for measuring expressive movement qualities, and (ii) the mapping strategies used to translate such qualities into the auditory domain by means of interactive sonification and active music experience. The platform is part of a more general framework developed in the context of the EU ICT H2020 DANCE "Dancing in the dark" Project n.645553, which aims at making non-verbal, artistic whole-body experiences perceivable by visually impaired people.

    Toward an affect-sensitive multimodal human-computer interaction

    This paper argues that next-generation human-computer interaction (HCI) designs need to include the essence of emotional intelligence -- the ability to recognize a user's affective states -- in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In face-to-face interaction, humans detect and interpret these interactive signals from their communicator with little or no effort, yet designing and developing an automated system that accomplishes the same tasks is rather difficult. This paper surveys past work on solving these problems by computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI -- an automatic, personalized analyzer of a user's nonverbal affective feedback.

    Towards a multimodal repository of expressive movement qualities in dance

    In this paper, we present a new multimodal repository for the analysis of expressive movement qualities in dance. First, we discuss the guidelines and methodology we applied to create the repository. Next, we present the technical setup of the recordings and the platform for capturing synchronized audio-visual, physiological, and motion capture data. The initial content of the repository consists of about 90 minutes of short dance performances, movement sequences, and improvisations performed by four dancers, displaying three expressive qualities: Fluidity, Impulsivity, and Rigidity.
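
    Working with such a repository typically requires putting the differently sampled streams on a common clock before feature-level analysis. A generic alignment sketch follows (an assumption about usage, not the repository's own tooling):

    ```python
    import numpy as np

    def align_to_reference(t_src, values, t_ref):
        """Resample one modality's feature stream onto a reference clock by
        linear interpolation, so audio, physiological, and mocap features
        share a common time base."""
        order = np.argsort(t_src)                          # np.interp needs sorted times
        return np.interp(t_ref, np.asarray(t_src)[order], np.asarray(values)[order])
    ```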