    AudExpCreator: A GUI-based Matlab tool for designing and creating auditory experiments with the Psychophysics Toolbox

    We present AudExpCreator, a GUI-based Matlab tool for designing and creating auditory experiments. AudExpCreator allows users to generate auditory experiments that run on Matlab's Psychophysics Toolbox without having to write any code; rather, users simply follow instructions in GUIs to specify the desired design parameters. The software comprises five auditory study types, including behavioral studies and integration with EEG and physiological response collection systems. Advanced features permit more complicated experimental designs as well as maintenance and updating of previously created experiments. AudExpCreator alleviates programming barriers while providing a free, open-source alternative to commercial experimental design software.
    Comment: 15 pages, 6 figures
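
    AudExpCreator itself is a Matlab/Psychophysics Toolbox tool whose internals are not shown in the abstract; purely as an illustrative sketch, the Python snippet below shows how a declarative set of design parameters of the kind a user might enter in the GUIs (all names and values here are hypothetical) could be turned into a set of ramped pure-tone stimuli.

        # Illustrative sketch only: AudExpCreator is a Matlab tool; this is not its code.
        # The design parameters below are hypothetical stand-ins for GUI inputs.
        import numpy as np
        from scipy.io import wavfile

        design = {
            "sample_rate": 44100,                 # Hz
            "frequencies_hz": [500, 1000, 2000],
            "levels_db": [-20, -10, 0],           # relative to full scale
            "duration_s": 0.5,
            "ramp_s": 0.01,                       # onset/offset ramp to avoid clicks
        }

        def make_tone(freq, level_db, fs, dur, ramp):
            """Generate a ramped pure tone at the given frequency and relative level."""
            t = np.arange(int(fs * dur)) / fs
            tone = np.sin(2 * np.pi * freq * t) * 10 ** (level_db / 20)
            n_ramp = int(fs * ramp)
            env = np.ones_like(tone)
            env[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)    # fade in
            env[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)   # fade out
            return (tone * env).astype(np.float32)

        # One stimulus file per frequency/level combination in the design.
        for f in design["frequencies_hz"]:
            for lvl in design["levels_db"]:
                stim = make_tone(f, lvl, design["sample_rate"],
                                 design["duration_s"], design["ramp_s"])
                wavfile.write(f"tone_{f}Hz_{lvl}dB.wav", design["sample_rate"], stim)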

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both Computer Vision (CV) and Artificial Neural Networks (ANNs) has enabled the development of efficient automatic systems for analysing people's behaviour. By studying hand movements it is possible to recognize gestures, which people often use to communicate information non-verbally. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, in turn, plays a key role in the action recognition and affective computing fields: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions from their poses and movements. Both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: the first proposes a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for the recognition of sign language and semaphoric hand gestures; the second presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, the last module provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs). The performance of LSTM-RNNs is explored in depth, since their ability to model the long-term contextual information of temporal sequences makes them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
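
    The abstract describes the action recognition module only at a high level; as a minimal sketch under stated assumptions, the following Python/PyTorch code shows one possible wiring of a two-branch stacked LSTM over 2D skeleton sequences. The input layout, layer sizes, motion branch and fusion step are assumptions for illustration, not the thesis architecture.

        # Illustrative sketch only: the thesis describes a two-branch stacked LSTM-RNN over
        # 2D skeletons; the dimensions, motion branch, and fusion below are assumptions.
        import torch
        import torch.nn as nn

        class TwoBranchLSTM(nn.Module):
            def __init__(self, n_joints=18, hidden=128, n_classes=10):
                super().__init__()
                # Branch 1: raw 2D joint coordinates per frame (n_joints * 2 values).
                self.pos_branch = nn.LSTM(n_joints * 2, hidden, num_layers=2, batch_first=True)
                # Branch 2: frame-to-frame joint displacements (same dimensionality).
                self.motion_branch = nn.LSTM(n_joints * 2, hidden, num_layers=2, batch_first=True)
                # Fuse the final hidden states of both branches and classify the action.
                self.classifier = nn.Linear(hidden * 2, n_classes)

            def forward(self, joints):                      # joints: (batch, time, n_joints * 2)
                motion = joints[:, 1:] - joints[:, :-1]     # temporal differences
                _, (h_pos, _) = self.pos_branch(joints)
                _, (h_mot, _) = self.motion_branch(motion)
                fused = torch.cat([h_pos[-1], h_mot[-1]], dim=1)
                return self.classifier(fused)

        # Example: a batch of 4 clips, 30 frames each, 18 joints with (x, y) coordinates.
        logits = TwoBranchLSTM()(torch.randn(4, 30, 36))
        print(logits.shape)  # torch.Size([4, 10])

    One branch reads raw joint positions while the other reads frame-to-frame displacements, a common way of giving recurrent models an explicit motion signal alongside pose.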

    Reducing reversal errors in localizing the source of sound in virtual environment without head tracking

    This paper presents a study of the effect of additional audio cueing and a Head-Related Transfer Function (HRTF) on human performance in a sound source localization task without head movement. Existing sound spatialization techniques generate reversal errors. We aim to reduce these errors by introducing sensory cues based on sound effects. We conducted an experimental study to evaluate the impact of the additional cues on the sound source localization task. The results showed the benefit of combining the additional cues with the HRTF in terms of localization accuracy and the reduction of reversal errors. This technique significantly reduces reversal errors compared to using the HRTF alone. For instance, it could be used to improve audio spatial alerting, spatial tracking and target detection in simulation applications when head tracking is not available.
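
    The paper's additional sensory cues are not specified in the abstract; as a minimal sketch of the underlying binaural rendering step on which such cues would be layered, and assuming a pair of measured head-related impulse responses for the desired direction is available, HRTF spatialization amounts to convolving the mono source with the left and right impulse responses, as in the Python snippet below.

        # Minimal sketch of binaural spatialization with an HRTF. Real HRIRs would come
        # from a measured dataset for the desired azimuth/elevation; loading them is
        # outside this sketch, so placeholder noise bursts stand in for them here.
        import numpy as np
        from scipy.signal import fftconvolve

        def spatialize(mono, hrir_left, hrir_right):
            """Convolve a mono signal with left/right head-related impulse responses."""
            left = fftconvolve(mono, hrir_left)
            right = fftconvolve(mono, hrir_right)
            stereo = np.stack([left, right], axis=1)
            # Normalize to avoid clipping when the HRIRs have gain above unity.
            return stereo / np.max(np.abs(stereo))

        fs = 44100
        mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s, 440 Hz test tone
        hrir_l = np.random.randn(256) * np.hanning(256)        # placeholder, not measured
        hrir_r = np.random.randn(256) * np.hanning(256)        # placeholder, not measured
        binaural = spatialize(mono, hrir_l, hrir_r)
        print(binaural.shape)  # (44355, 2): signal length + HRIR length - 1, two channels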