Human Emotion Recognition Based on Galvanic Skin Response Signal Feature Selection and SVM
A novel human emotion recognition method based on automatically selected
Galvanic Skin Response (GSR) signal features and an SVM is proposed in this paper.
GSR signals were acquired with the e-Health Sensor Platform V2.0. The data were
then de-noised with a wavelet function and normalized to remove individual
differences. Thirty features are extracted from the normalized data; however,
using all of these features directly leads to a low recognition rate. To obtain
an optimized feature set, a covariance-based feature selection is
employed in our method. Finally, an SVM fed with the optimized features is
used to perform human emotion recognition. The experimental results
indicate that the proposed method achieves good human emotion recognition,
with a recognition accuracy above 66.67%.
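The covariance-based feature-selection step described above can be sketched with a minimal NumPy illustration. This is a greedy stand-in that drops features highly correlated with already-kept ones; the correlation threshold and the greedy scan order are assumptions for illustration, not the paper's exact selector:

```python
import numpy as np

def select_features(X, threshold=0.9):
    """Greedy covariance-based selection: keep a feature only if its
    absolute correlation with every already-kept feature stays below
    the threshold (a simplified stand-in for the paper's selector)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

# Toy example: feature 1 is a near-duplicate of feature 0,
# while feature 2 is independent noise.
rng = np.random.default_rng(0)
f0 = rng.normal(size=100)
X = np.column_stack([f0,
                     2.0 * f0 + 1e-3 * rng.normal(size=100),
                     rng.normal(size=100)])
print(select_features(X))  # the near-duplicate feature 1 is dropped
```

Redundant features carry little extra information for the classifier, so pruning them before the SVM is one plausible way such a selector raises the recognition rate.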
A sensing architecture for empathetic data systems
Today's increasingly large and complex databases require novel, machine-aided ways of exploring data. To optimize the selection and presentation of data, we suggest an unconventional approach: instead of relying exclusively on explicit user input to specify relevant information or to navigate through a data space, we additionally exploit the power and potential of the users' unconscious processes. To this end, the user is immersed in a mixed-reality environment while their bodily reactions are captured using unobtrusive wearable devices. These reactions are analyzed in real time and mapped onto higher-level psychological states, such as surprise or boredom, in order to trigger appropriate system responses that direct the users' attention to areas of potential interest in the visualizations. The realization of such a close experience-based human-machine loop raises a number of technical challenges, such as the real-time interpretation of psychological user states. This paper describes a sensing architecture for empathetic data systems that has been developed as part of such a loop, and how it tackles these diverse challenges.
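The mapping from low-level bodily reactions to higher-level psychological states can be sketched as a toy rule-based classifier. The signal names and thresholds below are invented for illustration; the paper's actual models are more sophisticated:

```python
def classify_state(gsr_delta, gaze_dwell_s):
    """Toy mapper from low-level sensor readings to a coarse
    psychological state. Both thresholds are hypothetical."""
    if gsr_delta > 0.5:        # sudden rise in skin conductance
        return "surprise"
    if gaze_dwell_s > 3.0:     # long, static gaze dwell time
        return "boredom"
    return "neutral"

print(classify_state(0.8, 0.2))  # → surprise
print(classify_state(0.1, 5.0))  # → boredom
```

A system response (e.g., highlighting a region of the visualization) would then be triggered by the returned state label rather than by the raw sensor values.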
Robust detection of the user's head orientation from an RGBZ camera
Face localization is a widely used feature in many software products today. In
addition, with the appearance of RGBZ sensors (such as the Kinect or the
RealSense), it has become possible not only to detect where a head is, but
also to obtain three-dimensional information about it.
In this project we design, develop, and test software that uses these RGBZ
cameras to obtain the 3D orientation of the head of the user in front of them,
that is, the angles that determine the direction in which the user is looking.
To this end, we designed an algorithm based on the Iterative Closest Point
method, so that for each image captured by the camera the head's angles are
detected.
An external platform was also developed using a servomotor and an Arduino
microcontroller: a rotating base on which a scale reproduction of a 3D-printed
head can be oriented precisely, allowing the different parameters of the
algorithm to be tested and its results validated.
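The core alignment step of an ICP-style pose estimator can be illustrated with the Kabsch algorithm: given matched 3D points from the camera and a reference head model, the rotation that best aligns them is recovered via SVD. This is a minimal sketch of one inner ICP iteration with correspondences assumed known, not the project's actual implementation:

```python
import numpy as np

def best_rotation(src, dst):
    """Kabsch: least-squares rotation mapping src points onto dst
    (one inner step of ICP; correspondences are assumed known)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Toy check: rotate a point cloud by 30 degrees about the vertical
# (yaw) axis and recover that angle from the estimated matrix.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
cloud = np.random.default_rng(1).normal(size=(200, 3))
R_est = best_rotation(cloud, cloud @ R_true.T)
yaw = np.rad2deg(np.arctan2(R_est[0, 2], R_est[0, 0]))
print(round(yaw, 3))  # → 30.0
```

In a full ICP loop this step alternates with a nearest-neighbour search that re-establishes correspondences between the captured depth points and the head model until convergence.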
Identifying the region of a computer screen being looked at from blur
When it comes to understanding a person's behavior, gaze is an important source of information. Analyzing the behavior of consumers or criminals, or certain cognitive states, involves interpreting gaze within a scene over time. There is a real need to identify the region of a screen, or any other medium, that a user is looking at. To do so, human vision composes several images in order to understand the three-dimensional relationship between the objects and the scene. The 3D perception of a real scene thus relies on several images. But what happens when there is only a single image?
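Single-image approaches of this kind typically exploit defocus blur as a depth cue. The standard thin-lens circle-of-confusion formula relates an object's distance to the size of its blur circle on the sensor; the sketch below is a general optics illustration, not the thesis's method, and the lens parameters are invented:

```python
def blur_circle_mm(aperture_mm, focal_mm, focus_mm, object_mm):
    """Thin-lens circle-of-confusion diameter (mm) on the sensor for an
    object at object_mm when the lens is focused at focus_mm."""
    return (aperture_mm * focal_mm * abs(object_mm - focus_mm)
            / (object_mm * (focus_mm - focal_mm)))

# Hypothetical 50 mm f/2 lens (25 mm aperture) focused at 1 m:
# an object at the focus plane is sharp, one at 2 m is blurred.
print(blur_circle_mm(25.0, 50.0, 1000.0, 1000.0))           # → 0.0
print(round(blur_circle_mm(25.0, 50.0, 1000.0, 2000.0), 3))  # → 0.658
```

Inverting this relationship (measured blur → distance) is what makes depth, and hence the gazed region, partially recoverable from a single defocused image.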
AutoSelect: What You Want Is What You Get
Bee, N., et al.: AutoSelect: What You Want Is What You Get: real-time processing of visual attention and affect. In: André, E., et al. (eds.): Perception and Interactive Technologies: International Tutorial and Research Workshop, PIT 2006, Kloster Irsee, Germany, June 19-21, 2006, Proceedings, pp. 40-52. Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 4021. Springer, Berlin (2006)