
    Internet of Things: IoT Infrastructures: Second international summit, IoT 360°

    The two-volume set LNICST 169 and 170 constitutes the thoroughly refereed post-conference proceedings of the Second International Internet of Things Summit, IoT 360° 2015, held in Rome, Italy, in October 2015. IoT 360° is an event bringing a 360-degree perspective on IoT-related projects in important sectors such as mobility, security, healthcare and urban spaces. The event also aims to coach those involved along the whole path from research to innovation and on to commercialization in the IoT domain. This volume contains 62 revised full papers presented at the following four conferences: the International Conference on Safety and Security in Internet of Things, SaSeIoT, the International Conference on Smart Objects and Technologies for Social Good, GOODTECHS, the International Conference on Cloud, Networking for IoT systems, CN4IoT, and the International Conference on IoT Technologies for HealthCare, HealthyIoT.

    ASCERTAIN: Emotion and Personality Recognition Using Commercial Sensors

    We present ASCERTAIN, a multimodal databaASe for impliCit pERsonaliTy and Affect recognitIoN using commercial physiological sensors. To our knowledge, ASCERTAIN is the first database to connect personality traits and emotional states via physiological responses. ASCERTAIN contains big-five personality scales and emotional self-ratings of 58 users along with their Electroencephalogram (EEG), Electrocardiogram (ECG), Galvanic Skin Response (GSR) and facial activity data, recorded using off-the-shelf sensors while viewing affective movie clips. We first examine relationships between users' affective ratings and personality scales in the context of prior observations, and then study linear and non-linear physiological correlates of emotion and personality. Our analysis suggests that the emotion-personality relationship is better captured by non-linear rather than linear statistics. We finally attempt binary emotion and personality trait recognition using physiological features. Experimental results cumulatively confirm that personality differences are better revealed when comparing user responses to emotionally homogeneous videos, and that above-chance recognition is achieved for both affective and personality dimensions.
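    The contrast between linear and non-linear statistics mentioned above can be illustrated with a small sketch. The snippet below is an assumption-laden illustration, not the authors' analysis code: it compares Pearson correlation against Spearman rank correlation for a single hypothetical GSR-derived feature and one big-five score; the data, the feature name and the tanh-shaped relationship are invented for demonstration.

```python
# Illustrative sketch (not the authors' code): comparing a linear and a
# non-linear association statistic between a physiological feature and a
# personality scale, assuming one scalar feature per user.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 58 users, one GSR-derived feature and one big-five score.
n_users = 58
gsr_feature = rng.normal(size=n_users)                                  # e.g., mean skin-conductance response
extraversion = np.tanh(gsr_feature) + 0.3 * rng.normal(size=n_users)    # monotone but non-linear link (assumed)

r_lin, p_lin = pearsonr(gsr_feature, extraversion)      # linear correlation
r_rank, p_rank = spearmanr(gsr_feature, extraversion)   # rank correlation (captures monotone non-linearity)

print(f"Pearson  r   = {r_lin:.2f} (p = {p_lin:.3f})")
print(f"Spearman rho = {r_rank:.2f} (p = {p_rank:.3f})")
```

    On a monotone but non-linear relationship like this synthetic one, the rank statistic typically reports a stronger association than the linear one, which is the kind of gap the abstract alludes to.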

    Implicit user-centric personality recognition based on physiological responses to emotional videos

    We present a novel framework for recognizing personality traits based on users' physiological responses to affective movie clips. Extending studies that have correlated explicit/implicit affective user responses with Extraversion and Neuroticism traits, we perform single-trial recognition of the big-five traits from Electrocardiogram (ECG), Galvanic Skin Response (GSR), Electroencephalogram (EEG) and facial emotional responses compiled from 36 users using off-the-shelf sensors. Firstly, we examine relationships among personality scales and (explicit) affective user ratings acquired in the context of prior observations. Secondly, we isolate physiological correlates of personality traits. Finally, unimodal and multimodal personality recognition results are presented. Personality differences are better revealed when analyzing responses to emotionally homogeneous (e.g., high valence, high arousal) clips, and significantly above-chance recognition is achieved for all five traits.
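    As a rough picture of what single-trial binary trait recognition can look like, the sketch below builds a median-split label for one trait and cross-validates a standard classifier on per-trial physiological features. It is a hypothetical pipeline on synthetic data: the feature dimensions, the SVM classifier and the fold structure are assumptions, not taken from the paper.

```python
# Minimal sketch (assumed pipeline, not the paper's implementation): binary
# recognition of one personality trait from per-trial physiological features,
# with the trait binarized by a median split and scored by cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical feature matrix: (trials, features) pooled over ECG/GSR/EEG channels.
X = rng.normal(size=(360, 40))           # e.g., 36 users x 10 clips, 40 features (assumed sizes)
trait_scores = rng.normal(size=360)      # per-trial copy of each user's trait score (synthetic)
y = (trait_scores > np.median(trait_scores)).astype(int)   # high/low trait label via median split

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean accuracy: {acc.mean():.2f} (chance = 0.50)")
```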

    Boosting-based transfer learning for multi-view head-pose classification from surveillance videos

    This work proposes a boosting-based transfer learning approach for head-pose classification from multiple, low-resolution views. Head-pose classification performance is adversely affected when the source (training) and target (test) data arise from different distributions (due to changes in facial appearance, lighting, etc.). Under such conditions, we employ Xferboost, a LogitBoost-based transfer learning framework that integrates knowledge from a few labeled target samples with the source model to effectively minimize misclassifications on the target data. Experiments confirm that the Xferboost framework can improve classification performance by up to 6% when knowledge is transferred between the CLEAR and FBK four-view head-pose datasets.
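    The core idea described above is to bias boosting toward the few labeled target samples that the source model handles poorly. The sketch below is a schematic, simplified illustration of that instance-reweighting idea using scikit-learn; it is not the Xferboost/LogitBoost procedure itself, and the weighting scheme, feature sizes and data are invented for demonstration.

```python
# Schematic sketch of instance-reweighted boosting for transfer (not the
# Xferboost algorithm): target samples that the source model misclassifies
# receive extra weight when boosting runs on pooled source + target data.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier

rng = np.random.default_rng(2)

# Hypothetical 4-class head-pose data: many labeled source samples, few target ones.
X_src = rng.normal(size=(400, 20)); y_src = rng.integers(0, 4, size=400)
X_tgt = rng.normal(loc=0.5, size=(20, 20)); y_tgt = rng.integers(0, 4, size=20)

source_model = GradientBoostingClassifier().fit(X_src, y_src)   # stands in for the source model

# Up-weight the labeled target samples the source model gets wrong (weights are arbitrary choices).
w_src = np.full(len(X_src), 1.0)
w_tgt = np.where(source_model.predict(X_tgt) == y_tgt, 2.0, 5.0)

X_all = np.vstack([X_src, X_tgt])
y_all = np.concatenate([y_src, y_tgt])
w_all = np.concatenate([w_src, w_tgt])

adapted = AdaBoostClassifier(n_estimators=50).fit(X_all, y_all, sample_weight=w_all)
print("Target accuracy:", adapted.score(X_tgt, y_tgt))
```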

    Exploring Multitask and Transfer Learning Algorithms for Head Pose Estimation in Dynamic Multiview Scenarios

    Considerable research progress in the areas of computer vision and multimodal analysis has now made the examination of complex phenomena such as social interactions possible. An important cue for determining social interactions is the head pose of the interacting members. While most automated social-interaction analysis methods have focused on round-table meetings, where head pose estimation (HPE) is easier given the high resolution of captured faces and the analyzed targets are static (seated), recent works have examined unstructured meeting scenes such as cocktail parties. While unstructured scenes, where targets are free to move, provide additional cues such as proxemics for behavior analysis, they are also challenging to analyze owing to (i) the need to use distant, large field-of-view cameras, which can only capture low-resolution faces of targets, and (ii) the variations in targets' facial appearance as they move, owing to changing camera perspective and scale. This chapter reviews recent works addressing HPE under target motion. In particular, we examine the use of transfer learning and multitask learning for HPE. Transfer learning is particularly useful when the training and test data have different attributes (e.g., the training data contain pose annotations for static targets, but the test data involve moving targets), while multitask learning can be explicitly designed to address facial appearance variations under motion. Exhaustive experiments performed using both methodologies are presented.
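    For the multitask side, a common construction is a shared representation with task-specific output heads. The following PyTorch sketch shows one hypothetical arrangement, with one pose-classification head per camera view on top of a shared trunk; the architecture, the layer sizes and the choice of "view" as the task variable are assumptions for illustration, not the models discussed in the chapter.

```python
# Minimal multitask-learning sketch (illustrative, not the chapter's model):
# a shared feature trunk with one classification head per camera view, so
# view-specific appearance variation is absorbed by the heads while pose
# information is pooled in the shared layers. All sizes are placeholders.
import torch
import torch.nn as nn

n_views, n_poses, feat_dim = 4, 8, 128   # hypothetical: 4 cameras, 8 pose classes, 128-dim features

class MultiViewHeadPose(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())            # shared trunk
        self.heads = nn.ModuleList([nn.Linear(64, n_poses) for _ in range(n_views)])  # one head per view

    def forward(self, x, view):
        return self.heads[view](self.shared(x))   # route through the head for this camera view

model = MultiViewHeadPose()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random data for view 0.
x = torch.randn(16, feat_dim)
y = torch.randint(0, n_poses, (16,))
loss = loss_fn(model(x, view=0), y)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```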

    Exploring Transfer Learning Approaches for Head Pose Classification from Multi-view Surveillance Images

    Head pose classification from surveillance images acquired with distant, large field-of-view cameras is difficult, as faces are captured at low resolution and have a blurred appearance. Domain adaptation approaches are useful for transferring knowledge from the training (source) to the test (target) data when they have different attributes, minimizing target data labeling efforts in the process. This paper examines the use of transfer learning for efficient multi-view head pose classification with minimal target training data under three challenging situations: (i) where the range of head poses in the source and target images is different, (ii) where source images capture a stationary person while target images capture a moving person whose facial appearance varies under motion due to changing perspective and scale, and (iii) a combination of (i) and (ii). On the whole, the presented methods represent novel transfer learning solutions employed in the context of multi-view head pose classification. We demonstrate through extensive experimental validation that the proposed solutions considerably outperform the state of the art. Finally, we present the DPOSE dataset, compiled for benchmarking head pose classification performance with moving persons and to aid behavioral understanding applications.
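    To make the "minimal target training data" point concrete, the sketch below compares a source-only classifier with the same classifier after an incremental update on a handful of labeled target samples. The data, the SGD classifier and the update scheme are placeholders and do not correspond to the transfer learning solutions evaluated in the paper.

```python
# Illustrative sketch (assumed setup, not the paper's methods): measuring how
# much a few labeled target samples help when source and target distributions
# differ, by comparing a source-only model against the same model after a
# partial_fit update on the small labeled target set.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
classes = np.arange(4)                                            # 4 hypothetical head-pose classes

X_src = rng.normal(size=(500, 30)); y_src = rng.integers(0, 4, 500)            # large labeled source set
X_tgt_few = rng.normal(loc=0.8, size=(12, 30)); y_tgt_few = rng.integers(0, 4, 12)   # few target labels
X_tgt_test = rng.normal(loc=0.8, size=(200, 30)); y_tgt_test = rng.integers(0, 4, 200)

clf = SGDClassifier(random_state=0)
clf.partial_fit(X_src, y_src, classes=classes)                    # source-only model
print("source-only:", clf.score(X_tgt_test, y_tgt_test))

clf.partial_fit(X_tgt_few, y_tgt_few)                             # adapt with the few target labels
print("adapted    :", clf.score(X_tgt_test, y_tgt_test))
```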