
    SALSA: A Novel Dataset for Multimodal Group Behavior Analysis

    Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels. However, analyzing social scenes involving FCGs is also highly challenging due to the difficulty of extracting behavioral cues such as target locations, speaking activity and head/body pose in the presence of crowdedness and extreme occlusions. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural, indoor environment for over 60 minutes, under poster-presentation and cocktail-party contexts that present difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberations and interfering sound sources; (2) to alleviate these problems, we facilitate multimodal analysis by recording the social interplay using four static surveillance cameras and sociometric badges worn by each participant, comprising microphone, accelerometer, Bluetooth and infrared sensors. In addition to raw data, we also provide annotations concerning individuals' personality as well as their position, head and body orientation, and F-formation information over the entire event duration. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded multiple cues synergetically aid automatic analysis of social interactions. SALSA is available at http://tev.fbk.eu/salsa.
    Comment: 14 pages, 11 figures
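
    As a rough illustration of the kind of multimodal fusion the dataset enables, the sketch below pairs hypothetical per-frame pose annotations with the temporally closest sociometric-badge accelerometer sample. The file names, column layout and CSV format are assumptions for illustration only; they are not the official SALSA data layout or toolkit.

        # Minimal sketch (assumed file layout, not the official SALSA tools):
        # align visual annotations with badge readings by timestamp.
        import csv
        from bisect import bisect_left

        def load_timestamped(path):
            """Read rows of (timestamp_seconds, values...) from a CSV file."""
            with open(path, newline="") as f:
                return [(float(r[0]), r[1:]) for r in csv.reader(f)]

        def nearest(badge_rows, t):
            """Return the badge sample whose timestamp is closest to t."""
            times = [ts for ts, _ in badge_rows]          # assumed sorted
            i = bisect_left(times, t)
            candidates = badge_rows[max(i - 1, 0):i + 1]
            return min(candidates, key=lambda row: abs(row[0] - t))

        annotations = load_timestamped("annotations/participant_01_pose.csv")
        badge = load_timestamped("badges/participant_01_accel.csv")

        # Pair every visual annotation with the closest badge sample.
        fused = [(t, pose, nearest(badge, t)) for t, pose in annotations]
        print(f"{len(fused)} fused multimodal samples")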

    An Advanced Home ElderCare Service

    With welfare costs rising across the developed world, there is a need to turn to new technologies that can help reduce this enormous cost and provide quality eldercare services. This paper presents a middleware-level solution that integrates monitoring and emergency-detection solutions with networking solutions. The proposed system enables efficient integration between a variety of sensors and actuators deployed at home for emergency detection, and provides a framework for creating and managing rescue teams willing to assist elders in emergency situations. A prototype of the proposed system was designed and implemented. Results were obtained from both computer simulations and a real-network testbed. These results show that the proposed system can help overcome some of the current problems and reduce the enormous cost of eldercare services.
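
    A minimal sketch of the emergency-routing idea described above, assuming a simple event model: a fall event reported by a home sensor is routed to the closest registered volunteer. The event kinds, distance field and notification step are hypothetical; this is not the paper's middleware.

        # Hypothetical emergency dispatcher: route a detected emergency to the
        # nearest registered volunteer ("rescue team" member).
        from dataclasses import dataclass

        @dataclass
        class Volunteer:
            name: str
            distance_km: float   # assumed precomputed distance to the elder's home

        @dataclass
        class SensorEvent:
            sensor_id: str
            kind: str            # e.g. "fall", "smoke", "heartbeat_low"

        EMERGENCIES = {"fall", "smoke", "heartbeat_low"}

        def handle_event(event: SensorEvent, volunteers: list[Volunteer]) -> str:
            """Route an emergency event to the closest available volunteer."""
            if event.kind not in EMERGENCIES:
                return "no action: event is not an emergency"
            closest = min(volunteers, key=lambda v: v.distance_km)
            # A real deployment would send a push notification or SMS here.
            return f"alert sent to {closest.name}: {event.kind} from {event.sensor_id}"

        print(handle_event(SensorEvent("bedroom_accel", "fall"),
                           [Volunteer("Alice", 1.2), Volunteer("Bob", 0.4)]))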

    Privacy Mining from IoT-based Smart Homes

    Recently, a wide range of smart devices have been deployed in a variety of environments to improve the quality of human life. One important IoT-based application is the smart home for healthcare, especially for elders. IoT-based smart homes enable elders' health to be properly monitored and taken care of. However, elders' privacy might be disclosed by smart homes due to incompletely protected network communication or other causes. To demonstrate how serious this issue is, we introduce in this paper a Privacy Mining Approach (PMA) that mines private information from smart homes by conducting a series of deductions and analyses on the sensor datasets they generate. The experimental results demonstrate that PMA is able to deduce a global sensor topology for a smart home and disclose elders' private information, such as their house layouts.
    Comment: 11 pages, 7 figures. Accepted to BWCCA 2018 on 13 August 2018
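
    To make the idea of deducing a sensor topology concrete, the sketch below infers which motion sensors are likely adjacent by counting how often they fire consecutively within a short time window. This is only an illustrative approximation of the kind of deduction PMA performs, not the paper's algorithm; the event format, toy data and the 10-second window are assumptions.

        # Infer likely sensor adjacency from consecutive firings in a
        # (timestamp_seconds, sensor_id) event log. Toy data is assumed below.
        from collections import Counter

        events = [(0, "M01"), (4, "M02"), (9, "M03"), (40, "M02"), (43, "M01")]

        adjacency = Counter()
        for (t_prev, s_prev), (t_next, s_next) in zip(events, events[1:]):
            if s_prev != s_next and t_next - t_prev <= 10:
                adjacency[frozenset((s_prev, s_next))] += 1

        # Pairs with the highest consecutive-firing counts are likely
        # physically adjacent, which sketches out the house layout.
        for pair, count in adjacency.most_common():
            print(sorted(pair), count)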

    PhD Forum: Investigating the performance of a multi-modal approach to unusual event detection

    In this paper, we investigate the parameters underpinning our previously presented system for detecting unusual events in surveillance applications [1]. The system identifies anomalous events using an unsupervised, data-driven approach. During a training period, typical activities within a surveilled environment are modeled using multi-modal sensor readings. Significant deviations from the established model of regular activity can then be flagged as anomalous at run-time. Using this approach, the system can be deployed and automatically adapt for use in any environment without manual adjustment. Experiments were carried out on two days of audio-visual data and evaluated using a manually annotated ground truth. We investigate sensor fusion and quantitatively evaluate the performance gains over single-modality models. We also investigate different formulations of our cluster-based model of usual scenes, as well as the impact of dynamic thresholding on identifying anomalous events. Experimental results are promising, even when modeling is performed using very simple audio and visual features.
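
    The sketch below illustrates one plausible reading of the cluster-based model and dynamic thresholding described above: fit k-means to feature vectors from the training period, then flag run-time samples whose distance to the nearest cluster centre exceeds a threshold derived from the training distances. The random stand-in features, the choice of k and the mean-plus-three-standard-deviations threshold are assumptions, not the authors' exact formulation.

        # Cluster-based "usual scene" model with a data-driven anomaly threshold.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        train_features = rng.normal(size=(500, 8))        # stand-in for training features
        test_features = rng.normal(size=(50, 8)) + 2.0    # stand-in for run-time features

        model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(train_features)

        def nearest_centre_dist(x, centres):
            return np.linalg.norm(centres - x, axis=1).min()

        train_dists = np.array([nearest_centre_dist(x, model.cluster_centers_)
                                for x in train_features])
        threshold = train_dists.mean() + 3 * train_dists.std()   # dynamic threshold

        anomalous = [nearest_centre_dist(x, model.cluster_centers_) > threshold
                     for x in test_features]
        print(f"{sum(anomalous)} of {len(anomalous)} frames flagged as unusual")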