    SALSA: A Novel Dataset for Multimodal Group Behavior Analysis

    Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group level (mining social networks) and the individual level (recognizing behavioral and personality traits). However, analyzing social scenes involving FCGs is also highly challenging, owing to the difficulty of extracting behavioral cues such as target locations, speaking activity and head/body pose under crowding and extreme occlusions. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural, indoor environment for over 60 minutes, under poster-presentation and cocktail-party contexts that present difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberations and interfering sound sources; (2) to alleviate these problems, we facilitate multimodal analysis by recording the social interplay with four static surveillance cameras and sociometric badges worn by each participant, comprising a microphone, accelerometer, Bluetooth and infrared sensors. In addition to the raw data, we provide annotations of individuals' personality as well as their position, head and body orientation, and F-formation information over the entire event duration. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded cues synergetically aid automatic analysis of social interactions. SALSA is available at http://tev.fbk.eu/salsa. (14 pages, 11 figures)
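
    The F-formation annotations provided above lend themselves to simple geometric reasoning. As a hedged, illustrative sketch (not the approach evaluated by the SALSA authors), the Python snippet below groups people into candidate F-formations by letting each person cast a vote for an O-space centre a fixed stride ahead of their annotated body orientation and greedily clustering nearby votes; the function name detect_f_formations, the stride and radius values, and the toy coordinates are all assumptions for illustration.

        import numpy as np

        def detect_f_formations(positions, orientations, stride=0.75, radius=0.5):
            """Group people into candidate F-formations (illustrative heuristic).

            positions:    (N, 2) ground-plane coordinates in metres.
            orientations: (N,) body orientations in radians.
            stride:       assumed distance from a person to the shared O-space centre.
            radius:       votes closer than this are treated as the same O-space.
            """
            # Each person votes for an O-space centre directly in front of them.
            votes = positions + stride * np.stack(
                [np.cos(orientations), np.sin(orientations)], axis=1)

            groups = []  # list of (vote centre, set of member indices)
            for i, v in enumerate(votes):
                for centre, members in groups:
                    if np.linalg.norm(v - centre) < radius:
                        members.add(i)
                        break
                else:
                    groups.append((v.copy(), {i}))

            # A singleton is not an F-formation, so keep groups of two or more.
            return [members for _, members in groups if len(members) >= 2]

        # Toy example: two people facing each other, one person looking away.
        pos = np.array([[0.0, 0.0], [1.5, 0.0], [5.0, 5.0]])
        ori = np.array([0.0, np.pi, np.pi / 2])
        print(detect_f_formations(pos, ori))  # -> [{0, 1}]

    The paper itself benchmarks state-of-the-art detectors against these annotations; the heuristic above is only meant to show how the position and orientation labels combine.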

    Simplified Video Surveillance Framework for Dynamic Object Detection under Challenging Environment

    An effective video surveillance system is essential for building better video analytics. A review of the existing literature on video analytics shows that algorithms are typically applied directly on top of the video file, with little attention to the following problems: i) dynamic orientation of the subject, ii) poor illumination conditions, iii) identification and classification of subjects, and iv) fast response time. The proposed system therefore implements an analytical approach that uses the depth image of the video feed along with the original color feed to extract significant information about the motion blobs of dynamic subjects. Implemented in MATLAB, the study shows that the system addresses all of the problems above, which are associated with existing research trends in video analytics, using a very simple and non-iterative process. The applicability of the proposed system in practical settings is thereby demonstrated.
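
    The paper's implementation is in MATLAB and is not reproduced here. As a hedged illustration of the general idea of fusing the depth feed with the color feed to isolate motion blobs, the Python/OpenCV sketch below combines MOG2 background subtraction on the color stream with a depth-band mask and reports the surviving connected components as blobs; the file names, the 8-bit grayscale depth encoding and every threshold are assumptions, not values from the paper.

        import cv2
        import numpy as np

        # Hypothetical, pre-registered colour and depth recordings of the same scene.
        color_cap = cv2.VideoCapture("scene_color.avi")
        depth_cap = cv2.VideoCapture("scene_depth.avi")  # depth stored as 8-bit grayscale

        # Background model on the colour stream copes with gradual illumination change.
        bg_model = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                                      detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

        while True:
            ok_c, color = color_cap.read()
            ok_d, depth = depth_cap.read()
            if not (ok_c and ok_d):
                break

            # Foreground from colour; MOG2 marks shadows as 127, so drop them.
            fg = bg_model.apply(color)
            fg[fg < 200] = 0

            # Keep only foreground that also falls inside an assumed working depth band.
            depth_gray = cv2.cvtColor(depth, cv2.COLOR_BGR2GRAY)
            near = ((depth_gray > 30) & (depth_gray < 220)).astype(np.uint8) * 255
            mask = cv2.bitwise_and(fg, near)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

            # Motion blobs are the remaining connected components.
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                if cv2.contourArea(c) > 500:  # ignore tiny speckles
                    x, y, w, h = cv2.boundingRect(c)
                    cv2.rectangle(color, (x, y), (x + w, y + h), (0, 255, 0), 2)

            cv2.imshow("motion blobs", color)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
                break

        color_cap.release()
        depth_cap.release()
        cv2.destroyAllWindows()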

    JuxtaLearn D3.2 Performance Framework

    This deliverable, D3.2, for Work Package 3 incorporates the pedagogy from WP2 and the orchestration factors mapped in D3.1, and reviews aspects of performance in the context of participative video making. It reviews the literature on curiosity and on the engagement characteristics of interaction mechanisms for public displays, and anticipates requirements for the social network analysis of relevant public videos from WP6 task 6.3. To support JuxtaLearn performance, it proposes a reflective performance framework that encompasses the material environment and objects required, the participants, and the knowledge needed.