
    SALSA: A Novel Dataset for Multimodal Group Behavior Analysis

    Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels. However, analyzing social scenes involving FCGs is also highly challenging, as crowdedness and extreme occlusions make it difficult to extract behavioral cues such as target locations, speaking activity, and head/body pose. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural, indoor environment for over 60 minutes, under poster-presentation and cocktail-party contexts that present difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberations, and interfering sound sources; (2) to alleviate these problems, we facilitate multimodal analysis by recording the social interplay using four static surveillance cameras and sociometric badges worn by each participant, comprising microphone, accelerometer, Bluetooth, and infrared sensors. In addition to the raw data, we also provide annotations concerning individuals' personality as well as their position, head and body orientation, and F-formation information over the entire event duration. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded multiple cues synergetically aid automatic analysis of social interactions. SALSA is available at http://tev.fbk.eu/salsa.
    Comment: 14 pages, 11 figures

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, addressing the societal issue of a growing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, an advantage for practical deployment and for users' acceptance and compliance compared with other sensor technologies, such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper provides.
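    The fusion of heterogeneous sensor decisions mentioned above can be illustrated with a minimal sketch. This is not taken from the review; the sensor names, reliability weights, and threshold are assumptions chosen for illustration of decision-level fusion, where each detector reports a fall-confidence score and the scores are combined by a weighted average.

    ```python
    # Hypothetical decision-level fusion of per-sensor fall-confidence scores
    # (radar + RGB-D). Weights and threshold are illustrative assumptions.

    def fuse_fall_scores(scores, weights, threshold=0.5):
        """Weighted average of per-sensor confidences; returns (fused, is_fall)."""
        total_w = sum(weights[s] for s in scores)
        fused = sum(scores[s] * weights[s] for s in scores) / total_w
        return fused, fused >= threshold

    # Example: radar trusted slightly more than the RGB-D camera in this setup.
    weights = {"radar": 0.6, "rgbd": 0.4}
    fused, is_fall = fuse_fall_scores({"radar": 0.8, "rgbd": 0.7}, weights)
    ```

    A practical system would typically calibrate the weights per deployment, since the relative reliability of each modality depends on room layout, lighting, and occlusions.
    
    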

    An Integrated Framework for Sensing Radio Frequency Spectrum Attacks on Medical Delivery Drones

    Drone susceptibility to jamming or spoofing attacks on GPS, RF, Wi-Fi, and operator signals presents a danger to future medical delivery systems. A detection framework capable of sensing attacks on drones could provide the capability for active responses. The identification of interference attacks has applicability in medical delivery, disaster zone relief, and FAA enforcement against illegal jamming activities. A gap exists in the literature for solo or swarm-based drones to identify radio frequency spectrum attacks. Any non-delivery-specific function added to a drone, such as attack sensing, involves a weight increase and additional complexity; therefore, its value must exceed these disadvantages. Medical delivery, high-value cargo, and disaster zone applications could present a value proposition that outweighs the additional costs. The paper examines types of attacks against drones and describes a framework for designing an attack detection system with active response capabilities to improve the reliability of delivery and other medical applications.
    Comment: 7 pages, 1 figure, 5 tables
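    One common ingredient of RF-attack sensing, hinted at above, is flagging anomalous received power. The following sketch is not the paper's framework; the baseline, margin, and sample values are assumptions, shown only to illustrate a simple noise-floor heuristic: broadband jamming typically raises the mean received power well above the calibrated idle baseline.

    ```python
    # Hypothetical jamming heuristic: flag a window whose mean received power
    # (dBm) exceeds a calibrated idle baseline by a margin. Values are assumed.

    def detect_jamming(samples_dbm, baseline_dbm, margin_db=10.0):
        """Return True if mean received power exceeds baseline by margin_db."""
        mean_power = sum(samples_dbm) / len(samples_dbm)
        return mean_power - baseline_dbm > margin_db

    quiet = [-92.0, -90.5, -91.2]    # typical idle noise floor
    jammed = [-60.0, -58.5, -59.2]   # broadband interference present
    ```

    A fielded detector would need to distinguish jamming from legitimate strong transmitters, e.g. by also checking channel occupancy or demodulation failure rates, which this sketch omits.
    
    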

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-Machine Interaction

    Object Tracking in Distributed Video Networks Using Multi-Dimensional Signatures

    From being an expensive toy in the hands of governmental agencies, computers have come a long way, evolving from huge vacuum-tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has seen rapid development over the past few decades, from rudimentary motion detection systems to complex model-based object motion analysis algorithms. Our work is one such improvement over previous algorithms developed for object motion analysis in video feeds. It is based on the principle of multi-dimensional object signatures, which are constructed from individual attributes extracted through video processing. While past work has proceeded along similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. Under varying external conditions, such algorithms perform less efficiently due to their inherent assumption that attribute values remain constant. Our approach assumes a variable environment in which the recorded attribute values of an object are prone to variability. Variations in the accuracy of object attribute values have been addressed by incorporating per-attribute weights that vary according to local conditions at a sensor location; this ensures that attribute values with higher accuracy are accorded more credibility in the object matching process. Variations in attribute values (such as the surface color of the object) were also addressed by applying error corrections, such as shadow elimination, to the detected object profile. Experiments were conducted to verify our hypothesis, and the results established the validity of our approach: higher matching accuracy was obtained with the multi-dimensional approach than with a single-attribute comparison.
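    The weighted multi-attribute matching idea described above can be sketched in a few lines. The attribute names, weights, and values below are hypothetical, not taken from the thesis: each camera node down-weights attributes it considers locally unreliable (e.g., color under shadow), so a drift in one noisy cue does not dominate the match score.

    ```python
    # Hypothetical weighted signature matching. Attributes are normalized to
    # [0, 1]; per-attribute weights reflect local reliability at a sensor node.

    def signature_similarity(sig_a, sig_b, weights):
        """Weighted similarity in [0, 1] over attributes shared by both signatures."""
        shared = sig_a.keys() & sig_b.keys() & weights.keys()
        total_w = sum(weights[k] for k in shared)
        score = sum(weights[k] * (1.0 - abs(sig_a[k] - sig_b[k])) for k in shared)
        return score / total_w

    cam1 = {"color": 0.30, "height": 0.70, "speed": 0.40}
    cam2 = {"color": 0.55, "height": 0.72, "speed": 0.38}   # color drifted (shadow)
    weights = {"color": 0.2, "height": 0.5, "speed": 0.3}   # low local trust in color
    ```

    With these weights the large color discrepancy is discounted, and the two observations still score as a strong match, which is the behavior the multi-dimensional approach aims for under varying lighting.
    
    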

    An Incremental Navigation Localization Methodology for Application to Semi-Autonomous Mobile Robotic Platforms to Assist Individuals Having Severe Motor Disabilities.

    In the present work, the author explores the issues surrounding the design and development of an intelligent wheelchair platform incorporating the semi-autonomous system paradigm, to meet the needs of individuals with severe motor disabilities. The author presents a discussion of the navigation problems that must be solved before any system of this type can be instantiated, and enumerates the general design issues that must be addressed by designers of such systems, including reviews of various methodologies that have been proposed as solutions. Next, the author introduces a new navigation method, called Incremental Signature Recognition (ISR), for use by semi-autonomous systems in structured environments. This method is based on the recognition, recording, and tracking of environmental discontinuities: sensor-reported anomalies in measured environmental parameters. The author then proposes a robust, redundant, dynamic, self-diagnosing sensing methodology for detecting and compensating for hidden failures of single sensors and for sensor idiosyncrasies; this technique is optimized for the detection of spatial discontinuity anomalies. Finally, the author gives details of an effort to realize a prototype ISR-based system, along with insights into the various implementation choices made.
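    The core notion of an environmental discontinuity can be illustrated with a minimal sketch. This is not the dissertation's implementation; the threshold and range values are assumptions, shown only to convey the idea of flagging sharp jumps in a stream of sensor readings, such as a sonar seeing a doorway edge as the chair moves along a corridor wall.

    ```python
    # Hypothetical discontinuity detector: record indices where consecutive
    # range readings jump by more than a threshold. Data values are assumed.

    def find_discontinuities(readings, jump_threshold):
        """Indices i where |readings[i] - readings[i-1]| exceeds jump_threshold."""
        return [i for i in range(1, len(readings))
                if abs(readings[i] - readings[i - 1]) > jump_threshold]

    # Wall range in meters; the jump near index 3 suggests a doorway opening.
    wall_range = [1.0, 1.02, 0.99, 2.5, 2.48, 1.01]
    edges = find_discontinuities(wall_range, jump_threshold=0.5)
    ```

    A sequence of such discontinuities along a traversal forms a signature of the corridor that can be matched incrementally against a stored map, which is the intuition behind the ISR approach.
    
    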