
    Contextual and Human Factors in Information Fusion

    Get PDF
    Proceedings of: NATO Advanced Research Workshop on Human Systems Integration to Enhance Maritime Domain Awareness for Port/Harbour Security Systems, Opatija (Croatia), December 8-12, 2008. Context and human factors may be essential to improving the measurement process of each sensor, and the particular context of each sensor could be used to derive a global definition of context in multisensor environments. Reality may be captured by the human sensorial domain based only on machine stimuli, generating feedback that the machine can use at its different processing levels to adapt its algorithms and methods accordingly. Reciprocally, human perception of the environment could also be modelled by context in the machine. In the proposed model, both machine and human take sensorial information from the environment and process it cooperatively until a decision or semantic synthesis is produced. In this work, we present a model for context representation and reasoning to be exploited by fusion systems. First, the structure and representation of contextual information must be determined before it can be exploited by a specific application. Under complex circumstances, context information and human interaction can help to improve a tracking system's performance (for instance, video-based tracking systems may fail when dealing with object interactions, occlusions, crossings, etc.).
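    The feedback loop described above can be made concrete with a toy sketch, assuming a simple parameterized tracker (all flag names and values below are illustrative, not the paper's model): contextual reasoning, or a human operator, flags a difficult region, and the tracker adapts accordingly.

```python
# Minimal sketch: context flags adjust tracker parameters (names and
# values are illustrative assumptions, not the paper's model).
def tracker_params(context):
    """context: dict of symbolic flags produced by contextual reasoning
    or supplied by a human operator."""
    params = {"gate_radius": 1.0, "measurement_trust": 0.9}
    if context.get("occlusion_zone"):
        params["gate_radius"] = 2.5        # tolerate missed detections
        params["measurement_trust"] = 0.4  # lean on the motion model instead
    if context.get("operator_override"):
        params.update(context["operator_override"])  # human feedback wins
    return params

print(tracker_params({"occlusion_zone": True}))
# {'gate_radius': 2.5, 'measurement_trust': 0.4}
```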

    Real-time multisensor people tracking for human-robot spatial interaction

    Get PDF
    All currently used mobile robot platforms are able to navigate safely through their environment, avoiding static and dynamic obstacles. However, in human-populated environments, mere obstacle avoidance is not sufficient to make humans feel comfortable and safe around robots. To this end, a large community is currently producing human-aware navigation approaches to create more socially acceptable robot behaviour. A major building block for all Human-Robot Spatial Interaction is the ability to detect and track humans in the vicinity of the robot. We present a fully integrated people perception framework, designed to run in real time on a mobile robot. This framework employs detectors based on laser and RGB-D data and a tracking approach able to fuse multiple detectors using different versions of data association and Kalman filtering. The resulting trajectories are transformed into Qualitative Spatial Relations based on a Qualitative Trajectory Calculus, to learn and classify different encounters using a Hidden Markov Model based representation. We present this perception pipeline, which is fully implemented in the Robot Operating System (ROS), in a small proof-of-concept experiment. All components are readily available for download, free to use under the MIT license, and aimed at researchers in all fields, especially those focussing on social interaction learning, by providing different kinds of output, i.e. qualitative relations and trajectories.
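    As a rough illustration of the detector-fusion step (a minimal sketch, not the authors' ROS implementation; the noise values and update period are assumptions), a constant-velocity Kalman filter can sequentially fuse position detections from a laser-based and an RGB-D-based detector, each with its own measurement noise:

```python
import numpy as np

DT = 0.1  # assumed update period (s)

# State [x, y, vx, vy] with a constant-velocity motion model.
F = np.array([[1., 0., DT, 0.],
              [0., 1., 0., DT],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
Q = 0.05 * np.eye(4)               # process noise (assumed)
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])   # both detectors report position only

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, r):
    """Fuse one detection z = [x, y] with measurement variance r."""
    R = r * np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

# One cycle: predict, then fuse a laser detection and an RGB-D detection
# of the same (already associated) person.
x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([1.0, 2.0]), r=0.02)  # laser: low noise
x, P = update(x, P, np.array([1.1, 2.1]), r=0.10)  # RGB-D: noisier
print(x[:2])  # fused position estimate
```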

    Multisensor knowledge systems: interpreting 3-D structure

    Get PDF
    Journal Article. We describe an approach which facilitates and makes explicit the organization of the knowledge necessary to map multisensor system requirements onto an appropriate assembly of algorithms, processors, sensors, and actuators. We have previously introduced the Multisensor Kernel System and Logical Sensor Specifications as a means for high-level specification of multisensor systems. The main goals of such a characterization are: to develop a coherent treatment of multisensor information, to allow system reconfiguration for both fault tolerance and dynamic response to environmental conditions, and to permit the explicit description of control. In this paper we show how Logical Sensors can be incorporated into an object-based approach for the interpretation of 3-D structure. Considering the inherent difficulties in interpreting general configurations of lines in space, and the ubiquity of special line configurations in man-made environments and objects, we advocate the use of computational units tuned to the occurrence of these special configurations; their organized use circumvents those difficulties. After a brief examination of the problem of interpreting general configurations of lines in space, a number of computational units are proposed which are naturally derived from angular relations. The process of propagation (which allows interpretation to spread over the image) is also advocated. Such computational units and processes, which are simple and efficient, can be conveniently organized in a rule-based framework where the occurrence of the various special configurations can be tested. The Multisensor Knowledge System provides such a framework.
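    A toy sketch of the Logical Sensor idea (class and method names are illustrative, not the paper's specification language): a named output type is served by an ordered list of alternative programs, so the system can reconfigure to a substitute source when one fails.

```python
# Toy Logical Sensor: a declared output type backed by fallback sources.
class LogicalSensor:
    def __init__(self, name, output_type, alternatives):
        self.name = name                  # e.g. "range_map"
        self.output_type = output_type    # declared characteristic output
        self.alternatives = alternatives  # ordered fallback programs

    def read(self):
        for produce in self.alternatives:
            try:
                return produce()          # first working source wins
            except RuntimeError:
                continue                  # fault: reconfigure to next source
        raise RuntimeError(f"{self.name}: all sources failed")

def stereo_depth():
    raise RuntimeError("stereo pair lost calibration")  # simulated fault

def laser_depth():
    return [[1.2, 1.3], [1.1, 1.4]]       # stand-in range data

range_map = LogicalSensor("range_map", "2D range image",
                          [stereo_depth, laser_depth])
print(range_map.read())                   # falls back to the laser source
```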

    Context-based scene recognition from visual data in smart homes: an Information Fusion approach

    Get PDF
    Ambient Intelligence (AmI) aims at the development of computational systems that process data acquired by sensors embedded in the environment to support users in everyday tasks. Visual sensors, however, have been scarcely used in this kind of application, even though they provide very valuable information about scene objects: position, speed, color, texture, etc. In this paper, we propose a cognitive framework for the implementation of AmI applications based on visual sensor networks. The framework, inspired by the Information Fusion paradigm, combines a priori context knowledge represented with ontologies with real-time single-camera data to support logic-based, high-level local interpretation of the current situation. In addition, the system is able to automatically generate feedback recommendations to adjust data acquisition procedures. Information about recognized situations is eventually collected by a central node to obtain an overall description of the scene and consequently trigger AmI services. We show the extensible and adaptable nature of the approach with a prototype system in a smart home scenario. This research activity is supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
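    A minimal sketch of the local-interpretation step (the zones, thresholds, and situation labels are illustrative stand-ins for the paper's ontologies and rules): a priori context about the rooms is combined with per-frame tracking data from a single camera to derive a symbolic situation.

```python
# Context: room geometry and semantics (stand-ins for ontology individuals).
CONTEXT = {
    "kitchen": {"zone": ((0, 0), (4, 3)), "hazard": "stove"},
    "hallway": {"zone": ((4, 0), (6, 3)), "hazard": None},
}

def zone_of(pos):
    for name, info in CONTEXT.items():
        (x0, y0), (x1, y1) = info["zone"]
        if x0 <= pos[0] < x1 and y0 <= pos[1] < y1:
            return name
    return None

def interpret(track):
    """track: dict with 'pos' (x, y) and 'speed' (m/s) from one camera."""
    zone = zone_of(track["pos"])
    if zone is None:
        return "unknown situation"
    if track["speed"] < 0.05 and CONTEXT[zone]["hazard"]:
        return f"prolonged inactivity near {CONTEXT[zone]['hazard']}"
    return f"normal activity in {zone}"

print(interpret({"pos": (1.5, 1.0), "speed": 0.01}))
# -> prolonged inactivity near stove
```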

    Extended Object Tracking: Introduction, Overview and Applications

    Full text link
    This article provides an elaborate overview of current research in extended object tracking. We provide a clear definition of the extended object tracking problem and discuss its delimitation from other types of object tracking. Next, different aspects of extended object modelling are extensively discussed. Subsequently, we give a tutorial introduction to two basic and widely used extended object tracking approaches: the random matrix approach and the Kalman filter-based approach for star-convex shapes. The next part treats the tracking of multiple extended objects and elaborates how the large number of feasible association hypotheses can be tackled using both Random Finite Set (RFS) and non-RFS multi-object trackers. The article concludes with a summary of current applications, highlighting four examples involving camera, X-band radar, light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors. Comment: 30 pages, 19 figures.
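    To make the random matrix approach concrete, here is a deliberately simplified single-object measurement update in the spirit of Koch (2008) and Feldmann et al. (2011): the detections' centroid updates the kinematic state, while their scatter updates the elliptic extent estimate. The position-only state and the extent-blending rule are simplifications for illustration, not the tutorial's full equations.

```python
import numpy as np

def random_matrix_update(x, P, X, alpha, Z, R):
    """Simplified random-matrix update. x, P: kinematic mean/covariance
    (position-only state here); X: extent matrix; alpha: extent degrees
    of freedom; Z: (n, 2) array of detections; R: sensor noise."""
    n = Z.shape[0]
    z_bar = Z.mean(axis=0)            # centroid of the detections
    spread = Z - z_bar
    Z_scatter = spread.T @ spread     # measurement scatter matrix

    # Kinematic update: the centroid acts as a single measurement whose
    # effective noise combines object extent and sensor noise.
    S = P + (X + R) / n
    K = P @ np.linalg.inv(S)
    x_new = x + K @ (z_bar - x)
    P_new = P - K @ S @ K.T

    # Extent update: blend the predicted extent with the new scatter.
    alpha_new = alpha + n
    X_new = (alpha * X + Z_scatter) / alpha_new
    return x_new, P_new, X_new, alpha_new

rng = np.random.default_rng(0)
Z = rng.normal([10.0, 5.0], 0.8, size=(12, 2))  # detections off one object
x, P, X, a = random_matrix_update(np.array([9.5, 5.5]), np.eye(2),
                                  0.5 * np.eye(2), 10.0, Z, 0.01 * np.eye(2))
print(x, X)
```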

    Biometrics Sensor Fusion

    Get PDF

    Occlusion reasoning for multiple object visual tracking

    Full text link
    Thesis (Ph.D.)--Boston University. Occlusion reasoning for visual object tracking in uncontrolled environments is a challenging problem. It becomes significantly more difficult when dense groups of indistinguishable objects are present in the scene, causing frequent inter-object interactions and occlusions. We present several practical solutions that tackle inter-object occlusions for video surveillance applications. In particular, this thesis proposes three methods. First, we propose "reconstruction-tracking," an online multi-camera spatial-temporal data association method for tracking large groups of objects imaged at low resolution. As a variant of the well-known Multiple Hypothesis Tracker, our approach localizes the positions of objects in 3-D space from possibly occluded observations in multiple camera views and performs temporal data association in 3-D. Second, we develop "track linking," a class of offline batch-processing algorithms for long-term occlusions, where the decision has to be made based on observations from the entire tracking sequence. We construct a graph representation to characterize occlusion events and propose an efficient graph-based/combinatorial algorithm to resolve occlusions. Third, we propose a novel Bayesian framework in which detection and data association are combined into a single module and solved jointly. Almost all traditional tracking systems address the detection and data association tasks separately, in sequential order. Such a design implies that the output of the detector has to be reliable for the data association to work. Our framework takes advantage of the often complementary nature of the two subproblems, which not only avoids the error propagation from which traditional "detection-tracking" approaches suffer but also eschews common heuristics such as "non-maximum suppression" of hypotheses by modeling the likelihood of the entire image. The thesis describes a substantial number of experiments involving challenging and notably distinct simulated and real data, including infrared and visible-light data sets that we recorded ourselves or took from publicly available collections. In these videos, the number of objects ranges from a dozen to a hundred per frame, in both monocular and multiple views. The experiments demonstrate that our approaches achieve results comparable to those of state-of-the-art approaches.
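    The track-linking idea can be sketched in a few lines (the cost below, predicted-position distance plus a time-gap penalty, is an illustrative stand-in for the thesis's graph formulation): fragments that end before an occlusion and fragments that start after it form the two sides of a bipartite graph, linked by minimum-cost assignment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_cost(end_frag, start_frag, gap_penalty=0.5):
    gap = start_frag["t0"] - end_frag["t1"]
    if gap <= 0:
        return 1e6  # fragments overlap in time: cannot be the same object
    pred = end_frag["pos"] + gap * end_frag["vel"]  # coast through occlusion
    return np.linalg.norm(pred - start_frag["pos"]) + gap_penalty * gap

# Fragments that end before an occlusion...
ends = [{"t1": 10, "pos": np.array([0., 0.]), "vel": np.array([1., 0.])},
        {"t1": 12, "pos": np.array([5., 5.]), "vel": np.array([0., 1.])}]
# ...and fragments that start after it.
starts = [{"t0": 15, "pos": np.array([5.2, 8.1])},
          {"t0": 14, "pos": np.array([3.9, 0.1])}]

C = np.array([[link_cost(e, s) for s in starts] for e in ends])
for r, c in zip(*linear_sum_assignment(C)):
    if C[r, c] < 1e6:
        print(f"end fragment {r} -> start fragment {c} (cost {C[r, c]:.2f})")
```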

    Structure Inference for Bayesian Multisensory Scene Understanding

    Get PDF
    We investigate a solution to the problem of multi-sensor scene understanding by formulating it in the framework of Bayesian model selection and structure inference. Humans robustly associate multimodal data as appropriate, but previous modelling work has focused largely on optimal fusion, leaving segregation unaccounted for and unexploited by machine perception systems. We illustrate a unifying Bayesian solution to multi-sensor perception and tracking which accounts for both integration and segregation by explicit probabilistic reasoning about data association in a temporal context. Such explicit inference of multimodal data association is also of intrinsic interest for higher-level understanding of multisensory data. We illustrate this using a probabilistic implementation of data association in a multi-party audio-visual scenario, where unsupervised learning and structure inference are used to automatically segment, associate, and track individual subjects in audio-visual sequences. Indeed, the structure-inference-based framework introduced in this work provides the theoretical foundation needed to satisfactorily explain many confounding results in human psychophysics experiments involving multimodal cue integration and association.
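    The core two-hypothesis case of such structure inference can be written down directly (a sketch of the standard causal-inference formulation for one audio and one visual cue with Gaussian noise, not the paper's full temporal framework; all variances below are assumptions): compare the marginal likelihood of a single shared cause against that of two independent causes.

```python
import numpy as np

def gauss(z, mu, var):
    return np.exp(-(z - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def p_common(z_a, z_v, var_a, var_v, var_p, prior_common=0.5):
    """Posterior probability that the audio cue z_a and visual cue z_v
    share one cause (integration) rather than two (segregation), for 1-D
    cues with Gaussian noise and a zero-mean Gaussian prior on location."""
    # Common cause: integrate out the shared latent location analytically.
    var_sum = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-((z_a - z_v) ** 2 * var_p
                       + z_a ** 2 * var_v + z_v ** 2 * var_a)
                     / (2 * var_sum)) / (2 * np.pi * np.sqrt(var_sum))
    # Independent causes: each cue has its own latent location.
    like_c2 = gauss(z_a, 0.0, var_a + var_p) * gauss(z_v, 0.0, var_v + var_p)
    post = like_c1 * prior_common
    return post / (post + like_c2 * (1 - prior_common))

# Nearby cues favour integration; discrepant cues favour segregation.
print(p_common(1.0, 1.2, var_a=1.0, var_v=0.25, var_p=16.0))
print(p_common(1.0, 9.0, var_a=1.0, var_v=0.25, var_p=16.0))
```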

    GRASP News Volume 9, Number 1

    Get PDF
    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory.
