
    Collaborative Solutions to Visual Sensor Networks

    Visual sensor networks (VSNs) merge computer vision, image processing, and wireless sensor network disciplines to solve problems in multi-camera applications over large surveillance areas. Although potentially powerful, VSNs present unique challenges that can hinder practical deployment, stemming from camera-specific features: much higher data rates, directional sensing characteristics, and the presence of visual occlusions. In this dissertation, we first present a collaborative approach for target localization in VSNs. Traditionally, the problem is solved by localizing targets at the intersections of the back-projected 2D cones of each target. However, visual occlusions among targets generate many false alarms. Instead of resolving the uncertainty about target existence at the intersections, we identify and study the non-occupied areas in the 2D cones and generate a so-called certainty map of target non-existence. We also propose distributed integration of the local certainty maps along a dynamic itinerary, over which the entire map is progressively clarified. Since localization accuracy suffers when a VSN contains faulty nodes, we then design a fault-tolerant localization algorithm that not only accurately localizes targets but also detects faults in camera orientations, tolerates these errors, and corrects them before they cascade. Based on the locations of detected targets in the fault-tolerated final certainty map, we construct a generative image model that estimates camera orientations, detects inaccuracies, and corrects them. To ensure the visual coverage needed to accurately localize targets or tolerate faulty nodes, the coverage must be estimated before the sensors are deployed. We therefore derive a closed-form solution for coverage estimation based on a certainty-based detection model that accounts for the directional sensing of cameras and the existence of visual occlusions. The effectiveness of the proposed collaborative and fault-tolerant target localization algorithms, in terms of localization accuracy as well as fault detection and correction performance, is validated through both simulations and real experiments. In addition, the simulation results agree closely with the theoretical closed-form solution for visual coverage estimation, especially when the boundary effect is taken into account.
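
    As a concrete illustration of the certainty-map idea, the following Python sketch builds one camera's local map of grid cells it can certify as empty (inside its field of view but outside every back-projected target cone) and fuses local maps with a logical OR, mirroring the progressive clarification along a dynamic itinerary. The grid discretization, cone half-width, and all function names are illustrative assumptions, not the dissertation's implementation.

        import numpy as np

        def local_certainty_map(cam_pos, cam_dir, fov, target_bearings, grid,
                                cone_half_width=0.05):
            """Cells this camera can certify as non-occupied: inside its
            field of view, outside every back-projected target cone."""
            vecs = grid - cam_pos                        # camera -> cell vectors
            angles = np.arctan2(vecs[:, 1], vecs[:, 0])
            wrap = lambda a: np.angle(np.exp(1j * a))    # wrap to [-pi, pi]
            in_fov = np.abs(wrap(angles - cam_dir)) < fov / 2
            occupied = np.zeros(len(grid), dtype=bool)
            for b in target_bearings:                    # cones toward detections
                occupied |= np.abs(wrap(angles - b)) < cone_half_width
            return in_fov & ~occupied                    # True = certainly empty

        def integrate(local_maps):
            """Progressive fusion: once any camera certifies a cell empty,
            it stays empty in the integrated certainty map."""
            fused = np.zeros_like(local_maps[0])
            for m in local_maps:
                fused |= m
            return fused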

    Distributed Object Tracking Using a Cluster-Based Kalman Filter in Wireless Camera Networks

    Local data aggregation is an effective means to save sensor node energy and prolong the lifespan of wireless sensor networks. However, when a sensor network is used to track moving objects, local data aggregation presents a new set of challenges, such as the need to estimate, usually in real time, the constantly changing state of the target from information acquired by different nodes at different time instants. To address these issues, we propose a distributed object tracking system that employs a cluster-based Kalman filter in a network of wireless cameras. When a target is detected, cameras that can observe it interact with one another to form a cluster and elect a cluster head. Local measurements of the target acquired by members of the cluster are sent to the cluster head, which estimates the target position via Kalman filtering and periodically transmits this information to a base station. The underlying clustering protocol allows the current state and uncertainty of the target position to be handed off easily among clusters as the object is tracked, so Kalman filter-based object tracking can be carried out in a distributed manner. An extended Kalman filter is necessary because the measurements acquired by the cameras are related to the actual position of the target by nonlinear transformations. In addition, to account for the time uncertainty in the measurements acquired by the different cameras, it is necessary to introduce nonlinearity into the system dynamics. Our object tracking protocol requires the transmission of significantly fewer messages than a centralized tracker that naively transmits all local measurements to the base station, and it is more accurate than a decentralized tracker that employs linear interpolation for local data aggregation. The protocol also performs estimation in real time because our implementation exploits the sparsity of the matrices involved in the problem. Experimental results show that our distributed object tracking protocol achieves tracking accuracy comparable to the centralized method while requiring significantly fewer message transmissions in the network.
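
    The cluster-head estimation step described above can be sketched as a small extended Kalman filter update in Python. Here the target state is planar position and velocity, and each cluster member contributes a bearing-style measurement that relates nonlinearly to the state; the state layout, noise value, and bearing measurement model are simplifying assumptions for illustration, not the paper's exact camera model.

        import numpy as np

        def cluster_head_ekf_update(x, P, cam_positions, bearings, r_var=1e-3):
            """Fuse one bearing measurement per cluster member into the
            target estimate. State x = [px, py, vx, vy]; a camera at
            (cx, cy) observes arctan2(py - cy, px - cx), a nonlinear map,
            hence the extended (linearized) filter."""
            for (cx, cy), z in zip(cam_positions, bearings):
                dx, dy = x[0] - cx, x[1] - cy
                r2 = dx * dx + dy * dy
                h = np.arctan2(dy, dx)                         # predicted bearing
                H = np.array([[-dy / r2, dx / r2, 0.0, 0.0]])  # Jacobian dh/dx
                innov = np.angle(np.exp(1j * (z - h)))         # wrapped residual
                S = H @ P @ H.T + r_var                        # innovation cov
                K = P @ H.T / S                                # Kalman gain
                x = x + (K * innov).ravel()
                P = (np.eye(4) - K @ H) @ P
            return x, P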

    Closed-loop Bayesian Semantic Data Fusion for Collaborative Human-Autonomy Target Search

    In search applications, autonomous unmanned vehicles must be able to efficiently reacquire and localize mobile targets that can remain out of view for long periods of time in large spaces. As such, all available information sources must be actively leveraged, including imprecise but readily available semantic observations provided by humans. To achieve this, this work develops and validates a novel collaborative human-machine sensing solution for dynamic target search. Our approach uses continuous partially observable Markov decision process (CPOMDP) planning to generate vehicle trajectories that optimally exploit imperfect detection data from onboard sensors, as well as semantic natural language observations that can be specifically requested from human sensors. The key innovation is a scalable hierarchical Gaussian mixture model formulation for efficiently solving CPOMDPs with semantic observations in continuous dynamic state spaces. The approach is demonstrated and validated with a real human-robot team engaged in dynamic indoor target search and capture scenarios on a custom testbed. Comment: Final version accepted and submitted to the 2018 FUSION Conference (Cambridge, UK, July 2018).
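
    A minimal sketch of the mixture-based Bayesian fusion step: the target belief is a Gaussian mixture over position, and one semantic observation (e.g. "the target is near the kitchen door") is approximated here by a single Gaussian likelihood over the named region, so each component updates in closed form and the weights are rescaled by the component-wise evidence. The Gaussian approximation of the semantic likelihood and all names are illustrative assumptions; the paper's hierarchical formulation handles richer (softmax-style) semantic models.

        import numpy as np

        def gm_semantic_update(weights, means, covs, obs_mean, obs_cov):
            """Condition a Gaussian-mixture position belief on one semantic
            observation modeled as a Gaussian over the referenced region."""
            new_w, new_m, new_c = [], [], []
            for w, m, C in zip(weights, means, covs):
                S = C + obs_cov                       # evidence covariance
                K = C @ np.linalg.inv(S)              # product-of-Gaussians gain
                new_m.append(m + K @ (obs_mean - m))
                new_c.append((np.eye(len(m)) - K) @ C)
                d = obs_mean - m                      # component evidence
                lik = np.exp(-0.5 * d @ np.linalg.solve(S, d))
                lik /= np.sqrt(np.linalg.det(2 * np.pi * S))
                new_w.append(w * lik)
            new_w = np.array(new_w)
            return new_w / new_w.sum(), new_m, new_c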

    Supporting Device Discovery and Spontaneous Interaction with Spatial References

    The RELATE interaction model is designed to support spontaneous interaction of mobile users with devices and services in their environment. The model is based on spatial references that capture the spatial relationship of a user’s device with other co-located devices. Spatial references are obtained by relative position sensing and are integrated into the mobile user interface to spatially visualize the arrangement of discovered devices and to provide direct access for interaction across devices. In this paper we discuss two prototype systems demonstrating the utility of the model in collaborative and mobile settings, and present a study on the usability of spatial list and map representations for device selection.
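
    To make the spatial-reference idea concrete, here is a small Python sketch of how a spatial list for device selection might be ordered: each discovered device is presented with its distance and bearing relative to the user's device, nearest first. The coordinate convention, device names, and function names are illustrative assumptions, not the RELATE prototypes' API.

        import math

        def spatial_list(own_pos, own_heading_deg, peers):
            """Order discovered devices for a spatial list UI. 'peers' maps
            device id -> (x, y) from relative position sensing; returns
            (distance, relative bearing in degrees, id), nearest first."""
            rows = []
            for dev, (x, y) in peers.items():
                dx, dy = x - own_pos[0], y - own_pos[1]
                dist = math.hypot(dx, dy)
                bearing = (math.degrees(math.atan2(dy, dx)) - own_heading_deg) % 360
                rows.append((round(dist, 2), round(bearing, 1), dev))
            return sorted(rows)

        # e.g. devices around a user facing 90 degrees; nearest listed first
        print(spatial_list((0, 0), 90, {"projector": (1, 2), "printer": (-3, 0)}))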

    An Overview about Emerging Technologies of Autonomous Driving

    Since DARPA launched the Grand Challenge in 2004 and the Urban Challenge in 2007, autonomous driving has been among the most active fields of AI application. This paper gives an overview of the technical aspects of autonomous driving and its open problems. We investigate the major components of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety. In particular, we elaborate on these issues within the framework of the data closed loop, a popular platform for addressing long-tailed autonomous driving problems.