20 research outputs found

    Simultaneous Distributed Sensor Self-Localization and Target Tracking Using Belief Propagation and Likelihood Consensus

    We introduce the framework of cooperative simultaneous localization and tracking (CoSLAT), which provides a consistent combination of cooperative self-localization (CSL) and distributed target tracking (DTT) in sensor networks without a fusion center. CoSLAT extends simultaneous localization and tracking (SLAT) in that it also uses intersensor measurements. Starting from a factor graph formulation of the CoSLAT problem, we develop a particle-based, distributed message passing algorithm for CoSLAT that combines nonparametric belief propagation with the likelihood consensus scheme. The proposed CoSLAT algorithm improves on state-of-the-art CSL and DTT algorithms by exchanging probabilistic information between CSL and DTT. Simulation results demonstrate substantial improvements in both self-localization and tracking performance.
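
    As a rough illustration of the two ingredients named above, the sketch below pairs a nonparametric belief-propagation reweighting step (a node's position particles are reweighted against a neighbor's particle cloud given a noisy intersensor range measurement) with a generic consensus-averaging routine standing in for the likelihood consensus scheme. The Gaussian noise model, particle counts, and all function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: one particle-based message-passing step (assumed Gaussian
# range noise) plus a generic consensus-averaging stand-in; not the paper's
# actual CoSLAT algorithm.
import numpy as np

def reweight_by_range(particles, weights, nbr_particles, z, sigma=0.5):
    """Nonparametric-BP-style update: reweight a node's position particles
    against a neighbor's particle cloud, given a noisy range measurement z."""
    # Distances from every local particle to every neighbor particle: (N, M).
    d = np.linalg.norm(particles[:, None, :] - nbr_particles[None, :, :], axis=2)
    # Average the Gaussian range likelihood over the neighbor's particles.
    lik = np.exp(-0.5 * ((z - d) / sigma) ** 2).mean(axis=1)
    w = weights * lik
    return w / w.sum()

def consensus_average(values, A, iters=20):
    """Repeated neighbor averaging with a doubly stochastic mixing matrix A;
    a stand-in for the likelihood consensus scheme."""
    x = np.asarray(values, dtype=float)
    for _ in range(iters):
        x = A @ x                      # each node mixes only neighbor values
    return x

# Toy usage: a node near (0, 0), a neighbor near (3, 0), measured range 3.1.
rng = np.random.default_rng(0)
p = rng.normal([0.0, 0.0], 1.0, size=(500, 2))
q = rng.normal([3.0, 0.0], 0.2, size=(500, 2))
w = reweight_by_range(p, np.full(500, 1 / 500), q, z=3.1)
```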

    Localization for Anchoritic Sensor Networks

    We introduce a class of anchoritic sensor networks, where communication between sensor nodes is undesirable or infeasible, e.g., due to a harsh environment, energy constraints, or security considerations.

    Feature-based calibration of distributed smart stereo camera networks

    A distributed smart camera network is a collective of vision-capable devices with enough processing power to execute algorithms for collaborative vision tasks. A true 3D sensing network applies to a broad range of applications, and local stereo vision capabilities at each node offer the potential for a particularly robust implementation. A novel spatial calibration method for such a network is presented, which obtains pose estimates suitable for collaborative 3D vision in a distributed fashion using two stages of registration on robust 3D features. The method is initially described in a geometrical sense, then presented in a practical implementation using existing vision and registration algorithms. The method is designed independently of networking details, making only a few basic assumptions about the underlying network's capabilities. Experiments using both software simulations and physical devices are designed and executed to demonstrate performance.
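
    The registration stages operate on matched 3D features between nodes; a minimal sketch of that building block, the standard SVD-based (Kabsch/Horn) rigid alignment, follows. The two-stage distributed procedure itself is not reproduced, and the assumption of known point correspondences is an illustrative simplification.

```python
# Hedged sketch: single pairwise pose estimate from corresponding 3D features,
# via the standard SVD-based rigid alignment; not the paper's full two-stage
# distributed calibration.
import numpy as np

def rigid_align(P, Q):
    """Return R, t minimizing sum_i ||R @ P[i] + t - Q[i]||^2 for
    corresponding 3D point sets P, Q of shape (N, 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```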

    Learning Higher-order Transition Models in Medium-scale Camera Networks

    We present a Bayesian framework for learning higher-order transition models in video surveillance networks. Such higher-order models describe object movement between cameras in the network and have greater predictive power for multi-camera tracking than camera adjacency alone. These models also provide inherent resilience to camera failure, filling in gaps left by single or even multiple non-adjacent camera failures. Our approach to estimating higher-order transition models relies on the accurate assignment of camera observations to the underlying trajectories of objects moving through the network. We address this data association problem by gathering the observations and evaluating alternative partitions of the observation set into individual object trajectories. Searching the complete partition space is intractable, so an incremental approach is taken, iteratively adding observations and pruning unlikely partitions. Partition likelihood is determined by the evaluation of a probabilistic graphical model. When the algorithm has considered all observations, the most likely (MAP) partition is taken as the true set of object trajectories. From these recovered trajectories, the higher-order statistics we seek can be derived and employed for tracking. The partitioning algorithm we present is parallel in nature and can be readily extended to distributed computation in medium-scale smart camera networks.
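
    A minimal sketch of the incremental add-and-prune partition search described above is given below, with a user-supplied scoring function standing in for the paper's probabilistic graphical model; the beam width and all names are illustrative assumptions.

```python
# Hedged sketch: incremental partitioning of observations into trajectories
# with beam-style pruning; log_lik is a placeholder for the paper's
# graphical-model evaluation.
def incremental_map_partition(observations, log_lik, beam=50):
    """log_lik(traj, obs) scores appending obs to trajectory traj;
    traj is None when obs starts a new trajectory."""
    hypotheses = [([], 0.0)]                       # (trajectories, log-score)
    for obs in observations:
        expanded = []
        for trajs, score in hypotheses:
            # Option 1: obs starts a new trajectory.
            expanded.append((trajs + [[obs]], score + log_lik(None, obs)))
            # Option 2: obs continues an existing trajectory.
            for i, tr in enumerate(trajs):
                new = [list(t) for t in trajs]
                new[i].append(obs)
                expanded.append((new, score + log_lik(tr, obs)))
        # Prune all but the `beam` most likely partitions.
        hypotheses = sorted(expanded, key=lambda h: -h[1])[:beam]
    return hypotheses[0]                           # approximate MAP partition
```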

    A distributed optimization framework for localization and formation control: applications to vision-based measurements

    Multiagent systems have been a major area of research for the last 15 years. This interest has been motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise. To be effective, the agents need to have the notion of a common goal shared by the entire network (for instance, a desired formation) and individual control laws to realize the goal. The common goal is typically centralized, in the sense that it involves the state of all the agents at the same time. On the other hand, it is often desirable to have individual control laws that are distributed, in the sense that the desired action of an agent depends only on the measurements and states available at the node and at a small number of neighbors. This is an attractive quality because it implies an overall system that is modular and intrinsically more robust to communication delays and node failures.
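
    A minimal sketch of a distributed control law of this kind follows, assuming synchronous gradient descent on a quadratic formation cost: each agent's update uses only its own state and those of its graph neighbors. The cost, desired offsets, and gain are illustrative assumptions rather than the article's specific formulation.

```python
# Hedged sketch: one synchronous step of distributed gradient descent on the
# formation cost sum_{(i,j)} ||x_i - x_j - d_ij||^2.
import numpy as np

def formation_step(x, edges, offsets, gain=0.1):
    """x: (n, 2) agent positions; edges: list of (i, j) pairs;
    offsets[(i, j)]: desired displacement x_i - x_j."""
    grad = np.zeros_like(x)
    for (i, j) in edges:
        e = x[i] - x[j] - offsets[(i, j)]   # local formation error
        grad[i] += e                        # agent i only needs neighbor j
        grad[j] -= e                        # agent j only needs neighbor i
    return x - gain * grad
```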

    Cloud-based Networked Visual Servo Control


    Self-localizing Smart Cameras and Their Applications

    As the prices of cameras and computing elements continue to fall, it has become increasingly attractive to consider the deployment of smart camera networks. These networks would be composed of small, networked computers equipped with inexpensive image sensors. Such networks could be employed in a wide range of applications including surveillance, robotics and 3D scene reconstruction. One critical problem that must be addressed before such systems can be deployed effectively is the issue of localization. That is, in order to take full advantage of the images gathered from multiple vantage points it is helpful to know how the cameras in the scene are positioned and oriented with respect to each other. To address the localization problem we have proposed a novel approach to localizing networks of embedded cameras and sensors. In this scheme the cameras and the nodes are equipped with controllable light sources (either visible or infrared) which are used for signaling. Each camera node can then automatically determine the bearing to all the nodes that are visible from its vantage point. By fusing these measurements with the measurements obtained from onboard accelerometers, the camera nodes are able to determine the relative positions and orientations of other nodes in the network. This localization technology can serve as a basic capability on which higher-level applications can be built. The method could be used to automatically survey the locations of sensors of interest, to implement distributed surveillance systems, or to analyze the structure of a scene based on the images obtained from multiple registered vantage points. It also provides a mechanism for integrating the imagery obtained from the cameras with the measurements obtained from distributed sensors. We have successfully used our custom-made self-localizing smart camera networks to implement a novel decentralized target tracking algorithm, create an ad-hoc range finder, and localize the components of a self-assembling modular robot.
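
    One basic capability these bearing measurements enable is triangulating an unlocalized node from two already-localized camera nodes; a minimal least-squares sketch follows. The accelerometer fusion and the full network-wide solve are omitted, and all names are illustrative assumptions.

```python
# Hedged sketch: least-squares intersection of two bearing rays; assumes the
# rays are not parallel and that bearings are expressed in a common frame.
import numpy as np

def triangulate(c1, b1, c2, b2):
    """Point minimizing the summed squared distance to rays c1 + s*b1
    and c2 + t*b2 (all arguments are length-3 arrays)."""
    def perp(b):
        b = b / np.linalg.norm(b)
        return np.eye(3) - np.outer(b, b)   # projector onto plane normal to b
    A = perp(b1) + perp(b2)
    rhs = perp(b1) @ c1 + perp(b2) @ c2
    return np.linalg.solve(A, rhs)          # singular if rays are parallel
```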