
    Interactive sensor planning

    This paper describes an interactive sensor planning system that can be used to select viewpoints subject to camera visibility, field of view, and task constraints. Application areas for this method include surveillance planning, safety monitoring, architectural site design planning, and automated site modeling. Given a description of the sensor's characteristics, the objects in the 3-D scene, and the targets to be viewed, our algorithms compute the set of admissible viewpoints that satisfy the constraints. The system first builds topologically correct solid models of the scene from a variety of data sources. Viewing targets are then selected, and visibility volumes and field of view cones are computed and intersected to create viewing volumes where cameras can be placed. The user can interactively manipulate the scene and select multiple target features to be viewed by a camera. The user can also select candidate viewpoints within this volume to synthesize views and verify the correctness of the planning system. We present experimental results for the planning system on an actual complex city model.
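    A minimal sketch of the constraint-intersection idea described above, not the paper's implementation: admissible viewpoints are found by intersecting a field-of-view test and an occlusion test over a discretized set of candidate camera positions. The function and parameter names, the sphere-based occluders, and the fixed optical axis are all illustrative assumptions.

```python
import numpy as np

def admissible_viewpoints(candidates, targets, occluders, fov_half_angle, optical_axis):
    """candidates: (N,3) candidate camera positions; targets: (M,3) target points;
    occluders: list of (center, radius) spheres used as a stand-in occlusion test;
    optical_axis: unit vector assumed shared by all candidate cameras (a simplification)."""
    keep = np.ones(len(candidates), dtype=bool)
    for t in targets:
        rays = t - candidates                      # viewpoint-to-target vectors
        dist = np.linalg.norm(rays, axis=1)
        dirs = rays / dist[:, None]
        # Field-of-view cone: the target must lie within the half-angle of the axis.
        in_fov = np.arccos(np.clip(dirs @ optical_axis, -1.0, 1.0)) <= fov_half_angle
        # Visibility volume: reject viewpoints whose line of sight crosses an occluder.
        visible = np.ones(len(candidates), dtype=bool)
        for c, r in occluders:
            to_c = c - candidates
            along = np.einsum('ij,ij->i', to_c, dirs)       # projection onto the ray
            closest = np.clip(along, 0.0, dist)
            gap = np.linalg.norm(to_c - closest[:, None] * dirs, axis=1)
            visible &= gap > r
        keep &= in_fov & visible                   # intersect the constraint sets per target
    return candidates[keep]
```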

    Causal simulation and sensor planning in predictive monitoring

    Two issues that arise in detecting anomalous behavior in complex systems with numerous sensor channels are addressed: how to adjust alarm thresholds dynamically within the changing operating context of the system, and how to utilize sensors selectively so that nominal operation can be verified reliably without processing a prohibitive amount of sensor data. The approach involves simulation of a causal model of the system, which provides information on expected sensor values and on dependencies between predicted events, useful in assessing the relative importance of events so that sensor resources can be allocated effectively. The potential applicability of this work to the execution monitoring of robot task plans is briefly discussed.
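    An illustrative sketch of the dynamic-threshold idea only, not the paper's method: alarm bands are centred on the causal model's predicted values and widened with the uncertainty of the current operating context. The step_fn interface and the tolerance model are assumptions.

```python
def check_channel(observed, predicted, base_tol, context_scale):
    """Flag an anomaly when the observation leaves a band centred on the model
    prediction; the band widens with operating-context uncertainty (context_scale >= 1)."""
    return abs(observed - predicted) > base_tol * context_scale

def monitor(step_fn, sensors, readings, context_scale=1.0):
    """step_fn: one causal-model simulation step returning {sensor: predicted value}.
    sensors: {sensor: base tolerance}. readings: {sensor: observed value}.
    Returns the sensors whose readings are anomalous in the current context."""
    predicted = step_fn()
    return [s for s, base_tol in sensors.items()
            if check_channel(readings[s], predicted[s], base_tol, context_scale)]
```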

    Constraint-based sensor planning for scene modeling

    We describe an automated scene modeling system that consists of two components operating in an interleaved fashion: an incremental modeler that builds solid models from range imagery, and a sensor planner that analyzes the resulting model and computes the next sensor position. This planning component is target-driven and computes sensor positions using model information about the imaged surfaces and the unexplored space in a scene. The method is shape-independent and uses a continuous-space representation that preserves the accuracy of sensed data. It is able to completely acquire a scene by repeatedly planning sensor positions, utilizing a partial model to determine volumes of visibility for contiguous areas of unexplored scene. These visibility volumes are combined with sensor placement constraints to compute sets of occlusion-free sensor positions that are guaranteed to improve the quality of the model. We show results for the acquisition of a scene that includes multiple distinct objects with high occlusion.
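    A schematic loop with assumed interfaces (not the authors' code) showing the interleaved operation described above: merge a range image into the incremental model, plan the next occlusion-free sensor position from the partial model, and repeat until no unexplored volume remains.

```python
def acquire_scene(modeler, planner, sensor, max_views=20):
    """modeler, planner, sensor are hypothetical objects standing in for the
    two system components and the range sensor described in the abstract."""
    pose = sensor.initial_pose()
    model = None
    for _ in range(max_views):
        range_image = sensor.capture(pose)          # image the scene from the current pose
        model = modeler.merge(range_image, pose)    # incremental solid-model update
        unexplored = model.unexplored_volume()      # contiguous unseen regions
        if not unexplored:
            break                                   # scene completely acquired
        # Combine visibility volumes for unexplored regions with sensor placement
        # constraints to pick an occlusion-free next sensor position.
        pose = planner.next_best_view(model, unexplored)
    return model
```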

    3D sensor planning framework for leaf probing

    Paper presented at the International Conference on Intelligent Robots and Systems, held in Hamburg (Germany) from 28 September to 2 October 2015. Modern plant phenotyping requires active sensing technologies and particular exploration strategies. This article proposes a new method for actively exploring a 3D region of space with the aim of localizing special areas of interest for manipulation tasks over plants. In our method, exploration is guided by a multi-layer occupancy grid map. This map, together with a multiple-view estimator and a maximum-information-gain gathering approach, incrementally provides a better understanding of the scene until a task termination criterion is reached. This approach is designed to be applicable to any task entailing 3D object exploration where some previous knowledge of the object's general shape is available. Its suitability is demonstrated here for an eye-in-hand arm configuration in a leaf probing application. This research has been partially funded by the CSIC project MANIPlus 201350E102. Peer reviewed.
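    A minimal sketch, assuming a per-cell occupancy-probability grid and a viewpoint-to-visible-cells model, of how a maximum-information-gain criterion can drive view selection in the spirit of the approach above. All names are illustrative, and the multi-layer structure of the paper's grid is not reproduced.

```python
import math

def cell_entropy(p):
    """Binary entropy (bits) of an occupancy probability."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def next_best_view(candidate_poses, visible_cells, occupancy):
    """candidate_poses: iterable of sensor poses; visible_cells(pose) -> cell ids
    expected to be observed from that pose; occupancy: {cell id: P(occupied)}.
    Returns the pose whose observed cells carry the most remaining uncertainty,
    i.e. the largest expected information gain under this simple model."""
    def expected_gain(pose):
        return sum(cell_entropy(occupancy[c]) for c in visible_cells(pose))
    return max(candidate_poses, key=expected_gain)
```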

    Sensor Planning for Object Pose Estimation and Identification

    This paper proposes a novel approach to sensor planning for simultaneous object identification and 3D pose estimation. We consider the problem of determining the next-best-view for a movable sensor (or an autonomous agent) to identify an unknown object from among a database of known object models. We use an information-theoretic approach to define a metric, based on the difference between the current and expected model entropy, that guides the selection of the optimal control action. We present a generalized algorithm that can be used in sensor planning for object identification and pose estimation. Experimental results are also presented to validate the proposed algorithm.
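    A hedged sketch of the entropy-difference idea: each candidate sensing action is scored by how much it is expected to reduce the entropy of the belief over (object, pose) hypotheses. The observation model obs_model(z, h, a) is an assumed interface; this is not the paper's algorithm verbatim.

```python
import math

def entropy(belief):
    return -sum(p * math.log(p) for p in belief.values() if p > 0.0)

def expected_entropy_after(belief, action, obs_model, observations):
    """obs_model(z, h, a) -> p(z | hypothesis h, action a); observations is a
    discretized set of possible measurements z."""
    expected = 0.0
    for z in observations:
        p_z = sum(obs_model(z, h, action) * p for h, p in belief.items())
        if p_z <= 0.0:
            continue
        posterior = {h: obs_model(z, h, action) * p / p_z for h, p in belief.items()}
        expected += p_z * entropy(posterior)
    return expected

def best_action(belief, actions, obs_model, observations):
    # Maximize expected information gain: current entropy minus expected posterior entropy.
    return max(actions, key=lambda a: entropy(belief)
               - expected_entropy_after(belief, a, obs_model, observations))
```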

    Multi-Agent Orbit Design For Perception Enhancement Purpose

    This paper develops a robust optimization based method to design orbits on which the sensory perception of the desired physical quantities is maximized. It also demonstrates how to incorporate various constraints imposed by many spacecraft missions, such as collision avoidance, co-orbital configuration, altitude and frozen-orbit constraints, along with Sun-synchronous orbits. The paper specifically investigates designing orbits for constrained visual sensor planning applications as the case study. For this purpose, the key elements of image formation in such vision systems are considered, and the relevant factors are taken into account to define a metric for perception quality. The simulation results confirm the effectiveness of the proposed method for several scenarios on low and medium Earth orbits, as well as a challenging Space-Based Space Surveillance program application. Comment: 12 pages, 18 figures.
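    A toy sketch, not the paper's robust-optimization formulation: a perception-quality metric is maximized over orbit parameters subject to a simple altitude constraint. The perception_quality() function is a placeholder for the image-formation-based metric described above, the two parameters are illustrative, and SLSQP is just one solver choice.

```python
import numpy as np
from scipy.optimize import minimize

def perception_quality(x):
    altitude_km, inclination_deg = x
    # Placeholder metric: favours lower altitude (finer ground resolution) and
    # near-polar inclination (wider coverage); purely illustrative.
    return (1000.0 / altitude_km) + np.cos(np.radians(inclination_deg - 90.0))

result = minimize(
    lambda x: -perception_quality(x),            # maximize by minimizing the negative
    x0=np.array([700.0, 80.0]),                  # initial altitude (km), inclination (deg)
    method="SLSQP",
    bounds=[(400.0, 2000.0), (0.0, 180.0)],      # LEO-to-MEO altitude band, inclination range
    constraints=[{"type": "ineq", "fun": lambda x: x[0] - 500.0}],  # altitude >= 500 km
)
print(result.x)  # orbit parameters with the best (toy) perception score
```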