Towards an Iterative Algorithm for the Optimal Boundary Coverage of a 3D Environment
This paper presents a new optimal algorithm for locating a set of sensors in 3D able to see the boundaries of a polyhedral environment. Our approach is iterative and is based on a lower bound on the number of sensors and on a restriction of the original problem that requires each face to be observed in its entirety by at least one sensor. The lower bound allows evaluating the quality of the solution obtained at each step and halting the algorithm if the solution is satisfactory. The algorithm asymptotically converges to the optimal solution of the unrestricted problem as the faces are subdivided into smaller parts.
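The iterative scheme this abstract describes can be illustrated in miniature. The sketch below is a 1D toy, not the paper's algorithm: faces are intervals on a line, a sensor "sees a face whole" only if the face lies inside the sensor's interval, greedy set cover stands in for solving the restricted problem, and the lower bound and stopping criterion are illustrative assumptions.

```python
def sees(sensor, face):
    """Toy 1D visibility: sensor (lo, hi) sees a face fully iff the face
    interval lies entirely inside the sensor interval."""
    return sensor[0] <= face[0] and face[1] <= sensor[1]

def greedy_cover(faces, sensors):
    """Greedy set cover for the restricted problem (each face seen whole)."""
    uncovered, chosen = list(faces), []
    while uncovered:
        best = max(sensors, key=lambda s: sum(sees(s, f) for f in uncovered))
        if not any(sees(best, f) for f in uncovered):
            raise ValueError("a face is not fully visible from any sensor")
        chosen.append(best)
        uncovered = [f for f in uncovered if not sees(best, f)]
    return chosen

def subdivide(face):
    """Split a face at its midpoint (stands in for 3D face subdivision)."""
    mid = (face[0] + face[1]) / 2
    return [(face[0], mid), (mid, face[1])]

def iterative_coverage(faces, sensors, lower_bound, quality=1.0, max_rounds=8):
    """Solve the restricted problem, subdividing faces until the solution is
    within a `quality` factor of the lower bound (or rounds run out)."""
    solution = None
    for _ in range(max_rounds):
        try:
            solution = greedy_cover(faces, sensors)
        except ValueError:      # some face is only partially visible:
            solution = None     # force another subdivision round
        if solution is not None and len(solution) <= quality * lower_bound:
            return solution
        faces = [part for f in faces for part in subdivide(f)]
    return solution
```

Note how subdivision is what connects the restricted problem back to the original one: a face no single sensor sees whole becomes coverable once it is split into parts, which is the convergence mechanism the abstract refers to.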
Computing camera viewpoints in a robot work-cell
Automatically planning a camera viewpoint for tasks such as inspection in an active robot work-cell is a difficult problem. This paper discusses new methods for computing viewpoints which meet the feature detectability constraints of focus, field-of-view, visibility, and resolution. A theoretical outline of the method is presented, followed by experimental results and a discussion of future work.
The MVP sensor planning system for robotic vision tasks
The MVP (machine vision planner) model-based sensor planning system for robotic vision is presented. MVP automatically synthesizes desirable camera views of a scene based on geometric models of the environment, optical models of the vision sensors, and models of the task to be achieved. The generic task of feature detectability has been chosen since it is applicable to many robot-controlled vision systems. For such a task, features of interest in the environment are required to simultaneously be visible, inside the field of view, in focus, and magnified as required. In this paper, we present a technique that poses the vision sensor planning problem in an optimization setting and determines viewpoints that satisfy all of these requirements simultaneously and with a margin. In addition, we present experimental results of this technique when applied to a robotic vision system that consists of a camera mounted on a robot manipulator in a hand-eye configuration.
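The "satisfy all requirements simultaneously and with a margin" idea can be sketched as maximizing the worst-case slack across the constraints. The toy below is not the MVP formulation: it reduces the viewpoint to a single distance `d` from the feature, and the constraint forms and numeric thresholds are illustrative assumptions.

```python
import math

def constraint_margins(d, near=0.4, far=2.0, feature_size=0.1,
                       fov=math.radians(40), min_image_size=0.03):
    """Signed margins for a camera at distance d from a feature; every
    margin positive means every requirement is met, and the minimum
    margin is the overall slack of the viewpoint."""
    return {
        "focus":      min(d - near, far - d),                     # inside depth of field
        "fov":        fov - 2 * math.atan2(feature_size / 2, d),  # feature fits in view
        "resolution": feature_size / d - min_image_size,          # feature large enough
    }

def best_distance(candidates):
    """Pick the candidate viewpoint that maximizes the worst-case margin."""
    return max(candidates, key=lambda d: min(constraint_margins(d).values()))
```

The max-min structure is what produces a solution "with a margin": instead of merely finding a feasible viewpoint, it prefers the viewpoint furthest from violating any single constraint.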
Automated sensor planning for robotic vision tasks
A method is presented to determine viewpoints for a robotic vision system for which object features of interest will simultaneously be visible, inside the field-of-view, in-focus, and magnified as required. A technique that poses the problem in an optimization setting in order to determine viewpoints that satisfy all requirements simultaneously and with a margin is presented. The formulation and results of the optimization are shown, as well as experimental results in which a robot vision system is positioned and its lens is set according to this method. Camera views are taken from the computed viewpoints in order to verify that all feature detectability requirements are satisfied.
Dynamic sensor planning
A method of extending the sensor planning abilities of the MVP (machine vision planning) system to plan viewpoints for monitoring a pre-planned robot task is described. The dynamic sensor planning system presented analyzes geometric models of the environment and of the planned motions of the robot, as well as optical models of the vision sensor. Using a combination of swept volumes and a temporal interval search technique, it computes a series of viewpoints, each of which provides a valid viewpoint for a different interval of the task. By mounting a camera on another manipulator, the viewpoints can be executed at appropriate times during the task so that there is always a robust view suitable for monitoring the task. Experimental results monitoring a simulated robot operation are presented, and directions for future research are discussed.
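The temporal-interval part of this approach can be sketched as an interval-covering problem: suppose each candidate viewpoint has already been assigned the time intervals during which it is valid (for instance, where the robot's swept volume does not occlude it), and the planner must chain viewpoints so the whole task is covered. The greedy chaining below is an illustrative stand-in, not the paper's search technique, and all names are assumptions.

```python
def cover_task(task, valid):
    """task = (t0, t1); valid maps viewpoint -> list of (start, end) time
    intervals during which that viewpoint is usable. Returns a plan of
    (viewpoint, switch_time) pairs covering the task, or None on a gap."""
    t, plan = task[0], []
    while t < task[1]:
        # Among viewpoints valid at time t, pick the one whose current
        # validity interval extends furthest into the future.
        best, reach = None, t
        for v, intervals in valid.items():
            for (a, b) in intervals:
                if a <= t < b and b > reach:
                    best, reach = v, b
        if best is None:
            return None   # no viewpoint is valid at time t: coverage gap
        plan.append((best, t))
        t = reach
    return plan
```

A returned plan such as `[("A", 0), ("B", 4), ("C", 8)]` corresponds to the abstract's "series of viewpoints, each of which provides a valid viewpoint for a different interval of the task", with the second element of each pair giving the time at which the camera-carrying manipulator should switch views.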
This paper presents a dynamic sensor-planning system that is capable of planning the locations and settings of vision sensors for use in an environment containing objects moving in known ways. The key component of this research is the computation of the camera position, orientation, and optical settings to be used over a time interval. A new algorithm is presented for viewpoint computation which ensures that the feature-detectability constraints of focus, resolution, field of view, and visibility are satisfied. A five-degree-of-freedom Cartesian robot carrying a CCD camera in a hand/eye configuration and surrounding the work cell of a Puma 560 robot was constructed for performing sensor-planning experiments. The results of these experiments, demonstrating the use of this system in a robot work cell, are presented.