8,191 research outputs found
Recommended from our members
The MVP sensor planning system for robotic vision tasks
The MVP (machine vision planner) model-based sensor planning system for robotic vision is presented. MVP automatically synthesizes desirable camera views of a scene based on geometric models of the environment, optical models of the vision sensors, and models of the task to be achieved. The generic task of feature detectability has been chosen since it is applicable to many robot-controlled vision systems. For such a task, features of interest in the environment are required to be simultaneously visible, inside the field of view, in focus, and magnified as required. In this paper, we present a technique that poses the vision sensor planning problem in an optimization setting and determines viewpoints that satisfy all of the preceding requirements simultaneously and with a margin. In addition, we present experimental results of this technique when applied to a robotic vision system that consists of a camera mounted on a robot manipulator in a hand-eye configuration.
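As a hedged sketch, not the MVP system's actual formulation, the idea of satisfying all detectability requirements "with a margin" can be illustrated by maximizing the worst-case constraint margin over candidate viewpoints. The 2D geometry, constraint bounds, occluder, and random search below are all invented stand-ins:

```python
import math
import random

# Toy 2D stand-in for margin-maximizing viewpoint planning: each
# detectability constraint returns a signed margin (positive = satisfied),
# and we search for the viewpoint that maximizes the smallest margin.
TARGET = (0.0, 0.0)           # feature to observe (assumed)
OCCLUDER = ((2.0, 0.0), 0.5)  # disk obstacle: center, radius (assumed)

def resolution_margin(vp):
    # Assumed: the feature is resolved only within 5 units of the camera.
    return 5.0 - math.dist(vp, TARGET)

def focus_margin(vp):
    # Assumed depth of field: the feature must lie 1 to 4 units away.
    d = math.dist(vp, TARGET)
    return min(d - 1.0, 4.0 - d)

def visibility_margin(vp):
    # Line of sight must clear the occluding disk: distance from the
    # disk center to the camera-target segment, minus the disk radius.
    (cx, cy), r = OCCLUDER
    ax, ay = vp
    bx, by = TARGET
    abx, aby = bx - ax, by - ay
    t = ((cx - ax) * abx + (cy - ay) * aby) / (abx * abx + aby * aby + 1e-12)
    t = max(0.0, min(1.0, t))
    return math.dist((cx, cy), (ax + t * abx, ay + t * aby)) - r

def worst_margin(vp):
    return min(resolution_margin(vp), focus_margin(vp), visibility_margin(vp))

# Simple random search stands in for the paper's optimization routine.
random.seed(0)
best = max(((random.uniform(-6, 6), random.uniform(-6, 6))
            for _ in range(20000)), key=worst_margin)
print(best, worst_margin(best))  # a positive worst margin means all constraints hold
```

The max-min objective is what yields a margin: instead of merely finding a feasible viewpoint, the search pushes the viewpoint away from every constraint boundary at once.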
Computing camera viewpoints in a robot work-cell
Automatically planning a camera viewpoint for tasks such as inspection in an active robot work-cell is a difficult problem. This paper discusses new methods for computing viewpoints that meet the feature detectability constraints of focus, field of view, visibility, and resolution. A theoretical outline of the method is presented, followed by experimental results and a discussion of future work.
MoSculp: Interactive Visualization of Shape and Time
We present a system that allows users to visualize complex human motion via 3D motion sculptures, a representation that conveys the 3D structure swept by a human body as it moves through space. Given an input video, our system computes the motion sculpture and provides a user interface for rendering it in different styles, including the options to insert the sculpture back into the original video, render it in a synthetic scene, or physically print it. To provide this end-to-end workflow, we introduce an algorithm that estimates the human's 3D geometry over time from a set of 2D images and develop a 3D-aware image-based rendering approach that embeds the sculpture back into the scene. By automating the process, our system takes motion sculpture creation out of the realm of professional artists and makes it applicable to a wide range of existing video material.
By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and allow viewers to interpret how different parts of the object interact over time. We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time visualization methods.
Comment: UIST 2018. Project page: http://mosculp.csail.mit.edu
Automated sensor planning for robotic vision tasks
A method is presented to determine viewpoints for a robotic vision system for which object features of interest will simultaneously be visible, inside the field of view, in focus, and magnified as required. A technique that poses the problem in an optimization setting in order to determine viewpoints that satisfy all requirements simultaneously and with a margin is presented. The formulation and results of the optimization are shown, as well as experimental results in which a robot vision system is positioned and its lens is set according to this method. Camera views are taken from the computed viewpoints in order to verify that all feature detectability requirements are satisfied.
Computing robust viewpoints with multi-constraints using tree annealing
In order to compute camera viewpoints during sensor planning, Tarabanis et al. (1991) present a group of feature detectability constraints comprising six nonlinear inequalities in an eight-dimensional real space. It is difficult to compute robust viewpoints that satisfy all feature detectability constraints. In this paper, viewpoint selection is formulated as an unconstrained optimization problem. A tree annealing algorithm, a general-purpose technique for finding minima of functions of continuously valued variables, is then applied to solve this nonlinear multi-constraint optimization problem. Our results show that the technique is quite effective at finding robust viewpoints even in the presence of considerable amounts of noise.
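A hedged sketch of the unconstrained reformulation described above: the paper's tree annealing builds a spatial partition tree over the search space, but plain simulated annealing on a penalty function conveys the core idea. The two toy inequality constraints below are invented stand-ins, not the six detectability constraints of Tarabanis et al.:

```python
import math
import random

def penalty(x, y):
    # Unconstrained reformulation: sum of squared constraint violations.
    # g1: x^2 + y^2 <= 4   (assumed working volume)
    # g2: y >= sin(x)      (assumed nonlinear constraint surface)
    g1 = max(0.0, x * x + y * y - 4.0)
    g2 = max(0.0, math.sin(x) - y)
    return g1 * g1 + g2 * g2

def anneal(steps=5000, t0=1.0, sigma=0.3):
    random.seed(1)
    x, y = 5.0, -5.0                       # start far outside the feasible set
    best = (x, y, penalty(x, y))
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6  # linear cooling schedule
        nx, ny = x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma)
        delta = penalty(nx, ny) - penalty(x, y)
        # Metropolis rule: always accept improvements; accept worsening
        # moves with probability exp(-delta/t), which shrinks as t cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, y = nx, ny
            if penalty(x, y) < best[2]:
                best = (x, y, penalty(x, y))
    return best

bx, by, bp = anneal()
print((bx, by), bp)  # a penalty of zero means every constraint is satisfied
```

Accepting occasional uphill moves at high temperature is what lets annealing escape local minima of the penalty surface, which is the motivation for using it on this multi-constraint problem.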
Camera Placement Planning Avoiding Occlusion: Test Results Using a Robotic Hand/Eye System
Camera placement experiments are presented that demonstrate the effectiveness of a viewpoint planning algorithm that avoids occlusion of a visual target. A CCD camera mounted on a robot in a hand-eye configuration is placed at planned unobstructed viewpoints to observe a target on a real object. The validity of the method is tested by placing the camera inside the viewing region constructed by the proposed new sensor placement planning algorithm and observing whether the target is truly visible. The accuracy of the boundary of the constructed viewing region is tested by placing the camera at critical locations on the viewing region boundary and confirming that the target is barely visible. The corresponding scenes from the candidate viewpoints are shown, demonstrating that occlusions are properly avoided.
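A minimal sketch of the kind of line-of-sight test these placement experiments verify physically. Occluders are modeled here as 2D line segments (invented geometry, not the paper's setup); a viewpoint is deemed valid when the camera-to-target segment properly crosses no occluder:

```python
# 2D visibility check via segment-segment intersection. Grazing or
# collinear contact is ignored, which is acceptable for a sketch.

def cross(ox, oy, ax, ay, bx, by):
    # z-component of (a - o) x (b - o); its sign gives the turn direction
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def segments_intersect(p1, p2, q1, q2):
    # Proper crossing: each segment's endpoints straddle the other's line.
    d1 = cross(*q1, *q2, *p1)
    d2 = cross(*q1, *q2, *p2)
    d3 = cross(*p1, *p2, *q1)
    d4 = cross(*p1, *p2, *q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def target_visible(camera, target, occluders):
    return not any(segments_intersect(camera, target, a, b)
                   for a, b in occluders)

wall = ((1.0, -1.0), (1.0, 1.0))   # vertical wall between camera and target
print(target_visible((2.0, 0.0), (0.0, 0.0), [wall]))  # → False (occluded)
print(target_visible((0.0, 2.0), (0.0, 0.0), [wall]))  # → True (clear view)
```

Placing a candidate viewpoint exactly on the wall's shadow boundary is the 2D analogue of the paper's boundary test, where the target is barely visible.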
- …