Machine vision based teleoperation aid
When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive, requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator. A machine-vision-based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays information on the object's current and desired positions onto the operator's screen. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative-orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators showed that task accuracies were significantly greater with the aid than without it.
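The abstract does not describe the aid's internals; as a minimal sketch of the kind of pose-difference computation such an overlay might present as text (the function names, units, and yaw-only orientation model are all hypothetical assumptions, not the paper's method):

```python
def pose_error(current_t, desired_t, current_yaw, desired_yaw):
    """Per-axis translation offset and wrapped yaw difference between
    an object's current and desired pose (yaw in degrees)."""
    dx, dy, dz = (d - c for c, d in zip(current_t, desired_t))
    # Wrap the angular difference into (-180, 180] so the overlay
    # always suggests the shorter rotation direction.
    dyaw = (desired_yaw - current_yaw + 180.0) % 360.0 - 180.0
    return (dx, dy, dz), dyaw

def overlay_text(offset, dyaw):
    """Format the pose error as a short on-screen instruction."""
    dx, dy, dz = offset
    return f"move dx={dx:+.1f} dy={dy:+.1f} dz={dz:+.1f} cm, rotate {dyaw:+.1f} deg"

offset, dyaw = pose_error((0.0, 0.0, 10.0), (2.0, -1.0, 12.0), 350.0, 10.0)
print(overlay_text(offset, dyaw))  # move dx=+2.0 dy=-1.0 dz=+2.0 cm, rotate +20.0 deg
```

Wrapping the yaw difference matters: a naive `desired - current` here would suggest a 340-degree rotation instead of 20 degrees.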
Synthetic movies
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1989. Includes bibliographical references (leaves 67-70). By John A. Watlington, M.S.
Interaction between high-level and low-level image analysis for semantic video object extraction
A stereo display prototype with multiple focal distances
Rooms with Text: A Dataset for Overlaying Text Detection
In this paper, we introduce a new dataset of room interior pictures with overlaying and scene text, totalling 4836 annotated images in 25 product categories. We provide details on the collection and annotation process of our dataset, and analyze its statistics. Furthermore, we propose a baseline method for overlaying text detection that leverages the character region-aware text detection framework to guide the classification model. We validate our approach and show its efficiency in terms of binary classification metrics, reaching a final performance of 0.95 F1 score, with false positive and false negative rates of 0.02 and 0.06 respectively.
Comment: Text in Everything workshop at ECCV 202
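The reported figures can be cross-checked against each other: a false negative rate of 0.06 implies recall of 0.94, and rearranging F1 = 2PR/(P+R) gives the precision implied by the 0.95 F1 score (a small arithmetic sketch; the function name is ours, not the paper's):

```python
def implied_precision(f1, recall):
    """Solve F1 = 2*P*R / (P + R) for precision P, given F1 and recall R."""
    return f1 * recall / (2 * recall - f1)

recall = 1.0 - 0.06  # recall = 1 - false negative rate
print(round(implied_precision(0.95, recall), 3))  # roughly 0.96, consistent with the abstract
```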