10,274 research outputs found
Spartan Daily, October 10, 1934
Volume 23, Issue 14
Research in interactive scene analysis
Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high-volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.
Manipulate by Seeing: Creating Manipulation Controllers from Pre-Trained Representations
The field of visual representation learning has seen explosive growth in the past years, but its benefits in robotics have been surprisingly limited so far. Prior work uses generic visual representations as a basis to learn (task-specific) robot action policies (e.g., via behavior cloning). While the visual representations do accelerate learning, they are primarily used to encode visual observations. Thus, action information has to be derived purely from robot data, which is expensive to collect. In this work, we present a scalable alternative where the visual representations can help directly infer robot actions. We observe that vision encoders express relationships between image observations as distances (e.g., via embedding dot product) that could be used to efficiently plan robot behavior. We operationalize this insight and develop a simple algorithm for acquiring a distance function and dynamics predictor, by fine-tuning a pre-trained representation on human-collected video sequences. The final method substantially outperforms traditional robot learning baselines (e.g., 70% success vs. 50% for behavior cloning on pick-place) on a suite of diverse real-world manipulation tasks. It can also generalize to novel objects, without using any robot demonstrations at training time. For visualizations of the learned policies, see: https://agi-labs.github.io/manipulate-by-seeing/.
Comment: Oral Presentation at the International Conference on Computer Vision (ICCV), 202
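The planning recipe described in the abstract, an embedding-space distance function plus a one-step dynamics predictor used to score candidate actions, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `encode`, `predict_next`, and `plan_greedy` functions and the random linear "encoder" and "dynamics" weights are hypothetical stand-ins for the fine-tuned networks the authors actually train.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a frozen linear "encoder" mapping 16-d observations
# to 8-d embeddings, and a linear "dynamics" map over (embedding, 2-d action).
ENC = rng.normal(size=(16, 8))
DYN = rng.normal(size=(8 + 2, 8)) * 0.1

def encode(obs):
    """Embed an observation; in the paper this is a fine-tuned vision encoder."""
    z = obs @ ENC
    return z / np.linalg.norm(z)

def predict_next(z, action):
    """Dynamics predictor: (current embedding, action) -> predicted next embedding."""
    z_next = np.concatenate([z, action]) @ DYN
    return z_next / np.linalg.norm(z_next)

def distance(z_a, z_b):
    """Embedding distance derived from a dot product, as the abstract suggests."""
    return 1.0 - float(z_a @ z_b)

def plan_greedy(obs, goal_obs, candidate_actions):
    """Pick the action whose predicted next embedding lies closest to the goal."""
    z, z_goal = encode(obs), encode(goal_obs)
    scores = [distance(predict_next(z, a), z_goal) for a in candidate_actions]
    return candidate_actions[int(np.argmin(scores))]

# Usage: choose among three discrete candidate actions for random toy observations.
actions = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
best = plan_greedy(rng.normal(size=16), rng.normal(size=16), actions)
print(best)
```

Because actions are scored entirely in embedding space, the action information comes from the (human-video-tuned) representation rather than from robot demonstrations, which is the abstract's central point.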
Human engineering design criteria study Final report
Human engineering design criteria for use in designing earth launch vehicle systems and equipment.