Cascaded Scene Flow Prediction using Semantic Segmentation
Given two consecutive frames from a pair of stereo cameras, 3D scene flow
methods simultaneously estimate the 3D geometry and motion of the observed
scene. Many existing approaches use superpixels for regularization, but may
predict inconsistent shapes and motions inside rigidly moving objects. We
instead assume that scenes consist of foreground objects rigidly moving in
front of a static background, and use semantic cues to produce pixel-accurate
scene flow estimates. Our cascaded classification framework accurately models
3D scenes by iteratively refining semantic segmentation masks, stereo
correspondences, 3D rigid motion estimates, and optical flow fields. We
evaluate our method on the challenging KITTI autonomous driving benchmark, and
show that accounting for the motion of segmented vehicles leads to
state-of-the-art performance.
Comment: International Conference on 3D Vision (3DV), 2017 (oral presentation)
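A key idea in the abstract above is that a segmented vehicle's motion can be summarized by a single 3D rigid transform, which then induces a dense optical flow field over that vehicle's pixels. The sketch below is illustrative only, not the paper's implementation: the function name, toy intrinsics, and constant-depth scene are all assumptions.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Optical flow induced by a rigid motion (R, t) of a scene with
    known per-pixel depth, under a pinhole camera with intrinsics K."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, shape (3, N).
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    # Back-project each pixel to a 3D point at its depth.
    P = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Apply the rigid motion and re-project.
    P2 = R @ P + t.reshape(3, 1)
    p2 = K @ P2
    p2 = p2[:2] / p2[2]
    # Flow is the displacement of each pixel between the two frames.
    return (p2 - pix[:2]).reshape(2, h, w)
```

For a fronto-parallel plane translating sideways, this produces the expected uniform horizontal flow of `f * tx / Z` pixels.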
Ames vision group research overview
A major goal of the research group is to develop mathematical and computational models of early human vision. These models are valuable in the prediction of human performance, in the design of visual coding schemes and displays, and in robotic vision. To date researchers have models of retinal sampling, spatial processing in visual cortex, contrast sensitivity, and motion processing. Based on their models of early human vision, researchers developed several schemes for efficient coding and compression of monochrome and color images. These are pyramid schemes that decompose the image into features that vary in location, size, orientation, and phase. To determine the perceptual fidelity of these codes, researchers developed novel human testing methods that have received considerable attention in the research community. Researchers constructed models of human visual motion processing based on physiological and psychophysical data, and have tested these models through simulation and human experiments. They also explored the application of these biological algorithms to automated guidance of rotorcraft and autonomous landing of spacecraft. Researchers developed networks for inhomogeneous image sampling, for pyramid coding of images, for automatic geometrical correction of disordered samples, and for removal of motion artifacts from unstable cameras.
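The pyramid codes described above decompose an image into features varying in location, size, orientation, and phase. The sketch below captures only the location/size part of that idea with a simple Laplacian-style pyramid; it is a minimal illustration with assumed function names, not the group's actual oriented, phase-tuned coding scheme.

```python
import numpy as np

def build_pyramid(img, levels):
    """Laplacian-style pyramid: each level stores the detail lost when
    halving resolution; the last entry is the coarse low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        h, w = cur.shape
        # Downsample by averaging 2x2 blocks (assumes even dimensions).
        small = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
        pyr.append(cur - up)   # band-pass detail at this scale
        cur = small
    pyr.append(cur)            # low-pass residual
    return pyr

def reconstruct(pyr):
    """Invert build_pyramid exactly by upsampling and adding detail."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1) + detail
    return cur
```

Because each detail level stores exactly what the downsampling discards, reconstruction is lossless; practical codes then quantize the detail levels, which is where compression and the perceptual-fidelity testing mentioned above come in.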
A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts
This paper presents a multi-robot system for manufacturing personalized
medical stent grafts. The proposed system adopts a modular design, which
includes: a (personalized) mandrel module, a bimanual sewing module, and a
vision module. The mandrel module incorporates the personalized geometry of
patients, while the bimanual sewing module adopts a learning-by-demonstration
approach to transfer human hand-sewing skills to the robots. The human
demonstrations were first observed by the vision module and then encoded
using a statistical model to generate the reference motion trajectories. During
autonomous robot sewing, the vision module plays the role of coordinating
multi-robot collaboration. Experimental results show that the robots can adapt to
generalized stent designs. The proposed system can also be used for other
manipulation tasks, especially for flexible production of customized products
and where bimanual or multi-robot cooperation is required.
Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Keywords: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing
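The abstract above encodes observed human demonstrations with a statistical model to generate reference motion trajectories. The paper's actual model is not specified here; the sketch below is a deliberately simplified stand-in that summarizes several time-aligned demonstrations by their per-step mean (the reference trajectory) and standard deviation (a variability envelope a controller could exploit). Names and shapes are assumptions.

```python
import numpy as np

def encode_demonstrations(demos):
    """Summarize time-aligned demonstrations of shape (n_demos, T, dof)
    into a reference trajectory (per-step mean across demos) and a
    variability envelope (per-step standard deviation)."""
    demos = np.asarray(demos, float)
    reference = demos.mean(axis=0)    # what the robot should track
    variability = demos.std(axis=0)   # where deviation is tolerable
    return reference, variability
```

Richer encodings (e.g. Gaussian mixture models with regression over time) produce the same kind of output, a mean trajectory with a confidence envelope, from unaligned or multi-modal demonstrations.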
Vision-based Vehicle Navigation Using the Fluorescent Lamp Array on the Ceiling
In this paper, autonomous navigation based on a TV vision system on board a vehicle is proposed. Our method uses fluorescent lamp arrays on the ceiling as a lighthouse for vehicle motion. First, an experimental study of vehicle control based on information from photo-sensors set up on a TV screen is carried out using an actual-size model. Then, numerical simulations for this control scheme are carried out in detail. Moreover, a more vision-based approach is investigated, which extracts information from the images of the fluorescent lamp arrays to realize more exact motion along the lamp array. Finally, a practical autonomous vehicle controlled by a photo-sensor system is constructed, and its experimental results are shown.
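Following a ceiling lamp array requires estimating the vehicle's heading relative to the line of lamps seen in the image. The paper does not give its algorithm; one plausible sketch, with assumed names, fits a line to detected lamp centroids via SVD of the centered points and reads off its angle.

```python
import numpy as np

def lamp_array_heading(centroids):
    """Angle (radians) of the dominant line through detected lamp
    centroids in image coordinates. The right singular vector of the
    centered point matrix gives the direction of maximum spread.
    Sign is ambiguous, so interpret the angle modulo pi."""
    pts = np.asarray(centroids, float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    direction = vt[0]                     # principal direction of the lamps
    return np.arctan2(direction[1], direction[0])
```

A steering controller could then servo this angle (modulo pi) toward zero to keep the vehicle aligned with the lamp array.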