A Model-Selection Framework for Multibody Structure-and-Motion of Image Sequences
Given an image sequence of a scene consisting of multiple rigidly moving objects, multibody structure-and-motion (MSaM) is the task of segmenting the image feature tracks into the different rigid objects and computing the multiple-view geometry of each object. We present a framework for multibody structure-and-motion based on model selection. In a recover-and-select procedure, a redundant set of hypothetical scene motions is generated. Each subset of this pool of motion candidates is regarded as a possible explanation of the image feature tracks, and the most likely explanation is chosen by model selection. The framework is generic and can be used with any parametric camera model, or with a combination of different models. It can deal with sets of correspondences that change over time, and it is robust to realistic amounts of outliers. The framework is demonstrated for different camera and scene models.
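As a rough illustration of the recover-and-select idea, the sketch below generates a redundant pool of motion hypotheses from random minimal samples of point correspondences and then greedily keeps the subset that minimizes a simple model-selection score (truncated data cost plus a per-motion complexity penalty). The 2D affine motion model, the scoring constants, and all function names are illustrative assumptions, not the paper's formulation; the actual framework supports arbitrary parametric camera models.

```python
# Minimal recover-and-select sketch for multibody motion segmentation.
# Affine motions, thresholds and the greedy selection are illustrative choices.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine motion (2x3 matrix) mapping src points to dst points."""
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2)
    return A.T                                    # (2, 3)

def residuals(A, src, dst):
    return np.linalg.norm(src @ A[:, :2].T + A[:, 2] - dst, axis=1)

def recover_and_select(src, dst, n_hypotheses=200, sample_size=3,
                       inlier_sigma=2.0, lam=50.0, seed=0):
    """src, dst: (N, 2) matched feature positions in two frames."""
    rng = np.random.default_rng(seed)
    n = len(src)
    # Recover: a redundant pool of hypothetical motions from random minimal samples.
    pool = [fit_affine(src[i], dst[i])
            for i in (rng.choice(n, sample_size, replace=False)
                      for _ in range(n_hypotheses))]
    # Per-track cost under each hypothesis, truncated so outliers pay a bounded price.
    costs = np.array([np.minimum(residuals(A, src, dst), 3 * inlier_sigma) for A in pool])
    # Select: greedily add hypotheses while the model-selection score improves
    # (truncated data cost plus lam per selected motion as a complexity penalty).
    selected = []
    best_cost = np.full(n, 3 * inlier_sigma)
    best_score = best_cost.sum()
    improved = True
    while improved:
        improved = False
        for k, c in enumerate(costs):
            if k in selected:
                continue
            new_cost = np.minimum(best_cost, c)
            new_score = new_cost.sum() + lam * (len(selected) + 1)
            if new_score < best_score:
                selected.append(k)
                best_cost, best_score = new_cost, new_score
                improved = True
    labels = np.argmin(costs[selected], axis=0) if selected else np.zeros(n, int)
    return [pool[k] for k in selected], labels
```

The complexity penalty lam plays the role of the model-selection term: adding another motion is only accepted when it explains enough tracks to pay for its own cost.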
Automatic Feature-Based Stabilization of Video with Intentional Motion through a Particle Filter
Video sequences acquired by a camera mounted on a hand-held device or a mobile platform are affected by unwanted shakes and jitter. In this situation, the performance of video applications such as motion segmentation and tracking may decrease dramatically. Several digital video stabilization approaches have been proposed to overcome this problem. However, they are mainly based on motion estimation techniques that are prone to errors, which in turn degrade the stabilization performance. Moreover, these techniques only achieve successful stabilization if the intentional camera motion is smooth, since they incorrectly filter abrupt changes in the intentional motion. In this paper, a novel video stabilization technique that overcomes the aforementioned problems is presented. The motion is estimated by means of a feature-based technique that is robust to errors that could bias the estimation. The unwanted camera motion is filtered out, while the intentional motion is preserved thanks to a Particle Filter framework that is able to deal with abrupt changes in the intentional motion. The obtained results confirm the effectiveness of the proposed algorithm.
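The sketch below shows the particle-filter part of such a pipeline in isolation: it assumes the per-frame global motion has already been measured by a robust feature-based method and accumulated into a camera path, and it estimates the intentional path with a constant-velocity particle filter. The state layout, noise levels, and function name are illustrative assumptions, not the paper's exact formulation; the wide process noise is what lets the filter track abrupt intentional motion changes instead of smoothing them away.

```python
# Hedged sketch: smoothing a measured camera path with a particle filter.
import numpy as np

def particle_filter_path(observed_path, n_particles=500,
                         process_std=1.0, meas_std=8.0, seed=0):
    """observed_path: (T, 2) cumulative camera positions measured frame by frame.
    Returns the estimated intentional path and the per-frame stabilizing correction."""
    observed_path = np.asarray(observed_path, dtype=float)
    rng = np.random.default_rng(seed)
    T = len(observed_path)
    # State per particle: [x, y, vx, vy] with a constant-velocity motion model.
    particles = np.zeros((n_particles, 4))
    particles[:, :2] = observed_path[0]
    weights = np.full(n_particles, 1.0 / n_particles)
    intentional = np.zeros((T, 2))
    for t in range(T):
        # Predict: propagate the motion model; process noise lets particles follow
        # abrupt changes in the intentional motion.
        particles[:, :2] += particles[:, 2:]
        particles += rng.normal(0.0, process_std, particles.shape)
        # Update: weight particles by the likelihood of the measured camera position.
        d2 = np.sum((particles[:, :2] - observed_path[t]) ** 2, axis=1)
        weights *= np.exp(-0.5 * d2 / meas_std ** 2) + 1e-300
        weights /= weights.sum()
        intentional[t] = weights @ particles[:, :2]
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    # Stabilization shifts each frame by (intentional - observed) motion.
    return intentional, intentional - observed_path
```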
Research on a modified RANSAC and its applications to ellipse detection from a static image and motion detection from active stereo video sequences
Degree system: new ; Report number: Kou 3091 ; Degree type: Doctor (International Information and Telecommunication Studies) ; Date conferred: 2010/2/24 ; Waseda University degree record number: Shin 535
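For reference, the sketch below shows a plain RANSAC ellipse-fitting loop on edge points; the thesis proposes a modified RANSAC whose specific changes are not described in this listing, so this is only the baseline idea, with illustrative sample sizes and thresholds.

```python
# Baseline RANSAC ellipse detection sketch (not the thesis's modified variant).
import cv2
import numpy as np

def ransac_ellipse(points, n_iters=500, inlier_tol=2.0, seed=0):
    """points: (N, 2) edge-pixel coordinates; returns (best_ellipse, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_ellipse, best_inliers = None, np.zeros(len(points), bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 5, replace=False)]
        (cx, cy), (w, h), angle = cv2.fitEllipse(sample.astype(np.float32))
        if not (w > 0 and h > 0):
            continue
        # Approximate point-to-ellipse distance in the ellipse's own frame.
        theta = np.deg2rad(angle)
        c, s = np.cos(theta), np.sin(theta)
        dx, dy = points[:, 0] - cx, points[:, 1] - cy
        u, v = c * dx + s * dy, -s * dx + c * dy
        r = np.sqrt((u / (w / 2)) ** 2 + (v / (h / 2)) ** 2)
        inliers = np.abs(r - 1.0) * min(w, h) / 2 < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_ellipse, best_inliers = ((cx, cy), (w, h), angle), inliers
    # Refit once on all inliers for a tighter final estimate.
    if best_inliers.sum() >= 5:
        best_ellipse = cv2.fitEllipse(points[best_inliers].astype(np.float32))
    return best_ellipse, best_inliers
```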
Low Power Depth Estimation of Rigid Objects for Time-of-Flight Imaging
Depth sensing is useful in a variety of applications that range from augmented reality to robotics. Time-of-flight (TOF) cameras are appealing because they obtain dense depth measurements with minimal latency. However, for many battery-powered devices, the illumination source of a TOF camera is power hungry and can limit the battery life of the device. To address this issue, we present an algorithm that lowers the power for depth sensing by reducing the usage of the TOF camera and estimating depth maps using concurrently collected images. Our technique also adaptively controls the TOF camera and enables it when an accurate depth map cannot be estimated. To ensure that the overall system power for depth sensing is reduced, we design our algorithm to run on a low power embedded platform, where it outputs 640x480 depth maps at 30 frames per second. We evaluate our approach on several RGB-D datasets, where it produces depth maps with an overall mean relative error of 0.96% and reduces the usage of the TOF camera by 85%. When used with commercial TOF cameras, we estimate that our algorithm can lower the total power for depth sensing by up to 73%.
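A minimal sketch of this duty-cycling idea is shown below: the last captured depth map is propagated into the current frame with dense optical flow from the concurrent image stream, and the TOF sensor is re-fired when the propagated estimate looks unreliable. The flow-based warping, the validity test, the threshold, and the read_tof callback are all illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch of duty-cycling a TOF camera with flow-propagated depth.
import cv2
import numpy as np

def propagate_depth(prev_gray, cur_gray, prev_depth):
    """Warp the last TOF depth map into the current frame using dense optical flow
    computed on concurrent grayscale images (8-bit, single channel)."""
    # Flow from the current frame back to the previous one, so every current
    # pixel knows where to sample the old depth map (backward warping).
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_depth.astype(np.float32), map_x, map_y, cv2.INTER_NEAREST,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)

def depth_with_duty_cycled_tof(gray_frames, read_tof, min_valid=0.85):
    """gray_frames: iterable of grayscale frames; read_tof(): hypothetical callback
    that fires the TOF illumination and returns a measured depth map."""
    depth, prev_gray, tof_fires, outputs = None, None, 0, []
    for gray in gray_frames:
        if depth is None:
            depth, tof_fires = read_tof(), tof_fires + 1   # bootstrap with a real capture
        else:
            depth = propagate_depth(prev_gray, gray, depth)
            # Adaptive control: re-fire the TOF camera when too little of the depth
            # map survived propagation (holes left by occlusions and border motion).
            if np.count_nonzero(depth) / depth.size < min_valid:
                depth, tof_fires = read_tof(), tof_fires + 1
        outputs.append(depth)
        prev_gray = gray
    return outputs, tof_fires
```

The returned tof_fires count divided by the number of frames gives the effective duty cycle, which is what determines the illumination power saving.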
- …