1,476 research outputs found
Self-calibration of turntable sequences from silhouettes
This paper addresses the problem of recovering both the intrinsic and extrinsic parameters of a camera from the silhouettes of an object in a turntable sequence. Previous silhouette-based approaches have exploited correspondences induced by epipolar tangents to estimate the image invariants under turntable motion and achieved a weak calibration of the cameras. It is known that the fundamental matrix relating any two views in a turntable sequence can be expressed explicitly in terms of the image invariants, the rotation angle, and a fixed scalar. It will be shown that the imaged circular points for the turntable plane can also be formulated in terms of the same image invariants and fixed scalar. This allows the imaged circular points to be recovered directly from the estimated image invariants, and provides constraints for the estimation of the imaged absolute conic. The camera calibration matrix can thus be recovered. A robust method for estimating the fixed scalar from image triplets is introduced, and a method for recovering the rotation angles using the estimated imaged circular points and epipoles is presented. Using the estimated camera intrinsics and extrinsics, a Euclidean reconstruction can be obtained. Experimental results on real data sequences are presented, which demonstrate the high precision achieved by the proposed method. © 2009 IEEE.
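The calibration constraint the abstract relies on can be sanity-checked numerically: the imaged circular points of the turntable plane must lie on the image of the absolute conic, omega = K^-T K^-1. The sketch below verifies this for a synthetic camera; K, R, and all variable names are illustrative choices, not values or code from the paper.

```python
import numpy as np

# Synthetic intrinsics and an arbitrary rotation of the world plane z = 0.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# The circular points of the plane z = 0 are (1, +/-i, 0)^T; under the
# homography K [r1 r2 t] they map to K (r1 +/- i r2).
i_img = K @ (R[:, 0] + 1j * R[:, 1])

# Image of the absolute conic (IAC).
omega = np.linalg.inv(K).T @ np.linalg.inv(K)

# The constraint i^T omega i = 0 holds exactly in theory.
residual = i_img @ omega @ i_img
print(abs(residual) < 1e-9)  # True
```

Each recovered pair of imaged circular points contributes two real linear constraints on omega in this way, which is why estimating them from the image invariants helps pin down the calibration matrix.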
VIGAN: Missing View Imputation with Generative Adversarial Networks
In an era when big data are becoming the norm, there is less concern with the
quantity of data and more with its quality and completeness. In many
disciplines, data are collected from heterogeneous sources, resulting in
multi-view or multi-modal datasets. The missing data problem has been
challenging to address in multi-view data analysis. In particular, when certain
samples are missing an entire view of the data, this creates the missing view problem.
Classic multiple imputation or matrix completion methods are hardly effective
here, since there is no information in the affected view on which to base the
imputation for such samples. The commonly-used simple method of removing samples with a
missing view can dramatically reduce sample size, thus diminishing the
statistical power of a subsequent analysis. In this paper, we propose a novel
approach for view imputation via generative adversarial networks (GANs), which
we name VIGAN. This approach first treats each view as a separate domain and
identifies domain-to-domain mappings via a GAN using randomly-sampled data from
each view, and then employs a multi-modal denoising autoencoder (DAE) to
reconstruct the missing view from the GAN outputs based on paired data across
the views. Then, by optimizing the GAN and DAE jointly, our model integrates
knowledge of the domain mappings and view correspondences to
effectively recover the missing view. Empirical results on benchmark datasets
validate the VIGAN approach by comparing against the state of the art. The
evaluation of VIGAN in a genetic study of substance use disorders further
proves the effectiveness and usability of this approach in the life sciences.
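The two-stage idea (a cross-view mapping learned from unpaired data, then refinement from paired data) can be sketched with a deliberately simplified stand-in: below, a least-squares linear map plays the role of the GAN domain mapping, and we only verify that the coarse cross-view estimate generalizes. All names and the toy data are ours, not the VIGAN code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data from two "views": view B is a hidden linear function of
# view A, observed with noise.
W_true = rng.normal(size=(5, 3))
A = rng.normal(size=(100, 5))                       # view A: always observed
B = A @ W_true + 0.05 * rng.normal(size=(100, 3))   # view B: missing for some rows

paired, missing = slice(0, 80), slice(80, 100)

# Stage-1 stand-in for the GAN domain mapping: a least-squares map A -> B
# fitted on the rows where both views are present.
W_hat, *_ = np.linalg.lstsq(A[paired], B[paired], rcond=None)

# Impute view B where it is missing.
B_imputed = A[missing] @ W_hat

# In VIGAN, a multi-modal DAE would further denoise B_imputed and be trained
# jointly with the GAN; here we only check the coarse mapping generalizes.
mse = float(np.mean((B_imputed - B[missing]) ** 2))
print(mse < 0.01)  # True
```

The point of the joint GAN+DAE objective in the paper is precisely that a coarse domain mapping alone is not enough on real multi-modal data; the DAE refines it using the view correspondences in the paired samples.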
3D Pose Regression using Convolutional Neural Networks
3D pose estimation is a key component of many important computer vision tasks
such as autonomous navigation and 3D scene understanding. Most state-of-the-art
approaches to 3D pose estimation solve this problem as a pose-classification
problem in which the pose space is discretized into bins and a CNN classifier
is used to predict a pose bin. We argue that the 3D pose space is continuous
and propose to solve the pose estimation problem in a CNN regression framework
with a suitable representation, data augmentation and loss function that
captures the geometry of the pose space. Experiments on PASCAL3D+ show that the
proposed 3D pose regression approach achieves competitive performance compared
to the state of the art.
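A loss that captures the geometry of the continuous pose space, as the abstract advocates, is naturally measured by the geodesic distance on the rotation group SO(3), i.e. the angle of the relative rotation between prediction and ground truth. The sketch below (function names are ours, not from the paper) computes this distance:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def geodesic_distance(R1, R2):
    """Geodesic distance on SO(3): the angle of the relative rotation R1^T R2."""
    cos_angle = (np.trace(R1.T @ R2) - 1.0) / 2.0
    # Clip guards against values marginally outside [-1, 1] from round-off.
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

# A 90-degree pose error yields a loss of pi/2, regardless of binning.
print(round(geodesic_distance(rot_z(0.0), rot_z(np.pi / 2)), 4))  # 1.5708
```

Unlike a classification loss over discretized pose bins, this distance penalizes small and large angular errors proportionally, which is the property a regression formulation can exploit.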
On using gait to enhance frontal face extraction
Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. In surveillance environments, it is necessary to handle pose variation of the human head, low frame rate, and low resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3-D head motion and gait trajectory, with super-resolution analysis. We use region- and distance-based refinement of head pose estimation. We develop a direct mapping to relate the 2-D image with a 3-D model. In gait trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3-D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction process, allowing for deployment in surveillance scenarios.
Optical Flow in Mostly Rigid Scenes
The optical flow of natural scenes is a combination of the motion of the
observer and the independent motion of objects. Existing algorithms typically
focus either on recovering motion and structure under the assumption of a
purely static world, or on estimating optical flow for general unconstrained scenes. We
combine these approaches in an optical flow algorithm that estimates an
explicit segmentation of moving objects from appearance and physical
constraints. In static regions we take advantage of strong constraints to
jointly estimate the camera motion and the 3D structure of the scene over
multiple frames. This allows us to also regularize the structure instead of the
motion. Our formulation uses a Plane+Parallax framework, which works even under
small baselines, and reduces the motion estimation to a one-dimensional search
problem, resulting in more accurate estimation. In moving regions the flow is
treated as unconstrained, and computed with an existing optical flow method.
The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art
results on both the MPI-Sintel and KITTI-2015 benchmarks.
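The one-dimensional search mentioned in the abstract comes from the Plane+Parallax decomposition: once the second frame is warped by the homography of a reference plane, the residual motion of a static pixel lies on the line through that pixel and the epipole, leaving a single scalar per pixel to estimate. The sketch below illustrates this parametrization; the point, epipole, and function name are hypothetical, not the MR-Flow code.

```python
import numpy as np

def parallax_flow(p_warped, epipole, gamma):
    """Residual flow of a static pixel after homography compensation:
    a 1-D motion along the direction from the pixel to the epipole,
    scaled by a single unknown gamma (related to scene depth)."""
    direction = epipole - p_warped
    direction = direction / np.linalg.norm(direction)
    return gamma * direction

p = np.array([100.0, 50.0])   # homography-warped pixel (hypothetical)
e = np.array([160.0, 130.0])  # epipole in the reference image (hypothetical)
flow = parallax_flow(p, e, gamma=5.0)
print(np.round(flow, 3))  # [3. 4.]
```

Searching over the scalar gamma instead of a full 2-D displacement is what makes the static-region estimation both better constrained and more accurate, with moving regions falling back to an unconstrained flow method.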