OSPC: Online Sequential Photometric Calibration
Photometric calibration is essential to many computer vision applications.
One of its key benefits is enhancing the performance of Visual SLAM, especially
when it depends on a direct method for tracking, such as the standard KLT
algorithm. Another advantage is recovering sensor irradiance values from
measured intensities as a pre-processing step for vision algorithms such as
shape-from-shading. Current photometric calibration systems
rely on a joint optimization problem and encounter an ambiguity in the
estimates, which can only be resolved using ground truth information. We
propose a novel method that solves for photometric parameters using a
sequential estimation approach. Our proposed method achieves high accuracy in
estimating all parameters; furthermore, the formulations are linear and convex,
which makes the solution fast and suitable for online applications. Experiments
on a Visual Odometry system validate the proposed method and demonstrate its
advantages.
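As a toy illustration of the sequential idea described above (a simplified sketch, not the paper's formulation): if the camera response is linear, two observations of the same tracked scene point in consecutive frames differ only by the exposure ratio, which a small linear least-squares problem recovers online. The function name and setup below are ours.

```python
import numpy as np

def estimate_log_exposure_ratio(pairs):
    """pairs: (I_prev, I_curr) intensities of points tracked between two
    frames. With a linear response, I_k = e_k * L, so
    log I_curr - log I_prev = log(e_curr / e_prev) for every point; the
    least-squares solution of this 1-D linear system is the mean."""
    logs = [np.log(i2) - np.log(i1) for i1, i2 in pairs if i1 > 0 and i2 > 0]
    return float(np.mean(logs))

# Synthetic check: the second frame was exposed twice as long as the first.
rng = np.random.default_rng(0)
L = rng.uniform(0.1, 1.0, 100)            # scene radiances
I1, I2 = 1.0 * L, 2.0 * L                 # exposures e1 = 1, e2 = 2
ratio = np.exp(estimate_log_exposure_ratio(list(zip(I1, I2))))
```

Because each per-point estimate is linear in the log-exposures, such updates remain convex and cheap enough for online use, which is the property the abstract emphasizes.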
Shape-from-shading using the heat equation
This paper offers two new directions to shape-from-shading, namely the use of the heat equation to smooth the field of surface normals and the recovery of surface height using a low-dimensional embedding. Turning our attention to the first of these contributions, we pose the problem of surface normal recovery as that of solving the steady-state heat equation subject to the hard constraint that Lambert's law is satisfied. We perform our analysis on a plane perpendicular to the light source direction, where the z component of the surface normal is equal to the normalized image brightness. The x-y, or azimuthal, component of the surface normal is found by computing the gradient of a scalar field that evolves with time subject to the heat equation. We solve the heat equation for the scalar potential and, hence, recover the azimuthal component of the surface normal from the average image brightness, making use of a simple finite difference method. The second contribution is to pose the problem of recovering the surface height function as that of embedding the field of surface normals on a manifold so as to preserve the pattern of surface height differences and the lattice footprint of the surface normals. We experiment with the resulting method on a variety of real-world image data, where it produces qualitatively good reconstructed surfaces.
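The finite-difference evolution mentioned above can be sketched in a few lines. This is a generic explicit heat-equation step (grid spacing, time step, and boundary handling are our illustrative choices, not the paper's): a scalar field is diffused, and its gradient then supplies a smoothed in-plane (azimuthal) direction at each pixel.

```python
import numpy as np

def heat_step(u, dt=0.2):
    """One explicit step of u_t = laplacian(u) with a 5-point stencil,
    unit grid spacing, and periodic boundaries (dt <= 0.25 is stable)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return u + dt * lap

u = np.random.default_rng(1).standard_normal((32, 32))  # noisy scalar field
for _ in range(200):
    u = heat_step(u)
# The diffused field is smooth; its gradient gives an in-plane direction
# field analogous to the azimuthal normal component described above.
gy, gx = np.gradient(u)
```

Diffusion damps high-frequency modes fastest, which is why evolving the scalar potential under the heat equation acts as a smoothing regularizer on the recovered normals.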
New constraints on data-closeness and needle map consistency for shape-from-shading
This paper makes two contributions to the problem of needle-map recovery using shape-from-shading. First, we provide a geometric update procedure which allows the image irradiance equation to be satisfied as a hard constraint. This not only improves the data closeness of the recovered needle-map, but also removes the necessity for extensive parameter tuning. Second, we exploit the improved ease of control of the new shape-from-shading process to investigate various types of needle-map consistency constraint. The first set of constraints is based on needle-map smoothness. The second avenue of investigation is to use curvature information to impose topographic constraints. Third, we explore ways in which the needle-map is recovered so as to be consistent with the image gradient field. In each case we explore a variety of robust error measures and consistency weighting schemes that can be used to impose the desired constraints on the recovered needle-map. We provide an experimental assessment of the new shape-from-shading framework on both real-world images and synthetic images with known ground truth surface normals. The main conclusion drawn from our analysis is that the data-closeness constraint improves the efficiency of shape-from-shading and that both the topographic and gradient consistency constraints improve the fidelity of the recovered needle-map.
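To make the role of a robust error measure concrete, here is a minimal sketch with a Huber-style reweighting (our choice of kernel and threshold for illustration; the abstract does not name a specific measure): large smoothness residuals between neighbouring normals are down-weighted, so genuine surface discontinuities are not over-smoothed.

```python
import numpy as np

def huber_weight(r, k=0.1):
    """Iteratively-reweighted-least-squares weight for the Huber kernel:
    quadratic (weight 1) for |r| <= k, linear (weight k/|r|) beyond it."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / a)

# Magnitudes of normal-difference residuals between neighbouring pixels.
residuals = np.array([0.01, 0.05, 0.5, 2.0])
w = huber_weight(residuals)   # small residuals keep weight 1; outliers shrink
```

Multiplying each smoothness term by such a weight inside the consistency functional is one standard way of imposing the constraint robustly.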
3D Face Reconstruction by Learning from Synthetic Data
Fast and robust three-dimensional reconstruction of facial geometric
structure from a single image is a challenging task with numerous applications.
Here, we introduce a learning-based approach for reconstructing a
three-dimensional face from a single image. Recent face recovery methods rely
on accurate localization of key characteristic points. In contrast, the
proposed approach is based on a Convolutional-Neural-Network (CNN) which
extracts the face geometry directly from its image. Although such deep
architectures outperform other models in complex computer vision problems,
training them properly requires a large dataset of annotated examples. In the
case of three-dimensional faces, no large-volume datasets currently exist,
and acquiring such data is a tedious task. As an alternative, we
propose to generate random, yet nearly photo-realistic, facial images for which
the geometric form is known. The suggested model successfully recovers facial
shapes from real images, even for faces with extreme expressions and under
various lighting conditions.

Comment: The first two authors contributed equally to this work.
Towards recovery of complex shapes in meshes using digital images for reverse engineering applications
When an object has complex shapes, or when its outer surfaces are simply inaccessible, some of its parts may not be captured during its reverse engineering. These deficiencies in the point cloud result in a set of holes in the reconstructed mesh. This paper deals with the use of information extracted from digital images to recover missing areas of a physical object. The proposed algorithm fills in these holes by solving an optimization problem that combines two kinds of information: (1) the geometric information available on the surrounding of the holes, and (2) the information contained in an image of the real object. The constraints come from the image irradiance equation, a first-order non-linear partial differential equation that links the position of the mesh vertices to the light intensity of the image pixels. The blending conditions are satisfied by using an objective function based on a mechanical model of a bar network that simulates the curvature evolution over the mesh. The shortcomings inherent both in current hole-filling algorithms and in the resolution of the image irradiance equation are thereby overcome.
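The image irradiance constraint at the heart of the abstract above can be illustrated with the simplest shading model. This is a generic Lambertian residual (our names and setup, not the paper's code): predicted brightness is the clamped dot product of the unit surface normal with the unit light direction, and an optimizer can penalise its squared difference from the observed pixel intensity.

```python
import numpy as np

def irradiance_residual(normals, light, observed):
    """Lambertian image irradiance residual: predicted brightness is
    max(n . s, 0) per vertex; the optimizer drives residuals to zero."""
    pred = np.clip(normals @ light, 0.0, None)
    return pred - observed

light = np.array([0.0, 0.0, 1.0])               # unit light direction
normals = np.array([[0.0, 0.0, 1.0],            # facing the light -> 1.0
                    [1.0, 0.0, 0.0]])           # grazing -> 0.0
observed = np.array([1.0, 0.0])
res = irradiance_residual(normals, light, observed)
```

Because vertex positions determine the normals, this residual ties mesh geometry inside a hole to the pixel intensities of the photograph, which is exactly the coupling the abstract describes.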
Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging
Many graphics and vision problems can be expressed as non-linear least
squares optimizations of objective functions over visual data, such as images
and meshes. The mathematical descriptions of these functions are extremely
concise, but their implementation in real code is tedious, especially when
optimized for real-time performance on modern GPUs in interactive applications.
In this work, we propose a new language, Opt (available under
http://optlang.org), for writing these objective functions over image- or
graph-structured unknowns concisely and at a high level. Our compiler
automatically transforms these specifications into state-of-the-art GPU solvers
based on Gauss-Newton or Levenberg-Marquardt methods. Opt can generate
different variations of the solver, so users can easily explore tradeoffs in
numerical precision, matrix-free methods, and solver approaches. In our
results, we implement a variety of real-world graphics and vision applications.
Their energy functions are expressible in tens of lines of code, and Opt
produces highly optimized GPU solver implementations for them. These solvers
have performance
competitive with the best published hand-tuned, application-specific GPU
solvers, and orders of magnitude beyond a general-purpose auto-generated
solver.
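For readers unfamiliar with the solver family Opt targets, here is a hand-written Gauss-Newton iteration for a tiny non-linear least-squares problem (the model, its Jacobian, and all names are illustrative; Opt generates GPU code for much larger image- and graph-structured problems automatically).

```python
import numpy as np

def gauss_newton(x, y, theta, iters=20):
    """Fit y = exp(a*x) + b by Gauss-Newton: linearize the residual,
    solve the normal equations, and update the parameters."""
    for _ in range(iters):
        a, b = theta
        r = np.exp(a * x) + b - y                   # residual vector
        J = np.stack([x * np.exp(a * x),            # d r / d a
                      np.ones_like(x)], axis=1)     # d r / d b
        theta = theta - np.linalg.solve(J.T @ J, J.T @ r)
    return theta

x = np.linspace(0.0, 1.0, 50)
y = np.exp(0.5 * x) + 2.0                           # noiseless data: a=0.5, b=2
a, b = gauss_newton(x, y, np.array([0.0, 0.0]))
```

Writing the Jacobian and the normal-equations solve by hand, as above, is precisely the tedious, performance-critical work the abstract says the Opt compiler automates from a concise energy specification.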