Analysis and approximation of some Shape-from-Shading models for non-Lambertian surfaces
The reconstruction of a 3D object or a scene is a classical inverse problem
in Computer Vision. In the case of a single image this is called the
Shape-from-Shading (SfS) problem and it is known to be ill-posed even in a
simplified version like the vertical light source case. A large number of works
deal with the orthographic SfS problem based on the Lambertian reflectance
model, the most common and simplest model, which leads to an eikonal-type
equation when the light source is on the vertical axis. In this paper we
study non-Lambertian models, since they are more realistic and suitable
whenever one has to deal with different kinds of surfaces, rough or specular. We
will present a unified mathematical formulation of some popular orthographic
non-Lambertian models, considering vertical and oblique light directions as
well as different viewer positions. These models lead to more complex
stationary nonlinear partial differential equations of Hamilton-Jacobi type
which can be regarded as the generalization of the classical eikonal equation
corresponding to the Lambertian case. However, all the equations corresponding
to the models considered here (Oren-Nayar and Phong) have a similar structure,
so we can look for weak solutions to this class within the viscosity solution
framework. Via this unified approach, we are able to develop a semi-Lagrangian
approximation scheme for the Oren-Nayar and the Phong model and to prove a
general convergence result. Numerical simulations on synthetic and real images
illustrate the effectiveness of this approach and the main features of the
scheme, also comparing the results with those previously reported in the
literature.
Comment: Accepted version to Journal of Mathematical Imaging and Vision, 57 pages
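The semi-Lagrangian idea behind such schemes can be illustrated on the simplest case the abstract mentions: the Lambertian model with vertical light, where SfS reduces to the eikonal equation |∇u| = f (with f derived from the image brightness). The sketch below is not the paper's scheme for the Oren-Nayar or Phong models; it is a minimal fixed-point solver on a square grid, with homogeneous boundary data and a discrete set of upwind directions, given purely as an illustration of the semi-Lagrangian update.

```python
import numpy as np

def sl_eikonal(f, h, n_iter=500, tol=1e-8):
    """Solve |grad u| = f on a square grid with u = 0 on the boundary,
    via the semi-Lagrangian fixed-point update: at each node, take the
    minimum over a set of unit directions of the bilinearly interpolated
    value one grid step upwind, plus h * f (Gauss-Seidel sweeps)."""
    N = f.shape[0]
    u = np.zeros((N, N))
    # 8 unit directions (axis-aligned and diagonal), in grid-index units
    angles = np.arange(8) * np.pi / 4
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

    def interp(qi, qj):
        # bilinear interpolation of u at fractional grid position (qi, qj)
        i0 = int(np.clip(np.floor(qi), 0, N - 2))
        j0 = int(np.clip(np.floor(qj), 0, N - 2))
        ti, tj = qi - i0, qj - j0
        return ((1 - ti) * (1 - tj) * u[i0, j0]
                + (1 - ti) * tj * u[i0, j0 + 1]
                + ti * (1 - tj) * u[i0 + 1, j0]
                + ti * tj * u[i0 + 1, j0 + 1])

    for _ in range(n_iter):
        change = 0.0
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                new = min(interp(i - d[0], j - d[1]) for d in dirs) + h * f[i, j]
                change = max(change, abs(new - u[i, j]))
                u[i, j] = new
        if change < tol:
            break
    return u
```

With f ≡ 1 the fixed point is the distance to the boundary; for the vertical-light Lambertian SfS problem one would feed in f = sqrt(1/I² − 1) computed from the image intensity I, with suitable boundary conditions at singular points.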
Photometric stereo for strong specular highlights
Photometric stereo (PS) is a fundamental technique in computer vision known
to produce 3-D shape with high accuracy. The PS setting is defined by several
input images of a static scene taken from one and the same camera
position under varying illumination. The vast majority of studies of this
3-D reconstruction method assume orthographic projection for the camera model.
In addition, they mainly consider the Lambertian reflectance model as the way
that light scatters at surfaces. Providing reliable PS results for real-world
objects therefore remains a challenging task. We address 3-D reconstruction
by PS using a more realistic set of assumptions combining for the first time
the complete Blinn-Phong reflectance model and perspective projection. To this
end, we will compare two different methods of incorporating the perspective
projection into our model. Experiments are performed on both synthetic and real
world images. Note that our real-world experiments do not benefit from
laboratory conditions. The results show the high potential of our method even
for complex real-world applications such as medical endoscopy, where images may
include high amounts of specular highlights.
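For contrast with the Blinn-Phong model used in this work, the classical Lambertian baseline the abstract criticizes can be written in a few lines: with m ≥ 3 images under known directional lights, each pixel gives a small linear system whose least-squares solution is the albedo-scaled normal. This is a generic sketch of that baseline (assuming orthographic projection, no shadows or highlights), not the paper's Blinn-Phong method.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Per-pixel least-squares recovery of albedo and unit normals,
    assuming a Lambertian surface: I_k = albedo * max(0, n . l_k).
    images: (m, H, W) intensities; lights: (m, 3) unit light directions."""
    m, H, W = images.shape
    I = images.reshape(m, -1)                        # pixels as columns
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # G = albedo * normal, (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)          # normalize to unit length
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)
```

On specular materials this baseline fails exactly where the highlight sits, since the Lambertian model cannot explain the extra reflected intensity; that is the gap the Blinn-Phong formulation in this paper targets.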
Geometry-Aware Network for Non-Rigid Shape Prediction from a Single View
We propose a method for predicting the 3D shape of a deformable surface from
a single view. By contrast with previous approaches, we do not need a
pre-registered template of the surface, and our method is robust to the lack of
texture and partial occlusions. At the core of our approach is a
geometry-aware deep architecture that tackles the problem as usually done in
analytic solutions: first perform 2D detection of the mesh and then estimate a
3D shape that is geometrically consistent with the image. We train this
architecture in an end-to-end manner using a large dataset of synthetic
renderings of shapes under different levels of deformation, material
properties, textures and lighting conditions. We evaluate our approach on a
test split of this dataset and on available real benchmarks, consistently
improving on state-of-the-art solutions with a significantly lower
computational time.
Comment: Accepted at CVPR 201
A framework for digital sunken relief generation based on 3D geometric models
Sunken relief is a special art form of sculpture whereby the depicted shapes are sunk into a given surface. It is traditionally created by laboriously carving materials such as stone. Sunken reliefs often utilize engraved lines or strokes to strengthen the impression of a 3D presence and to highlight features which would otherwise remain unrevealed. In other types of relief, smooth surfaces and their shadows convey such information in a coherent manner. Existing methods for relief generation focus on forming a smooth surface with a shallow depth which conveys the presence of 3D figures. Unfortunately, such methods do not serve the art form of sunken relief, as they omit the feature lines. We propose a framework to produce sunken reliefs from a known 3D geometry, which transforms the 3D objects into three layers of input to incorporate the contour lines seamlessly with the smooth surfaces. The three input layers take advantage of geometric information and visual cues to assist the relief generation. This framework adapts existing techniques in line drawing and relief generation, and combines them organically for this particular purpose.