
    Photometric stereo for strong specular highlights

    Photometric stereo (PS) is a fundamental technique in computer vision known to produce 3-D shape with high accuracy. In the PS setting, several input images of a static scene are taken from one and the same camera position but under varying illumination. The vast majority of studies of this 3-D reconstruction method assume an orthographic camera model and, in addition, mainly consider the Lambertian reflectance model to describe how light scatters at surfaces. Providing reliable PS results for real-world objects therefore remains a challenging task. We address 3-D reconstruction by PS using a more realistic set of assumptions, combining for the first time the complete Blinn-Phong reflectance model and perspective projection. To this end, we compare two different methods of incorporating the perspective projection into our model. Experiments are performed on both synthetic and real-world images. Note that our real-world experiments do not benefit from laboratory conditions. The results show the high potential of our method even for complex real-world applications such as medical endoscopy images, which may include high amounts of specular highlights.
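For orientation, the classical Lambertian, orthographic variant of photometric stereo (the baseline the abstract contrasts against, not the paper's Blinn-Phong/perspective method) can be sketched as a per-pixel least-squares problem; the light directions and intensities below are illustrative placeholders.

```python
import numpy as np

# Classical Lambertian photometric stereo for a single pixel.
# Each row of L is a known light direction; I holds the observed
# intensity of the pixel in the corresponding image.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866]])
I = np.array([0.9, 0.7, 0.6])

# Lambertian model: I = L @ (rho * n). Solve for g = rho * n by
# least squares, then split into albedo rho and unit normal n.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
rho = np.linalg.norm(g)   # albedo
n = g / rho               # unit surface normal
```

With three or more non-coplanar lights the system is (over-)determined, which is why PS recovers normals so accurately under the Lambertian assumption; strong specular highlights violate exactly this model, motivating the Blinn-Phong treatment above.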

    Analysis and approximation of some Shape-from-Shading models for non-Lambertian surfaces

    The reconstruction of a 3D object or a scene is a classical inverse problem in Computer Vision. In the case of a single image this is called the Shape-from-Shading (SfS) problem, and it is known to be ill-posed even in simplified versions such as the vertical light source case. A huge number of works deal with the orthographic SfS problem based on the Lambertian reflectance model, the most common and simplest model, which leads to an eikonal-type equation when the light source is on the vertical axis. In this paper we study non-Lambertian models, since they are more realistic and suitable whenever one has to deal with different kinds of surfaces, rough or specular. We present a unified mathematical formulation of some popular orthographic non-Lambertian models, considering vertical and oblique light directions as well as different viewer positions. These models lead to more complex stationary nonlinear partial differential equations of Hamilton-Jacobi type, which can be regarded as generalizations of the classical eikonal equation corresponding to the Lambertian case. However, all the equations corresponding to the models considered here (Oren-Nayar and Phong) have a similar structure, so we can look for weak solutions to this class in the viscosity solution framework. Via this unified approach, we are able to develop a semi-Lagrangian approximation scheme for the Oren-Nayar and the Phong models and to prove a general convergence result. Numerical simulations on synthetic and real images illustrate the effectiveness of this approach and the main features of the scheme, also comparing the results with previous results in the literature.
    Comment: Accepted version, Journal of Mathematical Imaging and Vision, 57 pages
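For reference, the Lambertian vertical-light case mentioned above reduces to the classical eikonal equation (standard textbook form, with u the surface height and I the normalized image irradiance):

```latex
% Lambertian image irradiance under a vertical light source,
% and the resulting eikonal equation for the surface height u
I(x) = \frac{1}{\sqrt{1 + |\nabla u(x)|^{2}}}
\quad\Longrightarrow\quad
|\nabla u(x)| = \sqrt{\frac{1}{I(x)^{2}} - 1}
```

The Oren-Nayar and Phong models replace the simple cosine shading on the left with more involved reflectance terms, which is what produces the more general Hamilton-Jacobi equations discussed in the abstract.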

    Unsupervised Deep Single-Image Intrinsic Decomposition using Illumination-Varying Image Sequences

    Machine learning based Single Image Intrinsic Decomposition (SIID) methods decompose a captured scene into its albedo and shading images using knowledge of a large set of known, realistic ground-truth decompositions. Collecting and annotating such a dataset is an approach that cannot scale to sufficient variety and realism. We free ourselves from this limitation by training on unannotated images. Our method leverages the observation that two images of the same scene under different lighting provide useful information on their intrinsic properties: by definition, albedo is invariant to lighting conditions, and cross-combining the estimated albedo of a first image with the estimated shading of a second should reconstruct the second input image. We transcribe this relationship into a siamese training scheme for a deep convolutional neural network that decomposes a single image into albedo and shading. The siamese setting allows us to introduce a new loss function including such cross-combinations and to train solely on (time-lapse) images, discarding the need for any ground-truth annotations. As a result, our method i) takes advantage of the time-varying information of image sequences in the (pre-computed) training step, ii) does not require ground-truth data to train on, and iii) can decompose single images of unseen scenes at runtime. To demonstrate and evaluate our work, we additionally propose a new rendered dataset containing illumination-varying scenes and a set of quantitative metrics for evaluating SIID algorithms. Despite its unsupervised nature, our results compete with state-of-the-art methods, including supervised and non-data-driven methods.
    Comment: To appear in Pacific Graphics 201
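The cross-combination constraint at the heart of the siamese loss can be sketched in a few lines, assuming the usual multiplicative image model I = A * S (the variable names and toy arrays are our illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Two images of the same scene under different lighting share one
# albedo A but have different shadings S1, S2.
rng = np.random.default_rng(0)
A = rng.uniform(0.2, 1.0, (4, 4))          # shared albedo
S1, S2 = rng.uniform(0.2, 1.0, (2, 4, 4))  # two shadings
I1, I2 = A * S1, A * S2                    # the two observed images

# Suppose a network predicted (A1_hat, S1_hat) from I1 and
# (A2_hat, S2_hat) from I2; here we plug in the true values.
A1_hat, S2_hat = A, S2

# Cross-combination loss: albedo estimated from image 1 times
# shading estimated from image 2 should reconstruct image 2.
loss = np.mean((A1_hat * S2_hat - I2) ** 2)
```

Because albedo is lighting-invariant, the loss is zero exactly when both decompositions are consistent, which is what lets the network train without ground-truth annotations.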

    Shading with Painterly Filtered Layers: A Process to Obtain Painterly Portraits

    In this thesis, I study how color data from different styles of paintings can be extracted from photography, with the end result maintaining the artistic integrity of the art style while having the look and feel of skin. My inspiration for this work came from the impasto-style portraitures of painters such as Rembrandt and Greg Cartmell. I analyzed and studied the important visual characteristics of both Rembrandt’s and Cartmell’s styles of painting. These include how the artist develops shadow and shading, creates the illusion of subsurface scattering, and applies color to the canvas, which I use as references to help develop the final renders in computer graphics. I also examined how color information can be extracted from portrait photography in order to gather accurate dark, medium, and light skin shades. Based on this analysis, I have developed a process for creating portrait paintings from 3D facial models. My process consists of four stages: (1) modeling a 3D portrait of the subject, (2) data collection by photographing the subject, (3) Barycentric shader development using photographs, and (4) compositing with filtered layers. My contributions have been in stages (3) and (4), as follows: development of an impasto-style Barycentric shader that extracts color information from gathered photographic images and can produce realistic-looking skin renders; and development of a compositing technique that involves filtering layers of images corresponding to different effects such as diffuse, specular, and ambient. To demonstrate proof of concept, I have created a few animations of the impasto-style portrait painting for a single subject. For these animations, I have also sculpted a high-polygon-count 3D model of the torso and head of my subject. Using my shading and compositing techniques, I have created rigid-body animations that demonstrate the power of my techniques to obtain impasto-style portraiture during animation under different lighting conditions.
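The core idea behind a barycentric shader, blending a small set of extracted shades by convex (barycentric) weights, can be sketched as follows; the RGB values and weights are illustrative placeholders, not colors from the thesis:

```python
import numpy as np

# Three skin shades extracted from portrait photographs
# (placeholder RGB values in [0, 1]).
dark   = np.array([0.25, 0.15, 0.12])
medium = np.array([0.65, 0.45, 0.38])
light  = np.array([0.92, 0.78, 0.70])

def barycentric_mix(w_dark, w_medium, w_light):
    """Blend the three shades; weights must sum to 1 (barycentric)."""
    w = np.array([w_dark, w_medium, w_light])
    assert abs(w.sum() - 1.0) < 1e-9
    return w[0] * dark + w[1] * medium + w[2] * light

# A mid-tone pixel leaning toward the medium shade.
color = barycentric_mix(0.2, 0.5, 0.3)
```

Because the weights sum to one, every output lies inside the triangle spanned by the three reference shades, which keeps the blended skin tones within the palette sampled from the photographs.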

    CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering

    Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire. We explore the use of synthetic data for training CNN-based intrinsic image decomposition models and then apply these learned models to real-world images. To that end, we present CGIntrinsics, a new, large-scale dataset of physically-based rendered images of scenes with full ground truth decompositions. The rendering process we use is carefully designed to yield high-quality, realistic images, which we find to be crucial for this problem domain. We also propose a new end-to-end training method that learns better decompositions by leveraging CGIntrinsics and, optionally, IIW and SAW, two recent datasets of sparse annotations on real-world images. Surprisingly, we find that a decomposition network trained solely on our synthetic data outperforms the state of the art on both IIW and SAW, and performance improves even further when IIW and SAW data are added during training. Our work demonstrates the surprising effectiveness of carefully rendered synthetic data for the intrinsic images task.
    Comment: Published in ECCV, 201

    Innovative Techniques for Digitizing and Restoring Deteriorated Historical Documents

    Recent large-scale document digitization initiatives have created new modes of access to modern library collections through the development of new hardware and software technologies. Most commonly, these digitization projects focus on accurately scanning bound texts, some reaching an efficiency of more than one million volumes per year. While vast digital collections are changing the way users access texts, current scanning paradigms cannot handle many non-standard materials. Documentation forms such as manuscripts, scrolls, codices, deteriorated film, epigraphy, and rock art all hold a wealth of human knowledge in physical forms not accessible by standard book scanning technologies. This great omission motivates the development of new technology, presented in this thesis, that is not only effective for deteriorated bound works, damaged manuscripts, and disintegrating photonegatives but is also easily utilized by non-technical staff. First, a novel point light source calibration technique is presented that can be performed by library staff. Then, a photometric correction technique, which uses known illumination and surface properties to remove shading distortions in deteriorated document images, can be automatically applied. To complete the restoration process, a geometric correction is applied. Also unique to this work is the development of an image-based uncalibrated document scanner that utilizes the transmissivity of document substrates. This scanner extracts intrinsic document color information from one or both sides of a document; simultaneously, the document shape is estimated to obtain distortion information. Lastly, this thesis provides a restoration framework for damaged photographic negatives that corrects photometric and geometric distortions. Current restoration techniques for the discussed form of negatives require physical manipulation of the photograph. The novel acquisition and restoration system presented here provides the first known solution for digitizing and restoring deteriorated photographic negatives without damaging the original negative in any way. This thesis develops new methods of document scanning and restoration suitable for wide-scale deployment. By creating easy-to-access technologies, library staff can implement their own scanning initiatives and large-scale scanning projects can expand their current document sets.
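The photometric-correction step described above amounts to dividing the estimated shading out of the captured image, in the style of a flat-field correction; the arrays below are a synthetic sketch under that simple multiplicative assumption, not the thesis's actual pipeline:

```python
import numpy as np

# Synthetic 2x2 example: the camera records reflectance (ink/paper)
# modulated by shading from a known point light source.
reflectance = np.array([[0.8, 0.2],
                        [0.5, 0.9]])   # "true" document appearance
shading = np.array([[1.0, 0.7],
                    [0.6, 0.9]])       # illumination falloff across the page
captured = reflectance * shading       # what the camera sees

# Photometric correction: divide out the estimated shading,
# guarding against division by (near-)zero shading values.
eps = 1e-6
corrected = captured / np.maximum(shading, eps)
```

With a calibrated point light source the shading term can be predicted from the light position and the estimated document shape, which is why the calibration and shape-estimation steps precede this correction.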

    The Perception of Lightness in 3-D Curved Objects

    Lightness constancy requires the visual system to somehow "parse" the input scene into illumination and reflectance components. Experiments on the perception of lightness for 3-D curved objects show that human observers are able to perform such a decomposition for some scenes but not for others. Lightness constancy was quite good when a rich local gray-level context was provided. Deviations occurred when both illumination and reflectance changed along the surface of the objects. Does the perception of a 3-D surface and illuminant layout help calibrate lightness judgements? Our results showed a small but consistent improvement for lightness matches on ellipsoid shapes compared to flat rectangle shapes under similar illumination conditions. Illumination change over 3-D forms is therefore taken into account in lightness perception.
    Funding: COPPE/UFRJ, Brazil; Air Force Office of Scientific Research (F49620-92-J-0334); Office of Naval Research (N00014-J-4100, N00014-94-1-0597)