
    An improved photometric stereo through distance estimation and light vector optimization from diffused maxima region

    Although photometric stereo offers an attractive technique for acquiring 3D data using low-cost equipment, inherent limitations in the methodology have limited its practical application, particularly in measurement or metrology tasks. Here we address this issue. Traditional photometric stereo assumes that the lighting direction is the same at every pixel, which is not usually the case in real applications, especially when the size of the object being observed is comparable to the working distance. Such imperfections in the illumination make the subsequent reconstruction procedures used to obtain the 3D shape of the scene prone to low-frequency geometric distortion and systematic error (bias). In addition, the 3D reconstruction of the object yields a geometric shape of unknown scale. To overcome these problems, a novel method of estimating the distance of the object from the camera is developed, which employs the photometric stereo images themselves without any additional imaging modality. The method first identifies the Lambertian diffused maxima region to calculate the object's distance from the camera, from which the corrected per-pixel light vectors can be derived and the absolute dimensions of the object can subsequently be estimated. We also propose a new calibration process that allows a dynamic (as an object moves in the field of view) calculation of light vectors for each pixel with little additional computational cost. Experiments performed on synthetic as well as real data demonstrate that the proposed approach offers improved performance, achieving a reduction of up to 45% in the estimated surface normal error and of up to 6 mm in the mean height error of the reconstructed surface. In addition, when compared to traditional photometric stereo, the proposed method reduces the mean angular and height errors so that they are low, constant and independent of the position of the object within a normal working range.
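
    As an illustrative sketch only (not the authors' code), the correction described above amounts to replacing the single global light matrix of classical photometric stereo with a per-pixel light matrix built from the recovered source positions and the object distance. The function below assumes NumPy, Lambertian reflectance, and hypothetical inputs (estimated 3D source positions and back-projected pixel positions).

    import numpy as np

    def per_pixel_photometric_stereo(images, light_positions, pixel_points):
        """
        images:          (K, H, W) intensity images under K point light sources
        light_positions: (K, 3) estimated 3D source positions (camera frame)
        pixel_points:    (H, W, 3) 3D point of each pixel, back-projected using
                         the object distance recovered from the diffuse-maxima region
        returns:         (H, W, 3) unit surface normals
        """
        K, H, W = images.shape
        normals = np.zeros((H, W, 3))
        for y in range(H):
            for x in range(W):
                # Per-pixel light directions: from the surface point towards each source.
                L = light_positions - pixel_points[y, x]            # (K, 3)
                L = L / np.linalg.norm(L, axis=1, keepdims=True)
                I = images[:, y, x]                                 # (K,)
                # Lambertian model I = L @ (rho * n), solved in least squares per pixel.
                g, *_ = np.linalg.lstsq(L, I, rcond=None)
                norm = np.linalg.norm(g)
                if norm > 1e-8:
                    normals[y, x] = g / norm
        return normals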

    RGBDTAM: A Cost-Effective and Accurate RGB-D Tracking and Mapping System

    Simultaneous localization and mapping with RGB-D cameras has been a fertile research topic over the last decade, due to the suitability of such sensors for indoor robotics. In this paper we propose a direct RGB-D SLAM algorithm with state-of-the-art accuracy and robustness at a low cost. Our experiments on the TUM RGB-D dataset [34] show better accuracy and robustness, in real time on a CPU, than direct RGB-D SLAM systems that make use of a GPU. Our approach has two key ingredients. Firstly, the combination of a semi-dense photometric error and a dense geometric error for pose tracking (see Figure 1), which we demonstrate to be the most accurate alternative. Secondly, a model of the multi-view constraints and their errors in the mapping and tracking threads, which adds extra information over other approaches. We release an open-source implementation of our approach. The reader is referred to a video of our results for a more illustrative visualization of its performance.
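
    The combination of photometric and geometric error can be illustrated with a toy cost function (a sketch under assumed inputs, not the RGBDTAM implementation): a semi-dense photometric term evaluated only at high-gradient pixels, plus a dense geometric term comparing predicted and measured depth, both driven by a hypothetical warp function that applies a candidate camera pose.

    import numpy as np

    def combined_tracking_cost(gray_ref, depth_ref, gray_cur, depth_cur,
                               warp, lambda_geo=0.5, grad_thresh=20.0):
        """
        Toy tracking cost: semi-dense photometric term (high-gradient pixels only)
        plus dense geometric term (depth consistency).
        warp(u, v, d) -> (u2, v2, d2) maps a reference pixel with depth d into the
        current frame under the candidate camera pose (assumed given).
        """
        H, W = gray_ref.shape
        gy, gx = np.gradient(gray_ref.astype(np.float64))
        grad_mag = np.hypot(gx, gy)

        cost = 0.0
        for v in range(H):
            for u in range(W):
                d = depth_ref[v, u]
                if d <= 0:
                    continue
                u2, v2, d2 = warp(u, v, d)
                ui, vi = int(round(u2)), int(round(v2))
                if not (0 <= ui < W and 0 <= vi < H):
                    continue
                # Dense geometric term: predicted vs. measured depth in the current frame.
                if depth_cur[vi, ui] > 0:
                    cost += lambda_geo * (d2 - depth_cur[vi, ui]) ** 2
                # Semi-dense photometric term: only pixels with strong image gradient.
                if grad_mag[v, u] > grad_thresh:
                    cost += (float(gray_ref[v, u]) - float(gray_cur[vi, ui])) ** 2
        return cost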

    Overcoming shadows in 3-source photometric stereo

    Light occlusions are one of the most significant difficulties for photometric stereo methods. When three or more images are available without occlusion, the local surface orientation is overdetermined, so the shape can be computed and the shadowed pixels discarded. In this paper, we look at the challenging case when only two images are available without occlusion, leading to a one-degree-of-freedom ambiguity per pixel in the local orientation. We show that, in the presence of noise, integrability alone cannot resolve this ambiguity and reconstruct the geometry in the shadowed regions. As the problem is ill-posed in the presence of noise, we describe two regularization schemes that improve the numerical performance of the algorithm while preserving the data. Finally, the paper describes how this theory applies in the framework of color photometric stereo, where one is restricted to only three images and light occlusions are common. Experiments on synthetic and real image sequences are presented.
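
    A minimal sketch of the overdetermined case mentioned above (assuming NumPy, calibrated light directions and intensities normalized to [0, 1]): shadowed measurements are detected by thresholding and dropped, and the normal is solved from the remaining lights whenever at least three survive; pixels with fewer valid lights retain the one-degree-of-freedom ambiguity and would need the regularized treatment the paper develops.

    import numpy as np

    def normals_with_shadow_rejection(images, L, shadow_thresh=0.05):
        """
        images: (K, H, W) images under K >= 3 calibrated lights, intensities in [0, 1]
        L:      (K, 3) calibrated light directions
        Measurements below shadow_thresh are treated as shadowed and discarded.
        Pixels with fewer than three valid lights are left as zero vectors.
        """
        K, H, W = images.shape
        normals = np.zeros((H, W, 3))
        for y in range(H):
            for x in range(W):
                I = images[:, y, x]
                valid = I > shadow_thresh          # drop shadowed observations
                if valid.sum() >= 3:
                    # Overdetermined Lambertian solve using the unshadowed lights only.
                    g, *_ = np.linalg.lstsq(L[valid], I[valid], rcond=None)
                    n = np.linalg.norm(g)
                    if n > 1e-8:
                        normals[y, x] = g / n
        return normals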

    Innovative optical non-contact measurement of respiratory function using photometric stereo

    Pulmonary function testing is very common and widely used in today's clinical environment for testing lung function. The contact-based nature of a spirometer can cause breathing awareness that alters the breathing pattern, affects the amount of air inhaled and exhaled, and has hygiene implications. Spirometry also requires a high degree of compliance from the patient, as they have to breathe through a hand-held mouthpiece. To solve these issues, a non-contact computer-vision-based system was developed for pulmonary function testing. It employs an improved photometric stereo method developed to recover local 3D surface orientation and thereby calculate breathing volumes. Although photometric stereo offers an attractive technique for acquiring 3D data using low-cost equipment, inherent limitations in the methodology have limited its practical application, particularly in measurement or metrology tasks. Traditional photometric stereo assumes that the lighting direction is the same at every pixel, which is not usually the case in real applications, especially when the size of the object being observed is comparable to the working distance. Such imperfections in the illumination make the subsequent reconstruction procedures used to obtain the 3D shape of the scene prone to low-frequency geometric distortion and systematic error (bias). In addition, the 3D reconstruction of the object yields a geometric shape of unknown scale. To overcome these problems, a novel method of estimating the distance of the object from the camera was developed, which employs the photometric stereo images themselves without any additional imaging modality. The method first identifies the Lambertian diffused maxima regions to calculate the object's distance from the camera, from which the corrected per-pixel light vectors are derived and the absolute dimensions of the object can subsequently be estimated. We also propose a new calibration process that allows a dynamic (as an object moves in the field of view) calculation of light vectors for each pixel with little additional computational cost. Experiments performed on synthetic as well as real data demonstrate that the proposed approach offers improved performance, achieving a reduction of up to 45% in the estimated surface normal error and of up to 6 mm in the mean height error of the reconstructed surface. In addition, compared with traditional photometric stereo, the proposed method reduces the mean angular and height errors so that they are low, constant and independent of the position of the object within a normal working range. A high (0.98) correlation between breathing volumes calculated from photometric stereo and spirometer data was observed. The breathing volume is converted to an absolute amount of air using the distance information obtained from the Lambertian diffused maxima region. The unique and novel feature of this system is that it views the patient from both front and back and creates a 3D structure of the whole torso. By observing the 3D structure of the torso over time, the amount of air inhaled and exhaled can be estimated.
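
    A simplified sketch of how a breathing-volume signal could be derived from the reconstructed torso surfaces (assumptions: NumPy, height maps already metrically scaled via the recovered object distance, and a hypothetical chest/abdomen region mask); it is not the clinical pipeline itself.

    import numpy as np

    def breathing_volume(height_maps, roi_mask, pixel_area_mm2):
        """
        height_maps:    (T, H, W) reconstructed torso height maps over time, in mm
        roi_mask:       (H, W) boolean mask of the chest/abdomen region
        pixel_area_mm2: physical area covered by one pixel, derived from the
                        object distance (assumed known from the diffuse-maxima step)
        Returns a per-frame volume signal in litres relative to the first frame,
        i.e. the quantity that would be correlated with the spirometer trace.
        """
        baseline = height_maps[0]
        volumes_l = []
        for t in range(height_maps.shape[0]):
            dh = (height_maps[t] - baseline)[roi_mask]       # height change in ROI, mm
            vol_mm3 = dh.sum() * pixel_area_mm2              # integrate over the ROI
            volumes_l.append(vol_mm3 / 1e6)                  # mm^3 -> litres
        return np.array(volumes_l)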

    A Closed-Form, Consistent and Robust Solution to Uncalibrated Photometric Stereo Via Local Diffuse Reflectance Maxima

    Images of an object under different illumination are known to provide strong cues about the object's surface. A mathematical formalization of how to recover the normal map of such a surface leads to the so-called uncalibrated photometric stereo problem. In its simplest instance, this problem can be reduced to the task of identifying only three parameters: the so-called generalized bas-relief (GBR) ambiguity. The challenge is to find additional general assumptions about the object that identify these parameters uniquely. Current approaches are not consistent, i.e., they provide different solutions when run multiple times on the same data. To address this limitation, we propose exploiting local diffuse reflectance (LDR) maxima, i.e., points in the scene where the normal vector is parallel to the illumination direction (see Fig. 1). We demonstrate several noteworthy properties of these maxima: a closed-form solution, computational efficiency and GBR consistency. An LDR maximum yields a simple closed-form solution corresponding to a semi-circle in the GBR parameter space (see Fig. 2); because as few as two diffuse maxima in different images identify a unique solution, the GBR parameters can be identified very efficiently; finally, the algorithm is consistent as it always returns the same solution given the same data. Our algorithm is also remarkably robust: it can obtain an accurate estimate of the GBR parameters even with extremely high levels of outliers in the detected maxima (up to 80% of the observations). The method is validated on real data and achieves state-of-the-art results.
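
    For illustration, the ambiguity being resolved can be written down directly (a sketch assuming the standard GBR parameterization with parameters mu, nu and lambda): the transform below changes the scaled normals and the lights but leaves every Lambertian image unchanged, which is why image data alone cannot fix the three parameters and extra constraints such as LDR maxima are needed.

    import numpy as np

    def gbr_matrix(mu, nu, lam):
        """Generalized bas-relief transform G with parameters (mu, nu, lambda)."""
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [mu,  nu,  lam]])

    def apply_gbr(normals, lights, mu, nu, lam):
        """
        GBR ambiguity: scaled normals b -> G^{-T} b and lights s -> G s leave every
        Lambertian intensity b . s unchanged, since b^T G^{-1} G s = b^T s.
        normals: (N, 3) albedo-scaled normals (rows b^T); lights: (K, 3) (rows s^T).
        """
        G = gbr_matrix(mu, nu, lam)
        new_normals = normals @ np.linalg.inv(G)   # rows b^T G^{-1} = (G^{-T} b)^T
        new_lights = lights @ G.T                  # rows (G s)^T
        return new_normals, new_lights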

    Innovative Techniques for Digitizing and Restoring Deteriorated Historical Documents

    Recent large-scale document digitization initiatives have created new modes of access to modern library collections through the development of new hardware and software technologies. Most commonly, these digitization projects focus on accurately scanning bound texts, some reaching an efficiency of more than one million volumes per year. While vast digital collections are changing the way users access texts, current scanning paradigms cannot handle many non-standard materials. Documentation forms such as manuscripts, scrolls, codices, deteriorated film, epigraphy, and rock art all hold a wealth of human knowledge in physical forms not accessible by standard book-scanning technologies. This great omission motivates the development of the new technology presented by this thesis, which is not only effective for deteriorated bound works, damaged manuscripts, and disintegrating photonegatives but also easily used by non-technical staff. First, a novel point light source calibration technique is presented that can be performed by library staff. Then, a photometric correction technique, which uses known illumination and surface properties to remove shading distortions in deteriorated document images, can be applied automatically. To complete the restoration process, a geometric correction is applied. Also unique to this work is the development of an image-based uncalibrated document scanner that exploits the transmissivity of document substrates. This scanner extracts intrinsic document color information from one or both sides of a document. Simultaneously, the document shape is estimated to obtain distortion information. Lastly, this thesis provides a restoration framework for damaged photographic negatives that corrects photometric and geometric distortions. Current restoration techniques for such negatives require physical manipulation of the photograph. The novel acquisition and restoration system presented here provides the first known solution to digitize and restore deteriorated photographic negatives without damaging the original negative in any way. This thesis develops new methods of document scanning and restoration suitable for wide-scale deployment. By creating easy-to-access technologies, library staff can implement their own scanning initiatives and large-scale scanning projects can expand their current document sets.
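
    As an illustration of the photometric correction step (a sketch under assumed inputs, not the thesis implementation): with a calibrated light direction and an estimated per-pixel surface normal of the warped page, the Lambertian shading can be divided out to approximate the intrinsic, flat-looking document appearance.

    import numpy as np

    def remove_shading(image, normals, light_dir, eps=1e-3):
        """
        Toy photometric correction for a deteriorated or warped document.
        image:     (H, W) or (H, W, 3) observed image
        normals:   (H, W, 3) unit surface normals of the page
        light_dir: (3,) unit light direction from calibration
        The Lambertian shading n . l is clamped away from zero and divided out.
        """
        shading = np.clip(normals @ light_dir, eps, None)   # (H, W)
        if image.ndim == 3:
            shading = shading[..., None]                     # broadcast over colour channels
        return image / shading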

    Vision technology/algorithms for space robotics applications

    The thrust toward automation and robotics for space applications has been proposed for increased productivity, improved reliability, increased flexibility, and higher safety, as well as for automating time-consuming tasks, increasing the productivity and performance of crew-accomplished tasks, and performing tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multiple sensors, ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.