3,953 research outputs found

    Recovering Intrinsic Images from a Single Image

    We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each image derivative is classified as being caused by shading or by a change in the surface's reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We also show results on real images.
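
    The color cue can be illustrated with a short sketch: under pure shading, all RGB channels scale together, so a log-image derivative with no chromaticity change is attributed to shading, while channel disagreement signals a reflectance change. This is a minimal illustration, not the authors' implementation; the threshold `tau`, the function name, and the omission of the gray-scale pattern classifier and the Generalized Belief Propagation step are all simplifications of mine.

```python
# Minimal sketch of the color cue for shading/reflectance classification.
# Assumption: pure shading scales all RGB channels equally, so a derivative
# whose chromaticity stays constant is attributed to shading.
import numpy as np

def classify_derivatives(rgb, tau=0.01):
    """Label horizontal log-image derivatives as shading or reflectance."""
    log_rgb = np.log(rgb.astype(np.float64) + 1e-6)
    dx = np.diff(log_rgb, axis=1)              # per-channel log derivative
    mean_dx = dx.mean(axis=2, keepdims=True)   # pure shading: channels agree
    chroma_change = np.abs(dx - mean_dx).max(axis=2)  # channel disagreement
    return np.where(chroma_change < tau, "shading", "reflectance")
```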

    Modeling Boundaries of Influence among Positional Uncertainty Fields

    Within a GIS environment, the proper use of information requires the identification of the uncertainty associated with it. As such, there has been a substantial amount of research dedicated to describing and quantifying spatial data uncertainty. Recent advances in sensor technology and image analysis techniques are making image-derived geospatial data increasingly popular. Along with developments in sensor and image analysis technologies have come departures from conventional point-by-point measurements. Current advancements support the transition from traditional point measures to novel techniques that allow the extraction of complex objects as single entities (e.g., road outlines, buildings). As the methods of data extraction advance, so too must the methods of estimating the uncertainty associated with the data. Not only will object uncertainties be modeled, but the connections between these uncertainties will also be estimated. The current methods for determining spatial accuracy for lines and areas typically involve defining a zone of uncertainty around the measured line, within which the actual line exists with some probability. Yet within the research community, the proper shape of this 'uncertainty band' is a topic of much dissent. Less contemplated is the manner in which such areas of uncertainty interact and influence one another. The development of positional error models, from the epsilon band and error band to the rigorous G-band, has focused on statistical models for estimating independent line features. Yet these models are not suited to modeling the interactions between the uncertainty fields of adjacent features. At some point, these distributed areas of uncertainty around the features will intersect and overlap one another. In such instances, a feature's uncertainty zone is defined not only by its measurement, but also by the uncertainty associated with neighboring features. It is therefore useful to understand and model the interactions between adjacent uncertainty fields. This thesis presents an analysis of estimation and modeling techniques for spatial uncertainty, focusing on the interactions among fields of positional uncertainty for image-derived linear features. Such interactions are assumed to occur between linear features derived from varying methods and sources, allowing the application of an independent error model. A synthetic uncertainty map is derived for a set of linear and areal features, containing distributed fields of uncertainty for individual features. These uncertainty fields are shown to be advantageous for communication and user understanding, as well as being conducive to a variety of image processing techniques. Such image techniques can combine overlapping uncertainty fields to model the interaction between them. Deformable contour models are used to extract sets of continuous uncertainty boundaries for linear features, and are subsequently applied to extract a boundary of influence shared by two uncertainty fields. These methods are then applied to a complex scene of uncertainties, modeling the interactions of multiple objects within the scene. The resulting boundary uncertainty representations differ from previous independent error models, which do not take neighboring influences into account. By modeling the boundary of interaction among the uncertainties of neighboring features, a more integrated approach to error modeling and analysis can be developed for complex spatial scenes and datasets.
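
    As a rough illustration of how rasterized uncertainty fields can interact, the sketch below assigns each line feature a distance-based Gaussian field (a stand-in for an uncertainty band) and marks the pixels where two fields balance as a crude boundary of influence. The thesis itself extracts such boundaries with deformable contour models, not this thresholding shortcut; the grid size, `sigma`, and the 0.02 tolerance are arbitrary choices for the example.

```python
# Sketch: combine two distance-based uncertainty fields and locate the
# shared boundary of influence where neither feature dominates.
import numpy as np
from scipy.ndimage import distance_transform_edt

def uncertainty_field(mask, sigma=5.0):
    """Gaussian field decaying with distance from a rasterized feature."""
    d = distance_transform_edt(~mask)          # distance to the feature
    return np.exp(-0.5 * (d / sigma) ** 2)

grid = np.zeros((100, 100), dtype=bool)
a, b = grid.copy(), grid.copy()
a[20, :] = True                                # two parallel line features
b[40, :] = True
fa, fb = uncertainty_field(a), uncertainty_field(b)
boundary_of_influence = np.abs(fa - fb) < 0.02  # pixels where fields balance
```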

    Statistical Approaches to Inferring Object Shape from Single Images

    Depth inference is a fundamental problem of computer vision with a broad range of potential applications. Monocular depth inference techniques, particularly shape from shading, date back to as early as the 1940s, when it was first used to study the shape of the lunar surface. Since then there has been ample research to develop depth inference algorithms using monocular cues. Most of these are based on physical models of image formation and rely on a number of simplifying assumptions that do not hold for real-world and natural imagery. Very few make use of the rich statistical information contained in real-world images and their 3D information, though there have been a few notable exceptions. The study of the statistics of natural scenes has concentrated on outdoor scenes, which are cluttered. The statistics of scenes of single objects have been less studied, even though such scenes are an essential part of daily human interaction with the environment. Inferring the shape of single objects is an important computer vision problem that has captured the interest of many researchers over the past few decades, with applications in object recognition, robotic grasping, fault detection, and Content-Based Image Retrieval (CBIR). This thesis focuses on studying the statistical properties of single objects and their range images, which can benefit shape inference techniques. I acquired two databases: the Single Object Range and HDR (SORH) database and the Eton Myers Database of single objects, including laser-acquired depth, binocular stereo, photometric stereo, and High Dynamic Range (HDR) photography. I took a data-driven approach and studied the statistics of color and range images of real scenes of single objects, along with whole 3D objects, and uncovered some interesting trends in the data. The fractal structure of natural images was previously well known and thought to be a universal property; however, my research showed that the fractal structure of single objects and surfaces is governed by a wholly different set of rules. Classical computer vision problems of binocular and multi-view stereo, photometric stereo, shape from shading, structure from motion, and others all rely on accurate and complete models of which 3D shapes and textures are plausible in nature, to avoid producing unlikely outputs. Bayesian approaches are common for these problems, and the findings on the statistics of the shape of single objects from this work and others will hopefully both inform new, more accurate Bayesian priors on shape and enable more efficient probabilistic inference procedures.
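
    One standard way to quantify the fractal (1/f-like) structure mentioned above is to fit the slope of the radially averaged power spectrum on log-log axes; natural outdoor scenes classically show power falling off roughly as 1/f^2. The sketch below is my illustration of that statistic, not code from the thesis.

```python
# Sketch: estimate the spectral (fractal) slope of a grayscale image by
# radially averaging its power spectrum and fitting a log-log line.
import numpy as np

def spectral_slope(gray):
    """Fit log-power vs log-frequency for a 2D grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(f) ** 2
    h, w = gray.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)   # radial frequency bin
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)               # skip DC, keep band
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return slope                                        # ~ -2 for 1/f^2 images
```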

    Shape and Illumination from Shading Using the Generic Viewpoint Assumption

    The Generic Viewpoint Assumption (GVA) states that the position of the viewer or the light in a scene is not special. Thus, any parameters estimated from an observation should be stable under small perturbations, such as changes in object, viewpoint, or light position. The GVA has been analyzed and quantified in previous work, but has not been put to practical use in actual vision tasks. In this paper, we show how to utilize the GVA to estimate shape and illumination from a single shading image, without the use of other priors. We propose a novel linearized Spherical Harmonics (SH) shading model which enables us to obtain a computationally efficient form of the GVA term. Together with a data term, we build a model whose unknowns are shape and SH illumination. The model parameters are estimated using the Alternating Direction Method of Multipliers, embedded in a multi-scale estimation framework. In this prior-free framework, we obtain competitive shape and illumination estimation results under a variety of models and lighting conditions, requiring fewer assumptions than competing methods.
    National Science Foundation (U.S.). Directorate for Computer and Information Science and Engineering/Division of Information & Intelligent Systems (Award 1212928)
    Qatar Computing Research Institute
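
    For intuition about the shading model, first-order SH illumination renders Lambertian intensity as a dot product between a 4-vector of lighting coefficients and the basis [1, nx, ny, nz] built from surface normals. The sketch below implements only this simplified forward model; the paper's linearization of the normals and its GVA term are omitted, and the function name is mine.

```python
# Sketch of the first-order spherical-harmonics forward shading model that
# a data term would invert: intensity = sh_coeffs . [1, nx, ny, nz].
import numpy as np

def sh_shading(normals, sh_coeffs):
    """normals: (H, W, 3) unit normals; sh_coeffs: (4,) SH illumination."""
    h, w, _ = normals.shape
    basis = np.concatenate([np.ones((h, w, 1)), normals], axis=2)  # (H,W,4)
    return basis @ sh_coeffs                                       # (H, W)
```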

    Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis

    We introduce a data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis. From a partially-scanned input shape, our method first infers a low-resolution -- but complete -- output. To this end, we introduce a 3D-Encoder-Predictor Network (3D-EPN) which is composed of 3D convolutional layers. The network is trained to predict and fill in missing data, and operates on an implicit surface representation that encodes both known and unknown space. This allows us to predict global structure in unknown areas at high accuracy. We then correlate these intermediary results with 3D geometry from a shape database at test time. In a final pass, we propose a patch-based 3D shape synthesis method that imposes the 3D geometry from these retrieved shapes as constraints on the coarsely-completed mesh. This synthesis process enables us to reconstruct fine-scale detail and generate high-resolution output while respecting the global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms state-of-the-art completion methods, the main contribution of our work lies in the combination of a data-driven shape predictor and analytic 3D shape synthesis. In our results, we show extensive evaluations on a newly-introduced shape completion benchmark for both real-world and synthetic data.
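
    The encoder-predictor idea can be sketched as a small volumetric network: 3D convolutions compress a partial, two-channel (known/unknown space) distance-field grid, and transposed 3D convolutions predict a complete coarse volume. The PyTorch layers below are illustrative stand-ins; the layer counts, channel widths, and 32^3 resolution are my choices rather than the paper's architecture, and the database retrieval and shape-synthesis refinement are omitted.

```python
# Sketch of a 3D encoder-predictor: downsample a partial volume with 3D
# convolutions, then predict a completed coarse volume with transposed ones.
import torch
import torch.nn as nn

class EncoderPredictor3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(            # 32^3 -> 8^3 feature grid
            nn.Conv3d(2, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.predict = nn.Sequential(           # 8^3 -> 32^3 completed volume
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, vol):                     # vol: (B, 2, 32, 32, 32)
        return self.predict(self.encode(vol))   # known/unknown channels in,
                                                # dense distance field out

out = EncoderPredictor3D()(torch.randn(1, 2, 32, 32, 32))  # (1, 1, 32, 32, 32)
```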

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
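
    As a concrete example of one family of optical techniques covered by such reviews, passive stereo recovers depth from the disparity between rectified left and right laparoscope images via triangulation, z = f * B / d. The sketch below assumes a hypothetical rectified rig; the focal length and baseline values are illustrative, not drawn from the paper.

```python
# Sketch: convert a stereo disparity map (pixels) to depth (mm) for a
# rectified rig via triangulation, z = f * B / d.
import numpy as np

def depth_from_disparity(disparity, focal_px=800.0, baseline_mm=4.0):
    """Depth map from disparity; invalid (non-positive) disparities -> NaN."""
    z = np.full_like(disparity, np.nan, dtype=np.float64)
    valid = disparity > 0                      # zero disparity = no match
    z[valid] = focal_px * baseline_mm / disparity[valid]
    return z
```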

    A Multi-Camera Approach to Image-Based Rendering and 3-D/Multiview Display of Ancient Chinese Artifacts
