108 research outputs found

    Plane-Based Optimization for 3D Object Reconstruction from Single Line Drawings

    A Proposal Concerning the Analysis of Shadows in Images by an Active Observer (Dissertation Proposal)

    Shadows occur frequently in indoor scenes and outdoors on sunny days. Despite the information inherent in shadows about a scene's geometry and lighting conditions, relatively little work in image understanding has addressed the important problem of recognizing shadows. This is an even more serious failing when one considers the problems shadows pose for many visual techniques such as object recognition and shape from shading. Shadows are difficult to identify because they cannot be infallibly recognized until a scene's geometry and lighting are known. However, there are a number of cues which together strongly suggest the identification of a shadow. We present a list of these cues and methods which can be used by an active observer to detect shadows. By an active observer, we mean an observer that is not only mobile, but can extend a probe into its environment. The proposed approach should allow the extraction of shadows in real time. Furthermore, the identification of a shadow should improve with observing time. In order to be able to identify shadows without, or prior to, obtaining information about the arrangement of objects or the spectral properties of materials in the scene, we provide the observer with a probe with which to cast its own shadows. Any visible shadows cast by the probe can be easily identified because they will be new to the scene. These actively obtained shadows allow the observer to experimentally determine the number and location of light sources in the scene, to locate the cast shadows, and to gain information about the likely spectral changes due to shadows. We present a novel method for locating a light source and the surface on which a shadow is cast. It takes into account errors in imaging and image processing and, furthermore, it takes special advantage of the benefits of an active observer. The information gained from the probe is of particular importance in effectively using the various shadow cues. In the course of identifying shadows, we also present a new modification of an image segmentation algorithm. Our modification provides a general description of color images in terms of regions that is particularly amenable to the analysis of shadows.
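
    As a rough illustration of the probe idea above: if the 3D position of the probe tip and the location of its cast shadow on a known surface are available, each placement of the probe constrains the light to lie on the ray from the shadow point through the tip, and two placements localize a point source. The sketch below is a minimal NumPy version of that geometry; the coordinates, the ground plane at z = 0, and the function names are illustrative assumptions, not the dissertation's actual formulation.

```python
import numpy as np

def light_ray(probe_tip, shadow_point):
    """Unit direction from the observed shadow point back through the probe tip;
    the light source lies somewhere along this ray."""
    d = np.asarray(probe_tip, float) - np.asarray(shadow_point, float)
    return d / np.linalg.norm(d)

def nearest_point_to_rays(origins, directions):
    """Least-squares point closest to a set of rays (origin + t * unit direction);
    with two or more non-parallel probe placements this estimates a point light."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Illustrative, noise-free measurements: a point light at (2, 1, 3), the probe
# tip placed at two positions, and its shadows observed on the plane z = 0.
tips    = [np.array([0.0, 0.0, 0.5]), np.array([1.0, 0.0, 1.0])]
shadows = [np.array([-0.4, -0.2, 0.0]), np.array([0.5, -0.5, 0.0])]

rays = [light_ray(t, s) for t, s in zip(tips, shadows)]
print(nearest_point_to_rays(shadows, rays))   # -> approximately [2. 1. 3.]
```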

    What the Back of the Object Looks Like: 3D Reconstruction from Line Drawings without Hidden Lines

    Learning continuous models for estimating intrinsic component images

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006; by Marshall Friend Tappen. Also issued in pages; MIT Rotch Library copy issued in pages. Includes bibliographical references (leaves 137-144).
    The goal of computer vision is to use an image to recover the characteristics of a scene, such as its shape or illumination. This is difficult because an image is a mixture of multiple characteristics. For example, an edge in an image could be caused by either an edge on a surface or a change in the surface's color. Distinguishing the effects of different scene characteristics is an important step towards high-level analysis of an image. This thesis describes how to use machine learning to build a system that recovers different characteristics of the scene from a single, gray-scale image of the scene. The goal of the system is to use the observed image to recover images, referred to as Intrinsic Component Images, that represent the scene's characteristics. The development of the system is focused on estimating two important characteristics of a scene, its shading and reflectance, from a single image. From the observed image, the system estimates a shading image, which captures the interaction of the illumination and shape of the scene pictured, and an albedo image, which represents how the surfaces in the image reflect light. Measured both qualitatively and quantitatively, this system produces state-of-the-art estimates of shading and albedo images. This system is also flexible enough to be used for the separate problem of removing noise from an image. Building this system requires algorithms for continuous regression and for learning the parameters of a Conditionally Gaussian Markov Random Field. Unlike previous work, this system is trained using real-world surfaces with ground-truth shading and albedo images. The learning algorithms are designed to accommodate the large amount of data in this training set.
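
    The decomposition the thesis targets is multiplicative: the observed image I is (approximately) the product of a shading image S and an albedo image A. A minimal, non-learned stand-in for that idea is the classic Retinex-style heuristic sketched below: work in the log domain, attribute small gradients to shading and large ones to albedo, and reintegrate each gradient field with an FFT Poisson solve. The threshold, the periodic-boundary assumption, and the function names are illustrative; this is not Tappen's learned Conditionally Gaussian MRF estimator.

```python
import numpy as np

def poisson_reintegrate(gx, gy):
    """Recover an image (up to a constant) whose periodic forward-difference
    gradients best match (gx, gy), via an FFT Poisson solve."""
    h, w = gx.shape
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    wx = 2 * np.cos(2 * np.pi * np.fft.fftfreq(w))[None, :]
    wy = 2 * np.cos(2 * np.pi * np.fft.fftfreq(h))[:, None]
    denom = wx + wy - 4
    denom[0, 0] = 1.0                  # avoid 0/0 at the DC term
    f_hat = np.fft.fft2(div) / denom
    f_hat[0, 0] = 0.0                  # absolute brightness is unrecoverable
    return np.real(np.fft.ifft2(f_hat))

def decompose(image, grad_threshold=0.1):
    """Return (shading, albedo) with shading * albedo ~ image, up to a global scale."""
    log_i = np.log(np.clip(image, 1e-4, None))
    gx = np.roll(log_i, -1, axis=1) - log_i    # periodic forward differences
    gy = np.roll(log_i, -1, axis=0) - log_i
    # Heuristic classification: small log-gradients -> shading, large -> albedo.
    sx = np.where(np.abs(gx) < grad_threshold, gx, 0.0)
    sy = np.where(np.abs(gy) < grad_threshold, gy, 0.0)
    log_s = poisson_reintegrate(sx, sy)
    log_a = poisson_reintegrate(gx - sx, gy - sy)
    return np.exp(log_s), np.exp(log_a)

# Example on a synthetic image: a smooth shading ramp times a two-tone albedo.
yy, xx = np.mgrid[0:64, 0:64]
shading_true = 0.5 + 0.4 * np.sin(np.pi * yy / 64)
albedo_true = np.where(xx < 32, 0.9, 0.4)
shading_est, albedo_est = decompose(shading_true * albedo_true)
```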

    3D object reconstruction from line drawings.

    Cao Liangliang. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 64-69). Abstracts in English and Chinese. Contents:
    Chapter 1: Introduction and Related Work (1.1 Reconstruction from Single Line Drawings and the Applications; 1.2 Optimization-based Reconstruction; 1.3 Other Reconstruction Methods: 1.3.1 Line Labeling and Algebraic Methods, 1.3.2 CAD Reconstruction, 1.3.3 Modelling from Images; 1.4 Finding Faces of Line Drawings; 1.5 Generalized Cylinder; 1.6 Research Problems and Our Contribution: 1.6.1 A New Criteria, 1.6.2 Recover Objects from Line Drawings without Hidden Lines, 1.6.3 Reconstruction of Curved Objects, 1.6.4 Planar Limbs Assumption and the Derived Models)
    Chapter 2: A New Criteria for Reconstruction (2.1 Introduction; 2.2 Human Visual Perception and the Symmetry Measure; 2.3 Reconstruction Based on Symmetry and Planarity: 2.3.1 Finding Faces, 2.3.2 Constraint of Planarity, 2.3.3 Objective Function, 2.3.4 Reconstruction Algorithm; 2.4 Experimental Results; 2.5 Summary)
    Chapter 3: Line Drawings without Hidden Lines: Inference and Reconstruction (3.1 Introduction; 3.2 Terminology; 3.3 Theoretical Inference of the Hidden Topological Structure: 3.3.1 Assumptions, 3.3.2 Finding the Degrees and Ranks, 3.3.3 Constraints for the Inference; 3.4 An Algorithm to Recover the Hidden Topological Structure: 3.4.1 Outline of the Algorithm, 3.4.2 Constructing the Initial Hidden Structure, 3.4.3 Reducing Initial Hidden Structure, 3.4.4 Selecting the Most Plausible Structure; 3.5 Reconstruction of 3D Objects; 3.6 Experimental Results; 3.7 Summary)
    Chapter 4: Curved Objects Reconstruction from 2D Line Drawings (4.1 Introduction; 4.2 Related Work: 4.2.1 Face Identification, 4.2.2 3D Reconstruction of Planar Objects; 4.3 Reconstruction of Curved Objects: 4.3.1 Transformation of Line Drawings, 4.3.2 Finding 3D Bezier Curves, 4.3.3 Bezier Surface Patches and Boundaries, 4.3.4 Generating Bezier Surface Patches; 4.4 Results; 4.5 Summary)
    Chapter 5: Planar Limbs and Degen Generalized Cylinders (5.1 Introduction; 5.2 Planar Limbs and View Directions; 5.3 DGCs in Homogeneous Coordinates: 5.3.1 Homogeneous Coordinates, 5.3.2 Degen Surfaces, 5.3.3 DGCs; 5.4 Properties of DGCs; 5.5 Potential Applications: 5.5.1 Recovery of DGC Descriptions, 5.5.2 Deformable DGCs; 5.6 Summary)
    Chapter 6: Conclusion and Future Work. Bibliography.
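
    The reconstruction chapters above follow the optimization-based approach: keep the 2D vertex positions of the line drawing and solve for vertex depths that minimize an objective built from cues such as face planarity and symmetry. As a generic sketch of that idea (not the thesis' actual objective), the code below minimizes a face-planarity term plus a uniformity-of-corner-angles term over the depths of a toy box drawing; the weights, the angle term, and the made-up 2D coordinates are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def face_nonplanarity(p):
    """Smallest singular value of the centered vertex matrix: zero iff coplanar."""
    c = p - p.mean(axis=0)
    return np.linalg.svd(c, compute_uv=False)[-1]

def corner_angles(p):
    """Interior angles at each corner of a (roughly planar) face polygon."""
    n = len(p)
    angles = []
    for i in range(n):
        a = p[(i - 1) % n] - p[i]
        b = p[(i + 1) % n] - p[i]
        cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.array(angles)

def objective(z, xy, faces, w_planar=10.0, w_angles=1.0):
    """Planarity plus angle-uniformity cost over the unknown vertex depths z."""
    pts = np.column_stack([xy, z])
    cost = 0.0
    for f in faces:
        p = pts[list(f)]
        cost += w_planar * face_nonplanarity(p) ** 2
        cost += w_angles * corner_angles(p).std() ** 2
    return cost

# Toy line drawing of a box in general position (2D coordinates are made up).
xy = np.array([[0.0, 0.0], [2.0, 0.2], [2.2, 2.0], [0.1, 1.9],
               [0.8, 0.8], [2.8, 1.0], [3.0, 2.8], [0.9, 2.7]])
faces = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
         (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]

z0 = np.random.default_rng(0).normal(scale=0.1, size=len(xy))
res = minimize(objective, z0, args=(xy, faces), method="Powell")
print(np.round(res.x, 2))   # recovered relative depths (sign/offset ambiguity expected)
```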

    3D object reconstruction from 2D and 3D line drawings.

    Chen, Yu. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 78-85). Abstracts in English and Chinese. Contents:
    Chapter 1: Introduction and Related Work (1.1 Reconstruction from 2D Line Drawings and the Applications; 1.2 Previous Work on 3D Reconstruction from Single 2D Line Drawings; 1.3 Other Related Work on Interpretation of 2D Line Drawings: 1.3.1 Line Labeling and Superstrictness Problem, 1.3.2 CAD Reconstruction, 1.3.3 Modeling from Images, 1.3.4 Identifying Faces in the Line Drawings; 1.4 3D Modeling Systems; 1.5 Research Problems and Our Contributions: 1.5.1 Recovering Complex Manifold Objects from Line Drawings, 1.5.2 The Vision-based Sketching System)
    Chapter 2: Reconstruction from Complex Line Drawings (2.1 Introduction; 2.2 Assumptions and Terminology; 2.3 Separation of a Line Drawing: 2.3.1 Classification of Internal Faces, 2.3.2 Separating a Line Drawing along Internal Faces of Type 1, 2.3.3 Detecting Internal Faces of Type 2, 2.3.4 Separating a Line Drawing along Internal Faces of Type 2; 2.4 3D Reconstruction: 2.4.1 3D Reconstruction from a Line Drawing, 2.4.2 Merging 3D Manifolds, 2.4.3 The Complete 3D Reconstruction Algorithm; 2.5 Experimental Results; 2.6 Summary)
    Chapter 3: A Vision-Based Sketching System for 3D Object Design (3.1 Introduction; 3.2 The Sketching System; 3.3 3D Geometry of the System: 3.3.1 Locating the Wand, 3.3.2 Calibration, 3.3.3 Working Space; 3.4 Wireframe Input and Object Editing; 3.5 Surface Generation: 3.5.1 Face Identification, 3.5.2 Planar Surface Generation, 3.5.3 Smooth Curved Surface Generation; 3.6 Experiments; 3.7 Summary)
    Chapter 4: Conclusion and Future Work (4.1 Conclusion; 4.2 Future Work: 4.2.1 Learning-Based Line Drawing Reconstruction, 4.2.2 New Query Interface for 3D Object Retrieval, 4.2.3 Curved Object Reconstruction, 4.2.4 Improving the 3D Sketch System, 4.2.5 Other Directions). Bibliography.
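
    For the vision-based sketching system, chapter 3 locates a hand-held wand with calibrated cameras. The sketch below shows only the standard two-view DLT triangulation that such a step typically relies on: given two camera projection matrices from calibration and the wand tip's pixel coordinates in both views, recover its 3D position. The projection matrices, the test point, and the function name are made up for illustration and are not taken from the thesis.

```python
import numpy as np

def triangulate_wand_tip(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single 3D point from two calibrated views.
    P1, P2 are 3x4 projection matrices; x1, x2 are the tip's pixel coordinates."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                       # null vector of A, homogeneous 3D point
    return X[:3] / X[3]

# Illustrative calibration: identical intrinsics, second camera translated 20 cm.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.1, 0.2, 2.0, 1.0])          # wand tip in world coordinates
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]        # observed pixel coordinates
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_wand_tip(P1, P2, x1, x2))      # -> approximately [0.1 0.2 2. ]
```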

    Part Description and Segmentation Using Contour, Surface and Volumetric Primitives

    The problem of part definition, description, and decomposition is central to shape recognition systems. The ultimate goal of segmenting range images into meaningful parts and objects has proved very difficult to realize, mainly because the segmentation problem has been isolated from the issue of representation. We propose a paradigm for part description and segmentation by the integration of contour, surface, and volumetric primitives. Unlike previous approaches, we use geometric properties derived from both boundary-based representations (surface contours and occluding contours) and primitive-based representations (quadric patches and superquadric models) to define and recover part-whole relationships, without a priori knowledge about the objects or object domain. The object shape is described at three levels of complexity, each contributing to the overall shape. Our approach can be summarized as answering the following question: given that we have three different modules for extracting volume, surface, and boundary properties, how should they be invoked, evaluated, and integrated? Volume and boundary fitting, and surface description, are performed in parallel to incorporate the best of the coarse-to-fine and fine-to-coarse segmentation strategies. The process involves feedback between the segmentor (the Control Module) and the individual shape description modules. The Control Module evaluates the intermediate descriptions and formulates hypotheses about parts. Hypotheses are further tested by the segmentor and the descriptors. The descriptions thus obtained are independent of position, orientation, scale, domain, and domain properties, and are based purely on geometric considerations. They are extremely useful for high-level, domain-dependent symbolic reasoning processes, which need not deal with a tremendous amount of data, but only with a rich description of the data in terms of primitives recovered at various levels of complexity.
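
    One of the volumetric primitives mentioned above is the superquadric. As a small illustration of how such a primitive can be recovered from range data, the sketch below evaluates the standard superquadric inside-outside function and fits its five shape parameters (pose is assumed already factored out) to synthetic points with a Solina-Bajcsy-style least-squares residual. The synthetic ellipsoid, the initial guess, and the bounds are illustrative choices, not the paper's actual recovery procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_f(points, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function F: F == 1 on the surface,
    < 1 inside, > 1 outside (object-centered coordinates assumed)."""
    x, y, z = np.abs(points.T)
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
           + (z / a3) ** (2 / e1)

def residuals(params, points):
    a1, a2, a3, e1, e2 = params
    # Deviation of F^e1 from 1, scaled by a volume proxy (Solina-Bajcsy style).
    return (superquadric_f(points, a1, a2, a3, e1, e2) ** e1 - 1.0) \
           * np.sqrt(a1 * a2 * a3)

# Synthetic range points on an ellipsoid (a true superquadric with e1 = e2 = 1).
rng = np.random.default_rng(0)
u = rng.uniform(-np.pi / 2, np.pi / 2, 500)
v = rng.uniform(-np.pi, np.pi, 500)
pts = np.column_stack([1.5 * np.cos(u) * np.cos(v),
                       1.0 * np.cos(u) * np.sin(v),
                       0.5 * np.sin(u)])

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 0.8, 0.8],
                    bounds=([0.1] * 5, [5.0, 5.0, 5.0, 2.0, 2.0]),
                    args=(pts,))
print(np.round(fit.x, 2))   # -> roughly [1.5, 1.0, 0.5, 1.0, 1.0]
```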

    View generated database

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to a lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of the surfaces of the object or scene and any surface detail present on the object. Applications of such models are numerous, including the acquisition and maintenance of work models for tele-autonomous systems, the generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.