
    New 3D scanning techniques for complex scenes

    This thesis presents new 3D scanning methods for complex scenes, such as surfaces with fine-scale geometric details, translucent objects, low-albedo objects, glossy objects, scenes with interreflection, and discontinuous scenes. Starting from the observation that specular reflection is a reliable visual cue for surface mesostructure perception, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects. Translucent objects pose a difficult problem for traditional optical 3D scanning techniques. We analyze and compare two descattering methods, phase-shifting and polarization, and further present several phase-shifting and polarization based methods for high-quality 3D scanning of translucent objects. We introduce the concept of modulation-based separation, in which a high-frequency signal is multiplied on top of another signal. The modulated signal inherits the separation properties of the high-frequency signal and allows us to remove artifacts due to global illumination. This method can be used for efficient 3D scanning of scenes with significant subsurface scattering and interreflections.
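
    As an illustration of the modulation-based separation idea described above, here is a minimal NumPy sketch of high-frequency direct/global separation in the Nayar style; the function name, pattern layout, and 50% duty cycle are assumptions for illustration, not the thesis's actual pipeline:

        import numpy as np

        def separate_direct_global(captures):
            """captures: (K, H, W) stack of images taken under K shifted
            high-frequency on/off illumination patterns with ~50% duty cycle."""
            i_max = captures.max(axis=0)  # pixel lit by the carrier: direct + about half the global light
            i_min = captures.min(axis=0)  # pixel unlit by the carrier: about half the global light
            direct = i_max - i_min        # component that follows the high-frequency carrier
            global_ = 2.0 * i_min         # low-frequency (scattered / interreflected) component
            return direct, global_

    When a low-frequency phase-shifting fringe is multiplied by such a high-frequency carrier, the modulated captures inherit the carrier's separation behaviour, so a descattered "direct" fringe image can be recovered with the same max/min reasoning before phase estimation.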

    Shape recovery from reflection.

    by Yingli Tian. Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 202-222).
    Contents:
    Chapter 1: Introduction, p.1
      1.1 Physics-Based Shape Recovery Techniques, p.3
      1.2 Proposed Approaches to Shape Recovery in this Thesis, p.9
      1.3 Thesis Outline, p.13
    Chapter 2: Camera Model in Color Vision, p.15
      2.1 Introduction, p.15
      2.2 Spectral Linearization, p.17
      2.3 Image Balancing, p.21
      2.4 Spectral Sensitivity, p.24
      2.5 Color Clipping and Blooming, p.24
    Chapter 3: Extended Light Source Models, p.27
      3.1 Introduction, p.27
      3.2 A Spherical Light Model in 2D Coordinate System, p.30
        3.2.1 Basic Photometric Function for Hybrid Surfaces under a Point Light Source, p.32
        3.2.2 Photometric Function for Hybrid Surfaces under the Spherical Light Source, p.34
      3.3 A Spherical Light Model in 3D Coordinate System, p.36
        3.3.1 Radiance of the Spherical Light Source, p.36
        3.3.2 Surface Brightness Illuminated by One Point of the Spherical Light Source, p.38
        3.3.3 Surface Brightness Illuminated by the Spherical Light Source, p.39
        3.3.4 Rotating the Source-Object Coordinate to the Camera-Object Coordinate, p.41
        3.3.5 Surface Reflection Model, p.44
      3.4 Rectangular Light Model in 3D Coordinate System, p.45
        3.4.1 Radiance of a Rectangular Light Source, p.45
        3.4.2 Surface Brightness Illuminated by One Point of the Rectangular Light Source, p.47
        3.4.3 Surface Brightness Illuminated by a Rectangular Light Source, p.47
    Chapter 4: Shape Recovery from Specular Reflection, p.54
      4.1 Introduction, p.54
      4.2 Theory of the First Method, p.57
        4.2.1 Torrance-Sparrow Reflectance Model, p.57
        4.2.2 Relationship Between Surface Shapes from Different Images, p.60
      4.3 Theory of the Second Method, p.65
        4.3.1 Getting the Depth of a Reference Point, p.65
        4.3.2 Recovering the Depth and Normal of a Specular Point Near the Reference Point, p.67
        4.3.3 Recovering Local Shape of the Object by Specular Reflection, p.69
      4.4 Experimental Results and Discussions, p.71
        4.4.1 Experimental System and Results of the First Method, p.71
        4.4.2 Experimental System and Results of the Second Method, p.76
    Chapter 5: Shape Recovery from One Sequence of Color Images, p.81
      5.1 Introduction, p.81
      5.2 Temporal-color Space Analysis of Reflection, p.84
      5.3 Estimation of Illuminant Color Ks, p.88
      5.4 Estimation of the Color Vector of the Body-reflection Component Kl, p.89
      5.5 Separating Specular and Body Reflection Components and Recovering Surface Shape and Reflectance, p.91
      5.6 Experiment Results and Discussions, p.92
        5.6.1 Results with Interreflection, p.93
        5.6.2 Results Without Interreflection, p.93
        5.6.3 Simulation Results, p.95
      5.7 Analysis of Various Factors on the Accuracy, p.96
        5.7.1 Effects of Number of Samples, p.96
        5.7.2 Effects of Noise, p.99
        5.7.3 Effects of Object Size, p.99
        5.7.4 Camera Optical Axis Not in Light Source Plane, p.102
        5.7.5 Camera Optical Axis Not Passing Through Object Center, p.105
    Chapter 6: Shape Recovery from Two Sequences of Images, p.107
      6.1 Introduction, p.107
      6.2 Method for 3D Shape Recovery from Two Sequences of Images, p.109
      6.3 Genetics-Based Method, p.111
      6.4 Experimental Results and Discussions, p.115
        6.4.1 Simulation Results, p.115
        6.4.2 Real Experimental Results, p.118
    Chapter 7: Shape from Shading for Non-Lambertian Surfaces, p.120
      7.1 Introduction, p.120
      7.2 Reflectance Map for Non-Lambertian Color Surfaces, p.123
      7.3 Recovering Non-Lambertian Surface Shape from One Color Image, p.127
        7.3.1 Segmenting Hybrid Areas from Diffuse Areas Using Hue Information, p.127
        7.3.2 Calculating Intensities of Specular and Diffuse Components on Hybrid Areas, p.128
        7.3.3 Recovering Shape from Shading, p.129
      7.4 Experimental Results and Discussions, p.131
        7.4.1 Simulation Results, p.131
        7.4.2 Real Experimental Results, p.136
    Chapter 8: Shape from Shading under Multiple Extended Light Sources, p.142
      8.1 Introduction, p.142
      8.2 Reflectance Map for Lambertian Surface Under Multiple Rectangular Light Sources, p.144
      8.3 Recovering Surface Shape Under Multiple Rectangular Light Sources, p.148
      8.4 Experimental Results and Discussions, p.150
        8.4.1 Synthetic Image Results, p.150
        8.4.2 Real Image Results, p.152
    Chapter 9: Shape from Shading in Unknown Environments by Neural Networks, p.167
      9.1 Introduction, p.167
      9.2 Shape Estimation, p.169
        9.2.1 Shape Recovery Problem under Multiple Rectangular Extended Light Sources, p.169
        9.2.2 Forward Network Representation of Surface Normals, p.170
        9.2.3 Shape Estimation, p.174
      9.3 Application of the Neural Network in Shape Recovery, p.174
        9.3.1 Structure of the Neural Network, p.174
        9.3.2 Normalization of the Input and Output Patterns, p.175
      9.4 Experimental Results and Discussions, p.178
        9.4.1 Results for Lambertian Surface under One Rectangular Light, p.178
        9.4.2 Results for Lambertian Surface under Four Rectangular Light Sources, p.180
        9.4.3 Results for Hybrid Surface under One Rectangular Light Sources, p.190
        9.4.4 Discussions, p.190
    Chapter 10: Summary and Conclusions, p.191
      10.1 Summary Results and Contributions, p.192
      10.2 Directions of Future Research, p.199
    Bibliography, p.20

    Toward color image segmentation in analog VLSI: Algorithm and hardware

    Standard techniques for segmenting color images are based on finding normalized RGB discontinuities, color histogramming, or clustering techniques in RGB or CIE color spaces. The use of the psychophysical variable hue in HSI space has not been popular due to its numerical instability at low saturations. In this article, we propose the use of a simplified hue description suitable for implementation in analog VLSI. We demonstrate that if the integrated white condition holds, hue is invariant to certain types of highlights, shading, and shadows. This is due to the additive/shift invariance property, a property that other color variables lack. The more restrictive uniformly varying lighting model associated with the multiplicative/scale invariance property, shared by both hue and normalized RGB, allows invariance to transparencies and to simple models of shading and shadows. Using binary hue discontinuities in conjunction with first-order surface interpolation, we demonstrate these invariant properties and compare them against the performance of RGB, normalized RGB, and CIE color spaces. We argue that working in HSI space offers an effective method for segmenting scenes in the presence of confounding cues due to shading, transparency, highlights, and shadows. Based on this work, we designed and fabricated, for the first time, an analog CMOS VLSI circuit with on-board phototransistor input that computes normalized color and hue.
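
    The invariance argument above can be checked numerically: with an atan2-style hue, adding a common (white) offset to all three channels or scaling them uniformly leaves the hue unchanged, while normalized RGB survives only the scaling. A small illustrative script (not the article's analog circuit, and the particular hue formula is an assumption):

        import numpy as np

        def hue(rgb):
            # atan2-form hue: shift- and scale-invariant by construction
            r, g, b = rgb
            return np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)

        def norm_rgb(rgb):
            return rgb / rgb.sum()

        c = np.array([0.6, 0.3, 0.1])
        print(np.isclose(hue(c), hue(c + 0.2)))             # True: additive/shift invariance (highlight offset)
        print(np.isclose(hue(c), hue(0.5 * c)))             # True: multiplicative/scale invariance (shading)
        print(np.allclose(norm_rgb(c), norm_rgb(0.5 * c)))  # True: normalized RGB is scale-invariant
        print(np.allclose(norm_rgb(c), norm_rgb(c + 0.2)))  # False: but not shift-invariant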

    Color Homography: theory and applications

    Images of co-planar points in 3-dimensional space taken from different camera positions are a homography apart. Homographies are at the heart of geometric methods in computer vision and are used in geometric camera calibration, 3D reconstruction, stereo vision, and image mosaicking, among other tasks. In this paper we show the surprising result that homographies are also the apposite tool for relating the image colors of the same scene when the capture conditions (illumination color, shading, and device) change. Three applications of color homographies are investigated. First, we show that color calibration is correctly formulated as a homography problem. Second, we compare the chromaticity distributions of an image of colorful objects to a database of object chromaticity distributions using homography matching. Third, we consider the color transfer problem, in which the colors in one image are mapped so that the resulting image's color style matches that of a target image; we show that natural image color transfer can be re-interpreted as a color homography mapping. Experiments demonstrate that solving the color homography problem leads to more accurate calibration and improved color-based object recognition, and we present a new direction for developing natural color transfer algorithms.
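
    To make the "colors related by a homography" idea concrete, here is a hedged sketch that fits a 3x3 linear map between corresponding RGB triples while allowing an unknown per-correspondence brightness (shading) factor, via alternating least squares; the function name, iteration count, and update scheme are illustrative assumptions rather than the paper's exact algorithm:

        import numpy as np

        def fit_color_homography(src, dst, iters=20):
            """src, dst: (N, 3) corresponding RGB triples.
            Minimizes || diag(d) @ src @ H - dst ||_F over H (3x3) and the
            per-row brightness factors d, i.e. colors match up to shading."""
            d = np.ones(len(src))
            H = np.eye(3)
            for _ in range(iters):
                # update H with the shading factors fixed (ordinary least squares)
                H, *_ = np.linalg.lstsq(d[:, None] * src, dst, rcond=None)
                # update each shading factor with H fixed (closed form)
                pred = src @ H
                d = np.einsum('ij,ij->i', pred, dst) / np.maximum(
                    np.einsum('ij,ij->i', pred, pred), 1e-12)
            return H, d

    For color calibration, one would fit H on color-chart correspondences and then apply corrected = rgb @ H to the image, handling per-pixel brightness separately.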

    Reconstructing Geometry from Its Latent Structures

    Our world is full of objects with complex shapes and structures. Through extensive experience humans quickly develop an intuition about how objects are shaped, and what their material properties are, simply by analyzing their appearance. We engage this intuitive understanding of geometry in nearly everything we do. It is not surprising, then, that a careful treatment of geometry stands to give machines a powerful advantage in the many tasks of visual perception. To that end, this thesis focuses on geometry recovery in a wide range of real-world problems. First, we describe a new approach to image registration. We observe that the structure of the imaged subject becomes embedded in the image intensities. By minimizing the change in shape of these intensity structures we ensure a physically realizable deformation. Second, we describe a method for reassembling fragmented, thin-shelled objects from range images of their fragments using only the geometric and photometric structure embedded in the boundary of each fragment. Third, we describe a method for recovering and representing the shape of a geometric texture (such as bark or sandpaper) by studying the characteristic properties of texture: self-similarity and scale variability. Finally, we describe two methods for recovering the 3D geometry and reflectance properties of an object from images taken under natural illumination. We note that the structure of the surrounding environment, modulated by the reflectance, becomes embedded in the appearance of the object, giving strong clues about the object's shape. Though these domains are quite diverse, an essential premise, that observations of objects contain within them salient clues about the object's structure, enables new and powerful approaches. For each problem we begin by investigating what these clues are. We then derive models and methods to canonically represent these clues and enable their full exploitation. The wide-ranging success of each method shows the importance of our carefully formulated observations about geometry, and the fundamental role geometry plays in visual perception.
    Ph.D., Computer Science, Drexel University, 201

    A Critical Analysis of NeRF-Based 3D Reconstruction

    This paper presents a critical analysis of image-based 3D reconstruction using neural radiance fields (NeRFs), with a focus on quantitative comparisons with respect to traditional photogrammetry. The aim is, therefore, to objectively evaluate the strengths and weaknesses of NeRFs and provide insights into their applicability to different real-life scenarios, from small objects to heritage and industrial scenes. After a comprehensive overview of photogrammetry and NeRF methods, highlighting their respective advantages and disadvantages, various NeRF methods are compared using diverse objects with varying sizes and surface characteristics, including texture-less, metallic, translucent, and transparent surfaces. We evaluated the quality of the resulting 3D reconstructions using multiple criteria, such as noise level, geometric accuracy, and the number of required images (i.e., image baselines). The results show that NeRFs exhibit superior performance over photogrammetry on non-collaborative objects with texture-less, reflective, and refractive surfaces. Conversely, photogrammetry outperforms NeRFs in cases where the object’s surface possesses cooperative texture. Such complementarity should be further exploited in future works.
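
    As a pointer to how one of the quality criteria above (geometric accuracy) is commonly quantified, here is a small, hedged sketch that measures nearest-neighbour distances from a reconstructed point cloud to a reference cloud; it illustrates the general practice, not the paper's own evaluation pipeline:

        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_to_reference_error(reconstructed, reference):
            """reconstructed: (N, 3), reference: (M, 3) point arrays, assumed
            already registered and scaled into the same metric frame."""
            dists, _ = cKDTree(reference).query(reconstructed)
            return {
                "mean": float(dists.mean()),                 # average deviation
                "rmse": float(np.sqrt((dists ** 2).mean())),
                "p95": float(np.quantile(dists, 0.95)),      # tail / noise indicator
            }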

    Beyond high-resolution geometry in 3D Cultural Heritage: enhancing visualization realism in interactive contexts

    In the field of interactive 3D computer graphics, this thesis describes the definition and development of algorithms for improved realism in the visualization of large three-dimensional models, with particular attention to the application of these 3D visualization technologies to cultural heritage.

    Color logic: Interactively defining color in the context of computer graphics

    An attempt was made to build a bridge between the art and science of color, utilizing computer graphics as a medium. This interactive tutorial presents both technical and non-technical information in virtually complete graphic form, allowing the undergraduate college student to readily understand and apply its content. The program concentrates on relevant topics within each of the following aspects of color science: Color Vision, Light and Objects, Color Perception, Aesthetics and Design, Color Order, and Computer Color Models. Upon preliminary completion, user testing was conducted to ensure that the program is intuitive, intriguing, and valuable to a wide range of users. COLOR LOGIC represents an effective integration of color science, graphic design, user-interface design, and computer graphics design. Several practical applications for the program are discussed.