16 research outputs found

    Accurate color synthesis of three-dimensional objects in an image

    Author name used in this publication: John H. Xin. 2003-2004; academic research: refereed; publication in refereed journal. Version of Record, published.

    Shape recovery from reflection.

    by Yingli Tian. Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 202-222).
    Chapter 1  Introduction
        1.1  Physics-Based Shape Recovery Techniques
        1.2  Proposed Approaches to Shape Recovery in this Thesis
        1.3  Thesis Outline
    Chapter 2  Camera Model in Color Vision
        2.1  Introduction
        2.2  Spectral Linearization
        2.3  Image Balancing
        2.4  Spectral Sensitivity
        2.5  Color Clipping and Blooming
    Chapter 3  Extended Light Source Models
        3.1  Introduction
        3.2  A Spherical Light Model in 2D Coordinate System
            3.2.1  Basic Photometric Function for Hybrid Surfaces under a Point Light Source
            3.2.2  Photometric Function for Hybrid Surfaces under the Spherical Light Source
        3.3  A Spherical Light Model in 3D Coordinate System
            3.3.1  Radiance of the Spherical Light Source
            3.3.2  Surface Brightness Illuminated by One Point of the Spherical Light Source
            3.3.3  Surface Brightness Illuminated by the Spherical Light Source
            3.3.4  Rotating the Source-Object Coordinate to the Camera-Object Coordinate
            3.3.5  Surface Reflection Model
        3.4  Rectangular Light Model in 3D Coordinate System
            3.4.1  Radiance of a Rectangular Light Source
            3.4.2  Surface Brightness Illuminated by One Point of the Rectangular Light Source
            3.4.3  Surface Brightness Illuminated by a Rectangular Light Source
    Chapter 4  Shape Recovery from Specular Reflection
        4.1  Introduction
        4.2  Theory of the First Method
            4.2.1  Torrance-Sparrow Reflectance Model
            4.2.2  Relationship Between Surface Shapes from Different Images
        4.3  Theory of the Second Method
            4.3.1  Getting the Depth of a Reference Point
            4.3.2  Recovering the Depth and Normal of a Specular Point Near the Reference Point
            4.3.3  Recovering Local Shape of the Object by Specular Reflection
        4.4  Experimental Results and Discussions
            4.4.1  Experimental System and Results of the First Method
            4.4.2  Experimental System and Results of the Second Method
    Chapter 5  Shape Recovery from One Sequence of Color Images
        5.1  Introduction
        5.2  Temporal-color Space Analysis of Reflection
        5.3  Estimation of Illuminant Color Ks
        5.4  Estimation of the Color Vector of the Body-reflection Component Kl
        5.5  Separating Specular and Body Reflection Components and Recovering Surface Shape and Reflectance
        5.6  Experiment Results and Discussions
            5.6.1  Results with Interreflection
            5.6.2  Results Without Interreflection
            5.6.3  Simulation Results
        5.7  Analysis of Various Factors on the Accuracy
            5.7.1  Effects of Number of Samples
            5.7.2  Effects of Noise
            5.7.3  Effects of Object Size
            5.7.4  Camera Optical Axis Not in Light Source Plane
            5.7.5  Camera Optical Axis Not Passing Through Object Center
    Chapter 6  Shape Recovery from Two Sequences of Images
        6.1  Introduction
        6.2  Method for 3D Shape Recovery from Two Sequences of Images
        6.3  Genetics-Based Method
        6.4  Experimental Results and Discussions
            6.4.1  Simulation Results
            6.4.2  Real Experimental Results
    Chapter 7  Shape from Shading for Non-Lambertian Surfaces
        7.1  Introduction
        7.2  Reflectance Map for Non-Lambertian Color Surfaces
        7.3  Recovering Non-Lambertian Surface Shape from One Color Image
            7.3.1  Segmenting Hybrid Areas from Diffuse Areas Using Hue Information
            7.3.2  Calculating Intensities of Specular and Diffuse Components on Hybrid Areas
            7.3.3  Recovering Shape from Shading
        7.4  Experimental Results and Discussions
            7.4.1  Simulation Results
            7.4.2  Real Experimental Results
    Chapter 8  Shape from Shading under Multiple Extended Light Sources
        8.1  Introduction
        8.2  Reflectance Map for Lambertian Surface Under Multiple Rectangular Light Sources
        8.3  Recovering Surface Shape Under Multiple Rectangular Light Sources
        8.4  Experimental Results and Discussions
            8.4.1  Synthetic Image Results
            8.4.2  Real Image Results
    Chapter 9  Shape from Shading in Unknown Environments by Neural Networks
        9.1  Introduction
        9.2  Shape Estimation
            9.2.1  Shape Recovery Problem under Multiple Rectangular Extended Light Sources
            9.2.2  Forward Network Representation of Surface Normals
            9.2.3  Shape Estimation
        9.3  Application of the Neural Network in Shape Recovery
            9.3.1  Structure of the Neural Network
            9.3.2  Normalization of the Input and Output Patterns
        9.4  Experimental Results and Discussions
            9.4.1  Results for Lambertian Surface under One Rectangular Light
            9.4.2  Results for Lambertian Surface under Four Rectangular Light Sources
            9.4.3  Results for Hybrid Surface under One Rectangular Light Source
            9.4.4  Discussions
    Chapter 10  Summary and Conclusions
        10.1  Summary Results and Contributions
        10.2  Directions of Future Research
    Bibliography
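    For background to the shape-from-shading chapters listed above, the reflectance map for a Lambertian surface under a single distant point source can be written in the usual gradient-space form; this is the textbook formulation only, not the thesis's extended-source or hybrid-surface models, which generalise it:

        R(p, q) = \rho \, \frac{1 + p\,p_s + q\,q_s}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}}

    where (p, q) are the surface gradients, (p_s, q_s) encode the light source direction, and \rho is the albedo.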

    Toward color image segmentation in analog VLSI: Algorithm and hardware

    Standard techniques for segmenting color images are based on finding normalized RGB discontinuities, color histogramming, or clustering techniques in RGB or CIE color spaces. The use of the psychophysical variable hue in HSI space has not been popular due to its numerical instability at low saturations. In this article, we propose the use of a simplified hue description suitable for implementation in analog VLSI. We demonstrate that if the integrated white condition holds, hue is invariant to certain types of highlights, shading, and shadows. This is due to the additive/shift invariance property, a property that other color variables lack. The more restrictive uniformly varying lighting model associated with the multiplicative/scale invariance property shared by both hue and normalized RGB allows invariance to transparencies, and to simple models of shading and shadows. Using binary hue discontinuities in conjunction with first-order surface interpolation, we demonstrate these invariant properties and compare them against the performance of RGB, normalized RGB, and CIE color spaces. We argue that working in HSI space offers an effective method for segmenting scenes in the presence of confounding cues due to shading, transparency, highlights, and shadows. Based on this work, we designed and fabricated for the first time an analog CMOS VLSI circuit with on-board phototransistor input that computes normalized color and hue.
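    The claimed invariances are easy to verify numerically. The sketch below is our own illustration, not the article's analog circuit; the atan2-based hue is one common simplified definition and the names are ours. It shows that such a hue is unchanged by both an additive offset and a positive scaling of the RGB channels, whereas normalized RGB is only scale invariant.

        import numpy as np

        def hue(rgb):
            """Simplified hue angle (radians) computed from linear RGB."""
            r, g, b = rgb
            return np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)

        def normalized_rgb(rgb):
            """Chromaticity (r, g, b) = RGB / (R + G + B)."""
            rgb = np.asarray(rgb, dtype=float)
            return rgb / rgb.sum()

        pixel = np.array([0.6, 0.3, 0.1])
        shifted = pixel + 0.2   # additive offset (shift), e.g. a simple highlight model
        scaled = pixel * 0.5    # multiplicative change (scale), e.g. shading or shadow

        # Hue is identical for all three pixels (additive/shift and multiplicative/scale
        # invariance); normalized RGB survives only the scaling, not the shift.
        print(hue(pixel), hue(shifted), hue(scaled))
        print(normalized_rgb(pixel), normalized_rgb(scaled), normalized_rgb(shifted))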

    New 3D scanning techniques for complex scenes

    This thesis presents new 3D scanning methods for complex scenes, such as surfaces with fine-scale geometric details, translucent objects, low-albedo objects, glossy objects, scenes with interreflection, and discontinuous scenes. Starting from the observation that specular reflection is a reliable visual cue for surface mesostructure perception, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects. Translucent objects pose a difficult problem for traditional optical 3D scanning techniques. We analyze and compare two descattering methods, phase-shifting and polarization, and further present several phase-shifting and polarization based methods for high quality 3D scanning of translucent objects. We introduce the concept of modulation based separation, where a high frequency signal is multiplied on top of another signal. The modulated signal inherits the separation properties of the high frequency signal and allows us to remove artifacts due to global illumination. This method can be used for efficient 3D scanning of scenes with significant subsurface scattering and interreflections.
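    As a concrete reference for the phase-shifting component, the sketch below decodes per-pixel phase from N sinusoidally shifted patterns in the standard way; it is a generic textbook formulation with names of our own choosing, not the thesis's descattering or modulation-based pipeline.

        import numpy as np

        def decode_phase(images):
            """Wrapped per-pixel phase from N phase-shifted sinusoidal patterns.

            images: array of shape (N, H, W); pattern i is shifted by 2*pi*i/N.
            Returns the wrapped phase and a modulation (amplitude) map that can
            be used to mask unreliable pixels.
            """
            imgs = np.asarray(images, dtype=float)
            n = imgs.shape[0]
            shifts = 2.0 * np.pi * np.arange(n) / n
            s = np.tensordot(np.sin(shifts), imgs, axes=(0, 0))
            c = np.tensordot(np.cos(shifts), imgs, axes=(0, 0))
            phase = np.arctan2(-s, c)
            modulation = 2.0 / n * np.sqrt(s * s + c * c)
            return phase, modulation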

    Inferring surface shape from specular reflections


    Outdoor computer vision and weed control


    Extending Minkowski norm illuminant estimation

    The ability to obtain colour images invariant to changes of illumination is called colour constancy. An algorithm for colour constancy takes sensor responses - digital images - as input, estimates the ambient light and returns a corrected image in which the illuminant influence over the colours has been removed. In this thesis we investigate the step of illuminant estimation for colour constancy and aim to extend the state of the art in this field. We first revisit the Minkowski Family Norm framework for illuminant estimation because, of all the simple statistical approaches, it is the most general formulation and, crucially, delivers the best results. This thesis makes four technical contributions. First, we reformulate the Minkowski approach to provide better estimation when a constraint on illumination is employed. Second, we show how the method can be implemented to run faster than previous algorithms by orders of magnitude. Third, we show how a simple edge-based variant delivers improved estimation compared with the state of the art across many datasets. In contradistinction to the prior state of the art, our definition of edges is fixed (a simple combination of first and second derivatives), i.e. we do not tune our algorithm to particular image datasets. This performance is further improved by incorporating a gamut constraint on surface colour - our fourth contribution. The thesis finishes by considering our approach in the context of a recent OSA competition run to benchmark computational algorithms operating on physiologically relevant cone-based input data. Here we find that Constrained Minkowski Norms operating on spectrally sharpened cone sensors (linear combinations of the cones that behave more like camera sensors) support competition-leading illuminant estimation.
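    For reference, the core Minkowski family (Shades-of-Grey style) estimator is only a few lines; the sketch below is that basic formulation, without the constraints, edge terms, gamut restriction, or sharpened cone sensors developed in the thesis (function names and the default norm p are our own choices).

        import numpy as np

        def minkowski_illuminant(image, p=6):
            """Minkowski p-norm illuminant estimate from a linear RGB image (H, W, 3).

            p = 1 reduces to Grey-World; large p approaches Max-RGB.
            Returns a unit-length RGB vector.
            """
            flat = image.reshape(-1, 3).astype(float)
            est = np.power(np.mean(np.power(flat, p), axis=0), 1.0 / p)
            return est / np.linalg.norm(est)

        def correct(image, illuminant):
            """Simple diagonal (von Kries) correction toward a neutral illuminant."""
            gain = illuminant.mean() / illuminant
            return image * gain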

    Mutual Illumination Photometric Stereo

    Many techniques have been developed in computer vision to recover three-dimensional shape from two-dimensional images. These techniques impose various combinations of assumptions and restrictions on conditions to produce a representation of shape (e.g. surface normals or a height map). Although great progress has been made, the problem remains far from solved. In this thesis we propose a new approach to shape recovery, namely 'mutual illumination photometric stereo'. We exploit the presence of colourful mutual illumination in an environment to recover the shape of objects from a single image.
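    For background, classical multi-image Lambertian photometric stereo solves a per-pixel least-squares problem for albedo-scaled normals; the thesis departs from this by exploiting colourful mutual illumination in a single image. The sketch below is the textbook formulation only, with names of our own choosing, not the thesis's method.

        import numpy as np

        def photometric_stereo(intensities, lights):
            """Classical Lambertian photometric stereo.

            intensities: (K, H, W) images taken under K known distant lights.
            lights: (K, 3) array of unit lighting directions.
            Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
            """
            k, h, w = intensities.shape
            obs = intensities.reshape(k, -1)
            # Solve lights @ g = obs in the least-squares sense; g = albedo * normal.
            g, *_ = np.linalg.lstsq(lights, obs, rcond=None)
            albedo = np.linalg.norm(g, axis=0)
            normals = (g / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
            return normals, albedo.reshape(h, w)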