
    ShadingNet: Image Intrinsics by Fine-Grained Shading Decomposition

    In general, intrinsic image decomposition algorithms interpret shading as one unified component that includes all photometric effects. Because shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect (ambient light and shadows) shading subcomponents, with the aim of distinguishing strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects so that its disentanglement capability can be analyzed. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on the NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets. Comment: Submitted to the International Journal of Computer Vision (IJCV).
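    The fine-grained model described above can be illustrated with a toy sketch: an image is treated as albedo modulated by the sum of direct and indirect shading. The exact formulation used by ShadingNet may differ; all names and values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy sketch of a fine-grained intrinsic image model (an assumption, not the
# paper's exact formulation): observation = albedo * (direct + indirect).
rng = np.random.default_rng(0)
h, w = 4, 4
albedo   = rng.uniform(0.2, 0.9, (h, w))   # reflectance
direct   = rng.uniform(0.0, 1.0, (h, w))   # direct illumination shading
indirect = rng.uniform(0.0, 0.3, (h, w))   # ambient light and shadows

image = albedo * (direct + indirect)       # composed observation

# Given predicted components, the reconstruction residual is one simple
# proxy for how well the decomposition explains the image.
recon_err = np.abs(image - albedo * (direct + indirect)).mean()
print(recon_err)  # 0.0 here by construction, since we reuse the true components
```

    A unified-shading method would have to explain `direct + indirect` with a single smooth component, which is exactly where strong shadows can be confused with albedo edges.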

    Extended Intensity Range Imaging

    A single composite image with an extended intensity range is generated by combining disjoint regions from different images of the same scene. The set of images is obtained with a charge-coupled device (CCD) set to different flux integration times. By limiting differences in the integration times so that the ranges of output pixel values overlap considerably, each pixel is assigned the value measured at its spatial location in the most sensitive range, i.e. where values are both below saturation and most precisely specified. Integration times are lengthened geometrically from a minimum at which all pixel values are below saturation until all dark regions emerge from the lowest quantization level. The method is applied to an example scene, and the effect the composite images have on traditional low-level imaging methods is also examined.
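    The selection rule described above can be sketched in a few lines: per pixel, take the reading from the longest integration time that is still below saturation and scale it to a common flux unit. The function name, saturation level and fallback behavior are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Sketch of the composite-image idea (names and values are illustrative):
# for each pixel, use the longest integration time that stays below
# saturation -- the most sensitive unsaturated reading -- scaled by time.
SATURATION = 255

def extended_range(images, times):
    """images: 2D arrays ordered from shortest to longest integration time."""
    stack = np.stack([img / t for img, t in zip(images, times)])  # flux estimates
    unsaturated = np.stack([img < SATURATION for img in images])
    # index of the longest unsaturated exposure per pixel
    idx = unsaturated.shape[0] - 1 - np.argmax(unsaturated[::-1], axis=0)
    rows, cols = np.indices(images[0].shape)
    return stack[idx, rows, cols]

# Integration times lengthened geometrically, e.g. doubling each step.
times = [1, 2, 4]
imgs = [np.array([[100, 255]]), np.array([[200, 255]]), np.array([[255, 255]])]
flux = extended_range(imgs, times)
# pixel 0 -> 200/2 = 100.0 (t=2 is the longest unsaturated exposure);
# pixel 1 is saturated everywhere, so the longest exposure is the fallback.
print(flux)
```

    Because consecutive exposure ranges overlap considerably, each pixel's chosen reading sits well inside the sensor's usable range rather than near the quantization floor.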

    Investigation on the application of ZnO nanostructures to improve the optical performance of white light-emitting diodes

    Though combining blue LED chips with yellow phosphor has been the most common method in white light-emitting diode (WLED) production, the attained angular correlated color temperature (CCT) uniformity is still poor. Thus, this article proposes adding ZnO nanostructures to WLED packages to promote the color uniformity of the WLEDs. The outcomes of the research demonstrate that utilizing ZnO in different amounts affects the scattering energy and the CCT deviations in WLED packages to different extents. In particular, adding node-like (N-ZnO), sheet-like (S-ZnO), and rod-like (R-ZnO) nanostructures decreases the CCT deviation from 3455.49 K to 96.30 K, 40.03 K, and 60.09 K, respectively. Meanwhile, with 0.25% N-ZnO, 0.75% S-ZnO, and 0.25% R-ZnO, WLED devices can achieve both better CCT homogeneity and a smaller reduction in luminous flux. The results of this article can serve as a valuable reference for manufacturers seeking to improve their WLED products.
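    Angular CCT uniformity is commonly summarized as the spread of CCT across viewing angles. The sketch below shows one plausible form of that metric; the deviation definition and all angle-wise CCT values are assumptions for illustration, not data from the article.

```python
# Sketch of an angular CCT-deviation metric (an assumed definition, not the
# article's): the spread of correlated color temperature over viewing angles.
def cct_deviation(cct_by_angle):
    return max(cct_by_angle) - min(cct_by_angle)

# Hypothetical CCT-vs-angle curves in kelvin, before and after adding ZnO.
bare_package = [7000, 6200, 5200, 4300, 3544.51]
with_s_zno   = [5520, 5515, 5500, 5490, 5479.97]

print(cct_deviation(bare_package))  # large spread: poor angular uniformity
print(cct_deviation(with_s_zno))    # small spread: better angular uniformity
```

    Under this metric, the article's reported drop from thousands of kelvin to tens of kelvin corresponds to a far flatter CCT-versus-angle curve.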

    Material Recognition Meets 3D Reconstruction: Novel Tools for Efficient, Automatic Acquisition Systems

    For decades, the accurate acquisition of geometry and reflectance properties has represented one of the major objectives in computer vision and computer graphics, with many applications in industry, entertainment and cultural heritage. Reproducing even the finest details of surface geometry and surface reflectance has become a ubiquitous prerequisite in visual prototyping, advertisement and digital preservation of objects. However, today's acquisition methods are typically designed for only a rather small range of material types. Furthermore, there is still a lack of accurate reconstruction methods for objects with more complex surface reflectance behavior beyond diffuse reflectance. In addition to accurate acquisition techniques, the demand for creating large quantities of digital content also pushes the focus towards fully automatic and highly efficient solutions that allow masses of objects to be acquired as fast as possible. This thesis is dedicated to the investigation of basic components that allow an efficient, automatic acquisition process. We argue that such an efficient, automatic acquisition can be realized when material recognition "meets" 3D reconstruction, and we demonstrate that reliably recognizing the materials of the considered object allows a more efficient geometry acquisition. Therefore, the main objectives of this thesis are the development of novel, robust geometry acquisition techniques for surface materials beyond diffuse surface reflectance, and the development of novel, robust techniques for material recognition. In the context of 3D geometry acquisition, we introduce an improvement of structured light systems, which are capable of robustly acquiring objects ranging from diffuse surface reflectance to even specular surface reflectance with a sufficient diffuse component.
    We demonstrate that the resolution of the reconstruction can be increased significantly for multi-camera, multi-projector structured light systems by using overlapping patterns projected under different projector poses. As the reconstructions obtained with such triangulation-based techniques still contain high-frequency noise due to inaccurately localized correspondences established for images acquired under different viewpoints, we furthermore introduce a novel geometry acquisition technique that complements the structured light system with additional photometric normals and results in significantly more accurate reconstructions. In addition, we present a novel method to acquire the 3D shape of mirroring objects with complex surface geometry. The aforementioned investigations on 3D reconstruction are accompanied by the development of novel tools for reliable material recognition, which can be used in an initial step to recognize the surface materials present and hence to efficiently select the appropriate acquisition techniques based on these classified materials. In the scope of this thesis, we therefore focus on material recognition for scenarios with controlled illumination, as given in lab environments, as well as scenarios with natural illumination, as given in photographs of typical daily life scenes. Finally, based on the techniques developed in this thesis, we provide novel concepts towards efficient, automatic acquisition systems.
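    The "photometric normals" mentioned above are standardly obtained with Lambertian photometric stereo: with at least three known light directions, per-pixel intensities determine the surface normal and albedo by least squares. The thesis's own variant may differ in detail; this is the textbook version as a minimal sketch.

```python
import numpy as np

# Classic Lambertian photometric stereo (the textbook route to photometric
# normals; the thesis's variant may differ). With k >= 3 known light
# directions L (k x 3) and per-pixel intensities I (k,), Lambert's law gives
# I = L @ (albedo * n), which least squares inverts per pixel.
def photometric_normal(L, I):
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * n
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)  # unit light directions

true_n, true_albedo = np.array([0.0, 0.0, 1.0]), 0.8
I = true_albedo * L @ true_n                      # synthetic measurements
n, albedo = photometric_normal(L, I)
print(n, albedo)  # recovers approximately [0, 0, 1] and 0.8
```

    Normals recovered this way capture high-frequency surface detail well, which is exactly what complements the lower-frequency but metrically accurate depth from structured light triangulation.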

    Shape recovery from reflection

    by Yingli Tian. Thesis (Ph.D.), Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 202-222).

    Chapter 1: Introduction
        1.1 Physics-Based Shape Recovery Techniques
        1.2 Proposed Approaches to Shape Recovery in this Thesis
        1.3 Thesis Outline
    Chapter 2: Camera Model in Color Vision
        2.1 Introduction
        2.2 Spectral Linearization
        2.3 Image Balancing
        2.4 Spectral Sensitivity
        2.5 Color Clipping and Blooming
    Chapter 3: Extended Light Source Models
        3.1 Introduction
        3.2 A Spherical Light Model in 2D Coordinate System
            3.2.1 Basic Photometric Function for Hybrid Surfaces under a Point Light Source
            3.2.2 Photometric Function for Hybrid Surfaces under the Spherical Light Source
        3.3 A Spherical Light Model in 3D Coordinate System
            3.3.1 Radiance of the Spherical Light Source
            3.3.2 Surface Brightness Illuminated by One Point of the Spherical Light Source
            3.3.3 Surface Brightness Illuminated by the Spherical Light Source
            3.3.4 Rotating the Source-Object Coordinate to the Camera-Object Coordinate
            3.3.5 Surface Reflection Model
        3.4 Rectangular Light Model in 3D Coordinate System
            3.4.1 Radiance of a Rectangular Light Source
            3.4.2 Surface Brightness Illuminated by One Point of the Rectangular Light Source
            3.4.3 Surface Brightness Illuminated by a Rectangular Light Source
    Chapter 4: Shape Recovery from Specular Reflection
        4.1 Introduction
        4.2 Theory of the First Method
            4.2.1 Torrance-Sparrow Reflectance Model
            4.2.2 Relationship Between Surface Shapes from Different Images
        4.3 Theory of the Second Method
            4.3.1 Getting the Depth of a Reference Point
            4.3.2 Recovering the Depth and Normal of a Specular Point Near the Reference Point
            4.3.3 Recovering Local Shape of the Object by Specular Reflection
        4.4 Experimental Results and Discussions
            4.4.1 Experimental System and Results of the First Method
            4.4.2 Experimental System and Results of the Second Method
    Chapter 5: Shape Recovery from One Sequence of Color Images
        5.1 Introduction
        5.2 Temporal-color Space Analysis of Reflection
        5.3 Estimation of Illuminant Color Ks
        5.4 Estimation of the Color Vector of the Body-reflection Component Kl
        5.5 Separating Specular and Body Reflection Components and Recovering Surface Shape and Reflectance
        5.6 Experiment Results and Discussions
            5.6.1 Results with Interreflection
            5.6.2 Results Without Interreflection
            5.6.3 Simulation Results
        5.7 Analysis of Various Factors on the Accuracy
            5.7.1 Effects of Number of Samples
            5.7.2 Effects of Noise
            5.7.3 Effects of Object Size
            5.7.4 Camera Optical Axis Not in Light Source Plane
            5.7.5 Camera Optical Axis Not Passing Through Object Center
    Chapter 6: Shape Recovery from Two Sequences of Images
        6.1 Introduction
        6.2 Method for 3D Shape Recovery from Two Sequences of Images
        6.3 Genetics-Based Method
        6.4 Experimental Results and Discussions
            6.4.1 Simulation Results
            6.4.2 Real Experimental Results
    Chapter 7: Shape from Shading for Non-Lambertian Surfaces
        7.1 Introduction
        7.2 Reflectance Map for Non-Lambertian Color Surfaces
        7.3 Recovering Non-Lambertian Surface Shape from One Color Image
            7.3.1 Segmenting Hybrid Areas from Diffuse Areas Using Hue Information
            7.3.2 Calculating Intensities of Specular and Diffuse Components on Hybrid Areas
            7.3.3 Recovering Shape from Shading
        7.4 Experimental Results and Discussions
            7.4.1 Simulation Results
            7.4.2 Real Experimental Results
    Chapter 8: Shape from Shading under Multiple Extended Light Sources
        8.1 Introduction
        8.2 Reflectance Map for Lambertian Surface Under Multiple Rectangular Light Sources
        8.3 Recovering Surface Shape Under Multiple Rectangular Light Sources
        8.4 Experimental Results and Discussions
            8.4.1 Synthetic Image Results
            8.4.2 Real Image Results
    Chapter 9: Shape from Shading in Unknown Environments by Neural Networks
        9.1 Introduction
        9.2 Shape Estimation
            9.2.1 Shape Recovery Problem under Multiple Rectangular Extended Light Sources
            9.2.2 Forward Network Representation of Surface Normals
            9.2.3 Shape Estimation
        9.3 Application of the Neural Network in Shape Recovery
            9.3.1 Structure of the Neural Network
            9.3.2 Normalization of the Input and Output Patterns
        9.4 Experimental Results and Discussions
            9.4.1 Results for Lambertian Surface under One Rectangular Light
            9.4.2 Results for Lambertian Surface under Four Rectangular Light Sources
            9.4.3 Results for Hybrid Surface under One Rectangular Light Source
            9.4.4 Discussions
    Chapter 10: Summary and Conclusions
        10.1 Summary Results and Contributions
        10.2 Directions of Future Research
    Bibliography

    Mutual Illumination Photometric Stereo

    Many techniques have been developed in computer vision to recover three-dimensional shape from two-dimensional images. These techniques impose various combinations of assumptions and restrictions on conditions to produce a representation of shape (e.g. surface normals or a height map). Although great progress has been made, the problem remains far from solved. In this thesis we propose a new approach to shape recovery, namely "mutual illumination photometric stereo". We exploit the presence of colourful mutual illumination in an environment to recover the shape of objects from a single image.

    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements, such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded, directors and cinematographers use a technique called on-set previs, which provides a real-time view with little to no processing. Other types of previs, such as technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, interactively displayed on a mobile capture and rendering platform.
    This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The three-tiered framework that is the main contribution of this dissertation comprises: 1) a novel programmable camera architecture that provides programmability of low-level features and a visual programming interface, 2) new algorithms that analyze and decompose the scene photometrically, and 3) a previs interface that leverages the previous two tiers to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used on our programmable camera to relight a scene containing multiple illuminants with respect to color, intensity and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within the scene. We found that, since our method is based on a Lambertian reflectance assumption, it works well under that assumption, but scenes with large amounts of specular reflection can show higher relighting errors, and additional steps are required to mitigate this limitation. Also, scenes containing lights whose colors are too similar can lead to degenerate cases in terms of relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged as a solution for performing multi-illuminant white balancing and light color estimation within a scene with multiple illuminants, without limits on the color range or number of lights. We compared our method to other white balance methods and show that ours is superior when at least one of the light colors is known a priori.
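    The relighting goal described above can be illustrated with a toy two-light Lambertian model: if a pixel's observation decomposes into per-light contributions, relighting amounts to re-weighting those contributions with new light colors. This is only a sketch of the general idea; the actual Symmetric lighting algorithm, and how it estimates the per-light terms, is not reproduced here.

```python
import numpy as np

# Toy two-light Lambertian relighting (an illustrative sketch, not the
# dissertation's Symmetric lighting method). Assumes the per-light geometric
# shading terms have already been separated for each pixel.
def relight(albedo, shading1, shading2, c1, c2):
    """albedo, c1, c2: RGB triples; shading1/2: scalar per-light terms."""
    return albedo * (shading1 * c1 + shading2 * c2)

albedo = np.array([0.5, 0.4, 0.3])
s1, s2 = 0.7, 0.2                                   # per-light geometric terms
warm, cool = np.array([1.0, 0.8, 0.6]), np.array([0.6, 0.8, 1.0])

original = relight(albedo, s1, s2, warm, cool)      # scene as captured
swapped  = relight(albedo, s1, s2, cool, warm)      # light colors exchanged
print(original, swapped)
```

    Note how the specular case breaks this picture: a mirror-like highlight is not `albedo * shading`, which is consistent with the higher relighting errors reported above for strongly specular scenes.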

    Communication of Digital Material Appearance Based on Human Perception

    In daily life, we encounter digital materials and interact with them in numerous situations, for instance when we play computer games, watch a movie, see billboards in the metro station or buy new clothes online. While some of these virtual materials are given by computational models that describe the appearance of a particular surface based on its material and the illumination conditions, some others are presented as simple digital photographs of real materials, as is usually the case for material samples from online retailing stores. The utilization of computer-generated materials entails significant advantages over plain images, as they allow realistic experiences in virtual scenarios, cooperative product design, advertising in the prototype phase or exhibition of furniture and wearables in specific environments. However, even though exceptional material reproduction quality has been achieved in the domain of computer graphics, current technology is still far from highly accurate photo-realistic virtual material reproduction for the wide range of existing categories and, for this reason, many material catalogs still use pictures or even physical material samples to illustrate their collections. An important reason for this gap between digital and real material appearance is that the connections between physical material characteristics and the visual quality perceived by humans are far from well understood. Our investigations intend to shed some light in this direction.
    Concretely, we explore the ability of state-of-the-art digital material models to communicate physical and subjective material qualities, observing that part of the tactile/haptic information (e.g. thickness, hardness) is missing due to the geometric abstractions intrinsic to the models. Consequently, in order to account for the information lost during the digitization process, we investigate the interplay between different sensing modalities (vision and hearing) and discover that particular sound cues, in combination with visual information, facilitate the estimation of such tactile material qualities. One of the shortcomings when studying material appearance is the lack of perceptually derived metrics able to answer questions like "are materials A and B more similar than C and D?", which arise in many computer graphics applications. In the absence of such metrics, our studies compare different appearance models in terms of how capable they are of depicting a collection of meaningful perceptual qualities. To address this problem, we introduce a methodology for computing the perceived pairwise similarity between textures from material samples that makes use of patch-based texture synthesis algorithms and is inspired by the notion of Just-Noticeable Differences. Our technique is able to overcome some of the issues posed by previous texture similarity collection methods and produces meaningful distances between samples. In summary, with the contents presented in this thesis we delve deeply into how humans perceive digital and real materials through different senses, acquire a better understanding of texture similarity by developing a perceptually based metric, and provide a groundwork for further investigations in the perception of digital materials.
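    The kind of question such a metric answers ("are A and B more similar than C and D?") can be illustrated with a much simpler stand-in than the thesis's synthesis-based, JND-inspired method: a pairwise distance matrix built from gray-level histogram features. Everything below is an illustrative simplification, not the proposed technique.

```python
import numpy as np

# Crude stand-in for a perceptual texture metric (the thesis's actual method
# uses patch-based texture synthesis and Just-Noticeable Differences): build
# a pairwise distance matrix from gray-level histograms so textures can be
# compared as in "is A closer to B than to C?".
def histogram_feature(texture, bins=8):
    h, _ = np.histogram(texture, bins=bins, range=(0, 256), density=True)
    return h

def pairwise_distances(textures):
    feats = [histogram_feature(t) for t in textures]
    n = len(feats)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.abs(feats[i] - feats[j]).sum()  # L1 histogram distance
    return D

rng = np.random.default_rng(1)
A = rng.integers(0, 64, (16, 16))     # dark texture
B = rng.integers(0, 64, (16, 16))     # similar dark texture
C = rng.integers(192, 256, (16, 16))  # bright texture
D = pairwise_distances([A, B, C])
print(D[0, 1] < D[0, 2])  # A is closer to B than to C
```

    A genuinely perceptual metric replaces the histogram distance with distances calibrated against human similarity judgments, which is precisely the gap the thesis's JND-based methodology addresses.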