
    Automatic Detection and Correction for Glossy Reflections in Digital Photograph

    The popularization of digital technology has made shooting digital photos and using related applications a part of daily life. However, the use of flash to compensate for low ambient lighting often leads to overexposure or glossy reflections. This study proposes an automatic detection and inpainting technique to correct overexposed faces in digital photographs. The algorithm segments skin color in the photo and uses face detection to determine candidate bright spots on the face. The bright spots are identified through statistical analysis of color brightness and filtering, and are then corrected with inpainting. The experimental results demonstrate the high accuracy and efficiency of the method.
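The detect-then-inpaint pipeline summarised in the abstract can be illustrated as follows. This is a minimal, hypothetical sketch: the mean-plus-k-sigma brightness rule stands in for the paper's statistical analysis and filtering, and the 4-neighbour diffusion stands in for its inpainting technology.

```python
import numpy as np

def detect_bright_spots(gray, k=2.0):
    """Flag pixels whose brightness exceeds mean + k*std (a hypothetical
    rule standing in for the paper's statistical filtering)."""
    mu, sigma = gray.mean(), gray.std()
    return gray > mu + k * sigma

def inpaint_mean(gray, mask, iters=50):
    """Very simple diffusion inpainting: repeatedly replace masked pixels
    with the average of their 4-neighbours."""
    img = gray.astype(float).copy()
    for _ in range(iters):
        padded = np.pad(img, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[mask] = neigh[mask]
    return img

# Toy example: a flat grey skin patch with one overexposed spot.
patch = np.full((9, 9), 100.0)
patch[4, 4] = 255.0
mask = detect_bright_spots(patch)
fixed = inpaint_mean(patch, mask)
```

The overexposed pixel is the only one flagged, and diffusion pulls it back to the surrounding skin tone.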

    Design of an Ultra-wideband Radio Frequency Identification System with Chipless Transponders

    State-of-the-art commercially available radio-frequency identification (RFID) transponders are usually composed of an antenna and an application-specific integrated circuit chip, which still makes them costly compared to the well-established barcode technology. Therefore, this work proposes a novel low-cost RFID system based on passive chipless RFID transponders manufactured from conductive strips on flexible substrates. The chipless RFID transponders follow a specific structural design whose aim is to modify the shape of the impinging electromagnetic wave, embed an identification code in it, and backscatter the encoded signal to the reader. This dissertation comprises multidisciplinary research encompassing the design of low-cost chipless RFID transponders with a novel frequency coding technique. Unlike most approaches in the literature, this one considers the communication channel effects and assigns a unique frequency response to each transponder. Hence, the identification codes differ enough to reduce the detection error and improve their automatic recognition by the reader under normal working conditions. The chipless RFID transponders are manufactured using different materials and state-of-the-art mass-production fabrication processes, such as printed electronics. Moreover, two different reader front-ends working in the ultra-wideband (UWB) frequency range are used to interrogate the chipless RFID transponders. The first is built from high-performance off-the-shelf components following the stepped frequency modulation (SFM) radar principle, and the second is a commercially available impulse radio (IR) radar.
Finally, the two readers are programmed with algorithms based on the conventional minimum distance and maximum likelihood detection techniques, considering the whole transponder radio frequency (RF) response instead of following the commonly used approach of focusing on specific parts of the spectrum to detect dips or peaks. The programmed readers automatically identify when a chipless RFID transponder is placed within their interrogation zone and proceed to recognize its embedded identification code, yielding two novel, fully automatic SFM- and IR-RFID readers for chipless transponders. The SFM-RFID system successfully decodes up to eight different chipless RFID transponders placed sequentially at a maximum reading range of 36 cm. The IR-RFID system decodes up to four sequentially and two simultaneously placed chipless RFID transponders within a 50 cm range.
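The whole-response minimum-distance decoding described above can be sketched like this. The reference spectra below are hypothetical random vectors; the actual readers compare measured UWB responses against calibrated transponder signatures.

```python
import numpy as np

def min_distance_decode(measured, references):
    """Return the index of the reference response with the smallest
    Euclidean distance to the measured spectrum (whole-response matching,
    not peak/dip search)."""
    dists = [np.linalg.norm(measured - ref) for ref in references]
    return int(np.argmin(dists))

# Hypothetical frequency responses of three transponders over 64 bins.
rng = np.random.default_rng(0)
refs = rng.normal(size=(3, 64))
noisy = refs[1] + 0.1 * rng.normal(size=64)  # AWGN-corrupted observation
decoded = min_distance_decode(noisy, refs)
```

For an AWGN channel with equally likely codes, this minimum-distance rule coincides with maximum-likelihood detection.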

    A Practical Reflectance Transformation Imaging Pipeline for Surface Characterization in Cultural Heritage

    We present a practical acquisition and processing pipeline to characterize the surface structure of cultural heritage objects. Using a free-form Reflectance Transformation Imaging (RTI) approach, we acquire multiple digital photographs of the studied object shot from a stationary camera. In each photograph, a light is freely positioned around the object in order to cover a wide variety of illumination directions. Multiple reflective spheres and white Lambertian surfaces are added to the scene to automatically recover light positions and to compensate for non-uniform illumination. An estimation of geometry and reflectance parameters (e.g., albedo, normals, polynomial texture maps coefficients) is then performed to locally characterize surface properties. The resulting object description is stable and representative enough of surface features to reliably provide a characterization of measured surfaces. We validate our approach by comparing RTI-acquired data with data acquired with a high-resolution microprofilometer. (Funding: European Union (EU), Horizon 2020; action H2020-EU.3.6.3, Reflective societies: cultural heritage and European identity; acronym Scan4Reco; grant number 66509.)
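The per-pixel estimation of polynomial texture map (PTM) coefficients mentioned above reduces to a linear least-squares fit at each pixel. A minimal sketch with synthetic data: the 6-term biquadratic PTM basis is standard, but the light directions and intensities here are made up for illustration.

```python
import numpy as np

def fit_ptm(intensities, light_dirs):
    """Least-squares fit of the 6 Polynomial Texture Map coefficients for
    one pixel, given per-image intensities and (lu, lv) light directions."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

# Synthetic check: generate intensities from known coefficients, recover them.
rng = np.random.default_rng(1)
dirs = rng.uniform(-1, 1, size=(20, 2))
true = np.array([0.3, -0.2, 0.1, 0.5, 0.4, 0.8])
basis = np.stack([dirs[:, 0]**2, dirs[:, 1]**2, dirs[:, 0] * dirs[:, 1],
                  dirs[:, 0], dirs[:, 1], np.ones(20)], axis=1)
obs = basis @ true
recovered = fit_ptm(obs, dirs)
```

With 20 noise-free observations for 6 unknowns, the fit recovers the generating coefficients exactly.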

    Image based surface reflectance remapping for consistent and tool independent material appearance

    Physically-based rendering in computer graphics requires knowledge of material properties, in addition to 3D shapes, textures, and colors, in order to solve the rendering equation. A number of material models have been developed, since no single model is currently able to reproduce the full range of available materials. Although only a few material models have been widely adopted in current rendering systems, the lack of standardisation causes several issues in the 3D modelling workflow, leading to a heavy tool dependency of material appearance. In industry, final decisions about products are often based on a virtual prototype, a crucial step in the production pipeline, usually developed through a collaboration among several departments that exchange data. Unfortunately, exchanged data often differs from the original when imported into a different application. As a result, delivering consistent visual results requires time, labour, and computational cost. This thesis begins with an examination of the current state of the art in material appearance representation and capture, in order to identify a suitable strategy to tackle material appearance consistency. Automatic solutions to this problem are suggested in this work, accounting for the constraints of real-world scenarios, where the only available information is a reference rendering and the renderer used to obtain it, with no access to the implementation of the shaders. In particular, two image-based frameworks working under these constraints are proposed. The first, validated by means of perceptual studies, is aimed at remapping BRDF parameters and is useful when the parameters used for the reference rendering are available. The second provides consistent material appearance across different renderers even when the parameters used for the reference are unknown. It allows the selection of an arbitrary reference rendering tool and manipulates the output of other renderers in order to be consistent with the reference.
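Treating each renderer as a black box, parameter remapping can be sketched as a brute-force image-space search. This is a toy stand-in for illustration, not the thesis' actual framework; the two "renderers" below are invented scalar functions.

```python
import numpy as np

def remap_parameter(reference_img, render, candidates):
    """Pick the candidate parameter whose rendering is closest to the
    reference image in the L2 sense; the renderer is treated as a black
    box, as in image-based remapping."""
    errs = [np.linalg.norm(render(p) - reference_img) for p in candidates]
    return candidates[int(np.argmin(errs))]

# Toy "renderers": two black boxes that scale a parameter differently.
render_a = lambda p: np.full((4, 4), 10.0 * p)   # reference tool
render_b = lambda p: np.full((4, 4), 25.0 * p)   # target tool
reference = render_a(0.5)                        # made with p = 0.5 in tool A
best = remap_parameter(reference, render_b, np.linspace(0.0, 1.0, 101))
```

The search finds that p = 0.2 in tool B reproduces the look of p = 0.5 in tool A, which is exactly the kind of cross-tool correspondence the thesis targets.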

    Surface analysis and fingerprint recognition from multi-light imaging collections

    Multi-light imaging captures a scene from a fixed viewpoint through multiple photographs, each of which is illuminated from a different direction. Every image reveals information about the surface, with the intensity reflected from each point being measured for all lighting directions. The images captured are known as multi-light image collections (MLICs), for which a variety of techniques have been developed over recent decades to acquire information from the images. These techniques include shape from shading, photometric stereo, and reflectance transformation imaging (RTI). Pixel coordinates from one image in a MLIC correspond to exactly the same position on the surface across all images in the MLIC, since the camera does not move. In chapter 1 we review the literature relevant to the methods presented in this thesis, describe different types of reflections and surface types, and explain the multi-light imaging process. In chapter 2 we present a novel automated RTI method which requires no calibration equipment (i.e. shiny reference spheres or 3D printed structures, as other methods require) and automatically computes the lighting direction and compensates for non-uniform illumination. In chapter 3 we describe our novel MLIC method, termed Remote Extraction of Latent Fingerprints (RELF), which segments each multi-light imaging photograph into superpixels (small groups of pixels) and uses a neural network classifier to determine whether or not each superpixel contains a fingerprint. The RELF algorithm then mosaics the superpixels classified as fingerprint together in order to obtain a complete latent print image, entirely contactlessly. In chapter 4 we detail our work with the Metropolitan Police Service (MPS) UK, who described their needs and requirements to us, which helped us to create a prototype RELF imaging device; it is now being tested by MPS officers, who are validating the quality of the latent prints extracted using our technique. In chapter 5 we further develop our multi-light imaging latent fingerprint technique to extract latent prints from curved surfaces and automatically correct for surface curvature distortions. We have a patent pending for this method.
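The mosaicking step of RELF (keep only the superpixels a classifier flags as fingerprint) can be sketched as follows. The grid "superpixels" and the variance-based classifier are hypothetical stand-ins for the thesis' segmentation and neural network.

```python
import numpy as np

def mosaic_fingerprint(image, labels, is_fingerprint):
    """Keep only the superpixels the classifier flags as fingerprint;
    everything else is zeroed, leaving the mosaicked latent print."""
    out = np.zeros_like(image)
    for lbl in np.unique(labels):
        region = labels == lbl
        if is_fingerprint(image[region]):
            out[region] = image[region]
    return out

# Toy example: two "superpixels" (left/right halves) and a hypothetical
# classifier that flags high-variance, ridge-like regions.
img = np.zeros((4, 8))
img[:, 4:] = np.tile([0.0, 1.0], (4, 2))   # ridge-like stripes on the right
labels = np.zeros((4, 8), dtype=int)
labels[:, 4:] = 1
classifier = lambda pix: pix.std() > 0.1
print_img = mosaic_fingerprint(img, labels, classifier)
```

Only the striped right half survives the mosaic, mimicking how RELF assembles fingerprint-bearing superpixels into one latent print image.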

    Inverse tone mapping

    The introduction of High Dynamic Range Imaging in computer graphics has produced a change in imaging comparable to, or even greater than, the introduction of colour photography. Light can now be captured, stored, processed, and finally visualised without losing information. Moreover, new applications that exploit physical values of the light have been introduced, such as re-lighting of synthetic/real objects, or enhanced visualisation of scenes. However, these new processing and visualisation techniques cannot be applied to the movies and pictures produced by photography and cinematography over more than one hundred years. This thesis introduces a general framework for expanding legacy content into High Dynamic Range content. The expansion is achieved while avoiding artefacts, producing images suitable for visualisation and for re-lighting of synthetic/real objects. Moreover, a methodology based on psychophysical experiments and computational metrics is presented to measure the performance of expansion algorithms. Finally, a compression scheme for High Dynamic Range textures, inspired by the framework, is proposed and evaluated.
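A minimal illustration of an expansion operator (inverse tone mapping): linearise the LDR values and boost them with a power curve. The gamma, peak luminance, and exponent below are illustrative assumptions, not the thesis' actual operator.

```python
import numpy as np

def expand_ldr(ldr, gamma=2.2, max_luminance=1000.0, exponent=2.0):
    """Toy inverse tone mapping: undo the display gamma of 8-bit LDR
    values, then expand highlights with a power curve so that white maps
    to the chosen peak luminance (in cd/m^2)."""
    lin = (ldr / 255.0) ** gamma        # undo display gamma
    return max_luminance * lin ** exponent

ldr = np.array([0.0, 128.0, 255.0])
hdr = expand_ldr(ldr)
```

Black stays at zero, white reaches the peak luminance, and mid-tones are expanded monotonically, which is the basic contract any expansion operator must honour before artefact avoidance is even considered.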

    Image-based Material Editing

    Photo editing software allows digital images to be blurred, warped, or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a set of methods for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image and an alpha matte specifying the object. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies. Thus, it may be possible to produce a visually compelling illusion of material transformations without fully reconstructing the lighting or geometry. We employ a range of algorithms depending on the target material. First, an approximate depth map is derived from the image intensities using bilateral filters. The resulting surface normals are then used to map data onto the surface of the object to specify its material appearance. To create transparent or translucent materials, the mapped data are derived from the object's background. To create textured materials, the mapped data are a texture map. The surface normals can also be used to apply arbitrary bidirectional reflectance distribution functions to the surface, allowing us to simulate a wide range of materials. To facilitate the process of material editing, we generate the HDR image with a novel algorithm that is robust against noise in individual exposures. This ensures the removal of any noise that could adversely affect the shape recovery of the objects. We also present an algorithm to automatically generate alpha mattes. This algorithm requires as input two images, one where the object is in focus and one where the background is in focus, and then automatically produces an approximate matte indicating which pixels belong to the object. The result is then improved by a second algorithm to generate an accurate alpha matte, which can be given as input to our material editing techniques.
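Recovering surface normals from an approximate depth map, as used above to map data onto the object, can be sketched with finite differences. The plane used for checking is synthetic; the thesis derives its depth from filtered image intensities.

```python
import numpy as np

def normals_from_depth(depth):
    """Per-pixel surface normals from a depth map: n is proportional to
    (-dz/dx, -dz/dy, 1), normalised to unit length."""
    dzdy, dzdx = np.gradient(depth)     # row gradient, then column gradient
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# Sanity check: a plane z = 0.5*x must give one constant tilted normal.
x = np.arange(8, dtype=float)
depth = np.tile(0.5 * x, (8, 1))
n = normals_from_depth(depth)
```

Since `np.gradient` is exact on linear data, every pixel of the plane gets the same normal, tilted away from the direction of increasing depth.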

    New 3D scanning techniques for complex scenes

    This thesis presents new 3D scanning methods for complex scenes, such as surfaces with fine-scale geometric details, translucent objects, low-albedo objects, glossy objects, scenes with interreflection, and discontinuous scenes. Starting from the observation that specular reflection is a reliable visual cue for surface mesostructure perception, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects. Translucent objects pose a difficult problem for traditional optical 3D scanning techniques. We analyze and compare two descattering methods, phase-shifting and polarization, and further present several phase-shifting and polarization based methods for high-quality 3D scanning of translucent objects. We introduce the concept of modulation-based separation, where a high-frequency signal is multiplied on top of another signal. The modulated signal inherits the separation properties of the high-frequency signal and allows us to remove artifacts due to global illumination. This method can be used for efficient 3D scanning of scenes with significant subsurface scattering and interreflections.
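The phase-shifting approach mentioned above typically recovers a wrapped phase from three fringe images shifted by 120 degrees. Below is a sketch of the standard three-step formula on synthetic fringes; it illustrates the principle only, not the thesis' full descattering pipeline.

```python
import numpy as np

def phase_from_three_step(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by 120 degrees:
    phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic fringes with a known phase ramp.
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
a, b = 0.5, 0.4                          # ambient level, modulation amplitude
i1 = a + b * np.cos(phi_true - 2 * np.pi / 3)
i2 = a + b * np.cos(phi_true)
i3 = a + b * np.cos(phi_true + 2 * np.pi / 3)
phi = phase_from_three_step(i1, i2, i3)
```

The ambient term and modulation amplitude cancel out of the arctangent, which is why phase-shifting is robust to uniform illumination offsets.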

    Polarimetric remote sensing system analysis: Digital Imaging and Remote Sensing Image Generation (DIRSIG) model validation and impact of polarization phenomenology on material discriminability

    In addition to the spectral information acquired by traditional multi/hyperspectral systems, passive electro-optical and infrared (EO/IR) polarimetric sensors also measure the polarization response of different materials in the scene. Such an imaging modality can be useful in improving surface characterization; however, the characteristics of polarimetric systems have not been completely explored by the remote sensing community. Therefore, the main objective of this research was to advance our knowledge in polarimetric remote sensing by investigating the impact of polarization phenomenology on material discriminability. The first part of this research focuses on system validation, where the major goal was to assess the fidelity of the polarimetric images simulated using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. A theoretical framework, based on polarization vision models used for animal vision studies and industrial defect detection applications, was developed within which the major components of the polarimetric image chain were validated. In the second part of this research, a polarization-physics based approach for improved material discriminability was proposed. This approach utilizes the angular variation in the polarization response to infer the physical characteristics of the observed surface by imaging the scene in three different view directions. The usefulness of the proposed approach in improving detection performance in the absence of a priori knowledge about the target geometry was demonstrated. Sensitivity analysis of the proposed system for different scene-related parameters was performed to identify the imaging conditions under which material discriminability is maximized. Furthermore, the detection performance of the proposed polarimetric system was compared to that of a hyperspectral system to identify scenarios where polarization information can be very useful in improving the target contrast.
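The polarization response measured by such sensors is commonly summarised by the linear Stokes parameters. Below is a sketch of the standard four-angle computation; the fully polarised test values follow Malus' law and are illustrative only, not output of the DIRSIG model.

```python
import numpy as np

def stokes_from_four(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization (DoLP)
    from intensities measured behind a polarizer at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # +45 vs. -45 degrees
    dolp = np.sqrt(s1**2 + s2**2) / s0
    return s0, s1, s2, dolp

# Fully horizontally polarized unit-intensity light (Malus: I = cos^2 theta).
i0, i45, i90, i135 = 1.0, 0.5, 0.0, 0.5
s0, s1, s2, dolp = stokes_from_four(i0, i45, i90, i135)
```

A DoLP of 1 indicates fully linearly polarized light; natural unpolarized backgrounds give DoLP near 0, which is the contrast mechanism polarimetric detection exploits.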