
    Computational low-light flash photography


    Surface analysis and visualization from multi-light image collections

    Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired from a fixed viewpoint under varying illumination, providing large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, and their use has been demonstrated in different application domains to support daily activities. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: acquisition setups, light calibration, and the application areas in which MLICs have been successfully used to support daily analysis work. Next, we discuss the use of MLICs for surface visualization and analysis and the tools available to support that analysis. Here, we cover methods that support direct exploration of the captured MLIC, methods that generate relightable models from an MLIC, non-photorealistic visualization methods that rely on MLICs, methods that estimate normal maps from MLICs, and visualization tools used for MLIC analysis. In chapter 3 we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms relying on MLICs, and we discuss the benchmarks available for validating photometric algorithms, which can also be used to validate other MLIC-based algorithms. In chapter 4, we evaluate the performance of different photometric stereo algorithms using SynthPS for cultural heritage applications, while RealRTI and SynthRTI are used to evaluate the performance of (Neural)RTI methods. Then, in chapter 5, we present a neural network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relit from novel directions, particularly in the case of challenging glossy materials. Finally, in chapter 6, we present a method for the detection of cracks on the surface of paintings from multi-light image acquisitions, which can also be applied to single images, and we conclude our presentation.
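    The abstract describes NeuralRTI only at a high level, so a minimal sketch of a pixel-wise encode-and-relight autoencoder may help make it concrete. This assumes PyTorch; the layer sizes, the 9-element code, the 49 input lights and the usage shown are illustrative guesses, not the architecture actually used in the thesis.

```python
import torch
import torch.nn as nn

class PixelRTIAutoencoder(nn.Module):
    """Per-pixel autoencoder: compress the stack of intensities a pixel exhibits
    under n_lights directions into a short code, then decode that code together
    with a query light direction into a single relit intensity."""
    def __init__(self, n_lights=49, code_dim=9):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_lights, 64), nn.ELU(),
            nn.Linear(64, 32), nn.ELU(),
            nn.Linear(32, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim + 2, 32), nn.ELU(),
            nn.Linear(32, 64), nn.ELU(),
            nn.Linear(64, 1),
        )

    def forward(self, pixel_stack, light_dir):
        code = self.encoder(pixel_stack)                      # (batch, code_dim)
        return self.decoder(torch.cat([code, light_dir], 1))  # (batch, 1)

# Hypothetical usage: 1024 pixels captured under 49 lights, relit from new (lx, ly).
model = PixelRTIAutoencoder()
pixel_stack = torch.rand(1024, 49)
query_dir = torch.rand(1024, 2) * 2 - 1
relit = model(pixel_stack, query_dir)
# Training would minimise e.g. the MSE between `relit` and the intensities actually
# observed under the training light directions; only the per-pixel codes and the
# small decoder then need to be stored for relighting at view time.
```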

    Assessing the effectiveness of RapidEye multispectral imagery for vegetation mapping in Madeira Island (Portugal)

    Madeira Island is a biodiversity hotspot due to its high number of endemic and native plant species. In this work we developed and assessed a methodological framework to produce a RapidEye-based vegetation map. Reasonable accuracies were achieved for a 26-category classification scheme in two different seasons. We tested pixel-based and object-based approaches, as well as the inclusion of a vegetation index band on top of the pre-processed RapidEye band stack. Object-based approaches generally outperformed pixel-based classification, except for linear or highly scattered classes. Adding a vegetation index to the workflow increased the separability of the least separable class pairs (measured by the Jeffries-Matusita distance), but not necessarily the overall accuracy. The Pontius accuracy assessment highlighted class-specific accuracy trade-offs related to different combinations of inputs and methods. In conclusion, the approach to be used should be chosen carefully on the basis of the desired result.
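    As an illustration of adding a vegetation index band on top of the pre-processed RapidEye band stack, here is a minimal sketch assuming NumPy and NDVI as the index; the band ordering and the choice of NDVI are assumptions, since the abstract does not name the index used.

```python
import numpy as np

def stack_with_ndvi(bands):
    """bands: float array of shape (5, H, W), assumed ordered as the RapidEye
    channels blue, green, red, red edge, NIR (scaled to reflectance)."""
    red, nir = bands[2], bands[4]
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)    # guard against division by zero
    return np.concatenate([bands, ndvi[None]], axis=0)  # (6, H, W) classifier input

# Hypothetical usage with synthetic reflectance values:
stack6 = stack_with_ndvi(np.random.rand(5, 256, 256))
```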

    Interactive Shadow Removal


    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements, such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called on-set previs, which provides a real-time view with little to no processing. Other types of previs, such as technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, interactively displayed on a mobile capture and rendering platform. This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The main contribution of this dissertation is a three-tiered framework: 1) a novel programmable camera architecture that provides programmability of low-level features and a visual programming interface, 2) new algorithms that analyze and decompose the scene photometrically, and 3) a previs interface that leverages the previous two to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used to relight a scene with multiple illuminants with respect to color, intensity, and location on our programmable camera. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within the scene. We found that, since our method is based on a Lambertian reflectance assumption, it works well under this assumption, but scenes with large amounts of specular reflection can show higher relighting errors, and additional steps are required to mitigate this limitation. Also, scenes containing lights whose colors are too similar can lead to degenerate cases in terms of relighting.
Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged to perform multi-illuminant white balancing and light color estimation within a scene with multiple illuminants, without limits on the color range or number of lights. We compared our method to other white balance methods and showed that it is superior when at least one of the light colors is known a priori.
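    The relighting and white-balancing claims rest on a Lambertian, multi-illuminant image-formation assumption. The following is a minimal NumPy sketch of that general two-light mixing model, intended only to make the assumption concrete; it is not the dissertation's Symmetric lighting algorithm, and the per-pixel mixing weights are simply assumed known here.

```python
import numpy as np

def render(reflectance, alpha, light_a, light_b):
    """Lambertian two-illuminant model: each pixel is
        I = R * (alpha * L_a + (1 - alpha) * L_b),
    where alpha is the per-pixel mixing weight between the two light colours."""
    mix = alpha[..., None] * light_a + (1.0 - alpha[..., None]) * light_b
    return reflectance * mix

# Hypothetical scene: diffuse reflectance and mixing weights are assumed known.
R = np.random.rand(64, 64, 3)
alpha = np.random.rand(64, 64)
warm, cool = np.array([1.0, 0.8, 0.6]), np.array([0.6, 0.8, 1.0])
captured = render(R, alpha, warm, cool)            # what the camera would see
relit = render(R, alpha, np.ones(3), np.ones(3))   # same scene under neutral lights
```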

    Development of a light-sheet fluorescence microscope employing an ALPAO deformable mirror to achieve video-rate remote refocusing and volumetric imaging.

    There are numerous situations in microscopy where it is desirable to remotely refocus a microscope employing a high numerical aperture (NA) objective lens. This thesis describes the characterisation, development and implementation of an Alpao membrane deformable mirror-based system to achieve this goal for a light-sheet fluorescence microscope (LSFM). The Alpao deformable mirror (DM) DM97-15 used in this work has 97 actuators and is sufficiently fast to perform refocus sweeps at 25 Hz and faster. However, a known issue with using Alpao deformable mirrors in open-loop mode is that they exhibit viscoelastic creep and temperature-dependent variations in the mirror response. The effect of viscoelastic creep was reduced by ensuring that the mirror profile was on average constant on timescales shorter than the characteristic time of the creep. The thermal effect was managed by ensuring that the electrical power delivered to the actuators was constant prior to optimisation and use; this was achieved by keeping the frequency and amplitude of oscillation of the mirror constant prior to optimisation, so that it reached a thermal steady state, and approximately constant during optimisation and use. The image-based optimisation procedure used an estimate of the Strehl ratio of the optical system, calculated from an image of an array of 1 μm diameter holes. The optimisation covered the amount of high-NA defocus and the Zernike modes with Noll indices 4 to 24. The system was tested at 26.3 refocus sweeps per second over a refocus range of -50 to 50 μm with a 40x/0.85 air objective and a 40x/0.80 water immersion objective. The air objective achieved a mean Strehl metric of more than 0.6 over a lateral field of view of 200 x 200 μm and a refocus range of 45 μm; the water objective achieved a mean Strehl metric of more than 0.6 over the same field of view and a larger refocus range of 77 μm. The DM-based refocusing system was then incorporated into an LSFM setup. The spatial resolution of the system was characterised using fluorescent beads imaged volumetrically at 26.3 volumes per second, and the performance of the system was also demonstrated by imaging fluorescent pollen grain samples.
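    To make the image-based optimisation step concrete, here is a minimal NumPy sketch of a greedy search over Zernike mode amplitudes that maximises a Strehl-like sharpness metric; apply_zernike() and capture_image() are hypothetical placeholders for the DM and camera interfaces, and both the metric and the simple per-mode search are illustrative, not the procedure used in the thesis.

```python
import numpy as np

def strehl_metric(image):
    """Crude Strehl-like proxy: normalised peak intensity of the imaged hole array."""
    return float(image.max() / max(float(image.sum()), 1e-12))

def optimise_modes(apply_zernike, capture_image,
                   noll_indices=range(4, 25),
                   amplitudes=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """Greedy per-mode search: for each Noll index, keep the trial amplitude that
    maximises the metric, then move on to the next mode."""
    coeffs = {j: 0.0 for j in noll_indices}
    for j in noll_indices:
        best_a, best_m = 0.0, -np.inf
        for a in amplitudes:
            trial = {**coeffs, j: a}
            apply_zernike(trial)                  # push trial coefficients to the DM
            m = strehl_metric(capture_image())    # image of the 1 um hole array
            if m > best_m:
                best_a, best_m = a, m
        coeffs[j] = best_a
    return coeffs
```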

    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no imaging technology to date has been able to accurately reproduce its capabilities. Matching the extraordinary capabilities of the human eye has therefore become a crucial challenge in digital imaging, as digital photography, video recording, and computer vision applications continue to demand more realistic and accurate imaging reproduction and analytic capabilities. Over the decades, researchers have tried to solve the colour constancy problem, as well as to extend the dynamic range of digital imaging devices, by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partly due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and partly due to the complexity of the mechanisms by which the human visual system achieves effective colour constancy and dynamic range. The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms surpass state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image-processing algorithms shows that, if used within an image signal processor, they enable digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities; the ultimate goal of any state-of-the-art technique or commercial imaging device.
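    As a point of reference for what colour constancy means computationally, here is a minimal NumPy sketch of the classic grey-world baseline; it is shown only to make the task concrete and is not one of the image signal processor algorithms proposed in the thesis.

```python
import numpy as np

def grey_world_balance(rgb):
    """Grey-world assumption: the average scene reflectance is achromatic, so each
    channel is scaled until the image mean is neutral grey."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(rgb * gains, 0.0, 1.0)

# Hypothetical usage on a synthetic image with values in [0, 1]:
balanced = grey_world_balance(np.random.rand(128, 128, 3))
```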