
    Holographic instrumentation applications

    Investigating possibilities and limitations of applying holographic techniques to aerospace technology.

    Kaleidoscopic imaging

    Kaleidoscopes have great potential in computational photography as a tool for redistributing light rays. In time-of-flight imaging the concept of the kaleidoscope is also useful when reconstructing the geometry that causes multiple reflections. This work is a step towards opening new possibilities for the use of mirror systems and towards making their use more practical. Its focus is the analysis of planar kaleidoscope systems to enable their practical applicability in 3D imaging tasks. We analyse important practical properties of mirror systems and develop a theoretical toolbox for dealing with planar kaleidoscopes. Based on this toolbox, we explore the use of planar kaleidoscopes for multi-view imaging and for the acquisition of 3D objects. Knowledge of the mirrors' positions is crucial for these multi-view applications. Conversely, the reconstruction of the geometry of a mirror room from time-of-flight measurements is an important problem in its own right, and we employ the developed tools to solve it using multiple observations of a single scene point.
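    The core geometric operation behind such planar-mirror multi-view setups is reflecting points and camera centres across a mirror plane: each mirror image is equivalent to a view from a virtual camera obtained by reflecting the real camera. A minimal NumPy sketch of this reflection is given below; the mirror normal, offset, and camera position are hypothetical values, not the calibration from the thesis.

    import numpy as np

    def reflect_point(p, n, d):
        """Reflect a 3D point p across the mirror plane {x : n.x = d}, where n is the plane normal."""
        n = n / np.linalg.norm(n)
        return p - 2.0 * (np.dot(n, p) - d) * n

    # A real camera at c looking into the mirror sees the scene as if imaged by a
    # virtual camera located at the reflection of c across the mirror plane.
    c = np.array([0.0, 0.0, 0.0])    # hypothetical real camera centre
    n = np.array([0.0, 0.0, 1.0])    # hypothetical mirror normal
    d = 1.0                          # hypothetical mirror offset along the normal
    print(reflect_point(c, n, d))    # -> [0. 0. 2.]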

    A Compact Neutron Scatter Camera Using Optical Coded-Aperture Imaging

    The detection and localization of fast neutron sources is an important capability for a number of nuclear security areas such as emergency response and arms control treaty verification. Neutron scatter cameras are one technology that can be used to accomplish this task, but current instruments tend to be large (meter scale) and not portable. Using optical coded-aperture imaging, fast plastic scintillator, and fast photodetectors sensitive to single photons, a portable neutron scatter camera was designed and simulated. The design was optimized, an experimental prototype was constructed, and neutron imaging was demonstrated with a tagged 252Cf source in the lab.
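    A common reconstruction scheme in coded-aperture imaging, not spelled out in the abstract, is correlation decoding: the recorded shadowgram is cross-correlated with a decoding array derived from the mask pattern. The sketch below shows this generic step with a hypothetical random mask and a single point source; it illustrates the technique, not the instrument's actual processing chain.

    import numpy as np

    def decode(shadowgram, mask):
        """Correlation decoding: cross-correlate the shadowgram with the decoding
        array G = 2*mask - 1, so open/closed mask elements weight +1/-1."""
        G = 2.0 * mask - 1.0
        return np.real(np.fft.ifft2(np.fft.fft2(shadowgram) * np.conj(np.fft.fft2(G))))

    rng = np.random.default_rng(0)
    mask = (rng.random((32, 32)) < 0.5).astype(float)   # hypothetical 50%-open random mask
    source = np.zeros((32, 32)); source[10, 20] = 1.0   # single point source
    # ideal shadowgram: circular convolution of the source with the mask pattern
    shadowgram = np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(mask)))
    image = decode(shadowgram, mask)
    print(np.unravel_index(image.argmax(), image.shape))  # peak at (10, 20)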

    A Vector Signal Processing Approach to Color

    Surface (Lambertian) color is a useful visual cue for analyzing the material composition of scenes. This thesis adopts a signal processing approach to color vision. It represents color images as fields of 3D vectors, from which we extract region and boundary information. The first problem we face is that of secondary imaging effects, which make image color differ from surface color. We demonstrate a simple but effective polarization-based technique that corrects for these effects. We then propose a systematic approach to scalarizing color that allows us to augment classical image processing tools and concepts for multi-dimensional color signals.
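    One simple way to make the vector-field view concrete is to measure boundary strength as the norm of the RGB vector difference between neighbouring pixels, a scalarized colour gradient. The sketch below illustrates this generic idea; it is not the specific operators developed in the thesis.

    import numpy as np

    def color_edge_strength(img):
        """Treat each pixel as a 3-vector in RGB space and take the Euclidean norm of
        the vector difference to the right and lower neighbours as a scalar edge map."""
        dx = np.diff(img, axis=1, append=img[:, -1:, :])   # horizontal vector differences
        dy = np.diff(img, axis=0, append=img[-1:, :, :])   # vertical vector differences
        return np.sqrt((dx ** 2).sum(-1) + (dy ** 2).sum(-1))

    img = np.zeros((4, 4, 3))
    img[:, 2:, :] = [1.0, 0.0, 0.0]        # a red region next to a black region
    print(color_edge_strength(img)[0])     # non-zero response at the boundary column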

    Increasing the Imaging Speed of Stochastic Optical Reconstruction Microscopy

    This thesis investigates methods of increasing the imaging speed of Stochastic Optical Reconstruction Microscopy (STORM), a super-resolution imaging technique that breaks the diffraction limit by imaging single molecules. Initially, the imaging conditions were optimised to maximise both the signal-to-noise ratio (SNR) and the number of molecules localised, in order to push the system to image at the fastest rate possible. It was found that the lowest readout laser power possible should be used, at a frame rate between 100 and 150 fps. The optimum concentration of MEA, a component of the STORM imaging buffer, was found to be 100 mM. Whilst the optimised conditions afford some speed increase, there is a more fundamental question to be investigated: how many localisations are required for an accurate reconstruction of the sample? The answer to this question allows a reduction in the image acquisition time by imaging only until the minimum number of molecules has been localised. The density of localisations was studied over time, and a simple histogram analysis suggested that a trade-off between density- and localisation-limited regimes is a valid way to increase the imaging speed by determining a "finishing point". The localisation density increased linearly over time for all samples tested; however, some areas reached the cut-off density more quickly than others. Using several analysis methods and simulated data, it was shown that the blinking behaviour of molecules is a random process and that the variability in resolution across an image is mostly due to a non-uniform labelling distribution. Finally, dual-colour samples were imaged, as labelling the target structure with two coloured dyes was hypothesised to double the imaging speed. This was found to be true; however, there was no overall reduction in acquisition time, as dual-labelled samples show a slower increase in localisation density over time.
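    A common rule of thumb for turning a localisation-density cut-off into a finishing point, mentioned here as a standard criterion rather than the exact one used in the thesis, is the Nyquist-style estimate that the supported resolution is roughly 2 / sqrt(localisation density). The sketch below estimates how many frames are needed before a region of interest reaches the density required for a target resolution; all numbers are hypothetical.

    import numpy as np

    def frames_to_reach_density(locs_per_frame, area_um2, target_res_nm):
        """Return the number of frames after which the cumulative localisation density
        in a region supports a target resolution, using res ~ 2 / sqrt(density)."""
        target_density = (2.0 / (target_res_nm * 1e-3)) ** 2     # localisations per um^2
        cumulative = np.cumsum(locs_per_frame) / area_um2
        reached = np.nonzero(cumulative >= target_density)[0]
        return int(reached[0]) + 1 if reached.size else None

    rng = np.random.default_rng(1)
    locs = rng.poisson(lam=8, size=20000)   # hypothetical localisations per frame in a 10 um^2 ROI
    print(frames_to_reach_density(locs, area_um2=10.0, target_res_nm=50))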

    Development of a radiative transport based, fluorescence-enhanced, frequency-domain small animal imaging system

    Herein we present the development of a fluorescence-enhanced, frequency-domain radiative transport reconstruction system designed for small animal optical tomography. The system includes a time-dependent data acquisition instrument, a radiative transport based forward model for predicting the time-dependent propagation of photons in small, non-diffuse volumes, and an algorithm which utilizes the forward model to reconstruct fluorescent yields from air/tissue boundary measurements. The major components of the instrumentation include a charge-coupled device camera, an image intensifier, signal generators, and an optical switch. Time-dependent data were obtained in the frequency domain using homodyne techniques on phantoms with 0.2% to 3% intralipid solutions. Through collaboration with Transpire, Inc., a fluorescence-enhanced, frequency-domain radiative transport equation (RTE) solver was developed. This solver incorporates discrete ordinates, source iteration with diffusion synthetic acceleration, and linear discontinuous finite element differencing schemes to accurately predict the fluence of excitation and emission photons in diffuse and transport-limited systems. Additional techniques such as the first scattered distributed source method and integral transport theory are used to model the numerical apertures of fiber optic sources and detectors. The accuracy of the RTE solver was validated against diffusion and Monte Carlo predictions and experimental data. The comparisons were favorable in both the diffusion and transport limits, with average errors of the RTE predictions, compared to experimental data, typically less than 8% in amplitude and 7% in phase. These average errors are similar to those of the Monte Carlo and diffusion predictions. Synthetic data from a virtual mouse were used to demonstrate the feasibility of using the RTE solver to reconstruct fluorescent heterogeneities in small, non-diffuse volumes. The current version of the RTE solver limits the reconstruction to one iteration, and the reconstruction of marginally diffuse, frequency-domain experimental data using the RTE was not successful. Multiple iterations using a diffusion solver successfully reconstructed the fluorescent heterogeneities, indicating that, when available, multiple iterations of the RTE based solver should also reconstruct the heterogeneities.
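    In frequency-domain homodyne detection, images are typically acquired at several equally spaced phase delays between the source modulation and the intensifier gain, and the DC level, modulation amplitude, and phase are then recovered from the first Fourier component over the phase steps. The sketch below shows that recovery on synthetic data; it is a generic illustration, not the instrument's acquisition code.

    import numpy as np

    def homodyne_amplitude_phase(images):
        """Recover DC, modulation amplitude, and phase from a stack of homodyne images
        acquired at N equally spaced phase delays (axis 0 = phase step)."""
        n = images.shape[0]
        f = np.fft.fft(images, axis=0)       # FFT over the phase-delay dimension
        dc = np.real(f[0]) / n
        amplitude = 2.0 * np.abs(f[1]) / n   # fundamental component
        phase = -np.angle(f[1])              # sign convention depends on the setup
        return dc, amplitude, phase

    # synthetic single pixel modulated with amplitude 0.3 and phase 0.8 rad
    steps = np.arange(8) * 2 * np.pi / 8
    pixel = 1.0 + 0.3 * np.cos(steps - 0.8)
    dc, amp, ph = homodyne_amplitude_phase(pixel.reshape(8, 1, 1))
    print(dc[0, 0], amp[0, 0], ph[0, 0])     # approximately 1.0, 0.3, 0.8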

    A first glimpse into the EGRIP ice core: An analysis of the influence of deformation and recrystallisation on fabric and microstructures of the Northeast Greenland Ice Stream

    Global sea level has been rising over the last century, and one of the contributors, and the main source of projection uncertainties, is ice sheet mass loss by solid ice discharge. Projections currently lack sufficient confidence, partly due to the difficulty of simulating ice flow behaviour, which is strongly influenced by deformation modes and the physical properties of ice, such as grain microstructure and c-axis orientation anisotropy. This thesis aims to deliver an overview of the deformation regimes, microstructural properties, and crystal-preferred orientation (CPO) anisotropy of the Northeast Greenland Ice Stream (NEGIS) by examining an ice core from the East Greenland Ice Core Project (EGRIP). Ice streams are the major features conducting discharge from the inland ice towards the coasts, and NEGIS is the largest and most dominant one in Greenland. Microstructure and fabric data from almost 800 thin sections were therefore analysed with an automated Fabric Analyser and a Large Area Scanning Macroscope, yielding an almost continuous record of the physical properties of the upper 1714 m of the ice core. The major findings regarding crystal-preferred orientations are (1) a much more rapid evolution of c-axis anisotropy at shallow depths compared to sites with lower ice-flow dynamics, and (2) partly novel characteristics in the CPO patterns. These findings are accompanied by highly irregular grain shapes, the regular occurrence of protruding grains, and further indicators of an early onset of dynamic recrystallisation. Grain sizes are similar to results from other ice cores and show an increase followed by a strong decrease in the Glacial. Down to a depth of 196 m, a broad single-maximum CPO was observed, indicating vertical compression from overlying layers. A crossed girdle of Type I and Type II, observed in natural ice for the very first time, dominates until 294 m, probably caused by a fluctuation between non-coaxial and coaxial deformation, accompanied by simple shear and the activation of multiple slip systems. Between 294 and 500 m a transition into a vertical-girdle CPO occurs. Extensional deformation along flow leads to a distinct vertical girdle between 500 and 1150 m. This CPO pattern develops into a horizontal-maxima CPO, also observed as a novel feature in polar ice, which is probably caused by additional simple shear. This new microstructure and fabric information improves our understanding of ice dynamics and should be considered in future ice flow law parameterisations to improve ice-sheet models.
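    C-axis (fabric) anisotropy of the kind measured with a Fabric Analyser is commonly summarised by the eigenvalues of the second-order orientation tensor: values near (1/3, 1/3, 1/3) indicate isotropy and (0, 0, 1) a perfect single maximum. The sketch below computes these eigenvalues for a hypothetical set of near-vertical c-axes; it illustrates the standard measure, not the exact processing used in the thesis.

    import numpy as np

    def orientation_tensor_eigenvalues(c_axes):
        """Eigenvalues (ascending) of the second-order orientation tensor a2 = <c (x) c>
        for a set of c-axis vectors, normalised to unit length first."""
        c = c_axes / np.linalg.norm(c_axes, axis=1, keepdims=True)
        a2 = np.einsum('ni,nj->ij', c, c) / len(c)
        return np.linalg.eigvalsh(a2)

    rng = np.random.default_rng(2)
    # hypothetical near-vertical c-axes, as in a broad single-maximum fabric
    c = rng.normal([0.0, 0.0, 1.0], [0.2, 0.2, 0.05], size=(1000, 3))
    print(orientation_tensor_eigenvalues(c))   # two small eigenvalues, one close to 1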

    Neural Reflectance Decomposition

    Creating relightable objects from images or image collections is a fundamental challenge in computer vision and graphics. This problem is also known as inverse rendering, and one of its main challenges is the high degree of ambiguity: the creation of images from 3D objects is well defined as rendering, but multiple properties such as shape, illumination, and surface reflectance influence each other, and an integration over these influences is performed to form the final image. Reversing these integrated dependencies is highly ill-posed and ambiguous. Solving the task is nonetheless essential, as the automated creation of relightable objects has various applications in online shopping, augmented reality (AR), virtual reality (VR), games, and movies. In this thesis, we propose two approaches to solve this task. First, a network architecture is discussed that generalizes the decomposition of a two-shot capture of an object from large training datasets. The degree of novel view synthesis is limited, as only a single perspective is used in the decomposition. Therefore, a second set of approaches is proposed, which decomposes a set of 360-degree images into shape, reflectance, and illumination. These multi-view images are optimized per object, and the result can be directly used in standard rendering software or games. We achieve this by extending recent research on neural fields to store reflectance in a 3D neural volume. Leveraging volume rendering techniques, we can optimize a reflectance field from in-the-wild image collections without any ground truth (GT) supervision. Our proposed methods achieve state-of-the-art decomposition quality and enable novel capture setups in which objects can be under varying illumination or in different locations, as is typical for online image collections.
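    The volume rendering referred to above is, in NeRF-style neural fields, the standard emission-absorption model: per-sample densities are converted to alphas and composited along each ray. The sketch below shows that compositing in NumPy on hypothetical samples; in the actual methods this step is differentiated through a neural network, which is not shown here.

    import numpy as np

    def volume_render(sigmas, rgbs, deltas):
        """Emission-absorption compositing along one ray:
        alpha_i = 1 - exp(-sigma_i * delta_i), T_i = prod_{j<i} (1 - alpha_j),
        colour  = sum_i T_i * alpha_i * rgb_i."""
        alphas = 1.0 - np.exp(-sigmas * deltas)
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
        weights = trans * alphas
        return weights @ rgbs, weights

    sigmas = np.array([0.1, 5.0, 50.0])                        # hypothetical densities along the ray
    rgbs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)  # per-sample colours
    deltas = np.array([0.1, 0.1, 0.1])                         # sample spacings
    colour, w = volume_render(sigmas, rgbs, deltas)
    print(colour, w.sum())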