169 research outputs found

    Computer Generation of Integral Images using Interpolative Shading Techniques

    Research to produce artificial 3D images that duplicate human stereovision has been ongoing for hundreds of years. What took millions of years to evolve in humans is proving elusive even for present-day technology, and the difficulties are compounded when real-time generation is contemplated. The problem is one of depth. When we perceive the world around us, the sense of depth has been shown to result from many different factors, which can be classified as monocular and binocular. Monocular depth cues include overlapping or occlusion, shading and shadows, and texture. Another monocular cue (and to some extent binocular) is accommodation, in which the focal length of the crystalline lens is adjusted to view an image. The important binocular cues are convergence and parallax. Convergence allows the observer to judge distance from the difference in angle between the viewing axes of the left and right eyes when both are focussing on a point. Parallax relates to the fact that each eye sees a slightly shifted view of the scene. If a system can be produced that requires the observer to use all of these cues, as when viewing the real world, then the transition to and from viewing a 3D display will be seamless. However, for many 3D imaging techniques, towards which current work is primarily directed, this is not the case, which raises a serious issue of viewer comfort. Researchers worldwide, in universities and industry, are pursuing their own approaches to the development of 3D systems, and physiological disturbances that can cause nausea in some observers will not be acceptable. The ideal 3D system would require, as a minimum, accurate depth reproduction, multiviewer capability, and all-round seamless viewing. Freedom from stereoscopic or polarising glasses would be ideal, and a lack of viewer fatigue is essential.
Finally, whatever the use of the system, be it CAD, medicine, scientific visualisation, or remote inspection on the one hand, or consumer markets such as 3D video games and 3DTV on the other, the system has to be relatively inexpensive. Integral photography is a ‘real camera’ system that attempts to comply with this ideal; it was invented in 1908 but, for technological reasons, was long incapable of serving as a useful autostereoscopic system. More recently, with advances in technology, it has become a more attractive proposition for those interested in developing a suitable system for 3DTV. The fast computer generation of integral images is the subject of this thesis, the adjective ‘fast’ distinguishing it from the much slower technique of ray tracing integral images. These two techniques parallel the standards of monoscopic computer graphics, where ray tracing generates photo-realistic images and the fast forward-geometric approach using interpolative shading is the method for real-time generation. Before the present work began it was not known whether volumetric integral images could be created using a fast approach similar to that employed by standard computer graphics, but it soon became apparent that this would be successful and hence a valuable contribution to the area. Presented herein is a full description of the development of two derived methods for producing rendered integral image animations using interpolative shading. The main body of the work is the development of code to put these methods into practice, along with many observations and discoveries made by the author during this task. This work was supported by the Defence Evaluation and Research Agency (DERA), a contract (LAIRD) under the European Link/EPSRC photonics initiative, and DTI/EPSRC sponsorship within the PROMETHEUS project.

    New techniques of multiple integral field spectroscopy

    The work of this thesis is to investigate new techniques for Integral Field Spectroscopy (IFS) to make the most efficient use of modern large telescopes. Most of the work described is aimed at FMOS for the SUBARU 8m telescope. Although this is primarily a system for Multiple Object Spectroscopy (MOS) employing single fibres, there is an option to include a multiple-IFS (MIFS) system. Much of this thesis is therefore aimed at the design and prototyping of critical systems for both the IFS and MOS modes of this instrument. The basic theory of IFU design is discussed first. Some particular problems are described and their solutions presented. The design of the MIFS system is described together with the construction and testing of a prototype deployable IFU. The assembly of the pickoff/fore-optics, microlens array, and fibre bundle, and their testing, are described in detail. The estimated performance of the complete module is presented together with suggestions for improving the system efficiency, which is currently limited by the performance of the microlens array. The prototyping of the MIFS system is supported by an extensive programme of testing of candidate microlens arrays. Another critical aspect of the instrument is the ability to disconnect the (IFS and MOS) fibre input, which is installed on a removable prime-focus top-end ring, from the spectrographs, which are mounted elsewhere on the telescope. This requires high-performance multiple fibre connectors. The designs of connectors for the MOS and IFS modes are described, and results from the testing of a prototype for the MOS mode are presented. This work is supported by a mathematical model of the coupling efficiency which takes into account optical aberrations and alignment errors. The final critical aspect of FMOS to have been investigated is the design of the spectrographs. The baseline system operates in the near-infrared (NIR), but an additional visible channel is an option.
Efficient designs for both the visible and NIR systems are presented. The design of the NIR spectrograph presents challenges in the choice of materials for the doublet and triplet lenses employed; the choice of materials and the combinations in which they can be used are described. This thesis shows that all these critical aspects of FMOS have good solutions that will result in good performance of the whole instrument. For the multiple-IFS system, the prototype demonstrates acceptable performance, which can be made excellent by the use of a better microlens array. The multiple fibre connector prototype already indicates excellent performance. Finally, the spectrograph designs presented should result in high efficiency and good image quality.

    Image sensing with multilayer, nonlinear optical neural networks

    Optical imaging is commonly used for both scientific and technological applications across industry and academia. In image sensing, a measurement, such as of an object's position, is performed by computational analysis of a digitized image. An emerging image-sensing paradigm breaks this delineation between data collection and analysis by designing optical components to perform not imaging, but encoding. By optically encoding images into a compressed, low-dimensional latent space suitable for efficient post-analysis, these image sensors can operate with fewer pixels and fewer photons, allowing higher-throughput, lower-latency operation. Optical neural networks (ONNs) offer a platform for processing data in the analog, optical domain. However, ONN-based sensors have so far been limited to linear processing; nonlinearity is a prerequisite for depth, and multilayer NNs significantly outperform shallow NNs on many tasks. Here, we realize a multilayer ONN pre-processor for image sensing, using a commercial image intensifier as a parallel optoelectronic, optical-to-optical nonlinear activation function. We demonstrate that the nonlinear ONN pre-processor can achieve compression ratios of up to 800:1 while still enabling high accuracy across several representative computer-vision tasks, including machine-vision benchmarks, flow-cytometry image classification, and identification of objects in real scenes. In all cases we find that the ONN's nonlinearity and depth allow it to outperform a purely linear ONN encoder. Although our experiments are specialized to ONN sensors for incoherent-light images, alternative ONN platforms should facilitate a range of ONN sensors. These ONN sensors may surpass conventional sensors by pre-processing optical information in spatial, temporal, and/or spectral dimensions, potentially with coherent and quantum qualities, all natively in the optical domain.

    Non-disruptive use of light fields in image and video processing

    In the age of computational imaging, cameras capture not only an image but also data. This additional captured data is best used for photo-realistic renderings, facilitating numerous post-processing possibilities such as perspective shift, depth scaling, digital refocus, 3D reconstruction, and much more. In computational photography, light field imaging technology captures the complete volumetric information of a scene. This technology has the highest potential to bring immersive experiences closer to reality, and it has gained significance in both commercial and research domains. However, owing to the lack of coding and storage formats, and to the incompatibility of the tools needed to process and deliver the data, light fields are not yet exploited to their full potential. This dissertation addresses the integration of light field data into image and video processing. Towards this goal, the representation of light fields using advanced file formats designed for 2D image assemblies is addressed, facilitating asset re-usability and interoperability between applications and devices. The novel 5D light field acquisition and the ongoing research on coding frameworks are presented. Multiple techniques for optimised sequencing of light field data are also proposed. As light fields contain the complete 3D information of a scene, large amounts of highly redundant data are captured. Hence, by pre-processing the data using the proposed approaches, excellent coding performance can be achieved.

    Theoretical and experimental study of tunable liquid crystal lenses: wavefront optimization

    Adaptive optical systems have applications in various domains: imaging (zoom and autofocus), medicine (endoscopy, ophthalmology), and virtual and augmented reality. Liquid crystal-based lenses have become a major part of the adaptive optics industry, as they have numerous advantages over traditional methods. Despite significant progress made over the past decades, certain performance and production limitations still exist. This thesis explores ways of overcoming these problems, considering two types of tunable lenses: a liquid crystal lens using the dielectric dividing principle, and a modal control lens. The introduction of this thesis presents the theory of liquid crystals and adaptive lenses, addressing existing liquid crystal lenses as well. In the first and second chapters of this work we demonstrate the results of theoretical modeling of a double-dielectric, optically hidden liquid crystal lens design. We have studied the influence of geometrical parameters, such as the thickness of the liquid crystal cell and the shape and dimensions of the dielectrics forming the optically hidden layer, on the optical power of the lens. The dependences of optical power on the relative permittivity and conductivity of the dielectrics were obtained, and the behavior of such a lens in the presence of temperature variation was analyzed. We have further extended the concept of the hidden dielectric layer to the exploration of microstructures. Two systems, of microlenses and of microprisms, have been simulated. The dependence of optical phase modulation on the spatial frequency of the microstructures was compared, and deviations from ideal wavefronts were evaluated in both cases. We also compared the proposed designs with a standard interdigital electrode approach. The suggested devices could be used for continuous light steering or as tunable microlens arrays. In the third and fourth chapters we present our studies of tunable lenses based on the modal control principle.
We verified simulation results by comparing them with experimentally obtained dependences of optical power and root-mean-square spherical aberration. We have explored the following modifications of the conventional modal control lens: 1) an additional powered ring electrode; 2) a floating disk electrode; 3) a combination of the first two. The influence of each modification was studied and explained. Simulation results showed that, using the combination of additional electrodes along with an optimal powering technique, the wavefront could be corrected within the entire clear aperture of the lens. The modified lens meets the low-aberration requirements of ophthalmic applications (for example, an intraocular implant). Finally, a new design of a wide-aperture tunable modal control Fresnel lens was investigated. The imaging performance of the proposed Fresnel lens was evaluated and compared with a reference lens built using the traditional modal control approach. The prototype device demonstrated an increase in optical power in comparison with a conventional modal control lens of the same aperture size. A theoretical model and numerical simulations of the Fresnel lens design were presented. Simulations demonstrated the possibility of a noticeable image-quality improvement obtained using optimized voltages and frequencies.

    Design And Fabrication of High Numerical Aperture And Low Aberration Bi-Convex Micro Lens Array

    Micro lens arrays are crucial in many kinds of optical and electronic applications, and arrays with high numerical aperture (NA) and low aberration are in particular demand. This research aims to design and fabricate such a micro lens array with a simple structure while keeping the same NA as a hemispherical lens of the same diameter. A bi-convex semispherical micro lens array in PDMS, with a corresponding NA of 0.379, is first designed and analyzed. Experiments are then conducted to fabricate the designed micro lens array by the thermal reflow process. The formed profile is sputtered with copper to serve as the mold. The front and rear micro lens arrays are fabricated by casting PDMS on the mold and are then assembled to form the designed micro lens array.

    Photonic Jet: Science and Application

    Photonic jets (PJs) are important mesoscale optical phenomena arising from electromagnetic waves interacting with dielectric particles. PJs have applications in super-resolution imaging, sensing, detection, patterning, trapping, manipulation, waveguiding, signal amplification, and high-efficiency signal collection, among others. This reprint provides an overview of the field and highlights recent advances and trends in PJ research.

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    This work investigates spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array. Two methods for reconstructing the coded light fields are developed and evaluated in detail. First, a full reconstruction of the spectral light field is developed based on the principles of compressed sensing. To represent the spectral light fields sparsely, 5D DCT bases as well as a dictionary learning approach are investigated. The conventional vectorized dictionary learning approach is generalized to a tensor notation in order to factorize the light field dictionary tensorially. Owing to the reduced number of parameters to be learned, this approach enables larger effective atom sizes. Second, a deep-learning-based reconstruction of the spectral central view and the associated disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve reconstruction quality, a novel method for incorporating auxiliary loss functions based on their respective normalized gradient similarity is developed and shown to outperform previous adaptive methods. To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available disparity ground truth is created using a ray tracer. This dataset, containing about 100k spectral light fields with associated disparity, is split into training, validation, and test sets.
To assess quality further, seven hand-crafted scenes, so-called dataset challenges, are created. Finally, a real spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail. Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated -- random, regular, as well as end-to-end optimized coding masks generated with a novel differentiable fractal generation. Furthermore, additional investigations are carried out, for example regarding the dependence on noise, angular resolution, or depth. Overall, the results are convincing and show high reconstruction quality. The deep-learning-based reconstruction, especially when trained with adaptive multi-task and auxiliary loss strategies, outperforms the state-of-the-art compressed-sensing-based reconstruction with subsequent disparity estimation.
