
    Roadmap on 3D integral imaging: Sensing, processing, and display

    This Roadmap article on three-dimensional integral imaging provides an overview of research activities in the field. It discusses sensing of 3D scenes, processing of the captured information, and 3D display and visualization of that information. The paper consists of 15 sections, each written by an expert, covering sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section presents its author's view of the progress, potential, and open challenges in the field.

    Plenoptic Signal Processing for Robust Vision in Field Robotics

    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented, dealing with monocular time series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
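    A minimal sketch of the overall shape such a closed-form motion estimator takes: each light field sample contributes one linear constraint built from first-order derivatives, and stacking the constraints gives a single least-squares solve with no iteration. The construction of the constraint matrix from plenoptic derivatives follows the thesis, not this snippet; the synthetic data and all names here are illustrative only.

```python
import numpy as np

def estimate_camera_motion(J, Lt):
    """Closed-form least-squares motion estimate (illustrative).

    Each light field sample i contributes one linear constraint
    J[i] @ p = -Lt[i], where p stacks the six camera motion
    parameters and J[i] is built from the sample's first-order
    spatial/angular derivatives and ray geometry. Stacking all
    constraints yields one overdetermined linear system, solved
    in a single non-iterative step.
    """
    p, *_ = np.linalg.lstsq(J, -Lt, rcond=None)
    return p

# Toy usage with synthetic data: 10,000 constraints, 6 unknowns.
rng = np.random.default_rng(0)
true_motion = np.array([0.1, 0.0, -0.05, 0.01, 0.02, 0.0])
J = rng.standard_normal((10_000, 6))                         # stand-in derivative terms
Lt = -J @ true_motion + 1e-3 * rng.standard_normal(10_000)   # noisy temporal derivative
print(estimate_camera_motion(J, Lt))                         # ~ true_motion
```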

    A Compact, High Resolution Hyperspectral Imager for Remote Sensing of Soil Moisture

    Measurement of soil moisture content is a key challenge across a variety of fields, ranging from civil engineering through to defence and agriculture. While dedicated satellite platforms like SMAP and SMOS provide high spatial coverage, their low spatial resolution limits their application to larger regional studies. The advent of compact, high-lift-capacity UAVs has enabled small-scale surveys of specific farmland sites. This thesis presents work on the development of a compact, high spatial and spectral resolution hyperspectral imager, designed for remote measurement of soil moisture content. The optical design of the system incorporates a bespoke freeform blazed diffraction grating, providing higher optical performance at a similar aperture to conventional Offner-Chrisp designs. The key challenges of UAV-borne hyperspectral imaging relate to using only solar illumination, with both intermittent cloud cover and atmospheric water absorption creating difficulties in obtaining accurate reflectance measurements. A hardware-based calibration channel for mitigating cloud cover effects is introduced, along with a comparison of methods for recovering soil moisture content from reflectance data under varying illumination conditions. The data processing pipeline required to process the raw pushbroom data into georectified images is also discussed. Finally, preliminary work on applying soil moisture techniques to leaf imaging is presented.
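    As one illustration of the reflectance-recovery step such a pipeline needs, a common approach (not necessarily the one used in the thesis) is the empirical line method, which maps raw sensor counts to reflectance per band using in-scene dark and white reference panels; the linear gain and offset absorb the unknown, possibly cloud-modulated, solar illumination. All names and values below are illustrative.

```python
import numpy as np

def empirical_line(raw, dark_dn, white_dn, white_reflectance=0.99):
    """Per-band empirical line correction (illustrative sketch).

    raw      : (rows, cols, bands) raw digital numbers from the pushbroom scan
    dark_dn  : (bands,) mean DN over the dark reference panel
    white_dn : (bands,) mean DN over the white reference panel

    Assumes a linear sensor response in each band, so two reference
    targets fix the DN-to-reflectance mapping for that illumination.
    """
    gain = white_reflectance / (white_dn - dark_dn)  # reflectance per DN, per band
    return (raw - dark_dn) * gain                    # broadcasts over the band axis

# Toy usage: 3-band cube with synthetic panel readings.
cube = np.full((4, 4, 3), 500.0)
print(empirical_line(cube, dark_dn=np.array([60., 55., 50.]),
                     white_dn=np.array([900., 880., 860.])))
```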

    Engineering for a Changing World: 59th IWK, Ilmenau Scientific Colloquium, Technische Universität Ilmenau, September 11-15, 2017 : programme

    In 2017, the Ilmenau Scientific Colloquium is again organised by the Department of Mechanical Engineering. The title of this year's conference, "Engineering for a Changing World", refers to the limited natural resources of our planet and to massive changes in cooperation between continents, countries, institutions and people, enabled by the increasing implementation of information technology, probably the most dominant driver in many fields. The Colloquium, complemented by workshops, is characterised by the following topics, but not limited to them:
    – Precision Engineering and Metrology
    – Industry 4.0 and Digitalisation in Mechanical Engineering
    – Mechatronics, Biomechatronics and Mechanism Technology
    – Systems Technology
    – Innovative Metallic Materials
    The topics are oriented on key strategic aspects of research and teaching in Mechanical Engineering at our university.

    Widening the view angle of auto-multiscopic display, denoising low brightness light field data and 3D reconstruction with delicate details

    This doctoral thesis presents the results of my work on widening the viewing angle of auto-multiscopic displays, denoising light field data captured in low-light conditions, and reconstructing subject surfaces with delicate details from microscopy image sets.

    Automultiscopic displays carefully control the distribution of emitted light over space, direction (angle) and time, so that even a static displayed image can encode parallax across viewing directions (a light field). This allows simultaneous observation by multiple viewers, each perceiving 3D from their own (correct) perspective. Currently, the illusion can only be maintained over a narrow range of viewing angles. We propose and analyze a simple solution to widen the range of viewing angles for automultiscopic displays that use parallax barriers: inserting a refractive medium with a high refractive index between the display and the parallax barrier. The inserted medium warps the exitant light field in a way that increases the potential viewing angle. We analyze the consequences of this warp and build a prototype with a 93% increase in the effective viewing angle. Additionally, we develop an integral-image synthesis method that handles the refraction introduced by the inserted medium efficiently, without resorting to ray tracing.

    Capturing light field images with a short exposure time is preferable for eliminating motion blur, but in a low-light environment it also leads to low brightness and hence a low signal-to-noise ratio. Most light field denoising methods apply regular 2D image denoising directly to the sub-aperture images of a 4D light field, but this is unsuitable for focused light field data, whose sub-aperture image resolution is too low for regular denoising methods. We therefore propose a deep-learning denoising method that operates on the micro-lens images of a focused light field, denoising the depth map and the original micro-lens image set simultaneously, and achieving high-quality totally focused images from low-light focused light field data.

    In areas such as digital museums and remote research, 3D reconstruction capturing the delicate details of subjects is desired, and 3D reconstruction based on macro photography has been used successfully for various purposes. We push this further by using a microscope rather than a macro lens, capturing microscopy-level details of the subject. We design and implement a robotic-arm-based scanning method able to capture microscopy image sets over a curved surface, together with a 3D reconstruction method suited to such image sets.
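    To get an intuition for why a high-index spacer widens the view zone, consider a simplified parallax-barrier geometry: the extreme ray from a pixel through the barrier slit travels at atan(offset/gap) inside the spacer, and refraction at the top surface steepens it to asin(n·sin θ) in air. The sketch below is a rough numeric illustration under that simplified geometry; the paper's 93% figure comes from its full analysis, and all dimensions here are invented.

```python
import math

def viewing_half_angle(pixel_offset_mm, gap_mm, n=1.0):
    """Exterior viewing half-angle for a simplified parallax-barrier
    geometry: the ray angle inside the spacer is set by the pixel
    offset and barrier gap, then Snell's law bends it outward at the
    spacer/air interface."""
    theta_in = math.atan2(pixel_offset_mm, gap_mm)
    s = min(n * math.sin(theta_in), 1.0)   # clamp at grazing exit
    return math.degrees(math.asin(s))

base = viewing_half_angle(1.0, 4.0, n=1.0)   # air gap (no spacer)
wide = viewing_half_angle(1.0, 4.0, n=1.7)   # high-index spacer, same geometry
print(base, wide, wide / base)               # the spacer widens the view zone
```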

    Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding

    Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the necessity of wearing 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation, and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, at real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects pop far out from the screen plane without glasses impresses even those viewers who have experienced other 3D displays before.

    Content for these displays can be either synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to those of rendering techniques for 2D displays. While rendering can be used in many use cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in every living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, and it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye contact.

    In order to provide the required realism, light field displays aim to provide a wide field of view (up to 180°) while reproducing up to ~80 megapixels nowadays. Building gigapixel light field displays is realistic within the next few years. Likewise, capturing live light fields involves using many synchronized cameras that cover the same wide field of view as the display and provide the same high pixel count. Therefore, light field capture and content creation have to be well optimized with respect to the targeted display technologies. Two major challenges in this process are addressed in this dissertation.

    The first challenge is how to characterize the display in terms of its capability to create light fields, that is, how to profile the display in question. In clearer terms, this boils down to finding the equivalent spatial resolution, which is similar to the screen resolution of 2D displays, and the angular resolution, which describes the smallest angle whose color the display can control individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially-localized and angularly-directed light rays in the so-called ray space. Plenoptic sampling theory provides the conditions required to sample and reconstruct light fields. Subsequently, light field displays can be characterized in the Fourier domain by the effective display bandwidth they support. In the thesis, a methodology for display-specific light field analysis is proposed. It regards the display as a signal-processing channel and analyses it as such in the spectral domain. As a result, one is able to derive the display throughput (i.e. the display bandwidth) and, subsequently, the optimal camera configuration to efficiently capture and filter light fields before displaying them.

    While the geometrical topology of the optical light sources in projection-based light field displays can be used to theoretically derive the display bandwidth and its spatial and angular resolution, in many cases this topology is not available to the user. Furthermore, there are many implementation details which cause the display to deviate from its theoretical model. In such cases, profiling light field displays in terms of spatial and angular resolution has to be done by measurement. Measurement methods are proposed in the thesis in which the display shows specific test patterns that are then captured by a single static or moving camera. Determining the effective spatial and angular resolution of a light field display is then based on an automated frequency-domain analysis of the captured images as reproduced by the display. The analysis reveals the empirical limits of the display in terms of pass-band in both the spatial and angular dimensions. Furthermore, the spatial resolution measurements are validated by subjective tests confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question.

    The second challenge relates to the massive number of captured views and pixels that have to be transmitted to the display. This clearly requires effective and efficient compression techniques to fit within the available bandwidth, as an uncompressed representation of such super-multiview video could easily consume ~20 gigabits per second with today's displays. Due to the high number of light rays to be captured, transmitted and rendered, distributed systems are necessary for both capturing and rendering the light field. During the first attempts to implement real-time light field capture, transmission and rendering using a brute-force approach, limitations became apparent. Still, because dense multi-camera light field capture and light-ray interpolation achieve the best possible image quality, this approach was chosen as the basis of further work, despite the massive amount of bandwidth needed. Decompressing all camera images in all rendering nodes, however, is prohibitively time-consuming and does not scale. After analyzing the light field interpolation process and the data-access patterns typical of a distributed light field rendering system, an approach is proposed to reduce the amount of data required in the rendering nodes. This approach, on the other hand, requires rectangular parts (typically vertical bars, in the case of a horizontal-parallax-only light field display) of the captured images to be available in the rendering nodes, which can be exploited to reduce the time spent decompressing video streams. However, partial decoding is not readily supported by common image and video codecs. In the thesis, approaches for achieving partial decoding are proposed for H.264, HEVC, JPEG and JPEG2000, and the results are compared.

    The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also facilitates light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, and at the spatial and angular resolution the display actually needs to achieve a compelling visual experience.
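    As a back-of-the-envelope check on the ~20 gigabits per second figure, consider an assumed, purely illustrative capture rig of 27 cameras at 1280×720, 24 bits per pixel, 30 frames per second (the thesis does not specify this configuration):

```python
# Uncompressed super-multiview data rate for an assumed rig.
cameras, width, height = 27, 1280, 720   # illustrative values only
bits_per_pixel, fps = 24, 30

raw_bps = cameras * width * height * bits_per_pixel * fps
print(f"{raw_bps / 1e9:.1f} Gbit/s uncompressed")   # ~17.9 Gbit/s, on the order of ~20
```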

    Multifunctional Volumetric Metaoptics

    Optical systems are often composed of modular arrangements of components, and the improvement of these systems has historically leaned on the precise manufacturing and alignment of their constituent elements. This provides an intuitive pathway to optical design, but ultimately yields systems that are far bulkier than the laws of physics require. The degrees of freedom required to achieve complex tasks are often present within dielectric volumes only several wavelengths per side, and these degrees of freedom can be accessed by patterning the dielectric volume with subwavelength resolution. Even in such small volumes, all of the fundamental properties of light (wavelength, polarization, k-vector) can be controlled, which opens the possibility of extremely multifunctional, compact image-sensor elements. Determining the refractive index distribution of these devices has historically been a challenging inverse-design problem, and the fabrication of 3D dielectric devices is a challenge unique to each regime of the electromagnetic spectrum. This thesis uses current state-of-the-art optimization techniques to design multifunctional volumetric devices, and theoretically extends these techniques to facilitate the optimization of high-index-contrast structures. Multiple microwave prototypes are measured, devices operating at terahertz frequencies are fabricated using silicon micromachining, and optical devices with resolutions achievable with CMOS processing techniques are studied for next-generation camera sensors.
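    A toy, one-dimensional flavor of this kind of index-distribution inverse design (purely illustrative; the thesis optimizes full 3D electromagnetic structures, not this model): tune the refractive indices of a thin-film stack, evaluated with the standard transfer-matrix method, to maximize transmission at a target wavelength via numerical gradients.

```python
import numpy as np

def transmittance(indices, thickness_nm, wavelength_nm, n_in=1.0, n_out=1.0):
    """Normal-incidence transmittance of a 1D multilayer stack,
    computed with the standard characteristic-matrix method."""
    M = np.eye(2, dtype=complex)
    for n in indices:
        delta = 2 * np.pi * n * thickness_nm / wavelength_nm
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    t = 2 * n_in / (n_in * B + C)
    return (n_out / n_in) * abs(t) ** 2

# Finite-difference gradient ascent on the layer indices: a crude
# stand-in for the adjoint-based optimization used at full scale.
rng = np.random.default_rng(1)
n = rng.uniform(1.5, 2.5, size=8)               # 8 layers, 100 nm each
for _ in range(200):
    grad = np.zeros_like(n)
    for i in range(len(n)):
        d = np.zeros_like(n)
        d[i] = 1e-4
        grad[i] = (transmittance(n + d, 100, 633) -
                   transmittance(n - d, 100, 633)) / 2e-4
    n = np.clip(n + 0.05 * grad, 1.0, 3.5)      # keep indices physical
print(transmittance(n, 100, 633))               # approaches 1 at 633 nm
```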