
    Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems

    There has been great interest in researching and implementing effective technologies for the capture, processing, and display of 3D images, as evidenced by widespread international research and activity on 3D technologies. A large number of journal and conference papers address 3D systems, and research and development efforts in government, industry, and academia target applications including entertainment, manufacturing, security and defense, and biomedicine. Among these technologies, integral imaging is a promising approach for its ability to work with polychromatic scenes and under incoherent or ambient light, in scenarios from macroscales to microscales. Integral imaging systems and their variations, also known as plenoptic or light-field systems, are applicable in many fields and have been reported in many applications, such as entertainment (TV, video, movies), industrial inspection, security and defense, and biomedical imaging and displays. This tutorial is addressed to students and researchers in different disciplines who are interested in learning about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide readers with a tutorial that teaches the fundamental principles as well as the more advanced concepts needed to understand, analyze, and implement integral imaging and light-field-type capture and display systems. The tutorial begins by reviewing the fundamentals of imaging and then progresses to more advanced topics in 3D imaging and displays. More specifically, it first covers the geometrical-optics and wave-optics tools for understanding and analyzing optical imaging systems. We then use these tools to describe integral imaging, light-field, or plenoptic systems; the methods for implementing the 3D capture procedures and monitors; and their properties, resolution, field of view, performance, and the metrics used to assess them. We illustrate the principles of integral imaging capture and display systems with simple laboratory setups and experiments, and we discuss 3D biomedical applications such as integral microscopy.
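
    To make the capture geometry concrete, the following Python sketch (our own minimal illustration, not code from the tutorial; all names and parameters such as capture_elemental_images, pitch, and gap are assumptions) models each lenslet of the array as a pinhole and records where a 3D scene point lands in every elemental image. The point's shift between neighbouring elemental images is the parallax that encodes its depth.

```python
import numpy as np

def capture_elemental_images(points, n_lens=8, pitch=1.0, gap=3.0, res=32):
    """Pinhole model of integral-imaging capture.

    Each lenslet of an n_lens x n_lens array (pitch `pitch`) is treated as
    a pinhole at z = 0; its sensor patch of res x res pixels sits at
    z = -gap. A scene point (x, y, z > 0) projects through every pinhole,
    giving one perspective (elemental image) per lenslet.
    """
    ei = np.zeros((n_lens, n_lens, res, res))        # elemental-image stack
    centres = (np.arange(n_lens) - (n_lens - 1) / 2) * pitch
    for px, py, pz in points:
        for i, cy in enumerate(centres):
            for j, cx in enumerate(centres):
                # central projection through pinhole (cx, cy, 0) onto z = -gap
                u = cx + (cx - px) * gap / pz
                v = cy + (cy - py) * gap / pz
                # map the one-pitch-wide sensor patch to pixel indices
                col = int(round((u - cx) / pitch * res + res / 2))
                row = int(round((v - cy) / pitch * res + res / 2))
                if 0 <= row < res and 0 <= col < res:
                    ei[i, j, row, col] = 1.0         # mark the point
    return ei

# A point farther from the array shifts less between neighbouring
# elemental images -- the parallax that encodes its depth.
stack = capture_elemental_images([(0.0, 0.0, 50.0), (0.5, 0.0, 20.0)])
print(stack.shape)  # (8, 8, 32, 32)
```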

    Virtual Super Resolution of Scale Invariant Textured Images Using Multifractal Stochastic Processes

    We present a new method of magnification for textured images featuring scale-invariance properties. This work was originally motivated by an application to astronomical images: one goal is to quantitatively predict the statistical and visual properties of images taken by a forthcoming higher-resolution telescope from older images at lower resolution. This is done by performing a virtual super resolution using a family of scale-invariant stochastic processes, namely compound Poisson cascades, together with fractional integration. The procedure preserves the visual aspect as well as the statistical properties of the initial image. Information is added by locally inserting random small-scale details below the initial pixel size. This extrapolation procedure yields a potentially infinite number of magnified versions of an image. It allows for large (virtually infinite) magnification factors and is physically conservative: zooming out to the initial resolution yields the initial image back. The (virtually) super-resolved images can be used to predict the quality of future observations as well as to develop and test compression or denoising techniques.
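
    As a rough illustration of the two ingredients named in the abstract, the sketch below substitutes a simple dyadic multiplicative cascade for the paper's compound Poisson cascades (a plainly simplified stand-in, not the authors' construction) and applies fractional integration as a |k|^(-h) Fourier filter. All function names and parameter values are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiplicative_cascade(levels=8, sigma=0.3):
    """Dyadic 2-D multiplicative cascade: a crude stand-in for the
    compound Poisson cascades used in the paper. Start from a 1x1 field
    and repeatedly double the resolution, multiplying every cell by an
    independent mean-one log-normal weight. The result is a positive,
    approximately scale-invariant density.
    """
    field = np.ones((1, 1))
    for _ in range(levels):
        field = np.kron(field, np.ones((2, 2)))      # refine the grid 2x
        w = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=field.shape)
        field *= w                                   # mean-one weights
    return field

def fractional_integration(field, h=0.6):
    """Fractional integration of order h via a |k|^(-h) Fourier filter;
    this smooths the rough cascade density into an image while keeping
    its multifractal scaling."""
    ky = np.fft.fftfreq(field.shape[0])[:, None]
    kx = np.fft.fftfreq(field.shape[1])[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0              # avoid the DC singularity (keeps the mean)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * k**(-h)))

texture = fractional_integration(multiplicative_cascade())
print(texture.shape)           # (256, 256)
```

    In this framework, magnification would amount to continuing the cascade to levels finer than the original pixel size, so that averaging back down to the initial resolution recovers the original field, consistent with the conservativeness property described above.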

    Computer Generation of Integral Images using Interpolative Shading Techniques

    Research to produce artificial 3D images that duplicate human stereovision has been ongoing for hundreds of years. What took millions of years to evolve in humans is proving elusive even for present-day technological advancements, and the difficulties are compounded when real-time generation is contemplated. The problem is one of depth. When we perceive the world around us, the sense of depth has been shown to result from many different factors, which can be described as monocular and binocular. Monocular depth cues include overlapping or occlusion, shading and shadows, and texture. Another monocular cue (and to some extent binocular) is accommodation, whereby the focal length of the crystalline lens is adjusted to view an image. The important binocular cues are convergence and parallax. Convergence allows the observer to judge distance by the difference in angle between the viewing axes of the left and right eyes when both are focussing on a point. Parallax relates to the fact that each eye sees a slightly shifted view of the scene. If a system can be produced that requires the observer to use all of these cues, as when viewing the real world, then the transition to and from viewing a 3D display will be seamless. However, for many of the 3D imaging techniques towards which current work is primarily directed, this is not the case, raising a serious issue of viewer comfort: researchers worldwide, in universities and industry, are pursuing their own approaches to 3D systems, and physiological disturbances that cause nausea in some observers will not be acceptable.

    The ideal 3D system would require, as a minimum, accurate depth reproduction, multi-viewer capability, and all-round seamless viewing. Freedom from stereoscopic or polarising glasses would be ideal, and lack of viewer fatigue is essential. Finally, whatever the use of the system, be it CAD, medical, scientific visualisation, or remote inspection on the one hand, or consumer markets such as 3D video games and 3DTV on the other, it has to be relatively inexpensive. Integral photography is a 'real camera' system that attempts to comply with this ideal; it was invented in 1908 but for technological reasons was not then capable of being a useful autostereoscopic system. More recently, with advances in technology, it has become an attractive proposition for those interested in developing a suitable system for 3DTV.

    The fast computer generation of integral images is the subject of this thesis, the adjective 'fast' distinguishing it from the much slower technique of ray tracing integral images. These two techniques are standard in monoscopic computer graphics: ray tracing generates photo-realistic images, while the fast forward-geometric approach using interpolative shading techniques is the method used for real-time generation. Before the present work began it was not known whether volumetric integral images could be created using a fast approach similar to that employed in standard computer graphics, but it soon became apparent that this was possible, and hence a valuable contribution to the area. Presented herein is a full description of the development of two derived methods for producing rendered integral image animations using interpolative shading. The main body of the work is the development of code to put these methods into practice, along with the many observations and discoveries the author made during this task. This work was supported by the Defence Evaluation and Research Agency (DERA), a contract (LAIRD) under the European Link/EPSRC photonics initiative, and DTI/EPSRC sponsorship within the PROMETHEUS project.
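
    To show what 'interpolative shading' means in practice, here is a minimal Python sketch (our own example, not code from the thesis) of Gouraud shading, the classic interpolative technique: lighting is evaluated only at a triangle's three vertices and linearly interpolated across its interior via barycentric coordinates, which is why the forward geometric approach is so much faster than ray tracing. In an integral-imaging pipeline such a rasterizer would run once per elemental image.

```python
import numpy as np

def gouraud_fill(tri, shades, res=64):
    """Rasterize one 2-D triangle with Gouraud (interpolative) shading.

    tri    -- 3x2 array of vertex positions in [0, 1]^2
    shades -- 3 precomputed vertex intensities
    Interior intensities are the barycentric blend of the vertex shades.
    """
    img = np.zeros((res, res))
    (x0, y0), (x1, y1), (x2, y2) = tri
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    ys, xs = np.mgrid[0:res, 0:res] / (res - 1)
    # barycentric weights of every pixel centre with respect to the triangle
    w1 = ((xs - x0) * (y2 - y0) - (x2 - x0) * (ys - y0)) / area
    w2 = ((x1 - x0) * (ys - y0) - (xs - x0) * (y1 - y0)) / area
    w0 = 1.0 - w1 - w2
    inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
    img[inside] = (w0 * shades[0] + w1 * shades[1] + w2 * shades[2])[inside]
    return img

tri = np.array([[0.1, 0.1], [0.9, 0.2], [0.5, 0.9]])
img = gouraud_fill(tri, shades=np.array([0.2, 0.9, 0.5]))
print(img.shape)  # (64, 64); intensity varies smoothly across the triangle
```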

    Metarefraction

    Imagine a thin sheet that performs optical illusions on the scene behind it: for example, a window that appears to reverse depth and to image objects in front of the sheet, or swimming goggles that cancel the refraction of the surrounding water. This thesis explores how such sheets may be realized. With the refinement of optical fabrication technologies, it is now possible to mass-produce miniaturized optical components; repeated over the surface of a sheet, their combined effect may realize optical effects arising from the structure, rather than the substance, of the sheet. Specifically, such components may realize arbitrary ray-direction mappings at each point on the sheet. Here such mappings, metarefractions, are explored from a range of perspectives: their inception, theoretical development, and ultimately their experimental realization. At its core this work is primarily mathematical in nature, but it draws on both experimental and computational techniques to test and visualize the concepts discussed. Examples of such ray-direction mappings are explored, as are their ray- and wave-optical implications. The thesis is structured as follows. Initially, the definition of metarefraction, along with some existing examples, is presented. Ray mappings are then related to negative refraction, a subject with which metarefraction has a surprising number of parallels. New forms of metarefraction are introduced, before being incorporated into imaging systems. Later, ray-optical transformations such as metarefraction are shown to be limited by implicit wave-optical restrictions; in some cases these vastly reduce the number of light fields that may be exactly transformed. After this, the most general possible metarefraction is sought, and a simple case is realized experimentally. Further restrictions are then determined, before finishing with a discussion and summary and a consideration of possible directions in which future work could develop.
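
    To make the idea of a ray-direction mapping concrete, the following Python sketch (our own simplified illustration; the mapping names are assumptions) applies two local metarefractions to a unit ray direction crossing a sheet in the z = 0 plane: flipping one tangential component of the direction, and negating both tangential components, the ray-optical analogue of refraction at an n = -1 interface, echoing the parallels with negative refraction noted above.

```python
import numpy as np

def metarefract(d, mapping="flip_x"):
    """Apply a local ray-direction mapping (a metarefraction) to a unit
    ray direction d = (dx, dy, dz) crossing a sheet in the z = 0 plane.

    Illustrative mappings (hypothetical names):
      "flip_x"   -- negate the tangential x component (ray flipping);
      "negative" -- negate both tangential components, the ray-optical
                    analogue of refraction at an n = -1 interface.
    """
    dx, dy, dz = d
    if mapping == "flip_x":
        out = np.array([-dx, dy, dz])
    elif mapping == "negative":
        out = np.array([-dx, -dy, dz])
    else:
        raise ValueError(f"unknown mapping: {mapping}")
    return out / np.linalg.norm(out)   # renormalize the direction

ray = np.array([0.3, 0.1, 0.95])
print(metarefract(ray, "negative"))    # tangential components reversed
```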