
    Pre-processing of integral images for 3-D displays

    This paper explores a method to accurately correct geometric distortions caused during the capture of three-dimensional (3-D) integral images. Such distortions are rotational and scaling errors which, if not corrected, will cause banding and moiré effects on the replayed image. The method for calculating the angle of deviation in the 3-D integral images is based on the Hough transform. It allows detection of the angle necessary for correction of the rotational error. Experiments have been conducted on a number of 3-D integral image samples, and it has been found that the proposed method produces results with an accuracy of 0.05 degrees.
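The rotation-correction step described above can be illustrated with a minimal straight-line Hough transform: vote in a (rho, theta) accumulator over the edge pixels and read the dominant orientation off the accumulator peak. This is a NumPy sketch on a synthetic tilted line, not the paper's implementation; the 0.05-degree bin width simply mirrors the reported accuracy.

```python
import numpy as np

def hough_angle(binary_img, angle_res_deg=0.05):
    """Estimate the dominant line orientation in a binary edge image
    via a straight-line Hough transform over (rho, theta)."""
    h, w = binary_img.shape
    thetas = np.deg2rad(np.arange(-90.0, 90.0, angle_res_deg))
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=np.int64)
    ys, xs = np.nonzero(binary_img)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        # Each edge pixel votes for every (rho, theta) line through it.
        r = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[r, np.arange(len(thetas))] += 1
    _, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return np.rad2deg(thetas[t_idx])

# Synthetic test image: one line tilted 2 degrees from vertical.
img = np.zeros((100, 100), dtype=np.uint8)
for y in range(100):
    img[y, int(round(50 + y * np.tan(np.deg2rad(2.0))))] = 1

tilt = hough_angle(img)  # magnitude close to the 2-degree deviation
```

In practice the estimated deviation angle would then drive an inverse rotation of the captured integral image before replay.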

    3D Capture and 3D Contents Generation for Holographic Imaging

    The intrinsic properties of holograms make 3D holographic imaging the best candidate for a 3D display. The holographic display is an autostereoscopic display which provides highly realistic images with unique perspective for an arbitrary number of viewers, motion parallax both vertically and horizontally, and focusing at different depths. The 3D content generation for this display is carried out by means of digital holography. Digital holography implements the classic holographic principle as a two-step process of wavefront capture in the form of a 2D interference pattern and wavefront reconstruction by applying, numerically or optically, a reference wave. The chapter follows the two main tendencies in forming the 3D holographic content: direct feeding of optically recorded digital holograms to a holographic display, and computer generation of interference fringes from directional, depth and colour information about the 3D objects. The focus is set on important issues that comprise encoding of 3D information for holographic imaging, starting from conversion of optically captured holographic data to the display data format, going through different approaches for forming the content for computer generation of holograms from coherently or incoherently captured 3D data, and finishing with methods for the accelerated computing of these holograms.
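The numerical wavefront reconstruction mentioned above can be sketched with the angular-spectrum propagation method: multiply the field's 2-D spectrum by the free-space transfer function and transform back; a negative distance back-propagates. The grid size, wavelength, and pixel pitch below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex wavefront by distance z (metres) using the
    angular-spectrum method; z < 0 back-propagates (reconstruction)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0, np.exp(1j * kz * z), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative round trip: propagate a square aperture to the sensor
# plane and back; within the sampled band this inverts the propagation.
n, dx, wl = 256, 5e-6, 633e-9       # grid, pixel pitch, He-Ne wavelength
aperture = np.zeros((n, n), dtype=complex)
aperture[120:136, 120:136] = 1.0
at_sensor = angular_spectrum(aperture, wl, dx, 0.01)
recovered = angular_spectrum(at_sensor, wl, dx, -0.01)
```

In a full digital-holography pipeline the captured interference pattern would first be multiplied by the (conjugate) reference wave before back-propagation; the sketch isolates only the propagation step.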

    Motion and disparity estimation with self adapted evolutionary strategy in 3D video coding

    Real-world information obtained by humans is three-dimensional (3-D). In experimental user trials, subjective assessments have clearly demonstrated the increased impact of 3-D pictures compared to conventional flat-picture techniques. It is reasonable, therefore, that we humans want an imaging system that produces pictures that are as natural and real as the things we see and experience every day. Three-dimensional imaging and, hence, 3-D television (3DTV) are very promising approaches expected to satisfy these desires. Integral imaging, which can capture true 3D color images with only one camera, has been seen as the right technology to offer stress-free viewing to audiences of more than one person. In this paper, we propose a novel approach that uses an Evolutionary Strategy (ES) for joint motion and disparity estimation to compress 3D integral video sequences. We propose to decompose the integral video sequence into viewpoint video sequences and jointly exploit motion and disparity redundancies to maximize the compression using a self-adapted ES. A half-pixel refinement algorithm is then applied by interpolating macroblocks in the previous frame to further improve the video quality. Experimental results demonstrate that the proposed adaptable ES with half-pixel joint motion and disparity estimation can achieve up to 1.5 dB objective quality gain without any additional computational cost over our previous algorithm. Furthermore, the proposed technique achieves similar objective quality to the full search algorithm while reducing the computational cost by up to 90%.
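A self-adapting ES of the kind this abstract describes can be sketched as a (1+1)-evolution strategy over integer motion vectors, with the mutation step size adapted on success and failure (a 1/5th-success-rule variant). This toy version searches a single block against one reference frame; the paper's joint motion-disparity search and half-pixel refinement are not reproduced, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sad(block, ref, vx, vy, x, y):
    """Sum of absolute differences between a block at (x, y) in the
    current frame and its candidate match at (x+vx, y+vy) in ref."""
    h, w = block.shape
    cand = ref[y + vy:y + vy + h, x + vx:x + vx + w]
    return int(np.abs(block.astype(int) - cand.astype(int)).sum())

def es_motion_search(block, ref, x, y, sigma=4.0, iters=200, bound=8):
    """(1+1)-ES over integer motion vectors: mutate the current vector
    with Gaussian noise, keep improvements, and self-adapt sigma."""
    v = np.array([0, 0])
    best = sad(block, ref, 0, 0, x, y)
    for _ in range(iters):
        cand = np.clip(np.rint(v + rng.normal(0, sigma, 2)).astype(int),
                       -bound, bound)
        cost = sad(block, ref, cand[0], cand[1], x, y)
        if cost <= best:
            v, best = cand, cost
            sigma *= 1.22                    # success: widen the search
        else:
            sigma = max(0.5, sigma * 0.95)   # failure: narrow the search
    return v, best

# Toy frame pair: the block's true displacement is (vx, vy) = (3, -2).
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
x = y = 24
block = ref[y - 2:y - 2 + 16, x + 3:x + 3 + 16]
v, best = es_motion_search(block, ref, x, y)
```

A half-pixel refinement stage would follow by interpolating the reference frame around the best integer vector and re-evaluating the cost at sub-pixel offsets.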

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
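Passive stereo is one of the optical reconstruction techniques such reviews cover; its core geometry reduces to a one-line triangulation, sketched here under an idealized rectified pinhole model (the numbers are illustrative, not from the paper):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Rectified stereo triangulation: depth Z = f * B / d, where d is
    the pixel disparity, f the focal length in pixels, and B the
    camera baseline in metres."""
    return focal_px * baseline_m / disparity_px

# A 40 px disparity with an 800 px focal length and a 5 cm baseline
# triangulates to a depth of 1 metre.
depth = disparity_to_depth(40.0, 800.0, 0.05)
```

Real laparoscopic surfaces are textureless and deforming, which is why the reviewed methods add structured light, shading, or temporal cues on top of this basic geometry.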

    Roadmap on 3D integral imaging: Sensing, processing, and display

    This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging. The article discusses various aspects of the field, including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections from the experts presenting various aspects of the field on sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents the vision of its author to describe the progress, potential, vision, and challenging issues in this field.

    Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems

    There has been great interest in researching and implementing effective technologies for the capture, processing, and display of 3D images. This broad interest is evidenced by widespread international research and activities on 3D technologies. There is a large number of journal and conference papers on 3D systems, as well as research and development efforts in government, industry, and academia on this topic for broad applications including entertainment, manufacturing, security and defense, and biomedical applications. Among these technologies, integral imaging is a promising approach for its ability to work with polychromatic scenes and under incoherent or ambient light for scenarios from macroscales to microscales. Integral imaging systems and their variations, also known as plenoptics or light-field systems, are applicable in many fields, and they have been reported in many applications, such as entertainment (TV, video, movies), industrial inspection, security and defense, and biomedical imaging and displays. This tutorial is addressed to the students and researchers in different disciplines who are interested to learn about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide the readers with a tutorial that teaches fundamental principles as well as more advanced concepts to understand, analyze, and implement integral imaging and light-field-type capture and display systems. The tutorial is organized to begin with reviewing the fundamentals of imaging, and then it progresses to more advanced topics in 3D imaging and displays. More specifically, this tutorial begins by covering the fundamentals of geometrical optics and wave optics tools for understanding and analyzing optical imaging systems. 
Then, we proceed to use these tools to describe integral imaging, light-field, or plenoptics systems, the methods for implementing the 3D capture procedures and monitors, their properties, resolution, field of view, performance, and metrics to assess them. We have illustrated with simple laboratory setups and experiments the principles of integral imaging capture and display systems. Also, we have discussed 3D biomedical applications, such as integral microscopy
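The synthetic refocusing that light-field capture enables can be sketched with the classic shift-and-add method: translate each sub-aperture view in proportion to its lens offset and average the results. This is a toy NumPy sketch with integer shifts and wrap-around rolls, not an excerpt from the tutorial.

```python
import numpy as np

def refocus(views, positions, slope):
    """Shift-and-add refocusing: each sub-aperture view is shifted in
    proportion to its (u, v) lens offset and the stack is averaged.
    `slope` selects the depth plane brought into focus."""
    out = np.zeros_like(views[0], dtype=float)
    for img, (u, v) in zip(views, positions):
        dy, dx = int(round(slope * v)), int(round(slope * u))
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / len(views)

# Synthetic light field: a 3x3 grid of views of one fronto-parallel
# plane whose disparity is 1 px per unit lens offset.
rng = np.random.default_rng(1)
scene = rng.random((32, 32))
positions = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]
views = [np.roll(np.roll(scene, -v, axis=0), -u, axis=1)
         for u, v in positions]

sharp = refocus(views, positions, 1.0)   # correct slope: views realign
blurry = refocus(views, positions, 0.0)  # wrong slope: averaged blur
```

Sweeping `slope` produces the focal stack that light-field and plenoptic displays exploit for focusing at different depths.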

    A virtual reality system using the concentric mosaic: Construction, rendering, and data compression

    This paper proposes a new image-based rendering (IBR) technique called the "concentric mosaic" for virtual reality applications. IBR using the plenoptic function is an efficient technique for rendering new views of a scene from a collection of previously captured sample images. It provides much better image quality and a lower computational requirement for rendering than conventional three-dimensional (3-D) model-building approaches. The concentric mosaic is a 3-D plenoptic function with viewpoints constrained on a plane. Compared with other more sophisticated four-dimensional plenoptic functions such as the light field and the lumigraph, the file size of a concentric mosaic is much smaller. In contrast to a panorama, the concentric mosaic allows users to move freely in a circular region and observe significant parallax and lighting changes without recovering the geometric and photometric scene models. The rendering of concentric mosaics is very efficient, involving the reordering and interpolation of previously captured slit images in the concentric mosaic. A concentric mosaic typically consists of hundreds of high-resolution images, which consume a significant amount of storage and bandwidth for transmission. An MPEG-like compression algorithm is therefore proposed in this paper, taking into account the access patterns and redundancy of the mosaic images. The compression algorithms of two equivalent representations of the concentric mosaic, namely the multiperspective panoramas and the normal setup sequence, are investigated. A multiresolution representation of concentric mosaics using a nonlinear filter bank is also proposed.
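The slit-image reordering at the heart of concentric-mosaic rendering can be illustrated by assembling one of the equivalent representations named in the abstract, a multiperspective panorama, from a stack of captured frames. This is a toy sketch with random frames standing in for a real circular sweep, and it omits the inter-slit interpolation a real renderer performs.

```python
import numpy as np

def multiperspective_panorama(frames, col=None):
    """Take one slit (image column) from every frame of the circular
    sweep and stack the slits side by side; each output column comes
    from a different viewpoint along the camera circle."""
    n, h, w = frames.shape
    col = w // 2 if col is None else col
    return frames[:, :, col].T   # shape (h, n): one slit per frame

# Stand-in capture: 360 frames of height 40 and width 64.
rng = np.random.default_rng(2)
frames = rng.random((360, 40, 64))
pano = multiperspective_panorama(frames)
```

Rendering a novel view inside the circular region then amounts to selecting, for each output column, the stored slit whose capture ray best matches the desired viewing ray.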