
    Multidimensional Optical Sensing and Imaging Systems (MOSIS): From Macro to Micro Scales

    Multidimensional optical imaging systems for information processing and visualization technologies have numerous applications in fields such as manufacturing, medical sciences, entertainment, robotics, surveillance, and defense. Among different three-dimensional (3-D) imaging methods, integral imaging is a promising multiperspective sensing and display technique. Compared with other 3-D imaging techniques, integral imaging can capture a scene using an incoherent light source and generate real 3-D images for observation without any special viewing devices. This review paper describes passive multidimensional imaging systems combined with different integral imaging configurations. One example is the integral-imaging-based multidimensional optical sensing and imaging system (MOSIS), which can be used for 3-D visualization, seeing through obscurations, material inspection, and object recognition from microscales to long-range imaging. This system utilizes many degrees of freedom, such as time and space multiplexing, depth information, and polarimetric, temporal, photon-flux, and multispectral information, based on integral imaging to record and reconstruct the multidimensionally integrated scene. Image fusion may be used to integrate the multidimensional images obtained by polarimetric sensors, multispectral cameras, and various multiplexing techniques. The multidimensional images contain substantially more information than two-dimensional (2-D) images or conventional 3-D images. In addition, we present recent progress and applications of 3-D integral imaging, including human gesture recognition in the time domain, depth estimation, mid-wave-infrared photon counting, 3-D polarimetric imaging for object shape and material identification, dynamic integral imaging implemented with liquid-crystal devices, and 3-D endoscopy for healthcare applications.

    B. Javidi wishes to acknowledge support by the National Science Foundation (NSF) under Grant NSF/IIS-1422179, and by DARPA and the US Army under contract number W911NF-13-1-0485. The work of P. Latorre Carmona, A. Martínez-Uso, J. M. Sotoca and F. Pla was supported by the Spanish Ministry of Economy under the project ESP2013-48458-C4-3-P, by MICINN under the project MTM2013-48371-C2-2-PDGI, by Generalitat Valenciana under the project PROMETEO-II/2014/062, and by Universitat Jaume I through project P11B2014-09. The work of M. Martínez-Corral and G. Saavedra was supported by the Spanish Ministry of Economy and Competitiveness under the grant DPI2015-66458-C2-1R, and by the Generalitat Valenciana, Spain, under the project PROMETEOII/2014/072.
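
    As a concrete illustration of the computational reconstruction step that integral imaging relies on, the following is a minimal sketch of shift-and-average synthetic-aperture reconstruction at a chosen depth plane. The function name, the grid indexing of the elemental images, and the sign convention of the shift are illustrative assumptions, not code from the paper.

    ```python
    import numpy as np

    def saii_reconstruct(elemental_images, pitch_mm, focal_px, depth_mm):
        """Shift-and-average reconstruction of one depth plane (a sketch).

        elemental_images: dict mapping grid index (row, col) -> 2-D array,
        where (0, 0) is the reference camera of the synthetic aperture.
        pitch_mm: spacing between adjacent cameras; focal_px: focal length
        in pixel units; depth_mm: distance of the plane to bring into focus.
        """
        shift = pitch_mm * focal_px / depth_mm  # per-camera disparity (pixels)
        h, w = next(iter(elemental_images.values())).shape
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        for (r, c), img in elemental_images.items():
            dy, dx = int(round(r * shift)), int(round(c * shift))
            ys, xs = max(dy, 0), max(dx, 0)            # destination window
            ye, xe = h + min(dy, 0), w + min(dx, 0)
            if ye <= ys or xe <= xs:                   # shift beyond the frame
                continue
            acc[ys:ye, xs:xe] += img[ys - dy:ye - dy, xs - dx:xe - dx]
            cnt[ys:ye, xs:xe] += 1.0
        return acc / np.maximum(cnt, 1.0)
    ```

    Pixels on the chosen plane align across views and reinforce one another, while pixels at other depths, including partial occluders, are averaged out; this is what enables "seeing through" obscurations.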

    Disparity map generation based on trapezoidal camera architecture for multiview video

    Visual content acquisition is a strategic functional block of any visual system. Despite its wide possibilities, the arrangement of cameras for the acquisition of good-quality visual content for use in multi-view video remains a major challenge. This paper presents a mathematical description of the trapezoidal camera architecture and the relationships that facilitate the determination of camera positions for visual content acquisition in multi-view video and for depth map generation. The strength of the trapezoidal camera architecture is that it allows an adaptive camera topology by which points within the scene, especially occluded ones, can be optically and geometrically viewed from several different viewpoints, either on the edge of the trapezoid or inside it. The concept of a maximum independent set, the characteristics of the trapezoid, and the fact that the camera positions (with the exception of a few) differ in their vertical coordinates could very well be used to address the issue of occlusion, which continues to be a major problem in computer vision with regard to depth map generation.
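
    For the depth-map-generation side of the discussion, the relation below is the standard rectified-stereo conversion from matched-pixel disparity to depth; in the trapezoidal architecture each camera pair contributes its own baseline, so per-pair rectification is assumed here and the function name is illustrative.

    ```python
    import numpy as np

    def depth_from_disparity(disparity_px, focal_px, baseline_mm):
        """Pinhole-stereo relation Z = f * B / d, applied per pixel.

        disparity_px: 2-D array of disparities between a rectified camera
        pair (pixels); focal_px: focal length in pixel units;
        baseline_mm: separation of the two camera centres.
        """
        d = np.asarray(disparity_px, dtype=np.float64)
        with np.errstate(divide="ignore"):
            depth = focal_px * baseline_mm / d
        depth[~np.isfinite(depth)] = 0.0  # mark unmatched (zero-disparity) pixels
        return depth
    ```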

    Three-dimensional imaging with multiple degrees of freedom using data fusion

    This paper presents an overview of research work, together with some novel strategies and results, on using data fusion in 3-D imaging with multiple information sources. We examine a variety of approaches and applications, such as 3-D imaging integrated with polarimetric and multispectral imaging, photon-counting 3-D imaging at low photon-flux levels, and image fusion in both multiwavelength 3-D digital holography and 3-D integral imaging. Results demonstrate the benefits data fusion provides for different purposes, including visualization enhancement under various conditions and improved 3-D reconstruction quality.
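
    As one elementary example of the kind of fusion operation surveyed here, the sketch below combines co-registered images from different modalities with a per-channel weighted average; the function and the equal-weight default are assumptions for illustration, not the specific fusion rules of the cited systems.

    ```python
    import numpy as np

    def fuse_channels(channels, weights=None):
        """Pixel-wise weighted fusion of co-registered images (one per modality).

        channels: sequence of 2-D arrays, already aligned and on a common scale
        weights: optional per-channel weights; defaults to an equal-weight average
        """
        imgs = [np.asarray(c, dtype=np.float64) for c in channels]
        stack = np.stack(imgs)
        w = np.ones(len(imgs)) if weights is None else np.asarray(weights, dtype=np.float64)
        w = w / w.sum()
        return np.tensordot(w, stack, axes=1)  # weighted sum over the channel axis
    ```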

    Depth and All-in-Focus Image Estimation in Synthetic Aperture Integral Imaging Under Partial Occlusions

    A common assumption in integral imaging reconstruction is that a pixel is photo-consistent if all viewpoints observed by the different cameras converge at a single point when focusing at the proper depth. However, occlusions between objects in the scene prevent this assumption from being fulfilled. In this paper, a novel depth and all-in-focus image estimation method is presented, based on a photo-consistency measure that uses the median criterion over the elemental images. The interest of this approach is that it detects which cameras correctly see a partially occluded object at a certain depth, which allows a precise estimate of the object depth. In addition, a robust solution is proposed to detect the boundaries between partially occluded objects, which are subsequently used during the regularized depth estimation process. Experimental results show that the proposed method outperforms other state-of-the-art depth estimation methods in a synthetic aperture integral imaging framework.
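
    A minimal sketch of the median-based photo-consistency idea follows: for a scene point hypothesized at some depth, the intensities that the elemental images project onto it are collected, and the median suppresses the minority of views that are blocked by an occluder. The function is illustrative; the paper's full method adds boundary detection and regularization on top of such a measure.

    ```python
    import numpy as np

    def median_photoconsistency(samples):
        """Robust photo-consistency for one pixel at one candidate depth.

        samples: 1-D array of the intensities that the elemental images
        project onto this scene point at the tested depth. With partial
        occlusion only a subset of views sees the true surface; the median
        suppresses the occluded outliers.
        Returns (estimated intensity, consistency cost): low cost = consistent.
        """
        s = np.asarray(samples, dtype=np.float64)
        m = np.median(s)
        cost = np.median(np.abs(s - m))  # median absolute deviation
        return m, cost
    ```

    Sweeping candidate depths and keeping, per pixel, the depth with the lowest cost yields the depth map, while the corresponding median value yields the all-in-focus image.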

    Integral imaging acquisition and processing for visualization of photon counting images in the mid-wave infrared range

    Paper presented at SPIE Conference Volume 9867, "Three-Dimensional Imaging, Visualization, and Display 2016", organized by Bahram Javidi and Jung-Young Son and held in Baltimore, Maryland, United States, on 17 April 2016.

    In this paper, we present an overview of our previously published work on the application of the maximum likelihood (ML) reconstruction method to integral images acquired with a mid-wave infrared detector on two different types of scenes: one consisting of a road, a group of trees, and a vehicle just behind one of the trees (the car being at a distance of more than 200 m from the camera), and another consisting of a view of the Wright Air Force Base airfield, with several hangars and various other installations (including warehouses) at distances ranging from 600 m to more than 2 km. Dark-current noise is considered, taking into account the particular features of this type of sensor. Results show that this methodology improves visualization in the photon-counting domain.
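
    For orientation, a minimal sketch of the Poisson maximum-likelihood step is given below: with elemental photon-count images already aligned to a reconstruction depth, the per-pixel MLE of the irradiance is the sample mean of the counts. The constant dark-count rate and the function name are simplifying assumptions; the paper models the dark-current behavior of the mid-wave infrared sensor in more detail.

    ```python
    import numpy as np

    def ml_photon_reconstruction(aligned_counts, dark_rate=0.0):
        """Poisson maximum-likelihood estimate of irradiance at one depth plane.

        aligned_counts: (K, H, W) array of photon-count elemental images
        already shifted to the reconstruction depth. For i.i.d. Poisson
        counts with a common mean, the MLE is the per-pixel sample mean; a
        constant dark-count rate (an assumed sensor model) is subtracted.
        """
        c = np.asarray(aligned_counts, dtype=np.float64) - dark_rate
        return np.clip(c.mean(axis=0), 0.0, None)  # irradiance cannot be negative
    ```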

    Polarimetric 3D integral imaging in photon-starved conditions

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded under low-light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization must be handled carefully. In our approach, polarimetric 3D integral images are generated using maximum likelihood estimation and subsequently reconstructed by means of a total-variation denoising filter. In this way, the polarimetric results are comparable to those obtained under conventional illumination conditions. We also show that polarimetric information retrieved from photon-starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon-counting integral imaging.
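
    The Stokes/DoP computation referred to above can be sketched with the standard four-polarizer-angle estimator below; the paper's photon-counting preprocessing (ML estimation and total-variation denoising) is not reproduced here, so treat this as the conventional-illumination baseline.

    ```python
    import numpy as np

    def linear_stokes_dop(i0, i45, i90, i135, eps=1e-9):
        """Linear Stokes parameters and degree of linear polarization (DoLP)
        from four polarizer-angle images (0, 45, 90, 135 degrees)."""
        i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64)
                              for a in (i0, i45, i90, i135))
        s0 = i0 + i90    # total intensity
        s1 = i0 - i90    # horizontal vs vertical preference
        s2 = i45 - i135  # +45 vs -45 preference
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps)
        return s0, s1, s2, np.clip(dolp, 0.0, 1.0)
    ```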

    Analysis and Observations from the First Amazon Picking Challenge

    This paper presents an overview of the inaugural Amazon Picking Challenge, along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team's background, mechanism design, perception apparatus, and planning and control approach. We identify trends in these data, correlate them with each team's success in the competition, and discuss observations and lessons learned based on the survey results and the authors' personal experiences during the challenge.

    3D polarimetric integral imaging in low illumination conditions

    We overview a previously reported three-dimensional (3D) polarimetric integral imaging method and algorithms for extracting 3D polarimetric information in low-light environments. A 3D integral imaging reconstruction algorithm is first applied to the originally captured two-dimensional (2D) polarimetric images. The signal-to-noise ratio (SNR) of the 3D reconstructed polarimetric image is enhanced compared with that of the 2D images. The Stokes polarization parameters are measured and used to compute the 3D volumetric degree-of-polarization (DoP) image of the scene. Statistical analysis of the 3D DoP extracts the polarimetric properties of the scene. Experimental results verify that the proposed method outperforms conventional 2D polarimetric imaging in low-illumination environments.
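
    As a sketch of the "statistical analysis of the 3D DoP" step, the helper below summarizes the DoP distribution inside an object region; the statistics chosen and the mask-based region selection are illustrative assumptions, not the paper's exact analysis.

    ```python
    import numpy as np

    def dop_region_statistics(dop_map, mask):
        """Summary statistics of the degree of polarization inside a region.

        dop_map: 2-D DoP image from the 3-D reconstruction (values in [0, 1])
        mask: boolean array selecting the object region to characterize.
        Distinct DoP distributions are what allow shape/material discrimination.
        """
        vals = np.asarray(dop_map, dtype=np.float64)[np.asarray(mask, dtype=bool)]
        return {"mean": vals.mean(), "std": vals.std(), "median": np.median(vals)}
    ```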

    Time-of-flight compressed-sensing ultrafast photography for encrypted three-dimensional dynamic imaging

    We applied compressed ultrafast photography (CUP), a computational imaging technique, to acquire three-dimensional (3D) images. The approach unites image encryption, compression, and acquisition in a single measurement, thereby allowing efficient and secure data transmission. By leveraging the time-of-flight (ToF) information of pulsed light reflected by the object, we can reconstruct a volumetric image (150 mm × 150 mm × 1050 mm, x × y × z) from a single camera snapshot. Furthermore, we demonstrated high-speed 3D videography of a moving object at 75 frames per second using the ToF-CUP camera.
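
    The ToF-to-depth mapping that makes the volumetric reconstruction possible is just the round-trip light-travel relation z = c t / 2; a toy version is shown below, with the co-located source/camera assumption stated in the comment.

    ```python
    # A toy sketch of the time-of-flight depth relation used in ToF imaging:
    # a pulse reflected by a surface at depth z returns after t = 2 z / c,
    # so each temporal frame of the streak sequence maps to a depth slice.
    C_MM_PER_NS = 299.792458  # speed of light in mm per nanosecond

    def depth_from_arrival_time(t_ns):
        """Round-trip time (ns) -> depth (mm); assumes the pulsed source and
        camera are co-located and the medium is air."""
        return 0.5 * C_MM_PER_NS * t_ns
    ```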

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning, and have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex data sets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other.

    Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".