
    Embedded Line Scan Image Sensors: The Low Cost Alternative for High Speed Imaging

    In this paper we propose a low-cost, high-speed line-scan imaging system. We replace an expensive industrial line-scan camera and illumination with a custom-built set-up of cheap off-the-shelf components, yielding a measurement system of comparable quality at about one twentieth of the cost. We use a low-cost linear (1D) image sensor, cheap optics including LED-based or laser-based lighting, and an embedded platform to process the images. A step-by-step method to design such a custom high-speed imaging system and to select proper components is proposed. Simulations that predict the final image quality obtained by the set-up have been developed. Finally, we applied our method in a lab environment closely representing real-life cases. Our results show that our simulations are very accurate and that our low-cost line-scan set-up acquires image quality comparable to that of the high-end commercial vision system, for a fraction of the price.
    Comment: 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA)
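The component-selection step described above hinges on simple throughput arithmetic: the line rate must match the object speed and the pixel footprint on the object, and the exposure time cannot exceed one line period. A minimal sketch of this sizing calculation (all numbers and function names are illustrative assumptions, not values from the paper):

```python
# Hypothetical first-order sizing of a line-scan set-up; numbers are
# illustrative, not taken from the paper.
def required_line_rate(object_speed_m_s, pixel_size_on_object_m):
    """Line rate (lines/s) so that each scan line advances one pixel."""
    return object_speed_m_s / pixel_size_on_object_m

def max_exposure(line_rate_hz):
    """Upper bound on exposure time: one line period."""
    return 1.0 / line_rate_hz

# e.g. a 1 m/s web imaged at a 100 µm pixel footprint
rate = required_line_rate(object_speed_m_s=1.0, pixel_size_on_object_m=100e-6)
print(rate)               # ~1e4 lines/s
print(max_exposure(rate)) # ~1e-4 s per line
```

At these rates the exposure budget is what usually forces the LED- or laser-based lighting choice mentioned in the abstract.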

    PMAS: The Potsdam Multi Aperture Spectrophotometer. II. The Wide Integral Field Unit PPak

    PPak is a new fiber-based Integral Field Unit (IFU), developed at the Astrophysical Institute Potsdam and implemented as a module in the existing PMAS spectrograph. The purpose of PPak is to provide both an extended field of view with a large light-collecting power for each spatial element and an adequate spectral resolution. The PPak system consists of a fiber bundle with 331 object fibers, 36 sky fibers, and 15 calibration fibers. The object and sky fibers collect the light from the focal plane behind a focal-reducer lens. The object fibers of PPak, each 2.7 arcseconds in diameter, provide a contiguous hexagonal field of view of 74 × 64 arcseconds on the sky, with a filling factor of 60%. The operational wavelength range is from 400 to 900 nm. The PPak IFU, together with the PMAS spectrograph, is intended for the study of extended, low-surface-brightness objects, offering an optimization of total light-collecting power and spectral resolution. This paper describes the instrument design; the assembly, integration, and tests; and the commissioning and operational procedures, and presents the measured performance at the telescope.
    Comment: 14 pages, 21 figures, accepted at PAS

    3D shape measurement of discontinuous specular objects based on advanced PMD with bi-telecentric lens

    This paper presents an advanced phase measuring deflectometry (PMD) method, based on a novel mathematical model, to obtain the three-dimensional (3D) shape of discontinuous specular objects using a bi-telecentric lens. The proposed method uses an LCD screen, a flat beam splitter, a camera with a bi-telecentric lens, and a translating stage. The LCD screen displays sinusoidal fringe patterns and can be moved by the stage to two different positions along the normal direction of a reference plane. The camera captures the deformed fringe patterns reflected by the measured specular surface. The beam splitter allows the fringe patterns to be displayed and imaged from the same direction. Using the proposed advanced PMD method, the depth data can be calculated directly from absolute phase, instead of by integrating gradient data. In order to calibrate the relative orientation of the LCD screen and the camera, an auxiliary plane mirror is used to reflect the pattern on the LCD screen three times. After the geometric calibration, 3D shape data of the measured specular objects are calculated from the phase differences between the reference plane and the reflecting surface. The experimental results show that the 3D shape of discontinuous specular objects can be effectively and accurately measured from absolute phase data by the proposed advanced PMD method.
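The phase-recovery step that precedes the depth calculation is standard fringe analysis. A minimal N-step phase-shifting sketch (a generic building block, not the paper's full PMD model with screen translation; the synthetic data are illustrative):

```python
import numpy as np

# Generic N-step phase-shifting: recover the wrapped phase from N fringe
# images I_k = A + B*cos(phi + 2*pi*k/N).  Not the paper's specific model.
def wrapped_phase(intensities):
    n = len(intensities)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(I * np.sin(d) for I, d in zip(intensities, deltas))
    den = sum(I * np.cos(d) for I, d in zip(intensities, deltas))
    return -np.arctan2(num, den)

# Synthetic check: four shifted images of a known phase ramp.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
imgs = [0.5 + 0.4 * np.cos(phi + 2 * np.pi * k / 4) for k in range(4)]
est = wrapped_phase(imgs)
print(np.allclose(est, phi, atol=1e-9))  # True
```

In a deflectometry set-up this wrapped phase is then unwrapped to the absolute phase from which, per the abstract, depth is computed directly.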

    Development of a Background-Oriented Schlieren Technique with Telecentric Lenses for Supersonic Flow

    Background-oriented schlieren (BOS) is a quantitative optical technique that exploits the light deflection occurring in non-homogeneous transparent media. It allows density gradients to be measured indirectly by analysing the apparent displacement of features of a background pattern when imaged through the investigated flow. Thanks to its simple set-up and to its consolidated data-reduction technique based on cross-correlation algorithms, the BOS technique has progressively attracted the interest of researchers. In this work, a BOS system using a telecentric lens system has been set up in order to improve measurement accuracy and to avoid the 3D effects that arise from using conventional entocentric lenses. The design of the telecentric lens system is reported along with an analysis of its performance in terms of spatial resolution. Some preliminary tests on a supersonic flow are also reported.
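The cross-correlation data reduction mentioned above can be illustrated in one dimension: the apparent background displacement is the lag of the correlation peak. A toy sketch (real BOS processing uses 2-D interrogation windows and sub-pixel peak fitting; the data here are synthetic):

```python
import numpy as np

# Toy BOS data reduction: recover the apparent shift of a random-dot
# background row by locating the cross-correlation peak.
rng = np.random.default_rng(0)
reference = rng.random(256)        # random-dot background row
true_shift = 7                     # pixels, as caused by density gradients
distorted = np.roll(reference, true_shift)

corr = np.correlate(distorted, reference, mode="full")
lag = np.argmax(corr) - (len(reference) - 1)
print(lag)  # 7
```

A field of such displacement vectors is what BOS inverts into density-gradient information.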

    A Multi-view Camera Model for Line-Scan Cameras with Telecentric Lenses

    We propose a novel multi-view camera model for line-scan cameras with telecentric lenses. The camera model supports an arbitrary number of cameras and assumes a linear relative motion with constant velocity between the cameras and the object. We distinguish two motion configurations. In the first configuration, all cameras move with independent motion vectors. In the second configuration, the cameras are mounted rigidly with respect to each other and therefore share a common motion vector. The camera model can represent arbitrary lens distortions by supporting arbitrary positions of the line sensor with respect to the optical axis. We propose an algorithm to calibrate a multi-view telecentric line-scan camera setup. To facilitate a 3D reconstruction, we prove that an image pair acquired with two telecentric line-scan cameras can always be rectified to the epipolar standard configuration, in contrast to line-scan cameras with entocentric lenses, for which this is possible only under very restricted conditions. The rectification allows an arbitrary stereo algorithm to be used to calculate disparity images. We propose an efficient algorithm to compute 3D coordinates from these disparities. Experiments on real images show the validity of the proposed multi-view telecentric line-scan camera model.
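The essence of such a camera model can be sketched as an orthographic projection along the sensor line combined with uniform motion along the scan direction; depth then drops out of the image position, which is the defining property of a telecentric lens. A simplified, hypothetical version (ignoring the lens distortions and sensor misalignment that the paper's model supports):

```python
# Hypothetical simplified telecentric line-scan projection.  All parameter
# names and numbers are illustrative, not the paper's model.
def project(point, magnification, velocity, line_period, pixel_pitch):
    """Map a 3D point (x, y, z) in camera coordinates to (row, col).

    The sensor line is assumed to lie along x at y = 0 and the object to
    move with constant velocity along y, so the point crosses the line
    after time t = y / velocity.  Depth z does not affect the result:
    that is the telecentric (orthographic) property.
    """
    x, y, z = point
    row = y / (velocity * line_period)      # scan-line index from motion
    col = magnification * x / pixel_pitch   # orthographic projection
    return row, col

r, c = project((2e-3, 5e-3, 0.1), magnification=0.5, velocity=0.05,
               line_period=1e-3, pixel_pitch=5e-6)
print(r, c)
```

Note that changing the z value leaves (row, col) unchanged, which is why rectification to the epipolar standard configuration is always possible in this setting.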

    A study of model deflection measurement techniques applicable within the national transonic facility

    Moire contouring, scanning interferometry, and holographic contouring were examined to determine their practicality and potential to meet the performance requirements for a model-deflection sensor. The envisioned system is to be nonintrusive and capable of mapping or contouring the surface of a 1-meter by 1-meter model with a resolution of 50 to 100 points. The available literature was surveyed, and computations and analyses were performed to establish specific performance requirements, as well as the capabilities and limitations of such a sensor within the geometry of the NTF test section. Of the three systems examined, holographic contouring offers the most promise. Unlike Moire contouring, it is not hampered by limited contour spacing and extraneous fringes. Its transverse resolution can far exceed the limited point-sampling resolution of scanning heterodyne interferometry. The availability of the ruby laser as a high-power, pulsed, multiple-wavelength source makes such a system feasible within the NTF.
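The contour spacing attainable with two-wavelength holographic contouring follows from a standard relation between the two laser lines; a quick calculation (the formula is the textbook double-pass expression, and the wavelengths below are illustrative values near the ruby line, not the actual source lines considered in the study):

```python
# Two-wavelength holographic contouring: depth interval between adjacent
# contour fringes (double-pass / reflection geometry).  Wavelengths are
# illustrative, not the study's actual laser lines.
def contour_interval(lambda1, lambda2):
    """Contour interval: lambda1 * lambda2 / (2 * |lambda1 - lambda2|)."""
    return lambda1 * lambda2 / (2 * abs(lambda1 - lambda2))

# e.g. two lines separated by 0.3 nm near the 694 nm ruby wavelength
d = contour_interval(694.3e-9, 694.0e-9)
print(d)  # ~0.8 mm per contour
```

The closer the two wavelengths, the coarser the contour interval, which is why a tunable multiple-wavelength source gives useful control over the depth sampling.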

    Calibration of a Telecentric Structured-light Device for Micrometric 3D Reconstruction

    Structured-light 3D reconstruction techniques are employed in a wide range of applications for industrial inspection. In particular, some tasks require micrometric precision for the identification of microscopic surface irregularities. We propose a novel calibration technique for structured-light systems that adopt telecentric lenses for both camera and projector. The device exploits a fixed, stripe-based light pattern to perform accurate microscopic surface reconstruction and measurement. Our method employs a sphere with a known radius as the calibration target and takes advantage of the orthographic projection model of the telecentric lenses to recover the bundle of planes originated by the projector. Once the sheaf of parallel planes is properly described in the camera reference frame, the triangulation of the object's surface hit by the light stripes is immediate. Moreover, we tested our technique in a real-world scenario for industrial surface inspection by implementing a complete pipeline to recover the intersections between the projected planes and the surface. Experimental analysis shows the robustness of the proposed approach on both synthetic and real-world test data.
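The triangulation step described above reduces to intersecting an orthographic camera ray with a calibrated light plane. A minimal sketch (function names and numbers are illustrative, not the paper's calibration output):

```python
import numpy as np

# Sketch of stripe triangulation with a telecentric (orthographic) camera:
# each pixel defines a ray parallel to the viewing direction; intersecting
# it with the calibrated stripe plane yields the 3D point.  Toy values only.
def triangulate(pixel_xy, view_dir, plane_n, plane_d):
    """Intersect the orthographic ray p0 + t*view_dir with plane n.X = d."""
    p0 = np.array([pixel_xy[0], pixel_xy[1], 0.0])
    t = (plane_d - plane_n @ p0) / (plane_n @ view_dir)
    return p0 + t * view_dir

n = np.array([0.0, 0.0, 1.0])          # toy stripe plane: z = 5
X = triangulate((1.0, 2.0), np.array([0.0, 0.0, 1.0]), n, 5.0)
print(X)  # [1. 2. 5.]
```

With the sheaf of parallel projector planes known in the camera frame, this intersection is all that remains per illuminated pixel.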

    Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems

    There has been great interest in researching and implementing effective technologies for the capture, processing, and display of 3D images. This broad interest is evidenced by widespread international research activities on 3D technologies. There is a large number of journal and conference papers on 3D systems, as well as research and development efforts in government, industry, and academia on this topic, for broad applications including entertainment, manufacturing, security and defense, and biomedical applications. Among these technologies, integral imaging is a promising approach for its ability to work with polychromatic scenes and under incoherent or ambient light, for scenarios from macroscales to microscales. Integral imaging systems and their variations, also known as plenoptic or light-field systems, are applicable in many fields, and they have been reported in many applications, such as entertainment (TV, video, movies), industrial inspection, security and defense, and biomedical imaging and displays. This tutorial is addressed to students and researchers in different disciplines who are interested in learning about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide the readers with a tutorial that teaches fundamental principles as well as more advanced concepts to understand, analyze, and implement integral imaging and light-field-type capture and display systems. The tutorial is organized to begin by reviewing the fundamentals of imaging, and then it progresses to more advanced topics in 3D imaging and displays. More specifically, this tutorial begins by covering the fundamentals of geometrical optics and wave optics tools for understanding and analyzing optical imaging systems. Then, we proceed to use these tools to describe integral imaging, light-field, or plenoptic systems; the methods for implementing the 3D capture procedures and monitors; their properties, resolution, field of view, and performance; and the metrics used to assess them. We illustrate the principles of integral imaging capture and display systems with simple laboratory setups and experiments. We also discuss 3D biomedical applications, such as integral microscopy.
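As a back-of-envelope illustration of the geometrical-optics fundamentals such a tutorial builds on: in the pinhole approximation, a point at depth z in front of a lenslet array with pitch p and lenslet-to-sensor gap g appears displaced by p·g/z between adjacent elemental images. A hypothetical numeric sketch (symbols and numbers are illustrative assumptions):

```python
# Pinhole-approximation parallax between adjacent elemental images in an
# integral-imaging pickup: delta = pitch * gap / depth (similar triangles).
# Numbers are illustrative only.
def elemental_disparity(pitch, gap, depth):
    return pitch * gap / depth

d = elemental_disparity(pitch=1e-3, gap=3e-3, depth=0.2)
print(d)  # ~15 µm on the sensor
```

This depth-dependent disparity is what computational reconstruction and light-field refocusing ultimately exploit.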