44,469 research outputs found

    Using an n-zone TDI camera for acquisition of multiple images with different illuminations in a single scan

    For fast scanning of large surfaces at microscopic resolution, or for scanning roll-fed material, TDI line scan cameras are typically used. A TDI camera sums the light collected in adjacent lines of the image sensor synchronously with the motion of the object, and therefore has much higher sensitivity than a standard line camera. Many applications in the field of optical inspection, however, require more than one image of the object under test, each under a different illumination. For this task we either need more than one TDI camera or we have to scan the object several times under different illumination conditions; both solutions are often not entirely satisfying. In this paper we present a solution using a modified TDI sensor consisting of three or more separate TDI zones. With this n-zone TDI camera it is possible to acquire multiple images with different illuminations in a single scan. In a simulation we demonstrate the principle of operation of the camera and the necessary image preprocessing, which can be implemented in the frame grabber hardware.
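The line summation described in this abstract can be sketched numerically. Below is a minimal, hypothetical Python simulation of a single-zone TDI scan; the n-zone variant would keep one such accumulator per zone, each with its own illumination active. The function name and stage count are illustrative, not from the paper.

```python
import numpy as np

def tdi_scan(obj, n_stages):
    """Simulate a TDI line scan: each object line is integrated
    n_stages times as it moves past the sensor stages, so the
    output carries n_stages times the signal of a single-line scan."""
    h, w = obj.shape
    acc = np.zeros((h + n_stages - 1, w))
    # at clock tick t, stage k sees object line t - k (motion-synchronous)
    for t in range(h + n_stages - 1):
        for k in range(n_stages):
            line = t - k
            if 0 <= line < h:
                acc[t] += obj[line]
    # keep only lines that passed under all stages
    return acc[n_stages - 1 : h]

obj = np.ones((8, 4))          # toy object: uniform brightness
out = tdi_scan(obj, n_stages=3)
print(out[0, 0])               # 3.0 : threefold sensitivity vs. one line
```

A uniform input comes out scaled by the stage count, which is exactly the sensitivity gain the abstract attributes to TDI over standard line cameras.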

    Amorphous silicon 3D sensors applied to object detection

    Nowadays, 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS arrays, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data-processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position-sensitive detectors (PSDs), with the final goal of integrating them into 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, produced in the clean room at CENIMAT-CEMOP. During the first phase of this work, the fabrication and the study of the static and dynamic specifications of the sensors, as well as their conditioning in relation to the existing scientific and technological knowledge, served as a starting point. Subsequently, data acquisition and signal-processing electronics were assembled, and various prototypes were developed for the 32- and 128-line PSD array sensors. Appropriate optical solutions were integrated with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be achieved. All control, data acquisition and 3D rendering software was implemented for these systems, and the components were combined into several integrated systems for the 32- and 128-line PSD 3D sensors. The performance of the 32-line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, and for microscopy applications, such as micro-object movement detection. Trials were also performed with the 128-line PSD sensor systems.
    Sensor channel non-linearities of approximately 4 to 7% were obtained. Overall, the results show the possibility of using a linear array of 32/128 1D line sensors based on amorphous silicon technology to render 3D profiles of objects. The presented system and setup allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that the sensor system can detect is approximately 350 μm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15° to 85° and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Both simple and more complex objects, such as a rubber and a plastic fork, can be rendered in 3D properly, accurately and at high resolution using this sensor and system platform. The nip-structure sensor system can detect primary and even derived colours of objects through proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources; a mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometre-scale objects using the 32-line PSD sensor system. This kind of setup can detect whether a micro-object is moving, as well as its dimensions and its position in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 μm.
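The abstract does not give the PSD read-out equations, but a standard 1D lateral-effect PSD recovers the light-spot position from its two electrode photocurrents; the sketch below shows that textbook relation. The function name and the numerical values are illustrative, not taken from the thesis.

```python
def psd_position(i1, i2, length_mm):
    """Estimate the spot position on a 1D position-sensitive detector
    from the two electrode photocurrents i1 and i2. Position is
    measured from the detector centre; an ideal PSD is linear in
    the normalised current imbalance (non-linearity shows up as a
    deviation from this relation, e.g. the ~3-7% reported above)."""
    return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

# spot at the centre gives equal currents -> zero displacement
print(psd_position(1.0, 1.0, length_mm=10.0))  # 0.0
# spot nearer electrode 2 -> positive displacement
print(psd_position(0.5, 1.5, length_mm=10.0))  # 2.5
```

In a scanning setup, this per-line position estimate is what gets triangulated into the object height as a function of scanning angle.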

    3D scanning of cultural heritage with consumer depth cameras

    Three-dimensional reconstruction of cultural heritage objects is an expensive and time-consuming process. Recent consumer real-time depth acquisition devices, like the Microsoft Kinect, allow very fast and simple acquisition of 3D views. However, 3D scanning with such devices is a challenging task due to the limited accuracy and reliability of the acquired data. This paper introduces a 3D reconstruction pipeline suited to using consumer depth cameras as hand-held scanners for cultural heritage objects. Several new contributions have been made to achieve this result, including an ad-hoc filtering scheme that exploits the model of the error on the acquired data and a novel algorithm for the extraction of salient points that exploits both depth and colour data. The salient points are then used within a modified version of the ICP algorithm that exploits both geometry and colour distances to precisely align the views, even when the geometry information alone is not sufficient to constrain the registration. The proposed method, although applicable to generic scenes, has been tuned to the acquisition of sculptures, and the experimental results indicate that its performance in this setting is rather good.
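As a rough illustration of a correspondence step that weighs geometry and colour together, as the modified ICP described above does, the sketch below scores candidate matches by a weighted sum of squared geometric and colour distances. The function, the weight `lam` and the toy data are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def closest_points(src, dst, src_rgb, dst_rgb, lam=0.1):
    """For each source point, pick the destination point minimising
    squared geometric distance plus lam times squared colour
    distance; lam trades off the two cues. On symmetric or flat
    geometry, the colour term is what disambiguates the match."""
    d_geo = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    d_col = ((src_rgb[:, None, :] - dst_rgb[None, :, :]) ** 2).sum(-1)
    return np.argmin(d_geo + lam * d_col, axis=1)

# two destination points with identical geometry: geometry alone
# cannot constrain the match, but colour can
src     = np.array([[0.0, 0.0, 0.0]])
dst     = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
src_rgb = np.array([[1.0, 0.0, 0.0]])                     # red
dst_rgb = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])   # blue, red
idx = closest_points(src, dst, src_rgb, dst_rgb)
print(idx)  # the red source point matches the red candidate (index 1)
```

Inside a full ICP loop, these correspondences would feed the usual rigid-transform estimation step unchanged.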

    Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light

    One solution for depth imaging of a moving scene is to project a static pattern onto the object and use just a single image for reconstruction. However, if the motion of the object is too fast with respect to the exposure time of the image sensor, the patterns in the captured image are blurred and reconstruction fails. In this paper, we impose multiple projection patterns onto each single captured image to realize temporal super-resolution of the depth image sequences. With our method, multiple patterns are projected onto the object at a higher frame rate than the camera can achieve. In this case, the observed pattern varies depending on the depth and motion of the object, so we can extract temporal information about the scene from each single image. The decoding process is realized using a learning-based approach in which no geometric calibration is needed. Experiments confirm the effectiveness of our method, with sequential shapes reconstructed from a single image. Quantitative evaluations and comparisons with recent techniques were also conducted.
    Comment: 9 pages. Published at the International Conference on Computer Vision (ICCV 2017).
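The intra-frame encoding idea can be sketched as follows: several patterns, projected faster than the camera frame rate, integrate into one exposure, each shifted by an amount depending on the object's depth and motion. This toy model shows only the forward encoding; the function name, the 1D patterns and the circular shifts are illustrative simplifications, and the paper's actual decoding is learning-based.

```python
import numpy as np

def captured_image(patterns, shifts):
    """One camera exposure integrates several projected patterns.
    Each pattern arrives laterally shifted (here: circularly, for
    simplicity) by the object's depth and its motion during the
    exposure, so the mixture encodes temporal information."""
    acc = np.zeros_like(patterns[0], dtype=float)
    for pattern, shift in zip(patterns, shifts):
        acc += np.roll(pattern, shift)
    return acc / len(patterns)

# two high-fps patterns land in one camera frame; motion shifts the
# second one, so a static scene and a moving one produce different images
p = np.array([1.0, 0.0, 0.0, 0.0])
moving = captured_image([p, p], shifts=[0, 1])
static = captured_image([p, p], shifts=[0, 0])
print(moving)  # energy split across two pixels
print(static)  # energy concentrated in one pixel
```

That the two cases are distinguishable from a single frame is the property the learned decoder exploits.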

    3D modelling of archaeological small finds by a low-cost range camera. Methodology and first results

    The production of reliable documentation of small finds is a crucial process during archaeological excavations. Range cameras can be a valid alternative to traditional illustration methods: they are veritable 3D scanners, able to easily collect the 3D geometry (shape and dimensions in metric units) of an object or scene practically in real time. This work investigates the potential of a promising low-cost range camera, the Structure Sensor by Occipital, for rapidly modelling archaeological objects. The accuracy assessment was performed by comparing the 3D model of a Cypriot-Phoenician globular jug captured by this device with the 3D model of the same object obtained through photogrammetry. In general, the analysis shows that the Structure Sensor is capable of acquiring the 3D geometry of a small object with an accuracy at the millimetre level, comparable to that obtainable with the photogrammetric method, even though the finer details are not always correctly modelled. The texture reconstruction is less accurate, however. In the end, it can be concluded that the range camera used for this work, thanks to its low cost and flexibility, is a suitable tool for the rapid documentation of archaeological small finds, especially when non-expert users are involved.

    Sub-shot-noise shadow sensing with quantum correlations

    The quantised nature of the electromagnetic field sets the classical limit to the sensitivity of position measurements. However, techniques based on the properties of quantum states can be exploited to accurately measure the relative displacement of a physical object beyond this classical limit. In this work, we use a simple scheme based on the split-detection of quantum correlations to measure the position of a shadow at the single-photon light level, with a precision that exceeds the shot-noise limit. This result is obtained by analysing the correlated signals of bi-photon pairs, created by parametric downconversion and detected by an electron-multiplying CCD (EMCCD) camera employed as a split detector. By comparing the measured statistics of spatially anticorrelated and uncorrelated photons, we were able to observe a significant noise reduction, corresponding to an improvement in position sensitivity of up to 17% (0.8 dB). Our straightforward approach to sub-shot-noise position measurement is compatible with conventional shadow-sensing techniques based on the split-detection of light fields, and yields an improvement that scales favourably with the detector's quantum efficiency.
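A plain (non-quantum) split detector estimates a shadow's displacement from the normalised imbalance between its two halves; the quantum advantage reported above comes from the reduced noise of this same estimator when spatially correlated photon pairs are used. The sketch below shows only the classical estimator, with an illustrative function name and counts.

```python
def split_detection(counts_a, counts_b):
    """Normalised imbalance between the two halves of a split
    detector; for small offsets it is proportional to the shadow
    edge displacement. Shot noise on counts_a and counts_b sets
    the classical precision limit that quantum correlations beat."""
    return (counts_a - counts_b) / (counts_a + counts_b)

print(split_detection(100, 100))  # 0.0 : shadow edge centred
print(split_detection(120, 80))   # 0.2 : edge shifted toward half B
```

In the quantum scheme, the statistics of this imbalance are compared between anticorrelated and uncorrelated photon illumination, which is where the reported 17% sensitivity gain appears.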

    SIRIS: a high resolution scanning infrared camera for examining paintings

    The new SIRIS (Scanning InfraRed Imaging System) camera developed at the National Gallery in London allows high-resolution images of paintings to be made in the near-infrared region (900–1700 nm). Images of 5000 × 5000 pixels are made by moving a 320 × 256 pixel InGaAs array across the focal plane of the camera using two orthogonal translation stages. The great advantages of this camera over other scanning infrared devices are its relative portability and its comparatively rapid image acquisition: a full 5000 × 5000 pixel image can be made in around 20 minutes. The paper describes the development of the mechanical, optical and electronic components of the camera, including the design of a new lens. The software routines used to control image capture and to assemble the individual 320 × 256 pixel frames into a seamless mosaic image are also described. The optics of the SIRIS camera have been designed so that it can operate at a range of resolutions, from around 2.5 pixels per millimetre on large paintings of up to 2000 × 2000 mm, to 10 pixels per millimetre on smaller paintings or details measuring 500 × 500 mm. The camera is primarily designed to examine underdrawings in paintings; preliminary results from test targets and paintings are presented, and the quality of the images is compared with that from other cameras currently used in this field.
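The resolution figures quoted above follow from simple arithmetic on the mosaic geometry. The sketch below checks them, under the simplifying (and hypothetical) assumption that sensor frames tile the mosaic edge to edge; a real seamless mosaic would use overlapping frames, so the true frame count would be somewhat higher.

```python
# mosaic of 5000 x 5000 px assembled from 320 x 256 px sensor frames,
# assuming no overlap between frames (ceiling division per axis)
frames_x = -(-5000 // 320)   # frames across the 320-px axis
frames_y = -(-5000 // 256)   # frames across the 256-px axis
frames_needed = frames_x * frames_y
print(frames_needed)         # 16 * 20 = 320 frames

# sampling resolution at the two ends of the stated working range
print(5000 / 2000)           # 2.5 px/mm on a 2000 mm painting
print(5000 / 500)            # 10.0 px/mm on a 500 mm detail
```

The two ratios reproduce the 2.5 and 10 pixels-per-millimetre figures given in the abstract.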

    Three-dimensional fluorescent microscopy via simultaneous illumination and detection at multiple planes.

    The conventional optical microscope is an inherently two-dimensional (2D) imaging tool. The objective lens, eyepiece and image sensor are all designed to capture light emitted from a 2D 'object plane'. Existing technologies, such as confocal or light-sheet fluorescence microscopy, have to rely on mechanical scanning, a time-multiplexing process, to capture a 3D image. In this paper, we present a 3D optical microscopy method based upon simultaneously illuminating and detecting multiple focal planes, implemented by adding two diffractive optical elements to modify the illumination and detection optics. We demonstrate that the image quality of this technique is comparable to conventional light-sheet fluorescence microscopy, with the advantage of simultaneous imaging of multiple axial planes and a reduced number of scans required to image the whole sample volume.

    3D Object Reconstruction from Hand-Object Interactions

    Recent advances have enabled 3D object reconstruction approaches using a single off-the-shelf RGB-D camera. Although these approaches are successful for a wide range of object classes, they rely on stable and distinctive geometric or texture features. Many objects like mechanical parts, toys, household or decorative articles, however, are textureless and characterised by minimalistic shapes that are simple and symmetric. Existing in-hand scanning systems and 3D reconstruction techniques fail for such symmetric objects in the absence of highly distinctive features. In this work, we show that extracting 3D hand motion during in-hand scanning effectively facilitates the reconstruction of even featureless and highly symmetric objects, and we present an approach that fuses the rich additional information of the hands into a 3D reconstruction pipeline, significantly contributing to the state of the art of in-hand scanning.
    Comment: International Conference on Computer Vision (ICCV) 2015, http://files.is.tue.mpg.de/dtzionas/In-Hand-Scannin