
    Resolving depth measurement ambiguity with commercially available range imaging cameras

    Get PDF
    Time-of-flight range imaging is typically performed with the amplitude modulated continuous wave method. This involves illuminating a scene with amplitude modulated light. Reflected light from the scene is received by the sensor with the range to the scene encoded as a phase delay of the modulation envelope. Due to the cyclic nature of phase, an ambiguity in the measured range occurs every half wavelength in distance, thereby limiting the maximum usable range of the camera. This paper proposes a procedure to resolve depth ambiguity using software post-processing. First, the range data are processed to segment the scene into separate objects. The average intensity of each object can then be used to determine which pixels lie beyond the non-ambiguous range. The results demonstrate that depth ambiguity can be resolved for various scenes using only the available depth and intensity information. The proposed method reduces sensitivity to objects with very high and very low reflectance, normally a key problem with basic threshold approaches. The approach is very flexible, as it can be used with any range imaging camera. Furthermore, capture time is not extended, keeping the artifacts caused by moving objects to a minimum. This makes it suitable for applications such as robot vision, where the camera may be moving during captures. The key limitation of the method is its inability to distinguish between two overlapping objects that are separated by exactly one non-ambiguous range. Overall, the reliability of this method is higher than that of the basic threshold approach, but not as high as that of the multiple-frequency method of resolving ambiguity.
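
    To make the source of the ambiguity concrete, the sketch below (our illustration, not the authors' code; the 30 MHz modulation frequency is an assumed example value) computes the AMCW phase-to-range relation and the resulting half-wavelength ambiguity interval.

        import numpy as np

        C = 299_792_458.0  # speed of light in m/s

        def unambiguous_range(f_mod_hz):
            """Maximum non-ambiguous range of an AMCW camera: half the
            modulation wavelength, because the light travels out and back."""
            return C / (2.0 * f_mod_hz)

        def phase_to_range(phase_rad, f_mod_hz):
            """Map a measured phase delay (0..2*pi) of the modulation
            envelope to a distance; values wrap every unambiguous_range()."""
            return (phase_rad / (2.0 * np.pi)) * unambiguous_range(f_mod_hz)

        # At 30 MHz the unambiguous range is ~5 m, so a target at 7 m
        # is reported at 7 - 5 = 2 m unless the ambiguity is resolved.
        print(unambiguous_range(30e6))  # ~4.997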

    Computational multi-depth single-photon imaging

    Full text link
    We present an imaging framework that is able to accurately reconstruct multiple depths at individual pixels from single-photon observations. Our active imaging method models the single-photon detection statistics from multiple reflectors within a pixel, and it also exploits the fact that a multi-depth profile at each pixel can be expressed as a sparse signal. We interpret the multi-depth reconstruction problem as a sparse deconvolution problem using single-photon observations, create a convex problem through discretization and relaxation, and use a modified iterative shrinkage-thresholding algorithm to efficiently solve for the optimal multi-depth solution. We experimentally demonstrate that the proposed framework is able to accurately reconstruct the depth features of an object that is behind a partially-reflecting scatterer and 4 m away from the imager with root mean-square error of 11 cm, using only 19 signal photon detections per pixel in the presence of moderate background light. In terms of root mean-square error, this is a factor of 4.2 improvement over the conventional method of Gaussian-mixture fitting for multi-depth recovery. This material is based upon work supported in part by a Samsung Scholarship, the US National Science Foundation under Grant No. 1422034, and the MIT Lincoln Laboratory Advanced Concepts Committee. We thank Dheera Venkatraman for his assistance with the experiments. (Samsung Scholarship; 1422034 - US National Science Foundation; MIT Lincoln Laboratory Advanced Concepts Committee) Accepted manuscript.
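
    The authors' modified algorithm is not reproduced in the abstract; the following minimal Python sketch shows plain iterative shrinkage-thresholding (ISTA) for the generic sparse deconvolution problem they describe. The dictionary A, the weight lam, and the iteration count are illustrative assumptions.

        import numpy as np

        def ista(A, y, lam, n_iter=200):
            """Plain ISTA sketch for  min_x 0.5*||A x - y||^2 + lam*||x||_1,
            where the columns of A hold shifted copies of the system's pulse
            response and x is the sparse multi-depth profile at one pixel."""
            L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)   # gradient of the data-fidelity term
                z = x - grad / L           # gradient descent step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x                       # nonzero entries mark recovered depths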

    The 3D model control of image processing

    Get PDF
    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well-understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information both for the control algorithms and for the human operator.

    Image informatics strategies for deciphering neuronal network connectivity

    Get PDF
    Brain function relies on an intricate network of highly dynamic neuronal connections that rewires dramatically under the impulse of various external cues and pathological conditions. Among the neuronal structures that show morphological plasticity are neurites, synapses, dendritic spines and even nuclei. This structural remodelling is directly connected with functional changes such as intercellular communication and the associated calcium-bursting behaviour. In vitro cultured neuronal networks are valuable models for studying these morpho-functional changes. Owing to the automation and standardisation of both image acquisition and image analysis, it has become possible to extract statistically relevant readout from such networks. Here, we focus on the current state of the art in image informatics that enables quantitative microscopic interrogation of neuronal networks. We describe the major correlates of neuronal connectivity and present workflows for analysing them. Finally, we provide an outlook on the challenges that remain to be addressed, and discuss how imaging algorithms can be extended beyond in vitro imaging studies.
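
    As a toy illustration of the kind of automated analysis step such workflows chain together (our example, not taken from the paper), the Python sketch below segments and skeletonises a fluorescence image with scikit-image to obtain a simple morphological readout.

        from skimage.filters import gaussian, threshold_otsu
        from skimage.morphology import skeletonize

        def neurite_skeleton(img):
            """Smooth, threshold and skeletonise a fluorescence image so a
            simple readout (total neurite length, in pixels) can be taken
            as the sum of the returned skeleton."""
            smoothed = gaussian(img, sigma=2)            # suppress pixel noise
            mask = smoothed > threshold_otsu(smoothed)   # global foreground threshold
            return skeletonize(mask)                     # 1-pixel-wide neurite traces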

    Detecting stars, galaxies, and asteroids with Gaia

    Full text link
    (Abridged) Gaia aims to make a 3-dimensional map of 1,000 million stars in our Milky Way to unravel its kinematical, dynamical, and chemical structure and evolution. Gaia's on-board detection software discriminates stars from spurious objects like cosmic rays and Solar protons. For this, parametrised point-spread-function-shape criteria are used. This study aims to provide an optimum set of parameters for these filters. We developed an emulation of the on-board detection software, which has 20 free, so-called rejection parameters that govern the boundaries between stars on the one hand and sharp or extended events on the other. We evaluate the detection and rejection performance of the algorithm using catalogues of simulated single stars, double stars, cosmic rays, Solar protons, unresolved galaxies, and asteroids. We optimised the rejection parameters, improving, with respect to the functional baseline, the detection performance for single and double stars while at the same time improving the rejection performance for cosmic rays and Solar protons. We find that the minimum separation to resolve a close, equal-brightness double star is 0.23 arcsec in the along-scan and 0.70 arcsec in the across-scan direction, independent of the brightness of the primary. We find that, whereas the optimised rejection parameters have no significant impact on the detectability of de Vaucouleurs profiles, they do significantly improve the detection of exponential-disk profiles. We also find that the optimised rejection parameters provide detection gains for asteroids fainter than 20 mag and for fast-moving near-Earth objects fainter than 18 mag, although this gain comes at the expense of a modest detection-probability loss for bright, fast-moving near-Earth objects. The major side effect of the optimised parameters is that spurious ghosts in the wings of bright stars essentially pass unfiltered. Comment: Accepted for publication in A&A.
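
    The 20 on-board rejection parameters themselves are not specified in the abstract; as a loose illustration of the idea, the Python sketch below thresholds a second-moment shape statistic to separate sharp events (cosmic rays, Solar protons) from extended ones (galaxies). The function names and the r_min/r_max bounds are invented for the example.

        import numpy as np

        def psf_size(window):
            """Second-moment 'size' of a small pixel patch around a local
            maximum: small values indicate sharp events, large values
            extended ones."""
            y, x = np.indices(window.shape)
            w = window / window.sum()
            cy, cx = (w * y).sum(), (w * x).sum()
            return np.sqrt((w * ((y - cy) ** 2 + (x - cx) ** 2)).sum())

        def accept(window, r_min=0.7, r_max=2.5):
            """Keep detections that are neither too sharp (cosmic rays,
            Solar protons) nor too extended (diffuse galaxies); r_min and
            r_max play the role of tunable rejection parameters (the
            values here are made up)."""
            return r_min < psf_size(window) < r_max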

    Augmented reality usage for prototyping speed up

    Full text link
    The first part of the article describes our approach to solving this problem by means of Augmented Reality. Merging the real-world model with digital objects makes it possible to streamline the work with the model and to speed up the whole production phase significantly. The main advantage of augmented reality is the possibility of direct manipulation of the scene using a portable digital camera. Digital objects can also be added to the scene using identification markers placed on the surface of the model. It is therefore not necessary to work with special input devices and lose contact with the real-world model; adjustments are made directly on the model. The key problem of the outlined solution is the ability to identify an object within the camera picture and replace it with the digital object. The second part of the article focuses especially on identifying the exact position and orientation of the marker within the picture. The identification marker is generalized into a triple of points which represents a general plane in space. We discuss the spatial identification of these points and describe the representation of their position and orientation by means of a transformation matrix. This matrix is used for rendering the graphical objects (e.g. in OpenGL and Direct3D). Comment: Keywords: augmented reality, prototyping, pose estimation, transformation matrix
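
    As an illustration of this final step, here is a minimal Python sketch (ours, with invented function names) that builds a 4x4 transformation matrix from the triple of marker points; the result can serve as the model matrix handed to OpenGL or Direct3D.

        import numpy as np

        def marker_pose(p0, p1, p2):
            """Build a 4x4 rigid transformation from three non-collinear
            marker points: p0 is the origin, p0->p1 fixes the x axis and
            p2 fixes the marker plane."""
            p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
            x = (p1 - p0) / np.linalg.norm(p1 - p0)
            z = np.cross(x, p2 - p0)           # normal of the marker plane
            z /= np.linalg.norm(z)
            y = np.cross(z, x)                 # completes a right-handed frame
            T = np.eye(4)
            T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
            return T                           # usable as an OpenGL/Direct3D model matrix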