
    Fourier domain optical coherence tomography system with balance detection

    Get PDF
    A Fourier domain optical coherence tomography system with two spectrometers in balanced detection is assembled, each spectrometer using an InGaAs linear camera. Conditions and adjustments of spectrometer parameters are presented that ensure anti-phase channeled-spectrum modulation across the two cameras for the majority of wavelengths within the optical source spectrum. By blocking the signal to one of the spectrometers, the setup was used to compare single-camera operation with the balanced configuration. Using multiple-layer samples, the balanced detection technique is compared with techniques applied to conventional single-camera setups, based on sequential subtraction of averaged spectra collected with different on/off settings of the sample or reference beams. In terms of reducing autocorrelation terms and fixed-pattern noise, it is concluded that balanced detection performs better than single-camera techniques: it is more tolerant to movement, exhibits longer-term stability, and can operate dynamically in real time. The cameras used have a saturation power larger than the power threshold at which excess photon noise exceeds shot noise; conditions to adjust the two cameras to reduce noise in the balanced configuration are therefore presented. It is shown that balanced detection can reduce noise in real-time operation compared with single-camera configurations. However, simple subtraction of an average spectrum in single-camera configurations delivers less noise than balanced detection.
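    The principle behind balanced detection described above can be sketched numerically: with the channeled-spectrum modulation in anti-phase across the two cameras, subtracting the two spectra cancels the DC background and common-mode noise while preserving the interference term. This is only an illustrative toy model, not the paper's setup; all signal values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.linspace(0.0, 40.0 * np.pi, 2048)           # wavenumber axis (arbitrary units)
fringe = np.cos(k)                                 # channeled-spectrum interference term
common_noise = 0.5 * rng.standard_normal(k.size)   # noise common to both cameras

# Anti-phase channels: the interference term flips sign, the noise does not.
cam_a = 1.0 + fringe + common_noise
cam_b = 1.0 - fringe + common_noise

balanced = 0.5 * (cam_a - cam_b)   # DC background and common noise cancel exactly
```

    In a single-camera scheme the noise term stays in the spectrum and must instead be estimated and subtracted from averaged reference frames, which is why it is less tolerant to movement.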

    Do-It-Yourself Single Camera 3D Pointer Input Device

    Full text link
    We present a new algorithm for single camera 3D reconstruction, or 3D input for human-computer interfaces, based on precise tracking of an elongated object, such as a pen, having a pattern of colored bands. To configure the system, the user provides no more than one labelled image of a handmade pointer, measurements of its colored bands, and the camera's pinhole projection matrix. Other systems are of much higher cost and complexity, requiring combinations of multiple cameras, stereo cameras, and pointers with sensors and lights. Instead of relying on information from multiple devices, we examine our single view more closely, integrating geometric and appearance constraints to robustly track the pointer in the presence of occlusion and distractor objects. By probing objects of known geometry with the pointer, we demonstrate acceptable accuracy of 3D localization. Comment: 8 pages, 6 figures, 2018 15th Conference on Computer and Robot Vision
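    The role of the measured band lengths can be illustrated with the simplest pinhole relation: the apparent pixel length of a band of known physical length constrains its depth. This is only a toy version of the geometric constraints the paper integrates; the function name and all numbers below are hypothetical.

```python
def band_depth(focal_px, band_len_m, band_len_px):
    """Depth of a (fronto-parallel) band by similar triangles: Z = f * L / l,
    with focal length f in pixels, physical length L in metres, and imaged
    length l in pixels."""
    return focal_px * band_len_m / band_len_px

# Hypothetical values: 800 px focal length, a 2 cm band imaged as 40 px.
z = band_depth(800.0, 0.02, 40.0)   # 0.4 m from the camera
```

    The full system additionally uses the band boundaries and appearance model to recover the pointer's orientation, not just its distance.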

    An Infrared Television System for Hydrogen Flame Detection

    Get PDF
    An infrared-sensitive vidicon camera system, utilizing a single camera operating in the near infrared, detects a hydrogen flame burning in a bright, sunlit environment.

    Single camera 3D planar Doppler velocity measurements using imaging fibre bundles

    Get PDF
    Two-frequency planar Doppler velocimetry (2ν-PDV) is a modification of the planar Doppler velocimetry (PDV) method that allows velocity measurements to be made, quickly and non-intrusively, across a plane defined by a laser light sheet. In 2ν-PDV the flow is illuminated sequentially with two optical frequencies, separated by about 700 MHz. A single CCD camera viewing through an iodine absorption cell is used to capture images under each illumination. The two images are used to find the normalised transmission through the cell, and the velocity information is encoded as a variation in this transmission. Use of a single camera ensures registration of the reference and signal images and removes issues associated with the polarization sensitivity of the beam splitter, which are major problems in the conventional approach. A 2ν-PDV system has been constructed using a continuous-wave argon-ion laser combined with multiple imaging fibre bundles, which port multiple views of the measurement plane to a CCD camera, allowing the measurement of three velocity components.
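    The normalised-transmission step described above is, at its core, a pixel-wise ratio of the two sequentially captured images; because a single camera records both, the pixels are inherently registered and no image warping is needed first. A minimal sketch (the function name, epsilon guard, and test values are assumptions for illustration):

```python
import numpy as np

def normalised_transmission(img_f1, img_f2, eps=1e-12):
    """Pixel-wise ratio of the two images captured under the two
    illumination frequencies; the Doppler shift appears as a change
    in this ratio across the light sheet."""
    return img_f1 / (img_f2 + eps)

img_a = np.full((4, 4), 0.8)   # hypothetical image through the iodine cell
img_b = np.ones((4, 4))        # hypothetical second-frequency image
t = normalised_transmission(img_a, img_b)
```

    Converting the transmission map to velocity then requires the measured absorption profile of the iodine cell, which the sketch does not model.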

    Single camera pose estimation using Bayesian filtering and Kinect motion priors

    Full text link
    Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal, and somewhat inelegant, as it results in large processing burdens; instead we attempt to incorporate these constraints through priors obtained directly from training data. A prior distribution covering the probability of a human pose occurring is used to incorporate likely human poses. This distribution is obtained offline by fitting a Gaussian mixture model to a large dataset of recorded human body poses, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave as random walks but drift towards a set of typically observed poses. This model is combined with measurements of the human head and hand positions, using recursive Bayesian estimation to incorporate temporal information. Measurements are obtained using face detection and a simple skin-colour hand detector, trained using the detected face. The suggested model is designed with analytical tractability in mind, and we show that the pose tracking can be Rao-Blackwellised using the mixture Kalman filter, allowing for computational efficiency while still incorporating biomechanical properties of the upper body. In addition, the use of the proposed upper body model allows reliable three-dimensional pose estimates to be obtained indirectly for a number of joints that are often difficult to detect using traditional object recognition strategies. Comparisons with Kinect sensor results and with the state of the art in 2D pose estimation highlight the efficacy of the proposed approach. Comment: 25 pages, technical report, related to Burke and Lasenby, AMDO 2014 conference paper. Code sample: https://github.com/mgb45/SignerBodyPose Video: https://www.youtube.com/watch?v=dJMTSo7-uF
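    The discrete Ornstein-Uhlenbeck behaviour described above, a random walk that drifts towards typically observed poses, can be sketched in a few lines. This toy version uses a single drift target rather than the paper's Gaussian mixture, and the state dimension, rate, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def ou_step(x, mu, alpha=0.1, sigma=0.05):
    """One discrete Ornstein-Uhlenbeck transition: a random walk whose
    state drifts towards a typical pose mu at rate alpha."""
    return x + alpha * (mu - x) + sigma * rng.standard_normal(x.shape)

mu = np.zeros(3)          # a typical pose (hypothetical 3-DoF state)
x = np.full(3, 5.0)       # start far from the typical pose
for _ in range(200):
    x = ou_step(x, mu)
# x has drifted into the neighbourhood of mu
```

    Because each transition is linear-Gaussian given the active mixture component, a mixture Kalman filter can marginalise the pose analytically, which is what makes the Rao-Blackwellisation mentioned in the abstract possible.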

    Single-Camera 3D Microscope Scanner

    Get PDF
    A 3D microscope scanner system based on images captured by a single camera displaced over the surface to be scanned. The images are combined to obtain a high-resolution image of the complete surface, and 3D reconstruction techniques are applied to compute its relative height.

    Evaluation of an electro-optic remote displacement measuring system

    Get PDF
    An instrumentation system providing a noncontact method for measuring target positions was evaluated. The system employs two electro-optic camera units which give stereo information for use in determining three-dimensional target locations. Specially developed, infrared-sensitive photodetectors are used in the cameras to sense radiation from light-emitting-diode targets. Up to 30 of these targets can be monitored at a sampling rate of 312 Hz per target. An important part of the system is a minicomputer which is used to collect the camera data, sort it, correct for distortions in the electro-optic system, and perform the necessary coordinate transformations. If target motions are restricted to a plane perpendicular to a camera's optical axis, the system can be used with just one camera. Calibrations performed in this mode characterize accuracies in single camera operation. This information is also useful in determining single camera contributions to total system errors. For this reason the system was tested in both the single camera and two camera (stereo) modes of operation.
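    The single-camera mode described above reduces to a fixed pixel-to-metre scaling: if targets move only in a plane perpendicular to the optical axis at a known depth, image offsets map linearly to positions in that plane. A sketch of this mapping, with a hypothetical function name and numbers:

```python
def plane_position(du_px, dv_px, focal_px, plane_depth_m):
    """Pinhole model with motion confined to a plane at depth Z:
    metres per pixel is Z / f, applied to pixel offsets measured
    from the principal point."""
    scale = plane_depth_m / focal_px
    return du_px * scale, dv_px * scale

# Hypothetical: 1000 px focal length, target plane 2 m away.
x, y = plane_position(100.0, -50.0, 1000.0, 2.0)   # about (0.2 m, -0.1 m)
```

    Out-of-plane motion breaks this assumption, which is why the stereo mode is needed for general three-dimensional target locations.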

    MonoSLAM: A Single Camera SLAM

    Get PDF
    Simultaneous Localization and Mapping (SLAM) became well established in the robotics community in the last decade and led to many innovations. This paper presents a monoSLAM algorithm, using a single camera as a sensor. The algorithm achieves both the localization of an RC car and the building of a full map of the track, simultaneously and in real time. A full map is drawn from sparse points using interpolation. One key contribution of this algorithm is that no initial information is needed about the width of the track, the positions of any landmarks, or the initial position of the RC car, which makes it generic and suitable for different environments. Another main contribution is that, although we depend on a single camera, depth can be estimated from the first frame given the height of the camera above the motion surface. Localization is achieved by tracking SURF points already initialized in the map, where the position of the car is updated using an extended Kalman filter optimal estimation algorithm.
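    The depth-from-camera-height idea above follows from basic pinhole geometry: for a feature lying on the flat motion surface, its pixel offset below the principal point determines its distance from a horizontally mounted camera. A minimal sketch under those flat-floor assumptions; the function name and all numbers are hypothetical, not taken from the paper.

```python
def ground_depth(focal_px, cam_height_m, dv_px):
    """Distance to a point on the motion surface: a ground feature imaged
    dv_px pixels below the principal point lies at Z = f * h / dv."""
    return focal_px * cam_height_m / dv_px

# Hypothetical: 600 px focal length, camera 0.15 m above the track,
# feature imaged 30 px below the principal point.
z = ground_depth(600.0, 0.15, 30.0)   # 3.0 m ahead
```

    This explains why knowing only the camera height is enough to bootstrap metric scale from the first frame, something a single camera cannot otherwise recover.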