
    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One has been issued by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which serves as a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in such a way that they can be adapted to any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).

    Illumination waveform optimization for time-of-flight range imaging cameras

    Time-of-flight range imaging sensors acquire an image of a scene in which, in addition to standard intensity information, the range (or distance) is also measured concurrently by each pixel. Range is measured using a correlation technique, where an amplitude-modulated light source illuminates the scene and the reflected light is sampled by a gain-modulated image sensor. Typically the illumination source and image sensor are amplitude modulated with square waves, leading to a range measurement linearity error caused by aliased harmonic components within the correlation waveform. A simple method to improve measurement linearity by reducing the duty cycle of the illumination waveform, thereby suppressing problematic aliased harmonic components, is demonstrated. If the total optical power is kept constant, the measured correlation waveform amplitude also increases at these reduced illumination duty cycles. Measurement performance is evaluated over a range of illumination duty cycles, both for a standard range imaging camera configuration and for a more complicated phase encoding method designed to cancel aliased harmonics during the sampling process. The standard configuration benefits from improved measurement linearity for illumination duty cycles around 30%, while the measured amplitude, and hence range precision, is increased for both methods as the duty cycle is reduced below 50% (while maintaining constant optical power).
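    The square-wave correlation, four-bucket decode, and duty-cycle effect described above can be sketched numerically. This is a simplified toy model with illustrative parameters (sample count, test phases), not the paper's camera pipeline:

```python
import numpy as np

def correlation(phase, duty=0.5, n=4096):
    # One sample of the correlation waveform: a square illumination wave
    # (duty cycle `duty`, scaled so mean optical power stays constant)
    # correlated with a 50% duty-cycle sensor gating wave.
    t = np.arange(n) / n
    illum = (np.mod(t + phase / (2 * np.pi), 1.0) < duty) / duty
    gate = (t < 0.5).astype(float)
    return illum @ gate / n

def four_bucket(phase, duty=0.5):
    # Sample the correlation waveform at 0, 90, 180 and 270 degrees.
    return [correlation(phase + k * np.pi / 2, duty) for k in range(4)]

def decode(phase, duty=0.5):
    # Standard atan2 decode; odd harmonics of the square waves alias onto
    # the fundamental and appear as a periodic linearity error.
    c0, c1, c2, c3 = four_bucket(phase, duty)
    return np.arctan2(c3 - c1, c0 - c2) % (2 * np.pi)

def amplitude(duty):
    # Waveform amplitude estimated from the same four samples: at constant
    # optical power it grows as the illumination duty cycle shrinks.
    c0, c1, c2, c3 = four_bucket(0.7, duty)
    return 0.5 * np.hypot(c3 - c1, c0 - c2)
```

    Decoding a known phase shows the aliased-harmonic ripple of a few hundredths of a radian, and comparing `amplitude(0.3)` against `amplitude(0.5)` reproduces the amplitude gain at reduced duty cycle.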

    A Feasibility Study on the Use of a Structured Light Depth-Camera for Three-Dimensional Body Measurements of Dairy Cows in Free-Stall Barns

    Frequent checks on livestock's body growth can help reduce problems related to cow infertility and other welfare implications, and help recognize health anomalies. In the last ten years, optical methods have been proposed to extract information on various parameters while avoiding direct contact with the animals' bodies, which generally causes stress. This research aims to evaluate a new monitoring system suitable for frequently checking calves' and cows' growth through a three-dimensional analysis of portions of their bodies. The innovative system is based on multiple acquisitions from a low-cost structured-light depth camera (Microsoft Kinect™ v1). The metrological performance of the instrument is proved through an uncertainty analysis and a proper calibration procedure. The paper reports application of the depth camera for the extraction of different body parameters. Expanded uncertainty ranging between 3 and 15 mm is reported in the case of ten repeated measurements. Coefficients of determination R² > 0.84 and deviations lower than 6% from manual measurements were generally observed for head size, hips distance, withers-to-tail length, chest girth, and hips and withers height. Conversely, lower performance was observed for animal depth (R² = 0.74) and back slope (R² = 0.12).
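    For reference, the two statistics quoted above, the coefficient of determination R² and the percent deviation from manual measurements, can be computed as follows. The measurement values are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical paired measurements of one body parameter (e.g. withers
# height, in cm): manual tape measurements vs. depth-camera estimates.
manual = np.array([152.0, 148.5, 160.2, 155.7, 149.9])
camera = np.array([150.8, 149.6, 158.9, 156.5, 148.7])

def r_squared(y, y_hat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def max_percent_deviation(y, y_hat):
    # Worst-case relative disagreement with the manual reference.
    return 100.0 * np.max(np.abs(y - y_hat) / y)

r2 = r_squared(manual, camera)
dev = max_percent_deviation(manual, camera)
```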

    Rank-based camera spectral sensitivity estimation

    In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab—a difficult and lengthy procedure—or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera RGB response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve (of the sensor). Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the rank-pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to the prior art. However, the rank-based method delivers a step-change in estimation performance when the data is not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have a “raw mode.” Experiments validate our method.
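    The rank-order constraint at the core of the method can be illustrated with a toy simulation; all spectra and numbers below are synthetic, not the paper's data. Each ordered pair of responses confines the sensor to a half-space, a candidate sensor is feasible only if it satisfies every constraint, and a monotone (gamma-like) rendering of the responses leaves the constraints unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 31-band spectra, 60 colour stimuli, one smooth
# "ground-truth" sensor curve.  All values are illustrative only.
waves = np.linspace(400, 700, 31)
stimuli = rng.uniform(0.0, 1.0, size=(60, 31))        # spectral power per patch
sensor = np.exp(-0.5 * ((waves - 550) / 40.0) ** 2)   # true sensitivity curve

linear = stimuli @ sensor          # linear sensor responses
rendered = linear ** (1 / 2.2)     # a monotone "rendering" (gamma): it changes
                                   # the values but not their rank order

def rank_constraints(responses, stimuli):
    # Each consecutive pair in rank order (lo, hi) constrains the sensor s
    # to the half-space (stimuli[hi] - stimuli[lo]) . s > 0; transitivity
    # makes the remaining pairs redundant.
    order = np.argsort(responses)
    rows = [stimuli[hi] - stimuli[lo] for lo, hi in zip(order[:-1], order[1:])]
    return np.array(rows)

def feasible(candidate, constraints):
    # A candidate sensor is feasible iff it lies inside every half-space.
    return bool(np.all(constraints @ candidate > 0))

A = rank_constraints(rendered, stimuli)
```

    The true sensor satisfies all constraints built from the nonlinearly rendered responses, while its negation violates every one of them, which is the intersection-of-feasible-regions idea in miniature.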

    SeaWiFS technical report series. Volume 5: Ocean optics protocols for SeaWiFS validation

    Protocols are presented for measuring optical properties, and other environmental variables, to validate the radiometric performance of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), and to develop and validate bio-optical algorithms for use with SeaWiFS data. The protocols are intended to establish the foundations for a measurement strategy to verify the challenging SeaWiFS accuracy goals of 5 percent in water-leaving radiances and 35 percent in chlorophyll a concentration. The protocols first specify the variables which must be measured, and briefly review the rationale. Subsequent chapters cover detailed protocols for instrument performance specifications, characterizing and calibrating instruments, methods of making measurements in the field, and methods of data analysis. These protocols were developed at a workshop sponsored by the SeaWiFS Project Office (SPO) and held at the Naval Postgraduate School in Monterey, California (9-12 April, 1991). This report is the proceedings of that workshop, as interpreted and expanded by the authors and reviewed by workshop participants and other members of the bio-optical research community. The protocols are a first prescription for approaching the unprecedented measurement accuracies implied by the SeaWiFS goals, and research and development are needed to improve the state of the art in specific areas. The protocols should be periodically revised to reflect technical advances during the SeaWiFS Project cycle.

    Performance characterisation of a new photo-microsensor based sensing head for displacement measurement

    This paper presents a robust displacement sensor with nanometre-scale resolution over a micrometre range. It is composed of low-cost, commercially available slotted photo-microsensors (SPMs). The displacement sensor is designed with a particular arrangement of a compact array of SPMs, with a specially designed shutter assembly and signal processing that significantly reduce sensitivity to ambient light, input voltage variation, circuit electronics drift, etc. The sensor principle and the characterisation results are described in this paper. The proposed prototype sensor has a linear measurement range of 20 μm and a resolution of 21 nm. This kind of sensor has several potential applications, including mechanical structural deformation monitoring systems.

    Calibration of quasi-static aberrations in exoplanet direct-imaging instruments with a Zernike phase-mask sensor

    Context. Several exoplanet direct imaging instruments will soon be in operation. They use an extreme adaptive optics (XAO) system to correct the atmospheric turbulence and provide a highly corrected beam to a near-infrared (NIR) coronagraph for starlight suppression. The performance of the coronagraph is, however, limited by the non-common path aberrations (NCPA) due to the differential wavefront errors existing between the visible XAO sensing path and the NIR science path, leading to residual speckles in the coronagraphic image. Aims. Several approaches have been developed in the past few years to accurately calibrate the NCPA, correct the quasi-static speckles, and allow the observation of exoplanets at least 1e6 times fainter than their host star. We propose an approach based on the Zernike phase-contrast method for the measurement of the NCPA between the optical path seen by the visible XAO wavefront sensor and that seen by the near-IR coronagraph. Methods. This approach uses a focal-plane phase mask of size λ/D, where λ and D denote the wavelength and the telescope aperture diameter, respectively, to measure the quasi-static aberrations in the upstream pupil plane by encoding them into intensity variations in the downstream pupil image. We develop a rigorous formalism, leading to highly accurate measurement of the NCPA, in a quasi-linear way during the observation. Results. For a static phase map of standard deviation 44 nm rms at λ = 1.625 μm (0.026 λ), we estimate a possible reduction of the chromatic NCPA by a factor ranging from 3 to 10 in the presence of AO residuals, compared with the expected performance of a typical current-generation system. This would allow a reduction of the level of quasi-static speckles in the detected images by a factor of 10 to 100, correspondingly improving the capacity to observe exoplanets. Comment: 11 pages, 14 figures, A&A accepted, 2nd version after language-editor correction.
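    The measurement principle, upstream phase encoded into downstream pupil intensity by a π/2-shifting focal-plane core, can be reproduced with a small Fourier-optics sketch. Grid size, pupil radius, and the aberration below are arbitrary illustrative choices, not the paper's instrument model:

```python
import numpy as np

N = 256
y, x = np.indices((N, N)) - N // 2
R = N // 8                                    # pupil radius in pixels
pupil = (x**2 + y**2 <= R**2).astype(float)

# Small sinusoidal phase aberration (radians), 5 cycles across the pupil.
phi = 0.1 * np.sin(2 * np.pi * 5 * x / (2 * R)) * pupil

def zernike_pupil_intensity(phase):
    # Pupil -> focal plane, apply a pi/2 phase shift inside a core of
    # diameter ~lambda/D (here 4 px = 1 lambda/D), then back to the pupil.
    E = pupil * np.exp(1j * phase)
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E)))
    core = x**2 + y**2 <= 2**2
    F = np.where(core, F * np.exp(1j * np.pi / 2), F)
    E2 = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(F)))
    return np.abs(E2) ** 2

# To first order the change in pupil-plane intensity is proportional to
# the upstream phase, which is what the sensor exploits.
signal = (zernike_pupil_intensity(phi) - zernike_pupil_intensity(0 * phi))[pupil > 0]
corr = np.corrcoef(phi[pupil > 0], signal)[0, 1]
```

    The strong positive correlation between the injected phase map and the intensity change is the quasi-linear encoding the abstract refers to.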

    Avalanche Photo-Detection for High Data Rate Applications

    Avalanche photodetection is commonly used in applications which require single-photon sensitivity. We examine the limits of using avalanche photodiodes (APDs) for characterising photon statistics at high data rates. To identify the regime of linear APD operation we employ a ps-pulsed diode laser with variable repetition rates between 0.5 MHz and 80 MHz. We modify the mean optical power of the coherent pulses by applying different levels of well-calibrated attenuation. The linearity at high repetition rates is limited by the APD dead time, and a non-linear response arises at higher photon numbers due to multiphoton events. Assuming Poissonian input light statistics, we ascertain the effective mean photon number of the incident light with high accuracy. Time-multiplexed detectors (TMDs) make it possible to achieve photon-number resolution by photon chopping. This detection setup extends the linear response function to higher photon numbers, and statistical methods may be used to compensate for non-linearity. We investigate this effect, compare it to the single-APD case, and show the validity of the convolution treatment in the TMD data analysis. Comment: 16 pages, 5 figures.
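    The multiphoton saturation of a single APD and the extended linear range of a time-multiplexed detector can be illustrated with a toy Poisson model. Detection efficiency and dead time are idealised away, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
pulses = 200_000

def apd_clicks(mu):
    # A single APD is a click/no-click detector: a pulse carrying one
    # photon and a pulse carrying five both produce exactly one click.
    n = rng.poisson(mu, pulses)
    return np.mean(n >= 1)

def tmd_clicks(mu, bins=8):
    # A time-multiplexed detector chops each pulse over `bins` time bins,
    # so up to `bins` photons per pulse can be registered.
    n = rng.poisson(mu / bins, (pulses, bins))
    return np.mean(np.sum(n >= 1, axis=1))

mu = 2.0
p = apd_clicks(mu)          # saturates well below mu: p = 1 - exp(-mu)
mu_hat = -np.log(1 - p)     # inverting the Poissonian click model recovers mu
```

    Inverting the click probability recovers the effective mean photon number under the Poissonian assumption, while the TMD's mean click count (about 1.77 here versus 0.86 for the bare APD) tracks the true mean much more closely.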

    Separating true range measurements from multi-path and scattering interference in commercial range cameras

    Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but has enjoyed limited success because the interference is highly scene dependent. An alternative approach, based on separating the strongest and weaker sources of light returned to each pixel prior to range decoding, is more successful, but has only been demonstrated on custom-built range cameras and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to two unmodified off-the-shelf range cameras, the Mesa Imaging SR-4000 and the Canesta Inc. XZ-422 Demonstrator. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
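    The separation idea, treating each pixel's raw measurements as a sum of component returns and recovering them by optimization, can be sketched with a brute-force toy fit at several modulation harmonics. This is a sketch of the concept only, not the paper's algorithm or either camera's interface:

```python
import numpy as np

# Two returns in one pixel: the direct surface at phase phi = 0.8 rad and
# a weaker stray multi-path reflection at 2.3 rad.  Values illustrative.
amps = np.array([1.0, 0.4])
phis = np.array([0.8, 2.3])
freqs = np.arange(1, 5)                        # modulation harmonics used

meas = (amps * np.exp(1j * np.outer(freqs, phis))).sum(axis=1)

def single_return_phase(m):
    # Naive decode: treat the fundamental as a single return; multi-path
    # interference biases this estimate.
    return np.angle(m[0]) % (2 * np.pi)

def two_return_phase(m, grid=120):
    # Brute-force two-component fit: grid-search the pair of phases,
    # solve amplitudes by least squares, keep the best-fitting model,
    # and report the phase of its stronger component.
    cand = np.linspace(0, 2 * np.pi, grid, endpoint=False)
    best_phase, best_err = 0.0, np.inf
    for i in range(grid):
        for j in range(i + 1, grid):
            M = np.exp(1j * np.outer(freqs, [cand[i], cand[j]]))
            amp, *_ = np.linalg.lstsq(M, m, rcond=None)
            err = np.linalg.norm(M @ amp - m)
            if err < best_err:
                best_err = err
                best_phase = cand[i] if abs(amp[0]) >= abs(amp[1]) else cand[j]
    return best_phase
```

    The naive single-return decode is biased by several tenths of a radian here, while the two-component fit recovers the direct-return phase to within the grid resolution, which is the accuracy gain the separation approach is after.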

    Experimental study of a low-order wavefront sensor for the high-contrast coronagraphic imager EXCEDE

    The mission EXCEDE (EXoplanetary Circumstellar Environments and Disk Explorer), selected by NASA for technology development, is designed to study the formation, evolution and architectures of exoplanetary systems, and to characterize circumstellar environments down into stellar habitable zones. It is composed of a 0.7 m telescope equipped with a Phase-Induced Amplitude Apodization Coronagraph (PIAA-C) and a 2000-element MEMS deformable mirror, capable of raw contrasts of 1e-6 at 1.2 lambda/D and 1e-7 above 2 lambda/D. One of the key challenges in achieving those contrasts is to remove low-order aberrations, using a Low-Order WaveFront Sensor (LOWFS). An experiment simulating the starlight suppression system is currently being developed at NASA Ames Research Center, and includes a LOWFS controlling tip/tilt modes in real time at 500 Hz. The LOWFS allowed us to reduce the tip/tilt disturbances to 1e-3 lambda/D rms, improving the previous contrast by an order of magnitude, to 8e-7 between 1.2 and 2 lambda/D. A Linear Quadratic Gaussian (LQG) controller is currently being implemented to further improve that result by reducing residual vibrations. This testbed shows that a good knowledge of the low-order disturbances is a key asset for high-contrast imaging, whether for real-time control or for post-processing. Comment: 12 pages, 20 figures, proceeding of the SPIE conference Optics+Photonics, San Diego 201