15 research outputs found

    DIGITAL ZENITH CAMERA OF THE UNIVERSITY OF LATVIA

    No full text

    High-precision tilt measurement with the electronic pendulum inclination sensor HRTM

    No full text
    The high-resolution electronic inclination sensor HRTM is of interest for tilt measurements with the highest accuracy requirements because of its extremely low noise. This contribution presents experiences with the HRTM sensor obtained under laboratory and field conditions, with emphasis on investigations of the sensor's behaviour under changing temperatures. The HRTM sensor is used for precise tilt measurements in the context of determining the plumb line with a zenith camera.
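    The abstract's emphasis on behaviour under changing temperatures suggests a drift-correction step. Below is a minimal Python sketch, assuming (purely for illustration; the paper does not specify its model) that the thermal drift of the tilt reading is linear in temperature and can be fitted from laboratory data:

    import numpy as np

    def fit_temperature_coefficient(tilt, temperature):
        # Least-squares fit of tilt = k * temperature + offset from lab data.
        A = np.column_stack([temperature, np.ones_like(temperature)])
        k, offset = np.linalg.lstsq(A, tilt, rcond=None)[0]
        return k, offset

    def correct_tilt(tilt_raw, temperature, k, t_ref=20.0):
        # Remove the fitted linear drift relative to a reference temperature.
        return tilt_raw - k * (temperature - t_ref)

    # Usage with synthetic lab data: constant true tilt plus thermal drift.
    temps = np.linspace(15.0, 30.0, 50)
    readings = 5.0 + 0.02 * (temps - 20.0) + np.random.normal(0.0, 0.001, 50)
    k, _ = fit_temperature_coefficient(readings, temps)
    corrected = correct_tilt(readings, temps, k)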

    CALIBRATION FOR INCREASED ACCURACY OF THE RANGE IMAGING CAMERA SWISSRANGER™

    No full text
    Range imaging is a promising new option for measurement and modeling in many different applications, but because the technology has only recently appeared on the market in a few different realizations, knowledge of its capabilities is still limited. In most applications, such as robotics and measurement systems, the required accuracy is on the order of a few millimeters. The raw data of range imaging cameras do not reach this level, so a calibration of the sensor's output is needed. In this paper some of the parameters that influence the behavior and performance of the range imaging camera SwissRanger™ (provided by the Swiss Center for Electronics and Microtechnology, CSEM) are described. Because the output data show a highly systematic structure and strong correlations with these parameters, a parameter-based calibration approach is presented. This includes a photogrammetric camera calibration and a distance system calibration with respect to the reflectivity and the distance itself.
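    As a concrete reading of the "distance system calibration with respect to the reflectivity and the distance itself", here is a hedged Python sketch; the polynomial error model and all variable names are assumptions, not the paper's actual formulation:

    import numpy as np

    def design_matrix(d, a):
        # Degree-2 polynomial terms in measured distance d and amplitude a
        # (amplitude serving as a proxy for reflectivity), plus a cross term.
        return np.column_stack([np.ones_like(d), d, d**2, a, a**2, d * a])

    def fit_distance_correction(d_measured, amplitude, d_reference):
        # Fit coefficients so that d_measured minus the modelled error
        # matches reference distances observed on a calibration field.
        X = design_matrix(d_measured, amplitude)
        coeffs, *_ = np.linalg.lstsq(X, d_measured - d_reference, rcond=None)
        return coeffs

    def correct_distance(d_measured, amplitude, coeffs):
        # Apply the fitted correction to new measurements.
        return d_measured - design_matrix(d_measured, amplitude) @ coeffs

    The photogrammetric camera calibration mentioned alongside it (interior orientation, lens distortion) would be carried out separately with standard photogrammetric tools.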

    Time-of-flight sensor and color camera calibration for multi-view acquisition

    No full text
    This paper presents a multi-view acquisition system using multi-modal sensors, composed of time-of-flight (ToF) range sensors and color cameras. Our system captures multiple pairs of color images and depth maps at multiple viewing directions. To ensure acceptable measurement accuracy, we compensate errors in the sensor measurements and calibrate the multi-modal devices. Through extensive experiments and analysis, we identify the major sources of systematic error in the sensor measurements and construct an error model for compensation. As a result, we provide a practical solution for the real-time error compensation of depth measurements. Moreover, we implement a calibration scheme for the multi-modal devices, unifying the spatial coordinates of the multi-modal sensors. The main contribution of this work is a thorough analysis of the systematic error in sensor measurement and, building on it, a reliable methodology for robust error compensation. The proposed system offers a real-time multi-modal sensor calibration method and is therefore applicable to the 3D reconstruction of dynamic scenes.
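    The two steps the abstract describes, compensating systematic depth error and unifying coordinates across modalities, can be sketched as follows in Python; the polynomial error model, the extrinsics R, t, and the intrinsics K are placeholder assumptions, not values or formulations from the paper:

    import numpy as np

    def compensate_depth(depth, error_coeffs):
        # Subtract a systematic error modelled as a polynomial in the
        # measured depth (an assumed stand-in for the paper's error model).
        return depth - np.polyval(error_coeffs, depth)

    def tof_to_color_pixels(points_tof, R, t, K):
        # Rigidly transform 3-D ToF points into the color camera frame,
        # then project them with the pinhole model.
        points_color = points_tof @ R.T + t      # (N, 3) in color frame
        uv = points_color @ K.T                  # homogeneous pixel coords
        return uv[:, :2] / uv[:, 2:3]            # perspective division

    # Usage with placeholder calibration results.
    K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.05, 0.0, 0.0])
    pts = np.array([[0.1, 0.2, 1.5], [0.0, 0.0, 2.0]])
    pixels = tof_to_color_pixels(pts, R, t, K)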

    Anatomofunctional bimodality imaging for plant phenotyping: An insight through depth imaging coupled to thermal imaging

    [Figure 9.2: The stereovision system and its possible occlusions. (a) Principle scheme of a stereovision system with two cameras. (b) Illustration of occlusions.]

    In depth from focus methods (Nayar and Nakagawa, 1994), at each increment of a translation stage the vision sensor acquires an image that is blurred for points outside the focused plane and sharp for points in it. At each point of the object, blurring acts like a low-pass filter applied to the focused image, so focus can be quantified by measuring the amount of high spatial frequencies (Xiong and Shafer, 1993; Nayar and Nakagawa, 1994; Subbarao and Choi, 1995; Martinez Baena et al., 1997; Choi and Yun, 2000; Helmli and Scherer, 2001; Ahmad and Choi, 2005, 2007; Malik and Choi, 2007, 2008; Minhas et al., 2009). All of these focus measures require highly textured objects. For each object point, the evolution of the focus measure is computed as a function of the position of the translation stage, and the stage position at which the measure reaches its maximum gives the depth of that point (see the sketch below). In depth from focus methods the vision sensor can be a low-cost webcam, but because of the translation stage such systems are cumbersome and hardly usable in the field.

    In depth from defocus methods, the lens parameters (aperture, focal length) are changed instead, so that the scene does not have to be translated. The defocus measure consists of approximating the point spread function (PSF) at each point of the scene. The PSF can be approximated only for textured subwindows and for objects outside the depth of field. For each scene point, the PSF approximation is done locally in subwindows, either within a statistical framework (Rajagopalan and Chaudhuri, 1999; Schechner and Kiryati, 1999; Farid and Simoncelli, 1998) or by deterministic optimization (Xiong and Shafer, 1995; Gokstorp, 1994; Favaro and Soatto, 2000; Favaro et al., 2003; Trouvé et al., 2011). Depth from defocus methods must use vision sensors with well-known parameters (aperture, focal length), which rules out low-cost webcams. Depth from defocus systems are suitable for use in greenhouses on highly textured, immobile plants.
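    As a minimal sketch of the depth from focus procedure described above, assuming a focal stack indexed by stage position and a variance-of-Laplacian style focus measure (one common choice among the high-frequency measures cited; the function names here are illustrative):

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def focus_measure(image, window=9):
        # Local energy of the Laplacian: large where the image is sharp,
        # i.e. where high spatial frequencies survive.
        lap = laplace(image.astype(np.float64))
        return uniform_filter(lap**2, size=window)

    def depth_from_focus(stack, stage_positions):
        # stack: (n_images, H, W) focal stack, one image per stage position.
        scores = np.stack([focus_measure(img) for img in stack])
        best = np.argmax(scores, axis=0)          # index of sharpest image
        return np.asarray(stage_positions)[best]  # per-pixel depth map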