
    Initial steps for high-throughput phenotyping in vineyards

    The evaluation of phenotypic characters of grapevines must be carried out directly in vineyards and is strongly limited by time, costs and the subjectivity of the person in charge. Sensor-based techniques are a prerequisite for non-invasive phenotyping of individual plant traits, for increasing the number of object records and for reducing error variation. Thus, a Prototype-Image-Acquisition-System (PIAS) was developed for the semi-automated capture of geo-referenced images in an experimental vineyard. Different strategies for image interpretation were tested using MATLAB®. The interpretation of images taken in the vineyard against a real background is more practice-oriented but requires the calculation of depth maps. Different image analysis tools were evaluated to enable the contactless, non-invasive detection of bud burst and the quantification of shoots at an early developmental stage (BBCH 10), as well as the fast and accurate determination of grapevine berry size at BBCH 89. Depending on the time of image acquisition at BBCH 10, up to 94 % of green shoots were visible in the images. The mean berry size (BBCH 89) was recorded non-invasively with a precision of 1 mm.
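
    The berry-sizing step lends itself to a compact illustration. Below is a minimal sketch in Python/OpenCV, not the authors' MATLAB® pipeline: it detects roughly circular berries with a Hough circle transform and converts pixel radii to millimetres via a hypothetical scale factor MM_PER_PX that would come from the camera calibration; all parameter values are assumptions.

```python
import cv2
import numpy as np

# Assumed pixel-to-millimetre scale from camera calibration (hypothetical).
MM_PER_PX = 0.12

def estimate_berry_sizes(image_path: str) -> np.ndarray:
    """Detect roughly circular berries and return their diameters in mm."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress leaf texture and noise

    # Hough circle transform; radius bounds reflect the expected berry
    # size at BBCH 89 in pixels (assumed values).
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
        param1=100, param2=30, minRadius=15, maxRadius=60)
    if circles is None:
        return np.array([])
    radii_px = circles[0, :, 2]
    return 2.0 * radii_px * MM_PER_PX  # diameters in mm

diameters = estimate_berry_sizes("bunch_bbch89.png")
if diameters.size:
    print(f"mean berry diameter: {diameters.mean():.1f} mm")
```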

    INCREMENTAL REAL-TIME BUNDLE ADJUSTMENT FOR MULTI-CAMERA SYSTEMS WITH POINTS AT INFINITY

    This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient thanks to incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras, by taking the rigid transformation between the cameras into account, (2) omnidirectional cameras, as it can handle arbitrary bundles of rays, and (3) scene points at infinity, which improve the estimation of the camera orientation, as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of one frame per camera taken in a synchronized way, which are initiated whenever a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras where possible. First experiments show the potential of the incremental bundle adjustment with respect to time requirements. Our experiments are based on a multi-camera system with four fisheye cameras mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.
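
    The incremental update cycle can be sketched with GTSAM, the library in which iSAM2 is implemented. The snippet below is a simplified single-camera illustration of the loop (new keyframe factors in, incremental re-solve), not the authors' multi-camera, omnidirectional adaptation; the calibration values, noise sigmas and the threshold KEYFRAME_DIST are assumptions.

```python
import gtsam
import numpy as np
from gtsam.symbol_shorthand import X, L  # pose and landmark keys

KEYFRAME_DIST = 0.5  # assumed minimal distance [m] triggering a new keyframe
K = gtsam.Cal3_S2(500.0, 500.0, 0.0, 320.0, 240.0)  # assumed calibration
pix_noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)  # 1 px image noise

isam = gtsam.ISAM2()

# Anchor the first keyframe pose with a prior to fix the gauge.
graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
values.insert(X(0), gtsam.Pose3())
isam.update(graph, values)

def add_keyframe(kf_id, pose_guess, observations):
    """Add one keyframe's reprojection factors and re-solve incrementally.

    observations: list of (landmark_id, (u, v), point_guess_or_None);
    point_guess (a 3-vector) is given only on first sight of a landmark.
    """
    new_graph = gtsam.NonlinearFactorGraph()
    new_values = gtsam.Values()
    if kf_id > 0:
        new_values.insert(X(kf_id), pose_guess)
    for lm_id, (u, v), point_guess in observations:
        new_graph.add(gtsam.GenericProjectionFactorCal3_S2(
            gtsam.Point2(u, v), pix_noise, X(kf_id), L(lm_id), K))
        if point_guess is not None:
            new_values.insert(L(lm_id), point_guess)
    isam.update(new_graph, new_values)  # incremental step, no periodic batch
    return isam.calculateEstimate()     # current estimate of all variables
```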

    BUNDLE ADJUSTMENT FOR MULTI-CAMERA SYSTEMS WITH POINTS AT INFINITY

    We present a novel approach for a rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or, like omnidirectional cameras, to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally, multi-camera systems are gaining importance for the acquisition of complex 3D structures. For stabilizing camera orientations, especially rotations, one should generally use points at the horizon, which can be observed over long periods of time, within the bundle adjustment; classical bundle adjustment programs are not capable of this. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with a single viewpoint, such as fisheye cameras, and scene points which are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug 3 from Point Grey.
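
    The core idea, spherical instead of Euclidean normalization of homogeneous vectors, fits in a few lines. The following is a minimal numerical illustration under assumed example coordinates, not the paper's full minimal parameterization:

```python
import numpy as np

def normalize_euclidean(x_h):
    """Classical normalization: divide by the last coordinate.
    Fails (division by zero) for points at infinity, where x_h[-1] == 0."""
    return x_h / x_h[-1]

def normalize_spherical(x_h):
    """Spherical normalization: scale the vector to unit length.
    Well defined for every homogeneous vector, including points at
    infinity, which simply map to directions on the unit sphere."""
    return x_h / np.linalg.norm(x_h)

finite_pt = np.array([2.0, 4.0, 8.0, 2.0])   # scene point (1, 2, 4)
horizon_pt = np.array([3.0, 4.0, 0.0, 0.0])  # direction only: point at infinity

print(normalize_spherical(finite_pt))   # [0.213 0.426 0.853 0.213]
print(normalize_spherical(horizon_pt))  # [0.6 0.8 0.0 0.0], still usable
# normalize_euclidean(horizon_pt) would divide by zero.
```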

    Cloud photogrammetry with dense stereo for fisheye cameras

    We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image; however, the computation of dense 3-D information is more complicated, and standard implementations for dense 3-D stereo reconstruction cannot be applied directly. We therefore examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. Together with an appropriate camera calibration, which includes the internal camera geometry as well as the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for a dense 3-D reconstruction of the clouds located around the cameras. The resulting dense point cloud recovers a detailed and more complete cloud morphology than previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived and used, for example, for radiation closure under cloudy conditions. We implement and evaluate the proposed approach on real-world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and with the cloud-base height estimated by a lidar ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.
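
    A pipeline in this spirit can be sketched with OpenCV, whose fisheye module offers an equidistant-model stereo rectification; this is a generic stand-in for the rectification model examined in the paper, and all calibration inputs (K1, D1, K2, D2, R, T) are assumed to come from a prior calibration step.

```python
import cv2
import numpy as np

def fisheye_dense_stereo(img_l, img_r, K1, D1, K2, D2, R, T):
    """Rectify a calibrated fisheye stereo pair and compute a dense
    disparity map with an off-the-shelf pinhole matcher (SGBM)."""
    size = img_l.shape[1], img_l.shape[0]
    # Epipolar rectification for the fisheye (equidistant) camera model.
    R1, R2, P1, P2, Q = cv2.fisheye.stereoRectify(
        K1, D1, K2, D2, size, R, T, flags=cv2.CALIB_ZERO_DISPARITY)
    map1 = cv2.fisheye.initUndistortRectifyMap(K1, D1, R1, P1, size,
                                               cv2.CV_16SC2)
    map2 = cv2.fisheye.initUndistortRectifyMap(K2, D2, R2, P2, size,
                                               cv2.CV_16SC2)
    rect_l = cv2.remap(img_l, *map1, interpolation=cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, *map2, interpolation=cv2.INTER_LINEAR)

    # Out-of-the-box dense matching on the rectified (pinhole-like) images.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=7)
    disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0

    # Back-project every pixel to a dense 3-D point cloud.
    return cv2.reprojectImageTo3D(disparity, Q)
```

    After rectification the epipolar lines are horizontal, so any off-the-shelf matcher can search for a correspondence at every pixel; the Q matrix from the rectification then back-projects the disparities to metric 3-D points.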

    Quantitative Interpretation of Tracks for Determination of Body Mass

    To better understand the biology of extinct animals, experimentation with extant animals and innovative numerical approaches have grown in recent years. This research project uses principles of soil mechanics and a neoichnological field experiment with an African elephant to derive a novel concept for calculating the mass (i.e., the weight) of an animal from its footprints. We used the elephant's footprint geometry (i.e., vertical displacements, diameter) in combination with soil mechanical analyses (i.e., soil classification, soil parameter determination in the laboratory, Finite Element Analysis (FEA) and gait analysis) for the back-analysis of the elephant's weight from a single footprint. In doing so we validated the first component of a methodology for calculating the weight of extinct dinosaurs. The field experiment was conducted under known boundary conditions at the Zoological Gardens Wuppertal with a female African elephant. The weight of the elephant was measured, and the walking area was prepared with sediment in advance. The elephant was then walked across the test area, leaving a trackway behind. The footprint geometry was obtained by laser scanning. To estimate the dynamic component involved in footprint formation, the velocity the foot reaches when touching the subsoil was determined by the Digital Image Correlation (DIC) technique. Soil parameters were identified by performing laboratory experiments on the soil. FEA was then used for the back-calculation of the elephant's weight. With this study, we demonstrate the applicability of combining footprint geometry with theoretical considerations of the loading of the subsoil during walking and soil mechanical methods for predicting a trackmaker's weight.
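
    The back-analysis idea (measured indentation plus soil parameters yields the load) can be illustrated with a closed-form elastic solution in place of the FEA: the settlement of a rigid circular punch on an elastic half-space, s = P(1 - ν²)/(2Ea), solved for the load P. This is only a first-order sketch; every number below (soil stiffness, Poisson's ratio, footprint radius and depth, feet in contact) is an illustrative assumption, not a measured value from the study.

```python
G = 9.81  # gravitational acceleration [m/s^2]

def punch_load(depth_m, radius_m, E_pa, nu):
    """Load on a rigid circular punch producing the measured settlement,
    from the Boussinesq half-space solution s = P(1 - nu^2) / (2*E*a)."""
    return 2.0 * E_pa * radius_m * depth_m / (1.0 - nu**2)

# Illustrative inputs (assumed, not measured values from the study):
E_soil = 1e6         # Young's modulus of the prepared sediment [Pa]
nu_soil = 0.3        # Poisson's ratio of the sediment [-]
depth = 0.03         # vertical displacement from the laser scan [m]
radius = 0.18        # equivalent circular footprint radius [m]
feet_in_contact = 2  # crude share of body weight during a slow walk

load_per_foot = punch_load(depth, radius, E_soil, nu_soil)  # [N]
mass = feet_in_contact * load_per_foot / G                  # [kg]
print(f"back-calculated body mass: {mass:.0f} kg")
```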