
    Impact of different trajectories on extrinsic self-calibration for vehicle-based mobile laser scanning systems

    The trend toward further integration of automotive electronic control unit functionality into domain control units, as well as the rise of computing-intensive driver assistance systems, has led to a demand for high-performance automotive computation platforms. These platforms have to fulfill stringent safety requirements. One promising approach is the use of performance computation units in combination with safety controllers in a single control unit. Such systems require adequate communication links between the computation units. While Ethernet is widely used, a high-speed serial link communication protocol supported by an Infineon AURIX safety controller appears to be a promising alternative. In this paper, a high-speed serial link IP core is presented, which provides this communication interface for field-programmable gate array (FPGA)-based computing units. In our test setup, the IP core was implemented in a high-performance Xilinx Zynq UltraScale+, which communicated with an Infineon AURIX via high-speed serial link and Ethernet. First bandwidth measurements demonstrated that high-speed serial link is a promising candidate for inter-chip communication, with bandwidths reaching up to 127 Mbit/s for stream transmissions.
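
    As a rough, hypothetical illustration of how stream-based bandwidth measurements of this kind can be set up on the Ethernet side, the sketch below times a continuous TCP stream over the loopback interface and reports the achieved rate in Mbit/s. The host, port, chunk size and duration are arbitrary assumptions; this is not the authors' HSSL/AURIX test harness.

        # Minimal throughput measurement over a TCP stream; the loopback
        # interface stands in for the Ethernet link. Host, port, chunk size
        # and duration are arbitrary assumptions.
        import socket
        import threading
        import time

        HOST, PORT = "127.0.0.1", 50007
        CHUNK = 64 * 1024          # 64 KiB per send
        DURATION_S = 2.0           # measurement window in seconds

        def sink():
            """Accept one connection and discard everything it sends."""
            with socket.create_server((HOST, PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    while conn.recv(CHUNK):
                        pass

        def measure_stream_bandwidth():
            """Send a continuous byte stream for DURATION_S and report the rate."""
            threading.Thread(target=sink, daemon=True).start()
            time.sleep(0.1)                      # let the server start listening
            payload = bytes(CHUNK)
            sent = 0
            with socket.create_connection((HOST, PORT)) as sock:
                start = time.perf_counter()
                while time.perf_counter() - start < DURATION_S:
                    sock.sendall(payload)
                    sent += len(payload)
                elapsed = time.perf_counter() - start
            print(f"stream throughput: {sent * 8 / elapsed / 1e6:.1f} Mbit/s")

        if __name__ == "__main__":
            measure_stream_bandwidth()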

    Improved UAV-borne 3D mapping by fusing optical and laserscanner data

    In this paper, a new method for fusing optical and laserscanner data is presented for improved UAV-borne 3D mapping. We propose to equip an unmanned aerial vehicle (UAV) with a small platform which includes two sensors: a standard low-cost digital camera and a lightweight Hokuyo UTM-30LX-EW laserscanning device (210 g without cable). Initially, both devices are calibrated. This involves a geometric camera calibration and the estimation of the position and orientation offset between the two sensors by lever-arm and bore-sight calibration. Subsequently, feature tracking is performed through the image sequence, considering both extracted interest points and the projected 3D laser points. These 2D results are fused with the measured laser distances and fed into a bundle adjustment to obtain a Simultaneous Localization and Mapping (SLAM) solution. It is demonstrated that fusing optical and laserscanner data improves the precision of the pose estimation.
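
    To illustrate the kind of sensor fusion described above, the following sketch projects 3D laser points into the camera image using a bore-sight rotation, a lever-arm translation and a pinhole camera matrix. All numeric values (intrinsics, offsets, points) are placeholders, not the calibration results of the paper.

        # Project laser points into the camera image via a lever-arm/bore-sight
        # offset and a pinhole camera model. All values are hypothetical.
        import numpy as np

        def boresight_rotation(roll, pitch, yaw):
            """Rotation from the laser frame to the camera frame (XYZ Euler angles)."""
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def project_laser_points(pts_laser, R_bs, t_lever, K):
            """Transform laser points into the camera frame and project to pixels."""
            pts_cam = pts_laser @ R_bs.T + t_lever      # rigid-body transform
            uvw = pts_cam @ K.T                         # pinhole projection
            return uvw[:, :2] / uvw[:, 2:3]             # dehomogenize to pixel coords

        # Hypothetical intrinsics, bore-sight angles (rad), lever arm (m) and points.
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        R_bs = boresight_rotation(0.01, -0.02, 0.005)
        t_lever = np.array([0.05, 0.00, -0.10])
        pts_laser = np.array([[2.0, 0.5, 5.0], [-1.0, 0.2, 8.0]])

        print(project_laser_points(pts_laser, R_bs, t_lever, K))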

    Calibration of a multi-beam Laser System by using a TLS-generated Reference

    Rotating multi-beam LIDARs mounted on moving platforms have become very successful for many applications such as autonomous navigation, obstacle avoidance or mobile mapping. To obtain accurate point coordinates, a precise calibration of such a LIDAR system is required. To determine the corresponding parameters, we propose a calibration scheme which exploits 3D reference point clouds captured by a terrestrial laser scanning (TLS) device. It is assumed that the accuracy of these point clouds is considerably higher than that of the multi-beam LIDAR and that the data represent faces of man-made objects at different distances. After extracting planes in the reference data sets, the point-plane incidences between the measured points and the reference planes are used to formulate implicit constraints. We investigate the Velodyne HDL-64E S2 system as the best-known representative of this kind of sensor system. The usability and feasibility of the calibration procedure are demonstrated with real data sets representing building faces (walls, roof planes and ground). Besides the improvement in point accuracy achieved by applying the calibration results, we test the significance of the parameters related to the sensor model and consider the uncertainty of the measurements with respect to the measured distances. The Velodyne returns two kinds of measurements, distances and encoder angles; to account for this, we perform a variance component estimation to obtain realistic standard deviations for the observations.
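
    The point-plane incidence constraint can be sketched as follows: each LIDAR point, computed from its raw range and angles with candidate correction parameters, should lie on its assigned TLS reference plane, and the signed point-to-plane distances are minimized by non-linear least squares. The sensor model below (a single range offset and encoder-angle offset) is a deliberately simplified stand-in for the full Velodyne HDL-64E S2 model, and all observations are hypothetical.

        # Simplified point-plane calibration: estimate a range offset and an
        # encoder-angle offset by minimizing point-to-plane distances against
        # TLS reference planes. All values are illustrative placeholders.
        import numpy as np
        from scipy.optimize import least_squares

        def beam_point(rng, enc_angle, vert_angle, d_rng, d_enc):
            """Polar-to-Cartesian conversion with range and encoder-angle corrections."""
            r = rng + d_rng
            a = enc_angle + d_enc
            return np.array([r * np.cos(vert_angle) * np.cos(a),
                             r * np.cos(vert_angle) * np.sin(a),
                             r * np.sin(vert_angle)])

        def residuals(params, obs, planes):
            """Signed point-to-plane distances for all (observation, plane) pairs."""
            d_rng, d_enc = params
            res = []
            for (rng, enc, vert), (n, d) in zip(obs, planes):
                X = beam_point(rng, enc, vert, d_rng, d_enc)
                res.append(n @ X - d)
            return np.array(res)

        # Hypothetical observations (range, encoder angle, vertical angle) and the
        # reference planes (unit normal n, offset d) they were assigned to.
        obs = [(10.2, 0.10, 0.02), (12.5, 0.35, 0.02), (8.7, -0.20, 0.02)]
        planes = [(np.array([1.0, 0.0, 0.0]), 10.0),
                  (np.array([0.0, 1.0, 0.0]), 4.3),
                  (np.array([1.0, 0.0, 0.0]), 8.5)]

        fit = least_squares(residuals, x0=[0.0, 0.0], args=(obs, planes))
        print("estimated range / encoder-angle offsets:", fit.x)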

    Enhancement of generic building models by recognition and enforcement of geometric constraints

    Many buildings in 3D city models can be represented by generic models, e.g. boundary representations or polyhedra, without expressing building-specific knowledge explicitly. Without additional constraints, the bounding faces of these building reconstructions do not feature expected structures such as orthogonality or parallelism. The recognition and enforcement of man-made structures within model instances is one way to enhance 3D city models. Since the reconstructions are derived from uncertain and imprecise data, crisp relations such as orthogonality or parallelism are rarely satisfied exactly. Furthermore, the uncertainty of geometric entities is usually not specified in 3D city models. Therefore, we propose a point sampling which simulates the initial point cloud acquisition by airborne laser scanning and provides estimates of the uncertainties. We present a complete workflow for the recognition and enforcement of man-made structures in a given boundary representation. The recognition is performed by hypothesis testing, and the enforcement of the detected constraints by a global adjustment of all bounding faces. Since the adjustment changes not only the geometry but also the topology of the faces, we obtain improved building models which feature regular structures and a potentially reduced complexity. The feasibility and usability of the approach are demonstrated with a real data set.
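
    A minimal sketch of the hypothesis-testing step, assuming the uncertainty of each bounding face is summarized by a covariance matrix of its normal vector: the orthogonality of two faces is tested by dividing the normals' dot product by its first-order propagated standard deviation. The covariances below are illustrative placeholders, not the estimates obtained from the proposed point sampling.

        # Hypothesis test for orthogonality of two uncertain face normals:
        # H0: n1 . n2 = 0, tested against the propagated standard deviation.
        # Covariances are hypothetical placeholders.
        import numpy as np
        from scipy.stats import norm

        def orthogonality_test(n1, C1, n2, C2, alpha=0.05):
            """Return (accepted, test statistic) for H0: n1 and n2 are orthogonal."""
            d = float(n1 @ n2)                       # constraint value, 0 under H0
            var = n2 @ C1 @ n2 + n1 @ C2 @ n1        # first-order variance propagation
            t = d / np.sqrt(var)
            return abs(t) < norm.ppf(1.0 - alpha / 2.0), t

        # Two nearly orthogonal face normals with small, hypothetical uncertainties.
        n1 = np.array([1.0, 0.005, 0.0]);  C1 = 1e-5 * np.eye(3)
        n2 = np.array([0.0, 1.0, 0.01]);   C2 = 1e-5 * np.eye(3)

        accepted, t = orthogonality_test(n1, C1, n2, C2)
        print(f"orthogonality accepted: {accepted} (test statistic {t:.2f})")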

    Geometric reasoning for uncertain observations of man-made structures

    Observations of man-made structures, such as digital images, laser scans or sketches, are inherently uncertain due to the acquisition process. Thus, reverse engineering has to be applied to obtain topologically consistent and geometrically correct model instances by feature aggregation. The corresponding spatial reasoning process usually involves the detection of adjacencies, the generation and testing of hypotheses, and finally the enforcement of the detected relations. We present a complete and general workflow for geometric reasoning that takes the uncertainty of the observations and of the derived low-level features into account. Thereby, we exploit algebraic projective geometry to ease the formulation of geometric constraints. As this comes at the expense of an over-parametrization, we introduce an adjustment model which stringently incorporates uncertainty and copes with singular covariance matrices. The size of the resulting normal equation system depends only on the number of established constraints, which paves the way to efficient solutions. We demonstrate the usefulness and feasibility of the approach with results for the automatic analysis of a sketch and for a building reconstruction based on an airborne laser scan.
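
    As a small example of uncertainty-aware constructions in algebraic projective geometry, the sketch below joins two uncertain homogeneous image points into a line via the cross product, propagates the covariance to first order, and then spherically normalizes the line, which yields the kind of singular covariance matrix the adjustment model has to cope with. The point coordinates and covariances are assumptions for illustration only.

        # Join two uncertain homogeneous points into a line and propagate the
        # covariance; spherical normalization produces a singular covariance.
        # Input values are hypothetical.
        import numpy as np

        def skew(x):
            """Skew-symmetric matrix S(x) with S(x) @ y == np.cross(x, y)."""
            return np.array([[0.0, -x[2], x[1]],
                             [x[2], 0.0, -x[0]],
                             [-x[1], x[0], 0.0]])

        def join_line(x1, C1, x2, C2):
            """Line l = x1 x x2 with first-order covariance propagation."""
            l = np.cross(x1, x2)
            J1, J2 = -skew(x2), skew(x1)            # dl/dx1, dl/dx2
            return l, J1 @ C1 @ J1.T + J2 @ C2 @ J2.T

        def spherical_normalization(l, Cl):
            """Normalize to unit length; the projected covariance becomes singular."""
            n = np.linalg.norm(l)
            ln = l / n
            J = (np.eye(3) - np.outer(ln, ln)) / n  # Jacobian of l -> l/||l||
            return ln, J @ Cl @ J.T

        # Two uncertain image points in homogeneous coordinates (pixel variances
        # are hypothetical).
        x1 = np.array([100.0, 200.0, 1.0]); C1 = np.diag([0.25, 0.25, 0.0])
        x2 = np.array([400.0, 260.0, 1.0]); C2 = np.diag([0.25, 0.25, 0.0])

        l, Cl = join_line(x1, C1, x2, C2)
        ln, Cln = spherical_normalization(l, Cl)
        print("normalized line:", ln)
        print("rank of its covariance:", np.linalg.matrix_rank(Cln))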

    Optimal parameter estimation with homogeneous entities and arbitrary constraints

    Well-known estimation techniques in computational geometry usually deal only with single geometric entities as unknown parameters and do not account for constrained observations within the estimation. The estimation model proposed in this paper is much more general, as it can handle multiple homogeneous vectors as well as multiple constraints. Furthermore, it allows the consistent handling of arbitrary covariance matrices for the observed and the estimated entities. The major novelty is the proper handling of singular observation covariance matrices, made possible by additional constraints within the estimation. These properties are of special interest, for instance, in the calculus of algebraic projective geometry, where singular covariance matrices arise naturally from the non-minimal parameterizations of the entities. The validity of the proposed adjustment model is demonstrated by the estimation of a fundamental matrix from synthetic data and compared to heteroscedastic regression [1], which is considered the state-of-the-art estimator for this task. As the latter is unable to simultaneously estimate multiple entities, we also demonstrate the usefulness and feasibility of our approach by the constrained estimation of three vanishing points from observed uncertain image line segments.
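
    For comparison, a much simpler algebraic least-squares sketch of vanishing point estimation from line segments is given below: each segment contributes the incidence constraint l_i^T v = 0, and v is taken as the right singular vector of the stacked line matrix, with segment length as a crude weight. This is not the constrained adjustment with full covariance information proposed in the paper; the segment endpoints are hypothetical.

        # Algebraic vanishing point estimation from line segments via SVD.
        # Segment endpoints are hypothetical; length is used as a crude weight.
        import numpy as np

        def segment_to_line(p, q):
            """Homogeneous line through the two endpoints of a segment."""
            return np.cross(np.append(p, 1.0), np.append(q, 1.0))

        def estimate_vanishing_point(segments):
            """Right singular vector of the stacked, length-weighted line matrix."""
            rows = []
            for p, q in segments:
                p, q = np.asarray(p, float), np.asarray(q, float)
                l = segment_to_line(p, q)
                w = np.linalg.norm(q - p)                  # crude length weighting
                rows.append(w * l / np.linalg.norm(l))
            _, _, Vt = np.linalg.svd(np.vstack(rows))
            v = Vt[-1]                                     # minimizes ||A v||, ||v|| = 1
            return v[:2] / v[2] if abs(v[2]) > 1e-12 else v

        # Three roughly concurrent segments (e.g. building edges sharing a direction).
        segments = [((10.0, 10.0), (200.0, 105.0)),
                    ((15.0, 60.0), (210.0, 130.0)),
                    ((20.0, 120.0), (220.0, 160.0))]
        print("estimated vanishing point:", estimate_vanishing_point(segments))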