
    Impact of different trajectories on extrinsic self-calibration for vehicle-based mobile laser scanning systems

    The trend toward consolidating automotive electronic control unit functionality into domain control units, together with the rise of computing-intensive driver assistance systems, has led to a demand for high-performance automotive computation platforms. These platforms have to fulfill stringent safety requirements. One promising approach is the use of performance computation units in combination with safety controllers in a single control unit. Such systems require adequate communication links between the computation units. While Ethernet is widely used, the high-speed serial link communication protocol supported by the Infineon AURIX safety controller appears to be a promising alternative. In this paper, a high-speed serial link IP core is presented, which provides this communication interface for field-programmable gate array–based computing units. In our test setup, the IP core was implemented on a high-performance Xilinx Zynq UltraScale+, which communicated with an Infineon AURIX via high-speed serial link and Ethernet. First bandwidth measurements demonstrate that high-speed serial link is an interesting candidate for inter-chip communication, reaching bandwidths of up to 127 Mbit/s using stream transmissions.

    Automatic Extrinsic Self-Calibration of Mobile Mapping Systems Based on Geometric 3D Features

    Mobile Mapping is an efficient technology to acquire spatial data of the environment. The spatial data is fundamental for applications in crisis management, civil engineering or autonomous driving. The extrinsic calibration of the Mobile Mapping System is a decisive factor that affects the quality of the spatial data. Many existing extrinsic calibration approaches require artificial targets in a time-consuming calibration procedure. Moreover, they are usually designed for a specific combination of sensors and are thus not universally applicable. We introduce a novel extrinsic self-calibration algorithm that is fully automatic and completely data-driven. The fundamental assumption of the self-calibration is that the calibration parameters are best estimated when the derived point cloud best represents the real physical circumstances. The cost function we use to evaluate this is based on geometric features, which rely on the 3D structure tensor derived from the local neighborhood of each point. We compare different cost functions based on geometric features and a cost function based on the Rényi quadratic entropy to evaluate their suitability for the self-calibration. Furthermore, we test the self-calibration on a synthetic dataset and two different real datasets. The real datasets differ in the environment, the scale and the utilized sensors. We show that the self-calibration is able to extrinsically calibrate Mobile Mapping Systems with different combinations of mapping and pose estimation sensors, such as a 2D laser scanner to a Motion Capture System and a 3D laser scanner to a stereo camera and ORB-SLAM2. For the first dataset, the parameters estimated by our self-calibration lead to a more accurate point cloud than two comparative approaches. For the second dataset, which was acquired via vehicle-based mobile mapping, our self-calibration achieves results comparable to a manually refined reference calibration, while being universally applicable and fully automated.

    Lost in translation (and rotation): Rapid extrinsic calibration for 2D and 3D LIDARs

    This paper describes a novel method for determining the extrinsic calibration parameters between 2D and 3D LIDAR sensors with respect to a vehicle base frame. To recover the calibration parameters we attempt to optimize the quality of a 3D point cloud produced by the vehicle as it traverses an unknown, unmodified environment. The point cloud quality metric is derived from Rényi quadratic entropy and quantifies the compactness of the point distribution using only a single tuning parameter. We also present a fast approximate method to reduce the computational requirements of the entropy evaluation, allowing unsupervised calibration in vast environments with millions of points. The algorithm is analyzed using real world data gathered in many locations, showing robust calibration performance and substantial speed improvements from the approximations.
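The Rényi quadratic entropy of a point set under a Gaussian kernel density estimate reduces to a sum over pairwise kernel evaluations, which is what the single tuning parameter (the kernel bandwidth) controls. A minimal sketch, dropping the constant normalization factor that does not affect comparisons at fixed bandwidth:

```python
import numpy as np

def renyi_quadratic_entropy(points, sigma=0.05):
    """Rényi quadratic entropy of a point set under a Gaussian KDE
    (sketch; sigma is the single tuning parameter). The constant
    kernel normalization is dropped, so only relative comparisons
    at a fixed sigma are meaningful. Lower = more compact cloud."""
    n = len(points)
    # All pairwise squared distances via broadcasting, shape (n, n)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Gaussian kernel between pairs; convolving two sigma-kernels
    # doubles the variance, hence 2 * (2 sigma^2) in the denominator
    k = np.exp(-d2 / (4 * sigma ** 2))
    return -np.log(k.sum() / n ** 2)

# A miscalibration blurs the cloud; the blurred copy of the same
# points should have strictly higher (worse) entropy.
rng = np.random.default_rng(1)
base = rng.uniform(0, 1, (100, 3))
crisp = renyi_quadratic_entropy(base)
blurred = renyi_quadratic_entropy(base + rng.normal(0, 0.1, base.shape))
```

The naive double sum is O(n²), which is why the paper's fast approximation matters for clouds with millions of points.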

    Computational Multimedia for Video Self Modeling

    Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of oneself. This is the idea behind the psychological theory of self-efficacy - you can learn to perform certain tasks because you see yourself doing them, which provides the most ideal form of behavior modeling. The effectiveness of VSM has been demonstrated for many different types of disabilities and behavioral problems, ranging from stuttering, inappropriate social behaviors, autism and selective mutism to sports training. However, there is an inherent difficulty associated with the production of VSM material. Prolonged and persistent video recording is required to capture the rare, if not entirely absent, snippets that can be strung together to form novel video sequences of the target skill. To solve this problem, in this dissertation, we use computational multimedia techniques to facilitate the creation of synthetic visual content for self-modeling that can be used by a learner and his/her therapist with a minimum amount of training data. There are three major technical contributions in my research. First, I developed an Adaptive Video Re-sampling algorithm to synthesize realistic lip-synchronized video with minimal motion jitter. Second, to denoise and complete the depth map captured by structured-light sensing systems, I introduced a layer based probabilistic model to account for various types of uncertainties in the depth measurement. Third, I developed a simple and robust bundle-adjustment based framework for calibrating a network of multiple wide baseline RGB and depth cameras.
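The camera-network calibration in the third contribution ultimately rests on recovering rigid transforms between sensors from shared 3D observations. A minimal sketch of that core step via the closed-form Kabsch/SVD alignment; this is a building block under stated assumptions, not the dissertation's full bundle adjustment, which would jointly refine all camera poses and points:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the Kabsch/SVD method. One pairwise building block of
    multi-camera extrinsic calibration; a bundle adjustment would
    refine all poses and 3D points jointly instead."""
    cs, cd = src.mean(0), dst.mean(0)
    # Cross-covariance of the centered correspondences
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: recover a known relative pose between two depth
# cameras observing the same 50 points (noiseless, so recovery is exact).
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
obs = pts @ R_true.T + t_true
R_est, t_est = rigid_align(pts, obs)
```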

    Representing 3D shape in sparse range images for urban object classification

    This thesis develops techniques for interpreting 3D range images acquired in outdoor environments at a low resolution. It focuses on the task of robustly capturing the shapes that comprise objects, in order to classify them. With the recent development of 3D sensors such as the Velodyne, it is now possible to capture range images at video frame rates, allowing mobile robots to observe dynamic scenes in 3D. To classify objects in these scenes, features are extracted from the data, which allows different regions to be matched. However, range images acquired at this speed are of low resolution, and there are often significant changes in sensor viewpoint and occlusion. In this context, existing methods for feature extraction do not perform well. This thesis contributes algorithms for the robust abstraction from 3D points to object classes. Efficient region-of-interest and surface normal extraction are evaluated, resulting in a keypoint algorithm that provides stable orientations. These build towards a novel feature, called the ‘line image,’ that is designed to consistently capture local shape, regardless of sensor viewpoint. It does this by explicitly reasoning about the difference between known empty space, and space that has not been measured due to occlusion or sparse sensing. A dataset of urban objects scanned with a Velodyne was collected and hand labelled, in order to compare this feature with several others on the task of classification. First, a simple k-nearest neighbours approach was used, where the line image showed improvements. Second, more complex classifiers were applied, requiring the features to be clustered. The clusters were used in topic modelling, allowing specific sub-parts of objects to be learnt across multiple scales, improving accuracy by 10%. This work is applicable to any range image data. In general, it demonstrates the advantages in using the inherent density and occupancy information in a range image during 3D point cloud processing.
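The empty-versus-unmeasured distinction a range image affords can be made concrete with a small sketch. This is an illustration of the occupancy reasoning, not the thesis' line-image feature itself; the cell classification scheme and `eps` tolerance are assumptions:

```python
import numpy as np

# Illustrative occupancy states for a cell along a sensor beam
EMPTY, OCCUPIED, UNKNOWN = 0, 1, 2

def classify_cell(range_image, beam, query_range, eps=0.1):
    """Classify a point at distance query_range along one beam.

    A measured return at distance r implies everything nearer than r
    on that beam is known free space, while everything behind r is
    occluded. NaN marks beams with no return at all (e.g. sky), where
    nothing can be concluded."""
    r = range_image[beam]
    if np.isnan(r):
        return UNKNOWN            # no return: nothing measured on this beam
    if query_range < r - eps:
        return EMPTY              # the laser passed through this cell
    if abs(query_range - r) <= eps:
        return OCCUPIED           # the surface that produced the return
    return UNKNOWN                # behind the surface: occluded

scan = np.array([5.0, np.nan, 2.0])     # range per beam, NaN = no return
states = [classify_cell(scan, 0, 1.0),  # in front of a 5 m return
          classify_cell(scan, 2, 4.0),  # behind a 2 m return
          classify_cell(scan, 1, 3.0)]  # beam with no return
```

Treating the two UNKNOWN cases differently from EMPTY is exactly the density and occupancy information that a raw, unordered point cloud discards.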

    Self-Calibration of Mobile Multi-Sensor Systems Using Geometric 3D Features

    A mobile multi-sensor system enables the efficient spatial acquisition of objects and the environment. Calibrating the mobile multi-sensor system is a necessary preprocessing step for sensor data fusion and for accurate spatial acquisition. In conventional approaches, experts calibrate the mobile multi-sensor system before use, in elaborate procedures, by recording a calibration object of known shape. In contrast to such object-based calibrations, a self-calibration is more practical, saves time and determines the sought parameters with greater currency. This work presents a new method for the self-calibration of mobile multi-sensor systems, referred to as feature-based self-calibration. The feature-based self-calibration is a data-driven, universal method suited to any combination of a pose estimation sensor and a depth sensor. Its fundamental assumption is that the sought parameters are best determined when the captured point cloud has the highest possible quality. The cost function used to assess this quality is based on geometric 3D features, which in turn are derived from the local neighborhood of each point. Besides a detailed analysis of different aspects of the self-calibration, such as the influence of the system poses on the result, the suitability of different geometric 3D features for the self-calibration, and the convergence radius of the method, the feature-based self-calibration is evaluated on one synthetic and three real datasets. These datasets were recorded with different sensors and in different environments. The experiments demonstrate the versatile applicability of the feature-based self-calibration with respect to both sensors and environments.
The results are always compared with a suitable object-based calibration from the literature and a further, re-implemented self-calibration. Compared with these methods, the feature-based self-calibration achieves better or at least comparable accuracies for all datasets. Its accuracy and precision match the current state of research. For the dataset with the highest sensor accuracies, for example, the parameters of the relative translation between the rigid body of a motion capture system and a laser scanner are determined with an accuracy of approximately 1 cm, even though the distance measurement accuracy of this laser scanner is only 3 cm.