
    Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing

    Get PDF
    Multi-camera systems are being deployed in a variety of vehicles and mobile robots today. To eliminate the need for cost- and labor-intensive maintenance and calibration, continuous self-calibration is highly desirable. In this book, we present such an approach for the self-calibration of multi-camera systems for vehicle surround sensing. In an extensive evaluation, we assess our algorithm quantitatively using real-world data.

    Single and multiple stereo view navigation for planetary rovers

    Get PDF
    © Cranfield University. This thesis deals with the challenge of autonomous navigation for the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques, as done in the literature, an unviable solution and necessitates other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's egomotion problem. The homogeneity of Mars' terrain makes the robustness of the low-level image processing techniques a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detecting features that are robust against illumination changes, and matching and associating them uniquely, is a sought-after capability. A solution for feature robustness against illumination variation is proposed that combines Harris corner detection with a moment image representation: the former provides efficient feature detection, while the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. The addition of local feature descriptors then guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars rover is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored, and alternative photogrammetry reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability; because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed motion estimation solutions. The developed egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results show the methods presented here to be accurate and reliable approaches capable of solving the egomotion problem in a Mars environment.
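    The abstract names a concrete feature pipeline: Harris corner detection plus a bucketing strategy that spreads detections homogeneously over the image. As a rough illustration of that idea only (not the thesis's implementation; the grid size and per-cell quota below are assumed tuning values), a bucketed Harris detector can be sketched as follows:

```python
# A minimal sketch of Harris detection with bucketing: the image is divided
# into a grid, and only the strongest responses in each cell are kept, so
# features are distributed homogeneously. Parameters are illustrative.
import cv2
import numpy as np

def bucketed_harris(gray, grid=(8, 8), per_cell=10, block=2, ksize=3, k=0.04):
    """Return (x, y) corners, at most `per_cell` per grid cell."""
    response = cv2.cornerHarris(np.float32(gray), block, ksize, k)
    h, w = gray.shape
    ch, cw = h // grid[0], w // grid[1]
    keypoints = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = response[gy * ch:(gy + 1) * ch, gx * cw:(gx + 1) * cw]
            # indices of the strongest responses within this cell
            flat = np.argsort(cell, axis=None)[-per_cell:]
            ys, xs = np.unravel_index(flat, cell.shape)
            for y, x in zip(ys, xs):
                if cell[y, x] > 0:  # keep only corner-like (positive) responses
                    keypoints.append((gx * cw + x, gy * ch + y))
    return keypoints

# usage: corners = bucketed_harris(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```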

    Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing

    Get PDF
    Multi-camera systems are already used in a wide variety of vehicles and mobile robots today. Applications range from simple assistance functions, such as generating a virtual surround view, to environment perception as required for partially and fully automated driving. To derive metric quantities such as distances and angles from the camera images and to build a consistent model of the environment, the imaging behavior of the individual cameras as well as their relative poses must be known. In particular, determining the relative poses of the cameras, described by the extrinsic calibration, is laborious, since it can only be performed with the complete system assembled. Moreover, non-negligible changes due to external influences are to be expected over the lifetime of the vehicle. To avoid the high time and cost overhead of regular maintenance, a self-calibration method is required that continuously re-estimates the extrinsic calibration parameters. Self-calibration typically exploits overlapping fields of view to estimate the extrinsic calibration from image correspondences. If the fields of view of several cameras do not overlap, however, the calibration parameters can also be derived from the relative motions that the individual cameras observe. The motion of typical road vehicles does not allow all calibration parameters to be determined, though. To enable the complete estimation of the parameters, additional constraint equations, arising e.g. from observation of the ground plane, can be incorporated. In this work, a theoretical analysis shows which parameters can be determined uniquely from combinations of different constraint equations. To capture the surroundings of a vehicle completely, lenses with a very large field of view, such as fisheye lenses, are typically used. This work proposes a method for establishing image correspondences that accounts for the geometric distortions caused by fisheye lenses and strongly changing viewpoints. Building on this, we present a robust method for tracking the parameters of the ground plane. Based on the theoretical observability analysis and the proposed methods, we present a robust, recursive calibration procedure built on an extended Kalman filter. The proposed calibration procedure is distinguished in particular by its small number of internal parameters and its high flexibility regarding the incorporated constraint equations, and it relies solely on the image data of the multi-camera system. In an extensive experimental evaluation on real data, we compare the results of the methods based on different constraint equations and motion models with the parameters obtained from a reference calibration. The best results were achieved by combining all proposed constraint equations. Using several examples, we show that the achieved accuracy is sufficient for a wide range of applications.
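    Deriving the extrinsics from the relative motions each camera observes, as described above, matches the classic hand-eye constraint R_a R_x = R_x R_b. As a hedged illustration only (the thesis uses a recursive extended Kalman filter; the batch SVD solver and function name below are assumptions), the rotational part can be sketched as:

```python
# A minimal sketch: recover the fixed rotation R_x between two rigidly mounted
# cameras from pairs of relative rotations they observe over the same time
# steps. R_a R_x = R_x R_b implies axis_a = R_x axis_b, so we align rotation
# axes with the Kabsch algorithm. Not the thesis's EKF formulation.
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotation_from_motions(rots_a, rots_b):
    """rots_a[i], rots_b[i]: 3x3 relative rotations of cameras A and B
    over time step i. Returns R_x with R_a @ R_x ~= R_x @ R_b."""
    axes_a = np.stack([R.from_matrix(Ra).as_rotvec() for Ra in rots_a])
    axes_b = np.stack([R.from_matrix(Rb).as_rotvec() for Rb in rots_b])
    # Kabsch: best rotation mapping axes_b onto axes_a in a least-squares sense
    H = axes_b.T @ axes_a
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```

    Note that for planar vehicle motion all rotation axes are nearly parallel, so this system becomes degenerate; that is exactly the observability gap the abstract closes with additional constraints such as the ground plane.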

    Robust ego-localization using monocular visual odometry

    Get PDF

    Vehicle Tracking and Motion Estimation Based on Stereo Vision Sequences

    Get PDF
    In this dissertation, a novel approach for estimating the trajectories of road vehicles such as cars, vans, or motorbikes from stereo image sequences is presented. Moving objects are detected and reliably tracked in real time from within a moving car. The resulting information on the pose and motion state of other moving objects with respect to the ego-vehicle is an essential basis for future driver assistance and safety systems, e.g., for collision prediction. The focus of this contribution is on oncoming traffic, while most existing work in the literature addresses tracking the lead vehicle. The overall approach is generic and scalable to a variety of traffic scenes, including inner-city, country road, and highway scenarios. A considerable part of this thesis addresses oncoming traffic at urban intersections. The parameters to be estimated include the 3D position and orientation of an object relative to the ego-vehicle, as well as the object's shape, dimensions, velocity, acceleration, and rotational velocity (yaw rate). The key idea is to derive these parameters, by means of an extended Kalman filter, from a set of tracked 3D points on the object's surface, which are registered to a time-consistent object coordinate system. Combining the rigid 3D point cloud model with the dynamic model of a vehicle is one main contribution of this thesis. Vehicle tracking at intersections requires covering a wide range of object dynamics, since vehicles can turn quickly. Three different approaches for tracking objects during highly dynamic turn maneuvers, up to extreme maneuvers such as skidding, are presented and compared. These approaches allow for an online adaptation of the filter parameter values, overcoming manual parameter tuning depending on the dynamics of the tracked object in the scene; this is the second main contribution. Further contributions include two initialization methods, a robust outlier handling, a probabilistic approach for assigning new points to a tracked object, and a mid-level fusion of the vision-based approach with a radar sensor. The overall system is systematically evaluated on both simulated and real-world data. The experimental results show that the proposed system is able to accurately estimate the object pose and motion parameters in a variety of challenging situations, including night scenes, quick turn maneuvers, and partial occlusions. The limits of the system are also carefully investigated.
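    The dynamic vehicle model mentioned above, with velocity, acceleration, and yaw rate in the state, is typically propagated with a constant-turn-rate prediction inside the extended Kalman filter. Below is a minimal sketch of such a prediction step; the state layout and the constant-velocity arc integration are illustrative assumptions, not the dissertation's exact filter:

```python
# A minimal sketch of a planar constant-turn-rate prediction step.
# Acceleration is propagated only into the speed term for simplicity.
import numpy as np

def predict_ctrv(state, dt):
    """state = [x, y, yaw, v, a, yaw_rate]; returns the propagated state."""
    x, y, yaw, v, a, w = state
    if abs(w) > 1e-6:
        # turning: integrate position along a circular arc
        x += (v / w) * (np.sin(yaw + w * dt) - np.sin(yaw))
        y += (v / w) * (np.cos(yaw) - np.cos(yaw + w * dt))
    else:
        # nearly straight motion
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
    return np.array([x, y, yaw + w * dt, v + a * dt, a, w])
```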

    Stereo vision for obstacle detection in autonomous vehicle navigation

    Get PDF
    Master's thesis (Master of Engineering)

    A Comprehensive Introduction of Visual-Inertial Navigation

    Full text link
    In this article, a tutorial introduction to visual-inertial navigation (VIN) is presented. Visual and inertial perception are two complementary sensing modalities, with cameras and inertial measurement units (IMUs) as the corresponding sensors. The low cost and light weight of camera-IMU sensor combinations make them ubiquitous in robotic navigation. Visual-inertial navigation is a state estimation problem that estimates the ego-motion and local environment of the sensor platform. This paper presents visual-inertial navigation in the classical state estimation framework, first illustrating the estimation problem in terms of state variables and system models, including the representations (parameterizations) of the related quantities, the IMU dynamic and camera measurement models, and the corresponding probabilistic graphical models (factor graphs). Secondly, we investigate the existing model-based estimation methodologies, which involve filter-based and optimization-based frameworks and the related on-manifold operations. We also discuss the calibration of relevant parameters and the initialization of the states of interest in optimization-based frameworks. Then the evaluation and improvement of VIN in terms of accuracy, efficiency, and robustness are discussed. Finally, we briefly mention the recent development of learning-based methods that may become alternatives to traditional model-based methods.
    Comment: 35 pages, 10 figures

    Correntropy: Answer to non-Gaussian noise in modern SLAM applications?

    Get PDF
    The problem of non-Gaussian noise/outliers is intrinsic to modern Simultaneous Localization and Mapping (SLAM) applications. Despite the numerous algorithms in SLAM, addressing this problem has become crucial in modern robotics applications. This work focuses on this problem by incorporating correntropy into SLAM. Before correntropy, multiple attempts at dealing with non-Gaussian noise were proposed, with significant progress over time, but the underlying assumption of Gaussianity might not be enough in real-life robotics applications.
    Most modern SLAM algorithms propose the `best' estimates given a set of sensor measurements. Apart from addressing the non-Gaussian problems in a SLAM system, our work attempts to address two more complex questions concerning SLAM: (a) if one of the sensors gives faulty measurements over time (`faulty' measurements can be non-Gaussian in nature), how should a SLAM framework adapt to such scenarios? (b) In situations where there is manual intervention, or a third-party attacker tries to change the measurements and affect the overall estimate of the SLAM system, how can a SLAM system handle such situations (addressing the self-security aspect of SLAM)? We explore the idea of correntropy for addressing these problems in popular filtering-based approaches like the Kalman filter (KF) and extended Kalman filter (EKF), which highlights the `localization' part of SLAM. We then propose a framework for fusing the odometries computed individually from a stereo sensor and a Lidar sensor (Iterative Closest Point (ICP) based odometry), and we describe the effectiveness of using correntropy in this framework, especially in situations where a third-party attacker attempts to corrupt the Lidar-computed odometry. We extend the usage of correntropy to the `mapping' part of SLAM (registration), which is the highlight of our work. Although registration is a well-established problem, earlier approaches are very inefficient under large rotations and translations; in addition, when the 3D datasets used for alignment are corrupted with non-Gaussian noise (shot/impulse noise), prior state-of-the-art approaches fail. Our work has given rise to another variant of ICP, which we name Correntropy Similarity Matrix ICP (CoSM-ICP), and which is robust to large translations and rotations as well as to shot/impulse noise. We verify through results how well our variant of ICP outperforms the other variants under large rotations and translations as well as under large outliers/non-Gaussian noise. In addition, we deploy our CoSM algorithm in applications that compute the extrinsic calibration of a Lidar-stereo sensor pair as well as Lidar-camera calibration using a planar checkerboard in a single frame. In general, we verify through results how efficiently our approach of using correntropy can tackle non-Gaussian/shot/impulse noise in robotics applications.
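    The key mechanism behind correntropy-based robustness is that each residual is scored with a Gaussian kernel, so gross outliers receive exponentially small influence. A minimal sketch of such weighting inside one weighted rigid-alignment step follows; the kernel bandwidth sigma and the Kabsch-style solver are assumptions, and this is not the CoSM-ICP algorithm itself:

```python
# Correntropy (Gaussian-kernel) weights for paired 3D points, followed by a
# weighted Kabsch step that downweights impulse-noise correspondences.
import numpy as np

def correntropy_weights(src, dst, sigma=0.1):
    """Per-correspondence Gaussian-kernel weights, in (0, 1]."""
    residuals = np.linalg.norm(src - dst, axis=1)
    return np.exp(-residuals ** 2 / (2.0 * sigma ** 2))

def weighted_alignment(src, dst, w):
    """Weighted rigid transform (R_mat, t) mapping src toward dst."""
    mu_s = (w[:, None] * src).sum(0) / w.sum()
    mu_d = (w[:, None] * dst).sum(0) / w.sum()
    H = (src - mu_s).T @ ((dst - mu_d) * w[:, None])
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_mat = Vt.T @ D @ U.T
    return R_mat, mu_d - R_mat @ mu_s
```

    Iterating these two steps (re-matching, re-weighting, re-aligning) gives a correntropy-robustified ICP loop in which shot noise barely affects the estimate.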

    Self-Localization for Autonomous Driving Using Vector Maps and Multi-Modal Odometry

    Get PDF
    One of the fundamental requirements in automated driving is accurate vehicle localization, since modules such as motion planning and control require the accurate location and heading of the ego-vehicle to navigate safely within the drivable region. Global Navigation Satellite Systems (GNSS) can provide the geolocation of the vehicle in various outdoor environments; however, they suffer from poor observability and even signal loss in GNSS-denied environments such as urban canyons. Map-based self-localization systems are another tool for estimating the pose of the vehicle in known environments. The main purpose of this research is to design a real-time self-localization system for autonomous driving. To provide short-term constraints on the self-localization system, a multi-modal vehicle odometry algorithm is developed that fuses an Inertial Measurement Unit (IMU), a camera, a Lidar, and GNSS through an Error-State Kalman Filter (ESKF). Additionally, a Machine-Learning (ML)-based odometry algorithm is developed to compensate for self-localization unavailability, using kernel-based regression models that fuse the IMU, encoders, and a steering sensor along with recent historical measurement data. The simulation and experimental results demonstrate that the vehicle odometry can be estimated with good accuracy. Toward the main objective of the thesis, a novel, computationally efficient self-localization algorithm is developed that uses geospatial information from High-Definition (HD) maps along with observations of nearby landmarks. This approach uses situation- and uncertainty-aware attention mechanisms to select “suitable” landmarks at any drivable location within the known environment based on their observability and level of uncertainty. By using landmarks that are invariant to seasonal changes and knowing “where to look” proactively, robustness and computational efficiency are improved. The developed localization system is implemented and experimentally evaluated on WATonoBus, the University of Waterloo's autonomous shuttle. The experimental results confirm excellent computational efficiency and good accuracy.
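    As a rough sketch of the kernel-based regression fallback described above (the feature layout, hyperparameters, and function names are assumptions, not the thesis's model), an RBF kernel ridge regressor can map a short window of IMU, encoder, and steering readings to a short-horizon pose increment:

```python
# A minimal kernel-regression odometry fallback: trained while the main
# localization is available, queried when it drops out.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# X: rows of stacked recent sensor readings (e.g., wheel speeds, steering
# angle, IMU yaw rate over a short history window); Y: pose increments
# (dx, dy, dyaw) from the reference localization over the same steps.
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5)

def train(X, Y):
    """Fit the regressor on logged sensor windows and pose increments."""
    model.fit(X, Y)

def predict_increment(x_window):
    """Predict the next (dx, dy, dyaw) when the main localization is lost."""
    return model.predict(x_window.reshape(1, -1))[0]
```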