33 research outputs found

    Road Friction Estimation for Connected Vehicles using Supervised Machine Learning

    Full text link
    In this paper, the problem of road friction prediction from a fleet of connected vehicles is investigated. A framework is proposed to predict the road friction level using both historical friction data from the connected cars and data from weather stations, and comparative results from different methods are presented. The problem is formulated as a classification task in which the available data are used to train three machine learning models (logistic regression, support vector machines, and neural networks) to predict the future friction class (slippery or non-slippery) for specific road segments. In addition to the friction values measured by the moving vehicles, parameters such as humidity, temperature, and rainfall are used to build descriptive feature vectors as input to the classifiers. The proposed prediction models are evaluated over prediction horizons of 0 to 120 minutes into the future, and the evaluation shows that the neural network method gives the most stable results across conditions.
    Comment: Published at IV 201
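    A minimal sketch of the classification setup described above, using scikit-learn. The data here are synthetic placeholders, and the feature layout (historical friction, humidity, temperature, rainfall) and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Assumed feature vector per road segment:
# [historical_friction, humidity, temperature, rainfall]
X = rng.random((n, 4))
y = (X[:, 0] < 0.4).astype(int)  # 1 = slippery, 0 = non-slippery (toy rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
models = {
    "logistic_regression": LogisticRegression(),
    "svm": SVC(),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
```

    In the paper's setting, one such model would be trained per prediction horizon, with the label taken from the friction class observed that far in the future.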

    Imitation Learning for Vision-based Lane Keeping Assistance

    Full text link
    This paper investigates direct imitation learning from human drivers for the task of lane keeping assistance on highway and country roads, using grayscale images from a single front-view camera. The employed method uses a convolutional neural network (CNN) as the policy driving the vehicle. The policy is successfully learned via imitation learning from real-world data collected from human drivers and is evaluated in closed-loop simulated environments, demonstrating good driving behaviour and robustness to domain changes. Evaluation is based on two proposed performance metrics measuring how well the vehicle is positioned in the lane and how smooth the driven trajectory is.
    Comment: International Conference on Intelligent Transportation Systems (ITSC
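    A minimal PyTorch sketch of a CNN steering policy trained by behavioural cloning, as the abstract describes. The input size (1x66x200 grayscale frames), the layer sizes, and the scalar steering output are assumptions for illustration, not the paper's exact network.

```python
import torch
import torch.nn as nn

class SteeringPolicy(nn.Module):
    """Maps a grayscale front-camera frame to a steering command."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(100), nn.ReLU(), nn.Linear(100, 1)
        )

    def forward(self, x):
        return self.head(self.features(x))

policy = SteeringPolicy()
frames = torch.randn(8, 1, 66, 200)   # batch of grayscale camera frames
human_steering = torch.randn(8, 1)    # recorded human steering commands
loss = nn.functional.mse_loss(policy(frames), human_steering)
loss.backward()                       # one behavioural-cloning gradient step
```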

    Automatically Learning Formal Models from Autonomous Driving Software

    Get PDF
    The correctness of autonomous driving software is of utmost importance, as incorrect behavior may have catastrophic consequences. Formal model-based engineering techniques can help guarantee correctness and thereby allow the safe deployment of autonomous vehicles. However, challenges exist for widespread industrial adoption of formal methods. One of these challenges is the model construction problem. Manual construction of formal models is time-consuming, error-prone, and intractable for large systems. Automating model construction would be a big step towards widespread industrial adoption of formal methods for system development, re-engineering, and reverse engineering. This article applies active learning techniques to obtain formal models of an existing (under development) autonomous driving software module implemented in MATLAB, demonstrating the feasibility of automated learning for automotive industrial use. Additionally, practical challenges in applying automata learning, and possible directions for integrating automata learning into the automotive software development workflow, are discussed.
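    A schematic sketch of the query interfaces that active automata learning needs: output (membership) queries run input words on the system under learning (SUL), and equivalence queries are approximated by conformance testing. The toy SUL, the function-valued hypothesis, and the random-sampling check below are illustrative stand-ins for the MATLAB module and a full learner such as L*.

```python
import random

class ToySUL:
    """Black-box stand-in for the software module under learning."""
    def __init__(self):
        self.state = 0
    def reset(self):
        self.state = 0
    def step(self, symbol):
        # Toggle between two internal modes on input "a"; output reports the mode.
        self.state = (self.state + 1) % 2 if symbol == "a" else self.state
        return f"mode{self.state}"

def output_query(sul, word):
    """Output query: run an input word from the initial state, record outputs."""
    sul.reset()
    return tuple(sul.step(s) for s in word)

def equivalence_query(sul, hypothesis, alphabet, samples=1000, max_len=8):
    """Approximate equivalence query via random conformance testing."""
    for _ in range(samples):
        word = random.choices(alphabet, k=random.randint(1, max_len))
        if output_query(sul, word) != hypothesis(word):
            return word  # counterexample for the learner to refine with
    return None

hypothesis = lambda w: output_query(ToySUL(), w)  # trivially correct hypothesis
print(equivalence_query(ToySUL(), hypothesis, alphabet=["a", "b"]))  # -> None
```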

    A State-Space Approach to Dynamic Nonnegative Matrix Factorization

    Full text link

    Selected Topics in Inertial and Visual Sensor Fusion : Calibration, Observability Analysis and Applications

    No full text
    Recent improvements in the development of inertial and visual sensors allow building small, lightweight, and cheap motion capture systems, which are becoming a standard feature of smartphones and personal digital assistants. This dissertation describes the development of new motion sensing strategies using inertial and inertial-visual sensors. The thesis contributions are presented in two parts. The first part focuses mainly on the use of inertial measurement units. First, the problem of sensor calibration is addressed, and a low-cost and accurate method to calibrate the accelerometer cluster of such a unit is proposed. The method is based on the maximum likelihood estimation framework, which results in a minimum variance unbiased estimator. Then, using the inertial measurement unit, a probabilistic user-independent method is proposed for pedestrian activity classification and gait analysis. The work targets two groups of applications: human activity classification, and joint human activity and gait-phase classification. The developed methods are based on continuous hidden Markov models, and the achieved figures of merit on the collected data validate the reliability of the proposed methods for the desired applications.
    In the second part, the problem of inertial and visual sensor fusion is studied, covering contributions related to sensor calibration, motion estimation, and observability analysis. The proposed visual-inertial schemes can be divided into three systems; for each system, an estimation approach is proposed, its observability properties are analyzed, and its performance is illustrated using both simulations and experimental data. Firstly, a novel calibration scheme is proposed to estimate the relative transformation between the inertial and visual sensors, which are rigidly mounted together. The main advantage of the developed method is that the calibration is performed using a planar mirror instead of a calibration pattern. The observability analysis for this system proves that the calibration parameters are observable, and the achieved results show subcentimeter and subdegree accuracy for the calibration parameters. Secondly, an ego-motion estimation approach is introduced that uses horizontal-plane features, with the camera restricted to be downward looking. The observability properties of this system are analyzed when only one feature point is used; in particular, it is proved that the system has only three unobservable directions, corresponding to global translations parallel to the horizontal plane and rotations around the gravity vector. Hence, compared to general visual-inertial navigation systems, an advantage of the proposed system is that the vertical translation becomes observable. Finally, a 6-DoF positioning system is developed based on using only planar features on a desired horizontal plane. Compared to the previous approach, the restriction to a downward-looking camera is relaxed while the observability properties of the system are preserved. The achieved results indicate promising accuracy and reliability of the proposed algorithm and validate the findings of the theoretical analysis. The proposed motion estimation approach is then extended with a new planar feature detection method, yielding a complete positioning approach that simultaneously performs 6-DoF motion estimation and horizontal-plane feature detection.
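    A minimal sketch of the multi-position accelerometer calibration mentioned in the first part, posed as maximum likelihood estimation (here, nonlinear least squares on the gravity-magnitude constraint). The diagonal scale model, the synthetic static measurements, and the noise level are simplifying assumptions, not the thesis's full sensor model.

```python
import numpy as np
from scipy.optimize import least_squares

G = 9.81
rng = np.random.default_rng(1)

# Synthetic static measurements in many orientations: meas = f/scale + bias + noise
true_scale, true_bias = np.array([1.02, 0.98, 1.05]), np.array([0.05, -0.03, 0.10])
dirs = rng.normal(size=(50, 3))
f = G * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)  # true specific force
meas = f / true_scale + true_bias + 0.005 * rng.normal(size=f.shape)

def residuals(theta):
    scale, bias = theta[:3], theta[3:]
    corrected = scale * (meas - bias)             # calibrated specific force
    return np.linalg.norm(corrected, axis=1) - G  # at rest, magnitude must be g

theta0 = np.concatenate([np.ones(3), np.zeros(3)])
est = least_squares(residuals, theta0).x
print("scale:", est[:3], "bias:", est[3:])
```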

    Vision-aided Inertial Navigation Using Planar Terrain Features

    No full text
    This paper implements a vision-aided inertial navigation system (INS) for estimating the ego-motion of an inertial measurement unit (IMU)-camera rig. The system consists of a ground-facing monocular camera, mounted on an IMU, that observes ground-plane feature points; motion is estimated by tracking corresponding feature points between successive image frames. The main contribution of this paper is a novel closed-form measurement model based on the image data and the IMU output signals. In contrast to existing methods, the algorithm is independent of the underlying vision algorithm, such as image motion estimation or optical flow, used for camera motion estimation. Additionally, unlike visual-SLAM based methods, the approach does not rely on data association. The algorithm is implemented using an extended Kalman filter (EKF), which propagates the current state together with the state updated at the previous measurement step. Simulation results show that the introduced method is robust to the noise level and works well even with a small number of features.
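    A generic EKF skeleton of the propagate/update cycle the abstract refers to. The state layout (IMU pose plus the previous camera pose) and the paper's closed-form measurement model are not reproduced here; the process and measurement functions are placeholders to be supplied by the caller.

```python
import numpy as np

class EKF:
    """Generic extended Kalman filter: state x with covariance P."""
    def __init__(self, x0, P0):
        self.x, self.P = x0, P0

    def propagate(self, f, F, Q):
        """Drive the state with the IMU-based process model f, Jacobian F, noise Q."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H, R):
        """Correct with a camera measurement z, model h, Jacobian H, noise R."""
        y = z - h(self.x)                    # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```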

    IMU-camera Self-Calibration Using Planar Mirror Reflection

    No full text
    In this paper, we first consider the problem of estimating the transformation between an inertial measurement unit (IMU) and a calibrated camera, based on images of planar mirror reflections (IPMR) of arbitrary feature points with unknown positions. Assuming that only the reflections of the feature points are observable by the camera, the IMU-camera calibration parameters and the positions of the feature points in the camera frame are estimated within the Sigma-Point Kalman filter framework. In the next step, we consider estimating time-varying camera intrinsic parameters using the static parameters estimated in the previous stage: the estimated parameters are used as initial values in the state-space model of the system, and the camera intrinsic parameters are estimated together with the rest of the parameters. The proposed method does not rely on a fixed calibration pattern whose feature point positions are known relative to the navigation frame. Additionally, the motion of the camera, which is mounted on the IMU, is not limited to being planar with respect to the mirror; instead, the reflections of the feature points with unknown positions in the camera body frame are tracked over time. Simulation results show subcentimeter and subdegree accuracy for the IMU-camera translation and rotation parameters, as well as submillimeter and subpixel accuracy for the feature point positions and camera intrinsic parameters, respectively.
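    A sketch of the sigma-point (unscented) transform at the core of the Sigma-Point Kalman filter named above. The state here is a generic vector passed through an arbitrary nonlinearity f, not the paper's full IMU-camera calibration state; the scaling parameters are the commonly used defaults.

```python
import numpy as np

def unscented_transform(x, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through nonlinearity f via sigma points."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)     # matrix square root of scaled P
    sigma = np.vstack([x, x + S.T, x - S.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))  # mean weights
    wc = wm.copy()                            # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])       # push each point through f
    mean = wm @ Y
    diff = Y - mean
    cov = (wc[:, None] * diff).T @ diff
    return mean, cov
```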