
    Stochastic constraints for vision-aided inertial navigation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005. Includes bibliographical references (p. 107-110).
    This thesis describes a new method to improve inertial navigation using feature-based constraints from one or more video cameras. The proposed method lengthens the period of time during which a human or vehicle can navigate in GPS-deprived environments. Our approach integrates well with existing navigation systems, because we invoke general sensor models that represent a wide range of available hardware. The inertial model includes errors in bias, scale, and random walk. Any camera and tracking algorithm may be used, as long as the visual output can be expressed as ray vectors extending from known locations on the sensor body. A modified linear Kalman filter performs the data fusion. Unlike traditional Simultaneous Localization and Mapping (SLAM/CML), our state vector contains only inertial sensor errors related to position. This choice allows uncertainty to be properly represented by a covariance matrix. We do not augment the state with feature coordinates. Instead, image data contributes stochastic epipolar constraints over a broad baseline in time and space, resulting in improved observability of the IMU error states. The constraints lead to a relative residual and associated relative covariance, defined partly by the state history. Navigation results are presented using high-quality synthetic data and real fisheye imagery.
    by David D. Diel. S.M.
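    As a rough illustration of the epipolar (coplanarity) constraint from which the abstract above builds its relative residual, a minimal sketch (all names and values are hypothetical; this is not the thesis's implementation):

```python
import numpy as np

def epipolar_residual(ray_a, ray_b, baseline):
    """Scalar coplanarity residual for two bearing rays to one feature.

    ray_a, ray_b -- unit 3-vectors to the same feature, expressed in a
                    common frame, observed from two sensor poses
    baseline     -- 3-vector from the first camera centre to the second
    The residual ray_a . (baseline x ray_b) vanishes when the two rays
    and the baseline are coplanar, i.e. when both observations of a
    static feature are geometrically consistent.
    """
    return float(np.dot(ray_a, np.cross(baseline, ray_b)))

# A static world point seen from two camera centres gives a zero residual.
p  = np.array([3.0, 1.0, 10.0])   # hypothetical world point
c0 = np.zeros(3)                  # first camera centre
c1 = np.array([1.0, 0.0, 0.0])    # second camera centre
r0 = (p - c0) / np.linalg.norm(p - c0)
r1 = (p - c1) / np.linalg.norm(p - c1)
assert abs(epipolar_residual(r0, r1, c1 - c0)) < 1e-12
```

    In the thesis, such scalar residuals, stacked over many features and paired with a relative covariance derived from the state history, would drive the measurement update of the modified Kalman filter.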

    Optical-Flow Based Detection of Moving Objects in Traffic Scenes

    Traffic volume is increasing continuously; nevertheless, the number of traffic fatalities has decreased in recent decades. One reason for this is passive safety systems, such as side-impact protection or airbags, which have been engineered over the last decades and are standard in today's cars. Active safety systems, which can avoid or at least mitigate accidents, are increasingly being developed. For example, adaptive cruise control (ACC), originally designed as a comfort system, is being developed into an emergency braking system. Active safety requires sensors that perceive the vehicle's environment. ACC uses radar or laser scanners; however, cameras are also interesting sensors because they can process visual information such as traffic signs or lane markings. Moving objects in traffic (cars, bicyclists, pedestrians) play an important role, and perceiving them is essential for active safety systems. This thesis deals with the detection of moving objects using a monocular camera. The detection is based on the motion within the video stream (optical flow). If the ego-motion and the location of the camera with respect to the road plane are known, the viewed scene can be reconstructed in 3D from the measured optical flow. This thesis gives an overview of existing ego-motion estimation algorithms; based on it, a suitable algorithm is selected and extended with a motion model, which considerably increases both the accuracy and the robustness of the estimate. The location of the camera with respect to the road plane is estimated using the optical flow on the road. The road may be temporarily low-textured, making it hard to measure the optical flow; consequently, the road homography estimate can be poor. A novel Kalman filtering approach that combines the ego-motion estimate and the road homography estimate leads to far better results. The 3D reconstruction of the viewed scene is performed pointwise for each measured optical flow vector.
    A point is reconstructed by intersecting the viewing rays determined by its optical flow vector. This yields a correct result only for static (non-moving) points. Furthermore, static points fulfill four constraints: the epipolar constraint, the trifocal constraint, the positive depth constraint, and the positive height constraint. If at least one constraint is violated, the point is moving. For the first time, an error metric is developed that exploits all four constraints, measuring the deviation from the constraints quantitatively in a unified manner. Based on this error metric, the detection limits are investigated. It is shown that overtaking objects are detected very well, whereas objects being overtaken are hardly detected. Oncoming objects on a straight road cannot be detected by means of the available constraints alone; detection becomes feasible only if one assumes that these objects are opaque and touch the ground, and an appropriate heuristic is introduced for this. In conclusion, the developed algorithms form a system that detects moving points robustly. The problem of clustering the detected moving points into objects is outlined; it serves as a starting point for further research.
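    Two of the four constraints (epipolar and positive depth) are easy to sketch for a single tracked point. The following is an illustrative simplification, not the thesis's unified error metric, and all names are hypothetical:

```python
import numpy as np

def triangulate(c0, r0, c1, r1):
    """Least-squares depths (s, t) such that c0 + s*r0 ~= c1 + t*r1."""
    A = np.stack([r0, -r1], axis=1)                 # 3x2 system in (s, t)
    (s, t), *_ = np.linalg.lstsq(A, c1 - c0, rcond=None)
    return s, t

def is_moving(c0, r0, c1, r1, epi_tol=1e-6):
    """Flag a tracked point as moving if its two observations violate
    the epipolar constraint or triangulate to a non-positive depth."""
    epipolar = abs(np.dot(r0, np.cross(c1 - c0, r1)))
    s, t = triangulate(c0, r0, c1, r1)
    return epipolar > epi_tol or s <= 0.0 or t <= 0.0

# A static point satisfies both checks.
c0 = np.zeros(3)
c1 = np.array([1.0, 0.0, 0.0])
p  = np.array([0.0, 5.0, 10.0])                     # hypothetical static point
r0 = p / np.linalg.norm(p)
r1 = (p - c1) / np.linalg.norm(p - c1)
assert not is_moving(c0, r0, c1, r1)
```

    A point that moved between the two frames violates at least one of the checks; the thesis additionally brings in the trifocal and positive-height constraints and fuses all four deviations into one quantitative error metric.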

    Fusion of visual and inertial sensor data for estimating the pose of a rigid body

    This thesis addresses the problem of estimating the pose of a rigid body moving in 3D space by fusing data from inertial and visual sensors. The inertial measurements are provided by an I.M.U. (Inertial Measurement Unit) composed of accelerometers and gyroscopes. The visual data come from a camera mounted on the moving object, which provides images of the perceived visual field; the implicitly measured directions of lines that are fixed in the scene, obtained from their projections onto the image plane, are used for attitude estimation. The first step addresses the problem of obtaining the visual measurements over a long sequence using image features: a line-tracking algorithm is proposed, based on the optical flow of points extracted from the lines and on a matching approach that minimizes the Euclidean distance. Next, an observer on SO(3) is proposed to estimate the relative orientation of the object in the 3D scene by fusing the output of the line-tracking algorithm with gyroscope data. The observer gain is designed using a Kalman filter of the M.E.K.F. (Multiplicative Extended Kalman Filter) type; the sign ambiguity in the measured line directions is taken into account in the observer design. Finally, the estimation of the relative position and absolute velocity of the rigid body in the 3D scene is treated. Two observers are proposed: the first is a cascaded observer that decouples attitude estimation from position estimation, in which the attitude estimate feeds a nonlinear observer using accelerometer measurements to provide estimates of the relative position and absolute velocity of the rigid body.
    The second observer is designed directly on SE(3) and simultaneously estimates the position and orientation of the rigid body in the 3D scene by fusing inertial data (accelerometers, gyroscopes) and visual data with an M.E.K.F.-type Kalman filter. The performance of the proposed methods is illustrated and validated by various simulation results.
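    The sign-ambiguity handling and the multiplicative (error-state) flavour of the attitude observer can be sketched as follows. This is a hypothetical first-order illustration on SO(3), not the thesis's M.E.K.F.; all names and the gain are made up:

```python
import numpy as np

def skew(w):
    """Matrix form of the cross product: skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def attitude_update(R, d_world, d_meas, gain=0.5):
    """One multiplicative correction of a body-to-world rotation R from a
    sign-ambiguous line-direction measurement (first-order sketch)."""
    d_pred = R.T @ d_world                 # predicted direction in body frame
    if np.dot(d_meas, d_pred) < 0.0:       # resolve the +/- sign ambiguity
        d_meas = -d_meas
    axis = np.cross(d_pred, d_meas)        # small rotation taking pred -> meas
    R_new = R @ (np.eye(3) - skew(gain * axis))
    U, _, Vt = np.linalg.svd(R_new)        # re-project onto SO(3)
    return U @ Vt
```

    Feeding the measurement with either sign produces the same correction, which is the point of resolving the ambiguity before forming the error; a full observer would also propagate R with the gyroscope rates and compute the gain from an M.E.K.F. covariance rather than using a fixed scalar.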