    Mobile robot localization using a Kalman filter and relative bearing measurements to known landmarks

    This paper discusses mobile robot localization using a single, fixed camera that is capable of detecting predefined landmarks in the environment. For each visible landmark, the camera provides a relative bearing but not a relative range. This research represents work toward an inexpensive sensor that could be added to a mobile robot in order to provide more accurate estimates of the robot's location. It uses the Kalman filter as a framework, a proven method for incorporating sensor data into navigation problems. In the simulations presented later, it is assumed that the filter can perform accurate feature recognition. In the experimental setup, however, a webcam and an open-source library are used to recognize and track bearings to a set of unique markers. Although this research requires that the landmark locations be known, in contrast to research in simultaneous localization and mapping, the results are still useful in industrial settings where placing known landmarks would be acceptable.
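    As a sketch of the bearing-only Kalman correction the abstract describes, the following single EKF update fuses one relative bearing to a known landmark into a planar pose estimate. The function name, state layout (x, y, heading), and noise value are illustrative assumptions, not the paper's implementation.

```python
import math

def bearing_update(x, y, theta, P, lm, z, R=0.02):
    """One EKF update from a relative bearing z (rad) to a known
    landmark lm = (lx, ly).  State is (x, y, theta); P is its 3x3
    covariance as a list of lists.  Returns the corrected state and P."""
    dx, dy = lm[0] - x, lm[1] - y
    q = dx * dx + dy * dy
    z_hat = math.atan2(dy, dx) - theta          # predicted bearing
    # Jacobian of the bearing w.r.t. (x, y, theta)
    H = [dy / q, -dx / q, -1.0]
    # innovation covariance S = H P H^T + R (scalar for one bearing)
    PHt = [sum(P[i][j] * H[j] for j in range(3)) for i in range(3)]
    S = sum(H[i] * PHt[i] for i in range(3)) + R
    K = [PHt[i] / S for i in range(3)]          # Kalman gain
    # wrap the innovation to (-pi, pi] before applying the correction
    nu = math.atan2(math.sin(z - z_hat), math.cos(z - z_hat))
    x, y, theta = x + K[0] * nu, y + K[1] * nu, theta + K[2] * nu
    # covariance update P = (I - K H) P, exploiting symmetry of P
    P = [[P[i][j] - K[i] * PHt[j] for j in range(3)] for i in range(3)]
    return (x, y, theta), P
```

Because the measurement carries no range, one update constrains the pose only along the bearing direction; repeated updates against multiple landmarks are what make the estimate converge.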

    Localization of autonomous ground vehicles in dense urban environments

    The localization of autonomous ground vehicles in dense urban environments poses a challenge. Applications in classical outdoor robotics rely on the availability of GPS in order to estimate position. However, the presence of complex building structures in dense urban environments hampers reliable GPS-based localization, so alternative approaches have to be applied in order to tackle this problem. This thesis proposes an approach which combines observations of a single perspective camera and odometry in a probabilistic framework. In particular, localization in the space of appearance is addressed. First, a topological map of reference places in the environment is built. Each reference place is associated with a set of visual features. A feature selection is carried out in order to obtain distinctive reference places. The topological map is extended to a hybrid representation by the use of metric information from Geographic Information Systems (GIS) and satellite images. The localization is solved in terms of the recognition of reference places. A particle filter implementation incorporating this and the vehicle's odometry is presented. The proposed system is evaluated in multiple experiments in exemplary urban environments characterized by tall building structures and a multitude of dynamic objects.
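    The place-recognition-plus-odometry fusion described above can be sketched as one particle-filter step. The 1-D route parameterization, the noise levels, and the Gaussian recognition likelihood below are toy assumptions for illustration, not the thesis implementation.

```python
import math
import random

def pf_step(particles, odom, matched_place, sigma=1.0):
    """Shift each particle by the odometry increment (with noise), weight it
    by its proximity to the recognized reference place, and resample."""
    moved = [p + odom + random.gauss(0.0, 0.3) for p in particles]
    # Gaussian likelihood of the place-recognition result for each particle
    weights = [math.exp(-((p - matched_place) ** 2) / (2 * sigma ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # systematic resampling keeps the particle count constant
    n = len(moved)
    step, u = 1.0 / n, random.random() / n
    out, c, i = [], weights[0], 0
    for k in range(n):
        while i < n - 1 and u + k * step > c:
            i += 1
            c += weights[i]
        out.append(moved[i])
    return out
```

After a few steps, particles concentrate around the recognized reference place while odometry carries them between recognitions.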

    Underwater vehicle localization using range measurements

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 79-83). This thesis investigates the problem of cooperative navigation of autonomous marine vehicles using range-only acoustic measurements. We consider the use of a single maneuvering autonomous surface vehicle (ASV) to aid the navigation of one or more submerged autonomous underwater vehicles (AUVs), using acoustic range measurements combined with position measurements for the ASV when data packets are transmitted. The AUV combines the data from the surface vehicle with its proprioceptive sensor measurements to compute its trajectory. We present an experimental demonstration of this approach, using an extended Kalman filter (EKF) for state estimation. We analyze the observability properties of the cooperative ASV/AUV localization problem and present experimental results comparing several different state estimators. Using the weak observability theorem for nonlinear systems, we demonstrate that this cooperative localization problem is best attacked using nonlinear least squares (NLS) optimization. We investigate the convergence of NLS applied to the cooperative ASV/AUV localization problem. Though we show that the localization problem is non-convex, we propose an algorithm that, under certain assumptions (the accumulated dead-reckoning variance is much larger than the variance of the range measurements, and the range measurement errors are bounded), achieves convergence by choosing initial conditions that lie in convex areas. We present experimental results for this approach and compare it to alternative state estimators, demonstrating superior performance. by Georgios Papadopoulos. S.M.
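    The NLS formulation the thesis argues for can be sketched as a small Gauss-Newton solver recovering a 2-D position from range measurements to known beacon positions (standing in for the ASV's broadcast positions). The function name and data are illustrative assumptions, not the thesis code.

```python
import math

def nls_fix(beacons, ranges, x, y, iters=20):
    """Gauss-Newton minimization of sum_i (||p - b_i|| - r_i)^2 over p = (x, y),
    starting from an initial guess assumed to lie in a convex region."""
    for _ in range(iters):
        # accumulate J^T J and J^T r; the Jacobian row of ||p - b|| is (p - b)/||p - b||
        JTJ = [[0.0, 0.0], [0.0, 0.0]]
        JTr = [0.0, 0.0]
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by) or 1e-9
            res = d - r
            jx, jy = (x - bx) / d, (y - by) / d
            JTJ[0][0] += jx * jx; JTJ[0][1] += jx * jy
            JTJ[1][0] += jy * jx; JTJ[1][1] += jy * jy
            JTr[0] += jx * res;   JTr[1] += jy * res
        # solve the 2x2 normal equations JTJ * delta = -JTr
        det = JTJ[0][0] * JTJ[1][1] - JTJ[0][1] * JTJ[1][0]
        if abs(det) < 1e-12:
            break
        dx = (-JTr[0] * JTJ[1][1] + JTr[1] * JTJ[0][1]) / det
        dy = (-JTr[1] * JTJ[0][0] + JTr[0] * JTJ[1][0]) / det
        x, y = x + dx, y + dy
    return x, y
```

This illustrates why the initial guess matters: the cost is non-convex, and Gauss-Newton converges to the true fix only from initial conditions in a convex basin, which is exactly the condition the thesis engineers via dead reckoning.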

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide the focused measurement capability over a wide field of view which allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
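    One minimal reading of uncertainty-based measurement selection is to point the active head at the feature whose predicted innovation covariance is largest, since measuring where the filter is least certain removes the most uncertainty. The scalar-covariance simplification below is an assumption for illustration, not the paper's exact criterion.

```python
def select_feature(innovation_covs):
    """Return the index of the visible feature with the largest predicted
    (scalar) innovation covariance; ties go to the lowest index."""
    best, best_s = 0, innovation_covs[0]
    for i, s in enumerate(innovation_covs):
        if s > best_s:
            best, best_s = i, s
    return best
```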

    Non-Parametric Learning for Monocular Visual Odometry

    This thesis addresses the problem of incremental localization from visual information, a scenario commonly known as visual odometry. Current visual odometry algorithms are heavily dependent on camera calibration, using a pre-established geometric model to provide the transformation between input (optical flow estimates) and output (vehicle motion estimates) information. A novel approach to visual odometry is proposed in this thesis where the need for camera calibration, or even for a geometric model, is circumvented by the use of machine learning principles and techniques. A non-parametric Bayesian regression technique, the Gaussian Process (GP), is used to elect the most probable transformation function hypothesis from input to output, based on training data collected prior to and during navigation. Other than eliminating the need for a geometric model and traditional camera calibration, this approach also allows for scale recovery even in a monocular configuration, and provides a natural treatment of uncertainties due to the probabilistic nature of GPs. Several extensions to the traditional GP framework are introduced and discussed in depth, and they constitute the core of the contributions of this thesis to the machine learning and robotics community. The proposed framework is tested in a wide variety of scenarios, ranging from urban and off-road ground vehicles to unconstrained 3D unmanned aircraft. The results show a significant improvement over traditional visual odometry algorithms, and also surpass results obtained using other sensors, such as laser scanners and IMUs. The incorporation of these results into a SLAM scenario, using an Exact Sparse Information Filter (ESIF), is shown to decrease global uncertainty by exploiting revisited areas of the environment. Finally, a technique for the automatic segmentation of dynamic objects is presented as a way to increase the robustness of image information and further improve visual odometry results.

    Development and evaluation of low-cost 2-D LiDAR based traffic data collection methods

    Traffic data collection is one of the essential components of a transportation planning exercise. Granular traffic data, such as volume counts, vehicle classification, speed measurements, and occupancy, allow transportation systems to be managed more effectively. For effective traffic operation and management, authorities need to deploy many sensors across the network. Moreover, growing efforts to achieve smart transportation put immense pressure on planning authorities to deploy more sensors to cover an extensive network. This research focuses on the development and evaluation of an inexpensive data collection methodology using two-dimensional (2-D) Light Detection and Ranging (LiDAR) technology. LiDAR is adopted since it is an economical and easily accessible technology. Moreover, its 360-degree visibility and accurate distance information make it more reliable. To collect traffic count data, the proposed method integrates a Continuous Wavelet Transform (CWT) and a Support Vector Machine (SVM) into a single framework. A Proof-of-Concept (POC) test is conducted in three different places in Newark, New Jersey to examine the performance of the proposed method. The POC test results demonstrate that the proposed method achieves acceptable performance, with 83% to 94% accuracy. It is discovered that the proposed method's accuracy is affected by the color of the exterior surface of a vehicle, since some colored surfaces do not produce enough reflected rays. It is noticed that blue and black surfaces are less reflective, while white surfaces produce highly reflective returns. A methodology is proposed that comprises K-means clustering, an inverse sensor model, and a Kalman filter to obtain the trajectories of vehicles at intersections. The primary purpose of vehicle detection and tracking is to obtain turning movement counts at an intersection. K-means clustering is an unsupervised machine learning technique that partitions data into groups by minimizing the distance of each data point from its cluster centroid; here it is applied to distinguish pedestrians from vehicles. An inverse sensor model is a state model of occupancy grid mapping that localizes the detected vehicles on the grid map. A constant-velocity-model-based Kalman filter is defined to track the trajectories of the vehicles. The data are collected from two intersections located in Newark, New Jersey, to study the accuracy of the proposed method. The results show that the proposed method has an average accuracy of 83.75%. Furthermore, the obtained R-squared value for localization of the vehicles on the grid map ranges between 0.87 and 0.89. Finally, a preliminary cost comparison is made to study the cost efficiency of the developed methodology; it shows that the proposed 2-D LiDAR based methodology can achieve acceptable accuracy at low cost and can support the large-scale data collection envisioned in smart city concepts.
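    The constant-velocity Kalman tracker described above can be sketched for a single axis, with LiDAR-derived positions as measurements. The state layout and noise values below are illustrative assumptions, not the study's tuning.

```python
def cv_kalman(zs, dt=0.1, q=0.5, r=0.25):
    """Track a vehicle from noisy position measurements zs using a
    constant-velocity model.  State is (position, velocity); returns the
    list of filtered (position, velocity) estimates after each update."""
    x, v = zs[0], 0.0                       # state
    P = [[1.0, 0.0], [0.0, 1.0]]            # state covariance
    out = []
    for z in zs[1:]:
        # predict: x' = x + v*dt under the constant-velocity model
        x = x + v * dt
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + q * dt
        # update with the position measurement z (H = [1, 0])
        S = P00 + r                         # innovation covariance
        Kx, Kv = P00 / S, P10 / S           # Kalman gains
        nu = z - x                          # innovation
        x, v = x + Kx * nu, v + Kv * nu
        P = [[(1 - Kx) * P00, (1 - Kx) * P01],
             [P10 - Kv * P00, P11 - Kv * P01]]
        out.append((x, v))
    return out
```

The velocity state is never measured directly; it is inferred from successive position fixes, which is what lets the tracker link detections across LiDAR scans into a trajectory.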

    Cooperative Vehicle Perception and Localization Using Infrastructure-based Sensor Nodes

    Reliable and accurate Perception and Localization (PL) are necessary for safe intelligent transportation systems. Current vehicle-based PL techniques in autonomous vehicles are vulnerable to occlusion and clutter, especially in busy urban driving, causing safety concerns. In order to avoid such safety issues, researchers study infrastructure-based PL techniques to augment vehicle sensory systems. Infrastructure-based PL methods rely on sensor nodes, each of which may include camera(s), Lidar(s), radar(s), and computation and communication units for processing and transmitting the data. Vehicle-to-Infrastructure (V2I) communication is used to access the sensor node's processed data, which is fused with the onboard sensor data. In infrastructure-based PL, signal-based techniques, in which sensors like Lidar are used, can provide accurate positioning information, while vision-based techniques can be used for classification. Therefore, in order to take advantage of both approaches, cameras are used cooperatively with Lidar in the infrastructure sensor node (ISN) in this thesis. ISNs have a wider field of view (FOV) and are less likely to suffer from occlusion. In addition, they can provide more accurate measurements since they are fixed at a known location. As such, the fusion of both onboard and ISN data has the potential to improve the overall PL accuracy and reliability. This thesis presents a framework for cooperative PL in autonomous vehicles (AVs) by fusing ISN data with onboard sensor data. The ISN includes cameras and Lidar sensors, and the proposed camera-Lidar fusion method combines the sensor node information with vehicle motion models and kinematic constraints to improve the performance of PL. One of the main goals of this thesis is to develop a wind-induced motion compensation module to address the problem of time-varying extrinsic parameters of the ISNs. The proposed module compensates for the effect of the motion of ISN posts due to wind or other external disturbances. To address this issue, an unknown input observer is developed that uses the motion model of the light post as well as the sensor data. The outputs of the ISN, the positions of all objects in the FOV, are then broadcast so that autonomous vehicles can access the information via V2I connectivity and fuse it with their onboard sensory data through the proposed cooperative PL framework. In the developed framework, a KCF is implemented as a distributed fusion method to fuse ISN data with onboard data. The introduced cooperative PL incorporates the range-dependent accuracy of the ISN measurements into the fusion to improve the overall PL accuracy and reliability in different scenarios. The results show that using ISN data in addition to onboard sensor data improves the performance and reliability of PL in different scenarios, specifically in occlusion cases.
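    The range-dependent fusion idea can be sketched as an inverse-variance weighted combination of a scalar onboard estimate and an ISN measurement whose variance grows with distance to the node. The variance model and parameters below are illustrative assumptions, not the thesis's KCF implementation.

```python
def fuse(onboard, var_onboard, isn_meas, range_to_node, a=0.01, base=0.04):
    """Inverse-variance fusion of a scalar onboard position estimate with an
    ISN measurement; the ISN variance is modeled as base + a * range^2, so
    distant measurements carry less weight.  Returns (fused, fused_var)."""
    var_isn = base + a * range_to_node ** 2      # range-dependent ISN accuracy
    w_on, w_isn = 1.0 / var_onboard, 1.0 / var_isn
    fused = (onboard * w_on + isn_meas * w_isn) / (w_on + w_isn)
    fused_var = 1.0 / (w_on + w_isn)
    return fused, fused_var
```

Near the node the ISN measurement dominates; far away the fused estimate stays close to the onboard one, and the fused variance is always below either input's variance, which is the mechanism behind the reliability gains claimed above.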