15 research outputs found

    Leveraging Structure from Motion to Localize Inaccessible Bus Stops

    Full text link
    The detection of hazardous conditions near public transit stations is necessary for ensuring the safety and accessibility of public transit. Smart city infrastructures aim to facilitate this task among many others through the use of computer vision. However, most state-of-the-art computer vision models require thousands of images in order to perform accurate detection, and there exist few images of hazardous conditions as they are generally rare. In this paper, we examine the detection of snow-covered sidewalks along bus routes. Previous work has focused on detecting other vehicles in heavy snowfall or simply detecting the presence of snow. However, our application has an added complication of determining if the snow covers areas of importance and can cause falls or other accidents (e.g. snow covering a sidewalk) or simply covers some background area (e.g. snow on a neighboring field). This problem involves localizing the positions of the areas of importance when they are not necessarily visible. We introduce a method that utilizes Structure from Motion (SfM) rather than additional annotated data to address this issue. Specifically, our method learns the locations of sidewalks in a given scene by applying a segmentation model and SfM to images from bus cameras during clear weather. Then, we use the learned locations to detect if and where the sidewalks become obscured with snow. After evaluating across various threshold parameters, we identify an optimal range at which our method consistently classifies different categories of sidewalk images correctly. Although we demonstrate an application for snow coverage along bus routes, this method can extend to other hazardous conditions as well. Code for this project is available at https://github.com/ind1010/SfM_for_BusEdge
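The classification step described in the abstract — checking whether detected snow overlaps the sidewalk locations learned from clear-weather SfM — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, mask representation, and the 0.4 threshold are assumptions standing in for the tuned threshold range the paper reports.

```python
import numpy as np

def snow_on_sidewalk(snow_mask, sidewalk_mask, cover_thresh=0.4):
    """Classify whether learned sidewalk pixels are obscured by snow.

    snow_mask, sidewalk_mask: boolean HxW arrays. sidewalk_mask holds the
    sidewalk locations learned from clear-weather imagery via segmentation
    and SfM; snow_mask is the snow segmentation of the current frame.
    Returns (is_hazard, fraction_of_sidewalk_covered).
    """
    sidewalk_px = sidewalk_mask.sum()
    if sidewalk_px == 0:
        return False, 0.0
    coverage = np.logical_and(snow_mask, sidewalk_mask).sum() / sidewalk_px
    return coverage >= cover_thresh, float(coverage)
```

Snow falling on a neighboring field leaves `coverage` near zero, so only overlap with the learned sidewalk region triggers a hazard.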

    Path planning with pose SLAM

    Get PDF
    The probabilistic belief networks that result from standard feature-based simultaneous localization and map building (SLAM) approaches cannot be directly used to plan trajectories. The reason is that they produce a sparse graph of landmark estimates and their probabilistic relations, which is of little value for finding collision-free paths for navigation. In contrast, we argue in this paper that Pose SLAM graphs can be directly used as belief roadmaps (BRMs). The original BRM algorithm assumes a known model of the environment from which probabilistic sampling generates a roadmap. In our work, the roadmap is built on-line by the Pose SLAM algorithm. The result is a hybrid BRM-Pose SLAM method that devises optimal navigation strategies on-line by searching for the path with the lowest accumulated uncertainty for the robot pose. The method is validated over synthetic data and standard SLAM datasets.
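Searching a pose graph for the path with the lowest accumulated uncertainty reduces to a shortest-path problem once each edge carries a scalar uncertainty cost (e.g. the entropy of the relative-pose covariance). The sketch below uses plain Dijkstra over a toy adjacency-list graph; the graph structure and costs are illustrative, not the paper's formulation.

```python
import heapq

def min_uncertainty_path(graph, start, goal):
    """Dijkstra over a pose graph whose edges carry scalar uncertainty
    costs. graph: {node: [(neighbor, cost), ...]}. Returns (path, cost)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

With uncertainty as the edge weight, the returned path is the one along which the robot is least likely to become lost, even if it is longer in metric distance.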

    Magnopark, Smart Parking Detection Based on Cellphone Magnetic Sensor

    Get PDF
    We introduce a solution that leverages dense crowds and their smart devices to determine where parking spaces may be available. By using raw magnetometer, gyroscope, and accelerometer data, we are able to detect parking spots through the natural movement of pedestrians walking on the sidewalks beside the streets. As far back as 2013, a very large portion of the pedestrians composing sidewalk crowds carried at least one smart device in their hand or pocket [14]. It is this statistic that fuels our application, which depends on crowds, or even a steady rate of pedestrians, telling others around them where unoccupied parking spots are without making a single bit of noise. In other words, we use walking pedestrians' cellphone sensors to classify sidewalk parking spots as occupied or vacant. The more pedestrians walking on the sidewalk, the more accurately our application works. As technology continues to advance, we predict that the number of smart devices will only increase, allowing our solution to become more precise and useful. The main contributions of our study can be summarized as follows:
    • Implementation of Magnopark, a high-accuracy parking spot localization system using internal smartphone sensors
    • Evaluation and testing of Magnopark in different situations and places
    • Testing of Magnopark with different users having different walking habits and speeds
    • Development of an algorithm to detect users' stride, speed, and direction changes
    • Building a classification model based on features extracted from the cellphone sensors
    • Pushing the classified data to the cloud for the drivers' use
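The core signal behind this idea is that a parked car's ferrous mass perturbs the Earth's magnetic field, so the magnetometer magnitude deviates from the local baseline as a pedestrian walks past an occupied spot. The sketch below is a deliberately simplified stand-in for the paper's full feature-based classifier; the function name, the baseline handling, and the 15 µT threshold are all assumptions.

```python
import numpy as np

def classify_spot(mag_samples, baseline, disturb_thresh=15.0):
    """Toy occupied/vacant classifier in the spirit of Magnopark.

    mag_samples: Nx3 magnetometer readings (microtesla) collected while
    a pedestrian walks past one parking spot. baseline: the local field
    magnitude measured away from any car. A large deviation suggests a
    parked vehicle is perturbing the field.
    """
    mags = np.linalg.norm(mag_samples, axis=1)      # field magnitude per sample
    deviation = np.max(np.abs(mags - baseline))     # worst-case excursion
    return "occupied" if deviation > disturb_thresh else "vacant"
```

The actual system additionally estimates stride, speed, and heading from the gyroscope and accelerometer so each disturbance can be attributed to a specific spot; that localization step is omitted here.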

    Model Predictive Control System Design of a passenger car for Valet Parking Scenario

    Get PDF
    A recent expansion of passenger cars' automated functions has led to increasingly challenging design problems for engineers. Among these, the development of Automated Valet Parking is the latest addition. The system represents the next evolution of automated systems, giving the vehicle greater autonomy: the efforts of most automotive OEMs go towards achieving market deployment of such automated functions. To this end, each OEM focuses on taking part in this competitive endeavor and succeeding by developing a proprietary solution with the support of hardware and software suppliers. Within this framework, the present work aims at developing an effective control strategy for the considered scenarios. To reach this goal, a Model Predictive Control approach is employed, taking advantage of previous work within the automotive OEM in the automated driving field. The control algorithm is developed and tested in a Simulink® simulation according to the requirements of the application; results show the control strategy successfully drives the vehicle along the predefined path.
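The receding-horizon idea behind MPC path tracking can be illustrated with a crude sketch: roll out a kinematic bicycle model over a short horizon for each candidate control, score each rollout against the reference path, and apply the best control's first step. This is a sampling-based stand-in for the thesis's Simulink/QP implementation; the model parameters, horizon, and candidate set are assumptions.

```python
import numpy as np

def step(state, v, steer, L=2.7, dt=0.1):
    """Kinematic bicycle model; state = (x, y, heading)."""
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + v * np.tan(steer) / L * dt])

def mpc_steer(state, path, v=1.0, horizon=10,
              candidates=np.linspace(-0.5, 0.5, 21)):
    """Pick the constant steering angle whose rolled-out trajectory stays
    closest to the reference path (sum of squared distances to the
    nearest path point over the horizon)."""
    best, best_cost = 0.0, float("inf")
    for s in candidates:
        sim, cost = state.copy(), 0.0
        for _ in range(horizon):
            sim = step(sim, v, s)
            cost += np.min(np.sum((path - sim[:2]) ** 2, axis=1))
        if cost < best_cost:
            best, best_cost = s, cost
    return best
```

A real MPC would solve a constrained quadratic program over the full control sequence at every step; the enumeration above only conveys the predict-score-select loop.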

    Localisation and navigation in GPS-denied environments using RFID tags

    Get PDF
    This dissertation addresses the autonomous localisation and navigation problem in the context of an underground mining environment. This kind of environment has few or no features and no access to GPS or stationary towers, which are usually used for navigation. In addition, dust and debris may hinder optical methods for ranging. This study examines the feasibility of using randomly distributed RFID tags to navigate autonomously in this environment. Clustering of observed tags is used for localisation; subsequently, value iteration is used to navigate to a defined goal. Results are presented, concluding that it is feasible to localise and navigate using only RFID tags in simulation. Localisation feasibility is also confirmed by experimental measurements.
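The value iteration step mentioned above is a standard dynamic-programming procedure: each free cell's value is updated from its best neighbor until the values converge, after which the robot can greedily descend toward the goal. The sketch below runs it on a 4-connected occupancy grid; the reward structure (unit step cost, discount 0.95) is illustrative, not taken from the dissertation.

```python
import numpy as np

def value_iteration(grid, goal, gamma=0.95, iters=200):
    """Value iteration over a 4-connected occupancy grid (True = obstacle).

    Returns a value map; the goal has value 0 and every other reachable
    cell's value reflects its discounted step cost to the goal.
    """
    H, W = grid.shape
    V = np.zeros((H, W))
    for _ in range(iters):
        newV = np.full((H, W), -np.inf)
        for r in range(H):
            for c in range(W):
                if grid[r, c]:                 # obstacle: unreachable
                    continue
                if (r, c) == goal:
                    newV[r, c] = 0.0
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < H and 0 <= nc < W and not grid[nr, nc]:
                        newV[r, c] = max(newV[r, c], -1.0 + gamma * V[nr, nc])
        V = newV
    return V
```

Navigation then amounts to repeatedly moving to the neighboring cell with the highest value until the goal is reached.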

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    Get PDF
    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control to meet some key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with four to five cm accuracy can be achieved based on this pre-generated map and online Lidar scan matching with a tightly fused inertial system.
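At the heart of the Lidar scan matching mentioned above is rigid alignment: given matched point sets from two scans, recover the rotation and translation between them. The classic closed-form solution (Kabsch, via SVD) is shown below as a minimal sketch of one alignment step; a full ICP pipeline would iterate this with nearest-neighbor matching, which is omitted here.

```python
import numpy as np

def rigid_align(P, Q):
    """Find R, t minimizing ||R P + t - Q||^2 for matched point sets
    P, Q (N x 2 or N x 3), via the SVD (Kabsch) solution.
    This is the core least-squares step inside ICP scan matching."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    D = np.eye(P.shape[1]); D[-1, -1] = d
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

In a tightly fused system, the GNSS/INS prediction seeds the initial alignment so that scan matching converges quickly and the result can be fed back into the inertial filter.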

    Direct Visual-Inertial Odometry using Epipolar Constraints for Land Vehicles

    Get PDF
    Autonomously operating vehicles are being developed to take over human supervision in applications such as search and rescue, surveillance, exploration and scientific data collection. For a vehicle to operate autonomously, it is important for it to predict its location with respect to its surroundings in order to make decisions about its next movement. Simultaneous Localization and Mapping (SLAM) is a technique that utilizes information from multiple sensors to not only estimate the vehicle's location but also simultaneously build a map of the environment. Substantial research efforts are being devoted to making pose predictions using fewer sensors. Currently, laser scanners, which are expensive, have been used as a primary sensor for environment perception as they measure obstacle distance with good accuracy and generate a point-cloud map of the surroundings. Recently, researchers have used the method of triangulation to generate similar point-cloud maps using only cameras, which are relatively inexpensive. However, point-clouds generated from cameras have an unobservable scale factor. To get an estimate of scale, measurements from an additional sensor such as another camera (stereo configuration), laser scanners, wheel encoders, GPS or IMU can be used. Wheel encoders are known to suffer from inaccuracies and drift, using laser scanners is not cost effective, and GPS measurements come with high uncertainty. Therefore, stereo-camera and camera-IMU methods have been topics of constant development for the last decade. A stereo-camera pair is typically used with a graphics processing unit (GPU) to generate a dense environment reconstruction. The scale is estimated from the pre-calculated baseline (distance between camera centers). However, when the environment features are far away, the baseline becomes too small to be used effectively for triangulation and the stereo configuration reduces to monocular. 
Moreover, when the environment is texture-less, information from visual measurements alone cannot be used. An IMU provides metric measurements but suffers from significant drift. Hence, in a camera-IMU configuration, the IMU is typically used only for short durations, i.e. in between two camera frames. This is desirable as it not only helps to estimate the global scale, but also gives a pose estimate during temporary camera failure. For these reasons, a camera-IMU configuration is being increasingly used in applications such as Unmanned Aerial Vehicles (UAVs) and Augmented/Virtual Reality (AR/VR). This thesis presents a novel method for visual-inertial odometry for land vehicles which is robust to the unintended but unavoidable bumps encountered when an off-road land vehicle traverses potholes, speed bumps or general changes in terrain. In contrast to tightly-coupled methods for visual-inertial odometry, the joint visual and inertial residuals are split into two separate steps and the inertial optimization is performed after the direct visual alignment step. All visual and geometric information encoded in a key-frame is utilized by including the inverse-depth variances in the optimization objective, making this method a direct approach. The primary contribution of this work is the use of epipolar constraints, computed from a direct image alignment, to correct the pose prediction obtained by integrating IMU measurements, while simultaneously building a semi-dense map of the environment in real-time. Through experiments, both indoor and outdoor, it is shown that the proposed method is robust to sudden spikes in inertial measurements while achieving better accuracy than the state-of-the-art direct, tightly-coupled visual-inertial fusion method. In the future, the proposed method can be augmented with loop closure and re-localization to enhance the pose prediction accuracy. 
Further, semantic segmentation of point-clouds can be useful for applications such as object labeling and generating obstacle-free paths.
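The epipolar constraint at the center of this thesis can be written compactly: with an IMU-predicted relative camera pose (R, t), the essential matrix is E = [t]ₓR, and matched normalized image points x₁, x₂ should satisfy x₂ᵀ E x₁ ≈ 0. The sketch below evaluates these residuals; conventions (X₂ = R X₁ + t, points in normalized homogeneous coordinates) are stated assumptions, not necessarily the thesis's exact parameterization.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residuals(R, t, x1, x2):
    """Residuals x2^T E x1 with E = [t]_x R, for matched points x1, x2
    (N x 3, normalized homogeneous coordinates) and an IMU-predicted
    relative pose (R, t). Residuals near zero mean the inertial
    prediction is consistent with the image alignment; large residuals
    flag inertial spikes (e.g. bumps) whose pose effect should be
    corrected."""
    E = skew(t) @ R
    return np.einsum('ni,ij,nj->n', x2, E, x1)
```

Because the constraint only involves image rays and the relative pose, it can correct the IMU-integrated motion without requiring triangulated depth for every point.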

    Mapping, planning and exploration with Pose SLAM

    Get PDF
    This thesis reports research on mapping, path planning, and autonomous exploration. These are classical problems in robotics, typically studied independently, and here we link them by framing them within a common SLAM approach, adopting Pose SLAM as the basic state estimation machinery. The main contribution of this thesis is an approach that allows a mobile robot to plan a path using the map it builds with Pose SLAM and to select the appropriate actions to autonomously construct this map. Pose SLAM is the variant of SLAM where only the robot trajectory is estimated and where landmarks are only used to produce relative constraints between robot poses. In Pose SLAM, observations come in the form of relative-motion measurements between robot poses. With regard to extending the original Pose SLAM formulation, this thesis studies the computation of such measurements when they are obtained with stereo cameras and develops the appropriate noise propagation models for this case. Furthermore, the initial formulation of Pose SLAM assumes poses in SE(2), and in this thesis we extend this formulation to SE(3), parameterizing rotations with either Euler angles or quaternions. We also introduce a loop closure test that exploits the information from the filter using an independent measure of information content between poses. In the application domain, we present a technique to process the 3D volumetric maps obtained with this SLAM methodology, but with laser range scanning as the sensor modality, to derive traversability maps. Aside from these extensions to Pose SLAM, the core contribution of the thesis is an approach for path planning that exploits the modeled uncertainties in Pose SLAM to search for the path in the pose graph with the lowest accumulated robot pose uncertainty, i.e., the path that allows the robot to navigate to a given goal with the least probability of becoming lost. 
An added advantage of the proposed path planning approach is that, since Pose SLAM is agnostic with respect to the sensor modalities used, it can be used in different environments and with different robots, and since the original pose graph may come from a previous mapping session, the paths stored in the map already satisfy constraints not easily modeled in the robot controller, such as the existence of restricted regions, or the right of way along paths. The proposed path planning methodology has been extensively tested both in simulation and with a real outdoor robot. Our path planning approach is adequate for scenarios where a robot is initially guided during map construction, but autonomous during execution. For other scenarios in which more autonomy is required, the robot should be able to explore the environment without any supervision. The second core contribution of this thesis is an autonomous exploration method that complements the aforementioned path planning strategy. The method selects the appropriate actions to drive the robot so as to maximize coverage and at the same time minimize localization and map uncertainties. An occupancy grid is maintained for the sole purpose of guaranteeing coverage. A significant advantage of the method is that, since the grid is only computed to hypothesize entropy reduction of candidate map posteriors, it can be computed at a very coarse resolution: it is not used to maintain either the robot localization estimate or the structure of the environment. Our technique evaluates two types of actions: exploratory actions and place revisiting actions. Action decisions are made based on entropy reduction estimates. By maintaining a Pose SLAM estimate at run time, the technique allows trajectories to be replanned online should a significant change in the Pose SLAM estimate be detected. The proposed exploration strategy was tested on a common publicly available dataset, comparing favorably against frontier-based exploration.
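The entropy-based action selection described above can be caricatured in a few lines: score each candidate action by the map entropy it would remove (the entropy of the cells it would observe, assuming they become fully known) minus the pose entropy it would add, then pick the best. Exploratory actions observe unknown cells but grow pose uncertainty; place-revisiting actions observe known cells but shrink pose uncertainty via loop closure. The numbers and structure below are illustrative, not the thesis's exact posterior model.

```python
import numpy as np

def grid_entropy(p):
    """Shannon entropy (bits) of occupancy-grid cell probabilities p."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def select_action(grid, candidates):
    """candidates: {name: (cell_indices, pose_entropy_delta)}.

    Assume observed cells become fully known, so the map-entropy
    reduction equals their current entropy; subtract the pose-entropy
    growth the action would cause (negative delta for loop closures).
    """
    best, best_gain = None, -np.inf
    for name, (cells, dpose) in candidates.items():
        gain = grid_entropy(grid[cells]) - dpose
        if gain > best_gain:
            best, best_gain = name, gain
    return best
```

Unknown cells (p ≈ 0.5) carry one bit each, so early on exploratory actions dominate; as the map fills in, loop-closing revisits win because their pose-entropy reduction outweighs the little map entropy left.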