
    Positioning Techniques with Smartphone Technology: Performances and Methodologies in Outdoor and Indoor Scenarios

    Smartphone technology is widespread both in academia and in the commercial world. Almost everyone today carries a smartphone, which is used not only to call other people but also to share their location on social networks or to plan activities. A smartphone can now compute its position using the sensors embedded in the device, which may include accelerometers, gyroscopes, magnetometers (teslameters), proximity sensors, a barometer, and a GPS/GNSS chipset. In this chapter we analyze the state of the art of positioning with smartphone technology, considering both outdoor and indoor scenarios. Particular attention is paid to the latter, where accuracy can be improved by fusing information coming from more than one sensor. In particular, we investigate an innovative image recognition based (IRB) method, particularly useful in GNSS-denied environments, taking into account the two main problems that arise when IRB positioning is considered: the first is battery optimization, which implies minimizing the frame rate; the second is the latency of the image processing required by visual search solutions, which depends on the size of the database of 3D environment images.
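    To illustrate the multi-sensor fusion idea mentioned in this abstract, the following is a minimal sketch (not taken from the chapter) of fusing noisy GNSS position fixes with accelerometer-derived motion along one axis using a constant-velocity Kalman filter. All function names, noise values, and the 1D simplification are illustrative assumptions.

```python
# Minimal sketch: 1D Kalman filter fusing GNSS fixes with accelerometer input.
# Noise parameters and the constant-velocity model are illustrative assumptions.
import numpy as np

def kalman_fuse(gnss_positions, accelerations, dt=1.0,
                gnss_var=25.0, accel_var=0.5):
    """Return a fused position estimate for each time step.

    gnss_positions: iterable of position fixes (metres), None when no fix.
    accelerations:  iterable of accelerometer readings (m/s^2), same length.
    Assumes the first GNSS fix is available to initialise the state.
    """
    x = np.array([gnss_positions[0], 0.0])   # state: [position, velocity]
    P = np.eye(2) * 10.0                     # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])          # acceleration as control input
    H = np.array([[1.0, 0.0]])               # GNSS observes position only
    Q = np.eye(2) * accel_var                # process noise
    R = np.array([[gnss_var]])               # GNSS measurement noise
    fused = []
    for z, a in zip(gnss_positions, accelerations):
        # Predict: dead reckoning driven by the accelerometer.
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Correct: apply the GNSS fix when one is available.
        if z is not None:
            y = np.array([z]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        fused.append(x[0])
    return fused
```

    In a GNSS-denied stretch the filter simply skips the correction step, so the estimate degrades gracefully until a fix (or an IRB position update) becomes available again.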

    What can be done with an embedded stereo-rig in urban environments?

    The development of Autonomous Guided Vehicles (AGVs) for urban applications is now possible thanks to recent solutions (e.g., the DARPA Grand Challenge) to the Simultaneous Localization And Mapping (SLAM) problem: perception, path planning, and control. Over the last decade, the introduction of GPS and vision has allowed SLAM methods originally dedicated to indoor environments to be transposed to outdoor ones. When GPS data are unavailable, the current position of the mobile robot can be estimated by fusing data from odometers and/or an Inertial Navigation System (INS). In this article we detail what can be done with an uncalibrated stereo-rig embedded in a vehicle driving through urban roads. The methodology is based on features extracted on planes: we mainly assume the road in the foreground to be the plane common to all urban scenes, but other planes, such as the vertical facades of buildings, can be used if the features extracted on the road are not sufficiently reliable. The relative motion of the coplanar features tracked with both cameras allows us to estimate the vehicle ego-motion with high precision. Furthermore, features that do not follow the relative motion of the considered plane can be assumed to be obstacles.
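    The following is a minimal sketch (not the authors' code) of the general plane-based idea: the homography between two views of a dominant plane such as the road encodes the camera motion, and tracked points that do not fit it are obstacle candidates. It assumes known camera intrinsics K for the decomposition step, which differs from the uncalibrated setting described above; the function name and thresholds are illustrative.

```python
# Minimal sketch: ego-motion from features tracked on the road plane (OpenCV).
# Assumes calibrated intrinsics K, unlike the uncalibrated rig in the paper.
import cv2
import numpy as np

def ego_motion_from_plane(pts_prev, pts_curr, K):
    """pts_prev, pts_curr: Nx2 float arrays of tracked features; K: 3x3 intrinsics."""
    # Robustly fit the plane-induced homography between the two frames.
    H, inlier_mask = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    # Decompose the homography into candidate (rotation, translation, normal) sets.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    # Points rejected by RANSAC do not move with the road plane: obstacle candidates.
    obstacles = pts_curr[inlier_mask.ravel() == 0]
    return rotations, translations, normals, obstacles
```

    Selecting the physically valid decomposition (e.g., the one whose plane normal points away from the camera) and scaling the translation would require additional constraints, such as the known stereo baseline or camera height above the road.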

    Evaluating the Capability of OpenStreetMap for Estimating Vehicle Localization Error

    Accurate localization is an important part of successful autonomous driving. Recent studies suggest that when map-based localization methods are used, the representation and layout of real-world phenomena within the prebuilt map is a source of error. To date, investigations have been limited to 3D point clouds and normal distribution (ND) maps. This paper explores the potential of using OpenStreetMap (OSM) as a proxy to estimate vehicle localization error. Specifically, the experiment uses random forest regression to estimate the mean 3D localization error from map matching using LiDAR scans and ND maps. Six map evaluation factors were defined for 2D geographic information in a vector format. Initial results for a 1.2 km path in Shinjuku, Tokyo, show that vehicle localization error can be estimated with 56.3% model prediction accuracy using only two existing OSM data layers. When OSM data quality issues (inconsistency and completeness) were addressed, the model prediction accuracy improved to 73.1%.
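    As a rough illustration of the regression setup described in this abstract, the sketch below trains a random forest on per-segment map evaluation factors to predict mean 3D localization error. The data are synthetic and the six factor columns are placeholders for OSM-derived features, not the paper's dataset or factor definitions.

```python
# Minimal sketch: random forest regression of localization error from map factors.
# Synthetic data; feature columns stand in for the six OSM-derived evaluation factors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                  # 6 map evaluation factors per segment
y = 0.3 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.05, 500)  # mean 3D error (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out segments:", r2_score(y_test, model.predict(X_test)))
print("Factor importances:", model.feature_importances_)
```

    The feature importances give a quick view of which map factors drive the predicted error, mirroring how the paper relates OSM layers and data quality to localization accuracy.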