
    Indoor Localization System based on Artificial Landmarks and Monocular Vision

    This paper presents a visual localization approach suitable for domestic and industrial environments, as it enables accurate, reliable, and robust pose estimation. The mobile robot is equipped with a single camera and updates its pose whenever a landmark is visible in the field of view. The innovation presented by this research is the artificial landmark system, which can detect the presence of the robot: both entities communicate with each other using a frequency-modulated infrared signal protocol. In addition to this communication capability, each landmark has several high-intensity light-emitting diodes (LEDs) that are lit only for brief intervals dictated by the communication, which makes it possible to synchronize the camera shutter with the blinking of the LEDs. This synchronization increases the system's tolerance to changes in ambient light over time, independently of the landmarks' locations. The environment's ceiling is populated with several landmarks, and an Extended Kalman Filter combines the dead-reckoning and landmark information, which increases the flexibility of the system by reducing the number of landmarks required. The experimental evaluation was conducted in a real indoor environment with an autonomous wheelchair prototype.
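
    As a rough illustration of the Extended Kalman Filter fusion described above, the sketch below propagates a planar pose (x, y, theta) with dead-reckoning and corrects it with a range/bearing observation of a known landmark. It is not the paper's implementation: the state layout, noise matrices, and the range/bearing measurement model are assumptions made for the example.

        # Minimal EKF sketch: odometry prediction plus landmark range/bearing update.
        # Illustrative only; noise values and measurement model are assumed.
        import numpy as np

        def predict(x, P, v, w, dt, Q):
            """Propagate pose (x, y, theta) with odometry velocities v (m/s), w (rad/s)."""
            th = x[2]
            x_pred = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
            F = np.array([[1, 0, -v * dt * np.sin(th)],
                          [0, 1,  v * dt * np.cos(th)],
                          [0, 0, 1]])
            return x_pred, F @ P @ F.T + Q

        def update(x, P, z, landmark, R):
            """Correct the pose with a range/bearing measurement z to a known landmark (x, y)."""
            dx, dy = landmark[0] - x[0], landmark[1] - x[1]
            q = dx**2 + dy**2
            z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
            H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                          [dy / q, -dx / q, -1]])
            y = z - z_hat
            y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing residual
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ y, (np.eye(3) - K @ H) @ P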

    Motion Compatibility for Indoor Localization

    Indoor localization -- a device's ability to determine its location within an extended indoor environment -- is a fundamental enabling capability for mobile context-aware applications. Many proposed applications assume localization information from GPS or from WiFi access points. However, GPS fails indoors and in urban canyons, and current WiFi-based methods require an expensive and manually intensive mapping, calibration, and configuration process, performed by skilled technicians, to bring the system online for end users. We describe a method that estimates indoor location with respect to a prior map consisting of a set of 2D floorplans linked through horizontal and vertical adjacencies. Our main contribution is the notion of "path compatibility," in which the sequential output of a classifier of inertial data producing low-level motion estimates (standing still, walking straight, going upstairs, turning left, etc.) is examined for agreement with the prior map. Path compatibility is encoded in an HMM-based matching model, from which the method recovers the user's location trajectory from the low-level motion estimates. To recognize user motions, we present a motion labeling algorithm that extracts fine-grained user motions from the sensor data of handheld mobile devices. We propose "feature templates," which allow the motion classifier to learn the optimal window size for a specific combination of a motion and a sensor feature function. We show that, using only proprioceptive data of the quality typically available on a modern smartphone, our motion labeling algorithm classifies user motions with 94.5% accuracy, and our trajectory matching algorithm can recover the user's location to within 5 meters on average after one minute of movement from an unknown starting location. Prior information, such as a known starting floor, further decreases the time required to obtain a precise location estimate.
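
    The following sketch shows one way "path compatibility" could be cast as HMM decoding: hidden states are edges of a route network derived from the floorplans, and observations are the motion labels emitted by the classifier. The graph, label set, and probability tables are invented for the example and are not taken from the paper.

        # Viterbi decoding of a motion-label sequence onto a route-network graph.
        # trans[s][s'] encodes map adjacency, emit[s][label] the expected motion on edge s.
        def viterbi(obs, states, trans, emit, prior):
            """obs: list of motion labels ('walk', 'up_stairs', ...); returns the best state path."""
            V = [{s: prior[s] * emit[s].get(obs[0], 1e-6) for s in states}]
            back = [{}]
            for t in range(1, len(obs)):
                V.append({})
                back.append({})
                for s in states:
                    best_prev = max(states, key=lambda p: V[t - 1][p] * trans[p].get(s, 0.0))
                    V[t][s] = V[t - 1][best_prev] * trans[best_prev].get(s, 0.0) * emit[s].get(obs[t], 1e-6)
                    back[t][s] = best_prev
            # Trace back the most map-compatible path through the route network.
            last = max(states, key=lambda s: V[-1][s])
            path = [last]
            for t in range(len(obs) - 1, 0, -1):
                path.append(back[t][path[-1]])
            return list(reversed(path))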

    Proprioceptive Localization for Robots

    Localization is a critical navigation function for mobile robots. Most localization methods employ a global positioning system (GPS), a lidar, and a camera, which are exteroceptive sensors relying on the perception and recognition of landmarks in the environment. However, GPS signals may be unavailable because high-rise buildings can block them in urban areas, and poor weather and lighting conditions may challenge all exteroceptive sensors. In this dissertation, we focus on proprioceptive localization (PL) methods, a new class of robot egocentric localization methods that do not rely on the perception and recognition of external landmarks. These methods depend on a prior map and proprioceptive sensors, such as inertial measurement units (IMUs) and/or wheel encoders, which are naturally immune to the aforementioned adverse environmental conditions that may hinder exteroceptive sensors. PL is intended to be a low-cost fallback solution when everything else fails.

    We first propose a method named proprioceptive localization assisted by magnetoreception (PLAM). PLAM employs a gyroscope and a compass to sense heading changes and matches the heading sequence with a pre-processed heading graph to localize the robot. Not all cases can be successful, because degenerate maps may consist of rectangular grid-like streets and the robot may travel in a loop. To analyze these cases, we use information entropy to model map characteristics and perform both simulations and experiments to find the typical heading and information-entropy requirements for localization.

    We further propose a method that allows continuous localization and is less limited by map degeneracy. Assisted by magnetoreception, we use IMUs and wheel encoders to estimate the vehicle trajectory, which is used to query a prior known map to obtain the location. We name the proposed method graph-based proprioceptive localization (GBPL). As the robot travels, we extract a sequence of heading-length values for straight segments from the trajectory and match the sequence with a pre-processed heading-length graph (HLG) abstracted from the prior known map to localize the robot under a graph-matching approach. Using HLG information, our location alignment and verification module compensates for trajectory drift, wheel slip, and tire inflation level. The algorithm finds the robot location continuously and achieves localization accuracy at the level that the prior map allows (less than 10 m).

    With the development of communication technology, it becomes possible to leverage vehicle-to-vehicle (V2V) communication to develop a multi-vehicle/robot collaborative localization scheme, named collaborative graph-based proprioceptive localization (C-GBPL). We extract a heading-length sequence from the trajectory as features. When rendezvousing with other vehicles, the ego vehicle aggregates the features from the others and forms a merged query graph. We match the query graph with the HLG to localize the vehicle under a graph-to-graph matching approach. The C-GBPL algorithm significantly outperforms its single-vehicle counterpart in localization speed and robustness to trajectory and map degeneracy.

    In addition, we propose a PL method with WiFi for indoor environments, targeted at handling inconsistent access points (APs). We develop a windowed majority voting and statistical hypothesis testing-based approach to remove APs with large displacements between the reference and query data sets. We refine the localization by applying a maximum likelihood estimation method to the closed-form posterior location distribution over the filtered signal strengths and AP sets in the time window. Our method achieves a mean localization error of less than 3.7 meters, even when 70% of the APs are inconsistent.
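
    As a rough illustration of the heading-length idea behind GBPL, the sketch below compresses a trajectory into (heading, length) pairs for straight segments and scores a candidate map path by how well its heading-length sequence agrees. The thresholds and scoring function are assumptions for illustration, not values from the dissertation.

        # Heading-length sequence extraction and a simple compatibility score.
        import math

        def to_heading_length(poses, heading_tol=math.radians(10)):
            """poses: list of (x, y, heading). Returns [(segment_heading, length), ...]."""
            segments, start, ref = [], poses[0], poses[0][2]
            for prev, cur in zip(poses, poses[1:]):
                dh = (cur[2] - ref + math.pi) % (2 * math.pi) - math.pi
                if abs(dh) > heading_tol:          # heading changed: close the straight segment
                    segments.append((ref, math.hypot(prev[0] - start[0], prev[1] - start[1])))
                    start, ref = cur, cur[2]
            segments.append((ref, math.hypot(poses[-1][0] - start[0], poses[-1][1] - start[1])))
            return segments

        def match_score(query, candidate, len_sigma=5.0, head_sigma=math.radians(15)):
            """Higher is better; compares two heading-length sequences of equal length."""
            if len(query) != len(candidate):
                return float("-inf")
            score = 0.0
            for (qh, ql), (ch, cl) in zip(query, candidate):
                dh = (qh - ch + math.pi) % (2 * math.pi) - math.pi
                score -= (dh / head_sigma) ** 2 + ((ql - cl) / len_sigma) ** 2
            return score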

    Indoor localization using place and motion signatures

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013. By Jun-geun Park. Includes bibliographical references (p. 141-153).

    Most current methods for 802.11-based indoor localization depend on either simple radio propagation models or exhaustive, costly surveys conducted by skilled technicians. These methods are not satisfactory for long-term, large-scale positioning of mobile devices in practice. This thesis describes two approaches to the indoor localization problem, which we formulate as discovering user locations using place and motion signatures.

    The first approach, organic indoor localization, is based on the idea of crowd-sourcing, encouraging end-users to contribute place signatures (location RF fingerprints) in an organic fashion. Building on prior work on organic localization systems, we study the algorithmic challenges associated with structuring such systems: the design of localization algorithms suitable for organic localization systems, qualitative and quantitative control of user inputs to "grow" an organic system from the very beginning, and handling the device heterogeneity problem, in which different devices have different RF characteristics.

    In the second approach, motion compatibility-based indoor localization, we formulate the localization problem as trajectory matching of a user motion sequence onto a prior map. Our method estimates indoor location with respect to a prior map consisting of a set of 2D floor plans linked through horizontal and vertical adjacencies. To enable the localization system, we present a motion classification algorithm that estimates user motions from the sensors available in commodity mobile devices. We also present a route network generation method, which constructs a graph representation of all user routes from legacy floor plans. Given these inputs, our HMM-based trajectory matching algorithm recovers user trajectories. The main contribution is the notion of path compatibility, in which the sequential output of a classifier of inertial data producing low-level motion estimates (standing still, walking straight, going upstairs, turning left, etc.) is examined for metric, topological, and semantic agreement with the prior map. We show that, using only proprioceptive data of the quality typically available on a modern smartphone, our method can recover the user's location to within several meters in one to two minutes after a "cold start."
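
    A minimal sketch of fingerprint-based ("place signature") matching in the spirit of the organic approach is shown below: user-contributed RSSI scans are bound to places, and a query scan is matched to the closest stored signature. The distance metric, missing-AP penalty, and data layout are assumptions for the example, not the thesis's algorithms.

        # Nearest-neighbor matching of a WiFi scan against crowd-sourced place signatures.
        def signature_distance(scan_a, scan_b, missing_penalty=100.0):
            """scan_*: dict mapping AP identifier -> RSSI (dBm). Euclidean over the union of APs."""
            aps = set(scan_a) | set(scan_b)
            total = 0.0
            for ap in aps:
                a = scan_a.get(ap, -missing_penalty)   # assume a weak default for unseen APs
                b = scan_b.get(ap, -missing_penalty)
                total += (a - b) ** 2
            return total ** 0.5

        def localize(query_scan, fingerprints):
            """fingerprints: list of (place_label, scan). Returns the best-matching place label."""
            return min(fingerprints, key=lambda fp: signature_distance(query_scan, fp[1]))[0]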

    Models and Algorithms for Ultra-Wideband Localization in Single- and Multi-Robot Systems

    Location is a piece of information that empowers almost any type of application. In contrast to the outdoors, where global navigation satellite systems provide geo-spatial positioning, there are still millions of square meters of indoor space that are unaccounted for by location sensing technology. Moreover, predictions show that people's activities are likely to shift more and more towards urban and indoor environments: the United Nations predicts that by 2020, over 80% of the world's population will live in cities. Meanwhile, indoor localization is a problem that is not simply solved: people, indoor furnishings, walls, and building structures are, in the eyes of a positioning sensor, all obstacles that create a very challenging environment. Many sensory modalities have difficulty overcoming such harsh conditions when used alone. For this reason, and also because we aim for a portable, miniaturizable, cost-effective solution with centimeter-level accuracy, we choose to solve the indoor localization problem with a hybrid approach that consists of two complementary components: ultra-wideband localization and collaborative localization. In pursuit of the final, hybrid product, our research leads us to ask what benefits collaborative localization can provide to ultra-wideband localization, and vice versa. The road down this path includes diving into these orthogonal sub-domains of indoor localization to produce two independent localization solutions, before finally combining them to conclude our work.

    As for all systems that can be quantitatively examined, we recognize that the quality of our final product is defined by the rigor of our evaluation process. Thus, a core element of our work is the experimental setup, which we design in a modular fashion and make incrementally more complex according to the various stages of our studies. With the goal of implementing an evaluation system that is systematic, repeatable, and controllable, our approach is centered around the mobile robot. We harness this platform to emulate mobile targets and track it in real time with a highly reliable ground-truth positioning system. Furthermore, we take advantage of the miniature size of our mobile platform and include multiple entities to form a multi-robot system. This augmented setup then allows us to use the same experimental rigor to evaluate our collaborative localization strategies. Finally, we exploit the consistency of our experiments to perform cross-comparisons of the various results throughout the presented work.

    Ultra-wideband counts among the most interesting technologies for absolute indoor localization known to date. Owing to its fine delay resolution and its ability to penetrate various materials, ultra-wideband provides potentially high ranging accuracy, even in cluttered, non-line-of-sight environments. However, despite its desirable traits, the resolution of non-line-of-sight signals remains a hard problem: if a non-line-of-sight signal is not recognized as such, it leads to significant errors in the position estimate. Our work improves upon the state of the art by addressing the peculiarities of ultra-wideband signal propagation with models that capture the spatiality as well as the multimodal nature of the error statistics. At the same time, we take care to develop an underlying error model that is compact and that can be calibrated by means of efficient algorithms. In order to facilitate the use of our multimodal error model, we use a localization algorithm based on particle filters.

    Our collaborative localization strategy distinguishes itself from prior work by emphasizing cost-efficiency, full decentralization, and scalability. The localization method is based on relative positioning and uses two quantities: relative range and relative bearing. We develop a relative robot detection model that integrates these measurements and is embedded in our particle-filter-based localization framework. In addition to the robot detection model, we consider an algorithmic component, namely a reciprocal particle sampling routine, which is designed to facilitate the convergence of a robot's position estimate. Finally, in order to reduce the complexity of our collaborative localization algorithm, and to reduce the amount of positioning data to be communicated between the robots, we develop a particle clustering method, which is used in conjunction with our robot detection model.

    The final stage of our research investigates the combined roles of collaborative localization and ultra-wideband localization. Numerous experiments validate our overall localization strategy and show that performance can be significantly improved when using two complementary sensory modalities. Since the fusion of ultra-wideband positioning sensors with exteroceptive sensors has hardly been considered so far, our studies present pioneering work in this domain. Several insights indicate that collaboration, even through noisy sensors, is a useful tool for reducing localization errors. In particular, we show that our collaboration strategy can minimize the localization error, provided that the collaborative design parameters are optimally tuned. Our final results show median localization errors below 10 cm in cluttered environments.
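
    To make the particle-filter component concrete, the sketch below applies a noisy motion model and reweights particles by the likelihood of measured UWB ranges to fixed anchors. It is a simplification of the approach described above: the thesis uses a multimodal, spatially varying error model, whereas this example assumes a single Gaussian range error and invented noise parameters.

        # Particle filter sketch: unicycle prediction plus UWB range-based reweighting.
        import numpy as np

        rng = np.random.default_rng(0)

        def predict(particles, v, w, dt, motion_noise=(0.05, 0.02)):
            """particles: (N, 3) array of (x, y, theta); apply a noisy unicycle motion step."""
            n = len(particles)
            v_n = v + rng.normal(0, motion_noise[0], n)
            w_n = w + rng.normal(0, motion_noise[1], n)
            particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
            particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
            particles[:, 2] += w_n * dt
            return particles

        def update(particles, weights, anchors, ranges, sigma=0.1):
            """Reweight particles by the likelihood of measured ranges to fixed UWB anchors."""
            for anchor, r in zip(anchors, ranges):
                d = np.hypot(particles[:, 0] - anchor[0], particles[:, 1] - anchor[1])
                weights *= np.exp(-0.5 * ((d - r) / sigma) ** 2)
            weights /= weights.sum()
            # Low-variance criterion: resample when the effective sample size collapses.
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
                idx = rng.choice(len(particles), size=len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights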