
    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
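    The de-facto standard formulation mentioned above is maximum a posteriori estimation over a factor graph. A rough sketch, with notation assumed here and Gaussian measurement noise:

        \mathcal{X}^{\star} = \arg\max_{\mathcal{X}} \, p(\mathcal{X}\mid\mathcal{Z})
                            = \arg\min_{\mathcal{X}} \, \sum_{k} \lVert h_k(\mathcal{X}_k) - z_k \rVert^{2}_{\Omega_k}

    where \mathcal{X} stacks the robot trajectory and the map, z_k are the measurements with models h_k acting on the variable subsets \mathcal{X}_k they involve, and \Omega_k are the corresponding information matrices.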

    Device-free Localization using Received Signal Strength Measurements in Radio Frequency Network

    Device-free localization (DFL) based on received signal strength (RSS) measurements of radio frequency (RF) links uses the RSS variation caused by the presence of a target to localize that target without attaching any device to it. The majority of DFL methods exploit the fact that a link experiences strong attenuation when obstructed, so localization accuracy depends on the model describing the relationship between the RSS loss caused by obstruction and the position of the target. The existing models are too rough to explain some phenomena observed in experimental measurements. In this paper, we propose a new model based on diffraction theory in which the target is modeled as a cylinder instead of a point mass. The proposed model fits the experimental measurements well and explains cases such as link crossing and walking along the link line. Because the measurement model is nonlinear, particle filtering is used to recursively compute an approximate Bayesian estimate of the position. The posterior Cramer-Rao lower bound (PCRLB) of the proposed tracking method is also derived. The results of field experiments with 8 radio sensors and a monitored area of 3.5 m × 3.5 m show that the tracking error of the proposed model is improved by at least 36 percent in the single-target case and 25 percent in the two-target case compared to other models. Comment: This paper has been withdrawn by the author due to a mistake.
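    As an illustration of the particle filtering described above, the sketch below runs one step of a bootstrap filter against a placeholder RSS-loss model. rss_model() is a hypothetical stand-in that does not reproduce the paper's cylinder/diffraction model, and all noise parameters are assumed values.

        import numpy as np

        def rss_model(pos, links):
            # Hypothetical placeholder for the diffraction-based RSS-loss model:
            # returns a shadowing loss (dB) per link for a target at pos.
            # links is an (L, 4) array of [tx_x, tx_y, rx_x, rx_y] rows.
            tx, rx = links[:, :2], links[:, 2:]
            d_direct = np.linalg.norm(rx - tx, axis=1)
            d_detour = np.linalg.norm(pos - tx, axis=1) + np.linalg.norm(rx - pos, axis=1)
            return 5.0 * np.exp(-(d_detour - d_direct))

        def pf_step(particles, weights, z, links, sigma_rss=2.0, sigma_q=0.1):
            # 1. Propagate particles with a random-walk motion model.
            particles = particles + np.random.normal(0.0, sigma_q, particles.shape)
            # 2. Reweight by the nonlinear measurement likelihood.
            for i, p in enumerate(particles):
                resid = z - rss_model(p, links)
                weights[i] *= np.exp(-0.5 * np.dot(resid, resid) / sigma_rss**2)
            weights = weights / weights.sum()
            # 3. Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
                idx = np.random.choice(len(weights), size=len(weights), p=weights)
                particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
            return particles, weights, weights @ particles  # posterior-mean position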

    Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with a 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. 34 teams effectively competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain highly accurate indoor positioning solutions in large, realistic environments using wearable, light-weight sensors without deploying any beacon. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition and discusses its future.
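    The accuracy score described above (third quartile of an error that combines horizontal error and floor detection) could be computed along the lines of the following sketch; the 15 m-per-floor penalty is an assumed, EvAAL-style value, not a figure quoted from this paper.

        import numpy as np

        def accuracy_score(est_xy, est_floor, gt_xy, gt_floor, floor_penalty_m=15.0):
            # Horizontal error plus an assumed penalty per wrongly detected floor,
            # summarized by the third quartile (75th percentile).
            horiz_err = np.linalg.norm(np.asarray(est_xy) - np.asarray(gt_xy), axis=1)
            floor_err = floor_penalty_m * np.abs(np.asarray(est_floor) - np.asarray(gt_floor))
            return np.percentile(horiz_err + floor_err, 75)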

    Simultaneous Multi-Information Fusion and Parameter Estimation for Robust 3-D Indoor Positioning Systems

    Typical WLAN-based indoor positioning systems take the received signal strength (RSS) as their major information source. Due to the complicated indoor environment, the RSS measurements are hard to model and too noisy to achieve a satisfactory 3-D accuracy in multi-floor scenarios. To enhance the performance of WLAN positioning systems, extra information sources can be integrated. In this paper, a Bayesian framework is applied to fuse multiple information sources and to estimate the spatially and temporally varying parameters simultaneously and adaptively. An application of this framework, which fuses pressure measurements and a topological building map with RSS measurements while simultaneously estimating the pressure sensor bias, is investigated. Our experiments indicate that this approach yields more accurate and robust localization.
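    A minimal sketch of how a pressure sensor bias can be estimated jointly with position is to augment the filter state with a bias term and weight each hypothesis by a barometric likelihood; the constants and noise values below are assumptions, not taken from the paper.

        import numpy as np

        FLOOR_HEIGHT_M = 3.5   # assumed storey height
        PA_PER_METER = 12.0    # assumed pressure drop per metre of altitude

        def pressure_likelihood(z_pressure, p_ref, floor, bias, sigma=5.0):
            # Likelihood of a barometer reading (Pa) given a reference pressure at
            # floor 0, a hypothesized floor index and a hypothesized sensor bias.
            expected = p_ref - PA_PER_METER * FLOOR_HEIGHT_M * floor + bias
            return np.exp(-0.5 * ((z_pressure - expected) / sigma) ** 2)

        # In a particle filter, each particle would carry (x, y, floor, bias); weighting
        # by the product of RSS and pressure likelihoods lets the bias hypotheses that
        # best explain the data survive resampling, i.e. the bias is estimated
        # adaptively alongside the position.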

    Device Free Localisation Techniques in Indoor Environments

    For a long time, target location estimation was performed only by device-based localisation techniques, which are difficult to apply when the target, especially a human, is non-cooperative. A target was detected by equipping it with a device using global positioning systems, radio frequency systems, ultrasonic frequency systems, etc. Device-free localisation (DFL) is an emerging technology for automated localisation in which the target need not carry any device for its position to be identified. Wireless sensor networks are a natural choice for achieving this objective owing to their growing popularity. This paper describes a possible categorisation of recently developed DFL techniques using wireless sensor networks. The scope of each category of techniques is analysed by comparing their potential benefits and drawbacks. Finally, the future scope and research directions in this field are summarised.

    Real-Time Panoramic Tracking for Event Cameras

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, these cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to the state of the art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset and self-recorded sequences. Comment: Accepted to International Conference on Computational Photography 201
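    The geometry behind rotation-only (three degrees of freedom) tracking against a panoramic map can be sketched as below: each event pixel is back-projected to a bearing, rotated by the current orientation estimate and mapped onto a spherical panorama. The intrinsics and panorama resolution are made-up values, and the paper's direct alignment and update steps are not reproduced.

        import numpy as np

        K = np.array([[200.0,   0.0, 120.0],   # assumed pinhole intrinsics
                      [  0.0, 200.0,  90.0],
                      [  0.0,   0.0,   1.0]])
        PAN_W, PAN_H = 1024, 512               # assumed panorama resolution

        def event_to_panorama(x, y, R):
            # Map an event at pixel (x, y) under camera rotation R (3x3) to
            # panorama coordinates (u, v).
            ray = np.linalg.inv(K) @ np.array([x, y, 1.0])   # back-project to a bearing
            d = R @ (ray / np.linalg.norm(ray))              # rotate into the map frame
            lon = np.arctan2(d[0], d[2])                     # longitude in [-pi, pi]
            lat = np.arcsin(np.clip(d[1], -1.0, 1.0))        # latitude in [-pi/2, pi/2]
            u = (lon / (2.0 * np.pi) + 0.5) * PAN_W
            v = (lat / np.pi + 0.5) * PAN_H
            return u, v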

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
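    The probabilistic cost function described above, combining landmark reprojection errors and inertial terms, has the following general form (notation paraphrased here, not the paper's exact symbols):

        J(\mathbf{x}) = \sum_{k}\sum_{j} \lVert \mathbf{e}_{r}^{j,k} \rVert^{2}_{\mathbf{W}_{r}^{j,k}}
                      + \sum_{k} \lVert \mathbf{e}_{s}^{k} \rVert^{2}_{\mathbf{W}_{s}^{k}}

    where \mathbf{e}_{r}^{j,k} is the reprojection error of landmark j observed in keyframe k, \mathbf{e}_{s}^{k} is the inertial (IMU) error term linking consecutive keyframes, and the \mathbf{W} matrices are the respective information (inverse-covariance) weights. Marginalizing out old keyframes keeps the window, and hence the optimization, bounded.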