19,197 research outputs found

    Indoor Positioning and Navigation by Semantic Localization Based on Visual Context

    Get PDF
    Conventional indoor localization techniques rely on high-precision indoor 3/6 degrees-of-freedom (DOF) positioning of the user device, which may be infeasible if the device lacks positioning sensors such as GPS or an IMU, if such sensors are turned off, or if the sensors have insufficient accuracy. This disclosure describes the use of language modeling techniques to provide indoor navigation capabilities in the absence of such sensor data, based on the local visual context obtained with a camera. Text captions describing frames of the user's visual context in an indoor space are generated. A collection of captions for the current and recently captured, timestamped frames of the visual context, together with a suitable prompt and metadata, is input to a large language model to determine the current location of the user within the indoor space. The techniques can be incorporated within any indoor digital mapping and navigation application, on any device capable of capturing the visual context with a camera.
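    A minimal sketch of the caption-plus-LLM flow described above, in Python. The captioning model is out of scope here, and `llm_complete`, the prompt wording, and the metadata format are illustrative assumptions rather than the disclosure's actual implementation.

```python
# Sketch only: placeholder names, not the disclosure's implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CaptionedFrame:
    timestamp: float   # seconds since the start of the session
    caption: str       # e.g. "escalator next to a coffee kiosk"


def build_localization_prompt(frames: List[CaptionedFrame], venue_metadata: str) -> str:
    """Assemble recent captions and venue metadata into a single LLM prompt."""
    lines = [f"[t={f.timestamp:.1f}s] {f.caption}" for f in frames]
    return (
        "You are helping localize a user inside a building.\n"
        f"Venue description:\n{venue_metadata}\n\n"
        "Recent observations from the user's camera, oldest first:\n"
        + "\n".join(lines)
        + "\n\nBased on these observations, name the most likely current "
          "location (floor and nearest landmark) and explain briefly."
    )


def estimate_location(frames: List[CaptionedFrame],
                      venue_metadata: str,
                      llm_complete: Callable[[str], str]) -> str:
    """llm_complete is any text-in/text-out LLM call supplied by the caller."""
    return llm_complete(build_localization_prompt(frames, venue_metadata))
```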

    Indoor Positioning and Navigation

    Get PDF
    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or into a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums, and other public institutions. Such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB), which enable the implementation of various localization algorithms, namely visual localization, inertial navigation systems, and radio localization. For mapping indoor environments and localizing autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.
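    As a rough illustration of the sensor-fusion idea mentioned above, the following sketch fuses an inertial dead-reckoning displacement with an occasional Wi-Fi/BLE position fix using a simple complementary filter. The weighting `alpha` is a tuning assumption; this is not the specific algorithm of any system surveyed here.

```python
import numpy as np


def fuse_position(pdr_delta, radio_fix, prev_est, alpha=0.9):
    """
    One update of a simple complementary filter:
      pdr_delta: displacement (dx, dy) from inertial dead reckoning this step
      radio_fix: absolute (x, y) from Wi-Fi/BLE positioning, or None if unavailable
      prev_est:  previous fused (x, y) estimate
      alpha:     trust placed in dead reckoning (0..1); a tuning assumption
    Returns the new fused (x, y) estimate.
    """
    pred = np.asarray(prev_est, dtype=float) + np.asarray(pdr_delta, dtype=float)
    if radio_fix is None:
        return pred
    return alpha * pred + (1.0 - alpha) * np.asarray(radio_fix, dtype=float)
```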

    Deep Network Uncertainty Maps for Indoor Navigation

    Full text link
    Most mobile robots for indoor use rely on 2D laser scanners for localization, mapping and navigation. These sensors, however, cannot detect transparent surfaces or measure the full occupancy of complex objects such as tables. Deep neural networks have recently been proposed to overcome this limitation by learning to estimate object occupancy. These estimates are nevertheless subject to uncertainty, making the evaluation of their confidence an important issue for these measures to be useful for autonomous navigation and mapping. In this work we approach the problem from two sides. First, we discuss uncertainty estimation in deep models, proposing a solution based on a fully convolutional neural network. The proposed architecture is not restricted by the assumption that the uncertainty follows a Gaussian model, as is the case for many popular solutions for deep model uncertainty estimation, such as Monte Carlo Dropout. We present results showing that uncertainty over obstacle distances is actually better modeled with a Laplace distribution. Then, we propose a novel approach to build maps based on deep neural network uncertainty models. In particular, we present an algorithm to build a map that includes information over obstacle distance estimates while taking into account the level of uncertainty in each estimate. We show how the constructed map can be used to increase global navigation safety by planning trajectories which avoid areas of high uncertainty, enabling higher autonomy for mobile robots in indoor settings. (Accepted for publication in the 2019 IEEE-RAS International Conference on Humanoid Robots (Humanoids).)
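    Since the abstract reports that obstacle-distance errors are better modeled by a Laplace than a Gaussian distribution, a training loss consistent with that choice is the Laplace negative log-likelihood. The sketch below (plain NumPy, assuming the network predicts a location `mu` and a log-scale `log_b` per output) is only an illustration of that loss, not the paper's actual architecture or training code.

```python
import numpy as np


def laplace_nll(mu, log_b, target):
    """
    Negative log-likelihood of `target` under a Laplace distribution with
    location `mu` and scale b = exp(log_b), averaged over all outputs.
    Predicting log_b keeps the scale strictly positive without constraints.
    """
    b = np.exp(log_b)
    return np.mean(np.log(2.0 * b) + np.abs(target - mu) / b)
```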

    Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition

    Get PDF
    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing multiple floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS), and 3D scanner systems. A total of 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using lightweight wearable sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
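    For clarity, an accuracy score of this kind can be computed as sketched below. Combining horizontal error with a fixed per-floor penalty (15 m here) follows common EvAAL/IPIN practice, but the exact penalty constant used in this competition is an assumption.

```python
import numpy as np


def ipin_accuracy_score(horizontal_err_m, est_floor, true_floor,
                        floor_penalty_m=15.0):
    """
    Third-quartile (75th percentile) score over all evaluation points.
    Each point's error combines the horizontal error with a fixed penalty
    per floor of floor-detection error; 15 m is a commonly used value in
    EvAAL/IPIN-style evaluations and is an assumption here.
    """
    horizontal_err_m = np.asarray(horizontal_err_m, dtype=float)
    floor_err = np.abs(np.asarray(est_floor) - np.asarray(true_floor))
    combined = horizontal_err_m + floor_penalty_m * floor_err
    return np.percentile(combined, 75)
```

    Feeding the per-point horizontal errors and floor estimates of a whole track yields a single score per team, which is what the figures above report.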

    User Experience Enhancement on Smartphones using Wireless Communication Technologies

    Get PDF
    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: 박세웅. Recently, various sensors as well as wireless communication technologies such as Wi-Fi and Bluetooth Low Energy (BLE) have been incorporated into smartphones. In addition, users often use a smartphone while on the move, so if wireless communication technologies and various sensors are exploited for a mobile user, a better user experience can be provided. For example, when a user moves while using Wi-Fi, the user experience can be improved by providing a seamless Wi-Fi service. It is also possible to provide special services such as indoor positioning or navigation by estimating the user's mobility in an indoor environment, and additional services such as location-based advertising and payment systems can be provided as well. Therefore, improving the user experience by using wireless communication technology and smartphone sensors is considered an important research field for the future. In this dissertation, we propose three systems that can improve the user experience or convenience by using Wi-Fi, BLE, and smartphone sensors: (i) BLEND: BLE beacon-aided fast Wi-Fi handoff for smartphones, (ii) PYLON: smartphone-based indoor path estimation and localization without human intervention, and (iii) FINISH: fully-automated indoor navigation using smartphones with zero human assistance. First, we propose a fast handoff scheme called BLEND that exploits BLE as a secondary radio. We conduct a detailed analysis of the sticky client problem on commercial smartphones through experiments and a close examination of the Android source code. BLEND exploits BLE modules to provide smartphones with prior knowledge of the presence and information of APs operating on 2.4 and 5 GHz Wi-Fi channels. Operating as an application only, BLEND requires no hardware or Android source code modification of smartphones. We prototype BLEND with commercial smartphones and evaluate its performance in a real environment. Our measurement results demonstrate that BLEND significantly improves throughput and video bitrate by up to 61% and 111%, respectively, compared to a commercial Android application, with negligible energy overhead. Second, we design a path estimation and localization system, termed PYLON, which is plug-and-play on Android smartphones. PYLON includes a novel landmark correction scheme that leverages the real doors of indoor environments, consisting of floor plan mapping, door passing time detection, and correction. It operates without any user intervention. PYLON relaxes some requirements of localization systems: it does not require any modifications to the hardware or software of smartphones, nor the initial locations of Wi-Fi APs, BLE beacons, or users. We implement PYLON on five Android smartphones and evaluate it in two office buildings with the help of three participants to prove applicability and scalability. PYLON achieves very high floor plan mapping accuracy with a low localization error. Finally, we design a fully-automated navigation system, termed FINISH, which addresses the problems of existing indoor navigation systems. FINISH generates the radio map of an indoor building based on the localization system to determine the initial location of the user. FINISH relaxes some requirements of current indoor navigation systems: it does not require any human assistance to provide navigation instructions. In addition, it is plug-and-play on Android smartphones.
    We implement FINISH on five Android smartphones and evaluate it on five floors of an office building with the help of multiple users to prove applicability and scalability. FINISH determines the location of the user with extremely high accuracy within one step. In summary, we propose systems that enhance the user's convenience and experience by utilizing wireless infrastructure such as Wi-Fi and BLE together with the various sensors equipped in smartphones, such as the accelerometer, gyroscope, and barometer. The systems are implemented on commercial smartphones and their performance is verified through experiments. As a result, the systems show excellent performance that can enhance the user's experience. (A generic step-detection sketch related to PYLON's path-estimation stage follows this entry.)
    Table of contents:
    1 Introduction
      1.1 Motivation
      1.2 Overview of Existing Approaches
        1.2.1 Wi-Fi handoff for smartphones
        1.2.2 Indoor path estimation and localization
        1.2.3 Indoor navigation
      1.3 Main Contributions
        1.3.1 BLEND: BLE Beacon-aided Fast Handoff for Smartphones
        1.3.2 PYLON: Smartphone Based Indoor Path Estimation and Localization without Human Intervention
        1.3.3 FINISH: Fully-automated Indoor Navigation using Smartphones with Zero Human Assistance
      1.4 Organization of Dissertation
    2 BLEND: BLE Beacon-Aided Fast Wi-Fi Handoff for Smartphones
      2.1 Introduction
      2.2 Related Work
        2.2.1 Wi-Fi-based Handoff
        2.2.2 WPAN-aided AP Discovery
      2.3 Background
        2.3.1 Handoff Procedure in IEEE 802.11
        2.3.2 BSS Load Element in IEEE 802.11
        2.3.3 Bluetooth Low Energy
      2.4 Sticky Client Problem
        2.4.1 Sticky Client Problem of Commercial Smartphone
        2.4.2 Cause of Sticky Client Problem
      2.5 BLEND: Proposed Scheme
        2.5.1 Advantages and Necessities of BLE as Secondary Low-Power Radio
        2.5.2 Overall Architecture
        2.5.3 AP Operation
        2.5.4 Smartphone Operation
        2.5.5 Verification of aTH Estimation
      2.6 Performance Evaluation
        2.6.1 Implementation and Measurement Setup
        2.6.2 Saturated Traffic Scenario
        2.6.3 Video Streaming Scenario
      2.7 Summary
    3 PYLON: Smartphone based Indoor Path Estimation and Localization without Human Intervention
      3.1 Introduction
      3.2 Background and Related Work
        3.2.1 Infrastructure-Based Localization
        3.2.2 Fingerprint-Based Localization
        3.2.3 Model-Based Localization
        3.2.4 Dead Reckoning
        3.2.5 Landmark-Based Localization
        3.2.6 Simultaneous Localization and Mapping (SLAM)
      3.3 System Overview
        3.3.1 Notable RSSI Signature
        3.3.2 Smartphone Operation
        3.3.3 Server Operation
      3.4 Path Estimation
        3.4.1 Step Detection
        3.4.2 Step Length Estimation
        3.4.3 Walking Direction
        3.4.4 Location Update
      3.5 Landmark Correction Part 1: Virtual Room Generation
        3.5.1 RSSI Stacking Difference
        3.5.2 Virtual Room Generation
        3.5.3 Virtual Graph Generation
        3.5.4 Physical Graph Generation
      3.6 Landmark Correction Part 2: From Floor Plan Mapping to Path Correction
        3.6.1 Candidate Graph Generation
        3.6.2 Backbone Node Mapping
        3.6.3 Dead-end Node Mapping
        3.6.4 Final Candidate Graph Selection
        3.6.5 Door Passing Time Detection
        3.6.6 Path Correction
      3.7 Particle Filter
      3.8 Performance Evaluation
        3.8.1 Implementation and Measurement Setup
        3.8.2 Step Detection Accuracy
        3.8.3 Floor Plan Mapping Accuracy
        3.8.4 Door Passing Time
        3.8.5 Walking Direction and Localization Performance
        3.8.6 Impact of WiFi AP and BLE Beacon Number
        3.8.7 Impact of Walking Distance and Speed
        3.8.8 Performance on Different Areas
      3.9 Summary
    4 FINISH: Fully-automated Indoor Navigation using Smartphones with Zero Human Assistance
      4.1 Introduction
      4.2 Related Work
        4.2.1 Localization-based Navigation System
        4.2.2 Peer-to-peer Navigation System
      4.3 System Overview
        4.3.1 System Architecture
        4.3.2 An Example for Navigation
      4.4 Level Change Detection and Floor Decision
        4.4.1 Level Change Detection
      4.5 Real-time Navigation
        4.5.1 Initial Floor and Location Decision
        4.5.2 Orientation Adjustment
        4.5.3 Shortest Path Estimation
      4.6 Performance Evaluation
        4.6.1 Initial Location Accuracy
        4.6.2 Real-Time Navigation Accuracy
      4.7 Summary
    5 Conclusion
      5.1 Research Contributions
      5.2 Future Work
    Abstract (in Korean)
    Acknowledgments
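    The abstract does not reproduce PYLON's algorithms, but its path-estimation stage begins with step detection from smartphone inertial data. The following generic peak-detection sketch illustrates that kind of processing; the thresholds are chosen for illustration, not taken from the dissertation.

```python
import numpy as np


def detect_steps(acc_xyz, fs_hz, min_peak=10.8, min_gap_s=0.3):
    """
    Generic step detection from 3-axis accelerometer samples (N x 3, m/s^2):
    find local peaks of the acceleration magnitude above `min_peak` that are
    at least `min_gap_s` apart. Thresholds are illustrative, not PYLON's.
    Returns the sample indices of detected steps.
    """
    mag = np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)
    min_gap = int(min_gap_s * fs_hz)
    steps, last = [], -min_gap
    for i in range(1, len(mag) - 1):
        is_peak = mag[i] > min_peak and mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]
        if is_peak and i - last >= min_gap:
            steps.append(i)
            last = i
    return steps
```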

    Learning to Fly by Crashing

    Full text link
    How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts; however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation, but the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective in navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For supplementary video see: https://youtu.be/u151hJaGKU
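    As a rough sketch of the self-supervised labeling idea described above, the snippet below marks frames recorded just before a crash as negative examples and earlier frames as positive. The margin is an illustrative choice, not the paper's exact data split.

```python
import numpy as np


def label_trajectory_frames(num_frames, crash_margin=10):
    """
    Self-supervised labeling in the spirit of the crash dataset above: the
    last `crash_margin` frames of a trajectory (close to the crash) become
    negatives ("do not fly forward"), earlier frames become positives.
    Returns an array of {0, 1} labels, one per frame.
    """
    labels = np.ones(num_frames, dtype=int)
    labels[max(0, num_frames - crash_margin):] = 0
    return labels
```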

    Toward a unified PNT, Part 1: Complexity and context: Key challenges of multisensor positioning

    Get PDF
    The next generation of navigation and positioning systems must provide greater accuracy and reliability in a range of challenging environments to meet the needs of a variety of mission-critical applications. No single navigation technology is robust enough to meet these requirements on its own, so a multisensor solution is required. Known environmental features, such as signs, buildings, terrain height variation, and magnetic anomalies, may or may not be available for positioning. The system could be stationary, carried by a pedestrian, or mounted on any type of land, sea, or air vehicle. Furthermore, for many applications, the environment and host behavior are subject to change. The resulting need for expert knowledge is compounded by the fact that different modules in an integrated navigation system are often supplied by different organizations, which may be reluctant to share necessary design information if it is considered intellectual property that must be protected.