155 research outputs found

    An Adaptive Human Activity-Aided Hand-Held Smartphone-Based Pedestrian Dead Reckoning Positioning System

    Pedestrian dead reckoning (PDR), enabled by smartphones’ embedded inertial sensors, is widely applied as a type of indoor positioning system (IPS). However, traditional PDR faces two challenges that limit its accuracy: a lack of robustness across different PDR-related human activities and the accumulation of positioning error over time. To cope with these issues, we propose a novel adaptive human activity-aided PDR (HAA-PDR) IPS that consists of two main parts: human activity recognition (HAR) and PDR optimization. (1) For HAR, eight different locomotion-related activities are divided into two classes: steady-heading activities (ascending/descending stairs, stationary, normal walking, stationary stepping, and lateral walking) and non-steady-heading activities (door opening and turning). A hierarchical combination of a support vector machine (SVM) and a decision tree (DT) is used to recognize steady-heading activities, while an autoencoder-based deep neural network (DNN) and a heading range-based method are used to recognize door opening and turning, respectively. The overall HAR accuracy is over 98.44%. (2) For optimization, a process automatically sets the PDR parameters differently for each activity to improve step counting and step length estimation. Furthermore, a trajectory optimization method mitigates PDR error accumulation by exploiting the non-steady-heading activities: we divide the trajectory into small segments and reconstruct it after targeted optimization of each segment. Our method does not use any a priori knowledge of the building layout, plan, or map. Finally, the mean positioning error of our HAA-PDR in a multilevel building is 1.79 m, a significant improvement in accuracy over a baseline state-of-the-art PDR system.
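    The heading range-based cue for non-steady-heading activity can be illustrated with a short sketch. This is not the paper's implementation; the window length and the 45-degree threshold below are invented for illustration:

    ```python
    import numpy as np

    def label_windows(heading_deg, win=50, turn_range_deg=45.0):
        """Label fixed-size windows of a heading trace as steady or turning."""
        labels = []
        for start in range(0, len(heading_deg) - win + 1, win):
            swept = np.ptp(heading_deg[start:start + win])  # heading range swept in window
            labels.append("turning" if swept > turn_range_deg else "steady")
        return labels

    rng = np.random.default_rng(0)
    straight = 10.0 + rng.normal(0.0, 1.0, 50)   # near-constant heading with sensor noise
    turn = np.linspace(10.0, 100.0, 50)          # a 90-degree turn
    labels = label_windows(np.concatenate([straight, turn]))
    print(labels)
    ```

    A window whose heading sweeps beyond the threshold is treated as a turn, so it can later anchor a trajectory-segment boundary.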

    Map++: A Crowd-sensing System for Automatic Map Semantics Identification

    Digital maps have become part of our daily life through a number of commercial and free map services. These services still have huge potential for enrichment with semantic information to support a large class of mapping applications. In this paper, we present Map++, a system that leverages standard cell-phone sensors in a crowd-sensing approach to automatically enrich digital maps with different road semantics such as tunnels, bumps, bridges, footbridges, crosswalks, and road capacity. Our analysis shows that cell-phone sensors, carried by humans in vehicles or on foot, are affected by different road features, and that these effects can be mined to extend the features of both free and commercial mapping services. We present the design and implementation of Map++ and evaluate it in a large city. Our evaluation shows that we can detect the different semantics accurately, with at most a 3% false positive rate and a 6% false negative rate for both vehicle- and pedestrian-based features. Moreover, we show that Map++ has a small energy footprint on cell-phones, highlighting its promise as a ubiquitous digital-map enrichment service. Comment: Published in the Eleventh Annual IEEE International Conference on Sensing, Communication, and Networking (IEEE SECON 2014).
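    As a hedged illustration of how an inertial trace can expose a road feature (this is not Map++'s actual detector), a bump can be flagged as a spike in the sliding-window variance of the vertical accelerometer signal; the window size, threshold, and data below are arbitrary:

    ```python
    import numpy as np

    def detect_bumps(accel_z, win=20, thresh=0.2):
        """Return start indices of windows whose variance exceeds thresh."""
        return [i for i in range(0, len(accel_z) - win + 1, win)
                if np.var(accel_z[i:i + win]) > thresh]

    rng = np.random.default_rng(1)
    ride = rng.normal(0.0, 0.05, 100)            # smooth road: low-variance noise
    ride[60:65] += [2.0, -2.0, 1.5, -1.0, 0.5]   # short transient from a bump
    print(detect_bumps(ride))
    ```

    Crowd-sensing then aggregates many such per-trip detections at the same map location to reject false positives.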

    RuDaCoP: The Dataset for Smartphone-based Intellectual Pedestrian Navigation

    This paper presents a large and diverse dataset for the development of smartphone-based pedestrian navigation algorithms. The dataset consists of about 1200 sets of inertial measurements from the sensors of several smartphones. The measurements were collected while walking along different trajectories up to 10 minutes long. The data are accompanied by high-accuracy ground truth collected with two foot-mounted inertial measurement units and post-processed by the presented algorithms. The dataset is suitable both for training learning-based pedestrian navigation algorithms and for developing pedestrian navigation algorithms based on classical approaches. The dataset is accessible at http://gartseev.ru/projects/ipin2019

    Motion Compatibility for Indoor Localization

    Indoor localization -- a device's ability to determine its location within an extended indoor environment -- is a fundamental enabling capability for mobile context-aware applications. Many proposed applications assume localization information from GPS or from WiFi access points. However, GPS fails indoors and in urban canyons, and current WiFi-based methods require an expensive, manually intensive mapping, calibration, and configuration process performed by skilled technicians to bring the system online for end users. We describe a method that estimates indoor location with respect to a prior map consisting of a set of 2D floorplans linked through horizontal and vertical adjacencies. Our main contribution is the notion of "path compatibility," in which the sequential output of a classifier of inertial data producing low-level motion estimates (standing still, walking straight, going upstairs, turning left, etc.) is examined for agreement with the prior map. Path compatibility is encoded in an HMM-based matching model, from which the method recovers the user's location trajectory from the low-level motion estimates. To recognize user motions, we present a motion labeling algorithm that extracts fine-grained user motions from the sensor data of handheld mobile devices. We propose "feature templates," which allow the motion classifier to learn the optimal window size for a specific combination of a motion and a sensor feature function. We show that, using only proprioceptive data of the quality typically available on a modern smartphone, our motion labeling algorithm classifies user motions with 94.5% accuracy, and our trajectory matching algorithm can recover the user's location to within 5 meters on average after one minute of movement from an unknown starting location. Prior information, such as a known starting floor, further decreases the time required to obtain a precise location estimate.
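    The "path compatibility" idea — scoring a motion-label sequence against map adjacencies with an HMM — can be sketched as a toy Viterbi decode. The three-node map, transition table, and emission probabilities below are invented for illustration and are far simpler than the paper's model:

    ```python
    import math

    states = ["hall_A", "stairs", "hall_B"]
    # Map adjacency as transition probabilities: hall_B is only reachable
    # from hall_A via the stairs (zero-probability transitions are blocked).
    trans = {
        "hall_A": {"hall_A": 0.6, "stairs": 0.4, "hall_B": 0.0},
        "stairs": {"hall_A": 0.2, "stairs": 0.3, "hall_B": 0.5},
        "hall_B": {"hall_A": 0.0, "stairs": 0.3, "hall_B": 0.7},
    }
    # Emission: which low-level motion label each map element tends to produce.
    emit = {
        "hall_A": {"walk": 0.9, "upstairs": 0.1},
        "stairs": {"walk": 0.1, "upstairs": 0.9},
        "hall_B": {"walk": 0.9, "upstairs": 0.1},
    }

    def viterbi(obs):
        """Most map-compatible state sequence for a motion-label sequence."""
        logp = {s: math.log(1 / 3) + math.log(emit[s][obs[0]]) for s in states}
        back = []
        for o in obs[1:]:
            new, ptr = {}, {}
            for s in states:
                cands = {p: logp[p] + math.log(trans[p][s] or 1e-12) for p in states}
                best = max(cands, key=cands.get)
                new[s] = cands[best] + math.log(emit[s][o])
                ptr[s] = best
            logp, back = new, back + [ptr]
        path = [max(logp, key=logp.get)]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return path[::-1]

    print(viterbi(["walk", "upstairs", "walk"]))
    ```

    The decode places the "upstairs" observation on the stairs node because the map forbids reaching hall_B any other way — the essence of using the prior map to constrain noisy motion estimates.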

    Improving Foot-Mounted Inertial Navigation Through Real-Time Motion Classification

    We present a method to improve the accuracy of a foot-mounted, zero-velocity-aided inertial navigation system (INS) by varying estimator parameters based on a real-time classification of motion type. We train a support vector machine (SVM) classifier using inertial data recorded by a single foot-mounted sensor to differentiate between six motion types (walking, jogging, running, sprinting, crouch-walking, and ladder-climbing) and report a mean test classification accuracy of over 90% on a dataset with five different subjects. From these motion types, we select two of the most common (walking and running) and describe a method to compute optimal zero-velocity detection parameters tailored to both a specific user and motion type by maximizing the detector F-score. By combining the motion classifier with a set of optimal detection parameters, we show how we can reduce INS position error during mixed walking and running motion. We evaluate our adaptive system on a total of 5.9 km of indoor pedestrian navigation performed by five different subjects moving along a 130 m path with surveyed ground truth markers. Comment: In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN'17), Sapporo, Japan, Sep. 18-21, 2017.
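    The idea of motion-specific detector parameters can be sketched as follows. The detector here is a simple per-window variance test and the thresholds are made up; the paper instead tunes its zero-velocity detector by maximizing the F-score per user and motion type:

    ```python
    import numpy as np

    # Hypothetical per-motion variance thresholds (the paper learns its
    # detector parameters by maximizing F-score per user and motion type).
    THRESH = {"walking": 0.5, "running": 4.0}

    def zero_velocity_flags(accel_mag, motion, win=10):
        """True for windows where the foot appears stationary (stance phase)."""
        t = THRESH[motion]
        return [bool(np.var(accel_mag[i:i + win]) < t)
                for i in range(0, len(accel_mag) - win + 1, win)]

    stance = np.full(10, 9.81)                                 # foot flat: constant gravity
    swing = 9.81 + 3.0 * np.sin(np.linspace(0.0, 6.28, 10))    # foot in motion
    flags = zero_velocity_flags(np.concatenate([stance, swing]), "walking")
    print(flags)
    ```

    Each detected stance window triggers a zero-velocity pseudo-measurement that resets the INS velocity error; a threshold tuned for walking would misfire during running, which is why the classifier selects the parameter set at run time.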

    Enabling smart city resilience: post-disaster response and structural health monitoring

    The concept of Smart Cities has been introduced to categorize a vast area of activities to enhance the quality of life of citizens. A central feature of these activities is the pervasive use of Information and Communication Technologies (ICT), helping cities to make better use of limited resources. Indeed, the ASCE Vision for Civil Engineering in 2025 (ASCE 2007) portends a future in which engineers will rely on and leverage real-time access to a living database, sensors, diagnostic tools, and other advanced technologies to ensure that informed decisions are made. However, these advances in technology take place against a backdrop of the deterioration of infrastructure, in addition to natural and human-made disasters. Moreover, recent events constantly remind us of the tremendous devastation that natural and human-made disasters can wreak on society. As such, emergency response procedures and resilience are among the crucial dimensions of any Smart City plan. The U.S. Department of Homeland Security (DHS) has recently launched plans to invest $50 million to develop cutting-edge emergency response technologies for Smart Cities. Furthermore, after significant disasters have taken place, it is imperative that emergency facilities and evacuation routes, including bridges and highways, be assessed for safety. The objective of this research is to provide a new framework that uses commercial off-the-shelf (COTS) devices such as smartphones, digital cameras, and unmanned aerial vehicles to enhance the functionality of Smart Cities, especially with respect to emergency response and civil infrastructure monitoring/assessment. To achieve this objective, this research focuses on post-disaster victim localization and assessment, first responder tracking and event localization, and vision-based structural monitoring/assessment, including the use of unmanned aerial vehicles (UAVs). This research constitutes a significant step toward the realization of Smart City Resilience.


    Applying multimodal sensing to human location estimation

    Mobile devices like smartphones and smartwatches are beginning to "stick" to the human body. Given that these devices are equipped with a variety of sensors, they are becoming a natural platform for understanding various aspects of human behavior. This dissertation focuses on just one dimension of human behavior, namely "location". We begin by discussing our research on localizing humans in indoor environments, a problem that requires precise tracking of human footsteps. We investigated the benefits of bringing smartphone sensors (accelerometers, gyroscopes, magnetometers, etc.) into the indoor localization framework, breaking away from purely radio frequency-based localization (e.g., cellular, WiFi). Our research leveraged inherent properties of indoor environments to perform localization. We also designed additional solutions in which computer vision was integrated with sensor fusion to offer highly precise localization. We close this thesis with micro-scale tracking of the human wrist and demonstrate how motion data processing is indeed a "double-edged sword", offering unprecedented utility on one hand while breaching privacy on the other.

    Indoor positioning for smartphones without infrastructure and user adaptable

    Given that classic outdoor positioning solutions such as GPS (Global Positioning System) and other GNSS (Global Navigation Satellite System) constellations do not work indoors, multiple alternatives for indoor location have emerged. These solutions usually require extensive and complex installations, which involve high costs. In this thesis we present a robust indoor positioning solution for smartphones that maximizes location accuracy while minimizing the required infrastructure. We have considered two main modes of displacement: walking and traveling in a vehicle. Our solution is robust to different users, allows them to carry the phone in different positions, and allows them to use the device freely while performing different daily activities, such as walking, driving, or going up and down stairs. We achieved this by developing a robust indoor positioning system that combines information from multiple sources, such as radio frequency readings and inertial sensors.
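    One minimal way to combine the two information sources mentioned above — inertial dead reckoning and radio frequency position fixes — is a scalar-gain complementary correction. This is a generic sketch, not the thesis's actual fusion algorithm; the gain and the measurements are invented:

    ```python
    def fuse(pdr_pos, rf_pos, gain=0.3):
        """Pull the dead-reckoned estimate toward the RF fix by a fixed gain."""
        return tuple(p + gain * (r - p) for p, r in zip(pdr_pos, rf_pos))

    est = (0.0, 0.0)
    steps = [(1.0, 0.0), (1.0, 0.0)]        # per-step PDR displacements (m)
    rf_fixes = [(1.2, 0.1), (2.5, -0.2)]    # noisy RF position fixes (m)
    for (dx, dy), rf in zip(steps, rf_fixes):
        est = (est[0] + dx, est[1] + dy)    # dead-reckoning prediction
        est = fuse(est, rf)                 # radio frequency correction
    print(est)
    ```

    Dead reckoning is smooth but drifts, while RF fixes are noisy but drift-free; the gain trades one error source against the other, which is the same trade-off a Kalman-style fusion makes with a dynamically computed gain.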