569 research outputs found

    Pedestrian Dead Reckoning in Multiple Smartphone Poses Using a Fusion of Integration and Parametric Approaches

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, August 2020. Advisor: Chan Gook Park.
    In this dissertation, an IA-PA fusion-based PDR (Pedestrian Dead Reckoning) using low-cost inertial sensors is proposed to improve indoor position estimation. Specifically, an IA (Integration Approach)-based PDR algorithm combined with measurements from the PA (Parametric Approach) is constructed so that the algorithm operates even in the various poses that occur when a pedestrian moves indoors with a smartphone. In addition, an ellipsoidal method is used to estimate the device attitude robustly under disturbances, and machine learning-based pose recognition makes it possible to vary the measurement update according to the pose, further improving position estimation performance.
    First, an adaptive attitude estimation technique based on the ellipsoid method is proposed to accurately estimate the direction of movement of a smartphone. The AHRS (Attitude and Heading Reference System) computes the attitude from the gyro and uses an accelerometer and a magnetometer as measurements to compensate for the drift caused by gyro sensor errors. Attitude estimation performance generally degrades under acceleration and geomagnetic disturbances; to improve it effectively, this dissertation proposes an ellipsoid-based adaptive attitude estimation technique. When a measurement disturbance occurs, adjusting the measurement covariance with the ellipsoid method according to the direction of the disturbance allows a more accurate measurement update than adaptive techniques that ignore direction. In particular, when the disturbance enters along only one axis, the proposed algorithm can still use the measurement partially by updating the other two axes. Rate table and motion capture experiments confirm the effectiveness of the proposed algorithm for attitude estimation under disturbances.
    Next, a PDR algorithm that integrates the IA and the PA is proposed so that it can operate in various poses. When a pedestrian moves indoors with a smartphone there are many degrees of freedom, so poses such as making a phone call, texting, and carrying the phone in a pants pocket all occur. Existing smartphone-based positioning algorithms estimate the position with the PA, which is valid only when the pedestrian's walking direction and the device heading coincide; when they do not, the angular mismatch produces a large position error. To solve this problem, this dissertation constructs the state variables based on the IA and uses the position vector from the PA as a measurement. When the pose recognized by the machine learning technique indicates that the walking direction and the device heading do not match, the position is updated using the walking direction computed with PCA (Principal Component Analysis) and the step length obtained from the PA, so the algorithm operates robustly across the poses that occur during walking. Experiments over various operating conditions and paths confirm that the proposed method estimates the position stably and improves performance in diverse indoor environments.
    Table of contents: Chapter 1 Introduction (Motivation and Background; Objectives and Contribution; Organization of the Dissertation). Chapter 2 Pedestrian Dead Reckoning System (Overview of Pedestrian Dead Reckoning; Parametric Approach: step detection, step length estimation, heading estimation; Integration Approach: extended Kalman filter, INS-EKF-ZUPT; Activity Recognition using Machine Learning: challenges in HAR, activity recognition chain). Chapter 3 Attitude Estimation in Smartphone (Adaptive Attitude Estimation in Smartphone: indirect Kalman filter-based attitude estimation, conventional attitude estimation algorithms, adaptive attitude estimation using ellipsoidal methods; Experimental Results: simulation, rate table, handheld rotation, magnetic disturbance; Summary). Chapter 4 Pedestrian Dead Reckoning in Multiple Poses of a Smartphone (System Overview; Machine Learning-based Pose Classification: training dataset, feature extraction and selection, pose classification using supervised learning; Fusion of the Integration and Parametric Approaches in PDR: system model, measurement model, mode selection, observability analysis; Experimental Results: AHRS, PCA, IA-PA; Summary). Chapter 5 Conclusions (Summary of the Contributions; Future Works). Abstract in Korean; Acknowledgements.

    An Adaptive Human Activity-Aided Hand-Held Smartphone-Based Pedestrian Dead Reckoning Positioning System

    Pedestrian dead reckoning (PDR), enabled by smartphones' embedded inertial sensors, is widely applied as a type of indoor positioning system (IPS). However, traditional PDR faces two challenges to improving its accuracy: a lack of robustness across different PDR-related human activities and positioning error accumulation over time. To cope with these issues, we propose a novel adaptive human activity-aided PDR (HAA-PDR) IPS that consists of two main parts, human activity recognition (HAR) and PDR optimization. (1) For HAR, eight different locomotion-related activities are divided into two classes: steady-heading activities (ascending/descending stairs, stationary, normal walking, stationary stepping, and lateral walking) and non-steady-heading activities (door opening and turning). A hierarchical combination of a support vector machine (SVM) and a decision tree (DT) is used to recognize steady-heading activities, while an autoencoder-based deep neural network (DNN) and a heading range-based method are used to recognize door opening and turning, respectively. The overall HAR accuracy is over 98.44%. (2) For optimization, a process automatically sets the PDR parameters differently for each activity to enhance step counting and step length estimation. Furthermore, a trajectory optimization method mitigates PDR error accumulation by utilizing the non-steady-heading activities: we divide the trajectory into small segments and reconstruct it after targeted optimization of each segment. Our method does not use any a priori knowledge of the building layout, plan, or map. Finally, the mean positioning error of our HAA-PDR in a multilevel building is 1.79 m, a significant improvement in accuracy compared with a baseline state-of-the-art PDR system.
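    As a rough illustration of the hierarchical SVM-plus-decision-tree recognizer described above, the following Python sketch (scikit-learn) first separates steady-heading from non-steady-heading windows with an SVM and then refines steady-heading windows with a decision tree. The features, labels, and hyperparameters are synthetic placeholders for illustration, not the ones used in the paper.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)

        # Hypothetical windowed IMU features: e.g. mean |acc|, std |acc|, dominant step frequency.
        X = rng.normal(size=(300, 3))
        coarse_y = rng.integers(0, 2, size=300)   # 0 = steady-heading, 1 = non-steady-heading
        fine_y = rng.integers(0, 5, size=300)     # fine-grained steady-heading activities

        # Stage 1: SVM separates the two coarse classes.
        svm = SVC(kernel="rbf").fit(X, coarse_y)

        # Stage 2: decision tree refines only the steady-heading windows.
        steady_mask = coarse_y == 0
        dt = DecisionTreeClassifier(max_depth=5).fit(X[steady_mask], fine_y[steady_mask])

        def classify(window_features):
            """Two-stage prediction: SVM picks the coarse class, DT refines it."""
            coarse = svm.predict(window_features.reshape(1, -1))[0]
            if coarse == 0:
                return "steady-heading", int(dt.predict(window_features.reshape(1, -1))[0])
            return "non-steady-heading", None

        print(classify(X[0]))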

    Advanced Map Matching Technologies and Techniques for Pedestrian/Wheelchair Navigation

    Due to the constantly increasing technical capabilities of mobile devices (such as smartphones), pedestrian/wheelchair navigation has recently attracted a high level of interest as one of smartphones' potential mobile applications. While vehicle navigation systems have already reached a certain level of maturity, pedestrian/wheelchair navigation services are still in their infancy. By comparison with vehicle navigation systems, a set of map matching requirements and challenges unique to pedestrian/wheelchair navigation is identified. To provide navigation assistance to pedestrians and wheelchair users, there is a need for the design and development of new map matching techniques. The main goal of this research is to investigate and develop advanced map matching technologies and techniques specifically for pedestrian/wheelchair navigation services. As the first step in map matching, an adaptive candidate segment selection algorithm is developed to efficiently find candidate segments. Furthermore, to narrow down the search for the correct segment, advanced mathematical models are applied: GPS-based chain-code map matching, Hidden Markov Model (HMM) map matching, and fuzzy-logic map matching algorithms are developed to estimate the real-time location of users in pedestrian/wheelchair navigation systems/services. Nevertheless, the GPS signal is not always available in areas with high-rise buildings, and even when a signal is present, its accuracy may not be high enough to localize pedestrians and wheelchair users on sidewalks. To overcome these shortcomings of GPS, multi-sensor integrated map matching algorithms are investigated and developed in this research. These include a movement pattern recognition algorithm, using accelerometer and compass data, and a vision-based positioning algorithm to fill in gaps in GPS positioning. Experiments are conducted to evaluate the developed algorithms using real field test data (GPS coordinates and other sensor data). The experimental results show that the developed algorithms and the integrated sensors, i.e., monocular visual odometry, GPS, an accelerometer, and a compass, can provide high-quality and uninterrupted localization services in pedestrian/wheelchair navigation systems/services. The map matching techniques developed in this work can be applied to various pedestrian/wheelchair navigation applications, such as tracking senior citizens and children or tourist service systems, and can be further utilized in building walking robots and automatic wheelchair navigation systems.
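    HMM map matching of the kind mentioned above is commonly solved with Viterbi decoding over candidate segments, with emission likelihoods derived from the GPS-to-segment distance and transitions favouring staying on or moving to connected segments. The sketch below is a generic illustration of that formulation under stated assumptions (Gaussian emissions with an assumed 5 m GPS sigma, a toy transition matrix), not the specific algorithm developed in this dissertation.

        import numpy as np

        def hmm_map_match(dists, trans_logp, sigma=5.0):
            """Viterbi decoding of the most likely road-segment sequence.

            dists: (T, S) perpendicular distance from each GPS fix to each candidate segment (m)
            trans_logp: (S, S) log transition probabilities between segments
            sigma: assumed GPS noise (m) for the Gaussian emission model
            Returns the matched segment index at each epoch.
            """
            emis_logp = -0.5 * (dists / sigma) ** 2      # Gaussian emission log-likelihood
            T, S = dists.shape
            delta = emis_logp[0].copy()
            back = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + trans_logp     # previous state + transition
                back[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0) + emis_logp[t]
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):                # backtrace the best path
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        # Toy example: 3 candidate segments, 4 GPS fixes.
        dists = np.array([[2.0, 15.0, 30.0],
                          [3.0, 10.0, 25.0],
                          [12.0, 2.0, 20.0],
                          [18.0, 1.0, 15.0]])
        trans = np.log(np.array([[0.80, 0.15, 0.05],
                                 [0.15, 0.80, 0.05],
                                 [0.05, 0.15, 0.80]]))
        print(hmm_map_match(dists, trans))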

    Deep Sensing: Inertial and Ambient Sensing for Activity Context Recognition using Deep Convolutional Neural Networks

    With the widespread use of the embedded sensing capabilities of mobile devices, there has been unprecedented development of context-aware solutions, enabling the proliferation of various intelligent applications such as remote health and lifestyle monitoring and intelligent personalized services. However, activity context recognition based on multivariate time series signals obtained from mobile devices in unconstrained conditions is naturally prone to class imbalance problems: recognition models tend to predict the classes with the largest number of samples whilst ignoring the classes with the fewest samples, resulting in poor generalization. To address this problem, we propose augmenting the time series signals from inertial sensors with signals from ambient sensing to train deep convolutional neural network (DCNN) models. DCNNs capture the local dependency and scale invariance of these combined sensor signals. We developed one DCNN model using only inertial sensor signals and another that combines signals from both inertial and ambient sensors, in order to investigate the class imbalance problem by improving the performance of the recognition model. Evaluation and analysis of the proposed system using data with imbalanced classes show that the system achieves better recognition accuracy when data from inertial sensors are combined with data from ambient sensors, such as environmental noise level and illumination, with an overall accuracy improvement of 5.3%.
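    A minimal sketch of the kind of 1D DCNN described above, operating on stacked inertial and ambient channels, might look as follows (PyTorch). The channel layout (3-axis accelerometer, 3-axis gyroscope, ambient noise level, illumination), window length, and layer sizes are assumptions for illustration, not the architecture reported in the paper.

        import torch
        import torch.nn as nn

        class DeepSensingCNN(nn.Module):
            """Illustrative 1D DCNN over stacked inertial + ambient channels."""

            def __init__(self, n_channels=8, n_classes=6, window=128):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                )
                # Two pooling layers shrink the window by a factor of 4.
                self.classifier = nn.Linear(64 * (window // 4), n_classes)

            def forward(self, x):            # x: (batch, channels, window)
                z = self.features(x)
                return self.classifier(z.flatten(1))

        model = DeepSensingCNN()
        dummy = torch.randn(4, 8, 128)       # batch of 4 sensor windows
        print(model(dummy).shape)            # torch.Size([4, 6]) class scores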

    Off-line evaluation of indoor positioning systems in different scenarios: the experiences from IPIN 2020 competition

    Every year, for ten years now, the IPIN competition has aimed at evaluating real-world indoor localisation systems by testing them in a realistic environment, with realistic movement, using the EvAAL framework. The competition has provided a unique overview of the state of the art of systems, technologies, and methods for indoor positioning and navigation purposes. Through fair comparison of the performance achieved by each system, the competition was able to identify the most promising approaches and to pinpoint the most critical working conditions. In 2020, the competition included five diverse off-site Tracks, each resembling real use cases and challenges for indoor positioning. The results in terms of participation and accuracy of the proposed systems have been encouraging. The best performing competitors obtained a third quartile of error of 1 m for the Smartphone Track and 0.5 m for the Foot-mounted IMU Track. Although the systems were evaluated off-line as algorithms rather than on physical systems, these results represent impressive achievements. Track 3 organizers were supported by the European Union's Horizon 2020 Research and Innovation programme under the Marie Skłodowska-Curie Grant 813278 (A-WEAR: A network for dynamic WEarable Applications with pRivacy constraints), MICROCEBUS (MICINN, ref. RTI2018-095168-B-C55, MCIU/AEI/FEDER UE), INSIGNIA (MICINN ref. PTQ2018-009981), and REPNIN+ (MICINN, ref. TEC2017-90808-REDT). We would like to thank the UJI Library managers and employees for their support while collecting the required datasets for Track 3. Track 5 organizers were supported by the JST-OPERA Program, Japan, under Grant JPMJOP1612. Track 7 organizers were supported by the Bavarian Ministry for Economic Affairs, Infrastructure, Transport and Technology through the Center for Analytics-Data-Applications (ADA-Center) within the framework of "BAYERN DIGITAL II". Team UMinho (Track 3) was supported by FCT (Fundação para a Ciência e Tecnologia) within the R&D Units Project Scope under Grant UIDB/00319/2020, and the Ph.D. Fellowship under Grant PD/BD/137401/2018. Team YAI (Track 3) was supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant MOST 109-2221-E-197-026. Team Indora (Track 3) was supported in part by the Slovak Grant Agency, Ministry of Education and Academy of Science, Slovakia, under Grant 1/0177/21, and in part by the Slovak Research and Development Agency under Contract APVV-15-0091. Team TJU (Track 3) was supported in part by the National Natural Science Foundation of China under Grant 61771338 and in part by the Tianjin Research Funding under Grant 18ZXRHSY00190. Team Next-Newbie Reckoners (Track 3) were supported by the Singapore Government through the Industry Alignment Fund - Industry Collaboration Projects Grant. This research was conducted at the Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU), a collaboration between Singapore Telecommunications Limited (Singtel) and Nanyang Technological University (NTU). Team KawaguchiLab (Track 5) was supported by JSPS KAKENHI under Grant JP17H01762. Team WHU&AutoNavi (Track 6) was supported by the National Key Research and Development Program of China under Grant 2016YFB0502202. Team YAI (Tracks 6 and 7) was supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant MOST 110-2634-F-155-001.
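    The third quartile of error quoted above corresponds to the 75th percentile of the horizontal Euclidean error between estimated and reference positions over all evaluation points. A small sketch of that computation, with toy coordinates, is given below; the function name and data are illustrative only.

        import numpy as np

        def third_quartile_error(est_xy, ref_xy):
            """75th percentile of horizontal positioning error (m), EvAAL-style point metric."""
            errors = np.linalg.norm(np.asarray(est_xy) - np.asarray(ref_xy), axis=1)
            return np.percentile(errors, 75)

        # Toy check: estimated vs reference positions in metres.
        est = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.5), (3.0, 2.0)]
        ref = [(0.2, 0.1), (1.0, 1.5), (2.0, 2.0), (2.0, 2.0)]
        print(round(third_quartile_error(est, ref), 2))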

    Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation

    Positioning in navigation systems is predominantly performed by Global Navigation Satellite Systems (GNSSs). However, while GNSS-enabled devices have become commonplace for outdoor navigation, their use for indoor navigation is hindered by GNSS signal degradation or blockage. Consequently, the development of alternative positioning approaches and techniques for navigation systems is an ongoing research topic. In this dissertation, I present a new approach that addresses three major navigational problems: indoor positioning, obstacle detection, and keyframe detection. The proposed approach utilizes the inertial and visual sensors available on smartphones and focuses on developing: a framework for monocular visual-inertial odometry (VIO) that positions a human or object using sensor fusion and deep learning in tandem; an unsupervised algorithm to detect obstacles using a sequence of visual data; and a supervised context-aware keyframe detection method. The underlying technique for monocular VIO is a recurrent convolutional neural network that computes the six-degree-of-freedom (6DoF) pose in an end-to-end fashion, together with an extended Kalman filter module that fine-tunes the scale parameter based on inertial observations and manages errors. I compare the results of my featureless technique with those of conventional feature-based VIO techniques and with manually scaled results. The comparison shows that while the framework is more effective than the featureless baseline and improves accuracy, the feature-based method still outperforms the proposed approach. The approach for obstacle detection is based on processing two consecutive images. Experiments comparing my approach with two other widely used algorithms show that my algorithm performs better, with 82% precision compared with 69%. To determine a suitable frame-extraction rate from the video stream, I analyzed the camera's movement patterns and inferred the user's context to generate a model associating movement anomalies with an appropriate frame-extraction rate. The output of this model was used to determine the rate of keyframe extraction in visual odometry (VO). I defined and computed the effective frames for VO and used this approach for context-aware keyframe detection. The results show that using inertial data to infer the suitable frames decreases the number of frames required.
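    One way to picture the EKF scale refinement mentioned above is a scalar Kalman filter whose state is the monocular scale factor, updated whenever an inertially derived displacement is available over the same interval as an unscaled VO translation. The following Python sketch is an illustrative toy under those assumptions, not the dissertation's actual filter; the function name, noise values, and measurement model are hypothetical.

        def update_scale(s, P, d_vo, d_imu, q=1e-4, r=0.01):
            """One scalar Kalman step refining the monocular-VO scale factor.

            State: scale s such that s * d_vo ~= d_imu, where d_vo is the unscaled
            VO translation magnitude and d_imu the inertially derived displacement
            over the same interval (both are assumptions of this sketch).
            """
            P = P + q                        # random-walk prediction of the scale
            H = d_vo                         # measurement model: d_imu = s * d_vo + noise
            S = H * P * H + r                # innovation variance
            K = P * H / S                    # Kalman gain
            s = s + K * (d_imu - H * s)      # state update
            P = (1.0 - K * H) * P            # covariance update
            return s, P

        s, P = 1.0, 1.0
        for d_vo, d_imu in [(0.10, 0.31), (0.12, 0.35), (0.09, 0.28)]:
            s, P = update_scale(s, P, d_vo, d_imu)
        print(round(s, 2))                   # estimate moves from 1.0 towards the true scale (~3)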

    UWB and WiFi Systems as Passive Opportunistic Activity Sensing Radars

    Human Activity Recognition (HAR) is becoming increasingly important in smart homes and in healthcare applications such as assisted living and remote health monitoring. In this paper, we use Ultra-Wideband (UWB) and commodity WiFi systems for the passive sensing of human activities. These systems are based on a receiver-only radar network that detects reflections of ambient Radio-Frequency (RF) signals from humans in the form of the Channel Impulse Response (CIR) and Channel State Information (CSI). An experiment was performed in which the transmitter and receiver were separated by a fixed distance in a Line-of-Sight (LoS) setting, and five activities were performed between them: sitting, standing, lying down, standing up from the floor, and walking. We use the high-resolution CIRs provided by the UWB modules as features in machine and deep learning algorithms for classifying the activities. Experimental results show that a classification performance with an F1-score as high as 95.53% is achieved using processed UWB CIR data as features. Furthermore, we analysed the classification performance in the same physical layout using CSI data extracted from a dedicated WiFi Network Interface Card (NIC). In this case, maximum F1-scores of 92.24% and 80.89% are obtained when amplitude CSI data and spectrograms are used as features, respectively.
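    As a small illustration of the spectrogram features mentioned above, the following Python sketch turns a single subcarrier's CSI amplitude time series into a log-power spectrogram with SciPy. The sampling rate, signal model, and window parameters are assumptions; real CSI amplitudes would come from the WiFi NIC's CSI extraction tool rather than being synthesized.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 100.0                                  # assumed CSI sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        # Hypothetical CSI amplitude for one subcarrier: a slow body-motion
        # component plus measurement noise.
        csi_amp = 1.0 + 0.3 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)

        f, frames, Sxx = spectrogram(csi_amp, fs=fs, nperseg=128, noverlap=64)
        features = 10 * np.log10(Sxx + 1e-12)       # log-power spectrogram as a feature map
        print(features.shape)                        # (frequency bins, time frames)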