523 research outputs found

    RSS-based wireless LAN indoor localization and tracking using deep architectures

    Wireless Local Area Network (WLAN) positioning is a challenging task indoors due to environmental constraints and the unpredictable behavior of signal propagation, even at a fixed location. The aim of this work is to develop deep learning-based approaches for indoor localization and tracking by utilizing Received Signal Strength (RSS). The study proposes Multi-Layer Perceptron (MLP), One- and Two-Dimensional Convolutional Neural Network (1D CNN and 2D CNN), and Long Short-Term Memory (LSTM) deep network architectures for WLAN indoor positioning, based on actual RSS measurements collected from an existing WLAN infrastructure in a mobile user scenario. Results are presented for the different deep architectures (MLP, CNNs, and LSTMs) alongside existing WLAN algorithms, with the Root Mean Square Error (RMSE) used as the assessment criterion. The proposed LSTM Model 2 achieved a dynamic positioning RMSE of 1.73 m, outperforming probabilistic WLAN algorithms such as Memoryless Positioning (RMSE: 10.35 m) and the Nonparametric Information (NI) filter with variable acceleration (RMSE: 5.2 m) in the same experimental environment.
    Funding: ECSEL Joint Undertaking; European Union's H2020 Framework Programme (H2020/2014-2020) Grant; National Authority TUBITA
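
    A minimal sketch of this setup (a hedged illustration, not the paper's actual "LSTM Model 2"; the access-point count, window length, and layer sizes below are assumptions): an LSTM maps a short window of RSS vectors, one value per access point, to a 2-D position, and RMSE over the position errors serves as the assessment criterion.

        import torch
        import torch.nn as nn

        class RssLstmLocalizer(nn.Module):
            """Map a window of RSS scans (batch, time, num_aps) to an (x, y) position in metres."""
            def __init__(self, num_aps: int, hidden: int = 64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=num_aps, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)

            def forward(self, rss_window):
                out, _ = self.lstm(rss_window)
                return self.head(out[:, -1, :])       # position estimate from the last time step

        def rmse(pred, truth):
            # Root Mean Square Error over Euclidean position errors
            return torch.sqrt(((pred - truth) ** 2).sum(dim=1).mean())

        # Toy usage with random stand-ins for real RSS measurements and ground-truth positions
        model = RssLstmLocalizer(num_aps=8)
        rss = torch.randn(32, 10, 8)                  # 32 windows of 10 scans from 8 access points
        xy_true = torch.randn(32, 2)
        print(float(rmse(model(rss), xy_true)))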

    A survey of deep learning approaches for WiFi-based indoor positioning

    One of the most popular approaches for indoor positioning is WiFi fingerprinting, which has been tackled as a traditional machine learning problem since the beginning, achieving a few metres of accuracy on average. In recent years, deep learning has emerged as an alternative approach, with a large number of publications reporting sub-metre positioning accuracy. This survey therefore presents a timely, comprehensive review of the most interesting deep learning methods used for WiFi fingerprinting. In doing so, we aim to identify the most efficient neural networks under a variety of positioning evaluation metrics, for different readers. We demonstrate that despite the newly emerging WiFi signal measures (i.e. CSI and RTT), RSS produces competitive performance under deep learning. We also show that simple neural networks outperform more complex ones in certain environments.
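
    As a hedged illustration of the fingerprinting framing the survey describes (synthetic data, an assumed 20 m x 20 m area with 6 access points, and scikit-learn estimators standing in for the surveyed methods), the sketch below contrasts the two families it compares: a classical nearest-neighbour matcher against a fingerprint database and a simple neural network regressor.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic fingerprint database: RSS vectors (dBm) observed at known (x, y) reference points
        positions = rng.uniform(0, 20, size=(200, 2))            # reference points in a 20 m x 20 m area
        aps = rng.uniform(0, 20, size=(6, 2))                    # 6 access points

        def rss(p):
            # Toy log-distance path-loss model with Gaussian shadowing noise
            d = np.linalg.norm(p[:, None, :] - aps[None, :, :], axis=2)
            return -40 - 20 * np.log10(d + 1) + rng.normal(0, 2, size=(len(p), len(aps)))

        X_train, y_train = rss(positions), positions
        X_test, y_test = rss(positions[:50]), positions[:50]

        knn = KNeighborsRegressor(n_neighbors=3).fit(X_train, y_train)                      # classical fingerprinting
        mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(X_train, y_train)   # simple neural network

        for name, est in [("kNN", knn), ("MLP", mlp)]:
            err = np.linalg.norm(est.predict(X_test) - y_test, axis=1)
            print(name, "mean positioning error (m):", err.mean().round(2))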

    AtLoc: Attention Guided Camera Localization

    Deep learning has achieved impressive results in camera localization, but current single-image techniques typically suffer from a lack of robustness, leading to large outliers. To some extent, this has been tackled by sequential (multi-image) or geometry-constraint approaches, which can learn to reject dynamic objects and illumination conditions to achieve better performance. In this work, we show that attention can be used to force the network to focus on more geometrically robust objects and features, achieving state-of-the-art performance on common benchmarks even when using only a single image as input. Extensive experimental evidence is provided through public indoor and outdoor datasets. Through visualization of the saliency maps, we demonstrate how the network learns to reject dynamic objects, yielding superior global camera pose regression performance. The source code is available at https://github.com/BingCS/AtLoc.
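
    A minimal sketch of the general idea of attention-guided pose regression (not AtLoc's actual architecture, which is defined in the repository linked above; the channel count, attention form, and 7-D pose output below are assumptions): spatial features from a CNN backbone are re-weighted by self-attention before being pooled and regressed to a camera pose.

        import torch
        import torch.nn as nn

        class AttentionPoseRegressor(nn.Module):
            """Toy self-attention over spatial features, followed by a pose regression head."""
            def __init__(self, channels: int = 256):
                super().__init__()
                self.query = nn.Linear(channels, channels)
                self.key = nn.Linear(channels, channels)
                self.value = nn.Linear(channels, channels)
                self.pose = nn.Linear(channels, 7)        # 3-D translation + 4-D rotation (quaternion)

            def forward(self, feats):                     # feats: (batch, channels, H, W) from a backbone
                b, c, h, w = feats.shape
                x = feats.flatten(2).transpose(1, 2)      # (batch, H*W, channels)
                attn = torch.softmax(self.query(x) @ self.key(x).transpose(1, 2) / c ** 0.5, dim=-1)
                x = attn @ self.value(x)                  # re-weight features toward attended regions
                return self.pose(x.mean(dim=1))           # global average pooling, then pose

        # Toy usage with a random feature map standing in for backbone output
        feats = torch.randn(2, 256, 8, 8)
        print(AttentionPoseRegressor()(feats).shape)      # torch.Size([2, 7])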

    PlaNet - Photo Geolocation with Convolutional Neural Networks

    Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
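
    To make the classification framing concrete, here is a hedged sketch that discretizes coordinates into a fixed 1-degree grid (an assumption for illustration; PlaNet itself uses thousands of adaptive multi-scale cells): a classifier would be trained to predict the cell index, and the cell centre would be reported as the location.

        LAT_BINS, LON_BINS = 180, 360    # fixed 1-degree grid (illustrative only)

        def latlon_to_cell(lat: float, lon: float) -> int:
            """Map a coordinate to a class index; a CNN would be trained to predict this index."""
            i = min(int(lat + 90), LAT_BINS - 1)      # 0..179
            j = min(int(lon + 180), LON_BINS - 1)     # 0..359
            return i * LON_BINS + j

        def cell_to_latlon(cell: int) -> tuple:
            """Return the cell centre, used as the predicted location."""
            i, j = divmod(cell, LON_BINS)
            return (i - 90 + 0.5, j - 180 + 0.5)

        cell = latlon_to_cell(48.8566, 2.3522)        # Paris
        print(cell, cell_to_latlon(cell))             # 49862 (48.5, 2.5)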

    Localization of People in GNSS-Denied Environments Using Neural-Inertial Prediction and Kalman Filter Correction

    This thesis presents a method based on neural networks and Kalman filters for estimating the position of a person carrying a mobile device (i.e., a cell phone or tablet) that can communicate with static UWB sensors or is carried in an environment with known landmark positions. The device is used to collect and share inertial measurement unit (IMU) information (data from sensors such as accelerometers, gyroscopes, and magnetometers) together with UWB and landmark information. The collected data, combined with the necessary initial conditions, is input to a pre-trained deep neural network (DNN) that predicts the movement of the person. The prediction is then updated periodically, whenever outside measurements are available, to produce a more accurate result. The update process uses a Kalman filter that relies on empirical and statistical models of DNN prediction and sensor noise. The approach thus combines artificial intelligence and filtering techniques into a complete system that converts raw data into trajectories of people. Initial tests were completed indoors, where known landmark locations were compared with predicted positions. In a second set of experiments, GNSS location signals were combined with the position estimate for correction. The final result shows the correction of neural network predictions with data from UWB sensors at known locations. Prediction and correction trajectories are shown and compared with the ground truth for the applicable environments. The results show that the proposed system is accurate and reliable for predicting the trajectory of a person and can be used in future applications that require localizing people where GNSS is degraded or unavailable, such as indoors, in forests, or underground.
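
    A minimal predict/correct sketch of this structure (the noise covariances, update schedule, and placeholder displacement function below are assumptions; the thesis uses a pre-trained DNN and empirically derived noise models): a position estimate is propagated with a neural displacement prediction and corrected with a Kalman update whenever a UWB/landmark position fix is available.

        import numpy as np

        P = np.eye(2) * 1.0            # state covariance
        Q = np.eye(2) * 0.05           # assumed DNN prediction (process) noise
        R = np.eye(2) * 0.10           # assumed UWB/landmark measurement noise
        x = np.zeros(2)                # 2-D position estimate (m)

        def dnn_step(imu_window):
            """Placeholder for the pre-trained DNN that predicts displacement from IMU data."""
            return np.array([0.5, 0.1])

        for k in range(10):
            # Prediction: propagate the position with the neural displacement estimate
            x = x + dnn_step(None)
            P = P + Q
            # Correction: only when an outside (UWB/landmark) position fix is available
            if k % 5 == 4:
                z = np.array([0.5 * (k + 1), 0.1 * (k + 1)])   # simulated position fix
                K = P @ np.linalg.inv(P + R)                   # Kalman gain (measurement model H = I)
                x = x + K @ (z - x)
                P = (np.eye(2) - K) @ P

        print(x, np.diag(P))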

    Pedestrian Navigation using Artificial Neural Networks and Classical Filtering Techniques

    The objective of this thesis is to explore the improvements achieved by combining classical filtering methods with Artificial Neural Networks (ANNs) for pedestrian navigation. ANNs have been improving dramatically in their ability to approximate various functions, and neural network solutions have surpassed many classical navigation techniques. However, research using ANNs tends to focus on the ability of the neural networks alone. Combining ANNs with classical filtering methods has the potential to bring the beneficial aspects of both techniques together and increase accuracy in many different applications. Pedestrian navigation is used as a medium to explore this process, using a localization approach and a Pedestrian Dead Reckoning (PDR) approach. Pedestrian navigation is dominated by Global Positioning System (GPS) based methods, but urban and indoor environments pose difficulties for GPS navigation. A novel urban data set is created for testing various localization- and PDR-based pedestrian navigation solutions. Cell phone data is collected, including images and accelerometer, gyroscope, and magnetometer measurements, to train the ANNs. The ANN methods are explored first, aiming for a low root mean square error (RMSE) between the predicted and reference trajectories. After analyzing the localization and PDR solutions, they are combined in an extended Kalman filter (EKF) to achieve a 20% reduction in RMSE: the best localization result of 35 m is fused with an underperforming PDR solution (171 m RMSE) to produce an EKF solution with a 28 m RMSE over a one-hour test collection.
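
    As a hedged illustration of the classical dead-reckoning structure that a PDR approach builds on (the step length, detection threshold, and simulated signals below are assumptions; the thesis itself trains ANNs on the collected cell phone data rather than using this hand-crafted detector), each accelerometer-magnitude peak is counted as a step of fixed length along the current heading.

        import numpy as np

        def pdr_trajectory(acc_mag, headings, step_len=0.7, threshold=11.0):
            """Dead-reckon a 2-D path: each rising crossing of the accelerometer-magnitude
            threshold counts as one step of fixed length along the current heading."""
            pos = np.zeros(2)
            path = [pos.copy()]
            below = True
            for a, h in zip(acc_mag, headings):
                if a > threshold and below:               # rising-edge crossing = one detected step
                    pos = pos + step_len * np.array([np.cos(h), np.sin(h)])
                    path.append(pos.copy())
                below = a <= threshold
            return np.array(path)

        # Toy usage: simulated accelerometer magnitudes (m/s^2) and headings (rad) at 50 Hz
        t = np.arange(0, 10, 0.02)
        acc_mag = 9.8 + 2.0 * np.sin(2 * np.pi * 2 * t)   # roughly 2 steps per second
        headings = np.full_like(t, np.pi / 4)             # walking towards the north-east
        print(pdr_trajectory(acc_mag, headings)[-1])      # final dead-reckoned position (~[9.9, 9.9])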