Integration of movement-based localization in the EduPARK mobile application
More and more, mobile applications require precise localization solutions in a variety of environments. Although GPS is widely used as a localization solution, it may present accuracy problems in special conditions such as unfavorable weather, or in spaces with multiple obstructions, such as public parks. For these scenarios, alternative solutions to GPS are of extreme relevance and have been widely studied recently. This dissertation studies the case of the EduPARK application, an augmented reality application deployed in the Infante D. Pedro park in Aveiro. Due to the poor accuracy of GPS in this park, the implementation of positioning and marker-less augmented reality functionalities presents difficulties. Existing relevant systems are analyzed, and an architecture based on pedestrian dead reckoning is proposed. The corresponding implementation is presented, consisting of a positioning solution that uses the sensors available in smartphones: a step detection algorithm, a distance traveled estimator, an orientation estimator and a position estimator. For the validation of this solution, functionalities were implemented in the EduPARK application for testing purposes, and user and usability tests were performed. The results obtained show that the proposed solution can be an alternative to provide accurate positioning within the Infante D. Pedro park, thus enabling the implementation of geocaching and marker-less augmented reality functionalities.
EduPARK is a project funded by FEDER Funds through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 and by National Funds through FCT - Fundação para a Ciência e a Tecnologia under project POCI-01-0145-FEDER-016542. Master's in Informatics Engineering
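The pedestrian dead reckoning pipeline named in this abstract (step detection, traveled-distance estimation, orientation, position update) can be sketched roughly as follows. This is a minimal illustration, assuming a fixed stride length and a simple peak-detection threshold; the function names, threshold and stride value are assumptions for exposition, not the dissertation's actual implementation.

```python
import math

def detect_steps(accel_norms, threshold=11.0):
    """Count steps as local peaks of the accelerometer magnitude
    (m/s^2) that rise above a threshold slightly over gravity (~9.81)."""
    steps = 0
    for i in range(1, len(accel_norms) - 1):
        a = accel_norms[i]
        if a > threshold and a >= accel_norms[i - 1] and a > accel_norms[i + 1]:
            steps += 1
    return steps

def pdr_update(x, y, heading_rad, step_length_m=0.7):
    """Advance a 2-D position estimate by one detected step.
    step_length_m is a typical adult stride, used here for
    illustration; real systems estimate it per user and per step."""
    x += step_length_m * math.cos(heading_rad)
    y += step_length_m * math.sin(heading_rad)
    return x, y
```

In a real system the heading would come from a gyroscope/magnetometer fusion and the stride length from a per-step model, but the structure of the update is the same.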
The IPIN 2019 Indoor Localisation Competition - Description and Results
IPIN 2019 Competition, the sixth in a series of IPIN competitions, was held at the CNR Research Area of Pisa (IT), integrated into the program of the IPIN 2019 Conference. It included two on-site real-time Tracks and three off-site Tracks. The four Tracks presented in this paper were set in the same environment, made of two buildings close together, for a total usable area of 1000 m² outdoors and 6000 m² indoors over three floors, with a total path length exceeding 500 m. IPIN competitions, based on the EvAAL framework, have aimed at comparing the accuracy performance of personal positioning systems in fair and realistic conditions: past editions of the competition were carried out in big conference settings, university campuses and a shopping mall. Positioning accuracy is computed while the person carrying the system under test walks at normal walking speed, uses lifts and goes up and down stairs or briefly stops at given points. The results presented here showcase state-of-the-art systems tested side by side in real-world settings as part of the on-site real-time competition Tracks. Results for the off-site Tracks allow a detailed and reproducible comparison of the most recent positioning and tracking algorithms in the same environment as the on-site Tracks.
SLAM for Visually Impaired People: A Survey
In recent decades, several assistive technologies for visually impaired and
blind (VIB) people have been developed to improve their ability to navigate
independently and safely. At the same time, simultaneous localization and
mapping (SLAM) techniques have become sufficiently robust and efficient to be
adopted in the development of assistive technologies. In this paper, we first
report the results of an anonymous survey conducted with VIB people to
understand their experience and needs; we focus on digital assistive
technologies that help them with indoor and outdoor navigation. Then, we
present a literature review of assistive technologies based on SLAM. We discuss
proposed approaches and indicate their pros and cons. We conclude by presenting
future opportunities and challenges in this domain.
Vision-Aided Pedestrian Navigation for Challenging GNSS Environments
There is a strong need for an accurate pedestrian navigation system that remains functional in GNSS-challenged environments, namely urban areas and indoors, both for improved safety and to enhance everyday life. Pedestrian navigation is mainly needed in environments that are challenging not only for GNSS but also for other RF positioning systems and for some non-RF systems, such as magnetometer-based heading, which suffers from the presence of ferrous material. Indoor and urban navigation has been an active research area for years. No individual system at this time can address all the needs of pedestrian navigation in these environments, but a fused solution of different sensors can provide better accuracy, availability and continuity. Self-contained sensors, namely digital compasses for measuring heading, gyroscopes for heading changes and accelerometers for estimating user speed, constitute a good option for pedestrian navigation. However, their performance suffers from noise and biases that result in large position errors growing with time. Such errors can, however, be mitigated using information about the user's motion obtained from consecutive images taken by a camera carried by the user, provided that its position and orientation with respect to the user's body are known. The motion of the features in the images may then be transformed into information about the user's motion. Due to its distinctive characteristics, this vision-aiding complements other positioning technologies to provide better pedestrian navigation accuracy and reliability. This thesis discusses the concepts of a visual gyroscope that provides the relative user heading and a visual odometer that provides the translation of the user between consecutive images. Both methods use a monocular camera carried by the user.
The visual gyroscope monitors the motion of virtual features, called vanishing points, arising from parallel straight lines in the scene; the change in their location resolves heading, roll and pitch. The method is applicable to human environments, as the straight lines in man-made structures enable vanishing point perception. For the visual odometer, the scale ambiguity that arises when using the homography between consecutive images to observe the translation is resolved using two different methods. First, the scale is computed using a special configuration intended for indoors. Second, the scale is resolved using differenced GNSS carrier phase measurements of the camera, in a method aimed at urban environments where GNSS cannot perform alone because tall buildings block the required line-of-sight to four satellites. The use of visual perception, however, provides position information by exploiting a minimum of two satellites, so the availability of the navigation solution is substantially increased. Both methods are sufficiently tolerant of the challenges of visual perception in indoor and urban environments, namely low lighting and dynamic objects hindering the view. The heading and translation are further integrated with other positioning systems to obtain a navigation solution. The performance of the proposed vision-aided navigation was tested in various indoor and urban canyon environments to demonstrate its effectiveness. These experiments, although of limited duration, show that visual processing efficiently complements other positioning technologies to provide better pedestrian navigation accuracy and reliability.
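As a rough illustration of the visual-gyroscope idea, the yaw of a pinhole camera relative to a scene's dominant parallel lines can be recovered from the horizontal image coordinate of their vanishing point. The sketch below assumes an undistorted pinhole model with a known focal length in pixels; it is an illustrative assumption, not the thesis's algorithm.

```python
import math

def yaw_from_vanishing_point(u_px, cx_px, focal_px):
    """Yaw (radians) of the camera with respect to the direction of
    the parallel lines whose vanishing point projects at column u_px.
    cx_px is the principal-point column and focal_px the focal length
    in pixels. Tracking this angle across frames yields relative
    heading changes, as in a visual gyroscope."""
    return math.atan2(u_px - cx_px, focal_px)
```

For example, with the principal point at column 320 and a 600 px focal length, a vanishing point observed at column 920 corresponds to a yaw of 45 degrees.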
Towards high-accuracy augmented reality GIS for architecture and geo-engineering
Architecture and geo-engineering are application domains where professionals need to make critical decisions. These professionals require high-precision tools to assist them in their daily decision-making process. Augmented Reality (AR) shows great potential to ease the association between the abstract 2D drawings and 3D models representing the infrastructure under review and the actual perception of these objects in reality. Visualization tools based on AR allow the virtual models and reality to be overlaid in the user's field of view. However, the architecture and geo-engineering context requires high-accuracy, real-time positioning from these AR systems. This is not a trivial task, especially in urban environments or on construction sites where the surroundings may be crowded and highly dynamic.
This project investigates the accuracy requirements of mobile AR GIS as well as the main challenges to address when tackling high-accuracy AR based on omnidirectional panoramas.
Multi-Sensor Localization and Tracking in Disaster Management and Indoor Wayfinding for Visually Impaired Users
This dissertation proposes a series of multi-sensor localization and tracking algorithms developed for two important application domains: disaster management and indoor wayfinding for blind and visually impaired (BVI) users. For disaster management, we developed two localization algorithms, one each for Radio Frequency Identification (RFID) and Bluetooth Low Energy (BLE) technology, which enable the disaster management system to track patients in real time. Both algorithms work without any pre-deployed infrastructure, relying only on smartphones and wearable devices. Regarding indoor wayfinding for BVI users, we have explored several types of indoor positioning techniques, including BLE-based, inertial, visual and hybrid approaches, to offer accurate and reliable location and orientation in complex navigation spaces. In this dissertation, significant contributions have been made in the design and implementation of various localization and tracking algorithms under the differing requirements of these applications.
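As a concrete example of the kind of BLE ranging that infrastructure-light tracking commonly builds on, a simplified approach inverts the log-distance path-loss model to turn a measured RSSI into a distance estimate. The constants below (measured power at 1 m, path-loss exponent) are typical illustrative values, not those of the dissertation.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
    RSSI = tx_power - 10 * n * log10(d)
      =>  d = 10 ** ((tx_power - RSSI) / (10 * n))
    tx_power_dbm is the RSSI expected at 1 m (beacon-specific) and
    path_loss_exp (n) models the environment (2.0 = free space)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))
```

RSSI is noisy indoors, so practical systems smooth the readings and fuse several beacons (or other sensors) rather than trusting a single range.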