Image Based Indoor Navigation
Over the past years, researchers have proposed numerous indoor localization and navigation systems. However, solutions that use WiFi or Radio Frequency Identification require infrastructure to be deployed in the navigation area, while infrastructure-less techniques, e.g. those based on mobile cell ID or dead reckoning, suffer from large accuracy errors.
In this thesis, we present a novel infrastructure-less indoor navigation system based on computer vision Structure-from-Motion (SfM) techniques. We implemented a prototype localization and navigation system that builds a navigation map from photos of the area and accurately locates a user on the map.
Our system follows a client-server architecture: the client is a mobile application that lets users determine their position by simply taking a photo, while the server handles map creation, localization queries and path finding. After the implementation, we evaluated the localization accuracy and latency of the system by benchmarking navigation queries and the model creation algorithm.
The system successfully navigates users in the Aalto University computer science department library, achieving an average error of 0.26 metres for successfully localized photos. We also present the challenges we solved in adapting computer vision techniques for localization purposes. Finally, we outline future work needed to adapt the system for wider use.
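To make the query flow concrete, here is a minimal sketch of the client-side localization request, assuming a simple HTTP API; the endpoint name and response fields are hypothetical, since the thesis does not specify its wire protocol.

import requests

SERVER = "http://localhost:8080"  # assumed server address

def locate(photo_path):
    """Send a photo to the localization server; return (x, y) in metres."""
    with open(photo_path, "rb") as f:
        resp = requests.post(SERVER + "/localize", files={"photo": f})
    if resp.status_code != 200:
        return None  # the photo could not be registered against the SfM map
    result = resp.json()
    return result["x"], result["y"]

if __name__ == "__main__":
    print("Estimated position:", locate("corridor.jpg"))

The server side would match the photo's features against the SfM model to estimate the camera pose, then answer path-finding queries against the navigation map.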
Effects of Gamified Augmented Reality in Public Spaces
Advancements in smartphone technology have resulted in the proliferation of Augmented Reality (AR) applications and games. Researchers have acknowledged the great potential of AR applications to enhance entertainment and improve learning experiences. In this study, we examined the potential effects of gamified AR in public places. We developed ARQuiz, an AR-based quiz game, for a public exhibition space and conducted a user study with respondents via survey (N = 176; 55.68% female, mean age 35.94, SD = 11.89) and face-to-face interviews (N = 28; 57.14% female, mean age 31.07, SD = 7.42). We analyzed the relationship between perceived application usefulness, perceived application enjoyment, perceived exhibition enjoyment, and perceived quiz enjoyment. In addition, we examined perceived sociability before and after the quiz, quiz score, and user behavior in the exhibition space. The results indicate that visitors who enjoyed playing the ARQuiz game enjoyed the exhibition more, obtained better quiz results, and felt more social after visiting the exhibition. Furthermore, ARQuiz was regarded as a possible platform for improving visitors' learning and overall experiences in public exhibitions. Although some players expressed concerns about the privacy and intrusiveness of AR, our results indicate that a well-designed AR game may boost the overall satisfaction of an exhibition visit and increase players' sociability.
Does Augmented Reality Affect Sociability, Entertainment, and Learning? A Field Experiment
Augmented reality (AR) applications have recently emerged for entertainment and educational purposes and have been proposed to have positive effects on social interaction. In this study, we investigated the impact of a mobile, indoor AR feature on sociability, entertainment, and learning. We conducted a field experiment using a quiz game in a Finnish science center exhibition. We divided participants (N = 372) into an experimental group (AR app users) and two control groups (non-AR app users; pen-and-paper participants); 28 of the AR users also took part in follow-up interviews. We used the Kruskal–Wallis rank test to compare the groups and the content analysis method to explore the AR users' experiences. Although the interviewed AR participants recognized the entertainment value and learning opportunities of AR, we did not detect an increase in perceived sociability, social behavior, positive affect, or learning performance when comparing the groups. Instead, AR interviewees experienced a strong conflict between the two different realities. Despite the engaging novelty value of new technology, performance and other improvements do not automatically emerge. We also discuss potential conditional factors. Future research and development of AR and related technologies should note the possible negative effects of dividing attention between the two realities.
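For illustration, the group comparison named above can be reproduced in outline with a Kruskal–Wallis rank test; the ratings below are made-up placeholders, not the study's data.

from scipy.stats import kruskal

ar_group = [4, 5, 3, 4, 5, 2, 4]       # e.g. perceived sociability ratings
non_ar_group = [3, 4, 4, 2, 5, 3, 3]
pen_and_paper = [4, 3, 5, 3, 4, 4, 2]

statistic, p_value = kruskal(ar_group, non_ar_group, pen_and_paper)
print("H = %.2f, p = %.3f" % (statistic, p_value))
# A p-value above the chosen alpha indicates no detectable difference
# between the conditions, matching the null result reported above.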
Enabling Ubiquitous Augmented Reality with Crowdsourced Indoor Mapping and Localization
With the proliferation of sensor-rich small-form-factor devices such as smart glasses and smartphones, augmented reality (AR) applications have attracted tremendous interest from both industry professionals and academics. AR applications enrich the user's view of the real world with additional information, such as computer-generated 3D artifacts that blend seamlessly with real-world objects. Although popular AR applications, especially AR games, are already used by millions of people, enabling shared and ubiquitous AR experiences remains highly challenging: it is difficult to provide a persistent AR experience that aligns artificial objects seamlessly with designated real-world places and allows multiple users to perceive the same objects simultaneously. Furthermore, truly ubiquitous AR requires applications to work in arbitrary environments while users access them via commodity devices such as smartphones.
In this dissertation, we focus on enabling technologies for ubiquitous multi-user AR applications in indoor environments. We observe that an accurate, real-time localization system is required to provide a ubiquitous AR experience indoors. Consequently, we investigate the applicability of computer vision-based techniques for efficient indoor mapping and study how the resulting maps can enable accurate six-degrees-of-freedom positioning suitable for AR applications.
Specifically, we investigate the applicability of visual crowdsourcing for mapping and for providing accurate, infrastructure-less indoor localization and navigation services. Furthermore, we develop mobile AR applications that use the developed indoor positioning services, and we address the challenge of energy-efficient, accurate real-time tracking of position and facing direction, which seamless AR experiences require. Finally, we focus on the deployment of the developed real-time AR systems in a hierarchical edge cloud environment, in particular on initial computing capacity planning that satisfies their Quality of Service requirements. Throughout the dissertation, we conduct empirical studies to answer the research questions: we develop a practical indoor mapping and localization system and a smartphone application that uses it for AR-based indoor navigation. The results of this work provide a basis for enabling ubiquitous AR experiences in entertainment, productivity, and social applications.
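The six-degrees-of-freedom positioning discussed above is commonly implemented by matching query-image features to 3D map points and solving a Perspective-n-Point problem; the sketch below illustrates that standard technique with OpenCV and is not necessarily the dissertation's exact pipeline.

import numpy as np
import cv2

def localize_6dof(points_3d, points_2d, camera_matrix):
    """Estimate the camera pose from 2D-3D correspondences.

    points_3d: (N, 3) SfM map points matched to the query image
    points_2d: (N, 2) pixel coordinates of those matches
    camera_matrix: 3x3 intrinsic matrix of the query camera
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # rotation: world -> camera
    position = (-R.T @ tvec).ravel()  # camera centre in world coordinates
    return position, R                # position plus facing direction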
Edge capacity planning for real time compute-intensive applications
Cloud computing is a major breakthrough in enabling multi-user scalable web services, process offloading, and infrastructure cost savings. However, public clouds impose high network latency, which has become a bottleneck for real-time applications such as mobile augmented reality. A widely accepted solution is to move latency-sensitive services from the centralized cloud to the edge of the Internet, close to service users. An important prerequisite for deploying applications at the edge is determining the initially required edge capacity, yet little has been done to provide reliable estimates of required computing capacity under Quality-of-Service (QoS) constraints. Unlike previous works that focus only on applications' CPU usage, in this paper we propose a novel, queuing-theory-based edge capacity planning solution that takes into account both the CPU and GPU usage of real-time compute-intensive applications. Our solution satisfies the QoS requirements in terms of response delays while minimizing the number of required edge computing nodes, assuming nodes with fixed CPU/GPU capacity. We demonstrate the applicability and accuracy of our solution through extensive evaluation, including a case study using real-life applications. The results show that our solution maximizes resource utilization through intelligent combinations of service requests and can accurately estimate the minimal CPU and GPU capacity required to satisfy the QoS requirements.
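As a hedged illustration of the queuing-theoretic idea, the sketch below finds the smallest number of fixed-capacity nodes for which an M/M/c queue keeps the mean response time under a QoS bound; the paper's model additionally distinguishes CPU and GPU stages, so this shows only the single-resource core of the calculation, with made-up workload numbers.

from math import factorial

def erlang_c(c, a):
    """Probability that an arriving request must wait (Erlang C); a = lam/mu."""
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def min_nodes(lam, mu, max_response):
    """Smallest node count c with mean response time <= max_response seconds."""
    c = 1
    while True:
        if c * mu > lam:  # the queue must be stable before delays are finite
            wait = erlang_c(c, lam / mu) / (c * mu - lam)
            if wait + 1.0 / mu <= max_response:
                return c
        c += 1

# 40 req/s arriving, 12 req/s served per node, 150 ms response budget
print(min_nodes(lam=40.0, mu=12.0, max_response=0.150))  # -> 5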
Low-cost mapping of RFID tags using reader-equipped smartphones
This paper proposes a low-cost solution for mapping and locating UHF-band RFID tags in a 3D space using reader-equipped smartphones. Our solution includes a mobile augmented reality application for data collection and information visualization, and a cloud-based application server for calculating the locations of the reader-equipped smartphones and the read RFID tags. It applies computer vision and motion sensing techniques to track the 3D locations of the RFID reader based on the visual and inertial sensor data collected from the companion smartphones, and it obtains the locations of RFID tags by calculating their positions relative to the readers based on the Angle of Arrival (AoA) concept. Our solution can be implemented with any low-cost, fixed-transmit-power RFID reader, since it only requires the reader to report the identifiers of read RFID tags. Furthermore, it does not require machine-controlled uniform movement of RFID readers, as it can handle the bias in readings collected from randomly scattered positions. We have evaluated our solution with experiments in real environments using a commercial off-the-shelf RFID reader and an Android phone. Results show that the average error in the positions of RFID tags is around 25 cm along each of the orthogonal axes of the floor plane, with the ordering of RFID tags correctly detected in most cases.
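The AoA step can be pictured as intersecting bearing lines: each read pairs the reader's tracked position with a bearing toward the tag, and the tag estimate is the least-squares point closest to all those lines. The 2D floor-plane simplification and the numbers below are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def locate_tag(reader_positions, bearings_rad):
    """Least-squares point closest to all bearing lines on the floor plane."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, theta in zip(reader_positions, bearings_rad):
        d = np.array([np.cos(theta), np.sin(theta)])
        M = np.eye(2) - np.outer(d, d)  # projector orthogonal to the bearing
        A += M
        b += M @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)

# Three reads taken from scattered reader positions (made-up values)
positions = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
bearings = [np.deg2rad(45), np.deg2rad(135), np.deg2rad(-80)]
print(locate_tag(positions, bearings))  # tag estimate in metres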
ViNav
Smartphone-based indoor navigation services are in great demand, yet their adoption has been relatively slow due to the lack of fine-grained and up-to-date indoor maps and the potentially high deployment and maintenance cost of infrastructure-based indoor localization solutions. This work proposes ViNav, a scalable and cost-efficient system that implements indoor mapping, localization, and navigation based on visual and inertial sensor data collected from smartphones. ViNav applies structure-from-motion (SfM) techniques to reconstruct 3D models of indoor environments from crowdsourced images, locates points of interest (POI) in the 3D models, and compiles navigation meshes for path finding. ViNav implements image-based localization that identifies users' positions and facing directions, and leverages this feature to calibrate dead-reckoning-based user trajectories and the sensor fingerprints collected along them. The calibrated information is used to build more informative and accurate indoor maps and to lower the response delay of localization requests. According to our experimental results in a university building and a supermarket, the system works as intended and our indoor localization achieves competitive performance compared with traditional approaches: in the supermarket, ViNav locates users within 2 seconds, with a distance error of less than 1 meter and a facing direction error of less than 6 degrees.
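The calibration step can be sketched as follows: dead-reckoned positions drift, so when an image-based fix arrives, the accumulated error is spread back linearly over the trajectory segment between fixes. This is a common correction scheme assumed here for illustration; ViNav's exact method may differ.

import numpy as np

def calibrate_segment(dr_points, fix_start, fix_end):
    """Warp a dead-reckoned segment so it starts and ends at two visual fixes.

    dr_points: (N, 2) dead-reckoned positions between the two fixes
    fix_start, fix_end: image-based position fixes for the endpoints
    """
    dr = np.asarray(dr_points, dtype=float)
    start_err = np.asarray(fix_start) - dr[0]
    end_err = np.asarray(fix_end) - dr[-1]
    # interpolate the correction from start_err to end_err along the segment
    alphas = np.linspace(0.0, 1.0, len(dr))[:, None]
    return dr + (1 - alphas) * start_err + alphas * end_err

segment = [(0, 0), (1, 0.1), (2, 0.3), (3, 0.6)]  # drifting trajectory
print(calibrate_segment(segment, fix_start=(0, 0), fix_end=(3, 0)))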
SnapTask
Visual crowdsourcing (VCS) offers an inexpensive method of collecting visual data for tasks such as 3D mapping and place detection, thanks to the prevalence of smartphone cameras. However, without proper guidance, participants may not always collect data from the desired locations with the required Quality-of-Information (QoI). This often causes either a lack of data in certain areas or extra overhead for processing unnecessary redundancy. In this work, we propose SnapTask, a participatory VCS system that aims to create complete indoor maps by guiding participants to efficiently collect visual data of high QoI. It applies Structure-from-Motion (SfM) techniques to reconstruct 3D models of indoor environments, which are then converted into indoor maps. To increase coverage with minimal redundancy, SnapTask determines the locations of the next data collection tasks by analyzing the coverage of the generated 3D model and the camera views of the collected images. In addition, it overcomes a limitation of SfM techniques by utilizing crowdsourced annotations to reconstruct featureless surfaces (e.g. glass walls) in the 3D model. In a field test in a library, the indoor map generated by SnapTask successfully reconstructed 100% of the library walls and 98.12% of the objects and traversal areas within the library. With the same amount of input data, our guided data collection increased map coverage by 20.72% and 34.45% compared with unguided participatory and opportunistic VCS, respectively.
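The guided-collection idea can be sketched as a greedy choice: among candidate capture locations, pick the one whose expected camera view covers the most still-unmapped area. SnapTask's real analysis works on the 3D model and camera views; the grid-cell version below only illustrates the selection logic, with made-up data.

def next_task(candidates, covered, view_cells):
    """Pick the capture location whose view adds the most uncovered cells.

    candidates: iterable of capture locations
    covered: set of grid cells already mapped
    view_cells(loc): set of cells a photo taken at loc is expected to cover
    """
    return max(candidates, key=lambda loc: len(view_cells(loc) - covered))

# Toy floor-plan grid: cells each candidate capture would see
views = {
    "A": {(0, 0), (0, 1), (1, 0)},
    "B": {(1, 0), (1, 1), (2, 1), (2, 2)},
    "C": {(0, 1), (1, 1)},
}
already_mapped = {(0, 0), (0, 1)}
print(next_task(views, already_mapped, lambda loc: views[loc]))  # -> "B"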