24,730 research outputs found

    Augmented Reality user interface analysis in mobile devices

    The presence of high-end phones in the telephony market has given consumers access to the computational power of mobile smartphone devices. Powerful processors, combined with cameras and ease of development, encourage an increasing number of Augmented Reality (AR) researchers to adopt smartphones as an AR platform. Likewise, Augmented Reality on mobile devices has become increasingly popular for many applications, including search and location, tourism, and shopping. An interesting development platform for implementing an AR application is Unity 3D, whose library provides access to the sensors integrated in mobile devices. The implementation of an AR application that uses data from the phone's sensors is the primary basis of this project. In late 2011 the Google Maps API stopped being free, which prompted many developers to look for an API with the same features that remains free of charge; that alternative is OpenStreetMap. The main objective of this thesis is the implementation of a Location-Based Augmented Reality application with a dedicated interface to mapping services. Finally, this thesis proposes to test the implemented application in order to analyse users' preferences between map services and location-based services.
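    The core geometry behind a location-based AR view of this kind can be sketched briefly. This is an illustrative outline only (the function names and the flat 60° field-of-view assumption are mine, not taken from the thesis): the phone computes the bearing from the user's GPS fix to a point of interest, then compares it with the compass heading to place a marker horizontally on screen.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2 in degrees (0 = north, 90 = east)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def marker_screen_offset(user_heading_deg, poi_bearing_deg, fov_deg=60):
    """Horizontal screen position of a POI in [-1, 1]; None if it lies outside
    the (assumed) camera field of view."""
    # Signed angular difference folded into [-180, 180)
    delta = (poi_bearing_deg - user_heading_deg + 540) % 360 - 180
    if abs(delta) > fov_deg / 2:
        return None
    return delta / (fov_deg / 2)
```

    In a Unity implementation the same arithmetic would be fed by the device's location and compass sensors each frame; the sketch above only shows the coordinate logic.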

    Mobile-Based Interactive Music for Public Spaces

    With the emergence of modern mobile devices equipped with various built-in sensors, interactive art has become easily accessible to everyone, musicians and non-musicians alike. These efficient computers can analyse human activity, location, gesture, etc., and based on this information dynamically change or create an artwork in real time. This thesis presents an interactive mobile system that uses only the standard embedded sensors available in typical current smart devices, such as phones and tablets, to create an audio-only augmented reality for a singled-out public space, in order to explore the potential for social-musical interaction without the need for any significant external infrastructure.
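    One common building block for an audio-only AR layer of this kind is position-driven crossfading between sound zones. The sketch below is a hypothetical illustration (the zone layout, names, and linear rolloff are my assumptions, not the system described in the thesis): a listener's position in the space is mapped to a per-zone playback gain.

```python
import math

# Hypothetical audio zones for one public space: (x, y) centre in metres and a radius.
ZONES = {
    "fountain": ((0.0, 0.0), 15.0),
    "benches": ((40.0, 10.0), 10.0),
}

def zone_gains(x, y):
    """Gain in [0, 1] per zone: full volume at the zone centre, linear rolloff
    to silence at the edge, so walking through the space crossfades the layers."""
    gains = {}
    for name, ((cx, cy), radius) in ZONES.items():
        d = math.hypot(x - cx, y - cy)
        gains[name] = max(0.0, 1.0 - d / radius)
    return gains
```

    On a device, the gains would be applied to looping audio sources each time a new location or sensor reading arrives.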

    FGLP: A Federated Fine-Grained Location Prediction System for Mobile Users

    Fine-grained location prediction on smartphones can be used to improve app/system performance. Application scenarios include video quality adaptation as a function of the 5G network quality at predicted user locations, and augmented reality apps that speed up content rendering based on predicted user locations. Such use cases require prediction error in the same range as the GPS error, and no existing work on location prediction achieves this level of accuracy. We present a system for fine-grained location prediction (FGLP) of mobile users, based on GPS traces collected on the phones. FGLP has two components: a federated learning framework and a prediction model. The framework runs on the users' phones and on a server that coordinates learning from all users in the system. FGLP represents the user location data as relative points in an abstract 2D space, which enables learning across different physical spaces. The model merges Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Networks (CNN), where the BiLSTM learns the speed and direction of the mobile users, and the CNN learns information such as user movement preferences. FGLP uses federated learning to protect user privacy and reduce bandwidth consumption. Our experimental results, using a dataset with over 600,000 users, demonstrate that FGLP outperforms baseline models in terms of prediction accuracy. We also demonstrate that FGLP works well in conjunction with transfer learning, which enables model reusability. Finally, benchmark results on several types of Android phones demonstrate FGLP's feasibility in real life.
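    The relative-point representation can be illustrated with a minimal sketch (the function names are mine; FGLP's actual preprocessing details are not given in the abstract): each trace is re-expressed against its own starting point, and per-step displacements encode speed and direction without any absolute coordinates, so traces from different physical areas share one abstract 2D space.

```python
def to_relative(trace):
    """Re-express an (x, y) trace against its own first point, so traces recorded
    in different physical areas land in one shared abstract 2D space."""
    x0, y0 = trace[0]
    return [(x - x0, y - y0) for x, y in trace]

def displacements(trace):
    """Per-step (dx, dy) vectors; with a fixed sampling interval these encode the
    speed and direction that a sequence model (e.g. a BiLSTM) can learn from."""
    return [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(trace, trace[1:])]

def to_absolute(rel_trace, origin):
    """Map a predicted relative trace back onto a concrete physical origin."""
    ox, oy = origin
    return [(ox + dx, oy + dy) for dx, dy in rel_trace]
```

    Because the representation is origin-free, a model trained federatedly on one set of users' spaces can be applied to traces from a new area after mapping back with `to_absolute`.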

    Location-based technologies for learning

    Emerging Technologies for Learning report: an article exploring location-based technologies and their potential for education.

    Innovative strategies for 3D visualisation using photogrammetry and 3D scanning for mobile phones

    3D model generation through photogrammetry is a modern way of overlaying digital information representing real-world objects onto a virtual world. The immediate scope of this study is to generate 3D models from imagery and to overcome the challenge of acquiring accurate 3D meshes. This research aims at optimised ways to capture raw 3D representations of real-life objects on mobile phones and then convert them into retopologised, textured, usable data. Augmented Reality (AR) is a projected combination of real and virtual objects. Much work has been done on market-dependent AR applications that let customers view products before purchasing them; what is needed is a product-independent photogrammetry-to-AR pipeline, freely available for creating independent 3D augmented models. For the purposes of this paper, the aim is to compare and analyse different open-source SDKs and libraries for producing an optimised 3D mesh using photogrammetry/3D scanning, which will form the main skeleton of the 3D-AR pipeline. Natural disasters, global political crises, terrorist attacks and other catastrophes have led researchers worldwide to capture monuments using photogrammetry and laser scans. Some of these objects of "global importance" are processed by organisations including CyArk (Cyber Archives) and UNESCO's World Heritage Centre, which work against time to preserve historical monuments before they are damaged or, in some cases, completely destroyed. It is also worth asking what is being done to preserve objects and monuments that are of value locally to a city or town. This research will develop pipelines for collecting and processing 3D data so that local communities can contribute to restoring endangered sites and objects using their smartphones, making these objects viewable in location-based AR.
    Some companies charge relatively large sums for local scanning projects. This research, by contrast, is a non-profit project that could later be used in school curricula, visitor attractions and historical preservation organisations across the globe at no cost. Its scope is not limited to furniture, museums or marketing; it could serve personal digital archiving as well. The research will capture and process virtual objects using mobile phones, comparing computer-vision methodologies from data conversion on the phone through 3D generation, texturing and retopologising. The outcomes of this research will be used as input for generating AR that is independent of any industry or product.