
    Augmented Reality Mobile apps Development with Unity and Vuforia SDK

    This paper shows how to create an animation of a game character for a mobile augmented reality application. The Unity platform, the Vuforia SDK and Android OS were used for the work, together with the downloaded Fighting Unity-Chan model, an animation resource pack specialized in fighting and action games. The paper's code shows how a button is created with GUI.Button and how its size, label and an animation, played when the button is clicked, are assigned to it. To create smooth and correct transitions between animations, an Animator controller was built in which all animations are laid out with their running times taken into account. For example, switching from the rest state (Idle) to another state, such as a kick (HiKick), requires a transition time of 1 second
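    The button-triggered, timed transition described above can be sketched outside Unity as a tiny state machine. This is an illustrative Python sketch, not the paper's C# code; only the Idle-to-kick transition with a 1-second duration comes from the abstract, and all class and method names are assumptions.

    ```python
    # Minimal sketch of an Animator-style state machine: named animation
    # states with timed transitions between them. Names are illustrative.

    class AnimatorSketch:
        def __init__(self, initial_state):
            self.state = initial_state
            self.transitions = {}   # (src_state, dst_state) -> duration in seconds
            self.pending = None     # (target_state, remaining_time) or None

        def add_transition(self, src, dst, duration):
            self.transitions[(src, dst)] = duration

        def trigger(self, dst):
            """Start a transition, e.g. when a GUI button is clicked."""
            duration = self.transitions.get((self.state, dst))
            if duration is None:
                raise ValueError(f"no transition {self.state} -> {dst}")
            self.pending = (dst, duration)

        def update(self, dt):
            """Advance time; complete the transition once its duration elapses."""
            if self.pending:
                dst, remaining = self.pending
                remaining -= dt
                if remaining <= 0:
                    self.state, self.pending = dst, None
                else:
                    self.pending = (dst, remaining)

    anim = AnimatorSketch("Idle")
    anim.add_transition("Idle", "HiKick", 1.0)  # 1-second transition, as in the paper
    anim.trigger("HiKick")
    anim.update(0.5)   # halfway through the blend: still in Idle
    anim.update(0.6)   # past the 1.0 s duration: now in HiKick
    ```

    In Unity itself the equivalent setup lives in the Animator window, where each transition arrow between animation states carries its own duration.
    
    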

    Development of an Augmented Reality Mobile Application for Educational Purposes

    This work is devoted to issues of visualization and data processing, in particular improving the visualization of three-dimensional objects using augmented reality technology in education. The article describes the development of an augmented reality mobile application on the theme of the galaxy, whose main purpose is educational and cognitive. Computer graphics, algorithms and modeling methods were used in the work. Use case: special images on a stand are recognized by the mobile application, which then shows the created 3D models of the “Sun” and the “Milky Way”; while a 3D model is being displayed, a short educational audio lecture is played. To create the 3D models of objects, Unity was used in conjunction with the augmented reality platform Vuforia

    Using Augmented Reality Technologies for Mobile Application Development

    This work reviews existing methods of working with AR and approaches to implementing it. The second part of the work describes the development of the interface for a prototype of the “Mendeleev's coin” information system. To create the 3D models of objects, Unity was used in conjunction with the augmented reality platform Vuforia. Android devices with mid-range specifications were selected for performance testing; the application compiled for testing showed stable operation

    The Geometry and Usage of the Supplementary Fisheye Lenses in Smartphones

    Nowadays, mobile phones are more than devices that merely satisfy the need for communication between people. Fisheye lenses integrated with mobile phones are advantageous because they are lightweight and easy to use. Beyond this, the study examines whether a fisheye lens and mobile phone combination can be used photogrammetrically and, if so, with what results. For this, the standard calibrations of the ‘Olloclip 3 in one’ fisheye lens used with an iPhone 4S and the ‘Nikon FC-E9’ fisheye lens used with a Nikon Coolpix 8700 were compared on the basis of the equidistant model. This experimental study shows that the Olloclip 3 in one lens developed for mobile phones has characteristics at least comparable to those of classic fisheye lenses. The dimensions of fisheye lenses used with smartphones are shrinking and their prices are falling; moreover, as verified in this study, the accuracy of fisheye lenses used on smartphones is better than that of conventional fisheye lenses. Smartphones with fisheye lenses will offer practical applications to ordinary users in the near future
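    The equidistant model on which the comparison is based maps an incidence angle θ to an image radius linearly, r = f·θ, unlike the perspective model r = f·tan(θ), which diverges as θ approaches 90°. A minimal numerical sketch, in which the focal length value is assumed for illustration and not taken from the paper:

    ```python
    import math

    # Equidistant fisheye projection: image radius grows linearly with the
    # incidence angle (theta in radians, r in the same units as f).

    def equidistant_radius(f, theta):
        """Image-plane radius for a ray arriving at angle theta off-axis."""
        return f * theta

    def equidistant_angle(f, r):
        """Inverse model: recover the incidence angle from a measured radius."""
        return r / f

    f = 9.0                                        # mm, an assumed focal length
    r = equidistant_radius(f, math.radians(90))    # a ray 90 degrees off-axis
    # A perspective lens (r = f * tan(theta)) could not image this ray at all,
    # since tan(90 degrees) is unbounded; the equidistant model maps it to a
    # finite radius of f * pi/2.
    ```

    Calibration then amounts to estimating f (and distortion corrections) so that measured image radii fit this linear model as closely as possible.
    
    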

    Ambient Intelligence for Next-Generation AR

    Next-generation augmented reality (AR) promises a high degree of context-awareness: a detailed knowledge of the environmental, user, social and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone: not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors is frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experience are affected by properties of the real environment, motivates the use of ambient IoT devices (wireless sensors and actuators placed in the surrounding environment) for the measurement and optimization of environment properties. In this book chapter we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR. This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse

    Herramientas de desarrollo libres para aplicaciones de realidad aumentada con Android. Análisis comparativo entre ellas

    Study of existing free development frameworks for building AR applications for Android mobile devices, identifying their strengths and weaknesses. Serrano Mamolar, A. (2012). Herramientas de desarrollo libres para aplicaciones de realidad aumentada con Android. Análisis comparativo entre ellas. http://hdl.handle.net/10251/18028

    Fusion de données capteurs étendue pour applications vidéo embarquées

    This thesis deals with sensor fusion between camera and inertial sensor measurements to provide a robust motion estimation algorithm for embedded video applications, targeting mainly smartphones and tablets. We present a real-time, online 2D camera motion estimation algorithm combining inertial and visual measurements. The proposed algorithm extends the preemptive RANSAC motion estimation procedure with inertial sensor data, introducing a dynamic Lagrangian hybrid scoring of the motion models to make the approach adaptive to various image and motion contents. These improvements come at little computational cost, keeping the complexity of the algorithm low enough for embedded platforms. The approach is compared with purely inertial and purely visual procedures. A novel approach to real-time hybrid monocular visual-inertial odometry for embedded platforms is also introduced, in which the interaction between vision and inertial sensors is maximized by performing fusion at multiple levels of the algorithm. Through tests conducted on specifically acquired sequences with ground-truth data, we show that our method outperforms classical hybrid techniques in ego-motion estimation
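    The preemptive RANSAC extension with a hybrid visual-inertial score can be sketched as follows. This is an independent illustration under simplifying assumptions: a pure 2D translation model, and a fixed penalty weight `lam` standing in for the thesis's dynamic Lagrangian term. None of the names or values come from the thesis itself.

    ```python
    import random

    # Preemptive RANSAC sketch: many motion hypotheses are scored on successive
    # chunks of the data, and the worst half is discarded after each chunk.
    # The score is hybrid: visual inlier count minus a penalty for disagreeing
    # with the inertially predicted motion.

    def preemptive_ransac(matches, inertial_pred, n_hyp=64, chunk=20, lam=0.5, tol=2.0):
        """matches: list of ((x1, y1), (x2, y2)) point correspondences.
        inertial_pred: (dx, dy) translation predicted from inertial sensors."""
        # 1. Generate hypotheses, each from one randomly chosen correspondence.
        hyps = []
        for _ in range(n_hyp):
            (x1, y1), (x2, y2) = random.choice(matches)
            hyps.append((x2 - x1, y2 - y1))
        scores = [0.0] * len(hyps)

        # 2. Preemption loop: score on one chunk, then halve the hypothesis set.
        i = 0
        while len(hyps) > 1 and i < len(matches):
            block = matches[i:i + chunk]
            i += chunk
            for k, (dx, dy) in enumerate(hyps):
                inliers = sum(
                    1 for (x1, y1), (x2, y2) in block
                    if abs(x1 + dx - x2) < tol and abs(y1 + dy - y2) < tol
                )
                dev = abs(dx - inertial_pred[0]) + abs(dy - inertial_pred[1])
                scores[k] += inliers - lam * dev
            order = sorted(range(len(hyps)), key=lambda k: scores[k], reverse=True)
            keep = order[: max(1, len(hyps) // 2)]
            hyps = [hyps[k] for k in keep]
            scores = [scores[k] for k in keep]
        return hyps[0]
    ```

    The inertial penalty keeps hypotheses generated from outlier matches from surviving the preemption rounds even when, by chance, they fit a few spurious correspondences; making `lam` adapt to the observed motion is the role of the dynamic Lagrangian in the thesis.
    
    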

    Creación de experiencias de realidad aumentada realistas por usuarios finales

    Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: Juan Manuel Dodero Beardo. Secretary: Andrea Bellucci. Committee member: Camino Fernández Llama

    Actas del XVII Congreso Internacional de Interacción Persona-Ordenador

    This publication collects the papers accepted, in each of their modalities, for the XVII International Conference on Human-Computer Interaction (Interacción 2016), held from 13 to 16 September 2016 in Salamanca within the framework of the IV Spanish Computer Science Conference (CEDI 2016). The conference is promoted by the Human-Computer Interaction Association (AIPO), and its organization fell on this occasion to the GRIAL group of the Universidad de Salamanca. Interacción 2016 is an international conference whose main objective is to promote and disseminate recent advances in the field of Human-Computer Interaction, at both the academic and industrial levels. The symposium presents new methodologies, novel interaction devices and user interfaces, and tools for their creation and evaluation in industrial and experimental settings