
    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms to the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, head-tracked user perspective rendering, and fixed point of view user perspective rendering.
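
    The reduced-cost pipeline described above can be illustrated with a short sketch: track sparse optical flow on front-camera frames and only invoke the expensive face/head tracker once noticeable user motion is detected. This is a minimal illustration of the general idea using OpenCV, not the authors' implementation; the feature counts, the motion threshold, and the run_face_tracker callback are assumptions.

        import cv2
        import numpy as np

        MOTION_THRESHOLD = 2.0  # median displacement in pixels; hypothetical tuning value

        def median_flow(prev_gray, cur_gray):
            """Estimate apparent user/device motion from sparse Lucas-Kanade optical flow."""
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                          qualityLevel=0.01, minDistance=10)
            if pts is None:
                return 0.0
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
            good = status.ravel() == 1
            if not good.any():
                return 0.0
            disp = np.linalg.norm((nxt - pts)[good].reshape(-1, 2), axis=1)
            return float(np.median(disp))

        def estimate_head_pose(prev_gray, cur_gray, run_face_tracker, last_pose):
            # Cheap path: while the user barely moves, reuse the previous head pose
            # and skip the costly face tracker entirely.
            if median_flow(prev_gray, cur_gray) > MOTION_THRESHOLD:
                return run_face_tracker(cur_gray)  # full head tracking only on demand
            return last_pose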

    Applying Augmented Reality to Outdoors Industrial Use

    Augmented Reality (AR) is currently gaining popularity in multiple different fields. However, the technology for AR still requires development in both hardware and software when considering industrial use. In order to create immersive AR applications, more accurate pose estimation techniques to define the virtual camera location are required. The algorithms for pose estimation often require a lot of processing power, which makes robust pose estimation a difficult task when using mobile devices or designated AR tools. The difficulties are even larger in outdoor scenarios, where the environment can vary a lot and is often unprepared for AR. This thesis aims to research different possibilities for creating AR applications for outdoor environments. Both hardware and software solutions are considered, but the focus is more on software. The majority of the thesis focuses on different visual pose estimation and tracking techniques for natural features. During the thesis, multiple different solutions were tested for outdoor AR. One commercial AR SDK was tested, and three different custom software solutions were developed for an Android tablet. The custom software solutions were an algorithm for combining data from a magnetometer and a gyroscope, a natural feature tracker, and a tracker based on panorama images. The tracker based on panorama images was implemented based on an existing scientific publication, and the presented tracker was further developed by integrating it into Unity 3D and adding the possibility of augmenting content. This thesis concludes that AR is very close to becoming a usable tool for professional use. The commercial solutions currently available are not yet ready for creating tools for professional use, but especially for different visualization tasks some custom solutions are capable of achieving the required robustness. The panorama tracker implemented in this thesis seems like a promising tool for robust pose estimation in unprepared outdoor environments.
    The popularity of augmented reality is currently growing in several different fields. The software and hardware currently available are not yet sufficient for applying augmented reality in professional use. In particular, pose estimation requires more accurate methods before immersive augmented reality applications can be developed. The algorithms used for pose estimation (estimating the orientation and position of the device) are often complex and therefore require significant processing power. These processing requirements are often hard to meet, especially on mobile devices and dedicated augmented reality hardware. Outdoor settings cause further problems, as the environment can change frequently and has not been prepared for augmented reality applications. The purpose of this thesis is to study possibilities for developing augmented reality applications for outdoor environments. Both hardware- and software-based solutions are discussed, with the emphasis on software. Most of the thesis focuses on different visual pose estimation techniques based on tracking natural features detected in the image. Several solutions for outdoor augmented reality were tested during the work: one commercial tool was evaluated, and in addition three custom applications were implemented for Android tablets. The applications developed during the work were a simple algorithm for combining gyroscope and magnetometer data, a natural feature tracking algorithm, and a tracking algorithm based on panorama images. The panorama-based tracking algorithm was implemented following an existing scientific publication and was further developed by integrating it into Unity 3D, which also made it possible to present augmented content. The thesis concludes that augmented reality technology is close to the point where its tools could be used professionally. The commercial tools currently available do not yet reach the level required for professional use, but non-commercial solutions suitable especially for visualization tasks already exist. In addition, the panorama-based tracking algorithm implemented during the work is found to be a promising tool for pose estimation outdoors.
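
    One of the custom solutions mentioned above combines gyroscope and magnetometer data. A common way to fuse these two sensors is a complementary filter: the gyroscope gives smooth short-term rotation, while the magnetometer provides an absolute but noisy heading that corrects drift. The sketch below shows the general technique only and is not the thesis implementation; the ALPHA weight and the single-axis simplification are assumptions.

        import math

        ALPHA = 0.98  # gyroscope weight; hypothetical tuning value

        def fuse_heading(prev_heading, gyro_z, dt, mag_x, mag_y):
            """One complementary-filter step for a single (yaw) axis, in radians."""
            gyro_heading = prev_heading + gyro_z * dt   # integrate angular rate (drifts)
            mag_heading = math.atan2(mag_y, mag_x)      # absolute heading (noisy)
            # Wrap the correction into [-pi, pi] before blending.
            diff = math.atan2(math.sin(mag_heading - gyro_heading),
                              math.cos(mag_heading - gyro_heading))
            return gyro_heading + (1.0 - ALPHA) * diff

    In practice the fused heading would feed the virtual camera orientation each frame, e.g. heading = fuse_heading(heading, gyro_z, dt, mag_x, mag_y).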

    Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors

    Visual attention is highly fragmented during mobile interactions, but the erratic nature of attention shifts currently limits attentive user interfaces to adapting after the fact, i.e., after shifts have already happened. We instead study attention forecasting -- the challenging task of predicting users' gaze behaviour (overt visual attention) in the near future. We present a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each (more than 90 hours in total). We propose a proof-of-concept method that uses device-integrated sensors and body-worn cameras to encode rich information on device usage and users' visual scene. We demonstrate that our method can forecast bidirectional attention shifts and predict whether the primary attentional focus is on the handheld mobile device. We study the impact of different feature sets on performance and discuss the significant potential but also the remaining challenges of forecasting user attention during mobile interactions.
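
    Framed as a supervised learning problem, attention forecasting reduces to predicting an upcoming shift from a short window of recent sensor features. The sketch below is a generic sliding-window classifier in scikit-learn, not the authors' proof-of-concept method; the window length, feature layout, and labels are assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        WINDOW = 30  # number of past sensor samples per prediction; hypothetical value

        def make_windows(features, labels, window=WINDOW):
            """Turn per-frame features (touch events, IMU, scene descriptors, ...) into
            fixed-length windows labelled with whether an attention shift follows."""
            X, y = [], []
            for t in range(window, len(features)):
                X.append(features[t - window:t].ravel())
                y.append(labels[t])
            return np.asarray(X), np.asarray(y)

        # Hypothetical usage, assuming features.shape == (T, D) and labels.shape == (T,):
        # X, y = make_windows(features, labels)
        # clf = RandomForestClassifier(n_estimators=100).fit(X, y)
        # shift_expected = clf.predict(X[-1:])  # 1 = attention shift predicted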

    Aspects of User Experience in Augmented Reality


    An end-to-end review of gaze estimation and its interactive applications on handheld mobile devices

    In recent years we have witnessed an increasing number of interactive systems on handheld mobile devices which utilise gaze as a single or complementary interaction modality. This trend is driven by the enhanced computational power of these devices, the higher resolution and capacity of their cameras, and improved gaze estimation accuracy obtained from advanced machine learning techniques, especially deep learning. As the literature is progressing fast, there is a pressing need to review the state of the art, delineate the boundary, and identify the key research challenges and opportunities in gaze estimation and interaction. This paper aims to serve this purpose by presenting an end-to-end holistic view of this area, from gaze-capturing sensors, to gaze estimation workflows, to deep learning techniques, and to gaze interactive applications.
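
    Many appearance-based gaze estimators covered by such reviews follow a common pattern: crop the eye region from the front camera, feed it to a small CNN, and regress a 2-D gaze direction. The sketch below is a deliberately minimal PyTorch example of that pattern, not any specific method from the literature; the input size and layer sizes are assumptions.

        import torch
        import torch.nn as nn

        class GazeNet(nn.Module):
            """Minimal appearance-based gaze estimator: grey eye patch in, (yaw, pitch) out."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * 9 * 15, 128), nn.ReLU(),
                    nn.Linear(128, 2),  # gaze angles in radians
                )

            def forward(self, x):  # x: (N, 1, 36, 60) normalised eye patches
                return self.head(self.features(x))

        # Training-step sketch: regress the angles with a mean-squared-error loss.
        # model = GazeNet()
        # loss = nn.MSELoss()(model(eye_batch), target_angles)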

    Designing for Mixed Reality Urban Exploration

    This paper introduces a design framework for mixed reality urban exploration (MRUE), based on a concrete implementation in a historical city. The framework integrates different modalities, such as virtual reality (VR), augmented reality (AR), and haptics-audio interfaces, as well as advanced features such as personalized recommendations, social exploration, and itinerary management. It makes it possible to address a number of concerns regarding information overload, safety, and quality of the experience, which are not sufficiently tackled in traditional non-integrated approaches. This study presents an integrated mobile platform built on top of this framework and reflects on the lessons learned.
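
    At the implementation level, a framework like the one described can be pictured as a set of interchangeable presentation modalities driven by a shared itinerary model. The sketch below is purely illustrative and not taken from the paper; all class and method names are hypothetical.

        from dataclasses import dataclass, field
        from typing import Protocol

        @dataclass
        class PointOfInterest:
            name: str
            lat: float
            lon: float

        class Modality(Protocol):
            """One presentation channel (AR overlay, VR reconstruction, haptics-audio cue, ...)."""
            def present(self, poi: PointOfInterest) -> None: ...

        @dataclass
        class Itinerary:
            stops: list[PointOfInterest] = field(default_factory=list)
            visited: set[str] = field(default_factory=set)

            def recommend_next(self) -> PointOfInterest | None:
                # Placeholder for personalised recommendation / safety-filtering logic.
                for poi in self.stops:
                    if poi.name not in self.visited:
                        return poi
                return None

        def explore_step(itinerary: Itinerary, modalities: list[Modality]) -> None:
            poi = itinerary.recommend_next()
            if poi is not None:
                for modality in modalities:  # same stop rendered through every active channel
                    modality.present(poi)
                itinerary.visited.add(poi.name)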
