
    Enabling Real-Time Shared Environments on Mobile Head-Mounted Displays

    Head-Mounted Displays (HMDs) are becoming more prevalent as consumer devices, allowing users to experience scenes and environments from a point of view naturally controlled by their movement. However, there is limited application of this experiential paradigm to telecommunications -- that is, where an HMD user can 'call' a mobile phone user and begin to look around in their environment. In this thesis we present a telepresence system for connecting mobile phone users with people wearing HMDs, allowing the HMD user to experience the environment of the mobile user in real-time. We developed an Android application that generates and transmits high-quality spherical-panorama-based environments in real-time, and a companion application for HMDs to view those environments live. This thesis focuses on the technical challenges involved in creating panoramic environments of sufficient quality for viewing inside an HMD, given the constraints that arise from using mobile phones. We present computer vision techniques optimised for these constrained conditions, justifying the trade-offs made between speed and quality. We conclude by comparing our solution to conceptually similar past research along the metrics of computation speed and output quality.
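
At the core of any spherical-panorama pipeline is the mapping between 3D viewing directions and pixels of an equirectangular image. As an illustrative sketch (not the thesis's actual implementation), the following maps a unit viewing direction to equirectangular pixel coordinates:

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit viewing direction (dx, dy, dz) to (u, v) pixel
    coordinates in an equirectangular panorama of the given size."""
    # Longitude (yaw) in [-pi, pi]; latitude (pitch) in [-pi/2, pi/2].
    lon = math.atan2(dx, dz)
    lat = math.asin(max(-1.0, min(1.0, dy)))
    # Normalise to [0, 1], then scale to pixel coordinates:
    # lon=0 (straight ahead) lands at the horizontal centre,
    # lat=+pi/2 (straight up) lands at the top row.
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v
```

For example, the forward direction (0, 0, 1) maps to the centre of a 2048x1024 panorama; inverting this mapping per output pixel is the usual way camera frames are resampled into the panorama.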

    Digital Urban - The Visual City

    Nothing in the city is experienced by itself, for a city's perspicacity is the sum of its surroundings. To paraphrase Lynch (1960), at every instant, there is more than we can see and hear. This is the reality of the physical city, and thus in order to replicate the visual experience of the city within digital space, the space itself must convey to the user a sense of place. This is what we term the “Visual City”, a visually recognisable city built out of the digital equivalent of bricks and mortar: polygons, textures, and most importantly data. Recently there has been a revolution in the production and distribution of digital artefacts which represent the visual city. Digital city software that was once the domain of high-powered personal computers, research labs, and professional packages is now available to the public-at-large through both the web and low-end home computing. These developments have gone hand in hand with the re-emergence of geography and geographic location as a way of tagging information to non-proprietary web-based software such as Google Maps, Google Earth, Microsoft’s Virtual Earth, ESRI’s ArcExplorer, and NASA’s World Wind, amongst others. The move towards ‘digital earths’ for the distribution of geographic information has, without doubt, opened up a widespread demand for the visualization of our environment, where the emphasis is now on the third dimension. While the third dimension is central to the development of the digital or visual city, it is not the only way the city can be visualized, for a number of emerging tools and ‘mashups’ are enabling visual data to be tagged geographically using a cornucopia of multimedia systems. We explore these social, textual, geographical, and visual technologies throughout this chapter.

    Immersion on the Edge: A Cooperative Framework for Mobile Immersive Computing

    Immersive computing (IC) technologies such as virtual reality and augmented reality are gaining tremendous popularity. In this poster, we present CoIC, a Cooperative framework for mobile Immersive Computing. The design of CoIC is based on a key insight that IC tasks among different applications or users might be similar or redundant. CoIC enhances the performance of mobile IC applications by caching and sharing computation-intensive IC results on the edge. Our preliminary evaluation results on an AR application show that CoIC can reduce recognition and rendering latency by up to 52.28% and 75.86%, respectively, on current mobile devices. Comment: this poster has been accepted by SIGCOMM in June 201
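
The caching idea can be illustrated with a small sketch; the interface, fingerprinting scheme, and LRU eviction policy here are assumptions for illustration, not CoIC's actual design:

```python
import hashlib
from collections import OrderedDict

class EdgeResultCache:
    """Hypothetical edge-side cache for immersive-computing results.
    Identical inputs produce identical fingerprints, so redundant
    tasks from different applications or users hit the cache."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self._store = OrderedDict()  # fingerprint -> cached result

    @staticmethod
    def fingerprint(frame_bytes):
        # Content-addressed key over the raw task input.
        return hashlib.sha256(frame_bytes).hexdigest()

    def get(self, frame_bytes):
        key = self.fingerprint(frame_bytes)
        if key in self._store:
            self._store.move_to_end(key)  # LRU: mark as recently used
            return self._store[key]
        return None  # cache miss: the edge must compute the result

    def put(self, frame_bytes, result):
        key = self.fingerprint(frame_bytes)
        self._store[key] = result
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

On a hit, the expensive recognition or rendering step is skipped entirely, which is where latency reductions of the kind reported above would come from.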

    InLoc: Indoor Visual Localization with Dense Matching and View Synthesis

    We seek to predict the 6 degree-of-freedom (6DoF) pose of a query photograph with respect to a large indoor 3D map. The contributions of this work are three-fold. First, we develop a new large-scale visual localization method targeted for indoor environments. The method proceeds along three steps: (i) efficient retrieval of candidate poses that ensures scalability to large-scale environments, (ii) pose estimation using dense matching rather than local features to deal with textureless indoor scenes, and (iii) pose verification by virtual view synthesis to cope with significant changes in viewpoint, scene layout, and occluders. Second, we collect a new dataset with reference 6DoF poses for large-scale indoor localization. Query photographs are captured by mobile phones at a different time than the reference 3D map, thus presenting a realistic indoor localization scenario. Third, we demonstrate that our method significantly outperforms current state-of-the-art indoor localization approaches on this new, challenging dataset.
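
The three-step structure can be sketched as a retrieval/estimation/verification loop. All callables and thresholds below are hypothetical placeholders standing in for InLoc's actual components (image retrieval, dense-matching pose estimation, view synthesis, and similarity scoring):

```python
def localize(query, database, retrieve, estimate_pose, render_view,
             similarity, top_k=10, accept=0.6):
    """Sketch of a coarse-to-fine indoor localization loop.

    retrieve      -- (i) returns top_k candidate database images/poses
    estimate_pose -- (ii) dense-matching pose estimate, or None on failure
    render_view   -- (iii) synthesises a virtual view at a given pose
    similarity    -- scores the query against a synthesised view
    """
    best_pose, best_score = None, -1.0
    for candidate in retrieve(query, database, top_k):
        pose = estimate_pose(query, candidate)
        if pose is None:
            continue  # too few dense matches for this candidate
        # Verify: re-render the map from the estimated pose and compare
        # the synthetic view against the actual query photograph.
        score = similarity(query, render_view(database, pose))
        if score > best_score:
            best_pose, best_score = pose, score
    # Reject the pose if even the best verification score is too low.
    return best_pose if best_score >= accept else None
```

The verification step is what makes the pipeline robust: a wrong pose hypothesis renders a view that looks nothing like the query, so it scores poorly and is discarded.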
