16,050 research outputs found

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms to the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, to head-tracked user perspective rendering, and to fixed-point-of-view user perspective rendering.
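    The abstract's idea of a lightweight motion estimate that gates the expensive head tracker could, for illustration, look like the sketch below: sparse optical-flow vectors are aggregated into a robust translation estimate, and head tracking is triggered only when the estimated motion exceeds a threshold. All function names and the threshold value are hypothetical; the paper's actual tracker and triggering logic are not specified in the abstract.

    ```python
    import numpy as np

    def estimate_user_motion(flow_vectors):
        """Estimate a dominant 2D translation from sparse optical-flow
        vectors (shape N x 2) via the per-axis median, which is robust
        to outlier feature tracks."""
        flow = np.asarray(flow_vectors, dtype=float)
        return np.median(flow, axis=0)

    def motion_exceeds_threshold(flow_vectors, threshold=2.0):
        """Start the (expensive) head tracker only when the estimated
        motion magnitude exceeds a pixel threshold."""
        dx, dy = estimate_user_motion(flow_vectors)
        return float(np.hypot(dx, dy)) > threshold
    ```

    For example, a set of flow vectors dominated by a consistent 3-pixel horizontal shift would exceed a 2-pixel threshold even if a few tracks are outliers, while near-zero jitter would not.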

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investments of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction, and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Enabling Self-aware Smart Buildings by Augmented Reality

    Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal response models, which are rough approximations of true physical models and ignore dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of "self-aware" smart buildings, such that buildings are able to explicitly construct physical models of themselves (e.g., incorporating building structures and materials, and thermal flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using "augmented reality". The extensive user-environment interactions in augmented reality not only can provide intuitive user interfaces for building systems, but also can capture the physical structures and possibly materials of buildings accurately enough to enable real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality and discusses its applications.
    Comment: This paper appears in ACM International Conference on Future Energy Systems (e-Energy), 201
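    As a toy illustration of the "abstract building thermal response models" the paper contrasts with, a common simplification is a first-order lumped-parameter (1R1C) model integrated with forward Euler. The parameter values and function name below are purely illustrative assumptions, not from the paper.

    ```python
    def simulate_room_temperature(t_in, t_out, r, c, q, dt, steps):
        """Forward-Euler simulation of a first-order (1R1C) thermal model:

            C * dT_in/dt = (T_out - T_in) / R + Q

        where R is the envelope thermal resistance (K/W), C the lumped
        heat capacity (J/K), and Q the HVAC/internal heat input (W).
        Returns the indoor temperature after `steps` Euler steps of dt
        seconds each."""
        for _ in range(steps):
            t_in += dt * ((t_out - t_in) / r + q) / c
        return t_in
    ```

    With no heat input (Q = 0), the indoor temperature simply relaxes toward the outdoor temperature, which is exactly the kind of spatially uniform behavior such models assume and the paper's self-aware approach aims to go beyond.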