    From images via symbols to contexts: using augmented reality for interactive model acquisition

    Systems that perform in real environments need to bind their internal state to externally perceived objects, events, or complete scenes. How to learn this correspondence has been a long-standing problem in computer vision as well as artificial intelligence. Augmented Reality provides an interesting perspective on this problem because a human user can directly relate displayed system results to real environments. In the following we present a system that is able to bootstrap internal models from user-system interactions. Starting from pictorial representations, it learns symbolic object labels that provide the basis for storing observed episodes. In a second step, more complex relational information is extracted from the stored episodes, which enables the system to react to specific scene contexts.
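
    The abstract describes a three-stage pipeline: pictorial detections are bound to symbolic labels through user interaction, labelled observations are stored as episodes, and relational context is then extracted from those episodes. The sketch below is a minimal, hypothetical illustration of that data flow, not the authors' system; every name in it (Detection, LabelStore, Episode, left_of_relations) is an assumption introduced for illustration.

    # Minimal sketch (hypothetical, not the authors' code) of the bootstrapping idea:
    # pictorial detections are bound to symbolic labels via user feedback, stored as
    # episodes, and simple spatial relations are then extracted from those episodes.
    from dataclasses import dataclass, field


    @dataclass
    class Detection:
        """A pictorial observation: bounding box plus an appearance feature vector."""
        box: tuple[float, float, float, float]   # (x, y, w, h) in image coordinates
        feature: tuple[float, ...]                # appearance descriptor


    @dataclass
    class LabelStore:
        """Maps appearance features to symbolic labels learned from user feedback."""
        prototypes: dict[str, tuple[float, ...]] = field(default_factory=dict)

        def teach(self, label: str, det: Detection) -> None:
            # The user names a displayed detection; keep its feature as a prototype.
            self.prototypes[label] = det.feature

        def classify(self, det: Detection) -> str | None:
            # Nearest-prototype labelling (Euclidean distance); None if nothing is known.
            best, best_d = None, float("inf")
            for label, proto in self.prototypes.items():
                d = sum((a - b) ** 2 for a, b in zip(proto, det.feature)) ** 0.5
                if d < best_d:
                    best, best_d = label, d
            return best


    @dataclass
    class Episode:
        """A stored scene: the symbolic labels and positions observed at one moment."""
        objects: dict[str, tuple[float, float]]   # label -> object centre


    def left_of_relations(episode: Episode) -> list[tuple[str, str]]:
        """Extract a simple relational context ('a left-of b') from a stored episode."""
        items = sorted(episode.objects.items(), key=lambda kv: kv[1][0])
        return [(a, b) for i, (a, _) in enumerate(items) for (b, _) in items[i + 1:]]


    if __name__ == "__main__":
        store = LabelStore()
        store.teach("cup", Detection((10, 20, 30, 30), (0.9, 0.1)))
        store.teach("book", Detection((60, 25, 40, 20), (0.1, 0.8)))
        print(store.classify(Detection((12, 22, 28, 29), (0.85, 0.15))))  # -> 'cup'
        ep = Episode({"cup": (25.0, 35.0), "book": (80.0, 35.0)})
        print(left_of_relations(ep))                                      # -> [('cup', 'book')]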

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Augmented Reality in Astrophysics

    Augmented Reality consists of merging live images with virtual layers of information. The rapid growth in the popularity of smartphones and tablets over recent years has provided a large base of potential users of Augmented Reality technology, and virtual layers of information can now be attached to a wide variety of physical objects. In this article, we explore the potential of Augmented Reality for astrophysical research with two distinct experiments: (1) Augmented Posters and (2) Augmented Articles. We demonstrate that the emerging technology of Augmented Reality can already be used and implemented without expert knowledge using currently available apps. Our experiments highlight the potential of Augmented Reality to improve the communication of scientific results in the field of astrophysics. We also present feedback gathered from the Australian astrophysics community that reveals evidence of some interest in this technology by astronomers who experimented with Augmented Posters. In addition, we discuss possible future trends for Augmented Reality applications in astrophysics and explore the current limitations associated with the technology. This Augmented Article, the first of its kind, is designed to allow the reader to directly experiment with this technology.
    Comment: 15 pages, 11 figures. Accepted for publication in Ap&SS. The final publication will be available at link.springer.com
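
    The opening sentence describes the core mechanism: compositing a virtual information layer onto a live camera image. The following is a minimal sketch of that idea using OpenCV, not the apps or experiments used in the article; the overlay content, window name, and webcam index are illustrative assumptions.

    # Minimal sketch of merging a live image with a virtual information layer.
    # The labelled box stands in for, e.g., a poster annotation; it is illustrative only.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                     # default webcam (assumed available)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        layer = np.zeros_like(frame)              # empty virtual layer, same size as frame
        cv2.rectangle(layer, (50, 50), (300, 150), (0, 255, 0), thickness=-1)
        cv2.putText(layer, "Augmented annotation", (60, 110),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2)
        merged = cv2.addWeighted(frame, 1.0, layer, 0.6, 0)   # blend layer over live image
        cv2.imshow("augmented view", merged)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()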

    Emerging technologies for learning report (volume 3)

    Bringing tabletop technologies to kindergarten children

    Taking computer technology away from the desktop and into a more physical, manipulative space is known to provide many benefits and is generally considered to result in a system that is easier to learn and more natural to use. This paper describes a design solution that allows kindergarten children to benefit from the new pedagogical possibilities that tangible interaction and tabletop technologies offer for manipulative learning. After analysing children's cognitive and psychomotor skills, we designed and tuned a prototype game that is suitable for children aged 3 to 4 years old. Our prototype uniquely combines low-cost tangible interaction and tabletop technology with tutored learning. The design has been based on observation of children using the technology, letting them play freely with the application during three play sessions. These observational sessions informed the design decisions for the game whilst also confirming the children's enjoyment of the prototype.