3,535 research outputs found

    An observational study of children interacting with an augmented story book

    We present findings of an observational study investigating how young children interact with augmented reality story books. Children aged between 6 and 7 read and interacted with one of two story books aimed at early literacy education. The books' pages were augmented with animated virtual 3D characters, sound, and interactive tasks. Introducing novel media to young children requires system and story designers to consider not only technological issues but also questions arising from story design and the design of interactive sequences. We discuss the findings of our study and their implications for the implementation of augmented story books.

    Collaborative geographic visualization

    Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Environmental Engineering, Environmental Management and Systems profile. The present document is a review of essential references to take into account when developing ubiquitous Geographical Information Systems (GIS) for collaborative visualization purposes. Its chapters focus, respectively, on general principles of GIS, their multimedia components and ubiquitous practices; geo-referenced information visualization and its graphical components of virtual and augmented reality; collaborative environments, their technological requirements, architectural specificities, and models for collective information management; and some final considerations about the future and challenges of collaborative visualization of GIS in ubiquitous environments.

    VR : Time Machine

    Time Machine is an immersive Virtual Reality installation that explains – in simple terms – the Striatal Beat Frequency (SBF) model of time perception. The installation was created as a collaboration between neuroscientists within the field of time perception and a team of digital designers and audio composers/engineers. This paper outlines the process, as well as the lessons learned, in designing a virtual reality experience that aims to simplify a complex idea for a novice audience. The authors describe in detail the process of creating the world, the user experience mechanics, and the methods of placing information in the virtual space in order to enhance the learning experience. The work was showcased at the 4th International Conference on Time Perspective, where the authors collected feedback from the audience. The paper concludes with a reflection on the work and some suggestions for the next iteration of the project.

    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Shared User Interfaces of Physiological Data: Systematic Review of Social Biofeedback Systems and Contexts in HCI

    As an emerging interaction paradigm, physiological computing is increasingly being used to both measure and feed back information about our internal psychophysiological states. While most applications of physiological computing are designed for individual use, recent research has explored how biofeedback can be socially shared between multiple users to augment human-human communication. Reflecting on the empirical progress in this area of study, this paper presents a systematic review of 64 studies to characterize the interaction contexts and effects of social biofeedback systems. Our findings highlight the importance of physio-temporal and social contextual factors surrounding physiological data sharing as well as how it can promote social-emotional competences on three different levels: intrapersonal, interpersonal, and task-focused. We also present the Social Biofeedback Interactions framework to articulate the current physiological-social interaction space. We use this to frame our discussion of the implications and ethical considerations for future research and design of social biofeedback interfaces.
    Comment: [Accepted version, 32 pages] Clara Moge, Katherine Wang, and Youngjun Cho. 2022. Shared User Interfaces of Physiological Data: Systematic Review of Social Biofeedback Systems and Contexts in HCI. In CHI Conference on Human Factors in Computing Systems (CHI'22), ACM, https://doi.org/10.1145/3491102.351749

    Procedurally generated AI compound media for expanding audial creations, broadening immersion and perception experience

    Recently, the world has been gaining increasing access to ever more advanced artificial intelligence tools. This phenomenon does not bypass the worlds of sound and visual art, and both can benefit in ways yet unexplored, drawing them closer to one another. Recent breakthroughs open possibilities to use AI-driven tools to create generative art and to use it as a compound of other multimedia. The aim of this paper is to present an original concept of using AI to create visual compound material for an existing audio source. This is a way of broadening accessibility, appealing to different human senses through the source media and expanding its initial form. This research uses a novel method of enhancing fundamental material consisting of a text source (script) and a sound layer (audio play) by adding an extra layer of multimedia experience – a visual one, generated procedurally. A set of images generated by AI tools forms a storytelling animation, offering a new way to immerse in the experience of sound perception and to focus on the initial audio material. The main idea of the paper is a pipeline – a form of blueprint – for procedural image generation based on the source context (audio or textual) transformed into text prompts, together with tools to automate the process through a set of code instructions. This process allows the creation of coherent and cohesive (to a certain extent) visual cues that accompany the audio experience, elevating it to a multimodal piece of art. Using today's technologies, creators can enhance audio works procedurally, providing them with visual context. The paper discusses current possibilities, use cases, limitations, and biases of the presented tools and solutions.
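    The pipeline this abstract describes – source context turned into scene-level text prompts, one generated image per scene, frames assembled alongside the audio – could be automated roughly as in the minimal sketch below. This is an illustrative assumption, not the authors' implementation: the scene-splitting heuristic, the STYLE_HINT prompt suffix, and the generate_image() function are hypothetical, with the latter a placeholder for whatever text-to-image backend is actually used.

```python
# Minimal sketch of a script-to-frames pipeline (assumptions: plain-text script,
# scenes separated by blank lines, and a placeholder image-generation backend).
from pathlib import Path
from textwrap import shorten

STYLE_HINT = "storybook illustration, soft lighting"  # assumed global style suffix


def script_to_prompts(script_text: str, max_len: int = 200) -> list[str]:
    """Split the source script into scene-sized chunks and turn each into a text prompt."""
    scenes = [p.strip() for p in script_text.split("\n\n") if p.strip()]
    return [f"{shorten(scene, max_len)}, {STYLE_HINT}" for scene in scenes]


def generate_image(prompt: str, out_path: Path) -> None:
    """Placeholder: call a text-to-image model of your choice and save the frame here."""
    raise NotImplementedError("plug in an actual image-generation backend")


def build_visual_track(script_file: str, out_dir: str = "frames") -> list[Path]:
    """Generate one image per scene; frames can later be timed against the audio layer."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    frames = []
    for i, prompt in enumerate(script_to_prompts(Path(script_file).read_text())):
        frame = out / f"scene_{i:03d}.png"
        generate_image(prompt, frame)
        frames.append(frame)
    return frames
```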

    Updating the art history curriculum: incorporating virtual and augmented reality technologies to improve interactivity and engagement

    Master's Project (M.Ed.), University of Alaska Fairbanks, 2017. This project investigates how art history curricula in higher education can borrow from and incorporate emerging technologies currently being used in art museums. Many art museums are using augmented reality and virtual reality technologies to transform their visitors' experiences into ones that are interactive and engaging. Art museums have historically offered static visitor experiences, which have been mirrored in the study of art. This project explores the current state of the art history classroom in higher education, which has historically been a teacher-centered learning environment, and the learning effects of that environment. The project then looks at how art museums are creating visitor-centered learning environments, specifically at how they are using reality technologies (virtual and augmented) to transition into digitally interactive learning environments that support various learning theories. Lastly, the project examines the learning benefits of such tools to see what could (and should) be implemented into art history curricula at the higher education level, and provides a sample section of a curriculum demonstrating what that implementation could look like. Art and art history are a crucial part of our culture, and being able to engage with and learn from them successfully enables the spread of our culture through digital means and the spread of digital culture.