3,535 research outputs found
An observational study of children interacting with an augmented story book
We present findings of an observational study investigating how young children interact with augmented reality story books. Children aged between 6 and 7 read and interacted with one of two story books aimed at early literacy education. The books' pages were augmented with animated virtual 3D characters, sound, and interactive tasks. Introducing novel media to young children requires system and story designers to consider not only technological issues but also questions arising from story design and the design of interactive sequences. We discuss the findings of our study and their implications for the implementation of augmented story books.
Collaborative geographic visualization
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Environmental Engineering, Management and Environmental Systems profile. The present document is a review of essential references to take into account when developing ubiquitous Geographical Information Systems (GIS) for collaborative visualization purposes.
Its chapters focus, respectively, on general principles of GIS, their multimedia components and ubiquitous practices; geo-referenced information visualization and its graphical components of virtual and augmented reality; collaborative environments, their technological requirements, architectural specificities, and models for collective information management; and some final considerations about the future and challenges of collaborative visualization of GIS in ubiquitous environments.
3D (embodied) projection mapping and sensing bodies : a study in interactive dance performance
This dissertation identifies the synergies between physical and virtual environments when designing for immersive experiences in interactive dance performances. The integration of virtual information in physical space is transforming our interactions and experiences with the world. By using the body and creative expression as the interface between real and virtual worlds, dance performance creates a privileged framework to research and design interactive mixed reality environments and immersive augmented architectures. The research is primarily situated in the fields of visual art and interaction design. It combines performance with transdisciplinary fields and intertwines practice with theory. The theoretical and conceptual implications involved in designing and experiencing immersive hybrid environments are analyzed using the reality-virtuality continuum. These theories helped frame the ways augmented reality architectures are achieved through the integration of dance performance with digital software and reception displays. They also helped identify the main artistic affordances and restrictions in the design of augmented reality and augmented virtuality environments for live performance. These pervasive media architectures were materialized in three field experiments, the live dance performances. Each performance was created in three different stages of conception, design, and production. The first stage was to "digitize" the performer's movement and brain activity into the virtual environment and our system. This was accomplished through the use of depth sensor cameras, 3D motion capture, and brain-computer interfaces. The second stage was the creation of the computational architecture and software that aggregates the connections and mapping between the physical body and the spatial dynamics of the virtual environment. This process created real-time interactions between the performer's behavior and motion and the real-time generative computer 3D graphics.
Finally, the third stage consisted of the output modality: 3D projector-based augmentation techniques were adopted in order to overlay the virtual environment onto physical space. This thesis proposes and lays out theoretical, technical, and artistic frameworks between 3D digital environments and moving bodies in dance performance. By sensing the body and the brain with the 3D virtual environments, new layers of augmentation and interaction are established, ultimately generating mixed reality environments for embodied improvisational self-expression.
VR : Time Machine
Time Machine is an immersive Virtual Reality installation that explains, in simple terms, the Striatal Beat Frequency (SBF) model of time perception. The installation was created as a collaboration between neuroscientists within the field of time perception and a team of digital designers and audio composers/engineers. This paper outlines the process, as well as the lessons learned, in designing a virtual reality experience that aims to simplify a complex idea for a novice audience. The authors describe in detail the process of creating the world, the user experience mechanics, and the methods of placing information in the virtual place in order to enhance the learning experience. The work was showcased at the 4th International Conference on Time Perspective, where the authors collected feedback from the audience. The paper concludes with a reflection on the work and some suggestions for the next iteration of the project.
Tangible user interfaces : past, present and future directions
In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.
Shared User Interfaces of Physiological Data: Systematic Review of Social Biofeedback Systems and Contexts in HCI
As an emerging interaction paradigm, physiological computing is increasingly
being used to both measure and feed back information about our internal
psychophysiological states. While most applications of physiological computing
are designed for individual use, recent research has explored how biofeedback
can be socially shared between multiple users to augment human-human
communication. Reflecting on the empirical progress in this area of study, this
paper presents a systematic review of 64 studies to characterize the
interaction contexts and effects of social biofeedback systems. Our findings
highlight the importance of physio-temporal and social contextual factors
surrounding physiological data sharing as well as how it can promote
social-emotional competences on three different levels: intrapersonal,
interpersonal, and task-focused. We also present the Social Biofeedback
Interactions framework to articulate the current physiological-social
interaction space. We use this to frame our discussion of the implications and
ethical considerations for future research and design of social biofeedback
interfaces.
Comment: [Accepted version, 32 pages] Clara Moge, Katherine Wang, and Youngjun
Cho. 2022. Shared User Interfaces of Physiological Data: Systematic Review of
Social Biofeedback Systems and Contexts in HCI. In CHI Conference on Human
Factors in Computing Systems (CHI'22), ACM,
https://doi.org/10.1145/3491102.351749
Procedurally generated AI compound media for expanding audial creations, broadening immersion and perception experience
Recently, the world has been gaining rapidly increasing access to ever more advanced artificial intelligence tools. This phenomenon does not bypass the worlds of sound and visual art, and both can benefit in ways yet unexplored, drawing them closer to one another. Recent breakthroughs open possibilities to use AI-driven tools for creating generative art and employing it as a compound of other multimedia. The aim of this paper is to present an original concept of using AI to create visual compound material for an existing audio source. This is a way of broadening accessibility, thus appealing to different human senses through the source media and expanding its initial form. This research uses a novel method of enhancing base material consisting of a text source (script) and a sound layer (audio play) by adding an extra layer of multimedia experience, a visual one, generated procedurally. A set of images generated by AI tools forms a storytelling animation, offering a new way to immerse oneself in the experience of sound perception and to focus on the initial audial material. The main idea of the paper is a pipeline, a form of blueprint, for the process of procedural image generation based on the source context (audial or textual) transformed into text prompts, together with tools to automate it by programming a set of code instructions. This process allows the creation of coherent and cohesive (to a certain extent) visual cues accompanying the audial experience, elevating it to a multimodal piece of art. Using today's technologies, creators can enhance audial forms procedurally, providing them with visual context. The paper discusses current possibilities, use cases, limitations, and biases of the presented tools and solutions.
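The prompt-building step of such a pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: the segment structure, style suffix, and function names are all hypothetical, and the actual image-generation call (to whatever AI tool a creator chooses) is deliberately left out.

```python
# Sketch: turn a timed transcript of an audio play into per-scene text
# prompts that could drive an AI image generator. All names here are
# illustrative assumptions, not from the paper; the generation call
# itself would be supplied by the creator's chosen tool.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the audio where the scene begins
    end: float    # seconds into the audio where the scene ends
    text: str     # transcript or script excerpt for this scene

# Hypothetical global style cue appended to every prompt, so the
# generated frames stay visually coherent across the animation.
STYLE = "storybook illustration, soft lighting"

def build_prompts(segments, style=STYLE):
    """Produce one prompt per transcript segment, tagged with its time window."""
    prompts = []
    for seg in segments:
        prompts.append({
            "start": seg.start,
            "end": seg.end,
            "prompt": f"{seg.text.strip()}, {style}",
        })
    return prompts

segments = [
    Segment(0.0, 12.5, "A narrator describes a stormy harbor at dusk"),
    Segment(12.5, 30.0, "Two sailors argue over a torn map"),
]
prompts = build_prompts(segments)
```

Each resulting prompt carries its time window, so the generated images can later be sequenced into an animation synchronized with the source audio.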
Updating the art history curriculum: incorporating virtual and augmented reality technologies to improve interactivity and engagement
Master's Project (M.Ed.), University of Alaska Fairbanks, 2017. This project investigates how art history curricula in higher education can borrow from and incorporate emerging technologies currently being used in art museums. Many art museums are using augmented reality and virtual reality technologies to transform their visitors' experiences into ones that are interactive and engaging. Art museums have historically offered static visitor experiences, which have been mirrored in the study of art. This project explores the current state of the art history classroom in higher education, historically a teacher-centered learning environment, and the learning effects of that environment. The project then looks at how art museums are creating visitor-centered learning environments, specifically at how they are using reality technologies (virtual and augmented) to transition into digitally interactive learning environments that support various learning theories. Lastly, the project examines the learning benefits of such tools to see what could (and should) be implemented into art history curricula at the higher education level, and provides a sample section of a curriculum demonstrating what that implementation could look like. Art and art history are a crucial part of our culture, and being able to successfully engage with and learn from them enables the spread of our culture through digital means, and of digital culture.