
    Evaluating the Effects of Immersive Embodied Interaction on Cognition in Virtual Reality

    Virtual reality is on the verge of becoming a mainstream household technology, as head-mounted displays, trackers, and interaction devices become affordable and widely available. Virtual reality (VR) has immense potential to enhance education and training, and its power can be used to spark interest and enthusiasm among learners. It is therefore imperative to evaluate the risks and benefits that immersive virtual reality poses to education. Research suggests that learning is an embodied process: it depends on grounded aspects of the body, including action, perception, and interaction with the environment. This research studies whether immersive embodiment by means of virtual reality facilitates embodied cognition, since a pedagogical VR solution that takes advantage of embodied cognition could yield enhanced learning benefits. Towards this goal, this research presents a linear continuum for immersive embodied interaction within virtual reality and evaluates the effects of three levels of immersive embodied interaction on cognitive thinking, presence, usability, and satisfaction among users in science, technology, engineering, and mathematics (STEM) education. Results from the presented experiments show that immersive virtual reality is highly effective for knowledge acquisition and retention, and substantially enhances user satisfaction, interest, and enthusiasm. Users experience high levels of presence and are deeply engaged in the learning activities within the immersive virtual environments. The studies evaluate pedagogical VR software for training and motivating students in STEM education, and provide an empirical analysis comparing desktop VR (DVR), immersive VR (IVR), and immersive embodied VR (IEVR) conditions for learning. This research also proposes a fully immersive embodied interaction metaphor (IEIVR) for learning computational concepts as a future direction, and presents the challenges faced in implementing the IEIVR metaphor due to extended periods of immersion. Results from the conducted studies help formulate guidelines for virtual reality and education researchers working in STEM education and training, and for educators and curriculum developers seeking to improve student engagement in the STEM fields.
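
    The empirical comparison of learning outcomes across the DVR, IVR, and IEVR conditions suggests a standard between-groups analysis. As a minimal sketch only, assuming a between-subjects design and entirely hypothetical retention scores (the study's actual measures and data are not reproduced here), such a comparison could be run as:

        # Illustrative only: one-way ANOVA over hypothetical post-test
        # retention scores (0-100) for the three study conditions.
        from scipy import stats

        dvr_scores = [62, 58, 71, 65, 60, 68]    # desktop VR (hypothetical)
        ivr_scores = [74, 70, 79, 72, 77, 75]    # immersive VR (hypothetical)
        ievr_scores = [81, 78, 85, 80, 83, 79]   # immersive embodied VR (hypothetical)

        f_stat, p_value = stats.f_oneway(dvr_scores, ivr_scores, ievr_scores)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
        # A small p-value indicates that mean retention differs across
        # conditions; pairwise follow-up tests would locate the differences.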

    un(distance)

    What does it mean to be together apart? How can we create and share ideas from an (un)distance? Please join us in this experimental panel in a talk show format. The panel will be moderated in channels: a Zoom moderator (Anna Nacher and Annie Abrahams), a Framapad moderator (Deena Larsen), and a Twitter moderator (Johannah Rodgers). Panel participants Eugenio Tisselli, Kirill Azernyy, Renee Carmichael, and Roderick Coover will weave their experiences and works into a panoply of insights into what it means to practice undistancing, or b(e/r)ing us together to share ideas, potential collaborations, and partnerships over various (di)stances and (di)scourse:
    Eugenio: (un)distance to the material sources of electronic technologies
    Renee: (un)distancing between the function and the feeling
    Kirill: (un)distance to artifacts
    Anna: inhabiting various degrees of (un)distancing
    Deena: (un)distance to collaborate with those who cannot be present physically (disability, travel, funding)
    Annie: (un)distance as a life-long practice
    This panel was originally envisioned as an experiment to provide a talk-show-format discussion between online participants and in-person participants in Orlando. As the ELO conference moves online, this Undistance panel will experiment with the potential for completely online networking and exchanging of ideas at this and future ELO conferences. As this discussion will center on long-distance writing practices, and will self-reflexively discuss how effective online community discussion works, this experimental venue will demonstrate that more in-depth online exchanges are possible. ELO members will engage online in an exchange about their practice, weaving an array of video extracts and demos together in a real-time e-lit fabric that also includes unexpected interruptions, time lapses, and glitches as part of the expected process. The online audience will participate via videoconferencing (Zoom or the conference's online venue), hashtag discussion via live tweeting (#ELO #UnDistance), and collaborative annotating via Framapad, thus using open-access communication tools and incorporating as many live online channels as possible. Panelists will employ short provocations to the other panelists (ranging from scholarly observations to theoretical manifestos to pointed questions to silence) in an interrupt/collaborate collage of ideas. The final 15 minutes of the panel will be transferred to an open Framapad where panelists and audience will continue the discussion.
    Before: The Framapad will be open before the panel with a series of questions on distance collaborations; please add your insights.
    During: The panel will be moderated during the videoconferencing portion in channels (Zoom, Twitter, YouTube, and Framapad), concentrating on audience questions during the last portion of the panel.
    After: We will transcribe and record the Zoom session and leave the Framapad open for afterthoughts and further discussion for a week at the end of the conference. The Framapad will then be archived as a seed crystal for articles, scholarly observations, and a historic record of thoughts on distance collaboration.
    Archives of the Framapad are available here: https://annuel2.framapad.org/p/(un)distance/timeslider https://annuel2.framapad.org/p/(un)distanceBIS/timeslider
    The PDFs can be downloaded below.

    Wearable performance

    This is the post-print version of the article. The official published version can be accessed from the link below. Copyright @ 2009 Taylor & Francis.
    Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology, and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or for various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab, which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies, and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from the DAP-Lab's work with telepresence and distributed media addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.

    Evaluating Effects of Character Appearance on Ownership and Learning in Virtual Applications

    Virtual applications are now a dominant commercial and social platform: sixty-seven percent of households own a gaming device, and eighty-one percent of the United States population has a social media profile. Virtual reality appears to be the next technological frontier to take over mainstream markets. New, low-cost devices for virtual reality or mixed reality such as the Oculus Rift, Sony's PlayStation VR, or Samsung's Gear VR are already available or have been announced, and might even outperform previous high-cost systems. With the prevalence of this technology, it is important to know how it influences us. One element that has remained popular throughout the evolution of virtual applications is characters. How does the appearance of characters affect us in virtual applications and virtual reality? Towards understanding these effects, this research presents findings on what happens when character model appearance is altered in an educational application and in self-representative avatars. Results from our experiments show that allowing character customization in educational software leads to higher learning outcomes for participants. We also find that, when controlling self-avatars, some participants can feel that they own any virtual hand model given to them in virtual reality, and that participants generally feel the strongest ownership of virtual hands that appear human-like. Finally, we find that participants experience stronger feelings of ownership and realism when they control virtual hands directly rather than with a hand-held device, and that the virtual reality task must first be considered to determine which modality and hand size are most applicable. These results contribute to knowledge of how best to create characters for users in virtual applications and environments.

    The sentiment of a virtual rock concert

    We created a virtual reality version of a 1983 performance by Dire Straits, a highly complex scenario consisting of both the virtual band performance and the appearance and behaviour of the virtual audience surrounding the participants. Our goal was to understand the responses of participants, and to learn how this type of scenario might be improved for later reconstructions of other concerts. To understand the responses of participants we carried out two studies which used sentiment analysis of texts written by the participants. Study 1 (n = 25) (Beacco et al. in IEEE Virtual Reality: 538–545, 2021) had the unexpected finding that negative sentiment was caused by the virtual audience, where e.g. some participants were fearful of being harassed by audience members. In Study 2 (n = 26), notwithstanding some changes, the audience again led to negative sentiment, e.g. a feeling of being stared at. For Study 2 we compared sentiment with questionnaire scores, finding that the illusion of being at the concert was associated with positive sentiment for males but negative for females. Overall, we found sentiment was dominated by responses to the audience rather than the band. Participants had been placed in an unusual situation, being alone at a concert, surrounded by strangers, who seemed to pose a social threat for some of them. We relate our findings to the concept of Plausibility, the illusion that events and situations in the VR are really happening. The results indicate high Plausibility, since the negative sentiment, for example in response to being stared at, only makes sense if the events are experienced as actually happening. We conclude with the need for co-design of VR scenarios, and the use of sentiment analysis in this process, rather than sole reliance on concepts proposed by researchers, typically expressed through questionnaires, which may not reflect the experiences of participants.
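
    The method named in the abstract, sentiment analysis of participants' free-text responses, can be illustrated with a minimal sketch. The studies' actual pipeline is not specified here; this example assumes NLTK's VADER analyser as one common off-the-shelf choice, applied to hypothetical participant texts:

        # Minimal sketch: scoring sentiment of participant free-text
        # responses with NLTK's VADER analyser (an assumption; the
        # studies' actual tooling is not stated in this abstract).
        import nltk
        from nltk.sentiment.vader import SentimentIntensityAnalyzer

        nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
        sia = SentimentIntensityAnalyzer()

        responses = [  # hypothetical participant texts
            "The band felt real and the music was amazing.",
            "I felt like the audience members were staring at me.",
        ]
        for text in responses:
            scores = sia.polarity_scores(text)  # neg/neu/pos + compound in [-1, 1]
            print(f"{scores['compound']:+.2f}  {text}")

    Aggregating the compound scores per participant or per condition would then support the kind of comparison with questionnaire measures described above.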

    Virtually Sensuous (Geographies): Towards a Strategy for Archiving Multi-user Experiential and Participatory Installations

    This paper explores potential strategies for the audio-visual documentation of a multi-user choreographic digital installation entitled Sensuous Geographies using VR technologies. The installation was interactive, fully immersive, and participatory, with the general public initiating the details of the installation's sonic and visual worlds. At the time of the making of Sensuous Geographies, the means of documenting participatory installations in action was limited to video documentation and photographs, which represent a third-person perspective. This article suggests that new forms of technology provide an opportunity to archive interactive choreographic installations in such a way that the choreographic forms and embodied experience they generate can be re-presented in audiovisual form to historians and audiences of the future. This article expands on a conference presentation of the same title given at the DocPerform2 Symposium, City, University of London, in November 2017.

    Creative Machine

    Curators: William Latham, Atau Tanaka and Frederic Fol Leymarie.
    A major exhibition exploring the twilight world of human/machine creativity, including installations, video and computer art, Artificial Intelligence, robotics and apps by leading artists from Goldsmiths and international artists by invitation. The vision for organising the Creative Machine exhibition is to show exciting works by key international artists, Goldsmiths staff, and selected students who use original software and hardware development in the creative production of their work. The range of work on show, which could broadly be termed Computer Art, includes mechanical drawing devices, kinetic sculpture driven by fuzzy logic, images produced using machine learning, simulated cellular growth forms, and self-generating works using automated aesthetics, VR, 3D printing, and social telephony networks.
    Traditionally, Computer Art has held a maverick position on the edge of mainstream contemporary culture, with its origins in Russian Constructivist art, biological systems, "geeky" software conferences, rave/techno music, and indie computer games. These artists have defined their own channels for exhibiting their work, organised conferences, and at times been entrepreneurial in building collaborations with industry at both corporate and startup level (with the early computer artists in the 1970s and 1980s needing to work with computer corporations to get access to computers). Alongside this, interactive media art drew upon McLuhan's notion of technology as an extension of the human to create participatory, interactive artworks using the novel interface technology developed since the 1980s. However, with new techniques such as 3D printing, the massive spread of sophisticated sensors in consumer devices like smartphones, and the use of robotics by artists, digital art would appear to have an opportunity to come more to the fore in public consciousness. This exhibition is timely in that it coincides with an apparent wider growth of public interest in digital art, as shown by the Digital Revolution exhibition at the Barbican, London, and the recent emergence of commercial galleries such as Bitforms in New York and Carroll/Fletcher in London, which acquire and show technology-based art.
    The Creative Machine exhibition is the first event to make use of Goldsmiths' new Sonics Immersive Media Lab (SIML) Chamber. This advanced surround audiovisual projection space is a key part of the St James-Hatcham refurbishment. The facility was funded by capital funding from the Engineering & Physical Sciences Research Council (EPSRC) and Goldsmiths, as well as research funding from the European Research Council (ERC), connected respectively to the Intelligent Games/Game Intelligence (IGGI) Centre for Doctoral Training and Atau Tanaka's MetaGesture Music (MGM) ERC grant. The space was built by the SONICS, a cross-departmental research special interest group at Goldsmiths that brings together the departments of Computing, Music, Media & Communications, Sociology, Visual Cultures, and Cultural Studies. It was designed in consultation with the San Francisco-based curator Naut Humon to be compatible with the Cinechamber system there. During Creative Machine, we shall see, in the SIML space, multiscreen screenings of work by Yoichiro Kawaguchi, Naoko Tosa, and Vesna Petresin, as well as a new immersive media work by IGGI researcher Memo Akten.

    From rituals to magic: Interactive art and HCI of the past, present, and future

    The connection between art and technology is much tighter than is commonly recognized. The emergence of aesthetic computing in the early 2000s has brought renewed focus to this relationship. In this article, we articulate how art and Human–Computer Interaction (HCI) are not only compatible but essential to each other's advancement in this era, by briefly addressing interconnected components of both areas: interaction, creativity, embodiment, affect, and presence. After briefly introducing the history of interactive art, we discuss how art and HCI can contribute to one another, illustrating with contemporary examples of art in immersive environments, robotic art, and machine intelligence in art. We then identify challenges and opportunities for collaborative efforts between art and HCI. Finally, we reiterate important implications and pose future directions. This article is intended as a catalyst for discussion of the mutual benefits of the art and HCI communities working together. It also aims to provide artists and researchers in this domain with suggestions about where to go next.