
    Virtual reality meets diabetes

    This is the final version. Available from SAGE Publications via the DOI in this record. Background. This article provides a detailed summary of virtual reality (VR) and augmented reality (AR) applications in diabetes. The purpose of this comparative review is to identify application areas and directions, and to provide a foundation for future virtual reality tools in diabetes. Method. Following a thorough review of the literature on virtual reality for diabetes using multiple databases, the features and benefits of each VR diabetes application are compared and discussed. The weaknesses of existing VR applications are discussed and their strengths identified so that these can be carried forward. A novel virtual reality diabetes tool prototype is also developed and presented. Results. This research identifies three major categories where VR is being used in diabetes: education, prevention and treatment. Within diabetes education, there are three target groups: clinicians, adults with diabetes and children with diabetes. Both VR and AR have shown benefits in areas of Type 1 and Type 2 diabetes. Conclusions. Virtual reality and augmented reality in diabetes have demonstrated potential to enhance the training of diabetologists and to enhance education, prevention and treatment for adults and children with Type 1 or Type 2 diabetes. Future research can continue to build on virtual and augmented reality diabetes applications by integrating wide stakeholder inputs and diverse digital platforms. Several areas of VR in diabetes are at an early stage, offering advantages and opportunities. Further VR diabetes innovations are encouraged to enhance the training, management and treatment of diabetes. Funding: Royal Academy of Engineering; National Institute for Health Research; Exeter Center of Excellence in Diabetes (ExCEeD).

    Second-Person Surveillance: Politics of User Implication in Digital Documentaries

    This dissertation analyzes digital documentaries that utilize second-person address and roleplay to make users feel implicated in contemporary refugee crises, mass incarceration in the U.S., and state and corporate surveillances. Digital documentaries are seemingly more interactive and participatory than linear film and video documentary as they are comprised of a variety of auditory, visual, and written media, utilize networked technologies, and turn the documentary audience into a documentary user. I draw on scholarship from documentary, game, new media, and surveillance studies to analyze how second-person address in digital documentaries is configured through user positioning and direct address within the works themselves, in how organizations and creators frame their productions, and in how users and players respond in reviews, discussion forums, and Let’s Plays. I build on Michael Rothberg’s theorization of the implicated subject to explore how these digital documentaries bring the user into complicated relationality with national and international crises. Visually and experientially implying that users bear responsibility to the subjects and subject matter, these works can, on the one hand, replicate modes of liberal empathy for suffering, distant “others” and, on the other, simulate one’s own surveillant modes of observation or behavior to mirror it back to users and open up one’s offline thoughts and actions as a site of critique. This dissertation charts how second-person address shapes and limits the political potentialities of documentary projects and connects them to a lineage of direct address from educational and propaganda films, museum exhibits, and serious games. 
    By centralizing the user’s individual experience, the interventions that second-person digital documentaries can make into social discourse change from public, institution-based education to more privatized forms of sentimental education geared toward personal edification and self-realization. I argue that, unless tied to larger initiatives or movements, digital documentaries reaffirm a neoliberal politics of individual self-regulation and governance instead of public education or collective, social intervention. Chapter one focuses on 360-degree virtual reality (VR) documentaries that utilize the feeling of presence to position users as if among refugees and as witnesses to refugee experiences in camps outside of Europe and various dwellings in European cities. My analysis of Clouds Over Sidra (Gabo Arora and Chris Milk 2015) and The Displaced (Imraan Ismail and Ben C. Solomon 2015) shows how these VR documentaries utilize observational realism to make their representations of already empathetic refugees believable and immersive. The empathetic refugee, often young, vulnerable, depoliticized and dehistoricized, is a well-known trope in other forms of humanitarian media that continues into VR documentaries. Forced to Flee (Zahra Rasool 2017), I am Rohingya (Zahra Rasool 2017), So Leben Flüchtlinge in Berlin (Berliner Morgenpost 2017), and Limbo: A Virtual Experience of Waiting for Asylum (Shehani Fernando 2017) disrupt easy immersion into realistic-looking VR experiences of stereotyped representations and user identifications and, instead, can reflect back the user’s political inaction and surveillant modes of looking. Chapter two analyzes web- and social media messenger-based documentaries that position users as outsiders to U.S. mass incarceration.
Users are noir-style co-investigators into the crime of the prison-industrial complex in Fremont County, Colorado in Prison Valley: The Prison Industry (David Dufresne and Philippe Brault 2009) and co-riders on a bus transporting prison inmates’ loved ones for visitations to correctional facilities in Upstate New York in A Temporary Contact (Nirit Peled and Sara Kolster 2017). Both projects construct an experience of carceral constraint for users to reinscribe seeming “outside” places, people, and experiences as within the continuation of the racialized and classed politics of state control through mass incarceration. These projects utilize interfaces that create a tension between replicating an exploitative hierarchy between non-incarcerated users and those subject to mass incarceration while also de-immersing users in these experiences to mirror back the user’s supposed distance from this mode of state regulation. Chapter three investigates a type of digital game I term dataveillance simulation games, which position users as surveillance agents in ambiguously dystopian nation-states and force users to use their own critical thinking and judgment to construct the criminality of state-sanctioned surveillance targets. Project Perfect Citizen (Bad Cop Studios 2016), Orwell: Keeping an Eye on You (Osmotic Studios 2016), and Papers, Please (Lucas Pope 2013) all create a dual empathy: players empathize with bureaucratic surveillance agents while empathizing with surveillance targets whose emails, text messages, documents, and social media profiles reveal them to be “normal” people. I argue that while these games show criminality to be a construct, they also utilize a racialized fear of the loss of one’s individual privacy to make players feel like they too could be surveillance targets. Chapter four examines personalized digital documentaries that turn users and their data into the subject matter. 
    Do Not Track (Brett Gaylor 2015), A Week with Wanda (Joe Derry Hall 2019), Stealing Ur Feelings (Noah Levenson 2019), Alfred Premium (Joël Ronez, Pierre Corbinais, and Émilie F. Grenier 2019), How They Watch You (Nick Briz 2021), and Fairly Intelligent™ (A.M. Darke 2021) track, monitor, and confront users with their own online behavior to reflect back a corporate surveillance that collects, analyzes, and exploits user data for profit. These digital documentaries utilize emotional fear- and humor-based appeals to persuade users that these technologies are controlling them, shaping their desires and needs, and dehumanizing them through algorithmic surveillance.

    The Neural Correlates of Bodily Self-Consciousness in Virtual Worlds

    Bodily Self-Consciousness (BSC) is the cumulative integration of multiple sensory modalities that contribute to our sense of self. These sensory modalities, which include proprioception, the vestibular sense, vision, and touch, are updated dynamically to map the specific, local representation of ourselves in space. BSC is closely associated with both bottom-up and top-down aspects of consciousness. Recently, virtual- and augmented-reality technologies have been used to explore perceptions of BSC. These recent achievements are attributable partly to advances in modern technology and partly to the rise of the virtual and augmented reality markets. Virtual reality head-mounted displays can alter aspects of perception and consciousness as never before. Consequently, many strides have been made in BSC research. Previous research suggests that BSC results from the perceptions of embodiment (i.e., the feeling of ownership towards a real or virtual extremity) and presence (i.e., feeling physically located in a real or virtual space). Though physiological mechanisms serving embodiment and presence in the real world have been proposed by others, how these perceptual experiences interact and whether they can be dissociated is still poorly understood. Additionally, less is known about the physiological mechanisms underlying the perception of presence and embodiment in virtual environments. Therefore, five experiments were conducted to examine the perceptions of embodiment and presence in virtual environments and to determine which physiological mechanisms support these perceptions. These studies compared performance between normal and altered embodiment/presence conditions. Results from a novel experimental paradigm using virtual reality (Experiment 4) are consistent with studies in the literature reporting that synchronous sensorimotor feedback corresponded with a stronger embodiment illusion.
    In Experiment 4, participants recorded significantly faster reaction times and better accuracy in correlated feedback conditions compared to asynchronous feedback conditions. Reaction times were also significantly faster, and accuracy was higher, for conditions in which participants experienced the game from a first- versus a third-person perspective. Functional magnetic resonance imaging (fMRI) data from Experiment 5 revealed that many frontoparietal networks, including premotor cortex (PMC) and intraparietal sulcus (IPS), contribute to the perception of embodiment. fMRI data also revealed that activity in temporoparietal networks, including the temporoparietal junction and right precuneus, corresponded with manipulations thought to affect the perception of presence. Furthermore, the data suggest that the networks associated with embodiment and presence overlap, and that the brain areas supporting the perception of presence may be predicated upon those supporting embodiment. The results of these experiments offer further clues into the psychophysiological mechanisms underlying BSC.

    WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering). Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, the commercially successful smartwatches worn on the wrist drive market growth by sharing the role of smartphones and supporting health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas in video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because of the inevitably limited space for input and output imposed by form factors specialized for fitting body parts. To complement the constrained interaction experience, many wearable devices still rely on other large form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices for interaction can constrain the viability of wearable devices in many usage scenarios by tethering users' hands to the physical devices. This thesis argues that developing novel human-computer interaction techniques for the specialized wearable form factors is vital for wearables to be reliable standalone products. It seeks to address the issue of constrained interaction experience with novel interaction techniques by exploring finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, input techniques using the fingers are prevalent on many large form factor devices (e.g., touchscreen or physical keyboard) due to their fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions.
    This enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices can create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, this thesis explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges across variations in fingers, finger regions, and poses, due to the unique input context in which the touching hand approaches a small and fixed touchscreen with a limited range of angles. Then, finger region-aware systems that recognize the flat and the side of the finger were constructed based on the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input. In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which enables increases in the expressiveness of touch input techniques.
    In addition, this thesis supports the general claim with a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, this thesis investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, the inter-finger stroke relationship, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touch can be achieved at a total of 8 key locations with the built-in hand tracking system in a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and the novel interaction techniques in various wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with its contributions, design considerations, and the scope of future research, for future researchers and developers seeking to implement robust finger-based interaction systems on various types of wearable devices.

    Predicting Adoption of Virtual Reality Technology by College of Agricultural Faculty in Association of American Universities

    The use of virtual platforms is a phenomenon that has expanded throughout the COVID-19 pandemic. Modern technologies such as drones, the Oculus Rift, GoPro cameras, and PlayStation VR are popular versions and, in most cases, are used primarily for recreational purposes. Recreational use provides many immersive experiences, but these technologies can also provide meaningful educational opportunities in agriculture. Agricultural education coupled with virtual reality technology has yet to be fully explored, as indicated by the lack of published literature on virtual reality adoption in agriculture. This research focuses on three Association of American Universities (AAU) Agricultural and Life Sciences departments across the United States. A Qualtrics survey was administered to assess stakeholder perceptions, and the results were analyzed. The findings reveal the perceptions of virtual reality held by AAU College of Agricultural and Life Sciences faculty. Faculty members' behavioral intentions to adopt and use virtual reality were low, which provides a foundation for future research. Mean scores across constructs enabled the researchers to conclude that faculty need to become aware of the educational value provided by virtual reality technology. Through the provision of new, innovative learning opportunities, we can strive to solve future agricultural problems, such as the expected food crisis of 2050.

    Children's encounters with urban woodlands, digital technologies and materialities

    This research considers children’s encounters and learning with ‘natures’, places and digital technologies. It is situated within an urban woodland in Birmingham and is a collaborative project working with two primary schools participating in six months of walking and filming events, website creation and creative workshops. Drawing on embodied, multi-sensory and socio-material approaches to children’s geographies and interdisciplinary environmental education research, it works with the ‘technique’ of research-creation to explore children’s learning with ‘natures’ and digital platforms such as YouTube. It also examines creative responses to the more-than-human, including water, weather, soils, trees, mud, bricks and minerals. Through the inclusion of GoPro wearable technology as part of the research assemblage, the project draws on notions of the entanglement of the digital and physical in techno-naturecultures (following Haraway’s naturecultures and Latour’s common worlds), arguing for the productive inclusion of the digital within environmental education practices.

    New interactive interface design for STEM museums: a case study in VR immersive technology

    Novel technologies are used to develop new museum exhibits that aim to attract visitors’ attention. However, using new technology is not always successful, perhaps because the design of a new exhibit was inappropriate or because users were unfamiliar with interacting with a new device. As a result, choosing appropriate technology to create a unique interactive display is critical, and following best practices for its use helps designers reduce failures. This research uses virtual reality (VR) immersive technology as a case study to explore how to design a new interactive exhibit in science, technology, engineering and mathematics (STEM) museums. VR has seen increased use in Thailand’s museums, but people are unfamiliar with it, and few use it daily. It also raises health concerns such as motion sickness, and the virtual reality head-mounted display (VR HMD) restricts social interaction, which is essential for museum visitors. This research focuses on improving how VR is deployed in STEM museums by proposing a framework for designing a new VR exhibit that supports social interaction. The research question is: how do we create a new interactive display using VR immersive technology while supporting visitor social interaction? The investigation uses mixed methods to construct the proposed framework, including a theoretical review, a museum observational study, and an experimental study. An in-the-wild study and a workshop were conducted to evaluate the proposed framework. The suggested framework provides guidelines for designing a new VR exhibit and has two main parts. The first part covers factors for assessing whether VR technology is suitable for creating a new exhibit. The second part sets out the essential components for designing a new VR exhibit: Content Design, Action Design, Social Interaction Design, System Design, and Safety and Health. Various kinds of studies were conducted to answer the research question.
    First, a museum observational study led to an understanding of the characteristics of interactive exhibits in STEM museums, the patterns of social interaction, the range of immersive technology that museums use and the practice of using VR technology in STEM museums. Next, a study of alternative designs for an interactive exhibit investigated the effect of tangible, gesture and VR technologies on the user experience; it determined the factors that differentiate the user experience and suggested six aspects to consider when choosing technology. Third, a study of social interaction design in VR for museums explored methods of connecting players (single player, symmetric connection between two VR HMDs, and asymmetric connection between a VR HMD and a PC) to provide social interaction while playing the VR exhibit, and investigated social features and social mechanics for visitors to communicate and exchange knowledge. It found that the symmetric connection provides better social interaction than the others, although the asymmetric link is also a way for visitors to exchange knowledge. The study recommends mixing symmetric and asymmetric connections when deploying VR exhibits in a museum; the in-the-wild research confirmed this and validated the framework, indicating that it helped staff manage the VR exhibit and provided a co-presence and co-player experience. Fourth, a study of the content design of a display in the virtual environment examined the effect of 2D versus 3D content on visitors' learning and memory. It showed that 2D versus 3D content design did not influence how well visitors gained knowledge or remembered the exhibit’s story; however, the 3D view offers more immersion and emotion than the 2D view. The research therefore proposes using 3D content design to evoke a player’s emotion: content for a VR exhibit should deliver an experience rather than text-based learning.
    Furthermore, the qualitative feedback from each study provided insight into designing the user experience. Evaluation of the proposed framework is the last part of this research. A study in the wild was conducted to validate the proposed framework in museums. Two VR exhibits were adjusted with features that matched the framework’s suggested components and were deployed in the museum to gather visitors' feedback. The exhibits received positive feedback, and visitors approved of using VR technology in the museum. User feedback from a workshop evaluating the helpfulness of the framework showed that the framework's components are appropriate and that the framework is practical when designing a new VR exhibit, particularly for people unfamiliar with VR technology. In addition, the proposed framework may be applied to emerging technologies to create novel exhibits.

    Co-Creating with the Senses: Towards an Embodiment Grammar for Conceptualising Virtual Reality (VR) Narrative Design

    This creative practice thesis comprises two components: a dissertation titled Co-Creating with the Senses: Towards an Embodiment Grammar for Conceptualising Virtual Reality (VR) Narrative Design, and a creative work, The Recluse, a fictional VR script written in the Maria Vargas Immersive Play template, available through Final Draft. The advent of publicly available virtual reality (VR) technologies has led to the emergence of a new genre of storytelling, henceforth referred to as ‘VR narratives’. There has therefore been a need to articulate its defining grammar and to contribute insights born of artistic experimentation to a scholarly field which until recently was dominated by scientific points of view. Employing the somaesthetics approach outlined by researcher Kristina Höök, the dissertation draws on a qualitative study of 10 VR narrative works to propose an embodiment grammar through which the art form may be conceptualised. The study’s findings, a group of eight embodied states organised into a framework, call for a relinquishing of authorial control in favour of framing affective potential, echoing a Deleuzian concept of the assemblage. In particular, the framework draws attention to the way that VR’s deeper affective dimensions may be elucidated by framing co-creation through the medium’s distinct sensory possibilities. As interest gathers in a future metaverse, the insights raised by the study are significant, with potential applications in a range of affective design contexts. The Recluse is my original contribution to this emerging art form and a case study through which to interrogate the framework’s findings. A mystery with supernatural elements, the VR script aims to communicate an experience that transports the participant to the world of Alma Cohen, a famous artist turned recluse, where they are invited to experience the strange occurrences in Alma’s life through their own embodied actions.
    The VR script explores the potential for intimate encounters with virtual characters and the sensory, co-creational and affective possibilities that arise through these dynamics. Following the framework, it highlights the way that more open structuring approaches are required to access the medium’s deeper affective possibilities, as well as the present technological constraints on achieving this.

    Malay sound arts: Reimagining biophony and geophony materials. Commentary on original composition portfolio 2019-2023

    This PhD takes the research theme of Nada Bumi, or Voice of the Earth: exploring and accentuating hidden Malaysian biophonic and geophonic materials to express self-cultural identity and narrative through sound arts practice. The portfolio and accompanying commentary present eleven sound-art works ranging from instrumental electroacoustic music to a Web Audio API-based sound installation. The main idea of this portfolio research is to explore the association of the folklore, tales, myths, legends and art-cultural narratives of the Malay race and the ancestors of the Malays (proto-Malay) with selected hidden and unheard Malaysian natural soundscapes, in producing new sound art works. I therefore proposed two major compositional themes, each comprising several works: Miroirs of Malay Rebab (MiMaR) and Seed of Life (SoL). The works in Miroirs of Malay Rebab reimagine selected unheard biophonic and geophonic materials as mirrors of several Malay performing art-cultural narratives and their stories, such as Makyung theatre dance, Malay Gamelan music dance and Ulek Mayang dance, which I was exposed to during my undergraduate music studies in Malaysia. The works in Seed of Life (SoL) take a similar approach but focus more on local Malay and proto-Malay folklore, tales, legends and myths associated with my childhood experience. Furthermore, as I delved into the conceptual and compositional aspects of creating the Miroirs of Malay Rebab (MiMaR) and Seed of Life (SoL) sets, I had the privilege of engaging in an enriching journey of (self-)exploration through the creation of sound art within the vibrant Bristol soundscape, with support from the local sound art community. This experience was part of my involvement in the Hidden Bristol Soundwalks project, which provided a unique platform for my creative endeavours. I have decided to include this project in the portfolio, as it shares a similar compositional approach with Seed of Life (SoL).
    Both major cycles, Miroirs of Malay Rebab (MiMaR) and Seed of Life (SoL), include Western classical instrumentation with electronics, fixed media, and interactive media. This portfolio was composed and developed at Studio One, Department of Music in the Faculty of Arts, University of Bristol; the Bristol Interaction Group (B.I.G.) Lab in the Faculty of Engineering, University of Bristol; and my home studios in Clevedon, UK, from October 2019 until September 2022. The portfolio consists of scores, studio audio production recordings, and several live performance recordings. The commentary comprises a set of philosophical considerations about my compositions and my intent for creation based on the Nada Bumi theme and subthemes. Further chapters are dedicated to compositional techniques, related traditions and piece-specific documentation. The portfolio is supplied as a set of digital media containing PDF files of musical scores in notation, associated software or media components of the works, recordings of the studio-based music, and recordings of several live public performances made in mid-2022 after the period of COVID-19 lockdowns.