
    MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research

    Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially with head-mounted displays, has not been fully explored yet. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case. A multi-purpose headset to exclusively augment peripheral vision did not exist until now. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset for conducting peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system against established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power through the combination of modular building blocks. (Comment: Accepted IEEE VR 2023 conference paper.)
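
    As a rough illustration of how such a modular headset might be addressed in software, the sketch below routes a directional cue to the display module mounted closest to the cue's direction. The class names, module layout, and angles are our own assumptions for illustration; this is not MoPeDT's actual API.

```python
# Hypothetical sketch, not MoPeDT's real interface: route a cue to the
# peripheral display module whose mounting angle best matches the cue.
from dataclasses import dataclass

@dataclass
class DisplayModule:
    name: str           # assumed label for the module
    azimuth_deg: float  # assumed mounting angle relative to gaze (0 = ahead)

class PeripheralHeadset:
    def __init__(self, modules):
        self.modules = list(modules)

    def module_nearest(self, target_azimuth_deg: float) -> DisplayModule:
        """Pick the module whose placement best matches a cue direction."""
        return min(self.modules,
                   key=lambda m: abs(m.azimuth_deg - target_azimuth_deg))

headset = PeripheralHeadset([
    DisplayModule("far-left", -70.0), DisplayModule("left", -40.0),
    DisplayModule("right", 40.0), DisplayModule("far-right", 70.0),
])
print(headset.module_nearest(-60.0).name)  # -> "far-left"
```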

    An Evaluation of Radar Metaphors for Providing Directional Stimuli Using Non-Verbal Sound

    We compared four audio-based radar metaphors for providing directional stimuli to users of AR headsets. The metaphors are clock face, compass, white noise, and scale. Each metaphor, or method, signals the movement of a virtual arm in a radar sweep. In a user study, statistically significant differences were observed for accuracy and response time. Beat-based methods (clock face, compass) elicited responses biased to the left of the stimulus location, and non-beat-based methods (white noise, scale) produced responses biased to the right of the stimulus location. The beat methods were more accurate than the non-beat methods; however, the non-beat methods elicited quicker responses. We also discuss how response accuracy varies along the radar sweep between methods. These observations contribute design insights for non-verbal, non-visual directional prompting.
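
    To make the clock-face metaphor concrete, the sketch below maps a stimulus direction to a clock position, which a beat-based method could then signal by counting beats. This is our hedged reading of the metaphor, not the study's implementation; the function name and the 0-degrees-ahead convention are assumptions.

```python
# Illustrative sketch of the clock-face metaphor (assumed convention:
# azimuth 0 = straight ahead, degrees increase clockwise).
def azimuth_to_clock_hour(azimuth_deg: float) -> int:
    """Map an azimuth in degrees to the nearest clock hour (1-12)."""
    hour = round((azimuth_deg % 360) / 30) % 12
    return 12 if hour == 0 else hour

# A beat-based cue could count one beat per hour mark as the virtual
# radar arm sweeps, so the beat count encodes the direction.
assert azimuth_to_clock_hour(0) == 12    # straight ahead -> 12 o'clock
assert azimuth_to_clock_hour(90) == 3    # directly right -> 3 o'clock
assert azimuth_to_clock_hour(270) == 9   # directly left  -> 9 o'clock
```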

    PokerFace Mask: Exploring Augmenting Masks with Captions through an Interactive, Mixed-Reality Prototype

    The COVID-19 pandemic in early 2020 made masks a daily wearable as personal protective equipment and a public health precaution. Traditional mask designs obstruct the face and muffle the voice, which can make communication especially difficult for users who are deaf or hard of hearing (DHH). PokerFace uses a commodity smartphone and recycled materials to display a live stream of the user’s mouth and nose on the mask surface. This maintains the safety precautions afforded by the mask while mitigating the obfuscation of traditional mask designs. To compare PokerFace’s ability to facilitate communication with that of traditional masks, we conducted a user study with 18 participants, who played a collaborative communication game similar to charades. Participants performed better at this collaborative communication task with our prototype than with traditional masks, and even non-DHH participants became aware of the importance of lip-reading and facial cues in communication through study participation.
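
    The core mechanism, cropping the lower half of a detected face from a live camera feed so it can be shown on the mask-mounted display, can be sketched as follows. This is a minimal approximation using OpenCV's stock face detector, not the PokerFace implementation; window names and detection parameters are assumptions.

```python
# Minimal sketch of the general approach (not PokerFace's code): detect a
# face in each camera frame and keep the lower half (nose and mouth),
# which would be shown on the display mounted on the mask surface.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # commodity smartphone or webcam feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:  # use the first detected face, if any
        lower_face = frame[y + h // 2 : y + h, x : x + w]
        cv2.imshow("mask display (sketch)", lower_face)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```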

    Sensory Communication

    Contains table of contents for Section 2, an introduction, and reports on fourteen research projects.
    National Institutes of Health Grant R01 DC00117
    National Institutes of Health Grant R01 DC02032
    National Institutes of Health/National Institute on Deafness and Other Communication Disorders Grant R01 DC00126
    National Institutes of Health Grant R01 DC00270
    National Institutes of Health Contract N01 DC52107
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-95-K-0014
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-96-K-0003
    U.S. Navy - Office of Naval Research Grant N00014-96-1-0379
    U.S. Air Force - Office of Scientific Research Grant F49620-95-1-0176
    U.S. Air Force - Office of Scientific Research Grant F49620-96-1-0202
    U.S. Navy - Office of Naval Research Subcontract 40167
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-96-K-0002
    National Institutes of Health Grant R01-NS33778
    U.S. Navy - Office of Naval Research Grant N00014-92-J-184

    Mixed Reality Interfaces for Augmented Text and Speech

    While technology plays a vital role in human communication, many significant challenges remain when using it in everyday life. Modern computing technologies, such as smartphones, offer convenient and swift access to information, facilitating tasks like reading documents or communicating with friends. However, these tools frequently lack adaptability, become distracting, consume excessive time, and impede interactions with people and contextual information. Furthermore, they often require numerous steps and significant time investment to gather pertinent information. We want to explore an efficient process of contextual information gathering for mixed reality (MR) interfaces that provide information directly in the user’s view. This approach allows for a seamless and flexible transition between language and subsequent contextual references, without disrupting the flow of communication. 'Augmented Language' can be defined as the integration of language and communication with mixed reality to enhance, transform, or manipulate language-related aspects through various forms of linguistic augmentation (such as annotation/referencing, aiding social interactions, translation, localization, etc.). In this thesis, our broad objective is to explore mixed reality interfaces and their potential to enhance augmented language, particularly in the domains of speech and text. Our aim is to create interfaces that offer a more natural, generalizable, on-demand, and real-time experience of accessing contextually relevant information, and that provide adaptive interactions. To address this broader objective systematically, we break it down into two instances of augmented language: first, enhancing co-located, in-person conversations on the fly with embedded references; and second, enhancing digital and physical documents using MR to provide on-demand reading support in the form of different summarization techniques. To examine the effectiveness of these speech and text interfaces, we conducted two studies in which participants evaluated our system prototypes in different use cases. The exploratory usability study for the first exploration confirms that our system decreases distraction and friction in conversation compared to smartphone search, while providing highly useful and relevant information. For the second project, we conducted an exploratory design workshop to identify categories of document enhancements. We later conducted a user study with a mixed-reality prototype, from which we draw five broad themes discussing the benefits of MR document enhancement.

    Adult Learning Sign Language by combining video, interactivity and play

    One in every six people in the UK lives with hearing loss, either as a condition they were born with or a disorder acquired during their life. 900,000 people in the UK are severely or profoundly deaf, and according to a 2013 study by Action On Hearing Loss UK, only 17 percent of this population can use British Sign Language (BSL). That leaves a massive proportion of people with a hearing impairment who do not use sign language struggling in social interaction and suffering emotional distress, and an even larger proportion of hearing people who cannot communicate with members of the deaf community. This paper presents a theoretical framework for the design of interactive games to support learning BSL across the entire learning cycle: instruction, practice, and assessment (sketched below). It then describes the proposed design of a game based on this framework, aiming to close the communication gap between hearing people and people with a hearing impairment by providing a tool that facilitates BSL learning for an adult population. The paper concludes with the planning of a large-scale study and directions for further development of this educational resource.
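
    As a toy illustration of the learning cycle the framework covers, the sketch below steps a game session through instruction, practice, and assessment phases. It is our own minimal reading of the framework, not code from the paper; the phase names and example activities are assumptions.

```python
# Toy sketch (not from the paper): the instruction -> practice ->
# assessment cycle that a BSL learning game could step through per sign.
from enum import Enum, auto

class Phase(Enum):
    INSTRUCTION = auto()  # e.g., watch a video of the BSL sign
    PRACTICE = auto()     # e.g., playful interactive rehearsal
    ASSESSMENT = auto()   # e.g., test recognition or production

def next_phase(phase: Phase) -> Phase:
    order = [Phase.INSTRUCTION, Phase.PRACTICE, Phase.ASSESSMENT]
    return order[(order.index(phase) + 1) % len(order)]

phase = Phase.INSTRUCTION
for _ in range(3):
    print(phase.name)
    phase = next_phase(phase)  # INSTRUCTION -> PRACTICE -> ASSESSMENT -> ...
```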

    Rafigh: A Living Media System for Motivating Target Application Use for Children

    Digital living media systems combine living media such as plants, animals, and fungi with computational components. In this dissertation, I respond to the question of how digital living media systems can better motivate children to use target applications (i.e., learning and/or therapeutic applications). To address this question, I employed a participatory design approach in which I incorporated input from children, parents, speech-language pathologists, and teachers into the design of a new system. Rafigh is a digital embedded system that uses the growth of a living mushroom colony to provide positive reinforcement to children when they conduct target activities. The growth of the mushrooms is affected by the amount of water administered to them, which in turn corresponds to the time children spend on target applications. I used an iterative design process to develop and evaluate three Rafigh prototypes. The evaluations showed that the system must be robust and customizable, and should include compelling engagement mechanisms to keep the children interested. I evaluated Rafigh using two case studies conducted in participants' homes. In each case study, two siblings and their parent interacted with Rafigh over two weeks, and the parents identified a series of target applications that Rafigh should motivate the children to use. The study showed that Rafigh motivated the children to spend significantly more time on target applications during the intervention phase, and that it successfully engaged one out of two child participants in each case study, who showed signs of responsibility, empathy, and curiosity towards the living media. The study also showed that the majority of participants correctly described the relationship between using target applications and mushroom growth. Further, Rafigh encouraged more communication and collaboration between the participants. Rafigh's slow responsivity did not impact the engagement of one out of two child participants in each case study and might even have contributed to their investment in the project. Finally, Rafigh's presence as an ambient physical object allowed users to interact with it freely and as part of their home environment.
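
    The reinforcement loop described above, in which time spent on target applications determines how much water the mushroom colony receives, can be illustrated with a toy mapping. The dosing rate and cap below are invented for the sketch; the dissertation does not specify these values.

```python
# Toy sketch of Rafigh's core mapping (parameter values are invented):
# more time on target applications -> more water -> more visible growth.
ML_PER_MINUTE = 0.5   # assumed dosing rate
DAILY_CAP_ML = 60.0   # assumed cap so the colony is not overwatered

def water_dose_ml(target_app_minutes: float) -> float:
    """Convert tracked target-application time into a watering dose."""
    return min(target_app_minutes * ML_PER_MINUTE, DAILY_CAP_ML)

print(water_dose_ml(45))   # 22.5 mL for 45 minutes of target-app use
print(water_dose_ml(300))  # capped at 60.0 mL
```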