
    Interactive natural user interfaces

    For many years, science fiction entertainment has showcased holographic technology and futuristic user interfaces that have stimulated the world's imagination. Movies such as Star Wars and Minority Report portray characters interacting with free-floating 3D displays and manipulating virtual objects as though they were tangible. While these futuristic concepts are intriguing, it is difficult to find a commercial, interactive holographic video solution in an everyday electronics store. It should be noted that, as used in this work, the term holography refers to artificially created, free-floating objects, whereas the traditional term refers to the recording and reconstruction of 3D image data from 2D media. This research addresses the need for a feasible technological solution that allows users to work with projected, interactive, and touch-sensitive 3D virtual environments. It aims to construct an interactive holographic user interface system by consolidating existing commodity hardware and interaction algorithms. In addition, this work studies best design practices for human-centric factors related to 3D user interfaces. The problem of 3D user interfaces has been well researched. When portrayed in science fiction, futuristic user interfaces usually consist of a holographic display, interaction controls, and feedback mechanisms. In reality, holographic displays are usually realized with volumetric or multi-parallax technology. In this work, a novel holographic display is presented which leverages a mini-projector to produce a free-floating image on a fog-like surface. The holographic user interface system consists of a display component, which projects the free-floating image; a tracking component, which allows the user to interact with the 3D display via gestures; and a software component, which drives the complete hardware system. After examining this research, readers will be well informed on how to build an intuitive, eye-catching holographic user interface system for various application arenas
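
    To make the three-part architecture above concrete, the sketch below shows one way the components could be wired together; the class and method names are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of the display / tracking / software split described above.
# None of these names come from the thesis; they only illustrate the architecture.
from dataclasses import dataclass

@dataclass
class Gesture:
    kind: str   # e.g. "tap" or "swipe"
    x: float    # normalized horizontal position, 0..1
    y: float    # normalized vertical position, 0..1

class FogDisplay:
    """Display component: renders the scene and sends it to the mini-projector."""
    def render(self, scene_state: dict) -> None:
        pass  # push the rendered frame to the projector aimed at the fog screen

class GestureTracker:
    """Tracking component: reports hand gestures detected in front of the image."""
    def poll(self) -> list:
        return []  # read the depth/IR sensor and return detected Gesture objects

class HolographicUI:
    """Software component: maps gestures onto the scene and drives the display."""
    def __init__(self, display: FogDisplay, tracker: GestureTracker):
        self.display = display
        self.tracker = tracker
        self.scene_state = {"selected": None}

    def run_frame(self) -> None:
        for gesture in self.tracker.poll():
            if gesture.kind == "tap":
                self.scene_state["selected"] = (gesture.x, gesture.y)
        self.display.render(self.scene_state)

ui = HolographicUI(FogDisplay(), GestureTracker())
ui.run_frame()  # one iteration of the main loop
```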

    Facilitating Keyboard Use While Wearing a Head-Mounted Display

    Virtual reality (VR) headsets are becoming more common and will require evolving input mechanisms to support a growing range of applications. Because VR devices require users to wear head-mounted displays, accommodations must be made in order to support specific input devices. One such device, the keyboard, serves as a useful tool for text entry, but many users will require assistance in using a keyboard while wearing a head-mounted display. Developers have explored new mechanisms to overcome the challenges of text entry for virtual reality. Several games have toyed with the idea of using motion controllers to provide a text entry mechanism; however, few investigations have been made into how to assist users in using a physical keyboard while wearing a head-mounted display. As an alternative to controller-based text input, I propose that a software tool could facilitate the use of a physical keyboard in virtual reality. Using computer vision, a user's hands could be projected into the virtual world. With the ability to see the location of their hands relative to the keyboard, users will be able to type despite the obstruction caused by the head-mounted display (HMD). The viability of this approach was tested and the tool released as a plugin for the Unity development platform. The potential uses for the plugin go beyond text entry, and the project can be expanded to include many physical input devices
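
    The released tool is a Unity plugin, but the underlying idea can be sketched in a few lines: segment the user's hands from a camera feed and build a masked overlay that a game engine could draw over the virtual keyboard. The OpenCV code below is an illustrative stand-in with placeholder thresholds, not the plugin itself.

```python
# Illustrative sketch only: segment skin-colored pixels from a camera mounted on
# or near the HMD and build an RGBA overlay a game engine could composite over
# the virtual keyboard. Thresholds are placeholders, not tuned values.
import cv2

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Rough skin segmentation in YCrCb space (placeholder thresholds).
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    mask = cv2.medianBlur(mask, 5)

    # Build an RGBA image: hand pixels opaque, everything else transparent.
    # A game engine would draw this texture on a quad anchored to the keyboard.
    b, g, r = cv2.split(frame)
    overlay = cv2.merge((b, g, r, mask))

    cv2.imshow("hand mask (alpha channel of the overlay)", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```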

    Ubiquitous Computing in a Home Environment, Controlling Consumer Electronics

    Building interaction prototypes for ubiquitous computing is inherently difficult, since it involves a number of different devices and systems. Prototyping is an important step in developing and evaluating interaction concepts. The ideal prototyping methodology should offer high fidelity at a relatively low cost. This thesis describes the development of interaction concepts for controlling consumer electronics in a ubiquitous computing home environment, as well as the setup, based on immersive virtual reality, used to develop and evaluate the interaction concepts. Off-the-shelf input/output devices and a game engine are used for developing two concepts for device discovery and two concepts for device interaction. The interaction concepts are compared in a controlled experiment in order to evaluate the concepts as well as the virtual reality setup. Statistically significant differences and subjective preferences could be observed in the quantitative and qualitative data respectively. Overall, the results suggest that the interaction concepts could be acceptable to some users for some use cases and that the virtual reality setup offers the possibility to quickly build interaction concepts which can be evaluated and compared in a controlled experiment

    Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

    The COVID-19 virus disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown us how ill-prepared we are to support social interactions when expected to remain socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burnout. It is clear that we need better solutions to address these issues, and one avenue showing promise is that of Interpersonal Telepresence. Interpersonal Telepresence is an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We discuss a review of the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To combat this negative finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming value tensions that naturally arise between Viewer and Streamer. Expectedly, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices to facilitate this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but the lenses need to be embedded in wearable systems, which might affect the viewing experience. We thus present two quantitative studies in which we examine the effects of camera placement and height on the viewing experience, in an effort to understand how we can better design telepresence systems. We found that camera height is not a significant factor, meaning wearable cameras do not need to be positioned at the natural eye level of the viewer; the streamer is able to place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype built on the co-design findings. Our participants preferred our prototype over simple video chat, even though it caused a somewhat increased sense of self-consciousness. Our participants indicated that they have their own preferences, even with simple design decisions such as the style of hat, and we as a community need to consider ways to allow customization within our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones

    Ability of head-mounted display technology to improve mobility in people with low vision: a systematic review

    Purpose: The purpose of this study was to undertake a systematic literature review on how vision enhancements, implemented using head-mounted displays (HMDs), can improve mobility, orientation, and associated aspects of visual function in people with low vision. Methods: The databases Medline, CINAHL, Scopus, and Web of Science were searched for potentially relevant studies. Publications from all years until November 2018 were identified based on predefined inclusion and exclusion criteria. The data were tabulated and synthesized to produce a systematic review. Results: The search identified 28 relevant papers describing the performance of vision enhancement techniques on mobility and associated visual tasks. Simplifying visual scenes improved obstacle detection and object recognition but decreased walking speed. Minification techniques increased the size of the visual field by 3 to 5 times and improved visual search performance. However, the impact of minification on mobility has not been studied extensively. Clinical trials with commercially available devices recorded poor results relative to conventional aids. Conclusions: The effects of current vision enhancements using HMDs are mixed. They appear to reduce mobility efficiency but improve obstacle detection and object recognition. The review highlights the lack of controlled studies with robust study designs. To support the evidence base, well-designed trials with larger sample sizes that represent different types of impairments and real-life scenarios are required. Future work should focus on identifying the needs of people with different types of vision impairment and providing targeted enhancements. Translational Relevance: This literature review examines the evidence regarding the ability of HMD technology to improve mobility in people with sight loss
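
    As a rough illustration of the minification effect reported above (the field-of-view numbers here are assumptions for the example, not values from any reviewed study), shrinking the image by a factor of 4 lets a wearer with a 10° residual field survey roughly a 40° slice of the scene:

```python
# Illustrative only: numbers are assumptions, not figures from the reviewed studies.
# Minification shrinks the displayed image so that a wider real-world angle fits
# into the wearer's remaining visual field.
residual_field_deg = 10     # assumed residual visual field of the wearer
minification_factor = 4     # mid-range of the 3x-5x expansion reported in the review

effective_field_deg = residual_field_deg * minification_factor
print(f"Effective field of view: {effective_field_deg} degrees")  # 40 degrees
```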

    Virtual Reality as Navigation Tool: Creating Interactive Environments For Individuals With Visual Impairments

    Research into the creation of assistive technologies is increasingly incorporating the use of virtual reality experiments. One area of application is as an orientation and mobility assistance tool for people with visual impairments. Some of the challenges are developing useful knowledge of the user’s surroundings and effectively conveying that information to the user. This thesis examines the feasibility of using virtual environments conveyed via auditory feedback as part of an autonomous mobility assistance system. Two separate experiments were conducted to study key aspects of a potential system: navigation assistance and map generation. The results of this research include mesh models that were fitted to the walkable pathways of an environment, and collected data that provide insights into the viability of virtual reality-based guidance systems
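
    One simple way such a system could convey navigation information through audio is to map the distance and bearing of the nearest obstacle to beep rate and stereo panning. The sketch below is a hypothetical illustration with made-up thresholds, not the thesis's implementation:

```python
# Hypothetical auditory-feedback sketch: closer obstacles beep faster, and the
# cue is panned toward the obstacle's bearing. Thresholds are illustrative only.
def audio_cue(distance_m: float, bearing_deg: float) -> dict:
    """Return beep parameters for an obstacle at the given distance and bearing.

    bearing_deg: negative = obstacle to the left, positive = to the right.
    """
    if distance_m > 5.0:
        return {"rate_hz": 0.0, "pan": 0.0}        # beyond 5 m the cue is silent
    rate_hz = 1.0 + (5.0 - distance_m) * 2.0       # 1 Hz far away, ~11 Hz when very close
    pan = max(-1.0, min(1.0, bearing_deg / 90.0))  # -1 = hard left, +1 = hard right
    return {"rate_hz": rate_hz, "pan": pan}

# Example: an obstacle 1.5 m ahead and slightly to the right.
print(audio_cue(1.5, 20.0))  # {'rate_hz': 8.0, 'pan': 0.222...}
```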

    Mixed Reality Interfaces for Augmented Text and Speech

    While technologies play a vital role in human communication, there remain many significant challenges in using them in everyday life. Modern computing technologies, such as smartphones, offer convenient and swift access to information, facilitating tasks like reading documents or communicating with friends. However, these tools frequently lack adaptability, become distracting, consume excessive time, and impede interactions with people and contextual information. Furthermore, they often require numerous steps and significant time investment to gather pertinent information. We want to explore an efficient process of contextual information gathering for mixed reality (MR) interfaces that provide information directly in the user’s view. This approach allows for a seamless and flexible transition between language and subsequent contextual references, without disrupting the flow of communication. 'Augmented Language' can be defined as the integration of language and communication with mixed reality to enhance, transform, or manipulate language-related aspects through various forms of linguistic augmentation (such as annotation/referencing, aiding social interactions, translation, localization, etc.). In this thesis, our broad objective is to explore mixed reality interfaces and their potential to enhance augmented language, particularly in the domains of speech and text. Our aim is to create interfaces that offer a more natural, generalizable, on-demand, and real-time experience of accessing contextually relevant information and providing adaptive interactions. To better address this broader objective, we systematically break it down into two instances of augmented language: first, enhancing augmented conversation to support on-the-fly, co-located, in-person conversations using embedded references; and second, enhancing digital and physical documents using MR to provide on-demand reading support in the form of different summarization techniques. To examine the effectiveness of these speech and text interfaces, we conducted two studies in which we asked participants to evaluate our system prototypes in different use cases. The exploratory usability study for the first exploration confirms that our system decreases distraction and friction in conversation compared to smartphone search while providing highly useful and relevant information. For the second project, we conducted an exploratory design workshop to identify categories of document enhancements. We then conducted a user study with a mixed-reality prototype, from which we highlight five broad themes concerning the benefits of MR document enhancement
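
    A minimal sketch of the augmented-conversation pipeline described above might look as follows; every function here is a placeholder standing in for one stage of the prototype (speech recognition, entity extraction, contextual lookup, MR rendering), not its actual API.

```python
# Hypothetical pipeline sketch for augmented conversation; the function bodies
# are stubs standing in for the real speech, NLP, lookup, and MR components.
def transcribe(audio_chunk: bytes) -> str:
    return "we should try that new ramen place"   # stand-in for speech-to-text

def extract_entities(utterance: str) -> list:
    return ["ramen place"]                        # stand-in for keyword/entity extraction

def lookup(entity: str) -> str:
    return f"nearby result for '{entity}'"        # stand-in for a contextual search

def render_in_view(entity: str, snippet: str) -> None:
    print(f"[MR label] {entity}: {snippet}")      # stand-in for the headset overlay

def on_audio(audio_chunk: bytes) -> None:
    """One pipeline step: speech -> entities -> context -> unobtrusive in-view label."""
    for entity in extract_entities(transcribe(audio_chunk)):
        render_in_view(entity, lookup(entity))

on_audio(b"")  # prints: [MR label] ramen place: nearby result for 'ramen place'
```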

    WEARABLE ROBOTIC SYSTEM FOR INTERACTIVE DIGITAL MEDIA

    Master's thesis, Master of Engineering

    HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments

    Many activities of daily living such as getting dressed, preparing food, wayfinding, or shopping rely heavily on visual information, and the inability to access that information can negatively impact the quality of life for people with vision impairments. While numerous researchers have explored solutions for assisting with visual tasks that can be performed at a distance, such as identifying landmarks for navigation or recognizing people and objects, few have attempted to provide access to nearby visual information through touch. Touch is a highly attuned means of acquiring tactile and spatial information, especially for people with vision impairments. By supporting touch-based access to information, we may help users to better understand how a surface appears (e.g., document layout, clothing patterns), thereby improving the quality of life. To address this gap in research, this dissertation explores methods to augment a visually impaired user’s sense of touch with interactive, real-time computer vision to access information about the physical world. These explorations span three application areas: reading and exploring printed documents, controlling mobile devices, and identifying colors and visual textures. At the core of each application is a system called HandSight that uses wearable cameras and other sensors to detect touch events and identify surface content beneath the user’s finger. To create HandSight, we designed and implemented the physical hardware, developed signal processing and computer vision algorithms, and designed real-time feedback that enables users to interpret visual or digital content. We involve visually impaired users throughout the design and development process, conducting several user studies to assess usability and robustness and to improve our prototype designs. The contributions of this dissertation include: (i) developing and iteratively refining HandSight, a novel wearable system to assist visually impaired users in their daily lives; (ii) evaluating HandSight across a diverse set of tasks, and identifying tradeoffs of a finger-worn approach in terms of physical design, algorithmic complexity and robustness, and usability; and (iii) identifying broader design implications for future wearable systems and for the fields of accessibility, computer vision, augmented and virtual reality, and human-computer interaction
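
    The sensing loop such a finger-worn system needs can be summarized as: detect a touch event, capture the surface patch under the fingertip, classify it, and speak the result. The sketch below is a schematic outline with placeholder function names; it is not the HandSight implementation.

```python
# Schematic outline of a HandSight-style sensing loop; all function bodies are
# placeholders for the real sensor, camera, vision, and feedback components.
def touch_detected(sample: float) -> bool:
    return sample > 0.8          # e.g. threshold a finger-mounted IR/pressure reading

def capture_patch() -> list:
    return [[255, 255, 255]]     # stand-in for a crop from the finger-worn camera

def classify_patch(patch) -> str:
    return "white surface"       # stand-in for OCR / color / texture classification

def speak(text: str) -> None:
    print(f"[audio] {text}")     # stand-in for speech or haptic feedback

def sensing_loop(sensor_stream):
    for sample in sensor_stream:
        if touch_detected(sample):
            speak(classify_patch(capture_patch()))

sensing_loop([0.1, 0.9, 0.2])    # one simulated touch event -> one spoken result
```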

    INDUSTRIAL SAFETY USING AUGMENTED REALITY AND ARTIFICIAL INTELLIGENCE

    Industrialization brought benefits to the development of societies, albeit at the cost of the safety of industrial workers. Industrial operators were often severely injured or lost their lives during the working process. The causes can be cuts or lacerations from moving machine parts, burns or scalds from touch, or the mishandling of thermal, electrical, and chemical objects. Fatigue, distraction, or inattention can exacerbate the risk of industrial accidents. Accidents can cause service downtime of manufacturing machinery, leading to lower productivity and significant financial losses. Therefore, regulations and safety measures were formulated and overseen by governments and local authorities. Safety measures include effective training of workers, inspection of the workplace, safety rules, safeguarding, and safety warning systems. For instance, safeguarding prevents contact with hazardous moving parts by isolating or stopping them, whereas a safety warning system detects accident risks and issues a warning. Warning systems have mostly consisted of mounted detection sensors and alerting systems. Mobile alerting devices can be gadgets such as phones, tablets, smartwatches, or smart glasses. Smart glasses can be utilized for industrial safety to protect, detect, and warn about potential risks. Adopting new technologies such as augmented reality and artificial intelligence can enhance the safety of workers in industry. Augmented reality systems developed for head-mounted displays can extend workers’ perception of the environment. Artificial intelligence utilizing state-of-the-art sensors can improve industrial safety by making workers aware of potential hazards in the environment. For instance, thermal or infrared sensors can detect hot objects in the workplace, and built-in infrared sensors in smart glasses can detect the state of attention of users. Using smart glasses, potential hazards can be conveyed to industrial workers through various modalities, such as auditory, visual, or tactile cues. We have developed advanced safety systems for industrial workers. Our approach incorporates eye tracking, spatial mapping, and thermal imaging. By utilizing eye tracking, we are able to identify instances of user inattention, while spatial mapping allows us to analyze the user’s behavior and surroundings. Furthermore, the integration of thermal imaging enables us to detect hot objects within the user’s field of view. The first system we developed is a warning system that harnesses augmented reality and artificial intelligence. This system issues alerts and presents holographic warnings to combat instances of inattention or distraction. By utilizing visual cues and immersive technology, we aim to proactively prevent accidents and promote worker safety. The second safety system integrates a third-party thermal imaging system into smart glasses. Through this integration, our safety system overlays false-color holograms onto hot objects, enabling workers to easily identify and avoid potential hazards. To evaluate the effectiveness of our systems, we conducted comprehensive experiments with human participants. These experiments involved both qualitative and quantitative measurements, and we further conducted semi-structured interviews with the participants to gather their insights. The results and subsequent discussions from our experiments have provided valuable insights for the future implementation of safety systems. Through this research, we envision the continued advancement and refinement of safety technologies to further enhance worker safety in industrial settings
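
    The thermal-hazard step described above can be sketched as a simple threshold-and-overlay loop; the temperature threshold, array sizes, and function names below are illustrative assumptions rather than values from the study.

```python
# Rough sketch of the thermal-hazard step: threshold a thermal frame, locate hot
# regions, and hand their positions to the AR layer so a false-color hologram can
# be drawn over them. Temperatures and names are assumptions, not study values.
import numpy as np

HOT_THRESHOLD_C = 55.0  # assumed temperature above which a surface is flagged

def find_hot_regions(thermal_frame_c: np.ndarray) -> list:
    """Return (row, col) pixel coordinates of hot regions in a thermal frame (deg C)."""
    hot_mask = thermal_frame_c > HOT_THRESHOLD_C
    return [tuple(p) for p in np.argwhere(hot_mask)]

def warn_user(pixels: list) -> None:
    # Placeholder for the AR layer: map pixel coordinates into the headset's
    # spatial map and render a false-color overlay plus an audio/visual alert.
    if pixels:
        print(f"hazard: {len(pixels)} hot pixel(s) in view")

# Example with a fake 4x4 thermal frame (deg C); one corner is hot.
frame = np.full((4, 4), 25.0)
frame[0, 0] = 80.0
warn_user(find_hot_regions(frame))  # prints: hazard: 1 hot pixel(s) in view
```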