
    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance, and computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to estimate user emotion with a software-only method.
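
    The appraisal style described (fuzzy rules mapping game events to emotion intensities) can be illustrated with a minimal sketch. The membership breakpoints, event values, and rule structure below are illustrative assumptions in the spirit of FLAME, not values from the paper.

```python
# A minimal sketch of fuzzy event appraisal in the spirit of FLAME.
# Membership breakpoints and example values are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def appraise(desirability, goal_importance):
    """Map an appraised game event to joy/distress intensities in [0, 1].

    desirability:    -1 (very undesirable) .. +1 (very desirable)
    goal_importance:  0 (irrelevant)       .. 1 (critical)
    """
    desirable = triangular(desirability, 0.0, 1.0, 2.0)     # "event is desirable"
    undesirable = triangular(desirability, -2.0, -1.0, 0.0) # "event is undesirable"
    important = goal_importance                             # identity membership

    # Mamdani-style rules: min for rule firing strength.
    joy = min(desirable, important)
    distress = min(undesirable, important)
    return {"joy": joy, "distress": distress}

# Example: the player picks up a health pack while at low health.
print(appraise(desirability=0.8, goal_importance=0.9))
# -> {'joy': 0.8, 'distress': 0.0}
```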

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with great functional overlap, yet little interoperability between them. A shared virtual environment has a highly complex problem domain, which raises difficult challenges for the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to understand the whole rather than only its parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.
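
    To make the idea of a shared-virtual-environment domain model concrete, here is a minimal sketch of the kind of core concepts such a model typically captures (worlds, entities, avatars). The class names and fields are illustrative assumptions, not the paper's actual Analysis Domain Model.

```python
# A minimal sketch of shared-virtual-environment domain concepts.
# Names and fields are illustrative assumptions, not the paper's model.
from dataclasses import dataclass, field

@dataclass
class Entity:
    entity_id: str
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class Avatar(Entity):
    user_name: str = ""   # the human participant this avatar represents

@dataclass
class World:
    name: str
    entities: dict = field(default_factory=dict)

    def spawn(self, entity: Entity) -> None:
        """Add an entity; a networked system would replicate this to peers."""
        self.entities[entity.entity_id] = entity

world = World("lobby")
world.spawn(Avatar("av-1", (1.0, 0.0, 2.0), user_name="alice"))
print(sorted(world.entities))   # -> ['av-1']
```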

    Using Sound to Enhance Users’ Experiences of Mobile Applications

    The latest smartphones, with GPS, electronic compasses, directional audio, touch screens, and similar features, hold potential for location-based services that are easier to use than traditional tools. Rather than interpreting maps, users may focus on their activities and the environment around them. Interfaces can be designed that let users search for information simply by pointing in a direction, with database queries created from GPS location and compass direction data. Users can be guided to locations through pointing gestures, spatial sound, and simple graphics. This article describes two studies testing prototype applications with multimodal user interfaces built on spatial audio, graphics, and text. Tests show that users appreciated the applications for being easy, fun, and effective to use, and for allowing them to interact directly with the environment rather than with abstractions of it. The multimodal user interfaces contributed significantly to the overall user experience.
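
    The core "point to query" mechanism, building a query from GPS position plus compass heading, can be sketched as a simple angular filter. The coordinates, cone width, and great-circle bearing formula below are illustrative assumptions, not the article's implementation.

```python
# A minimal sketch of a "point to query" filter: keep points of interest
# lying within an angular cone around the compass heading.
# Example coordinates and the 15-degree half-angle are assumptions.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def pointed_at(user, heading_deg, pois, half_angle=15.0):
    """Return POIs within +/- half_angle degrees of the pointing direction."""
    hits = []
    for name, (lat, lon) in pois.items():
        diff = (bearing_deg(*user, lat, lon) - heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half_angle:
            hits.append(name)
    return hits

pois = {"cafe": (59.3295, 18.0690), "museum": (59.3280, 18.0700)}
print(pointed_at((59.3290, 18.0686), heading_deg=20.0, pois=pois))  # -> ['cafe']
```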

    Establishing a Framework for the development of Multimodal Virtual Reality Interfaces with Applicability in Education and Clinical Practice

    The development of Virtual Reality (VR) and Augmented Reality (AR) content with multiple sources of both input and output has led to countless contributions in a great many fields, among them medicine and education. Nevertheless, the actual process of integrating existing VR/AR media and setting it to purpose remains a highly scattered and esoteric undertaking. Moreover, the architectures that derive from such ventures seldom include haptic feedback in their implementation, which deprives users of one of the paramount aspects of human interaction: their sense of touch. Determined to circumvent these issues, the present dissertation proposes a centralized albeit modularized framework that enables the development of multimodal VR/AR applications in a novel and straightforward manner. To accomplish this, the framework makes use of a stereoscopic VR Head-Mounted Display (HMD) from Oculus Rift©, a hand-tracking controller from Leap Motion©, a custom-made VR mount that allows the two preceding peripherals to be assembled together, and a wearable device of our own design. The latter is a glove comprising two core modules: one that conveys haptic feedback to its wearer, and another that handles the non-intrusive acquisition, processing, and registering of the wearer's Electrocardiogram (ECG), Electromyogram (EMG), and Electrodermal Activity (EDA). The software elements of these features were all interfaced through Unity3D©, a powerful game engine whose popularity in academic and scientific endeavors is ever increasing. Upon completion of our system, we substantiated our initial claim with thoroughly developed experiences that attest to its worth. With this premise in mind, we devised a comprehensive repository of interfaces, among which three merit special consideration: Brain Connectivity Leap (BCL), Ode to Passive Haptic Learning (PHL), and a Surgical Simulator.
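
    A glove module that streams ECG, EMG, and EDA samples needs host-side code to parse the incoming byte stream. The sketch below shows one plausible way to do this; the 16-bit little-endian frame layout and channel order are illustrative assumptions, not the dissertation's actual protocol (and its host side runs inside Unity3D rather than Python).

```python
# A minimal sketch of parsing a combined biosignal sample stream such as a
# sensing glove might emit. Frame layout and channel order (ECG, EMG, EDA)
# are illustrative assumptions.
import struct

FRAME = struct.Struct("<HHH")   # one frame: ECG, EMG, EDA as unsigned 16-bit

def parse_stream(raw: bytes):
    """Yield (ecg, emg, eda) tuples from a raw byte stream of whole frames."""
    for offset in range(0, len(raw) - FRAME.size + 1, FRAME.size):
        yield FRAME.unpack_from(raw, offset)

# Three synthetic frames as they might arrive over serial/Bluetooth.
raw = struct.pack("<9H", 512, 100, 300, 515, 98, 301, 520, 110, 305)
for ecg, emg, eda in parse_stream(raw):
    print(f"ECG={ecg} EMG={emg} EDA={eda}")
```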

    Molecular Docking With Haptic Guidance and Path Planning

    Molecular docking drives many important biological processes, including immune system recognition and cellular signalling, and occurs when molecules interact and form complexes. Predicting how specific molecules dock with each other using computational methods has several applications, including understanding diseases and virtual drug design. The goal of molecular docking prediction is to find the lowest-energy ligand states: the lower a state's energy, the more probable it is that the state is docked and biologically feasible. Existing automated computational methods can be time-intensive, especially when using direct molecular dynamics simulation. One way to reduce this computational cost is to use coarse-grained models that approximate molecular docking. Coarse-grained molecular docking prediction is generally performed by first sampling ligand states using a rigid-body or partial-flexibility model to reduce computation, and then screening the states with a scoring function, usually a potential energy function over the interactions between the atoms in each molecule. Ligand state search algorithms still carry a significant computational cost if a large portion of the state space is to be explored. Instead of an automated search, a human operator can explore the state space, aided by a haptic force-feedback device that provides guidance based on the energy function. Haptic guidance has been used for immersive semi-automatic and manual molecular docking at the single-operator scale, and a far larger portion of ligand state space could be explored by many human operators in a crowdsourced effort. Players of an interactive crowdsourced protein-folding puzzle game have aided in finding protein-folding prediction solutions, but without haptic feedback; interactive crowdsourced methods for molecular docking prediction remain under-explored, although non-interactive crowdsourced systems such as Folding@home can be adapted for molecular docking. This thesis presents a molecular docking game that produces low-potential-energy ligand states and motion paths with crowdsource-scale potential. In an exploratory user study, participants were assigned four different types of devices with varying levels of haptic guidance to search for a potentially docked ligand state. The results show some effect of device type and haptic guidance, but the differences are minimal, potentially enabling the use of commonly available input devices in a crowdsourced setting.
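
    The scoring step described above, a potential energy function summed over atom-pair interactions, can be sketched with a standard Lennard-Jones plus Coulomb potential. The parameters (epsilon, sigma, charges, Coulomb constant) are illustrative textbook values, not the thesis's actual scoring function.

```python
# A minimal sketch of a pairwise scoring function of the kind used to screen
# ligand states: Lennard-Jones 12-6 plus Coulomb terms over receptor-ligand
# atom pairs. Parameter values are illustrative assumptions.
import math

def pair_energy(r, epsilon=0.2, sigma=3.4, q1=0.0, q2=0.0, coulomb_k=332.06):
    """Lennard-Jones plus Coulomb energy for one atom pair (kcal/mol, r in A)."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * epsilon * (sr6 * sr6 - sr6)
    electro = coulomb_k * q1 * q2 / r
    return lj + electro

def score(receptor_atoms, ligand_atoms):
    """Sum pair energies over all receptor-ligand atom pairs; lower is better."""
    total = 0.0
    for (x1, y1, z1, q1) in receptor_atoms:
        for (x2, y2, z2, q2) in ligand_atoms:
            r = math.dist((x1, y1, z1), (x2, y2, z2))
            total += pair_energy(r, q1=q1, q2=q2)
    return total

# One receptor atom and one ligand atom with opposite partial charges:
receptor = [(0.0, 0.0, 0.0, -0.4)]
ligand = [(3.8, 0.0, 0.0, 0.4)]
print(score(receptor, ligand))   # negative: an attractive (favorable) contact
```

    A haptic device can render the negative gradient of this score as a force, pulling the operator's hand toward lower-energy states, which is the essence of the energy-based guidance the thesis describes.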

    Designing Tactile Interfaces for Abstract Interpersonal Communication, Pedestrian Navigation and Motorcyclists Navigation

    The tactile medium of communication is appropriate for displaying information in situations where the auditory and visual channels are saturated, or where a subject's ability to receive information through either channel is severely restricted by the environment or by physical impairments. In this project, we focus on two groups of users whose tasks demand sustained visual and auditory attention: Soldiers on the battlefield and motorcyclists. Soldiers on the battlefield use their visual and auditory capabilities to maintain awareness of their environment and guard against enemy assault. One of the major challenges to coordination in a hazardous environment is maintaining communication between team members while mitigating cognitive load; compromised communication may result in mistakes that adversely affect the outcome of a mission. We built two vibrotactile displays, Tactor I and Tactor II, each with nine actuators arranged in a three-by-three matrix with differing contact areas, able to represent a total of 511 shapes. We used two dimensions of the tactile medium, shapes and waveforms, to represent verb phrases, and evaluated users' ability to perceive verb phrases from the tactile code while performing two tasks simultaneously. The results showed that performing an additional visual task affected neither the accuracy nor the time taken to perceive tactile codes. Another challenge in coordinating Soldiers on a battlefield is navigating them to their respective assembly areas. We developed HaptiGo, a lightweight haptic vest that provides pedestrians with both navigational intelligence and obstacle detection. HaptiGo consists of optimally placed vibrotactile sensors that use natural, small-form-factor interaction cues, emulating the sensation of being passively guided toward the intended direction. In our evaluation, HaptiGo successfully navigated users and issued timely alerts of oncoming obstacles without increasing cognitive load, thereby increasing environmental awareness; users were able to respond to directional information without training. The needs of motorcyclists are different from those of Soldiers: because they are highly exposed on the road, motorcyclists must maintain visual and auditory situational awareness at all times. Route guidance systems such as Garmin's have been well tested with automobile drivers but remain much less safe for motorcyclists, since audio/visual routing decreases situational awareness and vehicle control and thus increases the chance of an accident. To let motorcyclists take advantage of route guidance while maintaining situational awareness, we created HaptiMoto, a wearable haptic route guidance system that uses tactile signals to encode the distance and direction of approaching turns, avoiding interference with audio/visual awareness. Evaluations show that HaptiMoto is intuitive for motorcyclists and a safer alternative to existing solutions.
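
    The 511-shape figure follows from the three-by-three actuator matrix: each shape is a nonempty subset of the nine actuators, giving 2^9 - 1 = 511 possibilities. A minimal sketch of this encoding appears below; the 9-bit mask and row-major mapping are illustrative assumptions, not the Tactor hardware's actual control format.

```python
# A minimal sketch of the 3x3 tactor shape space: each shape is a nonempty
# subset of nine actuators, i.e. a 9-bit mask (2**9 - 1 = 511 shapes).
# The row-major mask-to-grid mapping is an illustrative assumption.

def shape_to_grid(mask: int):
    """Return a 3x3 matrix of 0/1 actuator states for a 9-bit shape code."""
    assert 1 <= mask <= 511, "valid shapes are the 511 nonempty subsets"
    bits = [(mask >> i) & 1 for i in range(9)]   # row-major order
    return [bits[0:3], bits[3:6], bits[6:9]]

def show(mask: int):
    for row in shape_to_grid(mask):
        print(" ".join("#" if on else "." for on in row))

show(0b000010111)   # top row plus the centre actuator
```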

    OpenMPD: A Low-Level Presentation Engine for Multimodal Particle-Based Displays

    Phased arrays of transducers have been evolving quickly in both software and hardware, with applications in haptics (acoustic vibrations), display (levitation), and audio. Most recently, Multimodal Particle-based Displays (MPDs) have demonstrated volumetric content that can be seen, heard, and felt simultaneously, without additional instrumentation. However, current software tools support only individual modalities and do not address the integration and exploitation of the multimodal potential of MPDs, because there is no standardized presentation pipeline tackling the challenges of presenting this kind of multimodal content (e.g., multimodal support, multi-rate synchronization at 10 kHz, and visual rendering, synchronization, and continuity). This article presents OpenMPD, a low-level presentation engine that deals with these challenges and allows structured exploitation of any type of MPD content (i.e., visual, tactile, audio). We characterize OpenMPD's performance and illustrate how it can be integrated into higher-level development tools (i.e., the Unity game engine). We then illustrate its ability to enable novel presentation capabilities, such as support for multiple MPD contents, dexterous manipulation of fast-moving particles, and novel swept-volume MPD content.
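
    To give a sense of what a 10 kHz presentation loop implies, the sketch below generates per-update particle target positions for a swept circular path, the kind of fast sweep that makes a levitated particle appear as persistent volumetric content. The path shape, amplitude, and function names are illustrative assumptions; OpenMPD's real API differs.

```python
# A minimal sketch of generating particle target positions at a 10 kHz
# update rate, as a particle-display presentation engine must. The circular
# sweep path and its parameters are illustrative assumptions.
import math

RATE_HZ = 10_000   # per-particle position updates per second

def circular_sweep(radius_m=0.02, sweep_hz=10.0, duration_s=0.001):
    """Yield (x, y, z) targets tracing a circle fast enough to look solid."""
    n = int(RATE_HZ * duration_s)
    for i in range(n):
        t = i / RATE_HZ
        angle = 2.0 * math.pi * sweep_hz * t
        yield (radius_m * math.cos(angle), radius_m * math.sin(angle), 0.1)

for pos in circular_sweep():
    print("%.4f %.4f %.4f" % pos)   # 10 samples fall in this 1 ms window
```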

    Digital fabrication of custom interactive objects with rich materials

    As ubiquitous computing becomes reality, people interact with an increasing number of computer interfaces embedded in physical objects. Today, interaction with those objects largely relies on integrated touchscreens. In contrast, humans are capable of rich interaction with physical objects and their materials through sensory feedback and dexterous manipulation skills. Developing physical user interfaces that offer versatile interaction and leverage these capabilities is challenging, however: it requires novel technologies for prototyping interfaces with custom interactivity that support the rich materials of everyday objects, and such technologies need to be accessible in order to empower a wide audience of researchers, makers, and users. This thesis investigates digital fabrication as a key technology to address these challenges. It contributes four novel design and fabrication approaches for interactive objects with rich materials, enabling easy, accessible, and versatile design and fabrication of interactive objects with custom stretchability, input and output on complex geometries and diverse materials, tactile output on 3D object geometries, and the capability to change their shape and material properties. Together, the contributions of this thesis advance the fields of digital fabrication, rapid prototyping, and ubiquitous computing toward the larger goal of exploring interactive objects with rich materials as a new generation of physical interfaces.