4 research outputs found

    Expanding the User Interactions and Design Process of Haptic Experiences in Virtual Reality

    Virtual reality can be a highly immersive experience due to its realistic visual presentation. This immersive state is useful for applications including education, training, and entertainment. To further enhance this immersion, devices capable of simulating touch and force have been researched, allowing not only a visual and audio experience but a haptic one as well. Such research has investigated many approaches to generating haptics for virtual reality but often does not explore how to create an immersive haptic experience with them. In this thesis, we present a discussion of four proposed areas of the virtual reality haptic experience design process using a demonstration methodology. To investigate the application of haptic devices, we designed a modular ungrounded haptic system, used it to create a general-purpose device capable of force-based feedback, and applied that device in three areas of exploration. The first area is the application of existing haptic theory for aircraft control to the field of virtual reality drone control. The second is the presence of the size-weight sensory illusion within virtual reality when using a simulated haptic force. The third is how authoring within a virtual reality medium can be used by a designer to create VR haptic experiences. From these explorations, we begin a higher-level discussion of the broader process of creating a virtual reality haptic experience. Using the results of each project as a representation of our proposed design steps, we discuss not only the broader concepts each step contributes to the process and their importance, but also draw connections between the steps. In doing so, we present a more holistic approach to the large-scale design of virtual reality haptic experiences and the benefits we believe it provides.
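    The size-weight illusion study mentioned above hinges on decoupling what the user sees from what the device renders. The sketch below illustrates that idea only; the thesis's actual device API is not described in the abstract, so every name here is a hypothetical assumption.

```python
# Minimal sketch of a size-weight illusion trial with force feedback.
# The device interface implied here is hypothetical: the thesis's modular
# ungrounded haptic system is not specified, so all names are assumptions.

from dataclasses import dataclass

G = 9.81  # gravitational acceleration in m/s^2

@dataclass
class VirtualObject:
    visual_volume_cm3: float   # size the user sees in VR
    simulated_mass_kg: float   # mass the haptic device renders

def weight_force_newtons(obj: VirtualObject) -> float:
    """Downward force the device should render while the object is held."""
    return obj.simulated_mass_kg * G

# Classic size-weight setup: identical rendered force, different visual sizes.
small = VirtualObject(visual_volume_cm3=125.0, simulated_mass_kg=0.5)
large = VirtualObject(visual_volume_cm3=1000.0, simulated_mass_kg=0.5)

for obj in (small, large):
    print(f"volume={obj.visual_volume_cm3:6.0f} cm^3 -> "
          f"render {weight_force_newtons(obj):.2f} N downward")
# If the illusion transfers to VR, users should judge the small object
# heavier even though both trials render the same 4.91 N force.
```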

    Advancing proxy-based haptic feedback in virtual reality

    This thesis advances haptic feedback for Virtual Reality (VR). Our work is guided by Sutherland's 1965 vision of the ultimate display, which calls for VR systems to control the existence of matter. To push towards this vision, we build upon proxy-based haptic feedback, a technique characterized by the use of passive tangible props. The goal of this thesis is to tackle the central drawback of this approach, namely its inflexibility, which still prevents it from fulfilling the vision of the ultimate display. Guided by four research questions, we first showcase the applicability of proxy-based VR haptics by employing the technique for data exploration. We then extend the VR system's control over users' haptic impressions in three steps. First, we contribute the class of Dynamic Passive Haptic Feedback (DPHF) alongside two novel concepts for conveying kinesthetic properties, like virtual weight and shape, through weight-shifting and drag-changing proxies. Conceptually orthogonal to this, we study how visual-haptic illusions can be leveraged to unnoticeably redirect the user's hand when reaching towards props. Here, we contribute a novel perception-inspired algorithm for Body Warping-based Hand Redirection (HR), an open-source framework for HR, and psychophysical insights. The thesis concludes by showing that the combination of DPHF and HR can outperform the individual techniques in terms of the achievable flexibility of proxy-based haptic feedback.
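    For readers unfamiliar with hand redirection, the sketch below shows the generic body-warping idea on which such algorithms build; it is not the thesis's specific perception-inspired algorithm. As the tracked hand travels towards the physical prop, a growing fraction of the prop-to-virtual-target offset is added to the rendered hand, so the real hand lands on the prop while the virtual hand lands on the virtual object.

```python
# Generic body-warping hand redirection (illustrative sketch; not the
# thesis's perception-inspired algorithm). The rendered hand is offset
# from the tracked hand by a fraction of the prop-to-target offset that
# grows with the hand's progress towards the prop.

import numpy as np

def redirect_hand(real_hand: np.ndarray, start: np.ndarray,
                  virtual_target: np.ndarray,
                  physical_prop: np.ndarray) -> np.ndarray:
    """Position at which to render the virtual hand."""
    total = np.linalg.norm(physical_prop - start)
    travelled = np.linalg.norm(real_hand - start)
    # Warp ratio grows from 0 at reach onset to 1 at the prop, so the
    # visual-proprioceptive mismatch is introduced gradually.
    alpha = min(travelled / total, 1.0) if total > 0 else 1.0
    return real_hand + alpha * (virtual_target - physical_prop)

start = np.array([0.0, 0.0, 0.0])
prop = np.array([0.30, 0.00, 0.40])    # where the passive prop really is
target = np.array([0.35, 0.05, 0.40])  # where the virtual object appears

for t in (0.0, 0.5, 1.0):              # real hand moving towards the prop
    hand = start + t * (prop - start)
    print(t, redirect_hand(hand, start, target, prop))
# At t=1.0 the real hand touches the prop while the printed virtual hand
# position coincides with the virtual target.
```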

    Dynamical systems: mathematical and numerical approaches

    The Proceedings of the 13th Conference „Dynamical Systems - Theory and Applications" summarize 164 papers, and the accompanying Springer Proceedings summarize the 60 best papers, by university teachers, students, researchers, and engineers from around the world. The papers were chosen by the International Scientific Committee from 315 papers submitted to the conference. The reader thus obtains an overview of recent developments in dynamical systems and can study the most progressive tendencies in this field of science.

    The Coupling of Perception and Action in Representation

    This thesis examines how the objects that we visually perceive in the world are coupled to the actions that we make towards them. For example, a whole-hand grasp might be coupled with an object like an apple, but not with an object like a pea. It has been claimed that the coupling of what we see and what we do is not simply associative, but is fundamental to the way the brain represents visual objects. More than association, it is thought that when an object is seen (even if there is no intention to interact with it), there is a partial and automatic activation of the networks in the brain that plan actions (such as reaches and grasps). The central aim of this thesis was to investigate how specific these partial action plans might be, and how specific the properties of objects that automatically activate them might be. Acknowledging that perception and action are dynamically intertwining processes (such that in catching a butterfly the eye and the hand cooperate with a fluid and seamless efficiency), it was supposed that these couplings of perception and action in the brain might be loosely constrained. That is, they should not be rigidly prescribed (such that a highly specific action is always and only coupled with a specific object property) but should instead involve fairly general components of actions that can adapt to different situations. The experimental work examined the automatic coupling of simple left and right actions (e.g. key presses) to pictures of oriented objects. Typically, a picture of an object was shown and the viewer responded as fast as possible to some object property that was not associated with action (such as its colour). Of interest was how the performance of these left or right responses related to the task-irrelevant left or right orientation of the object. The coupling of a particular response to a particular orientation could be demonstrated through response performance (speed and accuracy): the more tightly coupled a response was to a particular object orientation, the faster and more accurate it was. The results supported the idea of loosely constrained action plans. It appeared that a range of different actions (even foot responses) could be coupled with an object's orientation. These actions were coupled by default to an object's X-Z orientation (i.e. orientation in the depth plane). Further reflecting a loosely constrained perception-action mechanism, these couplings were shown to change in different situations (e.g. when the object moved towards the viewer, or when a key press made the object move in a predictable way). It was concluded that the components of actions that are automatically activated when viewing an object are not very detailed or fixed, but are initially quite general and can change and become more specific when circumstances demand it.
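    The coupling measure described above is, in essence, a stimulus-response compatibility effect. The sketch below is purely illustrative (the trial data and analysis are invented for demonstration, not taken from the thesis): it computes the mean reaction-time advantage of compatible trials, where the response side matches the object's task-irrelevant orientation.

```python
# Illustrative sketch (invented data, not the thesis's analysis) of
# quantifying a stimulus-response compatibility effect: responses are
# faster when the response side matches the object's task-irrelevant
# orientation.

from statistics import mean

# Each trial: (object orientation, response side, reaction time ms, correct)
trials = [
    ("left",  "left",  412, True),   # compatible
    ("left",  "right", 468, True),   # incompatible
    ("right", "right", 405, True),   # compatible
    ("right", "left",  481, False),  # incompatible, error trial
    ("right", "right", 398, True),   # compatible
    ("left",  "right", 455, True),   # incompatible
]

def rt(compatible: bool) -> float:
    """Mean reaction time of correct trials, split by compatibility."""
    return mean(t[2] for t in trials
                if ((t[0] == t[1]) == compatible) and t[3])

effect = rt(False) - rt(True)  # positive => orientation primed the response
print(f"compatible {rt(True):.0f} ms, incompatible {rt(False):.0f} ms, "
      f"effect {effect:.0f} ms")
```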