
    Advancing proxy-based haptic feedback in virtual reality

    This thesis advances haptic feedback for Virtual Reality (VR). Our work is guided by Sutherland's 1965 vision of the ultimate display, which calls for VR systems to control the existence of matter. To push towards this vision, we build upon proxy-based haptic feedback, a technique characterized by the use of passive tangible props. The goal of this thesis is to tackle the central drawback of this approach, namely its inflexibility, which still hinders it from fulfilling the vision of the ultimate display. Guided by four research questions, we first showcase the applicability of proxy-based VR haptics by employing the technique for data exploration. We then extend the VR system's control over users' haptic impressions in three steps. First, we contribute the class of Dynamic Passive Haptic Feedback (DPHF) alongside two novel concepts for conveying kinesthetic properties, like virtual weight and shape, through weight-shifting and drag-changing proxies. Conceptually orthogonal to this, we study how visual-haptic illusions can be leveraged to unnoticeably redirect the user's hand when reaching towards props. Here, we contribute a novel perception-inspired algorithm for Body Warping-based Hand Redirection (HR), an open-source framework for HR, and psychophysical insights. The thesis concludes by showing that the combination of DPHF and HR can outperform the individual techniques in terms of the achievable flexibility of proxy-based haptic feedback.
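    Body-warping hand redirection of the kind named above is commonly realized by offsetting the rendered hand toward the virtual target in proportion to reach progress, so the real hand lands on the physical prop while the virtual hand lands on the virtual object. A minimal sketch of that general idea, assuming a simple linear warp (this is illustrative, not the thesis's perception-inspired algorithm):

```python
import numpy as np

def warped_hand_position(real_hand, real_target, virtual_target, start):
    """Body-warping hand redirection sketch: shift the virtual hand toward
    the virtual target in proportion to reach progress, so that when the
    real hand touches the prop, the virtual hand touches the virtual object.
    All positions are 3D numpy arrays; the warp here is linear in progress."""
    total = np.linalg.norm(real_target - start)
    progress = np.clip(np.linalg.norm(real_hand - start) / total, 0.0, 1.0)
    offset = virtual_target - real_target   # redirection needed at contact
    return real_hand + progress * offset    # offset grows with progress

# Illustrative numbers: prop at (0.3, 0, 0.5) m, virtual object 5 cm right
start = np.array([0.0, 0.0, 0.0])
real_target = np.array([0.3, 0.0, 0.5])
virtual_target = np.array([0.35, 0.0, 0.5])
halfway = (start + real_target) / 2
print(warped_hand_position(halfway, real_target, virtual_target, start))
```

At the halfway point the virtual hand is offset by half the final 5 cm discrepancy; at contact the full offset is applied, which is what keeps the redirection below perceptual thresholds when the discrepancy is small.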

    Perceptual Visibility Model for Temporal Contrast Changes in Periphery

    Modeling perception is critical for many applications and developments in computer graphics to optimize and evaluate content generation techniques. Most of the work to date has focused on central (foveal) vision. However, this is insufficient for novel wide-field-of-view display devices, such as virtual and augmented reality headsets. Furthermore, the perceptual models proposed for the fovea do not readily extend to the off-center, peripheral visual field, where human perception is drastically different. In this paper, we focus on modeling the temporal aspect of visual perception in the periphery. We present new psychophysical experiments that measure the sensitivity of human observers to different spatio-temporal stimuli across a wide field of view. We use the collected data to build a perceptual model for the visibility of temporal changes at different eccentricities in complex video content. Finally, we discuss, demonstrate, and evaluate several problems that can be addressed using our technique. First, we show how our model enables injecting new content into the periphery without distracting the viewer, and we discuss the link between the model and human attention. Second, we demonstrate how foveated rendering methods can be evaluated and optimized to limit the visibility of temporal aliasing.

    Perceptual Manipulations for Hiding Image Transformations in Virtual Reality

    Users of virtual reality make frequent gaze shifts and head movements to explore their surrounding environment. Saccades are rapid, ballistic, conjugate eye movements that reposition our gaze, and in doing so create large-field motion on our retina. Because of this high-speed retinal motion, the brain suppresses visual signals from the eye during saccades, a perceptual phenomenon known as saccadic suppression. These moments of visual blindness can help hide graphical display updates in virtual reality. In this dissertation, I investigated how the visibility of various image transformations differed during combinations of saccade and head rotation conditions. Additionally, I studied how hand and gaze interaction affected image change discrimination in an inattentional blindness task. I conducted four psychophysical experiments in desktop or head-mounted VR. In the eye tracking studies, users viewed 3D scenes and were triggered to make a vertical or horizontal saccade. During the saccade, an instantaneous translation or rotation was applied to the virtual camera used to render the scene. Participants were required to indicate the direction of these transformations after each trial. The results showed that the type and size of the image transformation affected change detectability. During horizontal or vertical saccades, rotations along the roll axis were the most detectable, while horizontal and vertical translations were least noticed. In a second, similar study, I added a constant camera motion to simulate a head rotation, and in a third study, I compared active head rotation with a simulated rotation or a static head. I found less sensitivity to transsaccadic horizontal than to vertical camera shifts during simulated or real head pan. Conversely, during simulated or real head tilt, observers were less sensitive to transsaccadic vertical than horizontal camera shifts.
    In addition, in my multi-interactive inattentional blindness experiment, I compared sensitivity to sudden image transformations when a participant used their hand and gaze to move and watch an object versus when they only watched it move. The results confirmed that when a primary task requires focus and attention with two interaction modalities (gaze and hand), a visual stimulus can be hidden better than when only one sense (vision) is involved. Understanding the effect of continuous head movement and attention on the visibility of a sudden transsaccadic change can help optimize the visual performance of gaze-contingent displays and improve user experience. Perceptually suppressed rotations or translations can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality, and redirected walking. This dissertation suggests that such transformations can be more effective and more substantial during active or passive head motion. Moreover, inattentional blindness during an attention-demanding task provides additional opportunities for imperceptible updates to a visual display.
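    Transsaccadic manipulations like these depend on detecting saccades in real time; a common approach is velocity-threshold identification (I-VT), with the pending camera change applied only while a saccade is in flight. A minimal sketch of that pattern (the threshold value and the yaw-offset update are illustrative assumptions, not the dissertation's exact parameters):

```python
import math

SACCADE_VELOCITY_DEG_S = 180.0  # assumed I-VT threshold; tune per tracker

def is_saccade(gaze_prev, gaze_curr, dt):
    """Velocity-threshold (I-VT) saccade detection on 2D gaze angles
    in degrees, sampled dt seconds apart."""
    vx = (gaze_curr[0] - gaze_prev[0]) / dt
    vy = (gaze_curr[1] - gaze_prev[1]) / dt
    return math.hypot(vx, vy) > SACCADE_VELOCITY_DEG_S

def update_camera_yaw(yaw_deg, saccade_active, pending_offset_deg):
    """Apply a pending transsaccadic yaw offset only while a saccade is
    in flight, when saccadic suppression hides the change. Returns the
    new yaw and whatever offset remains outstanding."""
    if saccade_active and pending_offset_deg != 0.0:
        return yaw_deg + pending_offset_deg, 0.0
    return yaw_deg, pending_offset_deg

# A 10-degree gaze jump in ~8 ms is classified as a saccade;
# the queued 2-degree camera rotation is injected during it.
active = is_saccade((0.0, 0.0), (10.0, 0.0), 0.008)
print(update_camera_yaw(90.0, active, 2.0))
```

This is the same gating mechanism that redirected-walking systems use: the scene rotation waits in a queue until suppression makes it imperceptible.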

    LoCoMoTe – a framework for classification of natural locomotion in VR by task, technique and modality

    Virtual reality (VR) research has provided overviews of locomotion techniques, how they work, their strengths, and the overall user experience. Considerable research has investigated new methodologies, particularly machine learning, to develop redirection algorithms. To support the development of redirection algorithms through machine learning, we must understand how best to replicate human navigation and behaviour in VR, which can be supported by the accumulation of results produced through live-user experiments. However, it can be difficult to identify, select, and compare relevant research without a pre-existing framework in an ever-growing research field. Therefore, this work aimed to facilitate the ongoing structuring and comparison of the VR-based natural walking literature by providing a standardised framework for researchers to utilise. We applied thematic analysis to study methodology descriptions from 140 VR-based papers that contained live-user experiments. From this analysis, we developed the LoCoMoTe framework with three themes: navigational decisions, technique implementation, and modalities. The LoCoMoTe framework provides a standardised approach to structuring and comparing experimental conditions. The framework should be continually updated to categorise and systematise knowledge, and to aid in identifying research gaps and discussion points.

    Temporal Asynchrony but Not Total Energy Nor Duration Improves the Judgment of Numerosity in Electrotactile Stimulation

    Stroke patients suffer from impairments of both motor and somatosensory functions. The functional recovery of the upper extremities is one of the primary goals of rehabilitation programs. Additional somatosensory deficits limit sensorimotor function and significantly affect its recovery after the neuromotor injury. Sensory substitution systems, providing tactile feedback, might facilitate manipulation capability and improve patients' dexterity during grasping movements. As a first step toward this aim, we evaluated the ability of healthy subjects to exploit electrotactile feedback on the shoulder to determine the number of perceived stimuli in numerosity judgment tasks. During the experiment, we compared four different stimulation patterns (two simultaneous, short and long; intermittent; and sequential) differing in total duration, total energy, or temporal synchrony. The experiment confirmed that the subjects' ability to enumerate electrotactile stimuli decreased as the number of active electrodes increased. Furthermore, we found that, in electrotactile stimulation, the temporal coding schemes, and not total energy or duration, modulated the accuracy of numerosity judgment. More precisely, the sequential condition resulted in significantly better numerosity discrimination than intermittent and simultaneous stimulation. These findings, together with the fact that the shoulder appeared to be a feasible stimulation site for communicating tactile information via electrotactile feedback, can serve as a guide for delivering tactile feedback to proximal areas in stroke survivors who lack sensory integrity in distal areas of their affected arm but retain motor skills.

    Detection and response to critical lead vehicle deceleration events with peripheral vision: Glance response times are independent of visual eccentricity

    Studies show high correlations between drivers’ off-road glance duration or pattern and the frequency of crashes. Understanding drivers’ use of peripheral vision to detect and react to threats is essential to modelling driver behavior and, eventually, preventing crashes caused by visual distraction. A between-group experiment with 83 participants was conducted in a high-fidelity driving simulator. Each driver in the experiment was exposed to an unexpected, critical, lead vehicle deceleration, when performing a self-paced, visual-manual, tracking task at different horizontal visual eccentricity angles (12°, 40° and 60°). The effect of visual eccentricity on threat detection, glance and brake response times was analyzed. Contrary to expectations, the driver glance response time was found to be independent of the eccentricity angle of the secondary task. However, the brake response time increased with increasing task eccentricity, when measured from the driver’s gaze redirection to the forward roadway. High secondary task eccentricity was also associated with a low threat detection rate and drivers were predisposed to perform frequent on-road check glances while executing the task. These observations indicate that drivers use peripheral vision to collect evidence for braking during off-road glances. The insights will be used in extensions of existing driver models for virtual testing of critical longitudinal situations, to improve the representativeness of the simulation results.

    Literature review - Energy saving potential of user-centered integrated lighting solutions

    Measures for reducing electric energy loads for lighting have predominantly focussed on increasing the efficiency of lighting systems. This efficiency has now reached levels unthinkable a few decades ago. However, a focus on mere efficiency is physically limited and does not necessarily ensure that the anticipated energy savings actually materialize. Both technical and non-technical barriers prevent the effective integration of lighting solutions and their controls, and thus a reduction in energy use. This literature review aims to assess the energy saving potential of integrated daylight and electric lighting design and controls, especially with respect to user preferences and behaviour. It does so by collecting available scientific knowledge and experience on daylighting, electric lighting, and related control systems, as well as on effective strategies for their integration. Based on this knowledge, the review suggests design processes, innovative design strategies, and design solutions which, if implemented appropriately, could improve user comfort, health, well-being and productivity, while saving energy and simplifying the operation and maintenance of lighting systems. The review also highlights regulatory, technical, and design challenges that hinder energy savings. Potential energy savings are reported from the retrieved studies. However, these savings derive from separate studies and depend on their specific contexts, which lowers the ecological validity of the findings. Studies on strategies based on behavioural interventions, such as information, feedback, and social norms, did not report energy saving performance. This is an interesting conclusion, since those papers indicate high potentials that deserve further exploration. Quantifying potential savings is fundamental to fostering large-scale adoption of user-driven strategies, since this would allow at least a rough estimation of returns for investors.
    However, such quantification requires that studies are designed with an interdisciplinary approach. The literature also shows that integrated design strategies are more successful when façade and lighting designers communicate more, which calls for more communication between stakeholders in future building processes.

    An Investigation of Sensory Percepts Elicited by Macro-Sieve Electrode Stimulation of the Rat Sciatic Nerve

    Intuitive control of conventional prostheses is hampered by their inability to replicate the rich tactile and proprioceptive feedback afforded by natural sensory pathways. Electrical stimulation of residual nerve tissue is a promising means of reintroducing sensory feedback to the central nervous system. The macro-sieve electrode (MSE) is a candidate interface for amputees’ truncated peripheral nerves; its unique geometry enables selective control of the complete nerve cross-section. Unlike previously studied interfaces, the MSE’s implantation entails transection and subsequent regeneration of the target nerve. Therefore, a key determinant of the MSE’s suitability for this task is whether it can elicit sensations at low current levels despite the altered axon morphology and caliber distribution inherent to nerve regeneration. This dissertation describes a combined rat sciatic nerve and behavioral model that was developed to answer this question. Four rats learned a go/no-go detection task with auditory stimuli and then underwent surgery to implant the MSE in the sciatic nerve. After healing, they returned to behavioral training and transferred their attention to monopolar electrical stimuli presented in one multi-channel and eight single-channel stimulus configurations. Current amplitudes varied based on the method of constant stimuli (MCS). A subset of single-channel configurations was tested longitudinally at two timepoints spaced three weeks apart. Psychometric curves generated for each dataset enabled the calculation of 50% detection thresholds and associated slopes. For a given rat, the multi-channel configuration’s per-channel current requirement for stimulus detection was lower than all corresponding single-channel thresholds. Single-channel thresholds for leads located near the nerve’s center were, on average, half those of leads located more peripherally.
    Of the five leads tested longitudinally, three had thresholds that decreased or remained stable over the three-week span. The remaining two leads’ thresholds showed a significant increase, possibly due to scarring or device failure. Overall, thresholds for stimulus detection were comparable with those of more traditional penetrative electrode implants, suggesting that the MSE is indeed viable as a sensory feedback interface. These results represent an important first step in establishing the MSE’s suitability as a sensory feedback interface for integration with prosthetic systems. More broadly, this work lays the groundwork for future experiments that will extend the described model to the study of other devices, stimulus parameters, and task paradigms.
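    The 50% detection thresholds mentioned above are typically obtained by fitting a sigmoid psychometric function to the per-amplitude detection rates gathered with the method of constant stimuli. A minimal sketch using a logistic function with a crude grid-search fit (function and parameter names are illustrative, and a real analysis would use a proper maximum-likelihood fit):

```python
import math

def logistic(x, threshold, slope):
    """Psychometric function: detection probability vs. stimulus amplitude.
    `threshold` is the amplitude at 50% detection; `slope` its steepness."""
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

def fit_threshold(amplitudes, detect_rates):
    """Least-squares grid search over (threshold, slope); returns the
    amplitude giving 50% detection. Dependency-free, for illustration."""
    best_t, best_err = None, float("inf")
    lo, hi = min(amplitudes), max(amplitudes)
    for i in range(101):
        t = lo + (hi - lo) * i / 100
        for s in (0.05, 0.1, 0.2, 0.4, 0.8, 1.6):
            err = sum((logistic(a, t, s) - r) ** 2
                      for a, r in zip(amplitudes, detect_rates))
            if err < best_err:
                best_t, best_err = t, err
    return best_t

# Simulated MCS data: detection probability rises around 50 µA
amps = [20, 30, 40, 50, 60, 70, 80]
rates = [0.02, 0.08, 0.25, 0.50, 0.80, 0.95, 0.99]
print(fit_threshold(amps, rates))
```

The fitted slope (here the grid value) corresponds to the "associated slopes" the abstract reports alongside each threshold.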