
    Deep neural network model of haptic saliency

    Haptic exploration usually involves stereotypical systematic movements that are adapted to the task. Here we tested whether exploration movements are also driven by physical stimulus features. We designed haptic stimuli whose surface relief varied locally in spatial frequency, height, orientation, and anisotropy. In Experiment 1, participants successively explored two stimuli in order to decide whether they were the same or different. We trained a variational autoencoder to predict the spatial distribution of touch duration from the surface relief of the haptic stimuli. The model successfully predicted where participants touched the stimuli. It could also predict participants' touch distribution from the stimulus's surface relief when tested with two new groups of participants, who performed a different task (Exp. 2) or explored different stimuli (Exp. 3). We further generated a large number of virtual surface reliefs (each uniformly expressing a certain combination of features) and correlated the model's responses with stimulus properties in order to infer which stimulus features participants preferentially touched. Our results indicate that haptic exploratory behavior is to some extent driven by the physical features of the stimuli, with, for example, edge-like structures, vertical and horizontal patterns, and rough regions being explored in more detail.
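    The feature-correlation step described above can be illustrated with a minimal sketch: flatten the model's predicted touch-duration map and compute its Pearson correlation with a stimulus feature map such as local roughness. All arrays and values here are hypothetical stand-ins, not the study's data or model.

    ```python
    import math

    def pearson(x, y):
        """Pearson correlation between two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical flattened maps: per-location roughness feature
    # versus the model's predicted touch duration at that location.
    roughness = [0.1, 0.8, 0.3, 0.9, 0.2, 0.7]
    touch_dur = [0.2, 0.9, 0.2, 0.8, 0.1, 0.6]

    r = pearson(roughness, touch_dur)
    # A strongly positive r would indicate that rougher regions
    # attract longer predicted touch durations.
    ```

    In the study, such correlations were computed over a large set of generated virtual surface reliefs to characterize the model's preferences.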

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Integrating Vision and Physical Interaction for Discovery, Segmentation and Grasping of Unknown Objects

    In this work, image-processing methods and the ability of humanoid robots to physically interact with their environment are employed in close interplay in order to identify unknown objects, segment them from the background and from other objects, and ultimately grasp them. In the course of this interactive exploration, properties of the object, such as its appearance and shape, are also determined.

    Development of a Multiple Contact Haptic Display with Texture-Enhanced Graphics

    This dissertation presents work towards the development of a multiple-finger, worn, dynamic display device that uses texture-encoded information to haptically render graphical images for individuals who are blind or visually impaired. The device interacts directly with the computer screen, using the colors and patterns displayed by the image to encode complex patterns of vibrotactile output, generating the texture feedback that renders the image. In turn, the texture feedback was methodically designed to enable parallel processing of certain coarse information, speeding up exploration of the diagram and improving user performance. The design choices were validated when individuals who are blind or visually impaired, using the multi-fingered display system, performed three times better with textured image representations than with outline representations. Furthermore, in an open-ended object identification task, the display device achieved, on average, twice the accuracy previously observed for raised-line diagrams, the current standard for tactile diagrams.

    Tactile perception of randomly rough surfaces

    Most everyday surfaces are randomly rough and self-similar on sufficiently small scales. We investigated the tactile perception of randomly rough surfaces using 3D-printed samples in which the topographic structure and the statistical properties of scale-dependent roughness were varied independently. We found that the tactile perception of similarity between surfaces was dominated by the statistical micro-scale roughness rather than by their topographic resemblance. Participants were able to notice differences of 0.2 in the Hurst roughness exponent, or a difference in surface curvature of 0.8 mm⁻¹ for surfaces with curvatures between 1 and 3 mm⁻¹. In contrast, visual perception of similarity between color-coded images of the surface height was dominated by their topographic resemblance. We conclude that vibration cues from roughness at the length scale of the finger ridge distance distract participants from including the topography in the judgement of similarity. The interaction between surface asperities and fingertip skin led to higher friction for higher micro-scale roughness. Individual friction data allowed us to construct a psychometric curve relating similarity decisions to differences in friction. Participants noticed differences in the friction coefficient as small as 0.035 for samples with friction coefficients between 0.34 and 0.45.
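    The notion of self-similar roughness controlled by a Hurst exponent can be sketched in one dimension with random midpoint displacement: shrinking the displacement amplitude by 2^(−H) at each subdivision level yields a fractional-Brownian-like profile. This is a simplified, hypothetical stand-in for the paper's 3D-printed surfaces; the parameter values are illustrative only.

    ```python
    import random

    def fbm_profile(hurst, levels, seed=0):
        """1D self-similar rough profile via random midpoint displacement.
        The Gaussian displacement amplitude shrinks by 2**(-hurst) at each
        subdivision level, so smaller Hurst exponents give rougher profiles."""
        rng = random.Random(seed)
        pts = [0.0, 0.0]          # profile endpoints
        amp = 1.0
        for _ in range(levels):
            amp *= 2 ** (-hurst)  # amplitude of the next finer scale
            new = []
            for a, b in zip(pts, pts[1:]):
                new.append(a)
                new.append((a + b) / 2 + rng.gauss(0.0, amp))
            new.append(pts[-1])
            pts = new
        return pts

    def mean_abs_increment(p):
        """Average small-scale height change: a simple roughness proxy."""
        return sum(abs(b - a) for a, b in zip(p, p[1:])) / (len(p) - 1)

    smooth = fbm_profile(hurst=0.9, levels=8)  # high H: smoother fine scales
    rough = fbm_profile(hurst=0.3, levels=8)   # low H: rougher fine scales
    ```

    With the same random seed, the lower-Hurst profile exhibits larger small-scale increments, mirroring how the study varied statistical micro-scale roughness independently of overall topography.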

    Fusing Multimedia Data Into Dynamic Virtual Environments

    In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments. First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatiotemporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos. We further present Geollery, a mixed-reality platform that renders an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. Our user study identified several use cases for these systems, including immersive social storytelling, cultural exploration, and crowd-sourced tourism. We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos, using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects to explore the applications of gesture recognition, haptic feedback, and visual cryptography for virtual and augmented reality. Finally, we present our work on Montage4D, a real-time system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency.
Our experiments show significant enhancement in rendering quality, especially for salient regions such as faces. We believe that Social Street View, Geollery, Video Fields, and Montage4D will greatly facilitate applications such as virtual tourism, immersive telepresence, and remote education.
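    The Poisson-disc placement mentioned for Social Street View can be sketched with naive dart throwing: accept a candidate label position only if it keeps a minimum distance from all previously accepted positions. This is a simple stand-in for the production placement strategy; the canvas dimensions, radius, and function names are illustrative assumptions.

    ```python
    import math
    import random

    def poisson_disc(width, height, radius, tries=2000, seed=1):
        """Dart-throwing Poisson-disc placement: sample random candidates
        and keep those at least `radius` away from every accepted point,
        so placed items (e.g. media labels) never crowd each other."""
        rng = random.Random(seed)
        pts = []
        for _ in range(tries):
            x, y = rng.uniform(0, width), rng.uniform(0, height)
            if all(math.hypot(x - px, y - py) >= radius for px, py in pts):
                pts.append((x, y))
        return pts

    # Hypothetical layout: spread label anchors over a 100x100 region
    # with a 15-unit minimum separation.
    labels = poisson_disc(100, 100, radius=15)
    ```

    Production samplers (e.g. Bridson-style grid-accelerated sampling) achieve the same spacing guarantee far more efficiently, but the acceptance rule above captures the core idea.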