
    Integration and disruption effects of shape and texture in haptic search

    In a search task, where one has to search for the presence of a target among distractors, the target is sometimes easily found, whereas in other searches it is much harder to find. Performance in a search task is influenced by the identity of the target, the identity of the distractors, and the differences between the two. In this study, these factors were manipulated by varying the target and distractors in shape (cube or sphere) and roughness (rough or smooth) in a haptic search task. Participants had to grasp a bundle of items and determine as fast as possible whether a predefined target was present or not. Roughness and edges proved to be relatively salient features, and the search for the presence of these features was faster than the search for their absence. If the task was easy, the addition of these features could also disrupt performance, even if they were irrelevant to the search task. Another important finding was that the search for a target that differed from the distractors in two properties was faster than a search with only a single property difference, although this was only found if the two target properties were non-salient. This means that shape and texture can be effectively integrated. Finally, it was found that edges benefit a search task more than they disrupt it, whereas for roughness the reverse was true.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that objects may be stored in and retrieved from a pre-attentional store during this task, which gives further weight to that argument.

    Deep neural network model of haptic saliency

    Haptic exploration usually involves stereotypical, systematic movements that are adapted to the task. Here we tested whether exploration movements are also driven by physical stimulus features. We designed haptic stimuli whose surface relief varied locally in spatial frequency, height, orientation, and anisotropy. In Experiment 1, participants explored two stimuli in succession in order to decide whether they were the same or different. We trained a variational autoencoder to predict the spatial distribution of touch duration from the surface relief of the haptic stimuli. The model successfully predicted where participants touched the stimuli. It could also predict participants' touch distribution from the stimulus surface relief when tested with two new groups of participants, who performed a different task (Exp. 2) or explored different stimuli (Exp. 3). We further generated a large number of virtual surface reliefs (each uniformly expressing a certain combination of features) and correlated the model's responses with stimulus properties in order to infer which stimulus features were preferentially touched by participants. Our results indicate that haptic exploratory behavior is to some extent driven by the physical features of the stimuli, with, for example, edge-like structures, vertical and horizontal patterns, and rough regions being explored in more detail.
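
    As a minimal sketch of the kind of model this abstract describes, the code below implements a small convolutional variational autoencoder that maps a surface-relief height map to a predicted touch-duration heatmap. The 64x64 input resolution, layer sizes, latent dimensionality, and loss weighting are illustrative assumptions for demonstration, not the authors' architecture or training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReliefToTouchVAE(nn.Module):
    """Toy convolutional VAE: surface-relief map in, touch-duration heatmap out."""
    def __init__(self, latent_dim=32):
        super().__init__()
        # Encoder: compress the 1x64x64 relief map into a latent Gaussian.
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        # Decoder: expand the latent code back to a 1x64x64 heatmap.
        self.fc_dec = nn.Linear(latent_dim, 32 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, relief):
        h = self.enc(relief)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h = self.fc_dec(z).view(-1, 32, 16, 16)
        return self.dec(h), mu, logvar

def vae_loss(pred, target, mu, logvar, beta=1.0):
    # Reconstruction error against the measured touch-duration map plus KL regularizer.
    recon = F.mse_loss(pred, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Toy usage with random tensors standing in for relief and touch-duration maps.
model = ReliefToTouchVAE()
relief = torch.randn(8, 1, 64, 64)   # batch of surface-relief height maps
touch = torch.rand(8, 1, 64, 64)     # batch of normalized touch-duration maps
pred, mu, logvar = model(relief)
vae_loss(pred, touch, mu, logvar).backward()
```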

    An exploration of visuomotor and perceptual mechanisms in humans and rats.

    Get PDF
    Neuropsychological, neurophysiological and psychophysical evidence supports the notion of two separate and largely independent cortical visual systems: a dorsal system mediating visually guided action and a ventral system mediating object perception and recognition (Goodale & Milner, 1992). This thesis is divided into three parts that explore questions related to the two-visual-systems model, two in humans and one in rats. The first part explores whether dorsal representations are based on the veridical properties of the stimuli or whether they include information produced by filling-in mechanisms of cortical visual areas. All human experiments were carried out with the ELITE and SMART motion-tracking systems. Kinematic analysis showed that grasping Kanizsa illusory squares and partly occluded objects was as accurate as grasping luminance-defined targets, and it is concluded that information about interpolated regions is available to the dorsal system for the calibration of movement parameters. A Vernier acuity task confirmed that the perceptual localization of Kanizsa and luminance-defined contours is not equally accurate in the ventral visual system.

    The second part explores the effect of target dimensionality on grasping, focusing on the possibility that actions aimed at targets containing two-dimensional information could be modulated by ventral visual mechanisms. The Diagonal Illusion (DI) was chosen to investigate this possibility because it is entirely the product of three-dimensional objects. The DI exerted an effect on both perception and action, although the latter effect was smaller, suggesting that the previously reported effects of illusions on action are not attributable to the presence of 2D information and, by implication, that 2D information in the target array does not elicit modulation by the ventral visual system. These conclusions were confirmed by a study that found similar kinematic profiles for grasps aimed at 3D, 2D and 2D-enhanced targets. Control studies ruled out potential confounding effects resulting from curvatures of the stimuli that could have acted as obstacles and from differences in haptic feedback. It is concluded that object-directed action is mediated by dorsal visual mechanisms, irrespective of target dimensionality.

    The third part seeks evidence of ventral visual processing in rats by measuring the perception of visual illusions and object recognition in this species. The aim is to establish whether rats could provide a suitable model to further investigate the dorsal and ventral visual systems. An automated apparatus with a touch screen and computer-generated stimuli was developed to train the animals. The results from the illusion studies are not conclusive, as only one out of three groups of rats was able to solve a discrimination with Kanizsa illusory figures. The preliminary results from the object recognition studies are, however, clearer. Rats were able to use aspect ratio to solve a discrimination with stimuli that varied in size and location, suggesting that size- and location-independent object recognition occurs in this species. Probe trials confirmed these results. It is concluded that rats may have visual processes comparable to those occurring in the ventral visual system of humans and primates.

    Doctor of Philosophy

    Virtual environments provide a consistent and relatively inexpensive method of training individuals. They often include haptic feedback in the form of forces applied to a manipulandum or thimble to provide a more immersive and educational experience. However, the limited haptic feedback provided in these systems tends to be restrictive and frustrating to use. Providing tactile feedback in addition to this kinesthetic feedback can enhance the user's ability to manipulate and interact with virtual objects while providing a greater level of immersion. This dissertation advances the state of the art by providing a better understanding of tactile feedback and advancing combined tactile-kinesthetic systems. The tactile feedback described within this dissertation is provided by a finger-mounted device called the contact location display (CLD). Rather than displaying the entire contact surface, the device displays (feeds back) information only about the center of contact between the user's finger and a virtual surface. In prior work, the CLD used specialized two-dimensional environments to provide smooth tactile feedback. Using polygonal environments would greatly enhance the device's usefulness; however, the surface discontinuities created by the facets on these models are rendered through the CLD, regardless of traditional force shading algorithms. To address this issue, a haptic shading algorithm was developed to provide smooth tactile and kinesthetic interaction with general polygonal models. Two experiments were used to evaluate the shading algorithm. To better understand the design requirements of tactile devices, three separate experiments were run to evaluate the perception thresholds for cue localization, backlash, and system delay. These experiments establish quantitative design criteria for tactile devices, which can serve as the maximum (i.e., most demanding) device specifications for tactile-kinesthetic haptic systems in which the user experiences tactile feedback as a function of his or her limb motions. Lastly, a revision of the CLD was constructed and evaluated. By taking the newly established design criteria into account, the CLD device became smaller and lighter while providing a full two-degree-of-freedom workspace that covers the bottom hemisphere of the finger. Two simple manipulation experiments were used to evaluate the new CLD device.
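
    The abstract does not specify the haptic shading algorithm itself. As a rough illustration of the general idea it builds on, the sketch below implements classic force shading in the spirit of Phong shading: vertex normals are interpolated barycentrically across a triangle so that the rendered force direction varies smoothly rather than jumping at facet edges. The function names, spring-based force law, and stiffness value are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def shaded_force(contact, verts, normals, proxy_depth, stiffness=500.0):
    """Force along the smoothly interpolated normal (simple spring model)."""
    u, v, w = barycentric_coords(contact, *verts)
    n = u * normals[0] + v * normals[1] + w * normals[2]  # blend vertex normals
    n /= np.linalg.norm(n)                                # re-normalize after blending
    return stiffness * proxy_depth * n                    # push the finger out along n

# Toy usage: one triangle with slightly tilted vertex normals.
verts = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
normals = [np.array([0.0, 0.1, 1.0]), np.array([0.1, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])]
normals = [n / np.linalg.norm(n) for n in normals]
f = shaded_force(np.array([0.3, 0.3, 0.0]), verts, normals, proxy_depth=0.002)
print(f)  # force direction changes smoothly as the contact point moves across the facet
```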