
    On sensorimotor function and the relationship between proprioception and motor learning

    Research continues to explore the mechanisms that mediate successful motor control. Behaviourally relevant modulation of muscle commands depends on sensory signals. Proprioception -- the sense of body position -- is one signal likely to be crucial for motor learning. The present thesis explores the relationship between human proprioception and motor learning. First, we investigated changes to sensory function during the adaptation of arm movements to novel forces. Subjects adapted movements in the presence of directional loads over the course of learning. Psychophysical estimates of perceived hand position showed that motor learning resulted in sensed hand position becoming biased in the direction of the experienced load. This biasing of perception occurred for four different perturbation directions and remained even after washout movements. Therefore, motor learning can result in systematic changes to proprioceptive function. In a second experiment we investigated proprioceptive changes after subjects learned highly accurate movements to targets. Subjects demonstrated improved acuity of the hand's position following this type of motor learning. Interestingly, improved acuity did not generalize to the entire workspace but was instead restricted to local positions within the region of the workspace where motor learning occurred. These results provide evidence that altered sensory function from motor learning may also include improvements in sensory acuity. Subsequently, the duration of acuity improvements was assessed. Improved acuity of hand position was observed immediately after motor learning and 24 h later, but was not reliably different from baseline at 1 h or 4 h. Persistent sensory change may thus be similar to retention of motor learning and may involve a sleep-dependent component. In the fourth study we investigated the ability of proprioceptive training to improve motor learning. Subjects had to match the position and speed of desired trajectories. At regular intervals during motor learning, subjects were presented with the desired trajectory either only visually, or with both vision and passive proprioceptive movement through the desired trajectory using a robot. Subjects who received proprioceptive guidance indeed performed better in matching both the velocity and position of desired movements, suggesting a role for passive proprioceptive training in improving motor learning.
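    The bias and acuity measures described above are typically derived from a psychometric function fitted to binary position judgments. The following is only a minimal sketch of that analysis, not the thesis's actual code: it assumes a two-alternative task in which subjects judge whether a visual probe lies left or right of their unseen hand, so that the fitted point of subjective equality (PSE) gives the bias and the slope parameter gives the acuity; the data values are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def cumulative_gaussian(x, pse, sigma):
        """Psychometric function: probability of judging the probe 'right of hand'."""
        return norm.cdf(x, loc=pse, scale=sigma)

    # Hypothetical data: probe offsets (mm) relative to the true hand position,
    # and the proportion of "right" responses at each offset.
    offsets = np.array([-30, -20, -10, 0, 10, 20, 30], dtype=float)
    p_right = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])

    (pse, sigma), _ = curve_fit(cumulative_gaussian, offsets, p_right, p0=[0.0, 10.0])

    print(f"Bias (PSE): {pse:.1f} mm")             # shift of perceived hand position
    print(f"Acuity (JND ~ sigma): {sigma:.1f} mm") # shallower slope = worse acuity
    ```

    On this analysis, a load-direction shift of the PSE after adaptation would correspond to the perceptual bias reported above, and a reduced sigma to the acuity improvement.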

    The neural basis of visual material properties in the human brain

    Three independent studies using human functional magnetic resonance imaging (fMRI) were designed to investigate the neural basis of visual glossiness processing in the human brain. The first study localized brain areas preferentially responding to glossy objects defined by specular reflectance. We found gloss-related activations in the posterior fusiform (pFs) and in area V3B/KO. The second study investigated how visually induced haptic sensation is achieved in the brain. We found that activity in the secondary somatosensory area (S2) distinguished between glossy and rough surfaces, suggesting that visual information about object surfaces may be transformed into tactile information in S2. In the third study we investigated how the brain processes surface gloss information conveyed by the disparity of specular reflections on stereoscopically viewed mirror objects, and compared it with the processing of specular reflectance. We found that both dorsal and ventral areas were involved in this processing. The results suggest that in these areas the processing of stereoscopic gloss information produces a pattern of activation additional to the representation of specular reflectance. Overall, the three studies contribute to our understanding of the neural basis of visual glossiness and material processing in the human brain.
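    The finding that S2 activity "distinguished" glossy from rough surfaces suggests a cross-validated pattern-classification (MVPA) analysis; the abstract does not specify the study's pipeline, so the following is only a hedged sketch of that general technique, with entirely synthetic voxel data.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic data: 40 trials x 200 voxels from an S2 region of interest,
    # labelled 0 = glossy surface, 1 = rough surface.
    n_trials, n_voxels = 40, 200
    labels = np.repeat([0, 1], n_trials // 2)
    patterns = rng.normal(size=(n_trials, n_voxels)) + 0.5 * labels[:, None]

    # Cross-validated decoding accuracy; reliably above-chance accuracy would
    # indicate that the voxel patterns carry surface-gloss information.
    scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
    print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
    ```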

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
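    The spoke-shift manipulation is simple polar geometry: each rectangle keeps its angular position relative to central fixation while its eccentricity changes by ±1 degree. A minimal sketch of that computation (helper name and coordinates are illustrative, in degrees of visual angle):

    ```python
    import numpy as np

    def shift_along_spoke(x, y, delta_ecc):
        """Move a point radially toward/away from fixation (the origin) by
        delta_ecc, keeping its polar angle (the 'spoke') unchanged."""
        ecc = np.hypot(x, y)          # current eccentricity
        theta = np.arctan2(y, x)      # spoke angle
        new_ecc = ecc + delta_ecc
        return new_ecc * np.cos(theta), new_ecc * np.sin(theta)

    # Hypothetical rectangle centre at 5 deg eccentricity, shifted outward by 1 deg.
    print(shift_along_spoke(3.0, 4.0, +1.0))  # -> (3.6, 4.8), i.e. 6 deg eccentricity
    ```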

    Large-Scale Dynamics in the Mouse Neocortex underlying Sensory Discrimination and Short-Term Memory

    Sensorimotor integration (SMI) is a fundamental process that allows for an advantageous interaction with the environment, in which key external stimuli are transformed into apt action. In mammals, SMI requires quick and synchronized activity across sensory, association and motor areas of the neocortex. In some situations, the key stimulus and its corresponding action are separated by a delay. In such a scenario, behaviour-relevant information must be held in short-term memory (STM) until a cue signals the adequate context to transform it into action. This thesis aims to uncover key determinants of the brain activity that underlies SMI with an STM component. The first chapter offers a general introduction to the work presented in this thesis. To understand the principles of SMI, I will follow an evolutionary approach. I will explain how, during SMI, information flows across sensory and association areas. I will also introduce STM and the neocortical areas involved in delay activity. Next, I will emphasize the different sensing and behavioural strategies that animals use to extract action-guiding information from the world. Finally, I will propose different behavioural paradigms to study SMI and STM. Once this foundation is laid, I will introduce the methodological approach of this thesis, in particular genetically encoded calcium indicators and wide-field imaging. I will end this chapter by stating the specific aims of the thesis. The second chapter is a published manuscript to which I contributed during the first two years of my doctoral work. We studied large-scale dynamics in mice trained to solve a tactile discrimination task with an STM component. We found that mice follow an active or a passive strategy to solve this task, defined by the presence or absence of whole-body movements during tactile stimulation. The movement strategy influenced ongoing brain activity, with higher and more widespread activity in active versus passive trials. Surprisingly, this influence continued into the STM period even in the absence of movements. Active trials elicited activity during the delay period in the frontomedial secondary motor cortex. In contrast, passive trials were linked with activity in posterior lateral association areas (PLA). We found these areas to be necessary for task completion in a strategy-dependent manner.
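    Wide-field imaging of genetically encoded calcium indicators, as mentioned above, is conventionally analysed as ΔF/F: the fractional fluorescence change relative to a per-pixel baseline. The chapter's actual pipeline is not given in the abstract, so this is only a sketch of the standard computation, using a hypothetical movie array and an assumed low-percentile baseline:

    ```python
    import numpy as np

    def delta_f_over_f(movie, baseline_percentile=10):
        """Compute dF/F per pixel for a (time, height, width) fluorescence movie.
        Baseline F0 is a low percentile over time, a common choice that is
        robust to transient activity."""
        f0 = np.percentile(movie, baseline_percentile, axis=0, keepdims=True)
        return (movie - f0) / f0

    # Hypothetical 100-frame, 64x64-pixel wide-field movie.
    rng = np.random.default_rng(1)
    movie = 100 + rng.normal(scale=2.0, size=(100, 64, 64))
    dff = delta_f_over_f(movie)
    print(dff.shape, dff.mean())
    ```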

    A Kinematic Analysis of Visual and Haptic Contributions to Precision Grasping in a Patient With Visual Form Agnosia and in Normally-Sighted Populations

    Skilled arm and hand movements designed to obtain and manipulate objects (prehension) are among the defining features of primates. According to the two visual systems hypothesis (TVSH), vision can be parsed into two systems: (1) the ‘ventral stream’ of the occipital and inferotemporal cortex, which serves visual perception and other cognitive functions, and (2) the ‘dorsal stream’ of the occipital and posterior parietal cortex, which serves skilled, goal-directed actions such as prehension. A cornerstone of the TVSH is the ‘perception-action’ dissociation observed in patient DF, who suffers from visual form agnosia following bilateral damage to her ventral stream. DF cannot discriminate amongst objects on the basis of their visual form. Remarkably, however, when she reaches out to pick up the very objects she fails to discriminate amongst, her hand preshapes in-flight to suit their sizes, unless she is denied the opportunity to touch the object at the end of her reach. This latter finding has led some to question the TVSH, advancing an alternative account centered on visuo-haptic calibration. The current work examines this alternative view. First, the validity of the measurements that have underpinned this line of investigation is tested, rejecting some measures while affirming others. Next, the visuo-haptic calibration account is tested and ultimately rejected on the basis of four key pieces of evidence: (1) haptics and vision need not correlate to show DF’s ‘perception-action’ dissociation; (2) haptic input does not potentiate DF’s deficit in visual form perception; (3) DF’s grasp kinematics are normal as long as she is provided a target proxy; and (4) denying tactile feedback induces a shift in grasp kinematics away from natural grasps and towards pantomimed (simulated) ones in normally-sighted populations.
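    A central kinematic measure in grasping studies of this kind is grip aperture: the distance between thumb and index-finger markers over the course of the reach, whose in-flight peak scales with object size. A minimal sketch of extracting peak grip aperture, with a hypothetical motion-capture trace (the helper name and aperture profile are illustrative):

    ```python
    import numpy as np

    def grip_aperture(thumb, index):
        """Euclidean distance between thumb and index markers per frame.
        thumb, index: (frames, 3) arrays of 3-D marker positions in mm."""
        return np.linalg.norm(thumb - index, axis=1)

    # Hypothetical 200-frame reach: the aperture opens, peaks, then closes
    # on the object.
    t = np.linspace(0, 1, 200)
    thumb = np.zeros((200, 3))
    index = np.zeros((200, 3))
    index[:, 1] = 40 + 60 * np.sin(np.pi * t)  # aperture profile in mm

    aperture = grip_aperture(thumb, index)
    print(f"Peak grip aperture: {aperture.max():.1f} mm "
          f"at frame {aperture.argmax()}")
    ```

    Comparing such peaks between natural and pantomimed grasps is the kind of kinematic shift the fourth piece of evidence describes.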

    Predicting room acoustical behavior with the ODEON computer model


    Analysis and experimentation of visual performance degradation effects due to auditory stimuli in immersive virtual environments.

    In recent years, both the consumption of and interest in Virtual Reality (VR) have grown rapidly. This innovative technology provides a number of groundbreaking capabilities and has lately become more accessible thanks to continued hardware development. In VR, the user becomes an active element who can interact with the virtual environment in many ways, in contrast to the passive role users hold in traditional media. This interaction occurs naturally once the user is immersed in the virtual world and their senses detect what is happening around them. As in reality, human perception can be deceived or altered under conditions in which our senses gather contradictory or excessive information. In fact, Malpica et al. (2020) reported an audiovisual suppression effect in which auditory stimuli were shown to cause a loss of visual information: visual performance degrades when spatially incongruent but temporally consistent sounds are heard at the same time. Our brain perceives both the visual and the auditory stimuli, yet some visual information is lost due to neural interactions. The main goal of this project is to analyze and gain better insight into this audiovisual suppression effect, more concretely its auditory component. Using the aforementioned publication as a baseline, we create a virtual environment in which both auditory and visual stimuli are presented to the user. Regarding auditory stimuli, we investigate how sounds located at the limits of the hearing range can influence the appearance of this effect. Frequency values associated with each user's hearing limits are therefore measured and later used to generate the sounds presented throughout the experiment. Participants encounter not only unimodal stimuli (auditory or visual only) but bimodal stimuli (auditory and visual at the same moment) as well. Bimodal stimuli are dynamically generated at fixed locations while maintaining temporal consistency, creating the conditions under which the audiovisual suppression effect occurs. By recording stimulus onsets together with the user's responses indicating when a stimulus was perceived, it is possible to check whether the user experienced the suppression effect. The experiments and the frequency test were performed by a group of 20 participants. The results show that detection and recognition rates of visual stimuli are indeed decreased by almost inaudible sounds; the audiovisual suppression effect thus still occurs with auditory stimuli located at the limits of the hearing range. Surveys completed by the participants showed that most of them experienced a strong feeling of immersion and presence in the virtual world. Moreover, no significant side effects or drawbacks were observed that disturbed participants or degraded the virtual experience. Lastly, analysis of the eye-tracking data recorded during the experiment is suggested as future work, in order to study how users behave when barely audible sounds are perceived in VR. A pair of suitable devices is also proposed with the aim of investigating the impact that other factors, such as personal emotions and state of mind, may have on the suppression effect.
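    The per-user hearing-limit test described above can be implemented as a simple adaptive staircase that raises the tone frequency after detections and lowers it after misses; the project's actual procedure is not specified in the abstract, so the following is a hedged sketch (function names, step sizes, and the simulated listener are all assumptions):

    ```python
    import random

    def staircase_frequency_limit(respond, start_hz=12000, step_hz=500,
                                  n_reversals=8):
        """1-up/1-down staircase converging on the highest audible frequency.
        `respond(freq_hz)` should return True if the listener detects the tone."""
        freq, direction, reversals = start_hz, +1, []
        while len(reversals) < n_reversals:
            heard = respond(freq)
            new_direction = +1 if heard else -1   # raise if heard, lower if missed
            if new_direction != direction:
                reversals.append(freq)            # record the turnaround point
                direction = new_direction
            freq = max(500, freq + new_direction * step_hz)
        return sum(reversals) / len(reversals)    # estimate of the hearing limit

    # Simulated listener with a true upper limit near 15 kHz (noisy detection).
    simulated = lambda f: f < 15000 + random.gauss(0, 300)
    print(f"Estimated upper limit: {staircase_frequency_limit(simulated):.0f} Hz")
    ```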