
    Preserved Haptic Shape Processing after Bilateral LOC Lesions.

    The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. SIGNIFICANCE STATEMENT: The lateral occipital complex (LOC) is a brain region regarded as critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain responses to touched objects in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch, and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch.

    Distinct Visual Processing of Real Objects and Pictures of Those Objects in 7- to 9-month-old Infants.

    The present study examined 7- and 9-month-old infants' visual habituation to real objects and pictures of the same objects, and their preferences between real and pictorial versions of the same objects following habituation. Different hypotheses predict either that infants habituate faster to pictures than to real objects (based on proposed theoretical links between behavioral habituation in infants and neuroimaging adaptation in adults) or faster to real objects than to pictures (based on past infant electrophysiology data). Sixty-one 7-month-old infants and fifty-nine 9-month-old infants were habituated to either a real object or a picture of the same object, and were afterward given a preference test pairing the habituation object with either the novel real object or its picture counterpart. Infants of both age groups showed basic information-processing advantages for real objects. Specifically, during the initial presentations, 9-month-old infants looked longer at stimuli in both formats than the 7-month-olds, but more importantly, both age groups looked longer at real objects than pictures; with repeated presentations, they habituated faster to real objects such that by the end of habituation they looked equally at both types of stimuli. Surprisingly, even after habituation, infants preferred to look at the real objects, regardless of whether they had habituated to photos or real objects. Our findings suggest that from as early as 7 months of age, infants show strong preferences for real objects, perhaps because real objects are visually richer and/or enable the potential for genuine interactions.
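    Infant-controlled habituation procedures of the kind used above typically end the habituation phase once looking time falls below a fixed fraction of initial looking. A rough illustration follows; the 3-trial window and 50% criterion are common conventions in this literature, not parameters reported by this study:

```python
# Hedged sketch of an infant-controlled habituation criterion: trials end
# when looking time over a sliding 3-trial window falls below 50% of the
# first 3 trials. Window size and criterion are illustrative conventions.
def trials_to_habituate(looking_times, window=3, criterion=0.5):
    baseline = sum(looking_times[:window])
    for i in range(window, len(looking_times) - window + 1):
        if sum(looking_times[i:i + window]) < criterion * baseline:
            return i + window  # number of trials to reach the criterion
    return None  # never habituated within the session

# Example: looking times (s) that decline across trials.
looks = [20, 18, 17, 12, 9, 6, 5, 4]
print(trials_to_habituate(looks))
```

Under this convention, "habituating faster to real objects" means reaching the criterion in fewer trials for the real-object condition.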

    Disentangling Representations of Object and Grasp Properties in the Human Brain

    The properties of objects, such as shape, influence the way we grasp them. To quantify the role of different brain regions during grasping, it is necessary to disentangle the processing of visual dimensions related to object properties from the motor aspects related to the specific hand configuration. We orthogonally varied object properties (shape, size, and elongation) and task (passive viewing, precision grip with two or five digits, or coarse grip with five digits) and used representational similarity analysis of functional magnetic resonance imaging data to infer the representation of object properties and hand configuration in the human brain. We found that object elongation is the most strongly represented object feature during grasping and is coded preferentially in the primary visual cortex as well as the anterior and posterior superior-parieto-occipital cortex. By contrast, primary somatosensory, motor, and ventral premotor cortices coded preferentially the number of digits while ventral-stream and dorsal-stream regions coded a mix of visual and motor dimensions. The representation of object features varied with task modality, as object elongation was less relevant during passive viewing than grasping. To summarize, this study shows that elongation is a particularly relevant property of the object to grasp, which along with the number of digits used, is represented within both ventral-stream and parietal regions, suggesting that communication between the two streams about these specific visual and motor dimensions might be relevant to the execution of efficient grasping actions. SIGNIFICANCE STATEMENT: To grasp something, the visual properties of an object guide preshaping of the hand into the appropriate configuration. Different grips can be used, and different objects require different hand configurations. 
However, in natural actions, grip and object type are often confounded, and the few experiments that have attempted to separate them have produced conflicting results. As such, it is unclear how visual and motor properties are represented across brain regions during grasping. Here we orthogonally manipulated object properties and grip, and revealed the visual dimension (object elongation) and the motor dimension (number of digits) that are more strongly coded in ventral and dorsal streams. These results suggest that both streams play a role in the visuomotor coding essential for grasping.
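    The representational similarity analysis (RSA) used here can be sketched in miniature: build a representational dissimilarity matrix (RDM) from condition-wise activity patterns, then rank-correlate it with a model RDM for a candidate dimension. The data below are synthetic, and the condition labels are illustrative assumptions rather than the study's actual design:

```python
# Minimal RSA sketch on synthetic data; shapes, labels, and dimensions
# are illustrative assumptions, not the study's design.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical: 6 conditions (e.g., 3 shapes x 2 grips), 50 voxels each.
n_conditions, n_voxels = 6, 50
patterns = rng.standard_normal((n_conditions, n_voxels))

# Neural RDM: pairwise correlation distance between condition patterns
# (condensed vector over the 15 unique condition pairs).
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM coding one binary dimension (e.g., elongated vs. stubby).
elongation = np.array([0, 0, 0, 1, 1, 1])
model_rdm = pdist(elongation[:, None], metric="cityblock")

# Compare model to data by rank-correlating the RDM entries.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}")
```

In the study itself, model RDMs for each dimension (shape, size, elongation, number of digits) would be compared against neural RDMs computed per region of interest.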

    The toolish hand illusion: embodiment of a tool based on similarity with the hand

    A tool can function as a body part yet not feel like one: putting down a fork after dinner does not feel like losing a hand. However, studies show that fake body parts can be embodied and experienced as parts of oneself. Typically, embodiment illusions have been reported only when the fake body part visually resembles the real one. Here we reveal that participants can experience an illusion that a mechanical grabber, which looks scarcely like a hand, is part of their body. We found changes in three signatures of embodiment: the real hand’s perceived location, the feeling that the grabber belonged to the body, and autonomic responses to visible threats to the grabber. These findings show that artificial objects can become embodied even though they bear little visual resemblance to the hand.

    Counting on the motor system: Rapid action planning reveals the format- and magnitude-dependent extraction of numerical quantity

    Symbolic numbers (e.g., 2) acquire their meaning by becoming linked to the core nonsymbolic quantities they represent (e.g., two items). However, the extent to which symbolic and nonsymbolic information converges onto the same internal core representations of quantity remains a point of considerable debate. As nearly all previous work on this topic has employed perceptual tasks requiring the conscious reporting of numerical magnitudes, here we question the extent to which numerical processing via the visual-motor system might shed further light on the fundamental basis of how different number formats are encoded. We show, using a rapid reaching task and a detailed analysis of initial arm trajectories, that there are key differences in how the quantity information extracted from symbolic Arabic numerals and nonsymbolic collections of discrete items is used to guide action planning. In particular, we found that the magnitude derived from discrete dots resulted in movements being biased by an amount directly proportional to the actual quantities presented, whereas the magnitude derived from numerals resulted in movements being biased only by the relative (e.g., larger than) quantities presented. In addition, we found that initial motor plans were more sensitive to changes in numerical quantity within small (1-3) than large (5-15) number ranges, irrespective of their format (dots or numerals). In light of previous work, our visual-motor results clearly show that the processing of numerical quantity information is both format and magnitude dependent. © 2014 ARVO
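    The trajectory analysis described above can be illustrated with a toy computation: take the lateral deviation over the early portion of each reach and relate it to the quantity presented. The trajectories below are fabricated, and all parameters (sampling, early-window fraction, noise level) are assumptions for illustration only:

```python
# Hedged sketch of an initial-trajectory bias analysis with fabricated
# reach data; parameter choices are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(1)

def initial_bias(trajectory_x, early_frac=0.3):
    """Mean lateral (x) deviation over the first fraction of the reach."""
    n_early = max(1, int(len(trajectory_x) * early_frac))
    return float(np.mean(trajectory_x[:n_early]))

# Simulate one reach per quantity, with a lateral pull proportional to
# quantity (the pattern reported for nonsymbolic dot displays).
quantities = np.array([1, 2, 3, 5, 10, 15])
biases = []
for q in quantities:
    t = np.linspace(0, 1, 100)
    x = 0.5 * q * t * (1 - t) + rng.normal(0, 0.01, t.size)  # curved pull
    biases.append(initial_bias(x))

# A linear fit recovers the quantity-proportional bias.
slope = np.polyfit(quantities, biases, 1)[0]
print(f"bias slope per unit quantity: {slope:.3f}")
```

On this logic, the symbolic-numeral condition would show biases ordered only by relative magnitude, flattening the fitted slope across the actual quantities.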

    Adaptable Categorization of Hands and Tools in Prosthesis Users.

    Some theories propose that tools become incorporated into the neural representation of the hands (a process known as tool embodiment; Maravita & Iriki, 2004). Others suggest that conceptual body representation is rigid and that experience with one’s own body is insufficient for adapting bodily cognition, as shown in individuals born without hands (Vannuscorps & Caramazza, 2016) and in amputees with persistent phantom hand representation (Kikkert et al., 2016). How sharp is the conceptual boundary between hands and tools? This question is particularly relevant for individuals who have lost one hand and use prosthetic hands as tools to supplement their missing hand function. Although both congenital one-handers (i.e., amelia patients) and one-handed amputees are encouraged to use prostheses, the former show a greater tendency than the latter to use prosthetic hands in daily tasks (Jang et al., 2011). One-handers have a fully functional remaining hand (allowing them to use handheld tools, etc.), which makes them less likely to show semantic distortions in hand and tool representation. However, their bodies and their interactions with their environment are fundamentally altered by their disability (Makin et al., 2013; Makin, Wilf, Schwartz, & Zohary, 2010). To determine how real-world experience shapes conceptual categorization of hands, tools, and prostheses, we recruited one-handers with congenital or acquired unilateral hand loss to take part in a study involving a priming task. We predicted that one-handers, particularly congenital one-handers, would show more conceptual blurring between hands and tools than control participants would, as a result of less experience with a hand and more reliance on prostheses (which are essentially tools) for typical hand functions. We further predicted that individual differences in prosthesis usage would be reflected in implicit categorization of hands, manual tools, and prostheses.

    Representation of Multiple Body Parts in the Missing-Hand Territory of Congenital One-Handers.

    Individuals born without one hand (congenital one-handers) provide a unique model for understanding the relationship between focal reorganization in the sensorimotor cortex and everyday behavior. We previously reported that the missing hand's territory of one-handers becomes utilized by its cortical neighbor (residual arm representation), depending on residual arm usage in daily life to substitute for the missing hand's function [1, 2]. However, the repertoire of compensatory behaviors may involve utilization of other body parts that do not cortically neighbor the hand territory. Accordingly, the pattern of brain reorganization may be more extensive [3]. Here we studied unconstrained compensatory strategies under ecological conditions in one-handers, as well as changes in activation, connectivity, and neurochemical profile in their missing hand's cortical territory. We found that compensatory behaviors in one-handers involved multiple body parts (residual arm, lips, and feet). This diversified compensatory profile was associated with large-scale cortical reorganization, regardless of cortical proximity to the hand territory. Representations of those body parts used to substitute hand function all mapped onto the cortical territory of the missing hand, as evidenced by task-based and resting-state fMRI. The missing-hand territory also exhibited reduced GABA levels, suggesting a reduction in connectional selectivity to enable the expression of diverse cortical inputs. Because the same body parts used for compensatory purposes are those showing increased representation in the missing hand's territory, we suggest that the typical hand territory may not necessarily represent the hand per se, but rather any other body part that shares the functionality of the missing hand [4].

    Artificial limb representation in amputees

    The human brain contains multiple hand-selective areas, in both the sensorimotor and visual systems. Could our brain repurpose neural resources, originally developed for supporting hand function, to represent and control artificial limbs? We studied individuals with congenital or acquired hand loss (hereafter one-handers) using functional MRI. We show that the more one-handers use an artificial limb (prosthesis) in their everyday life, the more strongly visual hand-selective areas in the lateral occipitotemporal cortex respond to prosthesis images. This was found even when one-handers were presented with images of active prostheses that share the functionality of the hand but not necessarily its visual features (e.g. a 'hook' prosthesis). Further, we show that daily prosthesis usage determines large-scale inter-network communication across hand-selective areas. This was demonstrated by increased resting-state functional connectivity between visual and sensorimotor hand-selective areas, proportional to the intensity of everyday prosthesis usage. Further analysis revealed a 3-fold coupling between prosthesis activity, visuomotor connectivity, and usage, suggesting a possible role for the motor system in shaping use-dependent representation in visual hand-selective areas, and/or vice versa. Moreover, able-bodied control participants who routinely observe prosthesis usage (albeit less intensively than the prosthesis users) showed significantly weaker associations between degree of prosthesis observation and visual cortex activity or connectivity. Together, our findings suggest that altered daily motor behaviour facilitates prosthesis-related visual processing and shapes communication across hand-selective areas. This neurophysiological substrate for prosthesis embodiment may inspire rehabilitation approaches to improve usage of existing substitutionary devices and aid implementation of future assistive and augmentative technologies.

    Direct comparisons of hand and mouth kinematics during grasping, feeding and fork-feeding actions

    While a plethora of studies have examined the kinematics of human reach-to-grasp actions, few have investigated feeding, another ethologically important real-world action. Two seminal studies concluded that the kinematics of the mouth during feeding are comparable to those of the hand during grasping (Castiello, 1997; Churchill et al., 1999); however, feeding was done with a fork or spoon, not with the hand itself. Here we directly compared grasping and feeding kinematics under equivalent conditions. Participants were presented with differently sized cubes of cheese (10, 20, or 30 mm on each side) and asked to use the hand to grasp them, or to use a fork to spear them and then bring them to the mouth to bite. We measured the apertures of the hand during grasping and of the teeth during feeding, as well as reaching kinematics of the arm in both tasks. As in many past studies, we found that the hand opened considerably wider (~11-27 mm) than the food item during grasping, and the amount of oversizing scaled with food size. Surprisingly, regardless of whether the hand or fork was used to transport the food, the mouth opened only slightly wider (~4-11 mm) than the food item during biting, and the oversizing did not increase with food size. Total movement times were longer when using the fork compared to the hand, particularly when using the fork to bring food to the mouth. While reach velocity always peaked approximately halfway through the movement, the mouth opened more slowly than the hand relative to the reach, perhaps because less time was required for the smaller oversizing. Taken together, our results show that while many aspects of kinematics are similar between grasping and feeding, oversizing may reflect strategies unique to the hand vs. mouth (such as the need for the digits to approach the target surface perpendicularly for grip stability during lifting) and differences in the neural substrates of grasping and feeding.