
    Studying the Behaviour of Model of Mirror Neuron System in Case of Autism

    Several experiments conducted by researchers have suggested that autism is caused by a dysfunctional mirror neuron system, and that the degree of dysfunction is proportional to the severity of autism symptoms. In the present work those experiments were studied, together with a model of the mirror neuron system called MNS2 developed by a research group. This research examined the behavior of the model in the case of autism and compared the results with the studies linking mirror neuron system dysfunction to autism. To this end, a neural network employing the model was developed to recognize three types of grasping (faster, normal, and slower). The network was trained with the backpropagation-through-time learning algorithm. The whole grasping process was divided into 30 time steps, and the hand and object states at each time step were used as the input to the network. Normally the network successfully recognized all three types of grasp. The network required more time as the number of inactive neurons increased, and with the maximum number of inactive mirror neurons it became unable to recognize the grasp type at all. Since the time needed to recognize the grasp type is proportional to the number of inactive neurons, the experimental results support the hypothesis that dysfunction of the MNS is proportional to the symptom severity of autism.
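    The setup described in this abstract can be illustrated with a minimal sketch: a simple recurrent network unrolled over 30 time steps, with a binary mask that zeroes out a fraction of hidden ("mirror") units to mimic the inactive-neuron manipulation. All sizes, weights, and names here are illustrative assumptions, not taken from the MNS2 paper, and the BPTT training loop itself is omitted; only the forward pass is shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N_STEPS = 30      # grasping process divided into 30 time steps (from the abstract)
    N_INPUT = 7       # hand/object state features per step (assumed)
    N_HIDDEN = 20     # hidden "mirror system" units (assumed)
    N_CLASSES = 3     # faster, normal, slower grasp

    # Untrained illustrative weights; in the study these would be learned via BPTT.
    W_in = rng.normal(0.0, 0.1, (N_HIDDEN, N_INPUT))
    W_rec = rng.normal(0.0, 0.1, (N_HIDDEN, N_HIDDEN))
    W_out = rng.normal(0.0, 0.1, (N_CLASSES, N_HIDDEN))

    def forward(seq, inactive_frac=0.0):
        """Run one 30-step sequence; zero out a fraction of hidden units."""
        mask = np.ones(N_HIDDEN)
        mask[: int(inactive_frac * N_HIDDEN)] = 0.0  # "inactive" mirror neurons
        h = np.zeros(N_HIDDEN)
        for x in seq:                                # one hand/object state per step
            h = np.tanh(W_in @ x + W_rec @ h) * mask
        logits = W_out @ h
        return np.exp(logits) / np.exp(logits).sum()  # softmax over 3 grasp types

    seq = rng.normal(size=(N_STEPS, N_INPUT))
    probs_healthy = forward(seq, inactive_frac=0.0)
    probs_impaired = forward(seq, inactive_frac=1.0)  # all mirror units silenced
    ```

    With every hidden unit masked the final state is all zeros, so the output distribution is uniform over the three grasp types, i.e. the network carries no information about the grasp — a toy analogue of the abstract's finding that maximal mirror-neuron inactivation abolishes recognition.
    
    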

    The dissipative quantum model of brain: how do memory localize in correlated neuronal domains

    The mechanism of memory localization in extended domains is described in the framework of the parametric dissipative quantum model of brain. The size of the domains and the capability in memorizing depend on the number of links the system is able to establish with the external world. Comment: 19 PostScript pages, in press in a special issue of Information Science Journal, S. Kak and D. Ventura, Eds.

    The very same thing: Extending the object token concept to incorporate causal constraints on individual identity

    The contributions of feature recognition, object categorization, and recollection of episodic memories to the re-identification of a perceived object as the very same thing encountered in a previous perceptual episode are well understood in terms of both cognitive-behavioral phenomenology and neurofunctional implementation. Human beings do not, however, rely solely on features and context to re-identify individuals; in the presence of featural change and similarly-featured distractors, people routinely employ causal constraints to establish object identities. Based on available cognitive and neurofunctional data, the standard object-token based model of individual re-identification is extended to incorporate the construction of unobserved and hence fictive causal histories (FCHs) of observed objects by the pre-motor action planning system. Cognitive-behavioral and implementation-level predictions of this extended model and methods for testing them are outlined. It is suggested that functional deficits in the construction of FCHs are associated with clinical outcomes in both Autism Spectrum Disorders and later-stage Alzheimer's disease.

    Neural correlates of the processing of co-speech gestures

    In communicative situations, speech is often accompanied by gestures. For example, speakers tend to illustrate certain contents of speech by means of iconic gestures, which are hand movements that bear a formal relationship to the contents of speech. The meaning of an iconic gesture is determined both by its form and by the speech context in which it is performed. Thus, gesture and speech interact in comprehension. Using fMRI, the present study investigated which brain areas are involved in this interaction process. Participants watched videos in which sentences containing an ambiguous word (e.g. She touched the mouse) were accompanied by either a meaningless grooming movement, a gesture supporting the more frequent dominant meaning (e.g. animal), or a gesture supporting the less frequent subordinate meaning (e.g. computer device). We hypothesized that brain areas involved in the interaction of gesture and speech would show greater activation to gesture-supported sentences as compared to sentences accompanied by a meaningless grooming movement. The main results are that, when contrasted with grooming, both types of gestures (dominant and subordinate) activated an array of brain regions consisting of the left posterior superior temporal sulcus (STS), the inferior parietal lobule bilaterally, and the ventral precentral sulcus bilaterally. Given the crucial role of the STS in audiovisual integration processes, this activation might reflect the interaction between the meaning of the gesture and the ambiguous sentence. The activations in inferior frontal and inferior parietal regions may reflect a mechanism of determining the goal of co-speech hand movements through an observation-execution matching process.

    Point-light biological motion perception activates human premotor cortex

    Motion cues can be surprisingly powerful in defining objects and events. Specifically, a handful of point-lights attached to the joints of a human actor will evoke a vivid percept of action when the body is in motion. The perception of point-light biological motion activates posterior cortical areas of the brain. On the other hand, observation of others' actions is known to also evoke activity in motor and premotor areas in frontal cortex. In the present study, we investigated whether point-light biological motion animations would lead to activity in frontal cortex as well. We performed a human functional magnetic resonance imaging study on a high-field-strength magnet and used a number of methods to increase signal, as well as cortical surface-based analysis methods. Areas that responded selectively to point-light biological motion were found in lateral and inferior temporal cortex and in inferior frontal cortex. The robust responses we observed in frontal areas indicate that these stimuli can also recruit action observation networks, although they are very simplified and characterize actions by motion cues alone. The finding that even point-light animations evoke activity in frontal regions suggests that the motor system of the observer may be recruited to "fill in" these simplified displays.

    Prefrontal involvement in imitation learning of hand actions : effects of practice and expertise.

    In this event-related fMRI study, we demonstrate the effects of a single session of practising configural hand actions (guitar chords) on cortical activations during observation, motor preparation, and imitative execution. During the observation of non-practised actions, the mirror neuron system (MNS), consisting of inferior parietal and ventral premotor areas, was more strongly activated than for the practised actions. This finding indicates a strong role of the MNS in the early stages of imitation learning. In addition, the dorsolateral prefrontal cortex (DLPFC) was selectively involved during observation and motor preparation of the non-practised chords. This finding confirms Buccino et al.'s (2004a) model of imitation learning: for actions that are not yet part of the observer's motor repertoire, DLPFC engages in operations of selection and combination of existing, elementary representations in the MNS. The pattern of prefrontal activations further supports Shallice's (2004) proposal of a dominant role of the left DLPFC in modulating lower-level systems, and of a dominant role of the right DLPFC in monitoring operations.

    Attention modulates the specificity of automatic imitation to human actors

    The perception of actions performed by others activates one's own motor system. Recent studies disagree as to whether this effect is specific to actions performed by other humans, an issue complicated by differences in perceptual salience between human and non-human stimuli. We addressed this issue by examining the automatic imitation of actions triggered by viewing a virtual, computer-generated hand. This stimulus was held constant across conditions, but participants' attention to the virtualness of the hand was manipulated by informing some participants during instructions that they would see a "computer-generated model of a hand," while making no mention of this to others. In spite of this attentional manipulation, participants in both conditions were generally aware of the virtualness of the hand. Nevertheless, automatic imitation of the virtual hand was significantly reduced, though not eliminated, when participants were told they would see a virtual hand. These results demonstrate that attention modulates the "human bias" of automatic imitation to non-human actors.