
    Temporal issues of animate response


    An introduction to time-resolved decoding analysis for M/EEG

    The human brain is constantly processing and integrating information in order to make decisions and interact with the world, for tasks from recognizing a familiar face to playing a game of tennis. These complex cognitive processes require communication between large populations of neurons. The non-invasive neuroimaging methods of electroencephalography (EEG) and magnetoencephalography (MEG) provide population measures of neural activity with millisecond precision that allow us to study the temporal dynamics of cognitive processes. However, multi-sensor M/EEG data is inherently high-dimensional, making it difficult to parse important signal from noise. Multivariate pattern analysis (MVPA) or "decoding" methods offer vast potential for understanding high-dimensional M/EEG neural data. MVPA can be used to distinguish between different conditions and map the time courses of various neural processes, from basic sensory processing to high-level cognitive processes. In this chapter, we discuss the practical aspects of performing decoding analyses on M/EEG data as well as the limitations of the method, and then we discuss some applications for understanding representational dynamics in the human brain.
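The core idea in this abstract can be sketched in a few lines: a classifier is trained and tested separately at each time point, so that above-chance accuracy marks when the conditions become distinguishable. The simulated data and the nearest-class-mean classifier below are our own illustrative stand-ins, not the chapter's pipeline (real analyses would typically combine MNE-Python with scikit-learn):

```python
# Minimal sketch of time-resolved decoding on simulated "M/EEG" data.
# All shapes, the effect onset, and the classifier are assumptions
# chosen for illustration only.
import random

random.seed(0)
n_trials, n_channels, n_times = 40, 8, 20

def simulate_trial(label, t_effect=10):
    # condition 1 channels shift by +1.0 from time point t_effect onward
    return [[random.gauss(1.0 if (label == 1 and t >= t_effect) else 0.0, 1.0)
             for _ in range(n_channels)]
            for t in range(n_times)]

trials = [(simulate_trial(lbl), lbl) for lbl in [0, 1] * (n_trials // 2)]
train_set, test_set = trials[:n_trials // 2], trials[n_trials // 2:]

def decode_at(t):
    # train a nearest-class-mean classifier on the channel pattern at time t
    means = {}
    for lbl in (0, 1):
        pats = [x[t] for x, y in train_set if y == lbl]
        means[lbl] = [sum(col) / len(col) for col in zip(*pats)]
    def dist(p, m):
        return sum((a - b) ** 2 for a, b in zip(p, m))
    hits = sum(1 for x, y in test_set
               if min((0, 1), key=lambda l: dist(x[t], means[l])) == y)
    return hits / len(test_set)

# decoding accuracy as a function of time: near chance (0.5) before the
# simulated effect onset, rising above chance afterwards
accuracy = [decode_at(t) for t in range(n_times)]
```

The resulting accuracy time course is the basic output of a decoding analysis; its departure from chance level traces when condition information becomes available in the signal.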

    Are language production problems apparent in adults who no longer meet diagnostic criteria for attention-deficit/hyperactivity disorder?

    In this study, we examined sentence production in a sample of adults (N = 21) who had had attention-deficit/hyperactivity disorder (ADHD) as children, but as adults no longer met DSM-IV diagnostic criteria (APA, 2000). This “remitted” group was assessed on a sentence production task. On each trial, participants saw two objects and a verb. Their task was to construct a sentence using the objects as arguments of the verb. Results showed more ungrammatical and disfluent utterances with one particular type of verb (i.e., participle). In a second set of analyses, we compared the remitted group to both control participants and a “persistent” group, who had ADHD as children and as adults. Results showed that remitters were more likely to produce ungrammatical utterances and to make repair disfluencies compared to controls, and they patterned more similarly to ADHD participants. Conclusions focus on language output in remitted ADHD, and the role of executive functions in language production.

    Lost in semantic space: a multi-modal, non-verbal assessment of feature knowledge in semantic dementia

    A novel, non-verbal test of semantic feature knowledge is introduced, enabling subordinate knowledge of four important concept attributes (colour, sound, environmental context and motion) to be individually probed. This methodology provides more specific information than existing non-verbal semantic tests about the status of attribute knowledge relating to individual concept representations. Performance on this test of a group of 12 patients with semantic dementia (10 male, mean age: 64.4 years) correlated strongly with their scores on more conventional tests of semantic memory, such as naming and word-to-picture matching. The test's overlapping structure, in which individual concepts were probed in two, three or all four modalities, provided evidence of performance consistency on individual items between feature conditions. Group and individual analyses revealed little evidence for differential performance across the four feature conditions, though sound and colour correlated most strongly, and motion least strongly, with other semantic tasks, and patients were less accurate on the motion features of living than non-living concepts (with no such conceptual domain differences in the other conditions). The results are discussed in the context of their implications for the place of semantic dementia within the classification of progressive aphasic syndromes, and for contemporary models of semantic representation and organization.

    What do we perceive in a glance of a real-world scene?

    What do we see when we glance at a natural scene and how does it change as the glance becomes longer? We asked naive subjects to report in a free-form format what they saw when looking at briefly presented real-life photographs. Our subjects received no specific information as to the content of each stimulus. Thus, our paradigm differs from previous studies where subjects were cued before a picture was presented and/or were probed with multiple-choice questions. In the first stage, 90 novel grayscale photographs were foveally shown to a group of 22 native-English-speaking subjects. The presentation time was chosen at random from a set of seven possible times (from 27 to 500 ms). A perceptual mask followed each photograph immediately. After each presentation, subjects reported what they had just seen as completely and truthfully as possible. In the second stage, another group of naive individuals was instructed to score each of the descriptions produced by the subjects in the first stage. Individual scores were assigned to more than a hundred different attributes. We show that within a single glance, much object- and scene-level information is perceived by human subjects. The richness of our perception, though, seems asymmetrical. Subjects show a propensity toward perceiving natural scenes as outdoor rather than indoor. The reporting of sensory- or feature-level information of a scene (such as shading and shape) consistently precedes the reporting of the semantic-level information. But once subjects recognize more semantic-level components of a scene, there is little evidence suggesting any bias toward either scene-level or object-level recognition.

    Classifying types of gesture and inferring intent

    In order to infer intent from gesture, a rudimentary classification of types of gestures into five main classes is introduced. The classification is intended as a basis for incorporating the understanding of gesture into human-robot interaction (HRI). Some requirements for the operational classification of gesture by a robot interacting with humans are also suggested.

    Event-related brain potential evidence for animacy processing asymmetries during sentence comprehension

    The animacy distinction is deeply rooted in the language faculty. A key example is differential object marking, the phenomenon where animate sentential objects receive specific marking. We used event-related potentials to examine the neural processing consequences of case-marking violations on animate and inanimate direct objects in Spanish. Inanimate objects with incorrect prepositional case marker ‘a’ (‘al suelo’) elicited a P600 effect compared to unmarked objects, consistent with previous literature. However, animate objects without the required prepositional case marker (‘el obispo’) only elicited an N400 effect compared to marked objects. This novel finding, an exclusive N400 modulation by a straightforward grammatical rule violation, does not follow from extant neurocognitive models of sentence processing, and mirrors unexpected “semantic P600” effects for thematically problematic sentences. These results may reflect animacy asymmetry in competition for argument prominence: following the article, thematic interpretation difficulties are elicited only by unexpectedly animate objects.
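The measurement logic behind effects like the N400 and P600 is simple to state: an event-related potential is the average over trials, and an effect is the difference between condition averages within a time window. The sketch below is our own illustration, not the study's pipeline; the trial counts, window indices, and the -2.0 µV amplitude are invented for demonstration:

```python
# Illustrative ERP computation on toy data. The "N400 window" indices
# (40:60, standing in for e.g. 300-500 ms) and amplitudes are assumptions.
n_trials, n_times = 30, 100

def erp(trials):
    # the ERP is the mean voltage across trials at each time point
    return [sum(trial[i] for trial in trials) / len(trials)
            for i in range(n_times)]

def mean_amplitude(wave, start, stop):
    # mean amplitude of a waveform within a time window
    seg = wave[start:stop]
    return sum(seg) / len(seg)

# toy data: condition B is 2 uV more negative inside the window
cond_a = [[0.0] * n_times for _ in range(n_trials)]
cond_b = [[-2.0 if 40 <= i < 60 else 0.0 for i in range(n_times)]
          for _ in range(n_trials)]

# difference wave (B minus A); a negative deflection in the window
# is the toy analogue of an N400 effect
diff = [b - a for a, b in zip(erp(cond_a), erp(cond_b))]
n400_effect = mean_amplitude(diff, 40, 60)   # -2.0 on this toy data
```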

    Trajectory recognition as the basis for object individuation: A functional model of object file instantiation and object token encoding

    The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially-disconnected stimuli as coherent objects. Based on relevant anatomical, functional, and developmental data, a functional model is developed that bases object individuation on the specific recognition of visual trajectories. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual variations of the model parameters are expected to generate distinct trajectory and object recognition abilities. Over-encoding of trajectory information in stored object tokens in early infancy, in particular, is expected to disrupt the ability to re-identify individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders.

    Semantic and pragmatic motivations for constructional preferences: a corpus-based study of provide, supply, and present

    A select group of transfer verbs can enter into four different constructions: the ditransitive construction (He provided John the money), the prepositional-dative construction (He provided the money to John), a construction with a prepositional theme (He provided John with the money), and a construction with a recipient realized by a for-phrase (He provided the money for John). In this article, we take a close look at three such verbs: provide, supply, and present. Corpus analysis shows that these three verbs display different structural preferences with respect to the for-, to-, and with-patterns. To explain these preferences, the study investigates pragmatic principles (following Mukherjee 2001 on provide) and the role played by semantic factors. An examination of the semantics of the verbs and the lexically motivated constructional semantics of the to-, for-, and with-patterns shows (i) that the three constructions are not interchangeable, and (ii) that the preferential differences between the three verbs find an explanation in the compatibility between lexical and constructional semantics. The description is mainly based on data from the British National Corpus.
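The kind of corpus query this abstract describes can be sketched with simple pattern matching: tag each occurrence of the verb with the construction it appears in, then tally the counts. The regexes below are a toy stand-in for the study's actual BNC methodology (which would work over parsed or tagged corpus data); the example sentences come from the abstract itself:

```python
# Toy classifier for the four 'provide' constructions discussed above.
# The regex patterns are illustrative assumptions, not the study's queries.
import re

PATTERNS = {
    # "provided John with the money" - prepositional theme
    "with": re.compile(r"\bprovide[sd]?\s+\w+\s+with\b"),
    # "provided the money to John" - prepositional dative
    "to":   re.compile(r"\bprovide[sd]?\s+.*\bto\b"),
    # "provided the money for John" - for-phrase recipient
    "for":  re.compile(r"\bprovide[sd]?\s+.*\bfor\b"),
}

def classify(sentence):
    # check the most specific pattern first; fall through to ditransitive
    for name, pat in PATTERNS.items():
        if pat.search(sentence.lower()):
            return name
    return "ditransitive_or_other"

sentences = [
    "He provided John with the money",
    "He provided the money to John",
    "He provided the money for John",
    "He provided John the money",
]
counts = {}
for s in sentences:
    label = classify(s)
    counts[label] = counts.get(label, 0) + 1
```

Run over a real corpus, such per-verb counts are what reveal the structural preferences the article sets out to explain.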

    Publishing Time Dependent Oceanographic Visualizations using VRML

    Oceanographic simulations generate time dependent data; thus, visualizations of this data should include and realize the variable 'time'. Moreover, oceanographers are located across the world and wish to communicate and exchange these temporal realizations conveniently. This publication of material may be achieved using different methods and languages. VRML provides one convenient publication medium that allows the visualizations to be easily viewed and exchanged between users. Using VRML as the implementation language, we describe five categories of operation. The strategies are determined by the level of calculation that is achieved at the generation stage compared to the playing of the animation. We name the methods: 2D movie, 3D spatial, 3D flipbook, key frame deformation and visualization program.
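One of the five strategies named above, key frame deformation, can be illustrated with standard VRML97 nodes: a looping TimeSensor drives a CoordinateInterpolator, which morphs a surface's coordinates between precomputed time steps. The generator below is our own minimal sketch, not the paper's code; the geometry and timing values are invented for illustration:

```python
# Generate a minimal VRML97 scene implementing key frame deformation:
# TimeSensor -> CoordinateInterpolator -> Coordinate, via ROUTEs.
def vrml_keyframe(frames, cycle_secs=4.0):
    # frames: two or more key frames, each a list of (x, y, z) points
    keys = ", ".join(f"{i / (len(frames) - 1):.3f}"
                     for i in range(len(frames)))
    vals = ",\n    ".join(" ".join(f"{x} {y} {z}" for x, y, z in f)
                          for f in frames)
    points = " ".join(f"{x} {y} {z}" for x, y, z in frames[0])
    return f"""#VRML V2.0 utf8
Shape {{
  geometry IndexedFaceSet {{
    coord DEF SURF Coordinate {{ point [ {points} ] }}
    coordIndex [ 0 1 2 -1 ]
  }}
}}
DEF CLOCK TimeSensor {{ cycleInterval {cycle_secs} loop TRUE }}
DEF MORPH CoordinateInterpolator {{
  key [ {keys} ]
  keyValue [ {vals} ]
}}
ROUTE CLOCK.fraction_changed TO MORPH.set_fraction
ROUTE MORPH.value_changed TO SURF.set_point
"""

# two key frames of a triangle whose apex rises over each 4 s cycle
scene = vrml_keyframe([[(0, 0, 0), (1, 0, 0), (0.5, 1, 0)],
                       [(0, 0, 0), (1, 0, 0), (0.5, 2, 0)]])
```

Because the key frames are baked into the file at generation time, the browser only interpolates at playback, which is exactly the trade-off between generation-stage and playing-stage calculation that distinguishes the five strategies.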