
    The embodied penman: Effector-specific motor-language integration during handwriting

    Several studies have yielded fine-grained insights on the embodied dynamics of language by revealing how processing of manual action verbs (MaVs) affects the programming or execution of concurrent hand movements. However, virtually all extant studies have relied on highly contrived dual tasks in which independent motoric and linguistic processes are arbitrarily related. To circumvent potential attentional confounds, we conducted the first assessment of motor-language integration during handwriting, an early acquired skill that necessarily integrates both types of processes. Using a digital pen, participants copied carefully matched MaVs, non-manual action verbs, and non-action verbs as we collected measures of motor programming (the time needed to start the writing routine after verb presentation) and motor execution (the time needed to write the whole verb). Whereas motor programming latencies were similar across conditions, the unfolding of motor routines was faster for MaVs than for the other two categories, irrespective of the subjects’ daily writing time. Moreover, this effect remained consistent regardless of whether word meanings were accessed implicitly or explicitly. In line with the Hand-Action-Network Dynamic Language Embodiment (HANDLE) model, such findings suggest that everyday manual movements can be primed by effector-congruent verbs, even in a highly automatized task that seamlessly combines linguistic and motoric processes. In addition, this effect differs from that observed for MaVs in a previous (keyboard-based) typing experiment, suggesting that language-induced sensorimotor resonance during writing depends on the motoric particularities of each production modality. More generally, our paradigm opens new avenues for fine-grained explorations of embodied language processes
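    As a reading aid (not code from the study), the two chronometric measures described above can be sketched from hypothetical pen-event timestamps: motor programming as the delay between verb presentation and the first pen contact, and motor execution as the time from first contact to the final pen lift, averaged per verb category.

from statistics import mean

# Hypothetical per-trial pen-event log: condition, stimulus onset,
# first pen contact, and final pen lift (all in milliseconds).
trials = [
    ("manual_action_verb",     0, 612, 1840),
    ("non_manual_action_verb", 0, 605, 1995),
    ("non_action_verb",        0, 618, 2010),
]

def programming_latency(trial):
    """Time needed to start the writing routine after verb presentation."""
    _, onset, first_contact, _ = trial
    return first_contact - onset

def execution_time(trial):
    """Time needed to write the whole verb once writing has begun."""
    _, _, first_contact, last_lift = trial
    return last_lift - first_contact

for condition in sorted({t[0] for t in trials}):
    subset = [t for t in trials if t[0] == condition]
    print(condition,
          "programming:", mean(programming_latency(t) for t in subset), "ms;",
          "execution:", mean(execution_time(t) for t in subset), "ms")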

    Active cooling control of the CLEO detector using a hydrocarbon coolant farm

    We describe a novel approach to particle-detector cooling in which a modular farm of active coolant-control platforms provides independent and regulated heat removal from four recently upgraded subsystems of the CLEO detector: the ring-imaging Cherenkov detector, the drift chamber, the silicon vertex detector, and the beryllium beam pipe. We report on several aspects of the system: the suitability of using the aliphatic-hydrocarbon solvent PF(TM)-200IG as a heat-transfer fluid, the sensor elements and the mechanical design of the farm platforms, a control system that is founded upon a commercial programmable logic controller employed in industrial process-control applications, and a diagnostic system based on virtual instrumentation. We summarize the system's performance and point out the potential application of the design to future high-energy physics apparatus.
    Comment: 21 pages, LaTeX, 5 PostScript figures; version accepted for publication in Nuclear Instruments and Methods in Physics Research.
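    The abstract does not give the control algorithm itself; as a loose, hypothetical illustration of the kind of regulated heat removal a programmable logic controller performs for one coolant platform, the sketch below runs a simple proportional control loop against a toy thermal model. The setpoint, gain, and plant constants are invented and are not CLEO parameters.

# Illustrative proportional control loop for one coolant-control platform.
# All constants below are invented for the toy model, not CLEO values.
SETPOINT_C = 20.0      # desired subsystem temperature (degrees C)
KP = 8.0               # proportional gain mapping temperature error to cooling demand
HEAT_LOAD_C = 0.5      # temperature rise per cycle from the subsystem's heat load

def step_plant(temp_c, cooling_demand):
    """Toy thermal model: the heat load warms the subsystem, coolant flow removes heat."""
    return temp_c + HEAT_LOAD_C - 0.1 * cooling_demand

temp_c = 24.0
for cycle in range(20):
    error = temp_c - SETPOINT_C
    cooling_demand = max(0.0, KP * error)   # actuator demand, clamped at zero
    temp_c = step_plant(temp_c, cooling_demand)
    print(f"cycle {cycle:2d}: temperature {temp_c:5.2f} C, cooling demand {cooling_demand:5.2f}")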

    Mind in Action: Action Representation and the Perception of Biological Motion

    The ability to understand and communicate about the actions of others is a fundamental aspect of our daily activity. How can we talk about what others are doing? What qualities do different actions have such that they cause us to see them as being different or similar? What is the connection between what we see and the development of concepts and words or expressions for the things that we see? To what extent can two different people see and talk about the same things? Is there a common basis for our perception, and is there then a common basis for the concepts we form and the way in which the concepts become lexicalized in language? The broad purpose of this thesis is to relate aspects of perception, categorization and language to action recognition and conceptualization. This is achieved by empirically demonstrating a prototype structure for action categories and by revealing the effect this structure has on language via the semantic organization of verbs for natural actions. The results also show that implicit access to categorical information can affect the perceptual processing of basic actions. These findings indicate that our understanding of human actions is guided by the activation of high level information in the form of dynamic action templates or prototypes. More specifically, the first two empirical studies investigate the relation between perception and the hierarchical structure of action categories, i.e., subordinate, basic, and superordinate level action categories. Subjects generated lists of verbs based on perceptual criteria. Analyses based on multidimensional scaling showed a significant correlation for the semantic organization of a subset of the verbs for English and Swedish speaking subjects. Two additional experiments were performed in order to further determine the extent to which action categories exhibit graded structure, which would indicate the existence of prototypes for action categories. The results from typicality ratings and category verification showed that typicality judgments reliably predict category verification times for instances of different actions. Finally, the results from a repetition (short-term) priming paradigm suggest that high level information about the categorical differences between actions can be implicitly activated and facilitates the later visual processing of displays of biological motion. This facilitation occurs for upright displays, but appears to be lacking for displays that are shown upside down. These results show that the implicit activation of information about action categories can play a critical role in the perception of human actions
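    The two analysis ideas above, multidimensional scaling of verb dissimilarities and typicality ratings predicting category-verification times, can be illustrated with a minimal sketch. The dissimilarity matrix, ratings, and response times are made up, and the scikit-learn/SciPy toolchain is an assumption rather than the thesis's actual one.

import numpy as np
from scipy.stats import spearmanr
from sklearn.manifold import MDS

verbs = ["walk", "run", "jump", "wave", "point"]
# Invented symmetric dissimilarity matrix (e.g., from sorting or rating tasks).
dissim = np.array([
    [0.0, 0.2, 0.4, 0.8, 0.9],
    [0.2, 0.0, 0.3, 0.8, 0.9],
    [0.4, 0.3, 0.0, 0.7, 0.8],
    [0.8, 0.8, 0.7, 0.0, 0.3],
    [0.9, 0.9, 0.8, 0.3, 0.0],
])
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)
for verb, (x, y) in zip(verbs, coords):
    print(f"{verb:6s} -> ({x:+.2f}, {y:+.2f})")  # 2-D semantic configuration

# Graded category structure: higher typicality should predict faster verification.
typicality = np.array([6.8, 6.5, 5.9, 4.2, 3.7])        # invented 1-7 ratings
verification_ms = np.array([540, 555, 590, 660, 700])   # invented response times
rho, p = spearmanr(typicality, verification_ms)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")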

    How the brain grasps tools: fMRI & motion-capture investigations

    Humans’ ability to learn about and use tools is considered a defining feature of our species, with most related neuroimaging investigations involving proxy 2D picture viewing tasks. Using a novel tool grasping paradigm across three experiments, participants grasped 3D-printed tools (e.g., a knife) in ways that were considered to be typical (i.e., by the handle) or atypical (i.e., by the blade) for subsequent use. As a control, participants also performed grasps in corresponding directions on a series of 3D-printed non-tool objects, matched for properties including elongation and object size. Project 1 paired a powerful fMRI block-design with visual localiser Region of Interest (ROI) and searchlight Multivoxel Pattern Analysis (MVPA) approaches. Most remarkably, ROI MVPA revealed that hand-selective, but not anatomically overlapping tool-selective, areas of the left Lateral Occipital Temporal Cortex and Intraparietal Sulcus represented the typicality of tool grasping. Searchlight MVPA found similar evidence within left anterior temporal cortex as well as right parietal and temporal areas. Project 2 measured hand kinematics using motion-capture during a highly similar procedure, finding hallmark grip scaling effects despite the unnatural task demands. Further, slower movements were observed when grasping tools, relative to non-tools, with grip scaling also being poorer for atypical tool, compared to non-tool, grasping. Project 3 used a slow-event related fMRI design to investigate whether representations of typicality were detectable during motor planning, but MVPA was largely unsuccessful, presumably due to a lack of statistical power. Taken together, the representations of typicality identified within areas of the ventral and dorsal, but not ventro-dorsal, pathways have implications for specific predictions made by leading theories about the neural regions supporting human tool-use, including dual visual stream theory and the two-action systems model
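    The ROI MVPA logic can be pictured with a minimal sketch: cross-validated classification of typical versus atypical grasps from voxel patterns extracted from one region of interest. The voxel data below are simulated and the linear-SVM pipeline is a generic stand-in, not the thesis's actual analysis.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials_per_class, n_voxels = 40, 120

# Simulated per-trial ROI patterns; a real analysis would use trial-wise
# response estimates from the hand-selective ROI.
typical = rng.normal(0.0, 1.0, (n_trials_per_class, n_voxels))
atypical = rng.normal(0.3, 1.0, (n_trials_per_class, n_voxels))  # small pattern shift
X = np.vstack([typical, atypical])
y = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)

decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(decoder, X, y, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f} (chance = 0.50)")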

    On the role of informativeness in spatial language comprehension

    People need to know where objects are located in order to interact with the world, and spatial language provides the main linguistic means of conveying this. However, the information a description gives about objects' locations is not the only message conveyed; there is evidence that people carry out inferences that go beyond the simple geometric relation specified (Coventry & Garrod, 2004; Tyler & Evans, 2003). People draw inferences about object dynamics and object interactions, and this information becomes critical for the apprehension of spatial language. Among the inferences people draw from spatial language, the property of converseness is particularly appealing; this principle states that given the description "A is above B" one can also infer "B is below A" (Levelt, 1984, 1996). Thus, if the speaker says "the book is above the telephone", the listener implicitly also knows that the telephone is below the book. However, this extra information does not necessarily facilitate the apprehension of spatial descriptions. If it is true that inferences increase the amount of information a description conveys (Johnson-Laird & Byrne, 1991), it is also true that this extra information can be a disadvantage: the spatial preposition used in the description can end up being ambiguous because it suits more than one interpretation, and the consequence is a reduction in informativeness (Bar-Hillel, 1964). Tyler and Evans (2003) called this inferential process Best Fit: speakers choose the spatial preposition that offers the best fit between the conceptual spatial relation and the speaker's communicative needs. This principle can be considered a logical extension of the notion of relevance (Grice, 1975; Sperber & Wilson, 1986) and a complement to the Q-Principle (Asher & Lascarides, 2003; Levinson, 2000a), according to which speakers should avoid statements that are informationally weaker than their knowledge of the world allows. This dissertation explores whether the inferences people draw on spatial representations, in particular those based on the converseness principle (Levelt, 1996), affect the process that drives the speaker to choose the most informative description, that is, the description that best fits the spatial relation and the speaker's needs (Tyler & Evans, 2003). Experiments 1 and 2 study whether converseness, tested by manipulating the orientation of the located object, affects the extent to which a spatial description based on the prepositions over, under, above, and below is regarded as a good description of a scene. Experiment 3 shows that the acceptability of a projective spatial preposition is affected by the orientation of both of the objects presented in the scene. Experiments 4 and 5 replicate the results of the previous experiments using polyoriented objects (Leek, 1998b) in order to exclude the possibility that the decrease in acceptability was due to one object being shown in a non-canonical orientation. Experiments 6, 7 and 8 provide evidence that converseness generates ambiguous descriptions also with spatial prepositions such as in front of, behind, on the left and to the right. Finally, Experiments 9 and 10 show that for proximity terms such as near and far, informativeness is not that relevant; rather, people seem simply to use contextual information to set a scale for their judgments.
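    The converseness principle discussed above can be made concrete with a small sketch; the preposition-to-converse mapping is hand-written for illustration and only covers the projective prepositions mentioned in the abstract.

# Converseness: from "A is PREP B" a listener can also infer "B is CONVERSE(PREP) A".
CONVERSE = {
    "above": "below", "below": "above",
    "over": "under", "under": "over",
    "in front of": "behind", "behind": "in front of",
    "on the left of": "on the right of", "on the right of": "on the left of",
}

def converse_description(figure, preposition, ground):
    """Return the description a listener can infer from 'the FIGURE is PREP the GROUND'."""
    return f"the {ground} is {CONVERSE[preposition]} the {figure}"

print(converse_description("book", "above", "telephone"))
# -> the telephone is below the book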

    Modular Mechanistic Networks: On Bridging Mechanistic and Phenomenological Models with Deep Neural Networks in Natural Language Processing

    Natural language processing (NLP) can be done using either top-down (theory-driven) or bottom-up (data-driven) approaches, which we call mechanistic and phenomenological respectively. The two approaches are frequently considered to stand in opposition to each other. Examining some recent approaches in deep learning, we argue that deep neural networks incorporate both perspectives and, furthermore, that leveraging this aspect of deep learning may help in solving complex problems within language technology, such as modelling language and perception in the domain of spatial cognition.
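    As a toy illustration of the bridging idea (not the paper's model), the sketch below combines a mechanistic, hand-coded geometric feature with phenomenological, data-driven word features in a single input vector that a downstream network could consume. The feature names and the stand-in embeddings are invented.

import numpy as np

def mechanistic_above_feature(figure_xy, ground_xy):
    """Theory-driven geometric feature: signed vertical offset of figure relative to ground."""
    return figure_xy[1] - ground_xy[1]

def phenomenological_features(word, dim=8):
    """Stand-in for learned word embeddings (random here; trained from data in practice)."""
    rng = np.random.default_rng(sum(ord(c) for c in word))
    return rng.normal(size=dim)

figure_xy, ground_xy = (0.2, 1.5), (0.2, 0.4)
combined = np.concatenate([
    [mechanistic_above_feature(figure_xy, ground_xy)],  # mechanistic component
    phenomenological_features("book"),                  # data-driven components
    phenomenological_features("telephone"),
])
print("combined feature vector length:", combined.size)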

    Semantic radical consistency and character transparency effects in Chinese: an ERP study

    BACKGROUND: This event-related potential (ERP) study aims to investigate the representation and temporal dynamics of Chinese orthography-to-semantics mappings by simultaneously manipulating character transparency and semantic radical consistency. Character components, referred to as radicals, make up the building blocks used dur...

    Social perception in the real world : employing visual adaptation paradigms in the investigation of mechanisms underlying emotion and trustworthiness perception

    Social context can substantially influence our perception and understanding of the emotions and actions of observed individuals. However, less is known about how temporal context can affect our judgement of the behaviour of other people. The aim of this thesis was to explore how immediate perceptual history influences social perception. Further aims were: (i) to examine whether prior visual experience influences the perception of the behaviour of other individuals in a naturalistic virtual environment resembling the real world; (ii) to determine whether our judgement of the emotional state or trustworthiness of observed individuals is influenced by perceptual history, and (iii) by cognitive processes such as mental state attribution to the observer; (iv) to investigate whether processing of emotion information from dynamic, whole-body action is dependent on the processing of body identity, and (v) dependent on the body part that conveys it. Here, visual adaptation paradigms were used to examine systematic biases in social perception following prior visual experience, and to infer potential neural mechanisms underlying social perception. The results presented in this thesis suggest that perception and understanding of the behaviour of other individuals in a naturalistic virtual environment are influenced by the behaviour of other individuals within the shared social environment. Specifically, in Chapter 3, I presented data suggesting that the visual adaptation mechanisms examined thus far in laboratory settings may influence our everyday perception and judgement of the behaviour of other people. In Chapters 4 and 5, I showed that these biases in social perception can be attributed to visual adaptation mechanisms, which code emotions and intentions derived from actions with respect to specific action kinematics and the body part that conveyed the given emotion. The results of the experiments presented in Chapter 4 demonstrated that emotions conveyed by actions are represented both with respect to, and independently of, the actors’ identity. These findings suggest that the mechanisms underlying the processing of emotion from actions may operate in parallel with the mechanisms underlying the processing of emotion from other social signals such as the face and voice. In Chapter 6, I showed that cognitive processes underlying Theory of Mind, such as mental state attribution, can also influence the perceptual processing of emotional signals. Finally, results presented in Chapter 7 suggest that judgements of complex social traits such as trustworthiness derived from faces are also influenced by perceptual history. These results also revealed strong sex differences in assessing the trustworthiness of an observed individual: female observers showed a strong bias in perception resulting from adaptation to (un)trustworthiness, while male observers were less influenced by prior visual context. Together, these findings suggest that social perception in the real world may be sensitive not only to the social context in which an observed act is embedded, but also to prior visual context and the observer’s beliefs regarding the observed individual. Visual adaptation mechanisms may therefore operate during our everyday perception, adjusting our visual system to allow for efficient and accurate judgement of socially meaningful stimuli.
The findings presented in this thesis highlight the importance of studying social perception using naturalistic stimuli embedded in a meaningful social scene, in order to gain a better understanding of the mechanisms that underlie our judgement of the behaviour of other people. They also demonstrate the utility of visual adaptation paradigms in studying social perception and social cognition.
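    One way adaptation aftereffects of this kind are commonly quantified is by comparing psychometric functions measured before and after adaptation. The sketch below fits a logistic function to invented baseline and post-adaptation judgement data and reports the shift in the point of subjective equality (PSE); it is a generic illustration, not the thesis's analysis code.

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: probability of a 'happy' judgement at morph level x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

morph_level = np.linspace(0, 1, 7)  # 0 = clearly angry, 1 = clearly happy (invented scale)
p_happy_baseline = np.array([0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.99])
p_happy_adapted = np.array([0.05, 0.15, 0.45, 0.75, 0.92, 0.98, 1.00])  # shifted after adapting to anger

(pse_base, _), _ = curve_fit(logistic, morph_level, p_happy_baseline, p0=[0.5, 10.0])
(pse_adapt, _), _ = curve_fit(logistic, morph_level, p_happy_adapted, p0=[0.5, 10.0])
print(f"PSE baseline: {pse_base:.2f}; PSE after adaptation: {pse_adapt:.2f}; "
      f"aftereffect: {pse_base - pse_adapt:+.2f}")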