
    Diminished Medial Prefrontal Activity behind Autistic Social Judgments of Incongruent Information

    Individuals with autism spectrum disorders (ASD) tend to make inadequate social judgments, particularly when the nonverbal and verbal emotional expressions of other people are incongruent. Although previous behavioral studies have suggested that individuals with ASD have difficulty using nonverbal cues when presented with incongruent verbal-nonverbal information, the neural mechanisms underlying this symptom of ASD remain unclear. In the present functional magnetic resonance imaging study, we compared brain activity in 15 non-medicated adult males with high-functioning ASD to that of 17 age-, parental-background-, socioeconomic-, and intelligence-quotient-matched typically developed (TD) male participants. Brain activity was measured while each participant made friend-or-foe judgments of realistic movies in which professional actors spoke with conflicting nonverbal facial expressions and voice prosody. We found that the ASD group made significantly fewer judgments based primarily on the nonverbal information than the TD group, and they exhibited significantly less brain activity in the right inferior frontal gyrus, bilateral anterior insula, anterior cingulate cortex/ventral medial prefrontal cortex (ACC/vmPFC), and dorsal medial prefrontal cortex (dmPFC) than the TD group. Among these five regions, the ACC/vmPFC and dmPFC were most involved in nonverbal-information-biased judgments in the TD group. Furthermore, the degree of decrease in brain activity in these two regions predicted the severity of autistic communication deficits. The findings indicate that diminished activity in the ACC/vmPFC and dmPFC underlies the impaired ability of individuals with ASD to use nonverbal content when making judgments about other people based on incongruent social information.
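
    The group comparison above rests on a simple behavioural index: the proportion of trials on which a participant's friend-or-foe judgment followed the nonverbal rather than the verbal cue. A minimal sketch of that index; the data layout, names, and numbers are illustrative assumptions, not values from the study.

```python
from statistics import mean, stdev

def nonverbal_bias_rate(responses, nonverbal_cues):
    """Proportion of judgments that followed the nonverbal cue."""
    hits = sum(r == c for r, c in zip(responses, nonverbal_cues))
    return hits / len(responses)

# Illustrative per-participant bias rates (placeholders, not study data).
td_rates = [0.82, 0.75, 0.79, 0.88, 0.71]
asd_rates = [0.55, 0.61, 0.48, 0.66, 0.52]
print(f"TD mean  = {mean(td_rates):.2f} (SD {stdev(td_rates):.2f})")
print(f"ASD mean = {mean(asd_rates):.2f} (SD {stdev(asd_rates):.2f})")
```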

    Connections between articulations and grasping

    The idea that hand gestures and speech are connected is quite old. Some of these theories even suggest that language is primarily based on a manual communication system. In this thesis, I present four studies in which we investigated the connections between articulatory gestures and manual grasps. The work is based on an earlier finding showing systematic connections between specific articulatory gestures and grasp types. For example, uttering a syllable such as [kɑ] can facilitate power grip responses, whereas uttering a syllable such as [ti] can facilitate precision grip responses. I will refer to this phenomenon as the articulation-grip congruency effect. Similarly to the original work, we used special power and precision grip devices that the participants held in their hand to perform responses. In Study I, we measured response times and accuracy of grip responses and vocalisations to investigate whether the effect can also be observed in vocal responses, and to what extent the effect operates in action selection processes. In Study II, grip response times were measured to investigate whether the effect persists when the syllables are only heard or read silently. Study III investigated the influence of grasp planning and/or execution on the categorization of perceived syllables. In Study IV, we measured electrical brain activity while participants listened to syllables that were either congruent or incongruent with the precision or power grip, and we investigated how performing different grips affected the auditory processing of the heard syllables. The results of Study I showed that, besides the manual facilitation, the effect is also observed in vocal responses, both when a simultaneous grip is executed and when it is only prepared, meaning that overt execution is not needed for the effect. This suggests that the effect operates in action planning. In addition, the effect was also observed when the participants knew beforehand which response they should execute, suggesting that the effect is not based on action selection processes. Study II showed that the effect was also observed when the syllables were heard or read silently, supporting the view that articulatory simulation of a perceived syllable can activate the motor program of the grasp that is congruent with the syllable. Study III revealed that grip preparation can influence the categorization of perceived syllables. The participants were biased to categorize noise-masked syllables as [ke] rather than [te] when they were prepared to execute the power grip, and vice versa when they were prepared to execute the precision grip. Finally, Study IV showed that grip performance also modulates early auditory processing of heard syllables. These results support the view that articulatory and hand motor representations form a partly shared network, where activity in one domain can induce activity in the other. This is in line with earlier studies that have shown a more general linkage between mouth and manual processes, and it expands this notion of hand-mouth interaction by showing that these connections can also operate between very specific hand and articulatory gestures.

    The idea of connections between hand gestures and speech is quite old. Some theories even propose that language is primarily based on a communication system of the hands. In this dissertation, I present four studies in which we investigated the connections between articulatory gestures and grasps. The work is based on an earlier finding that revealed systematic connections between specific articulatory gestures and grasp types. For example, uttering the syllable [kɑ] speeds up the execution of a power grip, whereas uttering the syllable [ti] speeds up the execution of a precision grip. The studies of this dissertation built on this basic effect by adapting the experimental design to each research question. The results of Study I showed that the congruency effect is also observable in spoken responses. The effect was also observed when the grip was merely prepared, which suggests that the effect operates at the level of action planning. Moreover, the effect was observed even when the participants knew in advance which response they should execute, suggesting that the effect is not based on action selection processes. In Study II, the effect was observed even though the syllables were only heard or read silently. This supports the view that articulatory simulation of perceived syllables can activate the motor program of the grasp congruent with the syllable. Study III showed that hand grips can influence the categorization of perceived syllables. The participants were biased to categorize noise-masked syllables as [ke] rather than [te] when they had prepared to execute a power grip, and vice versa when they had prepared to execute a precision grip. Finally, Study IV showed that performing grips also affects the early auditory processing of perceived syllables. These results support the view that articulatory and hand motor representations form a partly shared network in which activity in one domain can also induce activity in the other. This is in line with earlier studies on the topic, which have shown more general connections between hand and mouth functions. The present results extend the idea of a hand-mouth connection by showing that these connections can also operate between very precisely delimited articulatory and hand gestures.
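
    The Study III bias amounts to comparing the proportion of [ke] reports when a power grip versus a precision grip has been prepared. A minimal sketch of that comparison; the trial layout and records are illustrative assumptions, not the study's data.

```python
# Each trial records the prepared grip and the reported syllable for a
# noise-masked [ke]/[te] stimulus. Records below are illustrative only.
def ke_rate(trials, grip):
    """Proportion of [ke] reports among trials with the given prepared grip."""
    relevant = [t for t in trials if t["grip"] == grip]
    return sum(t["report"] == "ke" for t in relevant) / len(relevant)

trials = [
    {"grip": "power",     "report": "ke"},
    {"grip": "power",     "report": "ke"},
    {"grip": "power",     "report": "te"},
    {"grip": "precision", "report": "te"},
    {"grip": "precision", "report": "ke"},
    {"grip": "precision", "report": "te"},
]

# The reported bias predicts ke_rate(power) > ke_rate(precision).
print("P(ke | power)     =", round(ke_rate(trials, "power"), 2))
print("P(ke | precision) =", round(ke_rate(trials, "precision"), 2))
```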

    Balancing the Self

    The vestibular system is composed of otolith organs and semicircular canals that encode linear and angular accelerations, as well as the position of the head with respect to gravity. Thus, the detection of self-motion, the distinction between self- and object-motion, as well as gaze stabilisation and the maintenance of postural stability are the core vestibular functions. Recent research shows that vestibular information interacts with higher-level cognitive processes, such as space perception, attention orienting, body schema and bodily self-consciousness. In order to contribute to these faculties, vestibular information is dynamically combined with visual, somatosensory and proprioceptive signals. In the present thesis we explore such multimodal interactions using a human centrifuge (rotating chair). In Part 1 we show that visual and vestibular cues are integrated in accordance with statistical optimality even when large directional conflicts are introduced between these modalities. Participants were significantly better at discriminating rotation magnitude when simultaneously presented with visual and vestibular cues than with each modality independently, despite the fact that the axes of rotation implied by the two cues were different (Study 1). We also demonstrate that visuo-vestibular integration is present and optimal in patients with unilateral vestibular loss (Study 2). Part 2 of the thesis examined vestibular-somatosensory interactions. We show that vestibular stimulation in the form of passive whole-body rotations increases tactile sensitivity at the fingertips, as compared to a no-rotation baseline (Study 3). We also demonstrate that the effect of vestibular stimulation on touch is not direct, but mediated by visual information about self-motion: visual and vestibular cues first combine, and only subsequently influence tactile sensitivity (Study 4). In Part 3 of this thesis, we explored how vestibular stimulation affects visual attention and awareness. We show that, when acting as an exogenous cue, vestibular stimulation orients attention at short cue-to-target delays. When acting as an endogenous cue, vestibular stimulation strongly orients attention at all cue-to-target delays (Study 5). Vestibular stimulation also affects visual awareness. Using a continuous flash suppression paradigm to suppress an optic flow stimulus during passive whole-body rotations, we show that optic flow that is congruent (i.e. counterdirectional) with the direction of the vestibular rotation breaks suppression faster than incongruent optic flow (Study 6). In sum, our findings refine the existing knowledge of multisensory processing in general and of vestibular interactions with other senses in particular. Our results are relevant for understanding how visual, vestibular, proprioceptive and somatosensory information is combined by the brain in order to form a coherent representation of the self in space.
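
    The "statistical optimality" tested in Studies 1-2 is the standard maximum-likelihood cue-combination rule: each cue is weighted by its reliability, and the fused estimate is less variable than either cue alone. A short sketch of that textbook prediction; the numbers are illustrative assumptions.

```python
def optimal_combination(est_vis, var_vis, est_vest, var_vest):
    """Reliability-weighted fusion of visual and vestibular estimates."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    combined_est = w_vis * est_vis + (1 - w_vis) * est_vest
    combined_var = (var_vis * var_vest) / (var_vis + var_vest)
    return combined_est, combined_var

est, var = optimal_combination(est_vis=32.0, var_vis=4.0,
                               est_vest=28.0, var_vest=9.0)
# Combined variance is below either single-cue variance, which is why
# bimodal discrimination should beat either modality alone.
print(f"combined estimate = {est:.1f} deg, variance = {var:.2f}")
```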

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
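
    The spoke-shift manipulation is purely geometric: each rectangle keeps its angular position relative to fixation while its eccentricity changes by ±1 degree of visual angle. A small sketch of that transformation; the coordinates and values are illustrative assumptions.

```python
import math, random

def shift_along_spoke(x, y, delta_deg):
    """Move a point radially (toward/away from fixation) by delta_deg."""
    ecc = math.hypot(x, y)      # current eccentricity from fixation
    theta = math.atan2(y, x)    # spoke angle is preserved
    new_ecc = ecc + delta_deg
    return new_ecc * math.cos(theta), new_ecc * math.sin(theta)

# Illustrative rectangle centres, in degrees of visual angle.
positions = [(4.0, 0.0), (2.8, 2.8), (0.0, 4.0), (-2.8, 2.8)]
shifted = [shift_along_spoke(x, y, random.choice((-1.0, 1.0)))
           for x, y in positions]
print(shifted)
```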

    THE POTENTIATION OF ACTIONS BY VISUAL OBJECTS

    This thesis examines the relation between visual objects and the actions they afford. It is proposed that viewing an object results in the potentiation of the actions that can be made towards it. The proposal is consistent with neurophysiological evidence suggesting that no clear divide exists between visual and motor representation in the dorsal visual pathway, a processing stream that neuropsychological evidence strongly implicates in the visual control of action. The experimental work presented examines motor system involvement in visual representation when no intention to perform a particular action is present. It is argued that the representation of action-relevant visual object properties, such as size and orientation, has a motor component. Thus, representing the location of a graspable object involves representations of the motor commands necessary to bring the hand to the object. The proposal was examined in a series of eight experiments that employed a Stimulus-Response Compatibility paradigm in which the relation between responses and stimulus properties was never made explicit. Subjects had to make choice reaction time responses that mimicked a component of an action that a viewed object afforded. The action-relevant stimulus property was always irrelevant to response determination and consisted of components of the reach and grasp movement. The results are not consistent with explanations based on the abstract coding of stimulus-response properties and strongly implicate the involvement of the action system. They provide evidence that merely viewing an object results in the activation of the motor patterns necessary to interact with it. The actions an object affords are an intrinsic part of its visual representation, not merely on account of the association between objects and familiar actions, but because the motor system is directly involved in the representation of visuo-spatial object properties.
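
    The compatibility analysis behind this paradigm can be made concrete: trials are classified by whether the irrelevant, afforded action component matches the required response, and the effect is the mean reaction time difference between the two classes. A minimal sketch under an assumed data layout; names and numbers are illustrative, not from the thesis.

```python
from statistics import mean

# Illustrative trials: the afforded component (e.g. which hand an object's
# handle favours) is irrelevant to the required response.
trials = [
    {"afforded": "left",  "response": "left",  "rt": 412},
    {"afforded": "left",  "response": "right", "rt": 455},
    {"afforded": "right", "response": "right", "rt": 405},
    {"afforded": "right", "response": "left",  "rt": 448},
]

compatible   = [t["rt"] for t in trials if t["afforded"] == t["response"]]
incompatible = [t["rt"] for t in trials if t["afforded"] != t["response"]]

# A positive effect = responses mimicking the afforded action are faster.
print(f"compatibility effect = {mean(incompatible) - mean(compatible):.0f} ms")
```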

    A quantitative investigation of natural head movement and its contribution to spatial orientation perception.

    Movement is ubiquitous in everyday life. As we exist in a physical world, we constantly account for our position in it relative to other physical features, both at a conscious, volitional level and an unconscious one. Our experience estimating our own position accumulates over the lifespan, and it is thought that this experience (often referred to as a prior) informs the current perception of spatial orientation. Broadly, this perception of spatial orientation is rapidly performed by the nervous system by monitoring, interpreting, and integrating sensory information from multiple sense organs. To do this efficiently, the nervous system likely represents this sensory information in a statistically optimal manner. Some of the most important information for spatial orientation perception comes from visual and vestibular sensation, which rely on sensory organs located in the head. While the statistics of natural visual and vestibular stimuli have been characterized, natural head movement and position, which likely drive correlated dynamics across head-located senses, have not. Furthermore, sensory cues essential to spatial orientation perception are directly affected by head movement specifically. It is likely that measurements of these sensory cues taken during natural behaviors sample a significant portion of the total behaviors that comprise one's prior. In this dissertation, I present work quantifying characteristics of head orientation and heading, two dimensions of spatial orientation, over long-duration recordings of natural behavior in humans. I then use these to generate priors for Bayesian modeling frameworks, which successfully predict observed patterns of orientation and heading perception bias. Given the ability to predict some patterns of bias (head roll and heading azimuth) particularly well, it is likely that our data are representative of the real behaviors that comprise the previous experience the nervous system may have. Natural head orientation and heading distributions reveal several interesting trends that open future lines of research. First, head pitch demonstrates large amounts of inter-subject variability; this is likely due to biomechanical differences, but as these remain relatively stable over the lifespan, they should bias head movements. Second, heading azimuth appears to vary significantly as a function of task. Heading azimuth distributions at low velocities (which predominantly consist of stationary activities like standing or sitting) are strongly multimodal across all subjects, while azimuth distributions at high velocities (predominantly consisting of locomotion) are unimodal with relatively low variance. Future work investigating these trends, as well as the implications these trends and data have for sensory processing and other applications, is discussed.
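
    The Bayesian logic described above can be illustrated with the simplest possible case: a Gaussian prior centred on common head orientations combined with a Gaussian sensory likelihood, yielding estimates biased toward the prior. This is a deliberately simplified sketch, not the dissertation's actual model; the Gaussian forms and all numbers are assumptions.

```python
def posterior_mean(prior_mu, prior_var, sensed, sense_var):
    """Posterior mean for a Gaussian prior times a Gaussian likelihood."""
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / sense_var)
    return w_prior * prior_mu + (1 - w_prior) * sensed

# Natural head roll clusters near upright (0 deg), so the prior mean is 0.
for true_roll in (5.0, 15.0, 30.0):
    est = posterior_mean(prior_mu=0.0, prior_var=25.0,
                         sensed=true_roll, sense_var=16.0)
    # Bias (est - true_roll) grows with eccentricity, the qualitative
    # pattern a prior-driven account predicts for roll perception.
    print(f"true {true_roll:5.1f} deg -> estimate {est:5.1f} deg")
```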

    A heuristic mathematical model for the dynamics of sensory conflict and motion sickness

    By consideration of the information-processing task faced by the central nervous system in estimating body spatial orientation and in controlling active body movement using an internal-model-referenced control strategy, a mathematical model for sensory conflict generation is developed. The model postulates a major dynamic functional role for sensory conflict signals in movement control, as well as in sensory-motor adaptation. It accounts for the role of active movement in creating motion sickness symptoms in some experimental circumstances, and in alleviating them in others. The relationship between motion sickness produced by sensory rearrangement and that resulting from external motion disturbances is explicitly defined. A nonlinear conflict-averaging model is proposed that describes dynamic aspects of experimentally observed subjective discomfort sensation and suggests resulting behaviours. The model admits several possibilities for adaptive mechanisms that do not involve internal model updating. Further systematic efforts to experimentally refine and validate the model are indicated.
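
    The model class sketched in this abstract can be caricatured in a few lines: conflict is the difference between afferent signals and internal-model predictions, and discomfort accumulates through a slow, leaky integration ("averaging") of conflict magnitude. The sketch below is a loose illustration of that idea only; the time constants, gain, and nonlinearity are assumptions, not the paper's formulation.

```python
def discomfort_trace(sensed, predicted, dt=0.1, tau=60.0, gain=1.0):
    """Integrate |conflict| through a first-order leaky accumulator."""
    d, trace = 0.0, []
    for s, p in zip(sensed, predicted):
        conflict = s - p                         # afferent minus expected
        d += dt * (gain * abs(conflict) - d / tau)  # leaky integration
        trace.append(d)
    return trace

# Constant unexpected motion: discomfort rises toward a plateau; once the
# internal model adapts (prediction matches sensation), it decays again.
sensed    = [1.0] * 600 + [1.0] * 600
predicted = [0.0] * 600 + [1.0] * 600
trace = discomfort_trace(sensed, predicted)
print(round(trace[599], 2), round(trace[-1], 2))
```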

    Early multisensory attention as a foundation for learning in multicultural Switzerland

    Traditional laboratory research on visual attentional control has largely focused on adults, treated one sensory modality at a time, and neglected factors that are a constituent part of information processing in real-world contexts. Links between visual-only attentional control and children's educational skills have emerged, but they still do not provide enough information about school learning. The present thesis addressed these gaps in knowledge through the following aims: 1) to shed light on the development of the neurocognitive mechanisms of attention engaged by multisensory objects in a bottom-up fashion, together with attentional control over visual objects in a top-down fashion; 2) to investigate the links between developing visual and multisensory attentional control and children's basic literacy and numeracy attainment; and 3) to explore how contextual factors, such as the temporal predictability of a stimulus or the semantic relationships between stimulus features, further influence attentional control mechanisms. To investigate these aims, 115 primary school children and 39 adults from the French-speaking part of Switzerland were tested on their behavioural performance on a child-friendly, multisensory version of the Folk et al. (1992) spatial cueing paradigm, while 129-channel EEG was recorded. EEG data were analysed in a traditional framework (the N2pc ERP component) and a multivariate Electrical Neuroimaging (EN) framework. Taken together, our results demonstrated that children's visual attentional control reaches adult-like levels at around 7 years of age, or 3rd grade, although children as young as 5 (at school entry) may already be sensitive to the goal-relevance of visual objects. Multisensory attentional control may develop only later. Namely, while 7-year-old children (3rd grade) can be sensitive to the multisensory nature of objects, such sensitivity may only reach an adult-like state at 9 years of age (5th grade). As revealed by EN, both bottom-up multisensory control of attention and top-down visual control of attention are supported by the recruitment of distinct networks of brain generators at each level of schooling experience. Further, at each level of schooling, the involvement of specific sets of brain generators was correlated with literacy and numeracy attainment. In adults, visual and multisensory attentional control were further jointly influenced by contextual factors. The semantic relationship between stimulus features directly influenced visual and multisensory attentional control. In the absence of such semantic links, however, it was the predictability of stimulus onset that influenced visual and multisensory attentional control. Throughout this work, the N2pc component was not sensitive to multisensory or contextual effects in adults, or even to traditional visual attention effects in children, and it was owing to EN that the mechanisms of visual and multisensory attentional control were clarified. The present thesis demonstrates the strength of combining behavioural and EEG/ERP markers of attentional control with advanced EEG analytical techniques for investigating the development of attentional control in settings that closely approximate those we encounter in everyday life.
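
    For readers unfamiliar with the N2pc referenced above: it is conventionally computed as the contralateral-minus-ipsilateral voltage difference at lateral posterior electrodes (commonly PO7/PO8), averaged over roughly 180-300 ms after stimulus onset. A minimal sketch of that conventional computation; the array layout and placeholder waveforms are assumptions, not the thesis's pipeline.

```python
import numpy as np

def n2pc(erp_po7, erp_po8, target_side, times, win=(0.18, 0.30)):
    """Contra-minus-ipsi mean amplitude in the N2pc window (volts)."""
    # PO7 is over the left hemisphere, so it is contralateral to a
    # right-side target, and vice versa.
    contra, ipsi = (erp_po7, erp_po8) if target_side == "right" \
                   else (erp_po8, erp_po7)
    mask = (times >= win[0]) & (times <= win[1])
    return (contra[mask] - ipsi[mask]).mean()

times = np.arange(-0.1, 0.5, 0.002)          # seconds, 500 Hz sampling
po7 = np.random.randn(times.size) * 1e-6     # placeholder waveforms
po8 = np.random.randn(times.size) * 1e-6
print(f"N2pc = {n2pc(po7, po8, 'right', times) * 1e6:.2f} uV")
```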

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.