
    Emotion based attentional priority for storage in visual short-term memory

    A plethora of research demonstrates that the processing of emotional faces is prioritised over non-emotive stimuli when cognitive resources are limited (this is known as ‘emotional superiority’). However, there is debate as to whether competition for processing resources results in emotional superiority per se or, more specifically, threat superiority. Therefore, to investigate prioritisation of emotional stimuli for storage in visual short-term memory (VSTM), we devised an original VSTM report procedure using schematic (angry, happy, neutral) faces in which processing competition was manipulated. In Experiment 1, display exposure time was manipulated to create competition between stimuli. Participants (n = 20) had to recall a probed stimulus from a set size of four under high (150 ms array exposure duration) and low (400 ms array exposure duration) perceptual processing competition. For the high competition condition (i.e. 150 ms exposure), results revealed an emotional superiority effect per se. In Experiment 2 (n = 20), we increased competition by manipulating set size (three versus five stimuli) whilst maintaining a constrained array exposure duration of 150 ms. Here, for the five-stimulus set size (i.e. maximal competition), only threat superiority emerged. These findings demonstrate attentional prioritisation for storage in VSTM for emotional faces. We argue that task demands modulated the availability of processing resources and consequently the relative magnitude of the emotional/threat superiority effect, with only threatening stimuli prioritised for storage in VSTM under more demanding processing conditions. Our results are discussed in light of models and theories of visual selection, and not only combine the two strands of research (i.e. visual selection and emotion) but also highlight that a critical factor in the processing of emotional stimuli is the availability of processing resources, which is further constrained by task demands.

    Does the Reading of Different Orthographies Produce Distinct Brain Activity Patterns? An ERP Study

    Orthographies vary in the degree of transparency of spelling-sound correspondence, ranging from shallow orthographies with transparent grapheme-phoneme relations to deep orthographies, in which these relations are opaque. Only a few studies have examined whether orthographic depth is reflected in brain activity, and these applied a between-language design, making it difficult to isolate the contribution of orthographic depth. In the present work this question was examined in a within-subject-and-language investigation. The participants were speakers of Hebrew, as they are skilled in reading two forms of script transcribing the same oral language: the shallow pointed script (with diacritics) and the deep unpointed script (without diacritics). Event-related potentials (ERPs) were recorded while skilled readers carried out a lexical decision task in the two forms of script. A visual non-orthographic task controlled for the visual difference between the scripts (resulting from the addition of diacritics to the pointed script only). At an early visual-perceptual stage of processing (∼165 ms after target onset), the pointed script evoked larger amplitudes with longer latencies than the unpointed script at occipital-temporal sites. However, these effects were not restricted to orthographic processing and may therefore have reflected, at least in part, the visual load imposed by the diacritics. Nevertheless, the results implied that distinct orthographic processing may also have contributed to these effects. At later stages (∼340 ms after target onset) the unpointed script elicited larger amplitudes than the pointed one, with earlier latencies. As this latency has been linked to orthographic-linguistic processing and to the classification of stimuli, it is suggested that these differences are associated with distinct lexical processing of a shallow and a deep orthography.

    Abstract sounds and their applications in audio and perception research

    Recognition of sound sources and events is an important process in sound perception and has been studied in many research domains. Conversely, sounds that cannot be recognized are rarely studied, except by electroacoustic music composers. Moreover, considerations of source recognition might help to address the problem of stimulus selection and categorization of sounds in the context of perception research. This paper introduces what we call abstract sounds, situates them within their existing musical background, and shows their relevance for different applications.

    Atypical disengagement from faces and its modulation by the control of eye fixation in children with Autism Spectrum Disorder

    Using the gap overlap task, we investigated disengagement from faces and objects in children (9–17 years old) with and without autism spectrum disorder (ASD) and its neurophysiological correlates. In typically developing (TD) children, faces elicited a larger gap effect, an index of attentional engagement, and larger saccade-related event-related potentials (ERPs), compared to objects. In children with ASD, by contrast, neither the gap effect nor the ERPs differed between faces and objects. Follow-up experiments demonstrated that instructed fixation on the eyes induced a larger gap effect for faces in children with ASD, whereas instructed fixation on the mouth disrupted the larger gap effect in TD children. These results suggest a critical role of eye fixation in attentional engagement with faces in both groups.
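    The gap effect mentioned above is conventionally computed as the difference in saccadic reaction time between the overlap condition (the fixation stimulus remains visible when the peripheral target appears) and the gap condition (the fixation stimulus is removed shortly before target onset), with larger values indicating stronger attentional engagement at fixation. The following sketch merely illustrates that standard computation on hypothetical per-trial latencies; the numbers and variable names are made up and this is not the authors' analysis code.

        import numpy as np

        # Hypothetical saccadic reaction times (ms) for one participant and one
        # stimulus category (e.g. faces); real values would come from eye tracking.
        overlap_rt = np.array([310, 295, 330, 305, 320])  # fixation stays on
        gap_rt = np.array([240, 255, 235, 250, 245])      # fixation removed early

        # Gap effect: slower disengagement in the overlap condition yields a
        # larger positive difference, taken as an index of attentional engagement.
        gap_effect = overlap_rt.mean() - gap_rt.mean()
        print(f"Gap effect: {gap_effect:.1f} ms")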

    A comparison of polarized and non-polarized human endometrial monolayer culture systems on murine embryo development

    BACKGROUND: Co-culture of embryos with various somatic cells has been suggested as a promising approach to improve embryo development. Despite numerous reports regarding the beneficial effects of epithelial cells from the female genital tract on embryo development in a co-culture system, little is known about the effect of these cells on embryo growth when they are cultured under a polarized condition. Our study evaluated the effects of in vitro polarized cells on pre-embryo development. METHODS: Human endometrial tissue was obtained from uterine specimens excised at total hysterectomy performed for benign indications. Epithelial cells were promptly isolated and cultured either on extracellular matrix gel (ECM-Gel) coated Millipore filter inserts (polarized) or on plastic surfaces (non-polarized). The epithelial nature of the cells cultured on plastic was confirmed through immunohistochemistry, and polarization of cells cultured on ECM-Gel was evaluated by transmission electron microscopy (TEM). One- or two-cell stage embryos of a superovulated NMRI mouse were then flushed and placed in culture with polarized cells, non-polarized cells, or medium alone. Development rates were determined daily for all embryos and compared statistically. At the end of the cultivation period, the trophectoderm (TE) and inner cell mass (ICM) of expanded blastocysts from each group were examined microscopically. RESULTS: Endometrial epithelial cells cultured on ECM-Gel had a highly polarized columnar shape, as opposed to the flattened shape of the cells cultured on a plastic surface. Two-cell embryos cultured on the polarized monolayer had a higher developmental rate than those cultured on non-polarized cells, although this difference was not statistically significant; however, blastocysts from the polarized monolayer had a significantly higher mean cell number than those from the non-polarized group. The development of one-cell embryos in the polarized and non-polarized groups showed no statistically significant difference. CONCLUSION: Polarized cells could improve in vitro embryo development from the two-cell stage more in terms of quality (increased blastocyst cellularity) than in terms of developmental rate.

    Chinese and Korean Characters Engage the Same Visual Word Form Area in Proficient Early Chinese-Korean Bilinguals

    A number of recent studies consistently show an area, known as the visual word form area (VWFA), in the left fusiform gyrus that is selectively responsive to visual words in alphabetic scripts as well as in logographic scripts such as Chinese characters. However, given the large difference between Chinese characters and alphabetic scripts in terms of their orthographic rules, it is not clear, at a fine spatial scale, whether Chinese characters engage the same VWFA in the occipito-temporal cortex as alphabetic scripts. We specifically compared Chinese with Korean script, with Korean serving as a good example of an alphabetic writing system that is nevertheless matched to Chinese in overall square shape. Sixteen proficient early Chinese-Korean bilinguals took part in the fMRI experiment. Four types of stimuli (Chinese characters, Korean characters, line drawings and unfamiliar Chinese faces) were presented in a block-design paradigm. By contrasting characters (Chinese or Korean) with faces, presumed VWFAs could be identified for both Chinese and Korean characters in the left occipito-temporal sulcus in each subject. The locations of the peak response points in these two VWFAs were essentially the same, and further analysis revealed a substantial overlap between the VWFA identified for Chinese and that for Korean. At the group level, there was no significant difference in the amplitude of response to Chinese and Korean characters, and the spatial patterns of response to the two scripts were similar. In addition to confirming that there is an area in the left occipito-temporal cortex that selectively responds to scripts in both Korean and Chinese in early Chinese-Korean bilinguals, our results show that these two scripts engage essentially the same VWFA, even at the level of fine spatial patterns of activation across voxels. These results suggest that similar populations of neurons are engaged in processing the different scripts within the same VWFA in early bilinguals.
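    One common way to ask whether two conditions engage the same region at the level of fine spatial patterns of activation across voxels is to correlate voxel-wise response amplitudes for the two conditions within the functionally defined VWFA. The sketch below illustrates that general idea on simulated data; it is not the authors' analysis pipeline, and all names and values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical voxel-wise response amplitudes (e.g. GLM betas) within one
        # subject's VWFA ROI, one value per voxel for each script condition.
        n_voxels = 200
        shared_pattern = rng.normal(size=n_voxels)
        chinese = shared_pattern + 0.3 * rng.normal(size=n_voxels)
        korean = shared_pattern + 0.3 * rng.normal(size=n_voxels)

        # Pearson correlation across voxels: higher values indicate more similar
        # fine-grained spatial patterns of response to the two scripts.
        r = np.corrcoef(chinese, korean)[0, 1]
        print(f"Pattern similarity (Pearson r): {r:.2f}")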

    Collaborative Brain-Computer Interface for Aiding Decision-Making

    We look at the possibility of integrating the percepts of multiple non-communicating observers as a means of achieving better joint perception and better group decisions. Our approach combines a brain-computer interface with human behavioural responses. To test these ideas under controlled conditions, we asked observers to perform a simple matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same or different. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and thus indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines the two measures. For group decisions, we used a majority rule and three rules which weight the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy when compared with the majority rule. An analysis of event-related potentials indicates that substantial differences are present in the proximity of the response for correct and incorrect trials, further corroborating the idea of using hybrids of brain-computer interfaces and traditional strategies to improve decision-making.
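    The group-decision rules compared here amount to unweighted majority voting versus voting in which each observer's response is weighted by an estimate of decision confidence (derived from response times, the neural feature, or their combination). The snippet below is a minimal sketch of that contrast with invented confidence weights; it is not the authors' implementation.

        import numpy as np

        # Hypothetical single-trial decisions from five observers (+1 = "same",
        # -1 = "different") and per-observer confidence estimates; the values
        # here are made up, whereas in the study such estimates derive from
        # response times and/or the neural feature.
        decisions = np.array([+1, -1, +1, -1, -1])
        confidence = np.array([0.9, 0.4, 0.8, 0.3, 0.4])

        # Unweighted majority rule: sign of the summed votes.
        majority = np.sign(decisions.sum())

        # Confidence-weighted rule: observers whose decisions are more likely to
        # be correct contribute more to the group decision.
        weighted = np.sign((confidence * decisions).sum())

        print(f"Majority: {majority:+.0f}, confidence-weighted: {weighted:+.0f}")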

    Face recognition and visual search strategies in autism spectrum disorders: Amending and extending a recent review by Weigelt et al.

    The purpose of this review was to build upon a recent review by Weigelt et al., which examined visual search strategies and face identification in individuals with autism spectrum disorders (ASD) compared with typically developing peers. Seven databases (CINAHL Plus, EMBASE, ERIC, Medline, ProQuest, PsycINFO and PubMed) were used to locate published scientific studies matching our inclusion criteria. A total of 28 articles not included in Weigelt et al. met the criteria for inclusion in this systematic review. Of these 28 studies, 16 were available and met criteria at the time of the previous review but were mistakenly excluded, and 12 were published more recently. Weigelt et al. found quantitative, but not qualitative, differences in face identification in individuals with ASD. In contrast, the current systematic review found both qualitative and quantitative differences in face identification between individuals with and without ASD. There is considerable inconsistency in findings across the eye-tracking and neurobiological studies reviewed. Recommendations for future research on face recognition in ASD are discussed.