    The effects of stimulus complexity on the preattentive processing of self-generated and nonself voices: an ERP study

    The ability to differentiate one's own voice from the voice of somebody else plays a critical role in successful verbal self-monitoring and in communication. However, most existing studies have focused only on the sensory correlates of self-generated voice processing, whereas the effects of attentional demands and stimulus complexity on self-generated voice processing remain largely unknown. In this study, we investigated the effects of stimulus complexity on the preattentive processing of self and nonself voice stimuli. Event-related potentials (ERPs) were recorded from 17 healthy males who watched a silent movie while ignoring prerecorded self-generated (SGV) and nonself (NSV) voice stimuli, consisting of a vocalization (vocalization category condition: VCC) or of a disyllabic word (word category condition: WCC). All voice stimuli were presented as standard and deviant events in four distinct oddball sequences. The mismatch negativity (MMN) ERP component peaked earlier for NSV than for SGV stimuli. Moreover, when compared with SGV stimuli, the P3a amplitude was increased for NSV stimuli in the VCC only, whereas in the WCC no significant differences were found between the two voice types. These findings suggest differences in the time course of automatic detection of a change in voice identity. In addition, they suggest that stimulus complexity modulates the magnitude of the orienting response to SGV and NSV stimuli, extending previous findings on self-voice processing.

    This work was supported by Grant Numbers IF/00334/2012, PTDC/PSI-PCL/116626/2010, and PTDC/MHN-PCN/3606/2012, funded by the Fundação para a Ciência e a Tecnologia (FCT, Portugal) and the Fundo Europeu de Desenvolvimento Regional through the European programs Quadro de Referência Estratégico Nacional and Programa Operacional Factores de Competitividade, awarded to A.P.P., and by FCT Doctoral Grant Number SFRH/BD/77681/2011, awarded to T.C.

    Phonological and orthographic influences in the bouba–kiki effect

    We examine a high-profile phenomenon known as the bouba–kiki effect, in which non-word names are assigned to abstract shapes in systematic ways (e.g. rounded shapes are preferentially labelled bouba over kiki). In a detailed evaluation of the literature, we show that most accounts of the effect point to predominantly or entirely iconic cross-sensory mappings between acoustic or articulatory properties of sound and shape as the mechanism underlying the effect. However, these accounts have tended to confound the acoustic or articulatory properties of non-words with another fundamental property: their written form. We compare traditional accounts of direct auditory- or articulatory-visual mapping with an account in which the effect is heavily influenced by matching between the shapes of graphemes and the abstract shape targets. The results of our two studies suggest that the dominant mechanism underlying the effect for literate subjects is matching based on aligning letter curvature with shape roundedness (i.e. non-words with curved letters are matched to round shapes). We show that letter curvature is strong enough to significantly influence word–shape associations even in auditory tasks, where written word forms are never presented to participants. However, we also find an additional phonological influence, in that voiced sounds are preferentially linked with rounded shapes, although this arises only in a purely auditory word–shape association task. We conclude that many previous investigations of the bouba–kiki effect may not have given appropriate consideration or weight to the influence of orthography among literate subjects.

    In praise of arrays

    Microarray technologies have both fascinated and frustrated the transplant community since their introduction roughly a decade ago. Fascination arose from the possibility offered by the technology to gain a profound insight into the cellular response to immunogenic injury, and from the potential that this genomic signature would be indicative of the biological mechanism by which that stress was induced. Frustrations have arisen primarily from technical factors such as data variance, the requirement for advanced statistical and mathematical analyses, and difficulties associated with actually recognizing signature gene-expression patterns and discerning mechanisms. To aid the understanding of this powerful tool, its versatility, and how it is dramatically changing the molecular approach to biomedical and clinical research, this teaching review describes the technology and its applications, as well as the limitations and evolution of microarrays, in the field of organ transplantation. Finally, it calls on the transplant community to form multidisciplinary teams in order to take advantage of this technology and its expanding applications in unraveling the complex injury circuits that currently limit transplant survival.

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior towards facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally inflected pseudo-utterance (Someone migged the pazing) spoken in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then made an immediate recall decision about the faces they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

    Aarhus Regenerative Orthopaedics Symposium (AROS)

    The combination of modern interventional and preventive medicine has led to an epidemic of ageing. While this phenomenon is a positive consequence of improved lifestyles and societal achievements, the longer life expectancy is often accompanied by a decline in quality of life due to musculoskeletal pain and disability. The Aarhus Regenerative Orthopaedics Symposium (AROS) 2015 was motivated by the need to address regenerative challenges in an ageing population by engaging clinicians, basic scientists, and engineers. In this position paper, we review our contemporary understanding of the societal, patient-related, and basic science-related challenges in order to provide a reasoned roadmap for dealing with this compelling and urgent healthcare problem.