
    IMPULSE moment-by-moment test: An implicit measure of affective responses to audiovisual televised or digital advertisements

    IMPULSE is a novel method for detecting affective responses to dynamic audiovisual content. It is an implicit reaction-time test carried out while an audiovisual clip (e.g., a television commercial) plays in the background, and it measures feelings that are congruent or incongruent with the content of the clip. The results of three experiments illustrate four advantages of IMPULSE over self-report and biometric methods: (1) it is less susceptible to the typical confounds associated with explicit measures, (2) it more readily measures deep-seated and often nonconscious emotions, (3) it can detect a broader range of emotions and feelings, and (4) it is more efficient to implement as an online method.

    Evolutionary and Cognitive Approaches to Voice Perception in Humans: Acoustic Properties, Personality and Aesthetics

    Voices are used as a vehicle for language, and variation in the acoustic properties of voices also contains information about the speaker. Listeners use measurable qualities, such as pitch and formant traits, as cues to a speaker’s physical stature and attractiveness. Emotional states and personality characteristics are also judged from vocal stimuli. The research contained in this thesis examines vocal masculinity, aesthetics and personality, with an emphasis on the perception of prosocial traits including trustworthiness and cooperativeness. I also explore themes which are more cognitive in nature, testing aspects of vocal stimuli which may affect trait attribution, memory and the ascription of identity. Chapters 2 and 3 explore systematic differences across vocal utterances, both across types of utterance using different classes of stimuli and across the time course of perception of the auditory signal. These chapters examine variation in acoustic measurements in addition to variation in listener attributions of commonly judged speaker traits. The most important result from this work was that evaluations of attractiveness made using spontaneous speech correlated with those made using scripted speech recordings, but did not correlate with those made of the same persons using vowel stimuli. This calls into question the use of sustained vowel sounds for obtaining ratings of subjective characteristics. Vowel and single-word stimuli are also quite short – while I found that attributions of masculinity were reliable at very short exposure times, more subjective traits like attractiveness and trustworthiness require a longer exposure time to elicit reliable attributions. I conclude by recommending an exposure time of at least 5 seconds for such traits to be reliably assessed. Chapter 4 examines which vocal traits affect perceptions of prosocial qualities, using both natural and manipulated variation in voices.
    While feminine pitch traits (F0 and F0-SD) were linked to cooperativeness ratings, masculine formant traits (Df and Pf) were also associated with cooperativeness. The relative importance of these traits as social signals is discussed. Chapter 5 asks what makes a voice memorable, and helps to differentiate between memory for individual voice identities and memory for the content which was spoken, by administering recognition tests both within and across sensory modalities. While the data suggest that experimental manipulation of voice pitch did not influence memory for vocalised stimuli, attractive male voices were better remembered than unattractive voices, independent of pitch manipulation. Memory for cross-modal (textual) content was enhanced by raising the voice pitch of both male and female speakers. I link this pattern of results to the perceived dominance of voices which have been raised and lowered in pitch, and to how this might influence how memories are formed and retained. Chapter 6 examines masculinity across visual and auditory sensory modalities using a cross-modal matching task. While participants were able to match voices to muted videos of both male and female speakers at rates above chance, and to static face images of men (but not women), differences in masculinity did not influence observers in their judgements, and voice and face masculinity were not correlated. These results are discussed in relation to the generally accepted theory that masculinity and femininity in faces and voices communicate the same underlying genetic quality. The biological mechanisms by which vocal and facial masculinity could develop independently are speculated upon.

    Variables influencing executive functioning in preschool hearing-impaired children implanted within 24 months of age: an observational cohort study

    Executive Functions (EFs) are fundamental to every aspect of life. The present study was implemented to evaluate factors influencing their development in a group of preschool, orally educated, profoundly deaf children of hearing parents, who received a CI within two years of age. Methods: Twenty-five preschool CI children were tested using the Battery for Assessment of Executive Functions (BAFE) to assess their flexibility, inhibition and non-verbal visuo-spatial working memory skills. The percentage of children performing in the normal range was reported for each of the EF subtests. Mann-Whitney and Kruskal-Wallis tests were performed to assess differences between gender, listening mode and parental education subgroups. The Spearman rank correlation coefficient was calculated to investigate the relationship between EF scores and audiological and linguistic variables. Results: Percentages ranging from 76% to 92% of the children reached adequate EF scores on the BAFE. Significant relations (p<0.05) were found between EFs and early intervention, listening and linguistic skills. Further, CI children from families with higher education levels performed better at the response shifting, inhibitory control and attention flexibility tasks. Economic income correlated significantly with flexibility and inhibitory skills. Females performed better than males only in the attention flexibility task. Conclusions: The present study is one of the first to focus attention on the development of EFs in preschool CI children, providing an initial understanding of the characteristics of EFs at the age when these skills emerge. Clinical practice must pay increasing attention to these aspects, which are becoming the new emerging challenge of rehabilitation programs.
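    The nonparametric analyses this abstract names (Mann-Whitney, Kruskal-Wallis, and Spearman rank correlation) can be sketched with SciPy. This is an illustrative example using synthetic scores, not the study's data; the variable names are hypothetical stand-ins for BAFE subtest results.

    ```python
    # Illustrative sketch of the nonparametric tests reported above, using
    # synthetic (hypothetical) scores -- not the study's actual data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Two-group comparison (e.g., gender subgroups): Mann-Whitney U test
    ef_group_a = rng.normal(10, 2, 12)  # hypothetical flexibility scores
    ef_group_b = rng.normal(9, 2, 13)
    u_stat, p_gender = stats.mannwhitneyu(ef_group_a, ef_group_b)

    # Three-group comparison (e.g., parental education levels): Kruskal-Wallis H test
    low = rng.normal(8, 2, 8)
    mid = rng.normal(9, 2, 9)
    high = rng.normal(10, 2, 8)
    h_stat, p_edu = stats.kruskal(low, mid, high)

    # Association between EF scores and a continuous variable
    # (e.g., age at implantation): Spearman rank correlation
    age_at_ci = rng.uniform(6, 24, 25)   # hypothetical months at implantation
    ef_scores = rng.normal(9, 2, 25)
    rho, p_rho = stats.spearmanr(age_at_ci, ef_scores)

    print(p_gender, p_edu, rho)
    ```

    Each call returns a test statistic and a p-value; with 25 children, these rank-based tests are a reasonable choice because they do not assume normally distributed scores.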

    The importance of "scaffolding" in clinical approach to deafness across the lifespan

    Throughout the present thesis, the concept of scaffolding is used as a fil rouge through the chapters. What I mean by a “scaffolding approach” is an integrated, multidisciplinary clinical and research methodology for hearing impairments that takes the person into account as a whole; an approach that must be continuously adapted and harmonized with individuals, according to their progress, limits and resources, and in consideration of their audiological, cognitive, emotional, personal, and social characteristics. The following studies by our research group are presented: a study (2020) designed to assess the effects of parent training (PT) on enhancing children’s communication development (chapter two); two studies (2016; 2020) concerning variables influencing comprehension of emotions and core executive functions in deaf children with cochlear implants (chapters three and four). Chapter five presents and describes our Mind-Active Communication program: its main topics and aims, the multidisciplinary organization of group and individual sessions, and the materials and methodology used. Finally, a preliminary evaluation exploring the effects of this multidisciplinary rehabilitative program on quality of life, psychological wellbeing, and hearing abilities in a sample of elderly cochlear-implant recipients is reported.

    Multisensory integration of musical emotion perception in singing

    We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented to them in three different modes: auditory, visual, and audio–visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio–visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness was applied as a control: the uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers’ orofacial movements are dominated by sound production, their facial expressions can communicate emotions composed into the music, and observers do not rely on the audio information alone. Studies such as ours are important for understanding multisensory integration in applied settings.

    How Psychological Stress Affects Emotional Prosody

    We explored how experimentally induced psychological stress affects the production and recognition of vocal emotions. In Study 1a, we demonstrate that sentences spoken by stressed speakers are judged by naive listeners as sounding more stressed than sentences uttered by non-stressed speakers. In Study 1b, negative emotions produced by stressed speakers were generally less well recognized than the same emotions produced by non-stressed speakers. Multiple mediation analyses suggest this poorer recognition of negative stimuli was due to a mismatch between the variation in volume produced by speakers and the range of volume expected by listeners. Together, this suggests that the stress level of the speaker affects judgments made by the receiver. In Study 2, we demonstrate that participants in whom a feeling of stress was induced before carrying out an emotional prosody recognition task performed worse than non-stressed participants. Overall, the findings suggest detrimental effects of induced stress on interpersonal sensitivity.

    Working memory in children with reading and/or mathematical disabilities

    Elementary school children with reading disabilities (RD; n = 17), mathematical disabilities (MD; n = 22), or combined reading and mathematical disabilities (RD+MD; n = 28) were compared to average achieving (AA; n = 45) peers on working memory measures. On all working memory components, 2 (RD vs. no RD) × 2 (MD vs. no MD) factorial ANCOVAs revealed clear differences between children with and without RD. Children with MD had lower span scores than the AA children on measures of the phonological loop and the central executive. A significant interaction effect between RD and MD was found only for listening recall, with a small partial effect size. In addition, analyses showed that the best logistic regression model consisted of a visuospatial and a central executive task. The model significantly distinguished between the AA and clinical groups and between the MD and RD+MD groups. Evidence was found for domain-general working memory problems in children with learning disabilities. Management of working memory loads in structured learning activities in the classroom, at home, or during therapy may help these children to cope with their problems in a more profound manner.
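    The analytic design this abstract describes, a 2 × 2 factorial model with RD and MD as crossed factors plus a logistic regression separating groups, can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical variable names, not the study's dataset or exact model.

    ```python
    # Hypothetical sketch of a 2 (RD) x 2 (MD) factorial ANCOVA and a
    # logistic-regression group classifier, on synthetic data only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 112  # total sample size in the study (17 + 22 + 28 + 45)
    df = pd.DataFrame({
        "RD": rng.integers(0, 2, n),   # reading disability: yes/no
        "MD": rng.integers(0, 2, n),   # mathematical disability: yes/no
        "age": rng.normal(9, 1, n),    # hypothetical covariate
    })
    # Synthetic listening-recall span with main effects and a small interaction
    df["span"] = (20 - 2 * df.RD - 2 * df.MD
                  - 0.5 * df.RD * df.MD + rng.normal(0, 2, n))

    # Factorial ANCOVA: main effects of RD and MD, their interaction,
    # and a continuous covariate
    model = smf.ols("span ~ C(RD) * C(MD) + age", data=df).fit()
    print(model.summary().tables[1])

    # Logistic regression distinguishing clinical from average-achieving children
    X = df[["span", "age"]]           # stand-ins for the working memory tasks
    y = (df.RD | df.MD).astype(int)   # 1 = any learning disability
    clf = LogisticRegression().fit(X, y)
    print("in-sample accuracy:", clf.score(X, y))
    ```

    The `C(RD) * C(MD)` formula term expands to both main effects plus the RD×MD interaction, mirroring the factorial design; the covariate plays the ANCOVA role.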

    Examining Relationships Between Basic Emotion Perception and Musical Training in the Prosodic, Facial, and Lexical Channels of Communication and in Music

    Research has suggested that intensive musical training may result in transfer effects from musical to non-musical domains. There is considerable research on perceptual and cognitive transfer effects associated with music, but comparatively fewer studies have examined relationships between musical training and emotion processing. Preliminary findings, though equivocal, suggest that musical training is associated with enhanced perception of emotional prosody, consistent with a growing body of research demonstrating relationships between music and speech. In addition, few studies have directly examined the relationship between musical training and the perception of emotions expressed in music, and no studies have directly evaluated this relationship in the facial and lexical channels of emotion communication. In an effort to expand on prior findings, the current study characterized emotion perception differences between musicians and non-musicians in the prosodic, lexical, and facial channels of communication and in music. A total of 119 healthy adults (18-40 years old) completed the study: 58 were musicians and 61 were controls. Participants were screened for neurological and psychiatric illness. They completed emotion perception tasks from the New York Emotion Battery (Borod, Welkowitz, & Obler, 1992) and a music emotion perception task, created for this project, using stimuli developed by Eerola and Vuoskoski (2011). They also completed multiple non-emotional control measures, as well as neuropsychological and self-report measures, in order to control for any relevant participant group differences. Parametric and non-parametric statistical procedures were employed to evaluate group differences in accuracy for each of the emotional control tasks. Parametric and non-parametric procedures were also used to evaluate whether musicians and non-musicians differed with regard to their perception of basic emotions.
    There was evidence for differences in emotion perception between musicians and non-musicians. Musicians were more accurate than non-musicians for the prosodic channel and for musical emotions. There were no group differences for the lexical or facial channels of emotion communication. When error patterns were examined, musicians and non-musicians were found to make similar patterns of misidentifications, suggesting that the two groups were processing emotions similarly. Results are discussed in the context of theories of music and speech, emotion perception processing, and learning transfer. This work serves to clarify and strengthen prior research demonstrating relationships between music and speech. It also has implications for understanding emotion perception, as well as potential clinical implications, particularly for neurorehabilitation. Lastly, this work serves to guide future research on music and emotion processing.