    Differential attribution of personality based on multi-channel presentation of verbal and nonverbal cues

    Subjects made personality judgements of stimulus persons on the basis of auditory and visual cues presented in isolation and/or in combination. In a 3 × 4 factorial design, the visual channel presented either no visual cues, photos, or video clips, whereas the auditory channel presented either transcript excerpts (content cues), electronically filtered speech (sequence cues), randomly spliced speech (frequency cues), or normal speech samples. The results show that the presence or absence of visual cues affects the attribution of conscientiousness and emotional stability. Except for some within-channel cue combinations with overlapping information content (cue generality), personality inferences seem to be cue-specific. The predictive power of these inferences for three types of personality attribution (relationship-based peer ratings, interaction-based coparticipants' ratings, and observation-based judge ratings) was explored. For some types of cues within and across channels and for some traits, cue additivity effects were found (an increase in predictive power with cue summation), whereas for some cue combinations (mostly those involving static physiognomic cues) an attenuation of predictive power seemed to result from discrepant inferences drawn from auditory and visual cues. Implications for person perception and nonverbal communication are discussed.