    I think, therefore I am: Teaching critical thinking

    Do you Hear What I See? The Voice and Face of a Talker Similarly Influence the Speech of Multiple Listeners

    Speech alignment occurs when interlocutors shift their speech to become more similar to each other. Alignment can also be found when one is asked to shadow (quickly say out loud) perceived words recorded from a model. Prior investigations of alignment have addressed whether shadowers of auditory (e.g., Goldinger, 1998) or visual (e.g., Miller, Sanchez, & Rosenblum, 2010) speech shift in the direction of a model. However, it is unknown whether multiple shadowers align to a specific model in the same way or uniquely. This dissertation addressed two questions: Are the utterances of shadowers of the same model more similar to each other than to the utterances of shadowers of a different model? Does the sensory modality of the shadowed speech affect the perceptual similarity between shadowers of the same model? Experiment Series 1 provided evidence that shadowers aligned similarly to the auditory speech of a model. In Experiment 1a, perceptual raters judged the utterances of shadowers of the same heard model as more similar than utterances from shadowers of another heard model. Experiment 1b showed that the results of Experiment 1a were due to speech-style shifts toward those of the shadowed model, and that the shadowers were not similar before exposure to the model. Acoustical analyses of the shadowed words also revealed that shadowers of the same model were more similar to each other along some acoustic dimensions than they were to shadowers of a different model. The articulatory dimensions underlying these acoustic similarities could also potentially be perceived in visible articulation, suggesting that the results of Experiment 1a might also hold for shadowers of visual speech (lip-reading). Experiment Series 2 provided evidence that shadowers aligned similarly to the visual speech of a specific model.
In Experiment 2a, perceptual raters judged the utterances of shadowers of the same lip-read model as more similar than the shadowed utterances of the other lip-read model. Experiment 2b compared the auditorily and visually shadowed speech of shadowers of the same or a different model. Utterances of multiple shadowers of the same model were judged as more similar than those of shadowers of another model, regardless of whether the model's speech was shadowed auditorily or visually. These results suggest that shadowers align to similar properties of a specific model's speech even when doing so based on different modalities. Implications for episodic encoding and gestural theories are discussed.

    The relationship between personality characteristics and creativity on judgments of facial attractiveness

    Reserve, Symptoms, Sex and Outcome Following a Single Sports-Related Concussion

    The Role of Encoding Specificity in Incidental Learning: Implications for Explicit and Implicit False Memories

    Experience with a talker can transfer across modalities to facilitate lipreading

    Rosenblum, Miller, and Sanchez (Psychological Science, 18, 392-396, 2007) found that subjects first trained to lip-read a particular talker were then better able to perceive the auditory speech of that same talker, as compared with that of a novel talker. This suggests that the talker experience a perceiver gains in one sensory modality can be transferred to another modality to make that speech easier to perceive. An experiment was conducted to examine whether this cross-sensory transfer of talker experience could occur (1) from auditory to lip-read speech, (2) with subjects not screened for adequate lipreading skill, (3) when both a familiar and an unfamiliar talker are presented during lipreading, and (4) for both old (presentation-set) and new words. Subjects were first asked to identify a set of words from a talker. They were then asked to perform a lipreading task with two faces, one of which belonged to the same talker they had heard in the first phase of the experiment. Results revealed that subjects who lip-read the same talker they had heard performed better than those who lip-read a different talker, regardless of whether the words were old or new. These results add further evidence that learning of amodal talker information can facilitate speech perception across modalities, and also suggest that this information is not restricted to previously heard words.

    Abstract social categories facilitate access to socially skewed words.

    Recent work has shown that listeners process words faster when said by a member of the group that typically uses the word. This paper further explores how the social distributions of words affect lexical access by examining whether access is facilitated by invoking more abstract social categories. We conducted four experiments, all of which combine an Implicit Association Task with a Lexical Decision Task. Participants sorted real and nonsense words while at the same time sorting older and younger faces (Exp. 1), male and female faces (Exp. 2), stereotypically male and female objects (Exp. 3), and framed and unframed objects, which were always stereotypically male or female (Exp. 4). Across the experiments, lexical decision to socially skewed words was facilitated when the socially congruent category was sorted with the same hand. This suggests that the lexicon contains social detail from which individuals make social abstractions that can influence lexical access.

    Is speech alignment to talkers or tasks?

    Speech alignment, the tendency of individuals to subtly imitate each other’s speaking style, is often assessed by comparing a subject’s baseline and shadowed utterances to a model’s utterances, often through perceptual ratings. These comparisons provide information about whether a change occurred in a subject’s speech, but do not indicate whether that change was towards the specific shadowed model. Three studies investigated whether alignment is specific to a shadowed model. Experiment 1 used the classic baseline-to-shadowed comparison to confirm that subjects did, in fact, sound more like their model when they shadowed, relative to any pre-existing similarities between a subject and model. Experiment 2 tested whether subjects’ utterances sounded more similar to the model they had shadowed or to another, unshadowed model. Experiment 3 examined whether subjects’ utterances sounded more similar to the model they had shadowed or to another subject who shadowed a different model. The results of all experiments revealed that subjects sounded more similar to the model they had shadowed. This suggests that shadowing-based speech alignment is not just a change; it is a change in the direction of the shadowed model, specifically.