188 research outputs found

    Communication responses to positive or neutral facial expressions between the genders

    The purpose of this thesis was to determine whether college students would respond communicatively to a nonverbal stimulus, either eye contact alone or eye contact paired with smiling, from a member of the same or opposite gender. The findings may have implications for how successful nonverbal communication can be. One hundred sixty students, divided into eight groups, were approached at random by either a female or a male stimulus provider displaying either a positive or a neutral facial expression, and their natural responses were recorded. The responses were coded on a Likert scale and analyzed with a three-way ANOVA. The data allow this researcher to reject the second null hypothesis and accept the alternative hypothesis that the gender of the person providing the nonverbal stimulus affects how many responses participants give. A significant difference was found when the female provided the nonverbal communication stimuli.
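    To make the analysis concrete, the following is a minimal Python sketch of a three-way ANOVA over Likert-coded responses. The factor names, cell sizes, and simulated data are assumptions for illustration only; the study's actual coding scheme and data are not reproduced here.

```python
# Minimal sketch: three-way ANOVA on Likert-coded responses.
# Factors (all hypothetical names): stimulus gender x participant
# gender x facial expression, 2 x 2 x 2 = 8 groups of 20 (N = 160).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for sg in ("female", "male"):
    for pg in ("female", "male"):
        for ex in ("positive", "neutral"):
            for _ in range(20):  # 20 participants per cell
                rows.append({
                    "stimulus_gender": sg,
                    "participant_gender": pg,
                    "expression": ex,
                    "response": rng.integers(1, 6),  # Likert code 1-5
                })
df = pd.DataFrame(rows)

# Full factorial model: main effects plus all interactions.
model = smf.ols(
    "response ~ C(stimulus_gender) * C(participant_gender) * C(expression)",
    data=df,
).fit()
print(anova_lm(model, typ=2))
```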

    Bias, the Brain, and Student Evaluations of Teaching

    Development and evaluation of an interactive virtual audience for a public speaking training application

    Introduction: Fear of public speaking is the most common social fear. Virtual reality (VR) training applications are a promising tool to improve public speaking skills. To be successful, applications should feature high scenario fidelity, and one way to improve it is to implement realistic speaker-audience interactive behavior. Objective: The study aimed to develop and evaluate a realistic and interactive audience for a VR public speaking training application. First, an observation study on real speaker-audience interactive behavior patterns was conducted. Second, the identified patterns were implemented in the VR application. Finally, an evaluation study assessed users' perceptions of the training application. Observation Study (1): Because of the lack of data on real speaker-audience interactive behavior, the first research question was "What speaker-audience interaction patterns can be identified in real life?". A structured, non-participant, overt observation study was conducted: a real audience was video recorded and the content analyzed. The sample yielded N = 6,484 observed interaction patterns. It was found that speakers initiate dialogues more often than audience members do, and the analysis showed how audience members react to speakers' facial expressions and gestures. Implementation Study (2): To find efficient ways of implementing the results of the observation study in the training application, the second research question was formulated as "How can speaker-audience interaction patterns be implemented into the virtual public speaking application?". The hardware setup comprised a CAVE, Infitec glasses, and ART head tracking. The software was realized with 3D-Excite RTT DeltaGen 12.2. Several possible technical solutions were explored systematically until efficient ones were found. As a result, self-created audio recognition, Kinect motion recognition, Affectiva facial recognition, and manual question generation were implemented to provide interactive audience behavior in the public speaking training application. Evaluation Study (3): To find out whether the implemented interactive behavior patterns met users' expectations, the third research question was formulated as "How does the interactivity of a virtual public speaking application affect user experience?". An experimental, cross-sectional user study was conducted with N = 57 participants (65% men, 35% women; Mage = 25.98, SD = 4.68) who used either an interactive or a non-interactive VR application condition. Results revealed a significant difference in users' perception of the two conditions.
    General Conclusions: Speaker-audience interaction patterns observable in real life were incorporated into a VR application that helps people overcome the fear of public speaking and train their public speaking skills. The findings showed the high relevance of interactivity for VR public speaking applications. Although questions from the audience were still regulated manually by an operator, the newly designed audience could interact with the speakers, so the presented VR application is of potential value in helping people train their public speaking skills. Two limitations remain: the audience questions were operator-controlled, and the study was conducted with participants not suffering from high degrees of public speaking fear. Future work may use more advanced technology, such as speech recognition, 3D recordings, or live 3D streams of an actual person, and include participants with high degrees of public speaking fear.
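    The abstract names the recognizers but not how their outputs drive the virtual audience. Below is a rough Python sketch of the general pattern such a design implies: independent recognizer signals polled per frame and mapped to an audience reaction. All class names, signal names, and thresholds are assumptions, not the study's actual implementation, which ran on a CAVE with 3D-Excite RTT DeltaGen 12.2.

```python
# Hypothetical sketch: fuse per-frame speaker signals (audio, motion,
# facial expression) into a single audience behavior. Every name and
# threshold here is invented for illustration.
from dataclasses import dataclass
import random

@dataclass
class SpeakerState:
    audio_level: float   # e.g., from an audio-recognition component
    is_gesturing: bool   # e.g., from Kinect-style motion recognition
    expression: str      # e.g., "smile" / "neutral" from facial recognition

def pick_audience_reaction(state: SpeakerState) -> str:
    """Map fused speaker signals to one audience behavior for this frame."""
    if state.audio_level < 0.1:
        return "look_around"        # long silence: audience disengages
    if state.expression == "smile":
        return "smile_back"         # mirror positive facial expressions
    if state.is_gesturing:
        return "nod"                # acknowledge gestures
    return random.choice(["idle", "nod"])  # low-level idle variation

# Per-frame loop: poll recognizers, update each virtual audience member.
frame = SpeakerState(audio_level=0.4, is_gesturing=True, expression="neutral")
print(pick_audience_reaction(frame))  # -> "nod"
```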

    Do Women Adapt More? The Gender Effects of the Speaker on Classroom Speech Evaluations

    The presentational style of female speakers and their connection with audience members was examined. Results were gathered from 95 male and female subjects across five sections of speech communication classes. The rating scale completed by the subjects covered traits such as organization, language, material, delivery, analysis, and voice. The results indicated that the female presentational style connects and adapts to the audience more than the male style does.
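    As a minimal illustration of this kind of comparison, the sketch below contrasts mean trait ratings by speaker gender with a two-sample t-test; the ratings, trait choice, and group sizes are invented for illustration only.

```python
# Hypothetical sketch: compare mean speech-evaluation ratings by
# speaker gender. The scores below are invented, not study data.
from scipy import stats

female_ratings = [4.2, 4.5, 3.9, 4.4, 4.1]  # e.g., "delivery" trait scores
male_ratings = [3.8, 4.0, 3.7, 4.1, 3.6]

res = stats.ttest_ind(female_ratings, male_ratings)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```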

    Volume XI, 1984 Speech Association of Minnesota Journal

    Complete digitized volume (volume 11, 1984) of Speech Association of Minnesota Journal

    Correlates of the joint attention disturbance in autism

    Deficits in joint attention, imitation, and pretense are believed to contribute to subsequent difficulty in the development of a theory of mind in children with autism (Baron-Cohen, 1991; Mundy, 1995). Joint attention and other early social skills of children with autism (34 male, 4 female; ages 4 to 18 years) were correlated with measures of nonverbal cognitive ability (Leiter International Performance Scale), receptive and expressive language skills (Peabody Picture Vocabulary Test-Revised and Expressive One-Word Picture Vocabulary Test-Revised), and the severity of autism (Childhood Autism Rating Scale) to gain a better understanding of these developmental relationships. Joint attention and other early social skills were measured with the Social Interest Inventory (SII), a questionnaire developed for this study and completed by parents and teachers. Subjects with autism at all levels of cognitive and language ability were found to have deficits in joint attention, imitation, and pretense. Joint attention deficits were not correlated with the subjects' language acquisition or cognitive ability, a deviation from the typical course of development. However, deficits in joint attention, imitation, and pretense showed significant correlations with the overall severity of autism. Students with autism reportedly engage in significantly higher levels of instrumental than social communication, and parents tend to rate their children somewhat higher than teachers on several SII measures. Joint attention deficits may have a more profound effect on how language and cognitive skills are used by children with autism than on how they are acquired. Interventions that focus primarily on the cognitive and language abilities of children with autism may overlook more basic social skills, such as joint attention, which may warrant more direct intervention.
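    For readers unfamiliar with this kind of correlational design, here is a minimal Python sketch of correlating a joint-attention score against the cognitive, language, and severity measures; the scores and column names are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: correlate an SII joint-attention score with
# cognitive, language, and severity measures. All values are invented.
import pandas as pd

df = pd.DataFrame({
    "sii_joint_attention": [12, 18, 9, 22, 15, 11, 20, 14],   # parent-report SII
    "leiter_nonverbal_iq": [85, 102, 70, 110, 95, 78, 105, 90],
    "ppvt_receptive_lang": [60, 88, 45, 95, 72, 55, 90, 68],
    "cars_severity": [38, 30, 42, 28, 33, 40, 29, 35],
})

# Pearson correlations of joint attention with every other measure.
print(df.corr(method="pearson")["sii_joint_attention"])
```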

    Competency-based education: teaching and assessing oral communication in Fairbanks, Alaska high schools

    Thesis (M.A.) University of Alaska Fairbanks, 2000. Nationwide developments in the area of educational standards and accountability have produced a movement toward competency-based education, in which teachers are increasingly tasked with facilitating the competencies within these developing standards. As a result, professionals in the Communication discipline have an opportunity to apply their knowledge of effective communication practices to benefit students and teachers. The first phase of this study examined State and local educational standards in the areas of speaking, listening, and group communication. The local and State standards identified as most closely aligned with standards developed by Communication professionals served as the basis for a questionnaire used in the study's second phase of interviews, which determined how local high school teachers operationalized and assessed these competencies in their classroom curricula. Results indicated that while speaking competencies were the most clearly defined and assessed in the classroom, listening and group communication competencies were in need of further clarification.

    TV Politics: Seeing More than We Want, Knowing Less than We Need

    Joshua Meyrowitz is Professor of Communication at the University of New Hampshire. This essay is adapted from a Keynote Address given at the Third Annual Media Studies Symposium at Sacred Heart University on November 3, 1996. A more detailed version of the Agran campaign case study appears in the author's article, "Visible and Invisible Candidates: A Case Study in 'Competing Logics' of Campaign Coverage," Political Communication, 11, No. 2 (1994), pp. 145-64.