
    Automated cognitive presence detection in online discussion transcripts

    In this paper we present the results of an exploratory study that examined the use of text mining and text classification for the automation of content analysis of discussion transcripts in the context of distance education. We used the Community of Inquiry (CoI) framework and focused on the content analysis of the cognitive presence construct, given its central position within the CoI model. Our results demonstrate the potential of the proposed approach: the developed classifier achieved 58.4% accuracy and a Cohen's kappa of 0.41 on the 5-category classification task. We analyze different classification features and describe the main problems and lessons learned from the development of such a system. Furthermore, we analyzed several novel classification features based on the specifics of the cognitive presence construct, and our results indicate that some of them significantly improve classification accuracy.
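    The abstract reports Cohen's kappa alongside raw accuracy because kappa corrects observed agreement for the agreement expected by chance, which matters in multi-class tasks with uneven label distributions. A minimal, self-contained sketch of the statistic (not the paper's code; the labels below are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two label sequences, chance-corrected."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if both sides labelled independently,
    # each following its own marginal label distribution.
    count_a = Counter(rater_a)
    count_b = Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

    A kappa of 1.0 means perfect agreement, 0.0 means chance-level agreement, so the reported 0.41 indicates moderate agreement between the classifier and the human coding.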

    First impressions: A survey on vision-based apparent personality trait analysis

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most widely considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push research in the field. Peer Reviewed. Postprint (author's final draft).

    Pediatric residents' use of jargon during counseling about newborn genetic screening results

    OBJECTIVE. The goal was to investigate pediatric residents’ usage of jargon during discussions about positive newborn screening test results. METHODS. An explicit-criteria abstraction procedure was used to identify jargon usage and explanations in transcripts of encounters between residents and standardized parents of a fictitious infant found to carry cystic fibrosis or sickle cell hemoglobinopathy. Residents were recruited from a series of educational workshops on how to inform parents about positive newborn screening test results. The time lag from jargon words to explanations was measured by using “statements,” each of which contained 1 subject and 1 predicate. RESULTS. Duplicate abstraction revealed a reliability κ of 0.92. The average number of unique jargon words per transcript was 20; the total jargon count was 72.3 words. There was an average of 7.5 jargon explanations per transcript, but the explained/total jargon ratio was only 0.17. When jargon was explained, the average time lag from the first usage to the explanation was 8.2 statements. CONCLUSION. The large number of jargon words and the small number of explanations suggest that physicians’ counseling about newborn screening may be too complex for some parents.
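    The two per-transcript quantities the abstract reports, unique jargon count and the explained/total jargon ratio, can be sketched as follows. This is a hypothetical illustration, not the study's abstraction procedure; the word list and inputs are invented for the example:

```python
# Illustrative jargon lexicon; the study's explicit criteria are not reproduced here.
JARGON = {"hemoglobinopathy", "heterozygous", "electrophoresis", "allele"}

def jargon_metrics(transcript_words, explained_words):
    """Return (unique jargon words used, fraction of those that were explained)."""
    used = {w.lower().strip(".,;!?") for w in transcript_words} & JARGON
    explained = used & {w.lower() for w in explained_words}
    ratio = len(explained) / len(used) if used else 0.0
    return len(used), ratio
```

    Applied to every transcript and averaged, metrics of this shape yield figures like the study's 20 unique jargon words and 0.17 explained/total ratio.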

    E-Drama: Facilitating Online Role-play using an AI Actor and Emotionally Expressive Characters.

    This paper describes a multi-user role-playing environment, e-drama, which enables groups of people to converse online in scenario-driven virtual environments. The starting point of this research – edrama – is a 2D graphical environment in which users are represented by static cartoon figures. An application has been developed to integrate the existing edrama tool with several new components that support avatars with emotionally expressive behaviours, rendered in a 3D environment. The functionality includes the extraction of affect from open-ended improvisational text. The results of the affective analysis are then used to: (a) control an automated improvisational AI actor – EMMA (emotion, metaphor and affect) – that operates a bit-part character in the improvisation; (b) drive the animations of avatars using the Demeanour framework in the user interface so that they react bodily in ways consistent with the affect they are expressing. Finally, we describe user trials demonstrating that these changes improve the quality of social interaction and users’ sense of presence. Moreover, our system has the potential to enrich classroom education for young people with or without learning disabilities by providing efficient, personalised social skill, language and career training around the clock via role-play, together with automatic monitoring.
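    The affect-extraction step described above can be illustrated with a deliberately simple lexicon lookup. This is a hypothetical sketch in the general spirit of such systems, not the EMMA implementation; the cue words and labels are invented for the example:

```python
# Hypothetical cue-word lexicon mapping tokens to affect labels.
AFFECT_LEXICON = {
    "happy": "joy", "great": "joy", "yay": "joy",
    "angry": "anger", "hate": "anger",
    "scared": "fear", "afraid": "fear",
    "sad": "sadness", "cry": "sadness",
}

def detect_affect(utterance):
    """Return the affect label whose cue words occur most often,
    or 'neutral' if no cue word is found."""
    scores = {}
    for token in utterance.lower().split():
        label = AFFECT_LEXICON.get(token.strip(".,!?"))
        if label:
            scores[label] = scores.get(label, 0) + 1
    return max(scores, key=scores.get) if scores else "neutral"
```

    In a pipeline like the one described, a label such as "joy" would then steer both the AI actor's bit-part responses and the avatar's expressive animation; real systems replace the lexicon lookup with far richer linguistic analysis.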