
    How Supervisors Influence Performance: A Multilevel Study of Coaching and Group Management in Technology-Mediated Services

    This multilevel study examines the role of supervisors in improving employee performance through the use of coaching and group management practices. It examines the individual and synergistic effects of these management practices. The research subjects are call center agents in highly standardized jobs, and the organizational context is one in which calls, or task assignments, are randomly distributed via automated technology, providing a quasi-experimental approach in a real-world context. Results show that the amount of coaching an employee received each month predicted objective performance improvements over time. Moreover, workers exhibited higher performance where their supervisor emphasized group assignments and group incentives and where technology was more automated. Finally, the positive relationship between coaching and performance was stronger where supervisors made greater use of group incentives, where technology was less automated, and where technological changes were less frequent. Implications and potential limitations of the present study are discussed.
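    For readers curious about the shape of such an analysis, the following is a minimal sketch in Python of a mixed-effects model with the kind of cross-level interaction the abstract describes (coaching moderated by supervisor-level group incentives). Column names and the data file are illustrative assumptions, not the authors' actual specification or materials.

```python
# Hypothetical sketch of a multilevel model: monthly coaching predicting
# agent performance, with supervisor-level moderators. All variable names
# are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format panel: one row per agent-month, grouped by supervisor.
df = pd.read_csv("call_center_panel.csv")

# Random intercepts for supervisors; the coaching x group-incentive term
# captures the cross-level interaction reported in the abstract.
model = smf.mixedlm(
    "performance ~ coaching_hours * group_incentives + automation_level + month",
    data=df,
    groups=df["supervisor_id"],
)
result = model.fit()
print(result.summary())
```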

    On the Differences Between Practical and Cognitive Presumptions

    The study of presumptions has intensified in argumentation theory in recent years. Although scholars put forward different accounts, they mostly agree that presumptions can be studied in deliberative and epistemic contexts, have distinct contextual functions (guiding decisions vs. acquiring information), and promote different kinds of goals (non-epistemic vs. epistemic). Accordingly, there are "practical" and "cognitive" presumptions. In this paper, I show that the differences between practical and cognitive presumptions go far beyond contextual considerations. The central aim is to explore Nicholas Rescher's contention that both types of presumptions have a closely analogous pragmatic function, i.e., that practical and cognitive presumptions are made to avoid greater harm in circumstances of epistemic uncertainty. By comparing schemes of practical and cognitive reasoning, I show that Rescher's contention requires qualifications. Moreover, not only do practical and cognitive presumptions have distinct pragmatic functions, but they also perform different dialogical functions (enabling progress vs. preventing regress) and, in some circumstances, cannot be defeated by the same kinds of evidence. Hence, I conclude that the two classes of presumptions merit distinct treatment in argumentation theory.

    Argumentation models and their use in corpus annotation: practice, prospects, and challenges

    The study of argumentation is transversal to several research domains, from philosophy to linguistics, from law to computer science and artificial intelligence. In discourse analysis, several distinct models have been proposed to harness argumentation, each with a different focus or aim. To analyze the use of argumentation in natural language, several corpus annotation efforts have been carried out, grounded more or less explicitly in one of these theoretical argumentation models. Indeed, given the recent growing interest in argument mining applications, argument-annotated corpora are crucial to train machine learning models in a supervised way. However, the proliferation of such corpora has led to a wide disparity in the granularity of the argument annotations employed. In this paper, we review the most relevant theoretical argumentation models, after which we survey argument annotation projects closely following those theoretical models. We also highlight the main simplifications that are often introduced in practice. Furthermore, we look at other annotation efforts that are less theoretically grounded and instead follow a shallower approach. It turns out that most argument annotation projects make their own assumptions and simplifications, both in terms of the textual genre they focus on and in terms of adapting the adopted theoretical argumentation model for their own agenda. Issues of compatibility among argument-annotated corpora are discussed by looking at the problem from a syntactical, semantic, and practical perspective. Finally, we discuss current and prospective applications of models that take advantage of argument-annotated corpora.
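    To make the granularity issue concrete, here is a minimal sketch of an argument-annotation record in Python. The component and relation types are assumptions drawn from commonly used schemes (claim/premise, support/attack), not the format of any specific corpus surveyed in the paper.

```python
# Toy argument-annotation schema; roles and relation labels are illustrative.
from dataclasses import dataclass

@dataclass
class ArgumentComponent:
    span: tuple[int, int]   # character offsets in the source text
    role: str               # e.g. "claim" or "premise"

@dataclass
class ArgumentRelation:
    source: int             # index of the supporting/attacking component
    target: int             # index of the component it points to
    relation: str           # e.g. "support" or "attack"

# Two corpora annotating the same sentence can differ simply in how finely
# they segment components, one source of the incompatibility discussed above.
components = [
    ArgumentComponent((0, 42), "claim"),
    ArgumentComponent((43, 118), "premise"),
]
relations = [ArgumentRelation(source=1, target=0, relation="support")]
print(components, relations)
```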

    A network model of interpersonal alignment in dialog

    In dyadic communication, both interlocutors adapt to each other linguistically, that is, they align interpersonally. In this article, we develop a framework for modeling interpersonal alignment in terms of the structural similarity of the interlocutors’ dialog lexica. This is done by means of so-called two-layer time-aligned network series, that is, a time-adjusted graph model. The graph model is partitioned into two layers, so that the interlocutors’ lexica are captured as subgraphs of an encompassing dialog graph. Each constituent network of the series is updated utterance-wise. Thus, both the inherent bipartition of dyadic conversations and their gradual development are modeled. The notion of alignment is then operationalized within a quantitative model of structure formation based on the mutual information of the subgraphs that represent the interlocutors’ dialog lexica. By adapting and further developing several models of complex network theory, we show that dialog lexica evolve as a novel class of graphs that have not been considered before in the area of complex (linguistic) networks. Additionally, we show that our framework allows for classifying dialogs according to their alignment status. To the best of our knowledge, this is the first approach to measuring alignment in communication that explores the similarities of graph-like cognitive representations. Keywords: alignment in communication; structural coupling; linguistic networks; graph distance measures; mutual information of graphs; quantitative network analysis.
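    The following toy sketch in Python illustrates the core idea: each interlocutor's dialog lexicon as a co-occurrence subgraph of a shared dialog graph, with alignment scored via mutual information across the two layers. The MI estimate here (over joint degrees of shared words) is a crude stand-in for the paper's measure, chosen only to make the idea concrete; the sample utterances are invented.

```python
# Toy two-layer dialog lexicon model; the alignment score is a simplified
# proxy, not the paper's actual measure.
import networkx as nx
from sklearn.metrics import mutual_info_score

def lexicon_graph(utterances):
    """Build a co-occurrence graph: words linked if used in one utterance."""
    g = nx.Graph()
    for utt in utterances:
        words = utt.lower().split()
        g.add_nodes_from(words)
        g.add_edges_from((a, b) for i, a in enumerate(words) for b in words[i + 1:])
    return g

speaker_a = ["the red block goes left", "move the red block up"]
speaker_b = ["ok the red block moves up", "now the blue block"]

layer_a, layer_b = lexicon_graph(speaker_a), lexicon_graph(speaker_b)

# Joint degree profile over the vocabulary shared by both layers.
shared = sorted(set(layer_a) & set(layer_b))
deg_a = [layer_a.degree(w) for w in shared]
deg_b = [layer_b.degree(w) for w in shared]
print("alignment score (MI over joint degrees):", mutual_info_score(deg_a, deg_b))
```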

    University faculty research training and performance: A case from Peru

    The present study aims to understand faculty research training in private universities in Lima, Peru. The study was guided by an interpretive paradigm, under a critical-reflexive approach based on introspection of experience. The methodology was qualitative, following Hegel's dialectic of thesis, antithesis, and synthesis. The social actors were faculty from Peruvian universities chosen under different selection criteria; sixteen professors were selected as key informants. Data were collected through assertive communication strategies, using participant observation, focus groups, and interviews as techniques. In addition, the study used different research instruments to gather information, such as descriptive record sheets, reflection scripts, and questionnaires, with video recording always serving as a key resource. Data analysis was performed with ATLAS.ti software using both content analysis and grounded theory. Across the observed dimensions, research has not been promoted as a core element: research is predominantly done from a positivist paradigm, and universities have updated their policies to promote research dissemination with an emphasis on publication in indexed journals. The results reveal transcendental changes in research training arising from the multidimensionality of knowledge. Hence, private universities must promote research training as the backbone of research work within their strategic research plans, so that the entire university community can investigate multiple social realities from different epistemic perspectives.

    Defining and Assessing Critical Thinking: toward an automatic analysis of HiEd students’ written texts

    The main goal of this PhD thesis is to test, through two empirical studies, the reliability of a method for automatically assessing Critical Thinking (CT) manifestations in Higher Education students’ written texts. The empirical studies were based on a critical literature review proposing a new classification to systematise different CT definitions and their related theoretical approaches. The review also investigates the relationship between the adopted CT definitions and CT assessment methods, highlighting the need to focus on open-ended measures for CT assessment and to develop automatic tools based on Natural Language Processing (NLP) techniques to overcome current limitations of open-ended measures, such as scoring reliability and cost. Based on a rubric developed and implemented by the Center for Museum Studies – Roma Tre University (CDM) research group for the evaluation and analysis of CT levels within open-ended answers (Poce, 2017), an NLP prototype for the automatic measurement of CT indicators was designed. The first empirical study, carried out on a group of 66 university teachers, showed satisfactory reliability levels for the CT evaluation rubric, while the evaluation carried out by the prototype was not yet sufficiently reliable. The results were used to understand how and under what conditions the model works better. The second empirical investigation aimed to understand which NLP features are most associated with six CT sub-dimensions as assessed by human raters in essays written in Italian. The study used a corpus of 103 pre-post essays by students who attended a Master's Degree module in “Experimental Education and School Assessment”. Within the module, two activities were proposed to stimulate students' CT: Open Educational Resources (OERs) assessment (mandatory and online) and OERs design (optional and blended). The essays were assessed both by expert evaluators, considering six CT sub-dimensions, and by an algorithm that automatically calculates different kinds of NLP features. The study shows positive internal reliability and medium-to-high inter-coder agreement in the expert evaluation. Students' CT levels improved significantly in the post-test. Three NLP indicators correlate significantly with the CT total score: corpus length, syntax complexity, and an adapted measure of term frequency–inverse document frequency (tf-idf). The results collected during this PhD have both theoretical and practical implications for CT research and assessment. From a theoretical perspective, this thesis shows unexplored similarities among different CT traditions, perspectives, and study methods. These similarities could be exploited to open an interdisciplinary dialogue among experts and build a shared understanding of CT. Automatic assessment methods can enhance the use of open-ended measures for CT assessment, especially in online teaching: they can help teachers and researchers deal with the growing volume of linguistic data produced within educational platforms (e.g. Learning Management Systems). To this end, it is pivotal to develop automatic methods for the evaluation of large amounts of data that would be impossible to analyse manually, providing teachers and evaluators with support for monitoring and assessing the competences students demonstrate online.
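    As a minimal sketch of the kinds of indicators the thesis reports as correlating with CT scores (corpus length, syntax complexity, tf-idf), the Python snippet below computes crude versions of each and their correlation with placeholder expert ratings. The essay texts, scores, and the syntax-complexity proxy are assumptions for illustration; the thesis's actual rubric and algorithm are not reproduced here.

```python
# Illustrative NLP feature extraction; all inputs are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.stats import pearsonr
import numpy as np

essays = [
    "Open educational resources require careful evaluation of sources.",
    "The resource is good.",
    "Evaluating evidence, weighing counterarguments, and citing sources matter.",
]
ct_scores = [4.0, 1.5, 4.5]  # placeholder expert ratings

# Feature 1: corpus length (token count per essay).
length = [len(e.split()) for e in essays]

# Feature 2: naive syntax-complexity proxy (mean words per comma-delimited clause).
complexity = [np.mean([len(c.split()) for c in e.split(",")]) for e in essays]

# Feature 3: mean tf-idf weight per essay.
tfidf = TfidfVectorizer().fit_transform(essays)
mean_tfidf = tfidf.mean(axis=1).A1

for name, feat in [("length", length), ("complexity", complexity), ("tf-idf", mean_tfidf)]:
    r, p = pearsonr(feat, ct_scores)
    print(f"{name}: r={r:.2f} (p={p:.2f})")
```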

    Inside the “black box” of the antibody test: deconstructing the official classification of “risk” in test algorithms used for identifying the Human Immunodeficiency Virus

    Get PDF
    This paper interrogates the last 20 years of the British experience of using official antibody test algorithms to detect the human immunodeficiency virus (HIV). Case definitions of the Acquired Immunodeficiency Syndrome (AIDS) cite antibody test methodologies licensed since 1985 for screening purposes and derived from laboratory identification of HIV. Two common (yet surrogate) methodologies are the enzyme-linked immunosorbent assay (ELISA) and the Western blot (WB), both used for screening human populations. Test manufacturers publicise the interpretative flexibility of these tests, which may produce false or indeterminate results, given that laboratory identification of HIV is cited as problematic, time-intensive, and reliant on surrogate techniques. Globally, public health officials publish differing algorithms for testing human subjects. The paper shows how these algorithms, whilst aiming to balance test specificity/sensitivity, are based on perceptions of ‘risk’ of exposure determined during pre-test dialogue: how the test subject is positioned as ‘high’/‘low’ risk and within a hierarchy of exposure categories. The interpretation of indeterminate results is problematic given the possibility of false results, which are ruled out by estimating the risk of exposure (‘window period’) and the seroprevalence in the population of the test subject. It is argued that, over the last 20 years’ experience with these test algorithms, the interpretation of the test ‘result’ is not wholly ‘objective’ or laboratory-determined, as it relies as much upon the classification of the test subject as being ‘at risk’ during pre-test dialogue as it does upon the “epidemo-logic” of the ELISA or WB, data which often remain ‘black-boxed’ from critical public scrutiny. Using data from tested subjects and published accounts/texts, the paper deconstructs the classification of ‘risk’ embodied by official test algorithms and analyses how the ambiguity and uncertainty characteristic of antibody-test methodologies have sociological implications for ethical decision-making, self-identity and social movements.
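    The statistical point behind the paper's argument can be illustrated with Bayes' rule: with fixed test sensitivity and specificity, the meaning of a positive result depends heavily on the assumed prior, i.e. the 'risk' classification or seroprevalence assigned to the test subject. The Python sketch below uses invented numbers purely for illustration; they are not the characteristics of any actual assay.

```python
# How the assumed prior ("risk" classification) dominates the interpretation
# of a positive result. All figures are invented for illustration.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(infected | positive result)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.997, 0.985  # illustrative assay characteristics

for label, prior in [("'high risk' classification", 0.10),
                     ("'low risk' classification", 0.0005)]:
    ppv = positive_predictive_value(sens, spec, prior)
    print(f"{label}: P(infected | positive) = {ppv:.3f}")
```

    The same laboratory result thus carries a very different probability of infection depending on which exposure category the pre-test dialogue assigned, which is precisely the non-laboratory determinant the paper foregrounds.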