Integration of artificial intelligence in academia: A case study of critical teaching and learning in higher education

Abstract

This study scrutinizes the role of AI literacy and ChatGPT-3 in enhancing critical reasoning and journalistic writing competencies among 50 third-term journalism students at Tajik National University. Given the escalating relevance of AI across sectors, including journalism, we aim to highlight the potential advantages of incorporating AI utilities in journalism pedagogy. We utilized a mixed-methods approach, comprising both quantitative and qualitative data collection techniques, for a comprehensive examination of the influence of AI literacy and ChatGPT-3 on student skill development. We gathered insights via surveys and interviews, revealing the impact of AI on learning outcomes. Our findings suggest a significant improvement in students' critical thinking and journalistic writing skills with ChatGPT-3 usage. The integration of AI tools in the classroom encourages in-depth analysis and collaboration, thereby enhancing students' writing skills. The results underline the importance of AI literacy in journalism education, preparing students for the rapidly transforming, AI-centric journalism industry.

Measuring Improvement in Critical Thinking and Journalism Writing Skills

A pre-and-post-test model was applied to measure the improvement in students' critical reasoning and journalistic writing competencies. The students undertook a critical analysis evaluation and a journalistic writing assignment prior to and subsequent to the use of ChatGPT-3. Descriptive and inferential statistical methods were used to analyze the quantitative data.

Collecting Qualitative Data through Semi-Structured Interviews

Semi-structured interviews were conducted with the students to obtain qualitative data regarding their experiences and perceptions of using ChatGPT-3. These interviews were transcribed and subjected to thematic analysis.

Assessing Critical Thinking and Journalism Writing Tasks

Before the intervention (pre-test) and after utilizing ChatGPT-3 (post-test), participants undertook a critical reasoning evaluation and a journalistic writing assignment. The critical reasoning evaluation was modelled on the Cornell Critical Thinking Test Level Z (Ennis et al., 2005), assessing critical reasoning abilities such as inductive and deductive reasoning, credibility appraisal, and identification of presuppositions. The evaluation contained 25 multiple-choice questions, with higher scores representing stronger critical thinking competencies. The journalism writing task required participants to compose a 500-word news article based on a given set of information. The articles were evaluated using a rubric developed by Rivenburgh et al. (2018), which assessed them on accuracy, clarity, structure, and objectivity. Two experienced journalism educators, blinded to the study's purpose and to the participants' pre- and post-test conditions, independently assessed the articles. Inter-rater reliability was calculated using Cohen's kappa coefficient (Cohen, 1960).

Quantitative Data Analysis

The quantitative data were analyzed using descriptive statistics to describe the sample's characteristics and to summarize the scores on the critical thinking assessment and the journalism writing task. Paired sample t-tests were performed to compare the pre-test and post-test scores for both the critical thinking assessment and the journalism writing task, determining whether there were significant improvements after using ChatGPT-3.
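To make the quantitative pipeline concrete, the following is a minimal sketch of the analyses named above (descriptive statistics, paired sample t-tests, and Cohen's kappa for inter-rater reliability), written in Python with SciPy and scikit-learn. The score arrays and rater bands are hypothetical placeholders chosen for illustration; they are not the study's data, and the study itself does not specify which software was used.

```python
# Minimal sketch of the quantitative analysis described above.
# All scores below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

# Hypothetical pre- and post-test critical thinking scores (0-25) for 50 students.
rng = np.random.default_rng(0)
pre_scores = rng.integers(8, 20, size=50)
post_scores = pre_scores + rng.integers(0, 6, size=50)

# Descriptive statistics for each testing occasion.
print(f"Pre:  M = {pre_scores.mean():.2f}, SD = {pre_scores.std(ddof=1):.2f}")
print(f"Post: M = {post_scores.mean():.2f}, SD = {post_scores.std(ddof=1):.2f}")

# Paired sample t-test comparing pre-test and post-test scores.
t_stat, p_value = ttest_rel(post_scores, pre_scores)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Inter-rater reliability: Cohen's kappa over the two raters' rubric bands
# (hypothetical categorical ratings, e.g. bands 1-4, for the 50 articles).
rater_1 = rng.integers(1, 5, size=50)
rater_2 = np.where(rng.random(50) < 0.8, rater_1, rng.integers(1, 5, size=50))
print(f"Cohen's kappa: {cohen_kappa_score(rater_1, rater_2):.2f}")
```

The same paired t-test would be run separately on the journalism writing rubric scores; only the input arrays change.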
Qualitative Data Collection and Analysis: Semi-Structured Interviews

Semi-structured interviews were held with the participants to gather qualitative insights regarding their experiences and viewpoints on the application of ChatGPT-3 in journalism pedagogy. An interview guide, developed from the literature review (Cukier et al., 2019; Graefe, 2016; Broussard, 2018), comprised open-ended questions about the perceived advantages, hurdles, and learning experiences linked to the usage of ChatGPT-3. Each interview, lasting roughly 30 minutes, was audio-recorded and transcribed verbatim.
