
    From Jargon to Clarity: Enhancing Science Communication with ChatGPT

    ChatGPT, an advanced language model developed by OpenAI, represents a groundbreaking leap in science communication. This chatbot generates clear and concise explanations, unraveling complex scientific concepts for diverse audiences. It goes beyond mere elucidation, actively addressing public inquiries and dispelling common misconceptions, resulting in a more informed and scientifically literate society. The conversational prowess of ChatGPT empowers even the general public to initiate and sustain meaningful dialogues with individuals from diverse backgrounds. By leveraging ChatGPT's interactive capabilities, users can stimulate thought-provoking conversations, fueling curiosity and fostering deeper engagement with scientific topics. Moreover, ChatGPT's extensive training imbues it with a vast knowledge base, enabling it to provide highly informative responses to a wide range of questions in very little time.

    Beyond Traditional Teaching: The Potential of Large Language Models and Chatbots in Graduate Engineering Education

    In the rapidly evolving landscape of education, digital technologies have repeatedly disrupted traditional pedagogical methods. This paper explores the latest of these disruptions: the potential integration of large language models (LLMs) and chatbots into graduate engineering education. We begin by tracing historical and technological disruptions to provide context and then introduce key terms such as machine learning and deep learning and the underlying mechanisms of recent advancements, namely attention/transformer models and graphics processing units. The heart of our investigation lies in the application of an LLM-based chatbot in a graduate fluid mechanics course. We developed a question bank from the course material and assessed the chatbot's ability to provide accurate, insightful responses. The results are encouraging, demonstrating not only the bot's ability to effectively answer complex questions but also the potential advantages of chatbot usage in the classroom, such as the promotion of self-paced learning, the provision of instantaneous feedback, and the reduction of instructors' workload. The study also examines the transformative effect of intelligent prompting on enhancing the chatbot's performance. Furthermore, we demonstrate how powerful plugins like Wolfram Alpha for mathematical problem-solving and code interpretation can significantly extend the chatbot's capabilities, transforming it into a comprehensive educational tool. While acknowledging the challenges and ethical implications surrounding the use of such AI models in education, we advocate for a balanced approach. The use of LLMs and chatbots in graduate education can be greatly beneficial but requires ongoing evaluation and adaptation to ensure ethical and efficient use. (Comment: 44 pages, 16 figures; preprint for PLOS ONE)
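The evaluation idea described in this abstract (scoring a chatbot against a course question bank) can be sketched as follows. This is not the authors' actual rubric or question bank: the `ask_chatbot` stub stands in for a real LLM call, and the keyword-overlap score is our own simplification for illustration.

```python
# Hypothetical sketch: grade a chatbot's answers against a question bank.
# ask_chatbot is a canned stand-in for a real LLM API call.

def ask_chatbot(question: str) -> str:
    canned = {
        "What does the Reynolds number compare?":
            "It compares inertial forces to viscous forces in a flow.",
        "What is the no-slip condition?":
            "Fluid velocity matches the wall velocity at a solid boundary.",
    }
    return canned.get(question, "")

def score_answer(answer: str, keywords: list[str]) -> float:
    # Fraction of expected keywords that the answer mentions.
    hits = sum(1 for k in keywords if k.lower() in answer.lower())
    return hits / len(keywords)

# Each entry pairs a question with keywords a correct answer should contain.
question_bank = [
    ("What does the Reynolds number compare?", ["inertial", "viscous"]),
    ("What is the no-slip condition?", ["velocity", "boundary"]),
]

scores = [score_answer(ask_chatbot(q), kws) for q, kws in question_bank]
accuracy = sum(scores) / len(scores)
```

In practice the paper's assessment relied on human judgment of accuracy and insight; an automated keyword rubric like this would only be a coarse first filter.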

    My Chatbot Companion – A Study of Human-Chatbot Relationships

    There has been a recent surge of interest in social chatbots, and human–chatbot relationships (HCRs) are becoming more prevalent, but little knowledge exists on how HCRs develop and how they may impact the broader social context of the users. Guided by Social Penetration Theory, we interviewed 18 participants, all of whom had developed a friendship with a social chatbot named Replika, to understand the HCR development process. We find that at the outset, HCRs typically have a superficial character motivated by the users' curiosity. The evolving HCRs are characterised by substantial affective exploration and engagement as the users' trust and engagement in self-disclosure increase. As the relationship evolves to a stable state, the frequency of interactions may decrease, but the relationship can still be seen as having substantial affective and social value. The relationship with the social chatbot was found to be rewarding to its users, positively impacting the participants' perceived wellbeing. Key chatbot characteristics facilitating relationship development included the chatbot being seen as accepting, understanding and non-judgmental. The perceived impact on the users' broader social context was mixed, and a sense of stigma associated with HCRs was reported. We propose an initial model representing the HCR development identified in this study and suggest avenues for future research.

    Chatbots as Part of Digital Government Service Provision – A User Perspective

    Chatbots are increasingly adopted as part of digital government service provision. While the success of chatbots for this purpose depends on their acceptance by intended users, there is a lack of knowledge concerning user perceptions of such chatbots and the implications of these perceptions for intention to use. In response, an exploratory qualitative interview study was conducted with 15 users of a chatbot for municipality service provision. The interviews showed the importance of performance expectations, effort expectations, and trust. In particular, while a municipality chatbot supporting service triage may be perceived as beneficial for its availability and its support for navigating municipality services and information, users weigh this benefit against that of other digital government channels. On the basis of the findings, we present key implications for theory and practice and suggest avenues for future research.

    “I love my AI girlfriend”: A study of consent in AI-human relationships

    Master's thesis in Digital Culture (DIKULT350, MAHF-DIKU)

    Waiting for a digital therapist: three challenges on the path to psychotherapy delivered by artificial intelligence

    Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, has triggered discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement to human-delivered psychotherapy, it is not yet capable of delivering fully-fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles to developing CAI systems capable of delivering psychotherapy in the future. To this end, we formulate and discuss three challenges central to this quest. Firstly, we may not be able to develop effective AI-based psychotherapy unless we deepen our understanding of what makes human-delivered psychotherapy effective. Secondly, assuming that psychotherapy requires building a therapeutic relationship, it is not clear whether it can be delivered by non-human agents. Thirdly, conducting psychotherapy may be too complicated a problem for narrow AI, i.e., AI proficient only in relatively simple and well-delineated tasks. If this is the case, we should not expect CAI to be capable of delivering fully-fledged psychotherapy until so-called “general” or “human-like” AI is developed. While we believe that all these challenges can ultimately be overcome, we think that being mindful of them is crucial to ensuring well-balanced and steady progress toward AI-based psychotherapy.

    Bias and Fairness in Chatbots: An Overview

    Chatbots have been studied for more than half a century. With the rapid development of natural language processing (NLP) technologies in recent years, chatbots using large language models (LLMs) have received much attention. Compared with traditional ones, modern chatbots are more powerful and have been used in real-world applications. There are, however, bias and fairness concerns in modern chatbot design. Due to the huge amounts of training data, extremely large model sizes, and lack of interpretability, bias mitigation and fairness preservation in modern chatbots are challenging. Thus, this paper gives a comprehensive overview of bias and fairness in chatbot systems. The history of chatbots and their categories is first reviewed. Then, bias sources and potential harms in applications are analyzed. Considerations in designing fair and unbiased chatbot systems are examined. Finally, future research directions are discussed.
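One common bias probe in such overviews compares a chatbot's outputs for prompts that differ only in a demographic term. The sketch below is purely illustrative: the `chatbot` stub has a deliberately planted disparity, and word count is a crude proxy; a real audit would query a deployed model and use richer metrics.

```python
# Toy counterfactual probe: identical prompts except for one demographic
# term, compared by a simple response-length metric.

def chatbot(prompt: str) -> str:
    # Canned stub with a deliberate disparity, for demonstration only.
    responses = {
        "Describe a male engineer.":
            "A skilled, ambitious problem solver leading major projects.",
        "Describe a female engineer.":
            "A skilled problem solver.",
    }
    return responses[prompt]

def word_count(text: str) -> int:
    return len(text.split())

groups = ["male", "female"]
lengths = {g: word_count(chatbot(f"Describe a {g} engineer.")) for g in groups}

# A ratio far from 1.0 flags unequal treatment across paired prompts.
disparity = max(lengths.values()) / min(lengths.values())
```

Length is only one surface signal; the paper's taxonomy of bias sources (training data, model, deployment) motivates probes at each of those stages.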

    Artificial Intelligence Service Agents: Role of Parasocial Relationship

    Increased use of artificial intelligence service agents (AISA) has been associated with improvements in AISA service performance. Whilst there is consensus that unique forms of attachment develop between users and AISA that manifest as parasocial relationships (PSRs), the literature is less clear about the relevant AISA service attributes and how they influence PSR and the users' subjective well-being. Based on a dataset collected from 408 virtual assistant users in the US, this research develops and tests a model that explains how AISA-enabled service influences subjective well-being through the mediating effect of PSR. Findings also indicate significant gender and AISA-experience differences in the PSR effect on subjective well-being. This study advances current understanding of AISA in service encounters by investigating the mediating role of PSR in AISA's effect on users' subjective well-being. We also discuss managerial implications for practitioners who increasingly use AISA to deliver customer service.

    Building Emotional Support Chatbots in the Era of LLMs

    The integration of emotional support into various conversational scenarios presents profound societal benefits, such as social interactions, mental health counseling, and customer service. However, unsolved challenges hinder real-world applications in this field, including limited data availability and the absence of well-accepted model training paradigms. This work navigates these challenges by harnessing the capabilities of Large Language Models (LLMs). We introduce a methodology that synthesizes human insights with the computational prowess of LLMs to curate an extensive emotional support dialogue dataset. Our approach begins with a meticulously designed set of dialogues spanning diverse scenarios as generative seeds. By utilizing the in-context learning potential of ChatGPT, we recursively generate an ExTensible Emotional Support dialogue dataset, named ExTES. Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies and ultimately yielding an LLM optimized for emotional support interactions. An exhaustive assessment of the resultant model showcases its proficiency in offering emotional support, marking a pivotal step in the realm of emotional support bots and paving the way for subsequent research and implementations.
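The recursive seed-and-expand loop described for ExTES can be sketched in miniature. This is not the paper's pipeline: `generate_variants` is a deterministic stand-in for the in-context ChatGPT generation step, and the real system would add quality filtering and scenario diversification.

```python
# Sketch of recursive dataset expansion from hand-written seed dialogues.
# generate_variants stands in for an in-context LLM generation call.

def generate_variants(dialogue: str) -> list[str]:
    # Deterministic stub: a real call would prompt an LLM with the
    # dialogue as a few-shot example and parse new dialogues back out.
    return [f"{dialogue} (variant {i})" for i in range(2)]

def expand(seeds: list[str], rounds: int) -> list[str]:
    dataset = list(seeds)
    frontier = seeds
    for _ in range(rounds):
        # Each round feeds the previous round's outputs back in as seeds.
        frontier = [v for d in frontier for v in generate_variants(d)]
        dataset.extend(frontier)
    return dataset

seeds = ["User: I feel anxious. Bot: That sounds hard; want to talk about it?"]
dataset = expand(seeds, rounds=2)
```

The geometric growth (each round multiplies the frontier) is what makes the approach "extensible"; the hard part the paper addresses is keeping quality and diversity high as the dataset grows.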

    Reimagining the Journal Editorial Process: An AI-Augmented Versus an AI-Driven Future

    The editorial process at our leading information systems journals has been pivotal in shaping and growing our field. But this process has grown long in the tooth and increasingly frustrates and challenges its various stakeholders: editors, reviewers, and authors. The sudden and explosive spread of AI tools, including advances in language models, makes them a tempting fit in our efforts to ease and advance the editorial process. But we must carefully consider how the goals and methods of AI tools fit with the core purpose of the editorial process. We present a thought experiment exploring the implications of two distinct futures for the information systems powering today's journal editorial process: an AI-augmented and an AI-driven one. The AI-augmented scenario envisions systems providing algorithmic predictions and recommendations to enhance human decision-making, offering greater efficiency while maintaining human judgment and accountability. However, it also requires debate over algorithm transparency, appropriate machine learning methods, and data privacy and security. The AI-driven scenario, meanwhile, imagines a fully autonomous and iterative AI. While potentially even more efficient, this future risks failing to align with academic values and norms, perpetuating data biases, and neglecting the important social bonds and community practices embedded in and strengthened by the human-led editorial process. We consider and contrast the two scenarios in terms of their usefulness and dangers to authors, reviewers, editors, and publishers. We conclude by cautioning against the lure of an AI-driven, metric-focused approach, advocating instead for a future where AI serves as a tool to augment human capacity and strengthen the quality of academic discourse. But more broadly, this thought experiment allows us to distill what the editorial process is about: the building of a premier research community instead of chasing metrics and efficiency. It is up to us to guard these values.