
    Exploring The Design of Prompts For Applying GPT-3 based Chatbots: A Mental Wellbeing Case Study on Mechanical Turk

    Large language models like GPT-3 have the potential to enable HCI designers and researchers to create more human-like and helpful chatbots for specific applications. But evaluating the feasibility of these chatbots and designing prompts that optimize GPT-3 for a specific task is challenging. We present a case study in tackling these questions, applying GPT-3 to a brief 5-minute chatbot that anyone can talk to in order to better manage their mood. We report a randomized factorial experiment with 945 participants on Mechanical Turk that tests three dimensions of prompt design used to initialize the chatbot (identity, intent, and behaviour), and present both quantitative and qualitative analyses of conversations and of user perceptions of the chatbot. We hope other HCI designers and researchers can build on this case study when applying GPT-3-based chatbots to other specific tasks, and can extend the methods we use for prompt design and its evaluation.
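    The factorial design described above crosses levels of three prompt dimensions (identity, intent, behaviour) to produce one initialization prompt per experimental cell. The sketch below illustrates that crossing; the dimension levels and wordings are hypothetical placeholders, not the paper's actual prompts or level counts.

    ```python
    from itertools import product

    # Hypothetical levels for the three prompt-design dimensions the study varies.
    # The concrete wordings are illustrative only, not the paper's prompts.
    identities = ["a friendly coach", "a neutral assistant"]
    intents = ["help the user reflect on their mood", "offer practical coping tips"]
    behaviours = ["asks open-ended questions", "keeps replies brief and warm"]

    def build_prompt(identity, intent, behaviour):
        """Assemble one chatbot-initialization prompt from a factorial cell."""
        return (f"You are {identity}. Your goal is to {intent}. "
                f"In conversation, the chatbot {behaviour}.")

    # A full factorial design crosses every level of every dimension.
    cells = [build_prompt(*combo) for combo in product(identities, intents, behaviours)]
    print(len(cells))  # 2 x 2 x 2 = 8 experimental conditions
    ```

    Each participant would then be randomly assigned one of these cells, allowing main effects of each dimension to be estimated independently.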

    Co-Design Disaster Management Chatbot with Indigenous Communities

    Indigenous communities are disproportionately impacted by rising disaster risk, climate change, and environmental degradation due to their close relationship with the environment and its resources. Unfortunately, gathering the necessary information or evidence to request or co-share sufficient funds can be challenging for Indigenous people and their lands. This paper aims to co-design an AI-based chatbot with two tribes and investigate their perception and experience of using it in disaster reporting practices. The study was conducted in two stages. First, we interviewed experienced first-line emergency managers and invited tribal members to an in-person design workshop. Second, based on qualitative analysis, we identified three themes: emergency communication, documentation, and user experience. Our findings show that the Indigenous communities favored the proposed Emergency Reporter chatbot solution. We further discuss how the proposed chatbot could empower the tribes in disaster management, preserve sovereignty, and help them seek support from other agencies.

    Bot-Based Emergency Software Applications for Natural Disaster Situations

    In a serious emergency situation such as a natural disaster, people quickly try to call their friends and family with the software they use every day. People also tend to participate as volunteers for rescue purposes. It is unlikely and impractical for these people to download and learn to use an application specially designed for aid processes. In this work, we investigate the feasibility of using bots, which provide a mechanism to get inside the software that people use daily, to develop emergency software applications designed to be used by victims and volunteers during stressful situations. In such situations, it is necessary to achieve efficiency, scalability, fault tolerance, elasticity, and mobility between data centers. We evaluate three bot-based applications. The first, named Jayma, sends information about affected people during the natural disaster to a network of contacts. The second, Ayni, manages and assigns tasks to volunteers. The third, named Rimay, registers volunteers and manages campaigns and emergency tasks. The applications are built using common practice for distributed software architecture design. Most of the components forming the architecture are existing public domain software, and some components are even consumed as an external service, as in the case of Telegram. Moreover, the applications are executed on commodity hardware usually available at universities. We evaluate the applications to detect critical tasks, bottlenecks, and the most critical resource. Results show that Ayni and Rimay tend to saturate the CPU faster than other resources, while RAM tends to reach the highest utilization level in the Jayma application.
    Fil: Ovando Leon, Gabriel. Universidad de Santiago de Chile; Chile. Fil: Veas Castillo, Luis. Universidad de Santiago de Chile; Chile. Fil: Gil Costa, Graciela Verónica. Universidad Nacional de San Luis; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - San Luis; Argentina. Fil: Marin, Mauricio. Universidad de Santiago de Chile; Chile.

    Artificial vs. Non-Artificial Intelligence: What Does ChatGPT Mean for Labor and Employment?

    ChatGPT has set the world ablaze. The publicly available and free-to-use chatbot generates responses to natural-language requests through artificial intelligence (AI), and processes millions of such requests per day. Released for public access in November 2022, ChatGPT can, upon request, produce jokes, TV episodes, music, and computer code. Students now use it to write papers, businesses use it to create promotional materials, and lawyers use it to draft legal briefs. This post was originally published on the Cardozo International & Comparative Law Review on February 14, 2023. The original post can be accessed via the Archived Link button above.

    Chatting with chatbots: Sign making in text-based human–computer interaction

    This paper investigates the kind of sign making that goes on in text-based human–computer interaction between human users and chatbots, from the point of view of integrational linguistics. A chatbot serves as a “conversational” user interface, allowing users to control computer programs in “natural language”. From the user’s perspective, the interaction is a case of semiologically integrated activity, but even if the textual traces of a chat may look like a written conversation between two humans, the correspondence is not one-to-one. It is argued that chatbots cannot engage in communication processes, although they may display communicative behaviour. They presuppose a (second-order) language model, they can only communicate at the level of sentences, not utterances, and they implement communicational sequels by selecting from an inventory of executable skills. Instead of seeing them as interlocutors in silico, chatbots should be seen as powerful devices for humans to make signs with.

    Choreographing Shadows: Interdisciplinary Collaboration to Orchestrate Ethical AI Image-Making

    Although popular media attention has suggested that recent advancements in AI image-making tools threaten creative labor, this nascent medium offers research opportunities involving diverse academic fields that may not be readily apparent. Using a collaboration between an artist and a scholar of religious studies as a case study, the ongoing “Noo Icons” media arts project, comprising images, video, animation, and installation, explores how AI image-making tools are well suited to reframing the visual history of the religious transcendent. Building on the scholarship of Hito Steyerl and Eryk Salvaggio, AI art’s use as a diagnostic tool for deciphering internet biases is compared to the religious studies scholar’s theoretical method of redaction criticism. This article explores ways in which the training data of Stable Diffusion can be refined to produce more accurate composite images, as well as the potential for AI image-making tools to serve as visual aids in the creation of “imagined realities”: images for which we have credible eyewitness testimony but no photographic evidence. The ethics of AI image-making is central to the methodology advanced in this interdisciplinary mode.

    Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI

    The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents, which allows us to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and the corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification. Rather, we provide an overview of the risks of enhanced AI applications, which contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables advanced reflection on governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industries, and civil society actors is vital to increase preparedness and resilience against the malicious use and abuse of AI.

    Update Tutorial: Big Data Analytics: Concepts, Technology, and Applications

    In 2014, I wrote a paper on big data analytics that the Communications of the Association for Information Systems published (volume 34). Since then, we have seen significant advances in the technologies, applications, and impacts of big data analytics. While the original paper’s content remains accurate and relevant, with this new paper I update readers on important recent developments in the area.