
    Chatbots for learning: A review of educational chatbots for the Facebook Messenger

    With the exponential growth of the mobile device market over the last decade, chatbots have become an increasingly popular way to interact with users, and their adoption is spreading rapidly. These mobile devices change the way we communicate and allow ever-present learning in varied environments. This study examined educational chatbots for Facebook Messenger that support learning. An independent web directory was screened to identify chatbots for this study, yielding 89 unique chatbots. Each chatbot was classified by language, subject matter, and developer platform. Finally, we evaluated 47 educational chatbots on the Facebook Messenger platform using the analytic hierarchy process against the quality attributes of teaching, humanity, affect, and accessibility. We found that educational chatbots on the Facebook Messenger platform range from the basic level of sending personalized messages to recommending learning content. Results show that chatbots embedded in an instant messaging application are still in their early stages of becoming artificial intelligence teaching assistants. The findings provide tips for teachers on integrating chatbots into classroom practice and advise on what types of chatbots they can try out.
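    The analytic hierarchy process (AHP) mentioned in this abstract can be sketched briefly. The pairwise comparison matrix below, comparing the four quality attributes on Saaty's 1-9 scale, is a hypothetical illustration and does not reflect values from the study:

    ```python
    import numpy as np

    # A minimal AHP sketch: derive priority weights for quality attributes
    # from a pairwise comparison matrix. The matrix entries are illustrative
    # assumptions, NOT data from the study.
    attributes = ["teaching", "humanity", "affect", "accessibility"]
    A = np.array([
        [1,   3,   5,   3],    # teaching compared against the others
        [1/3, 1,   3,   2],    # humanity
        [1/5, 1/3, 1,   1/2],  # affect
        [1/3, 1/2, 2,   1],    # accessibility
    ])

    # Priority weights = principal eigenvector of A, normalized to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = eigvecs[:, k].real
    w = w / w.sum()

    # Consistency ratio: judgments are conventionally acceptable when CR < 0.1.
    # RI = 0.90 is Saaty's random consistency index for a 4x4 matrix.
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)
    cr = ci / 0.90

    print({a: round(float(x), 3) for a, x in zip(attributes, w)})
    print("consistency ratio:", round(float(cr), 3))
    ```

    With such weights in hand, each chatbot's per-attribute scores can be combined into a single weighted quality score for ranking.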

    "Mango Mango, How to Let The Lettuce Dry Without A Spinner?": Exploring User Perceptions of Using An LLM-Based Conversational Assistant Toward Cooking Partner

    The rapid advancement of Large Language Models (LLMs) has created numerous possibilities for integrating them with conversational assistants (CAs) that help people with their daily tasks, particularly because of their extensive flexibility. However, users' real-world experiences of interacting with these assistants remain unexplored. In this research, we chose cooking, a complex daily task, as a scenario to investigate people's successful and unsatisfactory experiences while receiving assistance from an LLM-based CA, Mango Mango. We discovered that participants value the system's ability to provide extensive information beyond the recipe, offer customized instructions based on context, and assist them in dynamically planning the task. However, they expect the system to be more adaptive to oral conversation and to provide more suggestive responses that keep users actively involved. Recognizing that users began treating our LLM-based CA as a personal assistant or even a partner rather than just a recipe-reading tool, we propose several design considerations for future development.
    Comment: Under submission to CHI202

    Philanthropic Paths: An Exploratory Study of the Career Pathways of Professionals of Color in Philanthropy

    This study, commissioned by the D5 Coalition, provides a nuanced picture of the career experiences of 43 philanthropic professionals of color, ranging from Program Officers to CEOs, working in an array of foundations. Through an exploration of the perceptions, analyses, and career histories of people of color working in the philanthropic sector, this study aims to advance the field's understanding of the following questions: What are the career pathways of people of color in philanthropy in terms of how they enter the field and advance to higher levels of seniority? What factors do philanthropic professionals of color view as posing the greatest barriers and contributors to career advancement in the sector? What is the perceived value of, and challenges to, achieving greater leadership diversity in foundations from the perspective of professionals of color in the field? While not generalizable to the broader population of people of color working in the sector, interviews conducted with these individuals surfaced a set of potentially common points of entry and career pathways among professionals of color in philanthropy, as well as the factors that helped shape those pathways.

    Innovate Magazine / Annual Review 2009-2010

    https://scholarworks.sjsu.edu/innovate/1002/thumbnail.jp

    Visualizations for an Explainable Planning Agent

    In this paper, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human-in-the-loop decision making. Imposing transparency and explainability requirements on such agents is especially important in order to establish trust and common ground with the end-to-end automated planning system. Visualizing the agent's internal decision-making processes is a crucial step towards achieving this. This may include externalizing the "brain" of the agent -- starting from its sensory inputs, up to the progressively higher-order decisions it makes in order to drive its planning components. We also show how the planner can build on the latest techniques in explainable planning to cast plan visualization as a plan explanation problem, and thus provide concise, model-based visualizations of its plans. We demonstrate these functionalities in the context of the automated planning components of a smart assistant in an instrumented meeting space.
    Comment: PREVIOUSLY Mr. Jones -- Towards a Proactive Smart Room Orchestrator (appeared in AAAI 2017 Fall Symposium on Human-Agent Groups)

    It's Good to Talk: A Comparison of Using Voice Versus Screen-Based Interactions for Agent-Assisted Tasks

    Voice assistants have become hugely popular in the home as domestic and entertainment devices. Recently, there has been a move towards developing them for work settings. For example, Alexa for Business and IBM Watson for Business were designed to improve productivity by assisting with various tasks, such as scheduling meetings and taking minutes. However, this kind of assistance is largely limited to planning and managing users' work. How might voice assistants be developed to do more by way of empowering people at work? Our research is concerned with achieving this by developing an agent in the role of a facilitator that assists users during an ongoing task. Specifically, we were interested in whether the modality in which the agent interacts with users makes a difference: how does a voice versus screen-based agent interaction affect user behavior? We hypothesized that voice would be more immediate and emotive, resulting in more fluid conversations and interactions. Here, we describe a user study that compared the benefits of voice versus screen-based interactions with a system incorporating an agent, in which pairs of participants performed an exploratory data analysis task that required them to make sense of a series of data visualizations. The findings show marked differences between the two conditions, with voice resulting in more turn-taking in discussions, more questions asked, more interactions with the system, and a tendency towards more immediate, faster-paced discussions following agent prompts. We discuss possible reasons why talking with, and being prompted by, a voice assistant may be preferable and more effective at mediating human-human conversations, and we translate some of the key insights of this research into design implications.

    The Use of Bakhtin's Polyphony to Analyze Peer Relationships

    Abstract: The purpose of this study was to examine how resident assistants (RAs) integrate training on leadership and ethics with their personal beliefs in their roles. Data for this study were gathered using an electronic survey. Participants with between one and four years of RA experience were invited to participate. An announcement of the study, with a link to the survey, was sent to the resident director of every dorm on the UA campus with the request that it be forwarded to the RAs. The survey included six questions that collected basic demographic information and training experience. The demographic questions were followed by four scenarios, each with four multiple-choice options and two open-ended questions about leadership and ethical action. The survey took approximately 10-20 minutes to complete. Results were analyzed descriptively.