
    Human-agent collectives

    We live in a world where a host of computer systems, distributed throughout our physical and information environments, are increasingly implicated in our everyday actions. Computer technologies impact all aspects of our lives, and our relationship with the digital has fundamentally altered as computers have moved out of the workplace and away from the desktop. Networked computers, tablets, phones and personal devices are now commonplace, as are an increasingly diverse set of digital devices built into the world around us. Data and information are generated at unprecedented speeds and volumes from an increasingly diverse range of sources, and are then combined in unforeseen ways, limited only by human imagination. People’s activities and collaborations are becoming ever more dependent upon, and intertwined with, this ubiquitous information substrate. As these trends continue apace, it is becoming apparent that many endeavours involve the symbiotic interleaving of humans and computers. Moreover, the emergence of these close-knit partnerships is inducing profound change. Rather than issuing instructions to passive machines that wait until they are asked before doing anything, we will work in tandem with highly inter-connected computational components that act autonomously and intelligently (aka agents). As a consequence, greater attention needs to be given to the balance of control between people and machines. In many situations, humans will be in charge and agents will predominantly act in a supporting role; in other cases, the agents will be in control and humans will play the supporting role. We term this emerging class of systems human-agent collectives (HACs) to reflect the close partnership and the flexible social interactions between the humans and the computers. As well as exhibiting increased autonomy, such systems will be inherently open and social. This means the participants will need to continually and flexibly establish and manage a range of social relationships. Thus, depending on the task at hand, different constellations of people, resources, and information will need to come together, operate in a coordinated fashion, and then disband. The openness and presence of many distinct stakeholders means participation will be motivated by a broad range of incentives rather than diktat. This article outlines the key research challenges involved in developing a comprehensive understanding of HACs. To illuminate this agenda, a nascent application in the domain of disaster response is presented.

    Modelling human teaching tactics and strategies for tutoring systems

    One of the promises of ITSs and ILEs is that they will teach and assist learning in an intelligent manner. Historically this has tended to mean concentrating on the interface, on the representation of the domain and on the representation of the student’s knowledge. So systems have attempted to provide students with reifications both of what is to be learned and of the learning process, as well as optimally sequencing and adjusting activities, problems and feedback to best help them learn that domain. We now have embodied (and disembodied) teaching agents and computer-based peers, and the field demonstrates a much greater interest in metacognition and in collaborative activities and tools to support that collaboration. Nevertheless, the issue of the teaching competence of ITSs and ILEs is still important, as is the more specific question of whether systems can and should mimic human teachers. Indeed, increasing interest in embodied agents has thrown the spotlight back on how such agents should behave with respect to learners. In the mid 1980s Ohlsson and others offered critiques of ITSs and ILEs in terms of the limited range and adaptability of their teaching actions as compared to the wealth of tactics and strategies employed by human expert teachers. So are we in any better position to model teaching than we were in the 80s? Are these criticisms still as valid today as they were then? This paper reviews progress in understanding certain aspects of human expert teaching and in developing tutoring systems that implement those human teaching strategies and tactics. It concentrates particularly on how systems have dealt with student answers and with motivational issues, referring particularly to work carried out at Sussex: for example, on responding effectively to the student’s motivational state, on contingent and Vygotskian-inspired teaching strategies, and on the plausibility problem. The latter is concerned with whether tactics that are effectively applied by human teachers can be as effective when embodied in machine teachers.

    Understanding the Impact of AI Decision Speed and Historical Decision Quality on User Adoption in AI-Assisted Decision Making

    Artificial intelligence (AI) has shown increasing potential in assisting users with decision-making. However, the impact of AI decision speed on users' adoption intention has received limited attention compared to the focus on decision quality. Building on cue utilization theory, this study investigates the influence of AI decision speed on users' intention to adopt AI. Three experiments were conducted, revealing that users exhibit a higher intention to adopt AI when the AI's decision speed is higher and its historical decision quality is better. Furthermore, perceived intelligence and perceived risk in decision-making act as mediating variables in these effects. Importantly, the study finds that historical decision quality moderates the relationship between AI decision speed and user adoption, weakening the impact in conditions of high quality. These findings contribute to the understanding of AI adoption and offer practical implications for AI service providers and developers.

    A neo-Aristotelian perspective on the need for artificial moral agents (AMAs)

    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard against which to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, despite disagreeing with Formosa and Ryan's defense of AMAs, we find that their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.

    Evaluating multi-agent conversational interfaces in the early stages of the design process

    In this paper we describe a mixed-approach technique to understand users' perceptions of concepts in the early stage of the design process. We designed an evaluation study to understand the desirability of a multi-agent cognitive investment advisor, a chatbot. The study was threefold: first, participants watched a video; then they chose reaction-card adjectives to report their perceptions; and lastly they gave their opinions, guided by questions about the multi-party dialogue. From this experiment, we gathered positive and negative reactions from users that helped to shape the user experience of cognitive investment advisors.

    Innovative integrated architecture for educational games: Challenges and merits

    Interactive narrative in game environments acts as the main catalyst to provide a motivating learning experience. In previous work, we described how the use of a dual narrative generation technique could help to resolve the conflict between allowing high player (student) agency and keeping track of the learning process. In this paper, we define a novel architecture that allows the dual narrative generation technique to be employed effectively in an adaptive educational game environment. The architecture comprises components that have individually been shown to be effective in educational game environments: a graph-structured narrative, a dynamically generated narrative, evolving agents and a student model. An adaptive educational game, AEINS, has been developed to investigate the synergy of the architecture components. AEINS aims to foster character education in 8-12 year old children through the use of various interactive moral dilemmas that target students' different cognitive levels. AEINS was evaluated through a study involving 20 participants who interacted with AEINS on an individual basis.

    Interactive Narrative for Adaptive Educational Games: Architecture and an Application to Character Education

    This thesis presents AEINS, the Adaptive Educational Interactive Narrative System, which supports teaching ethics to 8-12 year old children. AEINS is designed based on Keller's and Gagné's learning theories. The idea is centered around involving students in moral dilemmas (called teaching moments) within which the Socratic Method is used as the teaching pedagogy. A unique aspect of AEINS is that it integrates four features shown to individually increase the effectiveness of educational game environments, yet not combined in past research: a student model, a dynamically generated narrative, scripted branched narrative and evolving non-player characters. The student model aims to provide adaptation. The dynamically generated narrative forms a continuous story that glues the scripted teaching moments together. The evolving agents increase the realism and believability of the environment and perform a recognized pedagogical role by supporting the educational process. AEINS has been evaluated intrinsically and empirically according to the following themes: architecture and implementation, social aspects, and educational achievements. The intrinsic evaluation checked the implicit goals embodied by the design aspects and made a value judgment about these goals. In the empirical evaluation, twenty participants used AEINS over a number of games. The evaluation showed positive results: the participants appreciated the social characteristics of the system, as they were able to recognize the genuine social aspects and the realism represented in the game. Finally, the evaluation showed indications of new lines of thinking developing in some participants, to the extent that some of them were ready to carry the experience forward to the real world. However, the evaluation also suggested possible improvements, such as the use of a 3D interface and free-text natural language input.

    Partnering People with Deep Learning Systems: Human Cognitive Effects of Explanations

    Advances in “deep learning” algorithms have led to intelligent systems that provide automated classifications of unstructured data. Until recently these systems could not provide the reasons behind a classification. This lack of “explainability” has led to resistance to applying these systems in some contexts. An intensive research and development effort to make such systems more transparent and interpretable has proposed and developed multiple types of explanation to address this challenge. However, relatively little research has been conducted into how humans process these explanations. Theories and measures from social-cognition research were selected to evaluate: attribution of mental processes, from intentional systems theory; working memory demands, from cognitive load theory; and self-efficacy, from social cognitive theory. Crowdsourced natural disaster damage assessment of aerial images, guided by a written assessment guideline, was employed as the task. The “Wizard of Oz” method was used to generate the damage assessment output of a simulated agent. The output and explanations contained errors consistent with transferring a deep learning system to a new disaster event. A between-subjects experiment was conducted in which three types of natural language explanations were manipulated between conditions. Counterfactual explanations increased intrinsic cognitive load and made participants more aware of the challenges of the task. Explanations that described boundary conditions and failure modes (“hedging explanations”) decreased agreement with erroneous agent ratings without a detectable effect on cognitive load. However, these effects were not large enough to counteract decreases in self-efficacy and increases in erroneous agreement that resulted from providing a causal explanation. The extraneous cognitive load generated by explanations had the strongest influence on self-efficacy in the task. Presenting all of the explanation types at the same time maximized cognitive load and agreement with erroneous simulated output. Perceived interdependence with the simulated agent was also associated with increases in self-efficacy; however, trust in the agent was not associated with differences in self-efficacy. These findings identify effects related to research areas that have developed methods for designing tasks that may increase the effectiveness of explanations.