
    Trust in AutoML: Exploring Information Needs for Establishing Trust in Automated Machine Learning Systems

    We explore trust in a relatively new area of data science: Automated Machine Learning (AutoML). In AutoML, AI methods are used to generate and optimize machine learning models by automatically engineering features, selecting models, and optimizing hyperparameters. In this paper, we seek to understand what kinds of information influence data scientists' trust in the models produced by AutoML. We operationalize trust as a willingness to deploy a model produced using automated methods. We report results from three studies -- qualitative interviews, a controlled experiment, and a card-sorting task -- to understand the information needs of data scientists for establishing trust in AutoML systems. We find that including transparency features in an AutoML tool increased users' trust in, and understanding of, the tool, and that of all the proposed features, model performance metrics and visualizations are the most important information for data scientists when establishing trust in an AutoML tool.

    Comment: IUI 2020
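
    To make the automation concrete: below is a minimal, illustrative sketch of the model-selection and hyperparameter-search loop that AutoML tools automate. It assumes scikit-learn; the candidate models, grids, and dataset are our own placeholders, not the system studied in the paper.

    # Minimal sketch of the search loop an AutoML system automates:
    # try several model families, tune each one's hyperparameters by
    # cross-validation, and keep the best candidate.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Candidate model families and illustrative hyperparameter grids.
    candidates = [
        (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1.0, 10.0]}),
        (RandomForestClassifier(random_state=0),
         {"n_estimators": [50, 200], "max_depth": [None, 5, 10]}),
    ]

    best_score, best_model = -1.0, None
    for estimator, grid in candidates:
        search = GridSearchCV(estimator, grid, cv=5)
        search.fit(X_train, y_train)
        if search.best_score_ > best_score:
            best_score, best_model = search.best_score_, search.best_estimator_

    # Held-out performance metrics such as this score are exactly the kind
    # of information the study found most important for establishing trust.
    print(best_model, best_model.score(X_test, y_test))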

    Researching AI Legibility Through Design

    Everyday interactions with computers are increasingly likely to involve elements of Artificial Intelligence (AI). Encompassing a broad spectrum of technologies and applications, AI poses many challenges for HCI and design. One such challenge is the need to make AI's role in a given system legible to the user in a meaningful way. In this paper, we employ a Research through Design (RtD) approach to explore how this might be achieved. Building on contemporary concerns and a thorough exploration of related research, our RtD process reflects on designing imagery intended to help increase AI legibility for users. The paper makes three contributions. First, we thoroughly explore prior research in order to critically unpack the AI legibility problem space. Second, we respond with design proposals whose aim is to enhance the legibility, to users, of systems using AI. Third, we explore the role of design-led enquiry as a tool for critically exploring the intersection between HCI and AI research.

    AI-Driven Assessment of Students: Current Uses and Research Trends

    During the last decade, AI has increasingly been incorporated into the educational field, whether to support the analysis of human behavior in teaching-learning contexts, as a didactic resource combined with other technologies, or as a tool for the assessment of students. This proposal presents a Systematic Literature Review and mapping study on the use of AI for the assessment of students, which aims to provide a general overview of the state of the art and identify current areas of research by answering six research questions related to the evolution of the field and the geographic and thematic distribution of the studies. As a result of the selection process, this study identified 20 papers focused on the research topic in the SCOPUS and Web of Science repositories, from an initial set of 129. The analysis of the papers allowed the identification of three main thematic categories -- assessment of student behaviors, assessment of student sentiments, and assessment of student achievement -- as well as several gaps in the literature and future research lines, which are addressed in the discussion.

    Renegotiation and Relative Performance Evaluation: Why an Informative Signal may be Useless

    Although Holmström's informativeness criterion provides a theoretical foundation for the controllability principle and inter-firm relative performance evaluation, empirical and field studies provide only weak evidence of such practices. This paper refines the traditional informativeness criterion by abandoning the conventional full-commitment assumption. With the possibility of renegotiation, a signal's usefulness in incentive contracting depends on its information quality, not simply on whether the signal is informative. This paper derives conditions for determining when a signal is useless and when it is useful. In particular, these conditions are met when the signal's information quality is either sufficiently poor or sufficiently rich.
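
    For readers unfamiliar with the baseline being refined, a textbook statement of Holmström's informativeness criterion is sketched below; the notation is ours, not the paper's. Let f(x, y | a) denote the joint density of output x and signal y given the agent's action a. Under full commitment, the criterion says that

    \[
      y \text{ is useless} \iff \exists\, g, h \text{ such that }
      f(x, y \mid a) = g(x \mid a)\, h(x, y) \quad \text{for all } a,
    \]

    i.e., y has incentive value exactly when x is not a sufficient statistic for (x, y) with respect to a. The paper's point is that, once renegotiation is possible, passing this test is no longer enough: usefulness also depends on the signal's information quality.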

    Crowdsourcing the Perception of Machine Teaching

    Teachable interfaces can empower end-users to attune machine learning systems to their idiosyncratic characteristics and environment by explicitly providing pertinent training examples. While facilitating control, their effectiveness can be hindered by users' lack of expertise or by misconceptions. We investigate how users may conceptualize, experience, and reflect on their engagement in machine teaching by deploying a mobile teachable testbed on Amazon Mechanical Turk. Using a performance-based payment scheme, Mechanical Turk workers (N = 100) are asked to train, test, and re-train a robust recognition model in real time with a few snapshots taken in their environment. We find that participants incorporate diversity into their examples, drawing on parallels to how humans recognize objects independent of size, viewpoint, location, and illumination. Many of their misconceptions relate to consistency and to the model's capabilities for reasoning. With limited variation and few edge cases in testing, the majority do not change strategies on a second training attempt.

    Comment: 10 pages, 8 figures, 5 tables, CHI 2020 conference
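
    The testbed's internals are not detailed in this abstract; the sketch below illustrates, under our own assumptions, the train/test/re-train loop such a teachable interface exposes. The feature extractor is a placeholder (a real system would likely embed snapshots with a pretrained network), and the class is hypothetical.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def extract_features(image: np.ndarray) -> np.ndarray:
        """Placeholder embedding; assumes all snapshots share one shape."""
        return image.reshape(-1).astype(float) / 255.0

    class TeachableRecognizer:
        """Hypothetical teachable model: users add examples, train, retrain."""

        def __init__(self):
            self.examples, self.labels = [], []

        def teach(self, image: np.ndarray, label: str) -> None:
            # Each user snapshot becomes one labeled training example.
            self.examples.append(extract_features(image))
            self.labels.append(label)

        def train(self) -> None:
            # Refit from scratch so re-training reflects every example so far.
            self.model = KNeighborsClassifier(n_neighbors=1)
            self.model.fit(np.stack(self.examples), self.labels)

        def predict(self, image: np.ndarray) -> str:
            return self.model.predict([extract_features(image)])[0]

    # Usage: teach a few snapshots per object, train, test predictions, then
    # teach more varied examples (viewpoint, lighting) and call train() again.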

    Intelligent analysis and data visualisation for teacher assistance tools: the case of exploratory learning

    While it is commonly accepted that Learning Analytics tools can support teachers' awareness and classroom orchestration, not all forms of pedagogy are congruent with the types of data generated by digital technologies or the algorithms used to analyse them. One such pedagogy, so far underserved by Learning Analytics, is exploratory learning, exemplified by tools such as simulators, virtual labs, microworlds and some interactive educational games. This paper argues that combining intelligent analysis of interaction data from such an exploratory learning environment (ELE) with the targeted design of visualisations can support classroom orchestration and consequently enable the adoption of this pedagogy in the classroom. We present a case study of learning analytics in the context of an ELE supporting the learning of algebra, focusing on the formative qualitative evaluation of a suite of Teacher Assistance tools. We draw conclusions about the value of the tools to teachers and reflect on transferable lessons for future related work.

    Skills and evaluation of executives

