3 research outputs found

    Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy

    Mixed-initiative systems allow users to interactively provide feedback to potentially improve system performance. Human feedback can correct model errors and update model parameters to dynamically adapt to changing data. Additionally, many users desire the ability to have a greater level of control and fix perceived flaws in systems they rely on. However, how the ability to provide feedback to autonomous systems influences user trust is a largely unexplored area of research. Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy. We present a controlled experiment using a simulated object detection system with image data to study the effects of interactive feedback collection on user impressions. The results show that providing human-in-the-loop feedback lowered both participants' trust in the system and their perception of system accuracy, regardless of whether the system accuracy improved in response to their feedback. These results highlight the importance of considering the effects of allowing end-user feedback on user trust when designing intelligent systems.
    Comment: Accepted and to appear in the Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (HCOMP) 202

    A Transparency Index Framework for Machine Learning powered AI in Education

    The increase in the use of AI systems in our daily lives brings calls for more ethical AI development from different sectors, including finance, the judiciary and, to an increasing extent, education. A number of AI ethics checklists and frameworks have been proposed focusing on different dimensions of ethical AI, such as fairness, explainability and safety. However, the abstract nature of these existing ethical AI guidelines often makes them difficult to operationalise in real-world contexts. The inadequacy of the existing situation with respect to ethical guidance is further complicated by the paucity of work to develop transparent machine learning powered AI systems for real-world use. This is particularly true for AI applied in education and training. In this thesis, a Transparency Index Framework is presented as a tool to foreground the importance of transparency and aid the contextualisation of ethical guidance for the education and training sector. The Transparency Index Framework presented here has been developed in three iterative phases. In phase one, an extensive literature review of real-world AI development pipelines was conducted. In phase two, an AI-powered tool for use in an educational and training setting was developed. The initial version of the Transparency Index Framework was prepared after phase two. In phase three, a revised version of the Transparency Index Framework was co-designed that integrates learning from phases one and two. The co-design process engaged a range of different AI in education stakeholders, including educators, ed-tech experts and AI practitioners. The Transparency Index Framework presented in this thesis maps the requirements of transparency for different categories of AI in education stakeholders, and shows how transparency considerations can be ingrained throughout the AI development process, from initial data collection to deployment in the world, including continuing iterative improvements.
Transparency is shown to enable the implementation of other ethical AI dimensions, such as interpretability, accountability and safety. The optimisation of transparency from the perspective of end-users and ed-tech companies who are developing AI systems is discussed, and the importance of conceptualising transparency in developing AI powered ed-tech products is highlighted. In particular, the potential for transparency to bridge the gap between the machine learning and learning science communities is noted, for example through the use of datasheets, model cards and factsheets adapted and contextualised for education through a range of stakeholder perspectives, including educators, ed-tech experts and AI practitioners.