10 research outputs found

    Human-Autonomy Teaming - an Evolving Interaction Paradigm: Teaming and Automation

    Get PDF
Intelligent and complex systems are becoming common in our workplaces and our homes, providing direct assistance in the transport, health and education domains. In many instances these systems are somewhat ubiquitous in nature, and influence the manner in which we make decisions. Traditionally we understand the benefits of how humans work within teams, and the associated pitfalls and costs when such a team fails to work. However, we can view the autonomous agent as a synthetic partner emerging in roles that have traditionally been the bastion of the human alone. Within these new Human-Autonomy Teams we can witness different levels of automation and decision support held within a clear hierarchy of tasks and goals. However, when we start examining the nature of more autonomous systems and software agents, we see a partnership that can suggest different constructs of authority depending on the context of the task: either the human or the agent may lead the team in order to achieve a goal. This paper examines the nature of HAT composition, its application in aviation, and how trust in such systems can be assessed.

    Assessing Demand for Transparency in Intelligent Systems Using Machine Learning

    Get PDF
Intelligent systems offering decision support can lessen cognitive load and improve the efficiency of decision making in a variety of contexts. These systems assist users by evaluating multiple courses of action and recommending the right action at the right time. Modern intelligent systems using machine learning introduce new capabilities in decision support, but they can come at a cost: machine learning models provide little explanation of their outputs or reasoning process, making it difficult to determine when it is appropriate to trust them or, when it is not, what went wrong. To improve trust and ensure appropriate reliance on these systems, users must be afforded increased transparency, enabling an understanding of the system's reasoning and an explanation of its predictions or classifications. Here we discuss the salient factors in designing transparent intelligent systems using machine learning, and present the results of a user-centered design study. We propose design guidelines derived from our study, and discuss next steps for designing for intelligent system transparency.
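The abstract calls for pairing recommendations with an explanation of the system's reasoning but does not prescribe a mechanism. Below is a minimal sketch of one common way to do this, attaching per-feature contributions from an interpretable linear model to each prediction; the dataset, pipeline, and helper function are illustrative assumptions, not part of the study.

```python
# Minimal sketch: pair a decision-support recommendation with a simple
# explanation of the model's reasoning. Illustrative only; the study derives
# design guidelines rather than prescribing this particular mechanism.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()                       # stand-in dataset (assumption)
X, y, names = data.data, data.target, data.feature_names

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

def recommend_with_explanation(x, top_k=3):
    """Return a recommendation plus the features that drove it."""
    proba = model.predict_proba([x])[0]
    scaler = model.named_steps["standardscaler"]
    clf = model.named_steps["logisticregression"]
    # Per-feature contribution = standardized value * learned coefficient.
    contrib = scaler.transform([x])[0] * clf.coef_[0]
    top = np.argsort(np.abs(contrib))[::-1][:top_k]
    return {
        "recommendation": int(proba.argmax()),
        "confidence": float(proba.max()),
        "top_factors": [(names[i], float(contrib[i])) for i in top],
    }

print(recommend_with_explanation(X[0]))
```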

    Fairness and Explanation in AI-Informed Decision Making

    Full text link
AI-assisted decision-making that impacts individuals raises critical questions about transparency and fairness in artificial intelligence (AI). Much research has highlighted the reciprocal relationship between transparency/explanation and fairness in AI-assisted decision-making. Considering their impacts on user trust and perceived fairness simultaneously therefore benefits the responsible use of socio-technical AI systems, but this currently receives little attention. In this paper, we investigate the effects of AI explanations and fairness on human-AI trust and perceived fairness, respectively, in specific AI-based decision-making scenarios. A user study simulating AI-assisted decision-making in two scenarios, health insurance and medical treatment, provided important insights. Due to the global pandemic and the associated restrictions, the user studies were conducted as online surveys. From the participants' trust perspective, fairness was found to affect user trust only at a low fairness level, which reduced user trust. Adding explanations, however, helped users increase their trust in AI-assisted decision-making. From the perspective of perceived fairness, our work found that low levels of introduced fairness decreased users' perceptions of fairness, while high levels of introduced fairness increased them. The addition of explanations consistently increased the perception of fairness. Furthermore, we found that the application scenario influenced trust and perceptions of fairness. The results show that the use of AI explanations and fairness statements in AI applications is complex: we need to consider not only the type of explanation and the degree of fairness introduced, but also the scenarios in which AI-assisted decision-making is used.

    A method for electric load data verification and repair in home environment

    Get PDF
Home energy management (HEM) and the smart home have become popular; HEM collects and analyses electric load data to make power use safe, reliable, economical, efficient and environmentally friendly. Without correct data, correct decisions and plans cannot be made, so data quality is of great importance. This paper focuses on the verification and repair of electric load data in the home environment. Because of the irregularity of modern lifestyles, this paper proposes an 'N + 1' framework to handle this properly. The system collects information from every appliance and from the power bus so that they can verify each other; this addresses the stochastic uncertainty problem and checks whether the data are correct, ensuring data quality. During data upload, many factors such as smart meter malfunctions and communication failures can introduce erroneous data. To repair such data, we propose a method called LBboosting, which integrates two curve fitting methods. The results show that the method performs better than up-to-date methods.
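The 'N + 1' idea described above is that N per-appliance meters and the one shared power bus can cross-check each other; samples where the appliance sum and the bus reading disagree are flagged and repaired. The abstract does not detail LBboosting, so the sketch below stands in a plain local polynomial fit for the repair step; the tolerances, window size, and toy data are assumptions.

```python
# Hedged sketch of 'N + 1' cross-verification: N appliance channels plus one
# bus channel should approximately agree. Flagged bus samples are repaired
# with a simple local polynomial fit, standing in for LBboosting, whose
# details the abstract does not give.
import numpy as np

def verify(appliance_loads, bus_load, rel_tol=0.05, abs_tol=20.0):
    """Return a boolean mask of samples where bus and appliance sum disagree."""
    total = appliance_loads.sum(axis=1)                # N channels summed
    return np.abs(bus_load - total) > np.maximum(abs_tol, rel_tol * total)

def repair(bus_load, bad, window=5, degree=2):
    """Refit flagged samples from nearby trusted samples (placeholder method)."""
    repaired = bus_load.copy()
    idx = np.arange(len(bus_load))
    for i in np.where(bad)[0]:
        lo, hi = max(0, i - window), min(len(bus_load), i + window + 1)
        ok = ~bad[lo:hi]
        if ok.sum() > degree:                          # enough clean neighbours
            coeffs = np.polyfit(idx[lo:hi][ok], bus_load[lo:hi][ok], degree)
            repaired[i] = np.polyval(coeffs, i)
    return repaired

# Toy example: 3 appliances, 100 samples, one corrupted bus reading.
rng = np.random.default_rng(0)
appliances = rng.uniform(50, 300, size=(100, 3))
bus = appliances.sum(axis=1) + rng.normal(0, 5, 100)
bus[40] = 0.0                                          # simulated meter fault
bad = verify(appliances, bus)
print("flagged:", np.where(bad)[0], "repaired value:", repair(bus, bad)[40])
```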

Facilitating machine learning model comparison and explanation through a radial visualisation

    Full text link
Building an effective Machine Learning (ML) model for a data set is a difficult task involving various steps. One of the most important steps is to compare a substantial number of generated ML models to find the optimal one for deployment. It is challenging to compare such models when they are trained with varying numbers of features. Comparison is about more than finding differences in ML model performance: users are also interested in the relations between features and model performance, such as feature importance, for ML explanations. This paper proposes RadialNet Chart, a novel visualisation approach for comparing ML models trained with different numbers of features of a given data set while revealing implicit dependency relations. In RadialNet Chart, ML models and features are represented by lines and arcs, respectively. These lines are generated efficiently using a recursive function. The dependency of ML models on a dynamic number of features is encoded into the structure of the visualisation, where ML models and their dependent features are directly revealed through the related line connections. ML model performance information is encoded with colour and line width. Taken together with the structure of the visualisation, feature importance can be discerned directly in RadialNet Chart for ML explanations. Compared with other commonly used visualisation approaches, RadialNet Chart can help simplify the ML model comparison process: it is more efficient at helping users focus their attention on visual elements of interest, and it makes it easier to compare ML performance, find the optimal ML model, and discern important features visually and directly rather than through complex algorithmic calculations.
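The encoding described above (features as arcs, models as lines whose colour and width carry performance) can be sketched in a few lines of matplotlib. The layout below is a naive circular placement, not the paper's recursive RadialNet construction, and the feature names, models and accuracy scores are made up for illustration.

```python
# Rough illustration of the encoding described in the abstract: features drawn
# as arcs on a circle, each ML model as a line through the features it uses,
# with colour and line width encoding accuracy. NOT the paper's RadialNet layout.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Arc

features = ["age", "bmi", "bp", "chol", "hr"]          # assumed feature names
models = {                                             # assumed (features, accuracy)
    "M1": (["age", "bmi"], 0.71),
    "M2": (["age", "bmi", "bp"], 0.78),
    "M3": (["age", "bmi", "bp", "chol", "hr"], 0.84),
}

angles = {f: 2 * np.pi * i / len(features) for i, f in enumerate(features)}
fig, ax = plt.subplots(figsize=(6, 6))
cmap = plt.cm.viridis

# Feature arcs on the unit circle, with labels just outside.
for f, a in angles.items():
    deg = np.degrees(a)
    ax.add_patch(Arc((0, 0), 2, 2, theta1=deg - 20, theta2=deg + 20, lw=3))
    ax.text(1.15 * np.cos(a), 1.15 * np.sin(a), f, ha="center", va="center")

# Model lines through their features; colour and width encode accuracy.
for name, (feats, acc) in models.items():
    xs = [0.9 * np.cos(angles[f]) for f in feats]
    ys = [0.9 * np.sin(angles[f]) for f in feats]
    ax.plot(xs, ys, color=cmap(acc), lw=1 + 6 * (acc - 0.7),
            label=f"{name} ({acc:.2f})")

ax.set_aspect("equal")
ax.set_xlim(-1.4, 1.4); ax.set_ylim(-1.4, 1.4)
ax.axis("off"); ax.legend(loc="lower right")
plt.show()
```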

    Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics

    Full text link
The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences, or lead to the poor use of limited and valuable resources, in medical diagnosis, financial decision-making, and other high-stakes domains. The issue of ML explanation has therefore experienced a surge of interest from the research community through to application domains. While numerous explanation methods have been explored, evaluations are needed to quantify the quality of explanation methods: to determine whether, and to what extent, the offered explainability achieves the defined objective, and to compare the available explanation methods so that the best explanation can be suggested for a specific task. This survey paper presents a comprehensive overview of methods proposed in the current literature for the evaluation of ML explanations. We identify properties of explainability from a review of definitions of explainability, and use these properties as the objectives that evaluation metrics should achieve. The survey found that quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, while quantitative metrics for attribution-based explanations are primarily used to evaluate the soundness of explanations in terms of fidelity. The survey also showed that subjective measures, such as trust and confidence, have been embraced as the focal point for the human-centered evaluation of explainable systems. The paper concludes that the evaluation of ML explanations is a multidisciplinary research topic, and that it is not possible to define a single implementation of evaluation metrics that can be applied to all explanation methods.
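As one concrete example of the fidelity-style evaluation the survey refers to for attribution-based explanations, a deletion test measures how much a model's prediction quality degrades as the features an explanation ranks highest are removed. The sketch below is a generic illustration of that idea; the model, dataset, and the use of permutation importance as the attribution method are assumptions, not metrics taken from the survey.

```python
# Generic deletion-style fidelity check for an attribution-based explanation:
# if the attribution is faithful, removing the highest-ranked features should
# degrade accuracy faster than removing random ones. Illustrative only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Attribution: global permutation importance (a stand-in explanation method).
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]

def deletion_curve(order, steps=(0, 5, 10, 20)):
    """Accuracy after replacing the top-k ranked features with their mean."""
    scores = []
    for k in steps:
        X_mod = X_te.copy()
        X_mod[:, order[:k]] = X_tr[:, order[:k]].mean(axis=0)   # 'delete' features
        scores.append(model.score(X_mod, y_te))
    return scores

rng = np.random.default_rng(0)
print("ranked deletion:", deletion_curve(ranking))
print("random deletion:", deletion_curve(rng.permutation(X.shape[1])))
```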

    Interactive Machine Learning with Applications in Health Informatics

    Full text link
Recent years have witnessed unprecedented growth of health data, including millions of biomedical research publications, electronic health records, patient discussions on health forums and social media, fitness tracker trajectories, and genome sequences. Information retrieval and machine learning techniques are powerful tools to unlock invaluable knowledge in these data, yet they need to be guided by human experts. Unlike training machine learning models in other domains, labeling and analyzing health data requires highly specialized expertise, and the time of medical experts is extremely limited. How can we mine big health data with little expert effort? In this dissertation, I develop state-of-the-art interactive machine learning algorithms that bring together human intelligence and machine intelligence in health data mining tasks. By making efficient use of human experts' domain knowledge, we can achieve high-quality solutions with minimal manual effort. I first introduce a high-recall information retrieval framework that helps human users efficiently harvest not just one but as many relevant documents as possible from a searchable corpus. This is a common need in professional search scenarios such as medical search and literature review. Then I develop two interactive machine learning algorithms that leverage human experts' domain knowledge to combat the curse of "cold start" in active learning, with applications in clinical natural language processing. A consistent empirical observation is that the overall learning process can be reliably accelerated by a knowledge-driven "warm start", followed by machine-initiated active learning. As a theoretical contribution, I propose a general framework for interactive machine learning. Under this framework, a unified optimization objective explains many existing algorithms used in practice, and inspires the design of new algorithms.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147518/1/raywang_1.pd
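The "knowledge-driven warm start followed by machine-initiated active learning" pattern described above can be illustrated generically: a handful of keyword-rule labels seed the model, after which the learner repeatedly queries the example it is least certain about. The tiny corpus, the keyword rule, and the learner below are stand-ins, not the dissertation's clinical NLP algorithms.

```python
# Generic sketch of a knowledge-driven "warm start" followed by uncertainty
# sampling. The keyword rule and toy corpus are made-up stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "patient denies chest pain", "severe chest pain radiating to arm",
    "routine follow up, no complaints", "acute chest pain with shortness of breath",
    "mild headache, otherwise well", "chest pain relieved by rest",
    "annual physical, normal exam", "crushing chest pain at rest",
]
true_labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])        # hidden 'oracle' labels

X = TfidfVectorizer().fit_transform(docs)

# Warm start: a domain-knowledge keyword rule labels a few obvious cases.
labeled = {i: int("chest pain" in d and "denies" not in d)
           for i, d in enumerate(docs)
           if "chest pain" in d or "normal exam" in d}

model = LogisticRegression()
for step in range(3):                                   # active-learning rounds
    idx = list(labeled)
    model.fit(X[idx], [labeled[i] for i in idx])
    pool = [i for i in range(len(docs)) if i not in labeled]
    if not pool:
        break
    proba = model.predict_proba(X[pool])
    query = pool[int(np.argmin(np.abs(proba[:, 1] - 0.5)))]  # most uncertain
    labeled[query] = int(true_labels[query])            # ask the human oracle
    print(f"round {step}: queried doc {query}: {docs[query]!r}")
```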

    Making machine learning useable by revealing internal states update - a transparent approach

    No full text