1,840 research outputs found

    Assessing Demand for Transparency in Intelligent Systems Using Machine Learning

    Get PDF
    Intelligent systems offering decision support can lessen cognitive load and improve the efficiency of decision making in a variety of contexts. These systems assist users by evaluating multiple courses of action and recommending the right action at the right time. Modern intelligent systems using machine learning introduce new capabilities in decision support, but they can come at a cost. Machine learning models provide little explanation of their outputs or reasoning process, making it difficult to determine when it is appropriate to trust them, or, if not, what went wrong. To improve trust and ensure appropriate reliance on these systems, users must be afforded increased transparency, enabling an understanding of the system's reasoning and an explanation of its predictions or classifications. Here we discuss the salient factors in designing transparent intelligent systems using machine learning, and present the results of a user-centered design study. We propose design guidelines derived from our study, and discuss next steps for designing for intelligent system transparency.

    Physical pixels

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000. Includes bibliographical references (leaves 48-51). The picture element, or pixel, is a conceptual unit of representation for digital information. Like all data structures of the computer, pixels are invisible and therefore require an output device to be seen. The physical unit of display, or physical pixel, can be any form that makes the pixel visible. Pixels are often represented as the electronically addressable phosphors of a video monitor, but the potential for different visualizations inspires the development of novel phenotypes. Four new systems of physical pixels are presented: Nami, Peano, the Digital Palette and 20/20 Refurbished. In each case, the combination of material, hardware and software design results in a unique visualization of computation. The chief contribution of this research is the articulation of a mode of artistic practice in which custom units of representation integrate physical and digital media to engender a new art. By Kelly Bowman Heaton. S.M.

    Physiological Indicators for User Trust in Machine Learning with Influence Enhanced Fact-Checking

    Full text link
    © IFIP International Federation for Information Processing 2019. Trustworthy Machine Learning (ML) is one of the significant challenges of "black-box" ML, given its wide impact on practical applications. This paper investigates whether presenting the influence of individual training data points on machine learning predictions boosts user trust. A fact-checking framework for boosting user trust is proposed in a predictive decision making scenario: it allows users to interactively check training data points with different influences on the prediction using a parallel-coordinates-based visualization. This work also investigates the feasibility of physiological signals such as Galvanic Skin Response (GSR) and Blood Volume Pulse (BVP) as indicators of user trust in predictive decision making. A user study found that presenting the influences of training data points significantly increases user trust in predictions, but only for training data points with higher influence values under the high model performance condition, where users can justify their actions with facts more similar to the testing data point. The physiological signal analysis showed that GSR and BVP features correlate with user trust under different influence and model performance conditions. These findings suggest that physiological indicators can be integrated into the user interface of AI applications to automatically communicate user trust variations in predictive decision making.
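
    The abstract does not specify how the influence values are computed; as a rough illustration only (an assumed setup with scikit-learn, not the paper's method), the following sketch scores each training point by the change it causes in a single prediction when it is left out:

        # Illustrative leave-one-out influence sketch; dataset, model, and the
        # influence definition are assumptions, not taken from the paper.
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression

        X, y = load_breast_cancer(return_X_y=True)
        X_train, y_train, x_test = X[:-1], y[:-1], X[-1:]

        base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        base_prob = base.predict_proba(x_test)[0, 1]

        influences = np.zeros(len(X_train))
        for i in range(len(X_train)):
            mask = np.arange(len(X_train)) != i              # drop one training point
            m = LogisticRegression(max_iter=1000).fit(X_train[mask], y_train[mask])
            # influence = change in the predicted probability when point i is removed
            influences[i] = base_prob - m.predict_proba(x_test)[0, 1]

        top = np.argsort(np.abs(influences))[::-1][:5]       # points a fact-checking UI might surface
        print(top, influences[top])

    Points with the largest absolute influence are the ones a fact-checking interface of this kind would surface for users to inspect alongside the prediction.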

    Assessing the Value of Transparency in Recommender Systems: An End-User Perspective

    Get PDF
    Recommender systems, especially those built on machine learning, are increasing in popularity, as well as in complexity and scope. Systems that cannot explain their reasoning to end-users risk losing trust and failing to achieve acceptance. Users demand interfaces that afford them insight into a system's internal workings, allowing them to build appropriate mental models and calibrated trust. Building interfaces that provide this level of transparency, however, is a significant design challenge, with many competing design features and little empirical research to guide implementation. We investigated how end-users of recommender systems value different categories of information when deciding what to do with computer-generated recommendations in contexts involving high risk to themselves or others. The findings will inform future design of decision support in high-criticality contexts.

    End-User Development for Interactive Data Analytics: Uncertainty, Correlation and User Confidence

    Get PDF

    Towards Explainability for AI Fairness

    Full text link
    AI explainability is becoming indispensable for allowing users to gain insight into an AI system's decision-making process. Meanwhile, fairness is another rising concern: algorithmic predictions may be misaligned with the designer's intent or with social expectations, for example by discriminating against specific groups. In this work, we provide a state-of-the-art overview of the relations between explanation and AI fairness, and especially of the role explanation plays in humans' fairness judgements. The investigations demonstrate that fair decision making requires extensive contextual understanding, and that AI explanations help identify the variables potentially driving unfair outcomes. It is found that different types of AI explanation affect humans' fairness judgements differently. Certain properties of features, as well as social science theories, need to be considered when making sense of fairness with explanations. Several challenges are identified for building responsible AI for trustworthy decision making from the perspectives of explainability and fairness.
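
    As a concrete, hedged example of the kind of fairness judgement the overview discusses (synthetic data, a plain demographic-parity check, and coefficient-based explanation are all assumptions, not the paper's material):

        # Illustrative sketch: a group fairness check paired with a simple
        # explanation of which features drive the disparity.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 2000
        group = rng.integers(0, 2, n)                    # protected attribute (0/1)
        x1 = rng.normal(size=n) + 0.8 * group            # feature correlated with the group
        x2 = rng.normal(size=n)
        y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0).astype(int)

        X = np.column_stack([x1, x2])
        model = LogisticRegression().fit(X, y)
        pred = model.predict(X)

        # Demographic parity difference: gap in positive prediction rates between groups.
        dp_gap = pred[group == 1].mean() - pred[group == 0].mean()
        print(f"demographic parity gap: {dp_gap:.3f}")

        # A crude "explanation": the coefficients show x1 (the group-correlated feature)
        # contributes most, flagging it as a candidate driver of the unfair outcome.
        print(dict(zip(["x1", "x2"], model.coef_[0].round(3))))

    The explanation here only points to a candidate variable; as the abstract notes, judging whether the resulting disparity is actually unfair still requires contextual and social understanding.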

    Facilitating machine learning model comparison and explanation through a radial visualisation

    Full text link
    Building an effective Machine Learning (ML) model for a data set is a difficult task involving various steps. One of the most important steps is to compare a substantial number of generated ML models to find the optimal one for deployment. It is challenging to compare models with a dynamic number of features. Comparison involves more than finding differences in ML model performance: users are also interested in the relations between features and model performance, such as feature importance, for ML explanations. This paper proposes RadialNet Chart, a novel visualisation approach, to compare ML models trained with different numbers of features of a given data set while revealing implicit dependent relations. In RadialNet Chart, ML models and features are represented by lines and arcs, respectively. The lines are generated efficiently using a recursive function. The dependence of ML models on a dynamic number of features is encoded into the structure of the visualisation, where ML models and their dependent features are revealed directly from the related line connections. ML model performance is encoded with colour and line width in RadialNet Chart. Taken together with the structure of the visualisation, feature importance can be discerned directly in RadialNet Chart for ML explanations. Compared with other commonly used visualisation approaches, RadialNet Chart helps simplify the ML model comparison process: it is more efficient at focusing users' attention on the visual elements of interest, and it makes it easier to compare ML performance to find the optimal model and to discern important features visually and directly, rather than through complex algorithmic calculations, for ML explanations.
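
    The chart itself is not reproduced here, but a minimal sketch of the underlying comparison task (an assumed setup using scikit-learn's wine dataset; the radial layout is only hinted at in comments) shows how models trained on different feature subsets and their scores could be collected for such a visualisation:

        # Illustrative sketch of the data RadialNet Chart visualises, not the chart itself:
        # train models on different feature subsets and record their performance.
        from itertools import combinations
        from sklearn.datasets import load_wine
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        X, y = load_wine(return_X_y=True)
        feature_names = load_wine().feature_names

        results = []
        for k in (2, 3, 4):                                     # models with different numbers of features
            for subset in list(combinations(range(5), k))[:5]:  # a few subsets per size, for brevity
                score = cross_val_score(
                    RandomForestClassifier(n_estimators=50, random_state=0),
                    X[:, list(subset)], y, cv=3,
                ).mean()
                results.append((tuple(feature_names[i] for i in subset), score))

        # Each (feature subset, score) pair would become one "line" in a radial layout,
        # with colour and line width encoding the score and arcs representing the features.
        for feats, score in sorted(results, key=lambda r: -r[1])[:5]:
            print(f"{score:.3f}  {feats}")

    Features shared across high-scoring subsets are the ones such a layout would make visually prominent, which is what the paper means by discerning feature importance directly from the chart.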

    Information for Impact: Liberating Nonprofit Sector Data

    Get PDF
    This paper explores the costs and benefits of four avenues for achieving open Form 990 data: a mandate for e-filing, an IRS initiative to turn Form 990 data into open data, a third-party platform that would create an open database for Form 990 data, and a priori electronic filing. Sections also discuss the life and usage of 990 data. With bibliographical references.