    Into the Black Box: Designing for Transparency in Artificial Intelligence

    Indiana University-Purdue University Indianapolis (IUPUI)

    The rapid infusion of artificial intelligence into everyday technologies means that consumers will soon interact daily with intelligent systems that provide suggestions and recommendations. While these technologies promise much, their current lack of transparency risks confusing end-users, limiting their market viability. Although efforts are underway to make machine learning models more transparent, HCI currently lacks an understanding of how model-generated explanations should best translate into the practicalities of system design. To address this gap, my research took a pragmatic approach to improving system transparency for end-users. Through a series of three studies, I investigated the need for and value of transparency to end-users, and explored methods to improve system designs to achieve greater transparency in intelligent systems offering recommendations. My research produced a summarized taxonomy outlining the motivations for why users ask questions of intelligent systems, useful for considering the type and category of information users might appreciate when interacting with AI-based recommendations. I also developed a categorization of explanation types, known as explanation vectors, organized into groups that correspond to user knowledge goals. Explanation vectors give system designers options for delivering explanations of system processes beyond basic explainability. I further developed a detailed user typology, a four-factor categorization of the predominant attitudes and opinion schemes of everyday users interacting with AI-based recommendations, useful for understanding the range of user sentiment towards AI-based recommender features and possibly for tailoring interface design by user type. Lastly, I developed and tested an evaluation method, the System Transparency Evaluation Method (STEv), which allows real-world systems and prototypes to be evaluated and improved through a low-cost query method. The results offer concrete direction to interaction designers on how these findings might manifest in the design of interfaces that are more transparent to end users. These studies provide a framework and methodology complementary to existing HCI evaluation methods, and lay the groundwork upon which other research into improving system transparency might build.

    Varieties of interpretation in educational research: how we frame the project

    Information Markets and Nonmarkets

    As large amounts of data become available and can be communicated more easily and processed more effectively, information has come to play a central role for economic activity and welfare in our age. This essay surveys contributions to the industrial organization of information markets and nonmarkets, while attempting to maintain a balance between foundational frameworks and more recent developments. We start by reviewing mechanism-design approaches to modeling the trade of information. We then cover ratings, predictions, and recommender systems. We turn to forecasting contests, prediction markets, and other institutions designed for collecting and aggregating information from decentralized participants. Finally, we discuss science as a prototypical information nonmarket with participants who interact in a non-anonymous way to produce and disseminate information. We aim to make the reader familiar with the central notions and insights in this burgeoning literature, and also point to some critical open questions that future research will have to address.

    Modeling aggregated expertise of user contributions to assess the credibility of OpenStreetMap features + Erratum

    The emergence of volunteered geographic information (VGI) during the past decade has fueled a wide range of research and applications. The assessment of VGI quality and fitness-for-use remains a challenge because of the non-standardized, crowdsourced data collection process, as well as the unknown skill and motivation of the contributors. The frequent approach of assessing VGI quality against external data sources using ISO quality standard measures is problematic because reference data are often unavailable, and because for certain types of features, VGI may be more up-to-date than the reference data. Therefore, a VGI-intrinsic measure of quality is highly desirable. This study proposes such an intrinsic measure by developing the concept of aggregated expertise based on the characteristics of a feature's contributors. The article further operationalizes this concept and examines its feasibility through a case study using OpenStreetMap (OSM). The comparison of modeled OSM feature quality with information from a field survey demonstrates the successful implementation of this novel approach.
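    To make the idea of aggregated expertise concrete, here is a minimal sketch of how per-contributor characteristics might be combined into a feature-level score. All signal names and weights below are hypothetical illustrations, not the published model:

    ```python
    # Illustrative sketch: combine simple per-contributor activity signals
    # into a single feature-level "aggregated expertise" score in [0, 1].
    # The signals and weights are assumptions for illustration only.

    def contributor_expertise(edit_count, days_active, local_edit_share):
        """Score one contributor from basic activity characteristics."""
        experience = min(edit_count / 1000.0, 1.0)   # saturating edit history
        tenure = min(days_active / 365.0, 1.0)       # saturating account age
        locality = local_edit_share                  # share of edits near the feature
        return 0.4 * experience + 0.3 * tenure + 0.3 * locality

    def aggregated_expertise(contributors):
        """Aggregate the scores of all contributors who edited a feature."""
        scores = [contributor_expertise(**c) for c in contributors]
        # Let the most expert contributor dominate; other aggregations
        # (mean, edit-weighted sum) are equally plausible choices.
        return max(scores) if scores else 0.0

    feature_contributors = [
        {"edit_count": 2500, "days_active": 800, "local_edit_share": 0.9},
        {"edit_count": 40, "days_active": 30, "local_edit_share": 0.2},
    ]
    score = aggregated_expertise(feature_contributors)
    ```

    A score like this is intrinsic: it uses only the contribution history already in the database, with no external reference dataset required.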