
    Designing for the Human in the Loop: Transparency and Control in Interactive Machine Learning

    Interactive machine learning techniques inject domain expertise to improve or adapt models. Prior research has focused on adapting underlying algorithms and optimizing system performance, at the expense of user experience. This dissertation advances our understanding of how to design for human-machine collaboration--improving both user experience and system performance--through four studies of end users' experience, perceptions, and behaviors with interactive machine learning systems. In particular, we focus on two critical aspects of interactive machine learning: how systems explain themselves to users (transparency) and how users provide feedback to or guide systems (control). We first explored how explanations shape users' experience of a simple text classifier, with and without the ability to provide feedback to it. Users were frustrated when given explanations without a means of providing feedback, and they expected the model to improve over time even in the absence of feedback. To explore transparency and control in the context of more complex models and subjective tasks, we chose an unsupervised machine learning case: topic modeling. First, we developed a novel topic visualization technique and compared it against common topic representations (e.g., word lists) for interpretability. While users quickly understood topics from simple word lists, our visualization exposed phrases that other representations obscured. Next, we developed a novel, "human-centered" interactive topic modeling system supporting users' desired control mechanisms. A formative user study with this system identified two aspects of control exposed by transparency: adherence, or whether models incorporate user feedback as expected, and stability, or whether other unexpected model updates occur. Finally, we further studied adherence and stability by comparing user experience across three interactive topic modeling approaches. These approaches incorporate input differently, resulting in varied adherence, stability, and update speeds. Participants disliked slow updates most, followed by lack of adherence. Instability was polarizing: some participants liked it when it surfaced interesting information, while others did not. Across modeling approaches, participants differed only in whether they noticed adherence. This dissertation contributes to our understanding of how end users comprehend and interact with machine learning models and provides guidelines for designing systems for the "human in the loop."
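    The adherence and stability notions above lend themselves to simple operationalizations. Below is a minimal sketch, assuming topics are represented as ranked word lists; the function names and the Jaccard-based measures are illustrative stand-ins, not the dissertation's actual metrics.

```python
# Illustrative measures of adherence and stability between two
# topic-model states, before and after a user's feedback.
# Topics are represented as lists of their top words.

def jaccard(a, b):
    """Jaccard similarity between two word sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def adherence(topic_after, requested_words):
    """Fraction of the user's requested words that the updated
    topic actually incorporates (1.0 = full adherence)."""
    top = set(topic_after)
    return sum(w in top for w in requested_words) / len(requested_words)

def stability(topics_before, topics_after, edited_idx):
    """Mean similarity of the topics the user did NOT touch;
    low values mean unexpected updates elsewhere in the model."""
    untouched = [
        jaccard(tb, ta)
        for i, (tb, ta) in enumerate(zip(topics_before, topics_after))
        if i != edited_idx
    ]
    return sum(untouched) / len(untouched)

# Example: the user asked topic 0 to include "vaccine".
before = [["flu", "cold", "fever"], ["stocks", "market", "trade"]]
after  = [["flu", "vaccine", "fever"], ["stocks", "market", "trade"]]
print(adherence(after[0], ["vaccine"]))        # 1.0: feedback incorporated
print(stability(before, after, edited_idx=0))  # 1.0: other topics unchanged
```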

    Harnessing the Power of LLMs: Evaluating Human-AI Text Co-Creation through the Lens of News Headline Generation

    To explore how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process, we compared common human-AI interaction types (e.g., guiding the system, selecting from system outputs, post-editing outputs) in the context of LLM-assisted news headline generation. While LLMs alone can, on average, generate satisfactory news headlines, human control is needed to fix undesirable model outputs. Of the interaction methods, guiding and selecting model output added the most benefit at the lowest cost (in time and effort). Further, AI assistance did not harm participants' perception of control compared to freeform editing.
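    The three interaction types compared above map naturally onto three ways of calling a text-generation model. A minimal sketch follows; the `generate` function is a placeholder for whatever LLM API is used, an assumption of this sketch rather than the paper's implementation.

```python
# A sketch of three human-AI interaction types for headline generation:
# guiding the system, selecting from its outputs, and post-editing.
from typing import Callable, List

def generate(prompt: str, n: int = 1) -> List[str]:
    """Stand-in for an LLM call returning n candidate headlines (assumed)."""
    raise NotImplementedError("plug in an LLM client here")

def guide(article: str, guidance: str) -> str:
    # Human steers generation up front (e.g., "focus on the cost angle").
    return generate(f"Write a headline for this article. {guidance}\n\n{article}")[0]

def select(article: str, choose: Callable[[List[str]], str]) -> str:
    # Model proposes several candidates; the human picks the best one.
    return choose(generate(f"Write a headline:\n\n{article}", n=5))

def post_edit(article: str, edit: Callable[[str], str]) -> str:
    # Model drafts one headline; the human revises it freely.
    return edit(generate(f"Write a headline:\n\n{article}")[0])
```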

    From "Explainable AI" to "Graspable AI"

    Since the advent of Artificial Intelligence (AI) and Machine Learning (ML), researchers have asked how intelligent computing systems could interact with and relate to their users and their surroundings, leading to debates around biased AI systems, black-box ML, user trust, users' perception of control over the system, and system transparency, to name a few. All of these issues concern how humans interact with AI or ML systems through an interface that uses different interaction modalities. Prior studies address these issues from a variety of perspectives, spanning from understanding and framing the problems through ethics and Science and Technology Studies (STS) lenses to finding effective technical solutions. What is shared among almost all of these efforts is an assumption that if systems can explain the how and why of their predictions, people will have a better sense of control, will therefore trust such systems more, and may even be able to correct their shortcomings. This research field has been called Explainable AI (XAI). In this studio, we take stock of prior efforts in this area, but we focus on Tangible and Embodied Interaction (TEI) as an interaction modality for understanding ML. We note that the affordances of physical forms and their behaviors can potentially contribute not only to the explainability of ML systems but also to an open environment for criticism. This studio seeks both to critique explainable ML terminology and to map the opportunities that TEI can offer HCI for designing more sustainable, graspable, and just intelligent systems.

    Understanding the behavior of Prometheus and Pandora

    We revisit the dynamics of Prometheus and Pandora, two small moons flanking Saturn's F ring. Departures of their orbits from freely precessing ellipses result from mutual interactions via their 121:118 mean motion resonance. Motions are chaotic because the resonance is split into four overlapping components. Orbital longitudes were observed to drift away from Voyager predictions, and a sudden jump in mean motions took place close to the time at which the orbits' apses were antialigned in 2000. Numerical integrations reproduce both the longitude drifts and the jumps. The latter have been attributed to the greater strength of interactions near apse antialignment (every 6.2 years), and it has been assumed that this drift-jump behavior will continue indefinitely. We re-examine the dynamics by analogy with that of a nearly adiabatic, parametric pendulum. In terms of this analogy, the current value of the action of the satellite system is close to its maximum in the chaotic zone. Consequently, at present, the two separatrix crossings per precessional cycle occur close to apse antialignment. In this state, libration occurs only when the potential's amplitude is nearly maximal, and the 'jumps' in mean motion arise during the short intervals of libration that separate long stretches of circulation. Because chaotic systems explore the entire region of phase space available to them, we expect that at other times the system would be found in states of medium or low action. In a low-action state it would spend most of the time in libration, and separatrix crossings would occur near apse alignment. We predict that transitions between these different states can happen in as little as a decade. It is therefore incorrect to assume that sudden changes in the orbits only happen near apse antialignment.
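    The parametric-pendulum analogy can be stated compactly. The following is a sketch of the standard picture in generic notation, not the paper's exact model: the resonant argument behaves like a pendulum whose restoring strength is slowly modulated by the apse angle, and the orbit-averaged action is the near-adiabatic invariant whose value determines where in the modulation cycle separatrix crossings occur.

```latex
% Sketch of the nearly adiabatic, parametric-pendulum analogy.
% Notation is generic and illustrative: \varphi is the resonant
% argument, \Delta\varpi the apse angle between the two moons.
H(\varphi, p, t) = \tfrac{1}{2}\,p^{2} - \omega^{2}(t)\cos\varphi,
\qquad
\omega^{2}(t) = \omega_{0}^{2}\bigl[\,1 - \epsilon\cos\Delta\varpi(t)\,\bigr],
% so the potential well is deepest near apse antialignment
% (\Delta\varpi = \pi, recurring every ~6.2 years).
J = \frac{1}{2\pi}\oint p\,\mathrm{d}\varphi
% J is conserved except at separatrix crossings. In the present
% high-action state the system circulates and librates only briefly
% when \omega^{2}(t) is near its maximum, producing the observed
% 'jumps' in mean motion; in a low-action state it would mostly
% librate, with crossings near apse alignment instead.
```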

    Uncertainty in current and future health wearables

    Expect inherent uncertainties in health-wearables data to complicate future decision making concerning user health.

    Emerging Perspectives in Human-Centered Machine Learning

    Current Machine Learning (ML) models can make predictions that are as good as or better than those made by people. The rapid adoption of this technology puts it at the forefront of systems that impact the lives of many, yet the consequences of this adoption are not fully understood. Work at the intersection of people's needs and ML systems is therefore more relevant than ever. This area of work, dubbed Human-Centered Machine Learning (HCML), rethinks ML research and systems in terms of human goals. HCML gathers an interdisciplinary group of HCI and ML practitioners, each bringing their unique yet related perspectives. This one-day workshop is a successor to Gillies et al. (2016) and focuses on recent advancements and emerging areas in HCML. We aim to discuss different perspectives on these areas and articulate a coordinated research agenda for the 21st century.

    Fine-mapping of the HNF1B multicancer locus identifies candidate variants that mediate endometrial cancer risk.

    Common variants in the hepatocyte nuclear factor 1 homeobox B (HNF1B) gene are associated with the risk of Type II diabetes and multiple cancers. Evidence to date indicates that cancer risk may be mediated via genetic or epigenetic effects on HNF1B gene expression. We previously found single-nucleotide polymorphisms (SNPs) at the HNF1B locus to be associated with endometrial cancer, and now report extensive fine-mapping and in silico and laboratory analyses of this locus. Analysis of 1184 genotyped and imputed SNPs in 6608 Caucasian cases and 37,925 controls, and 895 Asian cases and 1968 controls, revealed the best signal of association for SNP rs11263763 (P = 8.4 × 10^(-14), odds ratio = 0.86, 95% confidence interval = 0.82-0.89), located within HNF1B intron 1. Haplotype analysis and conditional analyses provide no evidence of further independent endometrial cancer risk variants at this locus. SNP rs11263763 genotype was associated with HNF1B mRNA expression but not with HNF1B methylation in endometrial tumor samples from The Cancer Genome Atlas. Genetic analyses prioritized rs11263763 and four other SNPs in high-to-moderate linkage disequilibrium as the most likely causal SNPs. Three of these SNPs map to the extended HNF1B promoter based on chromatin marks extending from the minimal promoter region. Reporter assays demonstrated that this extended region reduces activity in combination with the minimal HNF1B promoter, and that the minor alleles of rs11263763 or rs8064454 are associated with decreased HNF1B promoter activity. Our findings provide evidence for a single signal associated with endometrial cancer risk at the HNF1B locus, and that risk is likely mediated via altered HNF1B gene expression.
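    For context on the association statistics quoted above, the odds ratio and its 95% confidence interval follow the standard log-odds (Woolf) calculation. A minimal sketch below uses made-up allele counts, not the study's data.

```python
# Standard Woolf (log) method for an allelic odds ratio and its 95% CI
# from a 2x2 table of minor/major allele counts in cases vs. controls.
# The counts below are invented for illustration; they are NOT study data.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a, b: minor/major allele counts in cases; c, d: in controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=4000, b=9216, c=25000, d=50850)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
# An OR below 1 (like the reported 0.86) means the minor allele is
# associated with reduced risk; a CI excluding 1 indicates significance.
```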