6 research outputs found

    Visual interactive grouping: follow the leader!


    Assisting Users with Clustering Tasks by Combining Metric Learning and Classification

    Interactive clustering refers to situations in which a human labeler is willing to assist a learning algorithm in automatically clustering items. We present a related but somewhat different task, assisted clustering, in which a user creates explicit groups of items from a large set and wants suggestions on what items to add to each group. While the traditional approach to interactive clustering has been to use metric learning to induce a distance metric, our setting seems equally amenable to classification. Using clusterings of documents from human subjects, we found that one or the other method proved superior for a given cluster, but not uniformly so. We thus developed a hybrid mechanism for combining the metric learner and the classifier. We present results from a large number of trials based on human clusterings, in which we show that our combination scheme matches and often exceeds the performance of a method that exclusively uses either type of learner.
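The hybrid idea can be sketched as a toy in Python. This is an illustrative sketch under my own assumptions, not the paper's actual mechanism: the "metric learner" is approximated by a variance-scaled distance to the group centroid, the classifier is a tiny logistic regression, and the two rankings are merged by rank averaging (the paper's combination scheme is not specified in the abstract).

```python
import numpy as np

def metric_scores(X, group_idx, eps=1e-6):
    """Score all items by negative distance to the group centroid,
    using per-feature variance scaling as a crude stand-in for a
    learned metric."""
    mu = X[group_idx].mean(axis=0)
    var = X.var(axis=0) + eps                  # global per-feature scale
    return -((X - mu) ** 2 / var).sum(axis=1)

def classifier_scores(X, group_idx, other_idx, steps=200, lr=0.1):
    """Score all items with a tiny logistic-regression classifier
    trained on group (1) vs. other labeled items (0)."""
    idx = np.concatenate([group_idx, other_idx])
    y = np.concatenate([np.ones(len(group_idx)), np.zeros(len(other_idx))])
    Xl = X[idx]
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):                     # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(Xl @ w + b)))
        g = p - y
        w -= lr * (Xl.T @ g) / len(y)
        b -= lr * g.mean()
    return X @ w + b

def hybrid_suggestions(X, group_idx, other_idx, k=3):
    """Suggest k unlabeled items for the group by averaging the
    rank-normalized scores of both learners."""
    labeled = set(group_idx) | set(other_idx)
    cand = np.array([i for i in range(len(X)) if i not in labeled])
    s1 = metric_scores(X, np.array(group_idx))[cand]
    s2 = classifier_scores(X, np.array(group_idx), np.array(other_idx))[cand]
    r1 = s1.argsort().argsort()                # higher score -> higher rank
    r2 = s2.argsort().argsort()
    return cand[np.argsort(-(r1 + r2))][:k]
```

On two well-separated synthetic clusters, seeding a group with a few items from one cluster makes both scorers, and hence the hybrid, prefer the remaining items of that cluster.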

    Designing AI Experiences: Boundary Representations, Collaborative Processes, and Data Tools

    Artificial Intelligence (AI) has transformed our everyday interactions with technology through automation, intelligence augmentation, and human-machine partnership. Nevertheless, we regularly encounter undesirable and often frustrating experiences due to AI. A fundamental challenge is that existing software practices for coordinating system and experience designs fall short when creating AI for diverse human needs, i.e., "human-centered AI" or HAI. "AI-first" development workflows allow engineers to develop the AI components first, after which user experience (UX) designers create end-user experiences around the AI's capabilities. Consequently, engineers suffer from end-user blindness when making critical decisions about AI training data needs, implementation logic, behavior, and evaluation. In the conventional "UX-first" process, UX designers lack the technical understanding of AI capabilities (technological blindness) needed to shape system design from the ground up. Human-AI design guidelines have been offered to help, but they neither describe nor prescribe ways to bridge the gaps in expertise needed to create HAI. In this dissertation, I investigate collaboration approaches between designers and engineers to operationalize the vision of HAI as technology inspired by human intelligence that augments human abilities while addressing societal needs. In a series of studies combining technical HCI research with qualitative studies of AI production in practice, I contribute (1) an approach to software development that blurs rigid design-engineering boundaries, (2) a process model for co-designing AI experiences, and (3) new methods and tools that make AI accessible to UX designers. Key findings from interviews with industry practitioners include the need for "leaky" abstractions shared between UX and AI designers.
Because modular development and separation of concerns fail with HAI design, leaky abstractions afford collaboration across expertise boundaries and support human-centered design solutions through vertical prototyping and constant evaluation. Further, by observing how designers and engineers collaborate on HAI design in an in-lab study, I highlight the role of design "probes" with user data in establishing common ground between AI system and UX design specifications, providing a critical tool for shaping HAI design. Finally, I offer two design methods and tool implementations, Data-Assisted Affinity Diagramming and Model Informed Prototyping, for incorporating end-user data into HAI design. HAI is necessarily a multidisciplinary endeavor, and human data (in multiple forms) is the backbone of AI systems. My dissertation contributions inform how stakeholders with differing expertise can collaboratively design AI experiences by reducing friction across expertise boundaries and maintaining agency within team roles. The data-driven methods and tools I created provide direct support for software teams to tackle the novel challenges of designing with data. Finally, this dissertation offers guidance for imagining future design tools for human-centered systems that are accessible to diverse stakeholders.

    Ph.D. (Information), University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/169917/1/harihars_1.pd

    Empowering users to communicate their preferences to machine learning models in Visual Analytics

    Recent visual analytics (VA) systems rely on machine learning (ML) to let users perform a variety of data-analytic tasks: biologists clustering genome samples, medical practitioners predicting the diagnosis for a new patient, ML practitioners tuning models' hyperparameter settings, and so on. These VA systems support interactive construction of models for people (whom I call power users) with a diverse range of ML expertise, from non-experts to intermediates to experts. Through my research, I designed and developed VA systems for power users, empowering them to communicate their preferences in order to interactively construct ML models for their analytical tasks. In this process, I designed algorithms to incorporate user interaction data into ML modeling pipelines. Specifically, I deployed and tested (e.g., via task completion times, user satisfaction ratings, success rates in finding user-preferred models, and model accuracies) two main interaction techniques, multi-model steering and interactive objective functions, to facilitate the specification of user goals and objectives to the underlying model(s) in VA. Designing these VA systems for power users poses various challenges, however, such as addressing diversity in user expertise, metric selection, user modeling to automatically infer preferences, and evaluating the success of these systems. Through this work, I contribute a set of VA systems that support interactive construction and selection of supervised and unsupervised models using tabular data. In addition, I present results and findings from a design study of interactive ML in a specific domain with real users and real data.
    Ph.D.
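    The idea behind an interactive objective function can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions (the function name, the metric names, and the weighted-sum form are hypothetical, not the dissertation's actual formulation): the user expresses preferences, e.g. via UI sliders, as weights over per-model quality metrics, and the system ranks candidate models by the weighted objective.

```python
def interactive_objective(model_metrics, weights):
    """Rank candidate models by a user-weighted objective.

    model_metrics: dict of model name -> dict of metric -> value,
                   where higher values are better.
    weights:       user-chosen importance per metric (e.g. slider
                   positions in a VA interface).
    Returns the best model name and all normalized scores.
    """
    total = sum(weights.values())
    scores = {
        name: sum(weights[m] * v for m, v in metrics.items()) / total
        for name, metrics in model_metrics.items()
    }
    return max(scores, key=scores.get), scores

# Hypothetical candidate clustering models and metric values:
models = {
    "kmeans_k5": {"separation": 0.9, "stability": 0.4},
    "kmeans_k3": {"separation": 0.6, "stability": 0.8},
}
```

    Moving the weights, e.g. from emphasizing separation to emphasizing stability, changes which model the objective selects, which is exactly the steering behavior such an interface exposes.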