11 research outputs found

    Perspectives on Incorporating Expert Feedback into Model Updates

    Full text link
    Machine learning (ML) practitioners are increasingly tasked with developing models that are aligned with non-technical experts' values and goals. However, there has been insufficient consideration of how practitioners should translate domain expertise into ML updates. In this paper, we consider how to capture interactions between practitioners and experts systematically. We devise a taxonomy to match expert feedback types with practitioner updates. A practitioner may receive feedback from an expert at the observation or domain level, and convert this feedback into updates to the dataset, loss function, or parameter space. We review existing work from ML and human-computer interaction to describe this feedback-update taxonomy, and highlight how little attention has been given to incorporating feedback from non-technical experts. We end with a set of open questions that naturally arise from our proposed taxonomy and subsequent survey.
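    A minimal sketch of how such a feedback-update taxonomy might be encoded. The enum values follow the feedback levels and update targets named in the abstract, but the mapping function and all identifiers are illustrative assumptions, not the paper's actual scheme.

```python
from enum import Enum, auto
from dataclasses import dataclass

class FeedbackLevel(Enum):
    OBSERVATION = auto()  # feedback about individual predictions or examples
    DOMAIN = auto()       # feedback about general domain rules or constraints

class UpdateTarget(Enum):
    DATASET = auto()          # relabel, reweight, or augment training data
    LOSS_FUNCTION = auto()    # add penalty or constraint terms to the objective
    PARAMETER_SPACE = auto()  # restrict or edit model parameters directly

@dataclass
class ExpertFeedback:
    level: FeedbackLevel
    description: str

def candidate_updates(feedback: ExpertFeedback) -> list[UpdateTarget]:
    """Illustrative mapping from feedback level to plausible update targets."""
    if feedback.level is FeedbackLevel.OBSERVATION:
        return [UpdateTarget.DATASET, UpdateTarget.LOSS_FUNCTION]
    return [UpdateTarget.LOSS_FUNCTION, UpdateTarget.PARAMETER_SPACE]
```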

    FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines

    Full text link
    Even though machine learning (ML) pipelines affect an increasing array of stakeholders, there is little work on how input from stakeholders is recorded and incorporated. We propose FeedbackLogs, addenda to existing documentation of ML pipelines, to track the input of multiple stakeholders. Each log records important details about the feedback collection process, the feedback itself, and how the feedback is used to update the ML pipeline. In this paper, we introduce and formalise a process for collecting a FeedbackLog. We also provide concrete use cases where FeedbackLogs can be employed as evidence for algorithmic auditing and as a tool to record updates based on stakeholder feedback.
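    A minimal sketch of what a single FeedbackLog entry might record, assuming a simple dataclass with one field per detail the abstract mentions (the collection process, the feedback itself, and the resulting pipeline update). The field names are illustrative, not the authors' specification.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FeedbackLogEntry:
    stakeholder: str        # who gave the feedback
    collected_on: date      # when the feedback was collected
    collection_method: str  # e.g. interview, survey, error report
    feedback: str           # the feedback itself, verbatim or summarised
    pipeline_update: str    # how the ML pipeline was changed in response
    rationale: str = ""     # why this update (or no update) was chosen

# Hypothetical usage: append entries as stakeholder feedback arrives.
log: list[FeedbackLogEntry] = []
log.append(FeedbackLogEntry(
    stakeholder="loan officer",
    collected_on=date(2023, 5, 2),
    collection_method="structured interview",
    feedback="Model over-penalises short credit histories.",
    pipeline_update="Re-weighted training examples with history < 2 years.",
))
```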

    The Role of Human Knowledge in Explainable AI

    Get PDF
    As the performance and complexity of machine learning models have grown significantly in recent years, there has been an increasing need to develop methodologies to describe their behaviour. Such a need has mainly arisen due to the widespread use of black-box models, i.e., high-performing models whose internal logic is challenging to describe and understand. Therefore, the machine learning and AI field is facing a new challenge: making models more explainable through appropriate techniques. The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to users, who can then gain a better understanding of its logic, thus increasing the trust and acceptance of the system. Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee the full understandability of explanations from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine learning models. These approaches focus either on collecting human knowledge that AI systems can then employ, or on involving humans in achieving the systems' objectives (e.g., evaluating or improving the system). This article aims to present a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches. Furthermore, a discussion on the challenges, state of the art, and future trends in explainability is also provided.
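    One common human-in-the-loop pattern covered by such overviews is asking users to rate how understandable an explanation is and aggregating those ratings per explanation method. A minimal sketch under that assumption; the function and names are illustrative only.

```python
from statistics import mean
from collections import defaultdict

def aggregate_ratings(ratings: list[tuple[str, int]]) -> dict[str, float]:
    """Average 1-5 understandability ratings per explanation method."""
    by_method: dict[str, list[int]] = defaultdict(list)
    for method, score in ratings:
        by_method[method].append(score)
    return {method: mean(scores) for method, scores in by_method.items()}

# Toy ratings collected from users: (explanation method, rating 1-5)
ratings = [("saliency_map", 3), ("saliency_map", 4), ("counterfactual", 5)]
print(aggregate_ratings(ratings))
```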

    Insights on Learning Tractable Probabilistic Graphical Models

    Get PDF

    Maximizing Insight from Modern Economic Analysis

    Full text link
    The last decade has seen a growing trend of economists exploring how to extract new kinds of economic insight from "big data" sources such as the Web. As economists move towards this model of analysis, their traditional workflow starts to become infeasible. The amount of noisy data from which to draw insights presents data management challenges for economists and limits their ability to discover meaningful information. This forces economists to invest a great deal of energy in training to be data scientists (a catch-all role that has grown to describe the use of statistics, data mining, and data management in the big data age), leaving little time for applying their domain knowledge to the problem at hand. We envision an ideal workflow in which accurate and reliable results are generated in near-interactive time, and systems handle the "heavy lifting" required for working with big data. This dissertation presents several systems and methodologies that bring economists closer to this ideal workflow, helping them address many of the challenges faced in transitioning to working with big data sources like the Web. To help users generate accurate and reliable results, we present approaches to identifying relevant predictors in nowcasting applications, as well as methods for identifying potentially invalid nowcasting models and their inputs. We show how a streamlined workflow, combined with pruning and shared computation, can help handle the heavy lifting of big data analysis, allowing users to generate results in near-interactive time. We also present a novel user model and architecture for helping users avoid undesirable bias when doing data preparation: users interactively define constraints for transformation code and the data that the code produces, and an explain-and-repair system satisfies these constraints as best it can, also providing an explanation for any problems along the way. These systems combined represent a unified effort to streamline the transition for economists to this new big data workflow.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144007/1/dol_1.pd
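    A minimal sketch of the constraint idea described above, assuming constraints are plain predicates over transformed rows and that violations are reported back with a short explanation. The names and structure are illustrative, not the dissertation's actual explain-and-repair system.

```python
from typing import Callable

# A constraint pairs a human-readable description with a predicate over a row.
Constraint = tuple[str, Callable[[dict], bool]]

def check_constraints(rows: list[dict], constraints: list[Constraint]) -> list[str]:
    """Return an explanation for every constraint violation found."""
    problems = []
    for i, row in enumerate(rows):
        for description, predicate in constraints:
            if not predicate(row):
                problems.append(f"row {i}: violates '{description}' ({row})")
    return problems

# Hypothetical usage on transformed data produced by some preparation step.
rows = [{"price": 10.0}, {"price": -3.0}]
constraints = [("price must be non-negative", lambda r: r["price"] >= 0)]
for problem in check_constraints(rows, constraints):
    print(problem)
```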

    The Machinic Imaginary: A Post-Phenomenological Examination of Computational Society

    Get PDF
    The central claim of this thesis is the postulation of a machinic dimension of the social imaginary—a more-than-human process of creative expression of the social world. With the development of machine learning and the sociality of interactive media, computational logics have a creative capacity to produce meaning of a radically machinic order. Through an analysis of computational functions and infrastructures ranging from artificial neural networks to large-scale machine ecologies, the institution of computational logics into the social imaginary is nothing less than a reordering of the conditions of social-historical creation. Responding to dominant technopolitical propositions concerning digital culture, this thesis proposes a critical development of Cornelius Castoriadis’ philosophy of the social imaginary. To do so, a post-phenomenological framework is constructed by tracing a trajectory from Maurice Merleau-Ponty’s late ontological turn, through to the process-relational philosophies of Gilbert Simondon and Castoriadis. Introducing the concept of the machinic imaginary, the thesis maps the extent to which the dynamic, interactive paradigm of twenty-first century computation is changing how meaning is socially instituted in ways incomprehensible to human sense. As social imaginary significations are increasingly created and carried by machines, the articulation of the social diverges into human and non-human worlds. This inaccessibility of the machinic imaginary is a core problematic raised by this thesis, indicating a fragmentation of the social imaginary and a novel form of existential alienation. Any political theorisation of the contemporary social condition must therefore work within this alienation and engage with the transsubjective character of social-historical creation.

    Rationality in Artificial Intelligence Decision-making

    Get PDF
    Artificial intelligence (AI) has become increasingly ubiquitous in a variety of organizations for decision-making, and it promises competitive advantages to those who use it. However, with the novel insights and benefits of AI come unprecedented side-effects and externalities, which circle around a theme of rationality. A rationality for a decision is the reasons, the relationships between the reasons, and the process of their emergence. Lack of access to the decision rationality of AI is poised to cause issues with trust in AI due to lack of fairness and accountability. Moreover, AI rationality in moral decisions is seen to pose threats to reflective moral capabilities. While rationality and agency are both fundamental to decision-making, agency has seen a shift towards more relational views in which the technical and social are seen as inseparable and co-constituting of each other. However, discussions of AI rationality are still heavily entrenched in a dualism that has been overcome regarding agency, and this entrenchment can contribute to a variety of the issues noted around AI. Moreover, while the types of AI rationality have been considered theoretically, the field currently lacks empirical work to support the discussions revolving around AI rationality. This dissertation uses postphenomenology as a methodology to study empirically how AI in decision-making impacts rationality. Postphenomenology honours anti-dualistic agency: technology mediates and co-constitutes agency with people in intra-action. This dissertation uses this approach to study the mediation of rationality, and thus helps views on rationality catch up with agency in overcoming unnecessary dualism. The posed research question is “How does AI mediate rationality in decision-making?” Postphenomenological analysis is meant to be used at the level of the technological mediations of a specific technology, such as AI mediation of rationality in decision-making. Mediations can be considered in dimensions; this dissertation considers the revealing–concealing, enabling–constraining, and involving–alienating dimensions of mediation to answer the posed research question.
    In postphenomenology, a basis for analysis is provided by empirical works, which are typically case studies of concrete intra-actions between humans and technologies. Postphenomenology as a methodology allows secondary empirical work by others, primary self-conducted studies, and first-person reflection as the basis for empirical case analysis. Thus, while the publications of this dissertation are not published as case studies, postphenomenology considers them as such, making this dissertation a multiple case study. The first four publications are empirical works of applied AI with various combinations of human and AI decision-making tasks on different yet comparable data. Data and methodology remain similar across the empirical publications and are well suited to postphenomenological analysis as case studies. The last publication is a theoretical paper, which complements the empirical publications on the involving–alienating dimension. AI was found to conceal decision rationality in various stages of AI decision-making, while in some cases AI also revealed possibilities for specific, novel rationalities. Two levels of rationality concealment were discovered: the contents of a rationality could become concealed, but also the presence of a rationality in the first place could become concealed. Rationality became more abstract and formalized regardless of whether the rationality was constructed with an AI or not. This formalization constrained rationality by ruling out other valid rationalities. Constraint also arose because rationalities necessarily took the specific form of similarities versus differences in the data. The results suggest that people can become involved in their alienation from rationality in AI decision-making. Study of the relationships between the mediation dimensions suggests that the constraint of formalization was revealing when combined with involvement; otherwise, formalization was both concealed because of, and resulted in, alienation from AI in decision-making. The results point in the direction that people may be involved in their own alienation via rationality concealment. This dissertation contributes new insights and levels of analysis for AI rationality in decision-making and its moral implications. It provides testable claims about technological mediations that can be used to develop theory, and posits that they can be useful in theorizing how to increase AI fairness, accountability, and transparency. Moreover, the dissertation contributes to the field of rationality in management and organizational decision-making by developing rationality beyond unnecessary dualism. For practitioners, the findings help identify the relevant AI mediations in decision-making to consider for ensuring successful AI adoption and mitigating its issues in their specific contexts.

    Human-in-the-loop feature selection

    No full text
    Feature selection is a crucial step in the design of machine learning models, yet it is often performed via data-driven approaches that overlook the possibility of tapping into the human decision-making of the model’s designers and users. We present a human-in-the-loop framework that interacts with domain experts by collecting their feedback on which variables (for a few samples) they consider most relevant to the task at hand. Such information can be modelled via reinforcement learning to derive a per-example feature selection method that tries to minimize the model’s loss function by focusing on the variables most pertinent from a human perspective. We report results on a proof-of-concept image classification dataset and on a real-world risk classification task in which the model successfully incorporated feedback from experts to improve its accuracy.
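    A minimal sketch of the general idea, assuming expert feedback arrives as per-example sets of relevant feature indices that are used to up-weight those features before model training. This simplifies away the reinforcement-learning formulation in the abstract, and all names are illustrative.

```python
import numpy as np

def apply_expert_feedback(X: np.ndarray, feedback: dict[int, set[int]],
                          boost: float = 2.0) -> np.ndarray:
    """Up-weight features that experts flagged as relevant for specific examples.

    feedback maps an example index to the set of feature indices the expert
    considered most relevant for that example.
    """
    X_weighted = X.astype(float)  # copy so the original matrix is untouched
    for example_idx, relevant_features in feedback.items():
        for feature_idx in relevant_features:
            X_weighted[example_idx, feature_idx] *= boost
    return X_weighted

# Hypothetical usage: 5 examples, 4 features, expert feedback on two examples.
X = np.random.rand(5, 4)
feedback = {0: {1, 3}, 2: {0}}
X_weighted = apply_expert_feedback(X, feedback)
```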
