
    How to Solve AI Bias

    Bias in AI affects machine learning and artificial intelligence systems that learn from their training data. While gender discrimination and biased chatbots have recently caught people’s attention and imagination, the overall area of how to correct and manage bias is in its infancy for business use. Further, little is known about how to solve bias in AI, or about its potential for malicious misuse at large scale. We explore this area and propose solutions to this problem.
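    As a concrete illustration of what "bias" can mean in this setting, the following is a minimal sketch, not taken from the paper, of one common fairness measure: the demographic parity difference, i.e. the gap in positive-prediction rates between demographic groups. The predictions and group labels are hypothetical.

```python
# Minimal sketch of the demographic parity difference, one common way
# to quantify bias in a classifier's outputs. Illustrative only; the
# data and group labels below are hypothetical, not from the paper.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rate across groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical binary predictions for members of groups "A" and "B".
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

    A value of 0 would indicate equal positive-prediction rates across groups; larger values indicate a stronger disparity that a bias-correction method would aim to reduce.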

    Guidelines For Pursuing and Revealing Data Abstractions

    Many data abstraction types, such as networks or set relationships, remain unfamiliar to data workers beyond the visualization research community. We conduct a survey and a series of interviews about how people describe their data, either directly or indirectly. We refer to the latter as latent data abstractions. We conduct a Grounded Theory analysis that (1) interprets the extent to which latent data abstractions exist, (2) reveals the far-reaching effects that the interventionist pursuit of such abstractions can have on data workers, (3) describes why and when data workers may resist such explorations, and (4) suggests how to take advantage of opportunities and mitigate risks through transparency about visualization research perspectives and agendas. We then use the themes and codes discovered in the Grounded Theory analysis to develop guidelines for data abstraction in visualization projects. To continue the discussion, we make our dataset open along with a visual interface for further exploration.

    PizzaBlock: Designing Artefacts and Roleplay to Understand Decentralised Identity Management Systems

    This pictorial describes in detail the design, and multiple iterations, of PizzaBlock - a role-playing game and design workshop to introduce non-technical participants to decentralised identity management systems. We have so far played this game with six different audiences, with over one hundred participants - iterating the design of the artefacts and gameplay each time. In this pictorial, we reflect on this RtD project to unpack: a) How we designed artefacts and roleplay to explore decentralised technologies and networks; b) How we communicated the key challenges and parameters of a complex system, through the production of a playable, interactive, analogue representation of that technology; c) How we struck a balance between playful tangible gameplay and high-fidelity technical analogy; and d) How approaches like PizzaBlock invite engagement with complex infrastructures and can support more participatory approaches to their design.

    Designing for Design-after-Design in a Museum Installation


    From Personal Data to Service Innovation – Guiding the Design of New Service Opportunities

    Stimulated by an ongoing digital transformation, companies are gaining a new source for digital service innovation: the use of personal data has the potential to build deeper customer relationships and to develop individualized services. However, methodological support for the systematic application of personal data in innovation processes is still scarce. This paper suggests a comprehensive approach for service design tools that enable collaborative design activities by participants with different data skills to identify new service opportunities. This approach includes the systematic development of customer understanding as well as a process to match customer needs to existing personal data resources. Following a design science research approach, we develop design principles for service design tools and build and evaluate a service opportunity canvas as a first instantiation.

    Trust in AutoML: Exploring Information Needs for Establishing Trust in Automated Machine Learning Systems

    We explore trust in a relatively new area of data science: Automated Machine Learning (AutoML). In AutoML, AI methods are used to generate and optimize machine learning models by automatically engineering features, selecting models, and optimizing hyperparameters. In this paper, we seek to understand what kinds of information influence data scientists' trust in the models produced by AutoML. We operationalize trust as a willingness to deploy a model produced using automated methods. We report results from three studies -- qualitative interviews, a controlled experiment, and a card-sorting task -- to understand the information needs of data scientists for establishing trust in AutoML systems. We find that including transparency features in an AutoML tool increased users' trust in the tool and their understanding of it, and that, of all the proposed features, model performance metrics and visualizations are the most important information for data scientists when establishing trust in an AutoML tool.
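    For readers unfamiliar with AutoML, the following is a minimal sketch of the kind of automation the abstract describes: searching over candidate models and hyperparameters automatically and selecting the best performer. It uses plain scikit-learn as an illustration rather than the AutoML system studied in the paper; the dataset and search space are assumptions chosen for the example.

```python
# Minimal sketch of automated model selection and hyperparameter
# optimization, the two AutoML steps named in the abstract. This is
# an illustration with scikit-learn, not the paper's AutoML system;
# the dataset and search space are example assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate model families, each with a small hyperparameter grid.
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1, 10]}),
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [50, 200], "max_depth": [None, 5]}),
]

best_score, best_model = -1.0, None
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=5)  # hyperparameter optimization
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:       # automated model selection
        best_score, best_model = search.best_score_, search.best_estimator_

# Held-out performance metrics like this are exactly the kind of
# transparency information the study found most important for trust.
print(best_model, best_model.score(X_test, y_test))
```

    Reporting the winning model together with its held-out score mirrors the paper's finding: exposing performance metrics is a key transparency feature for establishing trust in an automatically produced model.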