8 research outputs found

    Effectiveness of Example-Based Explanations to Improve Human Decision Quality in Machine Learning Forecasting Systems

    Get PDF
    Algorithmic forecasts outperform human forecasts by 10% on average. State-of-the-art machine learning (ML) algorithms have further widened this gap. Sales forecasting is critical to a company's profitability because a variety of other business activities depend on it. However, individuals are hesitant to use ML forecasts. Explainable artificial intelligence (XAI) can help overcome this algorithm aversion by making ML systems more comprehensible through explanations. However, current XAI techniques are incomprehensible to laymen, as they impose too much cognitive load. We address this research gap by investigating the effectiveness, in terms of forecast accuracy, of two example-based explanation approaches. We conduct an online experiment based on a two-by-two between-subjects design with factual and counterfactual examples as experimental factors. A control group has access to ML predictions but not to explanations. We report the results of this study: while factual explanations significantly improved participants' decision quality, counterfactual explanations did not.
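    The factual/counterfactual distinction the abstract draws can be illustrated with a minimal nearest-neighbor sketch (this is not the paper's implementation; the function names and toy data are invented for illustration): a factual example is the most similar past case that agrees with the model's prediction, while a counterfactual example is the most similar past case with a different outcome.

```python
import numpy as np

def nearest_example(query, X, mask):
    """Index of the row of X, restricted to mask, closest to query."""
    idx = np.flatnonzero(mask)
    dists = np.linalg.norm(X[idx] - query, axis=1)
    return idx[np.argmin(dists)]

# Toy past cases: two features per day, binary label (1 = high demand).
X = np.array([[10.0, 1.0], [12.0, 0.0], [30.0, 1.0], [28.0, 0.0]])
y = np.array([0, 0, 1, 1])

query = np.array([11.0, 0.5])
pred = 0  # hypothetical model prediction for the query day

factual = nearest_example(query, X, y == pred)         # supports the prediction
counterfactual = nearest_example(query, X, y != pred)  # contrasts with it
```

    Showing the user both the supporting case and the nearest contrasting case is one common way to instantiate example-based explanations.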

    Managing Bias in Machine Learning Projects

    Get PDF

    Conceptual Foundations on Debiasing for Machine Learning-Based Software

    Get PDF
    Machine learning (ML)-based software's deployment has raised serious concerns about the pervasive and harmful consequences that bias inflicts on users, business, and society. While approaches to address bias are increasingly recognized and developed, our understanding of debiasing remains nascent. Research has yet to provide comprehensive coverage of this vast, growing field, much of which is not embedded in theoretical understanding. Conceptualizing and structuring the nature, effect, and implementation of debiasing instruments could provide necessary guidance for practitioners investing in debiasing efforts. We develop a taxonomy that classifies debiasing-instrument characteristics along seven key dimensions. We evaluate and refine our taxonomy with nine experts and apply it to three actual debiasing instruments, drawing lessons for the design and choice of appropriate instruments. Bridging the gaps between our conceptual understanding of debiasing for ML-based software and its organizational implementation, we discuss contributions and future research.

    Explanation Interfaces for Sales Forecasting

    Get PDF
    Algorithmic forecasts outperform human forecasts in many tasks. State-of-the-art machine learning (ML) algorithms have further widened that gap. Since sales forecasting plays a key role in business profitability, ML-based sales forecasting can offer significant advantages. However, individuals are reluctant to use algorithmic forecasts. To overcome this algorithm aversion, explainable AI (XAI), where an explanation interface (XI) provides model predictions and explanations to the user, can help. However, current XAI techniques are incomprehensible to laymen. Despite the economic relevance of sales forecasting, there has been no significant research effort toward helping non-expert users make better decisions with ML forecasting systems by designing appropriate XIs. We address this research gap by designing a model-agnostic XI for laymen. We propose a design theory for XIs, instantiate our theory, and report initial formative evaluation results. A real-world evaluation context is used: a medium-sized Swiss bakery chain provides past sales data and human forecasts.

    Exploring the Synergies in Human-AI Hybrids: A Longitudinal Analysis in Sales Forecasting

    No full text
    Despite the promised potential of artificial intelligence (AI), insights into real-life human-AI hybrids and their dynamics remain obscure. Based on digital trace data of over 1.4 million forecasting decisions over a 69-month period, we study how the introduction of an AI sales forecasting system in a bakery enterprise affected decision-makers' overriding of the AI system and the resulting hybrid performance. Decision-makers quickly came to rely on AI forecasts, leading to lower forecast errors. Overall, human intervention deteriorated forecasting performance, as overriding resulted in greater forecast error. The results confirm the notion that AI systems outperform humans in forecasting tasks. However, they also indicate previously neglected, domain-specific implications: as the AI system aimed to reduce forecast error and thus overproduction, forecast quantities decreased over time, and with them sales. We conclude that minimal forecast errors do not inevitably yield optimal business outcomes when detrimental human factors in decision-making are ignored.

    Neural Network Hyperparameter Optimization for the Assisted Selection of Assembly Equipment

    No full text
    The design of assembly systems has mainly been a manual task, including activities such as gathering and analyzing product data, deriving the production process, and assigning suitable manufacturing resources. Especially in the early phases of assembly system design in the automotive industry, complexity reaches a substantial level, driven by the increasing number of product variants and the decreasing time to market. To mitigate these challenges, researchers are continuously developing novel methods to support the design of assembly systems. This paper presents an artificial intelligence system for assisting production engineers in the selection of suitable equipment for highly automated assembly systems.
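    As a rough illustration of the hyperparameter optimization named in the title (the paper's actual method is not detailed in this abstract; the search space and objective below are invented placeholders), a simple random search samples configurations and keeps the best-scoring one:

```python
import random

# Hypothetical search space for a small neural network.
SEARCH_SPACE = {
    "hidden_units": [16, 32, 64, 128],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.0, 0.25, 0.5],
}

def evaluate(config):
    # Stand-in for training a network and returning its validation loss;
    # here a synthetic objective whose minimum lies at (64, 1e-3, 0.0).
    return (((config["hidden_units"] - 64) / 64) ** 2
            + (config["learning_rate"] - 1e-3) ** 2
            + config["dropout"] ** 2)

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_config, best_loss = None, float("inf")
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        loss = evaluate(config)
        if loss < best_loss:
            best_config, best_loss = config, loss
    return best_config, best_loss

best, loss = random_search()
```

    More sample-efficient strategies (grid search, Bayesian optimization, Hyperband) follow the same loop structure with a smarter sampling policy.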

    Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods

    No full text
    Over the last decade, the importance of machine learning has increased dramatically in business and marketing. However, when machine learning is used for decision-making, bias rooted in unrepresentative datasets, inadequate models, weak algorithm designs, or human stereotypes can lead to low performance and unfair decisions, resulting in financial, social, and reputational losses. This paper offers a systematic, interdisciplinary literature review of machine learning biases as well as methods to avoid and mitigate them. We identified eight distinct machine learning biases, mapped these biases onto the cross-industry standard process for data mining (CRISP-DM) to account for all phases of machine learning projects, and outlined twenty-four mitigation methods. We further contextualize these biases in a real-world case study and illustrate adequate mitigation strategies. These insights synthesize the literature on machine learning biases in a concise manner and point to the importance of human judgment for machine learning algorithms.
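    One of the best-known preprocessing mitigation methods in this literature is reweighing, which assigns each training example a weight so that a protected attribute and the label become statistically independent in the weighted data. A minimal sketch (the function and toy data are illustrative, not taken from the paper):

```python
from collections import Counter

def reweigh(attrs, labels):
    """Per-example weights w(a, y) = P(a) * P(y) / P(a, y)."""
    n = len(labels)
    p_a = Counter(attrs)
    p_y = Counter(labels)
    p_ay = Counter(zip(attrs, labels))
    return [
        (p_a[a] / n) * (p_y[y] / n) / (p_ay[(a, y)] / n)
        for a, y in zip(attrs, labels)
    ]

# Toy data: group "m" receives the positive label more often than "f",
# so over-represented pairs are down-weighted and the rest up-weighted.
attrs = ["m", "m", "m", "f", "f", "f"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(attrs, labels)  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

    A classifier trained with these sample weights sees both groups' label distributions as balanced, which is one concrete way a mitigation method from such a classification can be operationalized.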