    Utilizing Active Machine Learning for Quality Assurance: A Case Study of Virtual Car Renderings in the Automotive Industry

    Computer-generated imagery of car models has become an indispensable part of car manufacturers' advertising concepts. Such renderings are used, for instance, in car configurators that let customers configure their car online according to their personal preferences. However, human-led quality assurance struggles to keep up with high-volume visual inspections as car models grow increasingly complex. Even though the application of machine learning to many visual inspection tasks has demonstrated great success, its need for large labeled data sets remains a central barrier to using such systems in practice. In this paper, we propose an active machine learning-based quality assurance system that requires significantly fewer labeled instances to identify defective virtual car renderings without compromising performance. By employing our system at a German automotive manufacturer, start-up difficulties can be overcome, the inspection process efficiency can be increased, and economic advantages can thus be realized.
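
    The paper's system itself is not public; the following is a minimal sketch of the underlying idea, pool-based active learning with uncertainty sampling, on synthetic data. The classifier, query batch size, and features are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic stand-in for rendering features labeled ok/defective.
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

        labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
        pool = [i for i in range(len(X)) if i not in set(labeled)]

        model = LogisticRegression(max_iter=1000)
        for _ in range(10):
            model.fit(X[labeled], y[labeled])
            # Query the pool instances the current model is least certain about.
            proba = model.predict_proba(X[pool])
            uncertainty = 1.0 - proba.max(axis=1)
            query = [pool[i] for i in np.argsort(-uncertainty)[:10]]
            labeled.extend(query)  # a human inspector would label these
            pool = [i for i in pool if i not in set(query)]

        print(f"labeled {len(labeled)} of {len(X)} instances")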

    Instance Selection Mechanisms for Human-in-the-Loop Systems in Few-Shot Learning

    Business analytics and machine learning have become essential success factors for various industries, with the downside of cost-intensive gathering and labeling of data. Few-shot learning addresses this challenge and reduces data gathering and labeling costs by learning novel classes from very few labeled instances. In this paper, we design a human-in-the-loop (HITL) system for few-shot learning and analyze an extensive range of mechanisms that can be used to acquire human expert knowledge for instances that have an uncertain prediction outcome. We show that the acquisition of human expert knowledge significantly accelerates few-shot model performance at negligible labeling effort. We validate our findings in various experiments on a computer vision benchmark dataset and on real-world datasets. We further demonstrate the cost-effectiveness of HITL systems for few-shot learning. Overall, our work aims to support researchers and practitioners in effectively adapting machine learning models to novel classes at reduced costs.
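
    As a rough illustration of one such selection mechanism (not necessarily one of those studied in the paper), the sketch below applies least-margin sampling to a nearest-prototype few-shot classifier; the embeddings and sizes are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(1)
        n_classes, dim = 5, 32
        prototypes = rng.normal(size=(n_classes, dim))  # one prototype per novel class
        queries = rng.normal(size=(200, dim))           # unlabeled candidate instances

        # Distance from every query to every class prototype.
        dists = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=-1)
        sorted_d = np.sort(dists, axis=1)
        margin = sorted_d[:, 1] - sorted_d[:, 0]  # small margin = uncertain prediction

        # Route the k most ambiguous instances to the human expert for labeling.
        k = 10
        to_human = np.argsort(margin)[:k]
        print("indices escalated to the human expert:", to_human)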

    Designing Resilient AI-based Robo-Advisors: A Prototype for Real Estate Appraisal

    For most people, buying a home is a life-changing decision that involves financial obligations for many years into the future. Therefore, it is crucial to realistically assess the value of a property before making a purchase decision. Recent research has shown that artificial intelligence (AI) has the potential to predict property prices accurately. As a result, more and more AI-based robo-advisors offer real estate valuation advice. However, a recent scandal has shown that automated algorithms are not always reliable. Triggered by the Covid-19 pandemic, one of the largest robo-advisors (Zillow) bought overvalued houses, eventually resulting in the dismissal of 2,000 employees. This demonstrates the current weaknesses of AI-based algorithms in real estate appraisal and highlights the need to troubleshoot AI advice. Therefore, we propose to leverage techniques from the explainable AI (XAI) knowledge base to help humans question AI advice. We derive design principles from the literature and implement them in a configurable real estate valuation artifact. We then evaluate the artifact in two focus groups to confirm the validity of our approach. We contribute to research and practice by deriving design knowledge and instantiating it in a unique artifact.
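
    The artifact itself is not reproduced here; as a sketch of the general XAI idea (surfacing which features drive a valuation so users can question the advice), the snippet below computes permutation feature importance on synthetic stand-in data. The feature names and the model choice are hypothetical.

        from sklearn.datasets import make_regression
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.inspection import permutation_importance

        # Hypothetical real estate features backing a price prediction.
        X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
        features = ["living_area", "rooms", "year_built", "plot_size", "distance_cbd"]

        model = GradientBoostingRegressor(random_state=0).fit(X, y)
        result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

        # Show the valuation drivers so a user can judge the advice's plausibility.
        for name, imp in sorted(zip(features, result.importances_mean),
                                key=lambda t: -t[1]):
            print(f"{name:>12}: {imp:.3f}")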

    A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making

    Research in artificial intelligence (AI)-assisted decision-making is experiencing tremendous growth, with a constantly rising number of studies evaluating the effect of AI, with and without techniques from the field of explainable AI (XAI), on human decision-making performance. However, as tasks and experimental setups vary due to different objectives, some studies report improved user decision-making performance through XAI, while others report only negligible effects. Therefore, in this article, we present an initial synthesis of existing XAI studies using a statistical meta-analysis to derive implications across existing research. We observe a statistically significant positive impact of XAI on users' performance. Additionally, initial results indicate that human-AI decision-making tends to yield better task performance on text data. However, we find no effect of explanations on users' performance compared to sole AI predictions. Our initial synthesis gives rise to future research investigating the underlying causes and contributes to further developing algorithms that effectively benefit human decision-makers by providing meaningful explanations.
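
    For readers unfamiliar with the machinery, a typical random-effects pooling of per-study effect sizes (here the DerSimonian-Laird estimator) looks like the sketch below; the effect sizes and variances are invented for illustration and are not the article's data.

        import numpy as np

        effects = np.array([0.30, 0.10, 0.45, -0.05, 0.20])   # hypothetical study effects
        variances = np.array([0.02, 0.03, 0.05, 0.04, 0.01])  # their sampling variances

        w = 1.0 / variances                        # inverse-variance (fixed-effect) weights
        fixed = np.sum(w * effects) / np.sum(w)

        # Between-study heterogeneity via the DerSimonian-Laird estimator.
        Q = np.sum(w * (effects - fixed) ** 2)
        df = len(effects) - 1
        tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

        w_star = 1.0 / (variances + tau2)          # random-effects weights
        pooled = np.sum(w_star * effects) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        print(f"pooled effect = {pooled:.3f} "
              f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")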

    Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making

    Many important decisions in daily life are made with the help of advisors, e.g., decisions about medical treatments or financial investments. Whereas in the past, advice was often received from human experts, friends, or family, advisors based on artificial intelligence (AI) have become increasingly present nowadays. Typically, the advice generated by an AI is judged by a human and either relied upon or rejected. However, recent work has shown that AI advice is not always beneficial, as humans have been shown to be unable to ignore incorrect AI advice, essentially representing an over-reliance on AI. Therefore, the aspired goal should be to enable humans not to rely on AI advice blindly but rather to distinguish its quality and act upon it to make better decisions. Specifically, this means that humans should rely on the AI in the presence of correct advice and self-rely when confronted with incorrect advice, i.e., establish appropriate reliance (AR) on AI advice on a case-by-case basis. Current research lacks a metric for AR. This prevents a rigorous evaluation of factors impacting AR and hinders further development of human-AI decision-making. Therefore, based on the literature, we derive a measurement concept of AR. We propose to view AR as a two-dimensional construct that measures the ability to discriminate advice quality and behave accordingly. In this article, we derive the measurement concept, illustrate its application, and outline potential future research.
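
    One plausible operationalization of the two dimensions (the article's exact formulation may differ) is sketched below: how often users switch to correct AI advice they initially got wrong, and how often they resist incorrect advice they initially got right. The record format and the toy data are assumptions.

        # Each case: (initial_correct, ai_correct, final_correct).
        cases = [
            (False, True,  True),   # switched to correct advice   -> good reliance
            (False, True,  False),  # ignored correct advice       -> under-reliance
            (True,  False, True),   # resisted incorrect advice    -> good self-reliance
            (True,  False, False),  # followed incorrect advice    -> over-reliance
            (False, True,  True),
        ]

        switch_pool = [c for c in cases if not c[0] and c[1]]   # AI right, user wrong
        resist_pool = [c for c in cases if c[0] and not c[1]]   # AI wrong, user right

        relative_ai_reliance = sum(c[2] for c in switch_pool) / len(switch_pool)
        relative_self_reliance = sum(c[2] for c in resist_pool) / len(resist_pool)
        print(f"reliance on correct advice:        {relative_ai_reliance:.2f}")
        print(f"self-reliance on incorrect advice: {relative_self_reliance:.2f}")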

    On the Influence of Cognitive Styles on Users’ Understanding of Explanations

    Artificial intelligence (AI) is becoming increasingly complex, making it difficult for users to understand how an AI has derived its prediction. Using explainable AI (XAI) methods, researchers aim to explain AI decisions to users. So far, XAI-based explanations have pursued a technology-focused approach, neglecting the influence of users' cognitive abilities and differences in information processing on the understanding of explanations. Hence, this study takes a human-centered perspective and incorporates insights from cognitive psychology. In particular, we draw on the psychological construct of cognitive styles, which describes humans' characteristic modes of processing information. Applying a between-subject experiment design, we investigate how users' rational and intuitive cognitive styles affect their objective and subjective understanding of different types of explanations provided by an AI. Initial results indicate substantial differences in users' understanding depending on their cognitive style. We expect to contribute to a more nuanced view of the interrelation of human factors and XAI design.

    Improving the Efficiency of Human-in-the-Loop Systems: Adding Artificial to Human Experts

    Information systems increasingly leverage artificial intelligence (AI) and machine learning (ML) to generate value from vast amounts of data. However, ML models are imperfect and can generate incorrect classifications. Hence, human-in-the-loop (HITL) extensions to ML models add a human review for instances that are difficult to classify. This study argues that continuously relying on human experts to handle difficult model classifications leads to a strong increase in human effort, which strains limited resources. To address this issue, we propose a hybrid system that creates artificial experts that learn to classify data instances from unknown classes previously reviewed by human experts. Our hybrid system assesses which artificial expert is suitable for classifying an instance from an unknown class and automatically assigns it. Over time, this reduces human effort and increases the efficiency of the system. Our experiments demonstrate that our approach outperforms traditional HITL systems on several image classification benchmarks.
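
    As a rough sketch of the routing idea (not the authors' implementation), the snippet below trains an "artificial expert" on human-reviewed instances and automatically handles only those new instances it classifies with high confidence, escalating the rest to humans. The threshold, model, and data are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                                   n_informative=10, random_state=0)

        # Artificial expert trained on instances previously labeled by humans.
        reviewed = np.arange(300)
        expert = LogisticRegression(max_iter=1000).fit(X[reviewed], y[reviewed])

        THRESHOLD = 0.9   # assumed confidence required for automatic assignment
        auto, manual = 0, 0
        for x in X[300:]:
            proba = expert.predict_proba(x.reshape(1, -1))[0]
            if proba.max() >= THRESHOLD:
                auto += 1      # artificial expert handles the instance
            else:
                manual += 1    # instance is still escalated to a human expert
        print(f"handled automatically: {auto}, escalated to humans: {manual}")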