
    HUK-COBURG: The Implementation of an AI-Enabled Behavioural Insurance Business Model using Geo-Spatial Data

    Automotive insurance is undergoing a digital transformation that exploits new forms of big data and Artificial Intelligence (AI) systems. Geo-spatial data from GPS and telematics systems enables innovative risk modelling to evaluate driver behaviour and leads to the creation of new insurance services and novel insurance business models. A research framework is proposed to analyse AI-enabled business models and applied in a detailed case analysis of behavioural insurance at HUK-COBURG. The results illustrate the application of geo-spatial data in an insurance context and demonstrate the utility of the research framework for analysing new AI-enabled business models. The analysis identifies important implementation issues and shows that strategic logic and the regulatory and ethical context are important elements of business models. The empirical analysis reveals the strategic properties and effects of the data flywheel concept, which has general applicability. The theory framework and empirical results have important implications for other markets and theoretical contexts.
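
    The abstract does not disclose HUK-COBURG's actual risk model. As a purely hypothetical illustration of the underlying idea, the Python sketch below scores driver behaviour from telematics records and feeds the score into a premium adjustment; all field names, weights, and the pricing rule are assumptions for illustration only.

```python
# Hypothetical sketch: scoring driver behaviour from telematics/GPS records
# and adjusting a base premium. Fields, weights, and the pricing rule are
# illustrative assumptions, not taken from the HUK-COBURG case.
from dataclasses import dataclass
from typing import List


@dataclass
class TelematicsRecord:
    speed_kmh: float          # GPS-derived speed
    speed_limit_kmh: float    # speed limit at the recorded position
    hard_brake: bool          # harsh-braking event flag
    night_drive: bool         # segment driven between 23:00 and 05:00


def behaviour_score(records: List[TelematicsRecord]) -> float:
    """Return a score in [0, 1]; higher means riskier observed behaviour."""
    if not records:
        return 0.0
    n = len(records)
    speeding = sum(r.speed_kmh > 1.1 * r.speed_limit_kmh for r in records)
    braking = sum(r.hard_brake for r in records)
    night = sum(r.night_drive for r in records)
    # Weighted share of risky events per record (weights are arbitrary).
    return min(0.5 * speeding / n + 0.3 * braking / n + 0.2 * night / n, 1.0)


def adjusted_premium(base: float, score: float, max_discount: float = 0.3) -> float:
    """Grant up to max_discount for low-risk behaviour (illustrative rule)."""
    return base * (1.0 - max_discount * (1.0 - score))
```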

    How companies succeed in developing ethical artificial intelligence (AI)

    The rapid advancement of artificial intelligence (AI) has the potential to bring great benefits to society, but it also raises important ethical and moral questions. To ensure that AI systems are developed and deployed in a responsible and ethical manner, companies must consider a number of factors, including fairness, accountability, transparency, privacy, and consistency with human values. This essay provides an overview of the key considerations for building an ethical AI system and briefly discusses the challenges, including the importance of developing AI systems with a clear understanding of their potential impact on society and taking steps to mitigate any potential negative consequences. The essay also highlights the need for continuous monitoring and evaluation of AI systems and outlines a strategy, namely an enterprise-wide "Ethics Sheet for AI tasks", to ensure that AI systems are used in an ethical and responsible manner within the company. Ultimately, building an ethical AI system requires a commitment to transparency, accountability, and a clear understanding of the ethical and moral implications of AI technology; the company must also be aware of the long-term consequences of using an unethical or morally questionable AI system.

    The Management of Direct Material Cost During New Product Development: A Case Study on the Application of Big Data, Machine Learning, and Target Costing

    This dissertation investigates the application of big data, machine learning, and the target costing approach for managing costs during new product development in the context of high product complexity and uncertainty. A longitudinal case study at a German car manufacturer is conducted to examine the topic. First, we conduct a systematic literature review, which analyzes use cases, issues, and benefits of big data and machine learning technology for application in management accounting. Our review contributes to the literature by providing an overview of the specific aspects of both technologies that can be applied in managerial accounting. Further, we identify the specific issues and benefits of both technologies in the context of management accounting. Second, we present a case study on the applicability of machine learning and big data technology for product cost estimation, focusing on the material costs of passenger cars. Our case study contributes to the literature by providing a novel approach that increases the predictive accuracy of cost estimates for subsequent product generations. We show that predictive accuracy is significantly higher when using big data sets, and we find that machine learning can outperform cost estimates from cost experts, or at least produce comparable results, even when dealing with highly complex products. Third, we conduct an experimental study to investigate the trade-off between accuracy (predictive performance) and explainability (transparency and interpretability) of machine learning models in the context of product cost estimation. We empirically confirm the often-implied inverse relationship between both attributes from the perspective of cost experts. Further, we show that the relative importance of explainability to accuracy perceived by cost experts matters when selecting between alternative machine learning models. We then present four factors that significantly determine the perceived relative importance of explainability to accuracy. Fourth, we present a proprietary archival study to investigate the target costing approach in a complex product development context, which is characterized by product design interdependence and uncertainty about target cost difficulty. Based on archival company data, we find that target cost difficulty is associated with greater cost reduction performance during product development, thereby complementing results from earlier studies that are based on experiments. Further, we demonstrate that in a complex product development context, product design interdependence and uncertainty about target cost difficulty may both limit the effectiveness of target costing.
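
    The dissertation's actual data and modelling pipeline are not reproduced in the abstract. The sketch below only illustrates, on synthetic data, the general pattern of machine-learning-based cost estimation compared against a simple baseline; the features, model choice (a scikit-learn gradient boosting regressor), and baseline are assumptions for illustration.

```python
# Illustrative sketch of machine-learning-based product cost estimation.
# Features, data, and model choice are assumptions; the dissertation's
# actual pipeline is not reproduced here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical component attributes of a car part (e.g., weight, material
# index, number of variants) and its direct material cost.
X = rng.normal(size=(n, 3))
cost = 120 + 40 * X[:, 0] + 25 * X[:, 1] ** 2 + 10 * X[:, 2] + rng.normal(scale=5, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, cost, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

# Compare the model against a naive baseline (the training mean), mirroring
# only the structure of the accuracy comparison described in the study.
baseline = np.full_like(y_test, y_train.mean())
print("model MAPE:   ", mean_absolute_percentage_error(y_test, pred))
print("baseline MAPE:", mean_absolute_percentage_error(y_test, baseline))
```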

    Towards AI Standards Whitepaper: Thought-leadership in AI legal, ethical and safety specifications through experimentation

    With the rapid adoption of algorithms in business and society, there is a growing concern to safeguard the public interest. Researchers, policy-makers and industry representatives sharing this view convened to collectively identify future areas of focus to advance AI standards, in particular the acute need to ensure that standard suggestions are practical and empirically informed. This discussion occurred in the context of the creation of a lab at UCL with these concerns in mind (currently dubbed UCL The Algorithms Standards and Technology Lab). Via a series of panels with the main stakeholders, three themes emerged: (i) building public trust, (ii) accountability and operationalisation, and (iii) experimentation. To advance these themes, lab activities will fall under three streams: experimentation, community building and communication. The Lab's mission is to provide thought-leadership in AI standards through experimentation.

    Model Reporting for Certifiable AI: A Proposal from Merging EU Regulation into AI Development

    Despite considerable progress in Explainable and Safe AI, practitioners suffer from a lack of regulation and standards for AI safety. In this work we merge recent regulation efforts by the European Union and first proposals for AI guidelines with recent trends in research: data and model cards. We propose the use of standardized cards to document AI applications throughout the development process. Our main contribution is the introduction of use-case and operation cards, along with updates to data and model cards to cope with regulatory requirements. We reference both recent research and the source of the regulation in our cards, and provide references to additional support material and toolboxes whenever possible. The goal is to design cards that help practitioners develop safe AI systems throughout the development process, while enabling efficient third-party auditing of AI applications, being easy to understand, and building trust in the system. Our work incorporates insights from interviews with certification experts as well as developers and individuals working with the developed AI applications.
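
    The card templates proposed in the paper are not reproduced here. As a rough sketch of the general idea of machine-readable documentation cards, the example below defines a minimal model card structure and serialises it for versioning and auditing; the fields shown are assumptions, not the ones required by the proposal or by EU regulation.

```python
# Minimal sketch of a machine-readable "card" documenting an AI application,
# loosely inspired by the data/model/use-case/operation cards discussed above.
# The fields are illustrative assumptions, not the proposal's actual schema.
from dataclasses import dataclass, field, asdict
from typing import List
import json


@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    evaluation_metrics: dict
    known_limitations: List[str] = field(default_factory=list)
    risk_class: str = "unspecified"   # e.g., a self-assessed risk category


card = ModelCard(
    name="demand-forecaster-v2",
    intended_use="Internal sales forecasting; not for automated decisions about individuals.",
    training_data="Anonymised order history, 2018-2023.",
    evaluation_metrics={"MAPE": 0.12},
    known_limitations=["Accuracy degrades for newly introduced products."],
    risk_class="limited",
)

# Serialising the card makes it easy to version alongside the model and to
# hand over to a third-party auditor.
print(json.dumps(asdict(card), indent=2))
```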

    Machine learning and deep learning

    Today, intelligent systems that offer artificial intelligence capabilities often rely on machine learning. Machine learning describes the capacity of systems to learn from problem-specific training data to automate the process of analytical model building and solve associated tasks. Deep learning is a machine learning concept based on artificial neural networks. For many applications, deep learning models outperform shallow machine learning models and traditional data analysis approaches. In this article, we summarize the fundamentals of machine learning and deep learning to generate a broader understanding of the methodical underpinning of current intelligent systems. In particular, we provide a conceptual distinction between relevant terms and concepts, explain the process of automated analytical model building through machine learning and deep learning, and discuss the challenges that arise when implementing such intelligent systems in the field of electronic markets and networked business. These naturally go beyond technological aspects and highlight issues in human-machine interaction and artificial intelligence servitization. (Published online first in Electronic Markets.)
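
    To make the conceptual distinction concrete, the sketch below trains a shallow model (logistic regression) and a small multi-layer neural network on the same synthetic task using scikit-learn; the dataset and settings are arbitrary illustrative choices, not taken from the article.

```python
# Conceptual sketch contrasting a shallow machine learning model with a
# small neural network on the same task. Dataset and settings are
# arbitrary illustrative choices.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=1000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Shallow model: fits a linear decision boundary directly on the input features.
shallow = LogisticRegression().fit(X_train, y_train)

# "Deep" model (here just a small multi-layer network): learns intermediate
# representations through stacked hidden layers.
deep = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", shallow.score(X_test, y_test))
print("neural network accuracy:     ", deep.score(X_test, y_test))
```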

    Fostering Effective Human-AI Collaboration: Bridging the Gap Between User-Centric Design and Ethical Implementation

    The synergy between humans and artificial intelligence (AI) systems has become pivotal in contemporary technological landscapes. This research paper delves into the multifaceted domain of Human-AI collaboration, aiming to decipher the intricate interplay between user-centric design and ethical implementation. As AI systems continue to permeate various facets of society, the significance of seamless interaction and ethical considerations has emerged as a critical axis for exploration. This study critically examines the pivotal components of successful Human-AI collaboration, emphasizing the importance of user experience design that prioritizes intuitive interfaces and transparent interactions. Furthermore, ethical implications encompassing privacy, fairness, bias mitigation, and accountability in AI decision-making are thoroughly investigated, emphasizing the imperative need for responsible AI deployment. The paper presents an analysis of diverse scenarios where Human-AI collaboration manifests, elucidating the impact on sectors such as education, healthcare, workforce augmentation, and problem-solving domains. Insights into the cognitive augmentation offered by AI systems and the consequential implications for human decision-making processes are also probed, offering a comprehensive understanding of collaborative problem-solving and decision support mechanisms. Through an integrative approach merging user-centric design philosophies and ethical frameworks, this research advocates for a paradigm shift in AI development. It underscores the necessity of incorporating user feedback, participatory design methodologies, and transparent ethical guidelines into the development life cycle of AI systems. Ultimately, the paper proposes a roadmap towards fostering a symbiotic relationship between humans and AI, building trust, reliability, and enhanced performance in collaborative endeavors. This abstract outlines the scope, key areas of investigation, and proposed outcomes of a research paper centered on Human-AI collaboration, providing a glimpse into the depth and breadth of the study.