    ON THE THEORETICAL FOUNDATIONS OF RESEARCH INTO THE UNDERSTANDABILITY OF BUSINESS PROCESS MODELS

    Against the background of the growing significance of Business Process Management (BPM) for Information Systems (IS) research and practice, the field of Business Process Modeling is gaining ever more importance. Business process models support communication about processes as well as their coordination and have become a widely adopted tool in practice. As the understandability of business process models plays a crucial role in communication, a growing number of studies on process model understandability have been conducted in IS research. This article investigates the theories underlying research into business process model understandability by means of an in-depth analysis of 126 systematically retrieved research articles on the topic. It shows to what extent process model understandability research is founded on multiple theories. The identified theories differ regarding the subject matters they address, their coverage, their focus, and their underlying notion of model understanding, which is demonstrated and discussed with examples in this article. Moreover, implications of the findings are discussed, and an outlook is given on future business process model understandability research and on the potential for integrating theories in this field.

    Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

    In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to happen soon in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the previous wave of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose, we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
    Funding: Basque Government; Consolidated Research Group MATHMODE - Department of Education of the Basque Government (IT1294-19); Spanish Government; European Commission (TIN2017-89517-P); BBVA Foundation, Ayudas Fundacion BBVA a Equipos de Investigacion Cientifica 2018 call (DeepSCOP project); European Commission (82561).

    Learning Interpretable Rules for Multi-label Classification

    Multi-label classification (MLC) is a supervised learning problem in which, contrary to standard multiclass classification, an instance can be associated with several class labels simultaneously. In this chapter, we advocate a rule-based approach to multi-label classification. Rule learning algorithms are often employed when one is not only interested in accurate predictions, but also requires an interpretable theory that can be understood, analyzed, and qualitatively evaluated by domain experts. Ideally, by revealing patterns and regularities contained in the data, a rule-based theory yields new insights into the application domain. Recently, several authors have started to investigate how rule-based models can be used for modeling multi-label data. Discussing this task in detail, we highlight some of the problems that make rule learning considerably more challenging for MLC than for conventional classification. While mainly focusing on our own previous work, we also provide a short overview of related work in this area. (Comment: Preprint version. To appear in: Explainable and Interpretable Models in Computer Vision and Machine Learning. The Springer Series on Challenges in Machine Learning. Springer (2018). See http://www.ke.tu-darmstadt.de/bibtex/publications/show/3077 for further information.)
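The core difference from multiclass classification described above, that one instance can receive several labels at once, can be illustrated with a minimal rule-based sketch. The rule format, attribute names, and label names below are invented for illustration and are not taken from the chapter:

```python
# Minimal sketch of rule-based multi-label classification.
# Each rule maps a condition on the instance to one label; an instance
# may satisfy several rules and thus receive several labels at once.

def predict_labels(instance, rules):
    """Return the set of labels whose rule conditions the instance satisfies."""
    return {label for condition, label in rules if condition(instance)}

# Hypothetical rules over a toy news-article domain.
rules = [
    (lambda x: "goal" in x["words"], "sports"),
    (lambda x: "election" in x["words"], "politics"),
    (lambda x: "stadium" in x["words"], "sports"),
    (lambda x: "budget" in x["words"], "economy"),
]

article = {"words": {"goal", "stadium", "budget"}}
print(sorted(predict_labels(article, rules)))  # ['economy', 'sports']
```

The interpretability argument of the abstract shows up directly here: each assigned label can be traced back to the individual rules that fired for it.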

    Peak reduction in decentralised electricity systems : markets and prices for flexible planning

    In contemporary societies, industrial processes as well as domestic activities rely to a large degree on a well-functioning electricity system. This reliance exists both structurally (the system should always be available) and economically (the prices for electricity affect the costs of operating a business and the costs of living). After many decades of stability in engineering principles and related economic paradigms, new developments require us to reconsider how electricity is distributed and paid for. Two well-known examples of important technological developments in this regard are decentralised renewable energy generation (e.g. solar and wind power) and electric vehicles. They promise to be highly useful, for instance because they allow us to decrease our CO2 emissions and our dependence on energy imports. However, a widespread introduction of these (and related) technologies requires significant engineering efforts. In particular, two challenges to the management of electricity systems are of interest to the scope of this dissertation. First, the usage of these technologies has significant effects on how well (part of) supply and demand can be planned ahead of time and balanced in real time. Planning and balancing are important activities in electricity distribution for keeping the number of peaks low (peaks can damage network hardware and lead to high prices). It can become more difficult to plan and balance in future electricity systems, because supply will partly depend on intermittent sunshine and wind patterns, and demand will partly depend on dynamic mobility patterns of electric vehicle drivers. Second, these technologies are often placed in the lower voltage (LV) tiers of the grid in a decentralised manner, as opposed to conventional energy sources, which are located in higher voltage (HV) tiers in central positions. This introduces bi-directional power flows on the grid, and it significantly increases the number of actors in the electricity system whose day-to-day decision making about consumption and generation (e.g. electric vehicles supplying electricity back to the network) has significant impacts on the electricity system.
    In this dissertation, we look into dynamic pricing and markets in order to achieve allocations (of electricity and money) which are acceptable in future electricity systems. Dynamic pricing and markets are concepts that are highly useful to enable efficient allocations of goods between producers and consumers. Currently, they are being used to allocate electricity between wholesale traders. In recent years, the roles of the wholesale producer and the retailer have been unbundled in many countries of the world, which is often referred to as "market liberalisation". This is supposed to increase competition and give end consumers more choice in contracts. Market liberalisation creates opportunities to design markets and dynamic pricing approaches that can tackle the aforementioned challenges in future electricity systems. However, they also introduce new challenges themselves, such as the acceptance of price fluctuations by consumers.
    The research objective of this dissertation is to develop market mechanisms and dynamic pricing strategies which can deal with the challenges mentioned above and achieve acceptable outcomes. To this end, we formulate three major research questions: First, can we design pricing mechanisms for electricity systems that support two necessary features well, which are not complementary, namely to encourage adaptations in electricity consumption and generation on short notice (by participants who have this flexibility), but also to enable planning ahead of electricity consumption and generation (for participants who can make use of planning)? Second, the smart grid vision (among others) posits that in future electricity systems, outcomes will be jointly determined by a large number of (possibly) small actors and allocations will be made more frequently than today. Which pricing mechanisms do not require high computational capabilities from the participants, limit the exposure of small participants to risk and are able to find allocations fast? Third, automated grid protection against peaks is a crucial innovation step for network operators, but a costly infrastructure program. Is it possible for smart devices to combine the objective of protecting network assets (e.g. cables) from overloading with applying buying and selling strategies in a dynamic pricing environment, such that the devices can earn back parts of their own costs?
    In order to answer the research questions, our methods are as follows: We consider four problems which are likely to occur in future electricity systems and are of relevance to our research objective. For each problem, we develop an agent-based model and propose a novel solution. Then, we evaluate our proposed solution using stochastic computational simulations in parameterised scenarios. We thus make the following four contributions:
    In Chapter 3, we design a market mechanism in which both binding commitments and optional reserve capacity are explicitly represented in the bid format, which can facilitate price finding and planning in future electricity systems (and therefore gives answers to our first research question). We also show that in this mechanism, flexible consumers are incentivised to offer reserve capacity ahead of time, which we prove for the case of perfect competition and show in simulations for the case of imperfect competition. We are able to show in a broad range of scenarios that our proposed mechanism has no economic drawbacks for participants. Furthermore (giving answers to our second research question), the mechanism requires less computational capabilities in order to participate in it than a contemporary wholesale electricity market with comparable features for planning ahead.
    In Chapter 4, we consider the complexity of dynamic pricing strategies that retailers could use in future electricity systems (this gives answers to our first, but foremost to our second research question). We argue that two important features of pricing strategies are not complementary, namely power peak reduction and comprehensibility of prices, and we propose indicators for the comprehensibility of a pricing strategy from the perspective of consumers. We thereby add a novel perspective for the design and evaluation of pricing strategies.
    In Chapter 5, we consider dynamic pricing mechanisms where the price is set by a single seller. In particular, we develop pricing strategies for a seller (a retailer) who commits to respect an upper limit on its unit prices (this gives answers to both our first and second research question). Upper price limits reduce the exposure of market participants to price fluctuations. We show that employing the proposed dynamic pricing strategies reduces consumption peaks, although their parameters are simultaneously optimised for the maximisation of retailer profits.
    In Chapter 6, we develop control algorithms for a small storage device which is connected to a low voltage cable. These algorithms can be used to reach decisions about when to charge and when to discharge the storage device, in order to protect the cable from overloading as well as to maximise revenue from buying and selling (this gives answers to our third research question). We are able to show in computational simulations that our proposed strategies perform well when compared to an approximated theoretical lower cost bound. We also demonstrate the positive effects of one of our proposed strategies in a laboratory setup with real-world cable hardware.
    The results obtained in this dissertation advance the state of the art in designing pricing mechanisms and strategies which are useful for many use cases in future decentralised electricity systems. The contributions made can provide two positive effects: First, they are able to avoid or reduce unwanted extreme situations, often related to consumption or production peaks. Second, they are suitable for small actors who do not have much computation power but still need to participate in future electricity systems where fast decision making is needed.
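The kind of charge/discharge decision rule described for Chapter 6 can be sketched as a simple threshold policy. All thresholds, capacities, and load limits below are invented for illustration; the dissertation's actual strategies are more involved than this sketch:

```python
# Toy threshold policy for a storage device on a low-voltage cable:
# charge when the price is low and the cable has headroom, discharge
# when the price is high or the cable approaches its load limit.

def decide(price, cable_load, state_of_charge,
           buy_below=0.10, sell_above=0.25,
           cable_limit=100.0, capacity=10.0, step=1.0):
    """Return the energy to charge (+) or discharge (-) in this interval."""
    # Protect the cable first: discharge to relieve a near-overload.
    if cable_load > 0.9 * cable_limit and state_of_charge >= step:
        return -step
    # Otherwise trade on price: buy cheap, sell dear.
    if (price <= buy_below and state_of_charge + step <= capacity
            and cable_load + step <= cable_limit):
        return step
    if price >= sell_above and state_of_charge >= step:
        return -step
    return 0.0

print(decide(price=0.05, cable_load=40.0, state_of_charge=2.0))  # 1.0 (charge)
print(decide(price=0.30, cable_load=40.0, state_of_charge=2.0))  # -1.0 (sell)
print(decide(price=0.15, cable_load=95.0, state_of_charge=2.0))  # -1.0 (protect cable)
```

Putting cable protection before the trading rules mirrors the abstract's framing: the device combines protecting network assets with buying and selling strategies, with protection taking priority.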

    Computing Competencies for Undergraduate Data Science Curricula: ACM Data Science Task Force

    At the August 2017 ACM Education Council meeting, a task force was formed to explore a process to add to the broad, interdisciplinary conversation on data science, with an articulation of the role of computing discipline-specific contributions to this emerging field. Specifically, the task force would seek to define what the computing/computational contributions are to this new field, and provide guidance on computing-specific competencies in data science for departments offering such programs of study at the undergraduate level. There are many stakeholders in the discussion of data science – these include colleges and universities that (hope to) offer data science programs, employers who hope to hire a workforce with knowledge and experience in data science, as well as individuals and professional societies representing the fields of computing, statistics, machine learning, computational biology, computational social sciences, digital humanities, and others. There is a shared desire to form a broad interdisciplinary definition of data science and to develop curriculum guidance for degree programs in data science. This volume builds upon the important work of other groups who have published guidelines for data science education. There is a need to acknowledge the definition and description of the individual contributions to this interdisciplinary field. For instance, those interested in the business context for these concepts generally use the term “analytics”; in some cases, the abbreviation DSA appears, meaning Data Science and Analytics. This volume is the third draft articulation of computing-focused competencies for data science. It recognizes the inherent interdisciplinarity of data science and situates computing-specific competencies within the broader interdisciplinary space.

    Would Archimedes Shout “Eureka” If He Had Google? Innovating with Search Algorithms

    In this paper we investigate the relationship between algorithmic search tools and the innovation process. Today, search algorithms are used for a vast range of tasks, yet we know little about their impact on the well-studied innovation process. We suggest a theoretical framework based on centripetal and centrifugal forces that conceptualizes the relationship between the algorithmic design logics of search tools and the innovation process. We use it to illustrate the challenges that informational search tools built on design principles of popularity and personalization currently pose for innovation. We propose the need to develop and use exploratory search models and tools for innovation.

    The structure and formation of natural categories

    Categorization and concept formation are critical activities of intelligence. These processes and the conceptual structures that support them raise important issues at the interface of cognitive psychology and artificial intelligence. The work presumes that advances in these and other areas are best facilitated by research methodologies that reward interdisciplinary interaction. In particular, a computational model of concept formation and categorization is described that exploits a rational analysis of basic level effects by Gluck and Corter. Their work provides a clean prescription of human category preferences that is adapted here to the task of concept learning. Their analysis is also extended to account for typicality and fan effects, and we speculate on how the concept formation strategies might be extended to other facets of intelligence, such as problem solving.
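Gluck and Corter's analysis of basic level effects rests on category utility, which rewards categories that make feature values more predictable than they are in the population at large. A minimal sketch of that measure follows; the toy feature probabilities are invented for illustration:

```python
# Category utility (Gluck & Corter): CU(C) = P(C) * (sum_f P(f|C)^2 - sum_f P(f)^2).
# It measures how much knowing the category improves the expected number of
# feature values one can guess correctly, relative to guessing from base rates.

def category_utility(p_category, p_feature_given_category, p_feature):
    """Both probability arguments map feature values to probabilities."""
    within = sum(p * p for p in p_feature_given_category.values())
    baseline = sum(p * p for p in p_feature.values())
    return p_category * (within - baseline)

# Toy example: a "birds" category makes the value flies=True highly predictable.
cu = category_utility(
    p_category=0.5,
    p_feature_given_category={"flies=True": 0.9, "flies=False": 0.1},
    p_feature={"flies=True": 0.5, "flies=False": 0.5},
)
print(round(cu, 3))  # 0.16
```

A concept learner guided by this score prefers categories whose members share predictable feature values, which is exactly the basic-level preference the abstract describes.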

    Towards Intelligent Assistance for a Data Mining Process

    A data mining (DM) process involves multiple stages. A simple, but typical, process might include preprocessing data, applying a data-mining algorithm, and postprocessing the mining results. There are many possible choices for each stage, and only some combinations are valid. Because of the large space and non-trivial interactions, both novices and data-mining specialists need assistance in composing and selecting DM processes. Extending notions developed for statistical expert systems, we present a prototype Intelligent Discovery Assistant (IDA), which provides users with (i) systematic enumerations of valid DM processes, in order that important, potentially fruitful options are not overlooked, and (ii) effective rankings of these valid processes by different criteria, to facilitate the choice of DM processes to execute. We use the prototype to show that an IDA can indeed provide useful enumerations and effective rankings in the context of simple classification processes. We discuss how an IDA could be an important tool for knowledge sharing among a team of data miners. Finally, we illustrate the claims with a comprehensive demonstration of cost-sensitive classification using a more involved process and data from the 1998 KDDCUP competition. (NYU, Stern School of Business, IOMS Department, Center for Digital Economy Research)
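The enumerate-then-rank idea can be sketched with a hypothetical three-stage process space. The stage options, validity constraints, and speed scores below are invented for illustration and are not the IDA's actual ontology:

```python
from itertools import product

# Sketch of an Intelligent Discovery Assistant: enumerate valid
# preprocessing/algorithm/postprocessing combinations, then rank them.

preprocessing = ["none", "discretize", "normalize"]
algorithms = ["decision_tree", "naive_bayes", "svm"]
postprocessing = ["none", "prune_rules"]

def is_valid(pre, algo, post):
    # Invented constraints: this naive_bayes requires discretized input,
    # and rule pruning only applies to a decision tree.
    if algo == "naive_bayes" and pre != "discretize":
        return False
    if post == "prune_rules" and algo != "decision_tree":
        return False
    return True

# Invented relative speed scores, used as one possible ranking criterion.
speed = {"none": 0, "discretize": 1, "normalize": 1,
         "decision_tree": 2, "naive_bayes": 1, "svm": 5, "prune_rules": 1}

processes = [p for p in product(preprocessing, algorithms, postprocessing)
             if is_valid(*p)]
ranked = sorted(processes, key=lambda p: sum(speed[s] for s in p))

print(len(processes))  # number of valid processes out of 18 combinations
print(ranked[0])       # fastest valid process under the toy scores
```

Even this toy space shows the two services the abstract names: the enumeration guarantees no valid option is overlooked, and swapping the ranking key (speed, expected accuracy, cost) reorders the same candidate list by a different criterion.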