
    Technological development and concentration of stock exchanges in Europe

    This paper provides an explanation of technical inefficiencies of financial exchanges in Europe as well as an empirical analysis of their existence and extent. A single-stage stochastic cost frontier approach is employed, which generates exchange inefficiency scores based on a unique unbalanced panel data set for all major European financial exchanges over the period 1985–1999. Overall cost inefficiency scores reveal that European exchanges operate at 20–25% above the efficiency benchmark. The results also affirm that the size of the exchange; market concentration and quality; structural reorganisations of exchange governance; diversification in trading service activities; and adoption of automated trading systems significantly influence the efficient provision of trading services in Europe. Over the sample period, European exchanges notably improved their ability to efficiently manage their production and input resources.

    Keywords: Europe; financial exchanges; panel data; technical efficiency
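
    As a rough illustration of the estimation machinery behind such inefficiency scores (not the paper's richer panel specification), the sketch below fits a cross-sectional stochastic cost frontier with a half-normal inefficiency term by maximum likelihood on simulated data; the single-regressor cost function and all variable names are assumptions.

    ```python
    # Minimal stochastic cost frontier sketch (half-normal inefficiency).
    # Model: ln(cost) = b0 + b1*ln(output) + v + u,
    # with v ~ N(0, s_v^2) noise and u ~ |N(0, s_u^2)| inefficiency (adds to cost).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 200
    ln_q = rng.normal(4.0, 1.0, n)            # log output (e.g. trading volume)
    v = rng.normal(0.0, 0.15, n)              # symmetric noise
    u = np.abs(rng.normal(0.0, 0.25, n))      # one-sided inefficiency
    ln_c = 1.0 + 0.8 * ln_q + v + u           # simulated log cost

    X = np.column_stack([np.ones(n), ln_q])

    def neg_loglik(theta):
        beta, sv, su = theta[:2], np.exp(theta[2]), np.exp(theta[3])
        sigma = np.hypot(sv, su)
        lam = su / sv
        eps = ln_c - X @ beta                 # composed error v + u
        # Cost-frontier density: f(eps) = (2/sigma)*phi(eps/sigma)*Phi(eps*lam/sigma)
        ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
              + norm.logcdf(eps * lam / sigma))
        return -ll.sum()

    res = minimize(neg_loglik, x0=[0.0, 1.0, np.log(0.2), np.log(0.2)], method="BFGS")
    print("beta:", res.x[:2], "sigma_v:", np.exp(res.x[2]), "sigma_u:", np.exp(res.x[3]))
    ```

    Inefficiency scores per exchange would then follow from the estimated composed error, in the spirit of the paper's 20–25% benchmark comparison.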

    Diagnosis in the Olap Context

    The purpose of OLAP (On-Line Analytical Processing) systems is to provide a framework for the analysis of multidimensional data. Many tasks related to analysing multidimensional data and making business decisions are still carried out manually by analysts (e.g. financial analysts, accountants, or business managers). An important and common task in multidimensional analysis is business diagnosis. Diagnosis is defined as finding the “best” explanation of observed symptoms. Today’s OLAP systems offer little support for automated business diagnosis. This functionality can be provided by extending the conventional OLAP system with an explanation formalism, which mimics the work of business decision makers in diagnostic processes. The central goal of this paper is the identification of specific knowledge structures and reasoning methods required to construct computerized explanations from multidimensional data and business models. We propose an algorithm that generates explanations for symptoms in multidimensional business data. The algorithm was tested on a fictitious case study involving the comparison of financial results of a firm’s business units.
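
    A minimal sketch of one step such a diagnosis algorithm might take, assuming a toy data layout with hypothetical column names: explain a symptom (an actual-versus-expected deviation) by ranking the members of one dimension by their contribution, then recurse into the worst offenders.

    ```python
    # One drill-down step of a diagnosis over multidimensional business data.
    # Column names (unit, actual, budget) are hypothetical, not from the paper.
    import pandas as pd

    df = pd.DataFrame({
        "unit":   ["North", "South", "East", "West"],
        "actual": [ 90.0,    130.0,   70.0,   95.0],
        "budget": [100.0,    120.0,  100.0,  100.0],
    })

    def explain_deviation(frame, by, actual="actual", expected="budget"):
        """Rank members of dimension `by` by their contribution to the
        total actual-vs-expected deviation (worst contributors first)."""
        g = frame.groupby(by)[[actual, expected]].sum()
        g["deviation"] = g[actual] - g[expected]
        total = g["deviation"].sum()
        g["share_of_total"] = g["deviation"] / total if total else float("nan")
        return g.sort_values("deviation")

    print(explain_deviation(df, by="unit"))
    # A full diagnosis would recurse into the worst-contributing members along
    # other dimensions (product, period, ...) until a "best" explanation emerges.
    ```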

    Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
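
    A minimal sketch of a pedagogical, subject-centric explanation of the kind described above, assuming a toy dataset and black-box model: fit a small interpretable surrogate to the black box's predictions in a neighbourhood of one query point, learning the model from outside rather than taking it apart.

    ```python
    # Pedagogical, subject-centric explanation sketch: approximate a black box
    # locally around one query with an interpretable tree, using only its
    # predictions. Data, model, and sampling scale are toy stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    query = X[0]
    rng = np.random.default_rng(0)
    # Sample a local region around the query (the 0.3 scale is an assumption).
    local_X = query + rng.normal(0.0, 0.3, size=(500, X.shape[1]))
    local_y = black_box.predict(local_X)          # query the box from outside

    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(local_X, local_y)
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(X.shape[1])]))
    ```

    Because the surrogate never inspects the black box's internals, this style of explanation sidesteps the intellectual-property worries the abstract mentions.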

    Five Privacy Principles (from the GDPR) the United States Should Adopt To Advance Economic Justice

    Algorithmic profiling technologies are impeding the economic security of low-income people in the United States. Based on their digital profiles, low-income people are targeted for predatory marketing campaigns and financial products. At the same time, algorithmic decision-making can result in their exclusion from mainstream employment, housing, financial, health care, and educational opportunities. Government agencies are turning to algorithms to apportion social services, yet these algorithms lack transparency, leaving thousands of people adrift without state support and not knowing why. Marginalized communities are also subject to disproportionately high levels of surveillance, including facial recognition technology and the use of predictive policing software. American privacy law is no bulwark against these profiling harms, instead placing the onus of protecting personal data on individuals while leaving government and businesses largely free to collect, analyze, share, and sell personal data. By contrast, in the European Union, the General Data Protection Regulation (GDPR) gives EU residents numerous, enforceable rights to control their personal data. Spurred in part by the GDPR, Congress is debating whether to adopt comprehensive privacy legislation in the United States. This article contends that the GDPR contains several provisions that have the potential to limit digital discrimination against the poor, while enhancing their economic stability and mobility. The GDPR provides the following: (1) the right to an explanation about automated decision-making; (2) the right not to be subject to decisions based solely on automated profiling; (3) the right to be forgotten; (4) opportunities for public participation in data processing programs; and (5) robust implementation and enforcement tools. The interests of low-income people must be part of privacy lawmaking, and the GDPR is a useful template for thinking about how to meet their data privacy needs.

    Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR

    There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
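
    A minimal sketch of a counterfactual search in this spirit, assuming a toy model and an L1 closeness penalty; the loss weighting and data are illustrative, not the authors' formulation. The model is treated purely as a black box that returns probabilities.

    ```python
    # Counterfactual sketch: find a small input change that flips a
    # classifier's decision, without inspecting the model's internals.
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=4, random_state=1)
    clf = LogisticRegression().fit(X, y)

    x0 = X[0]
    target = 1 - clf.predict([x0])[0]             # the desired (flipped) outcome

    def loss(x_prime, lam=10.0):
        p = clf.predict_proba([x_prime])[0, target]   # black-box query only
        # Trade off reaching the target outcome against staying close to x0.
        return lam * (1.0 - p) ** 2 + np.abs(x_prime - x0).sum()

    res = minimize(loss, x0=x0, method="Nelder-Mead")
    cf = res.x
    print("original decision:     ", clf.predict([x0])[0])
    print("counterfactual decision:", clf.predict([cf])[0])
    print("change needed:         ", np.round(cf - x0, 3))
    ```

    The printed difference is exactly the "smallest change to the world" the abstract describes: a statement of what would need to be different to receive the desired result.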

    Electronic fraud detection in the U.S. Medicaid Healthcare Program: lessons learned from other industries

    It is estimated that between $600 and $850 billion annually is lost to fraud, waste, and abuse in the US healthcare system, with $125 to $175 billion of this due to fraudulent activity (Kelley 2009). Medicaid, a state-run, federally-matched government program which accounts for roughly one-quarter of all healthcare expenses in the US, has been a particularly susceptible target for fraud in recent years. With escalating overall healthcare costs, payers, especially government-run programs, must seek savings throughout the system to maintain reasonable quality of care standards. As such, the need for effective fraud detection and prevention is critical. Electronic fraud detection systems are widely used in the insurance, telecommunications, and financial sectors. What lessons can be learned from these efforts and applied to improve fraud detection in the Medicaid health care program? In this paper, we conduct a systematic literature study to analyze the applicability of existing electronic fraud detection techniques in similar industries to the US Medicaid program.
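
    As one example of the kind of technique used in those other sectors (a sketch, not a method from the paper), claims can be scored for anomaly with an unsupervised isolation forest; the features, magnitudes, and contamination rate below are hypothetical.

    ```python
    # Unsupervised anomaly scoring of claims, a staple of electronic fraud
    # detection in insurance and finance. All feature values are simulated.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Toy claims: [billed_amount, num_procedures, visits_per_month]
    normal = rng.normal([200.0, 2.0, 1.0], [50.0, 1.0, 0.5], size=(980, 3))
    suspect = rng.normal([2000.0, 9.0, 8.0], [300.0, 2.0, 2.0], size=(20, 3))
    claims = np.vstack([normal, suspect])

    model = IsolationForest(contamination=0.02, random_state=0).fit(claims)
    scores = model.decision_function(claims)      # lower = more anomalous
    flagged = np.argsort(scores)[:20]             # route these to human review
    print("flagged claim indices:", flagged)
    ```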

    The Intuitive Appeal of Explainable Machines

    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

    Realtime market microstructure analysis: online Transaction Cost Analysis

    Motivated by the practical challenge of monitoring the performance of a large number of algorithmic trading orders, this paper provides a methodology that leads to automatic discovery of the causes that lie behind poor trading performance. It also gives theoretical foundations to a generic framework for real-time trading analysis. The academic literature provides different ways to formalize these algorithms and show how optimal they can be from a mean-variance, stochastic control, impulse control, or statistical learning viewpoint. This paper is agnostic about the way the algorithm has been built and provides a theoretical formalism to identify in real-time the market conditions that influenced its efficiency or inefficiency. For a given set of characteristics describing the market context, selected by a practitioner, we first show how a set of additional derived explanatory factors, called anomaly detectors, can be created for each market order. We then present an online methodology to quantify how this extended set of factors, at any given time, predicts which of the orders are underperforming, while calculating the predictive power of this explanatory factor set. Armed with this information, which we call influence analysis, we intend to empower the order monitoring user to take appropriate action on any affected orders by re-calibrating the trading algorithms working the order through new parameters, pausing their execution or taking over more direct trading control. We also intend that use of this method in the post-trade analysis of algorithms can be leveraged to automatically adjust their trading action.

    Comment: 33 pages, 12 figures
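
    A minimal sketch of the online flavour of such influence analysis, assuming illustrative factor names and a toy label: incrementally fit a linear model that predicts underperformance from context factors and anomaly detectors, then read off the factor weights as rough influence measures.

    ```python
    # Online influence-analysis sketch: stream snapshots of working orders,
    # update a predictor of underperformance, and inspect factor weights.
    # Factor names and the label rule are illustrative, not the paper's.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    factors = ["spread", "volatility", "imbalance", "momentum_anomaly"]
    clf = SGDClassifier(loss="log_loss", random_state=0)

    rng = np.random.default_rng(0)
    for t in range(100):                          # stream of order snapshots
        X = rng.normal(size=(32, len(factors)))   # factor values per order
        # Toy label: wide-spread, high-volatility orders tend to underperform.
        y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 32) > 1.0).astype(int)
        clf.partial_fit(X, y, classes=[0, 1])     # incremental (online) update

    for name, w in zip(factors, clf.coef_[0]):
        print(f"{name:>18}: {w:+.3f}")
    ```

    In a monitoring deployment, large positive weights would point the order-monitoring user at the market conditions most associated with underperformance.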