
    Credit bureaus between risk-management, creditworthiness assessment and prudential supervision

    "This text may be downloaded for personal research purposes only. Any additional reproduction for other purposes, whether in hard copy or electronically, requires the consent of the author. If cited or quoted, reference should be made to the full name of the author, the title, the working paper or other series, the year, and the publisher."This paper discusses the role and operations of consumer Credit Bureaus in the European Union in the context of the economic theories, policies and law within which they work. Across Europe there is no common practice of sharing the credit data of consumers which can be used for several purposes. Mostly, they are used by the lending industry as a practice of creditworthiness assessment or as a risk-management tool to underwrite borrowing decisions or price risk. However, the type, breath, and depth of information differ greatly from country to country. In some Member States, consumer data are part of a broader information centralisation system for the prudential supervision of banks and the financial system as a whole. Despite EU rules on credit to consumers for the creation of the internal market, the underlying consumer data infrastructure remains fragmented at national level, failing to achieve univocal, common, or defined policy objectives under a harmonised legal framework. Likewise, the establishment of the Banking Union and the prudential supervision of the Euro area demand standardisation and convergence of the data used to measure debt levels, arrears, and delinquencies. The many functions and usages of credit data suggest that the policy goals to be achieved should inform the legal and institutional framework of Credit Bureaus, as well as the design and use of the databases. This is also because fundamental rights and consumer protection concerns arise from the sharing of credit data and their expanding use

    Local Rule-Based Explanations of Black Box Decision Systems

    Recent years have witnessed the rise of accurate but obscure decision systems that hide the logic of their internal decision processes from users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. It then derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that would lead to a different outcome. Extensive experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy of mimicking the black box.
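    The pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes scikit-learn, uses Gaussian perturbation as a simplified stand-in for the paper's genetic-algorithm neighborhood generation, and runs on synthetic data with placeholder feature names (f0, f1, ...).

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Black box: any fitted model exposing .predict; a random forest on
    # synthetic data stands in here.
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    def leaf_paths(tree):
        """Enumerate (leaf_id, [conditions]) pairs by walking the fitted tree."""
        t, paths = tree.tree_, []
        def walk(node, conds):
            if t.feature[node] < 0:  # negative feature index marks a leaf
                paths.append((node, conds))
                return
            f, thr = t.feature[node], t.threshold[node]
            walk(t.children_left[node], conds + [f"f{f} <= {thr:.2f}"])
            walk(t.children_right[node], conds + [f"f{f} > {thr:.2f}"])
        walk(0, [])
        return paths

    def explain(x, n_samples=1000, scale=0.3, seed=0):
        rng = np.random.default_rng(seed)
        # 1. Synthetic neighborhood around x (the paper balances it with a
        #    genetic algorithm; Gaussian noise is a simplified stand-in).
        Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
        # 2. Local interpretable predictor fitted on the black-box labels.
        surrogate = DecisionTreeClassifier(max_depth=3).fit(Z, black_box.predict(Z))
        t = surrogate.tree_
        x_leaf = surrogate.apply(x.reshape(1, -1))[0]
        pred = t.value[x_leaf].argmax()
        # 3. Decision rule: the path covering x; counterfactual rules:
        #    paths to leaves that predict a different class.
        decision, counterfactuals = None, []
        for leaf, conds in leaf_paths(surrogate):
            if leaf == x_leaf:
                decision = conds
            elif t.value[leaf].argmax() != pred:
                counterfactuals.append(conds)
        return pred, decision, counterfactuals

    x = X[0]
    pred, rule, cf = explain(x)
    print("surrogate outcome:", pred)
    print("decision rule:   ", " AND ".join(rule))
    print("counterfactual:  ", " AND ".join(cf[0]) if cf else "none")
    ```

    The depth of the surrogate tree bounds the rule length; keeping it shallow is what keeps the decision and counterfactual rules human-readable.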

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective we take stock of the impact and legal consequences of these technical advances and point out future directions of research.
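    For readers unfamiliar with the retrieval side surveyed here, the core of a content-based search engine can be sketched generically: each media item is mapped to a fixed-length feature vector (descriptor), and queries are answered by nearest-neighbour search over the indexed vectors. The sketch below is a generic illustration, not material from the deliverable; the random vectors and brute-force cosine scan are stand-ins for real descriptors and index structures.

    ```python
    import numpy as np

    # Index: one descriptor per media item; random vectors stand in for
    # real audio/visual features.
    rng = np.random.default_rng(0)
    index = rng.normal(size=(10_000, 128))
    index /= np.linalg.norm(index, axis=1, keepdims=True)  # pre-normalise

    def search(query, k=5):
        """Return the ids of the k items most similar to the query descriptor."""
        q = query / np.linalg.norm(query)
        scores = index @ q  # cosine similarity against every indexed item
        return np.argsort(scores)[::-1][:k]

    print(search(rng.normal(size=128)))
    ```

    At the collection sizes discussed in the report, the brute-force scan would be replaced by an approximate nearest-neighbour index.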

    Algorithms & Fiduciaries: Existing and Proposed Regulatory Approaches to Artificially Intelligent Financial Planners

    Artificial intelligence is no longer solely in the realm of science fiction. Today, basic forms of machine learning algorithms are commonly used by a variety of companies, and advanced forms of machine learning are increasingly making their way into the consumer sphere, where they promise to optimize existing markets. For financial advising, machine learning algorithms promise to make advice available 24/7 and to significantly reduce costs, thereby opening the market for financial advice to lower-income individuals. However, the use of machine learning algorithms also raises concerns: whether these algorithms can meet the existing fiduciary standard imposed on human financial advisers, and how responsibility and liability should be apportioned when an autonomous algorithm falls short of that standard and harms a client. After summarizing the applicable law regulating investment advisers and the current state of robo-advising, this Note evaluates whether robo-advisers can meet the fiduciary standard and proposes alternate liability schemes for dealing with increasingly sophisticated machine learning algorithms.

    Data analytics 2016: proceedings of the fifth international conference on data analytics


    CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design, composed of academics, industry players from various sectors, policymakers and civil society. It is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need to share information on threats and to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.

    As part of these activities, this report assesses the Ethics Guidelines for Trustworthy AI presented by the High-Level Expert Group (HLEG) on AI on April 8, 2019. In particular, it analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020; this report contributes to that revision by addressing in particular the interplay between AI and cybersecurity.

    The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough to be clearly and easily measured and implemented by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, with more than 450 stakeholders signed up and contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).