    The Promise and The Peril: Artificial Intelligence and Employment Discrimination

    Artificial intelligence (“AI”) is undeniably transforming the workplace, though many implications remain unknown. Employers increasingly rely on algorithms to determine who gets interviewed, hired, promoted, developed, disciplined, or fired. If appropriately designed and applied, AI promises to help workers find their most rewarding jobs, match companies with their most valuable and productive employees, and advance diversity, inclusion, and accessibility in the workplace. Notwithstanding these positive impacts, however, AI poses new perils for employment discrimination, especially when designed or used improperly. This Article examines the interaction between AI and federal employment antidiscrimination law. It first explores the legal landscape, including responses at the federal level as well as state, local, and global legislation. Next, it examines a few legislative proposals designed to further regulate AI, along with several non-legislative proposals. In the absence of a comprehensive federal framework, this Article outlines and advances a deregulatory approach to using AI in the context of employment antidiscrimination that will maintain and spur further innovation. Against the backdrop of that deregulatory approach, the Article concludes by discussing best practices to guide employers in using AI for employment decisions.
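
    For illustration, here is a minimal Python sketch of one widely used first-pass test for disparate impact in algorithmic hiring, the EEOC's "four-fifths" rule. The rule itself is established guidance, but the group names and applicant counts below are invented, and the Article prescribes no particular code.

        # Minimal sketch: the EEOC "four-fifths" adverse-impact check, a common
        # first-pass test for disparate impact in algorithmic hiring decisions.
        # Group names and counts below are illustrative, not from the Article.

        def selection_rate(selected: int, applicants: int) -> float:
            """Fraction of a group's applicants that the system selects."""
            return selected / applicants

        def adverse_impact_ratio(rates: dict[str, float]) -> float:
            """Ratio of the lowest group selection rate to the highest.
            Values below 0.8 are conventionally treated as evidence of
            potential disparate impact warranting further review."""
            return min(rates.values()) / max(rates.values())

        rates = {
            "group_a": selection_rate(selected=48, applicants=100),
            "group_b": selection_rate(selected=30, applicants=100),
        }
        ratio = adverse_impact_ratio(rates)
        print(f"adverse impact ratio = {ratio:.2f}",
              "-> review" if ratio < 0.8 else "-> ok")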

    Ethics in the digital workplace

    This publication draws on contributions from each of the national members of the Network of Eurofound Correspondents; for Spain, the contribution was prepared by Alejandro Godino (see the annex on the Network of Eurofound Correspondents). Alternative address: https://www.eurofound.europa.eu/sites/default/files/ef_publication/field_ef_document/ef22038en.pdf. Digitisation and automation technologies, including artificial intelligence (AI), can affect working conditions in a variety of ways, and their use in the workplace raises a host of new ethical concerns. Recently, the policy debate surrounding these concerns has become more prominent and has increasingly focused on AI. This report maps relevant European and national policy and regulatory initiatives. It explores the positions and views of social partners in the policy debate on the implications of technological change for work and employment. It also reviews a growing body of research on the topic showing that the ethical implications go well beyond legal and compliance questions, extending to issues relating to the quality of work. The report aims to provide a good understanding of the ethical implications of digitisation and automation, grounded in evidence-based research.

    Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework

    This paper examines the current landscape of AI regulations, highlighting the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework. The EU, Canada, South Korea, and Brazil follow a horizontal or lateral approach that postulates the homogeneity of AI systems, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.K., Israel, Switzerland, Japan, and China have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. The U.S. is reevaluating its strategy, with growing support for controlling existential risks associated with AI. Addressing such fragmentation of AI regulations is crucial to ensure the interoperability of AI. The present degree of proportionality, granularity, and foreseeability of the EU AI Act is not sufficient to garner consensus. The context-specific approach holds greater promise but requires further development in terms of details, coherency, and commensurability. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework categorizes AI into distinct types based on their usage and interaction with humans: autonomous, allocative, punitive, cognitive, and generative AI. To ensure coherency, each category is assigned specific regulatory objectives: safety for autonomous AI; fairness and explainability for allocative AI; accuracy and explainability for punitive AI; accuracy, robustness, and privacy for cognitive AI; and the mitigation of infringement and misuse for generative AI. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics. In doing so, the framework is expected to foster international collaboration and standardization without imposing excessive compliance costs.
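
    The 3C categorization lends itself to a simple lookup table. The Python sketch below restates the abstract's mapping from AI type to regulatory objectives; the function name and data layout are illustrative choices, not part of the paper.

        # Sketch of the paper's 3C categorization as a lookup table: each AI type
        # is mapped to the regulatory objectives the abstract assigns it, to
        # illustrate how a context-specific rulebook could be dispatched.

        REGULATORY_OBJECTIVES: dict[str, list[str]] = {
            "autonomous": ["safety"],
            "allocative": ["fairness", "explainability"],
            "punitive":   ["accuracy", "explainability"],
            "cognitive":  ["accuracy", "robustness", "privacy"],
            "generative": ["mitigation of infringement and misuse"],
        }

        def objectives_for(ai_type: str) -> list[str]:
            """Return the regulatory objectives for a given AI category."""
            return REGULATORY_OBJECTIVES[ai_type]

        print(objectives_for("allocative"))  # ['fairness', 'explainability']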

    A governance framework for algorithmic accountability and transparency

    Algorithmic systems are increasingly being used as part of decision-making processes in both the public and private sectors, with potentially significant consequences for individuals, organisations and societies as a whole. Algorithmic systems in this context refer to the combination of algorithms, data and the interface process that together determine the outcomes that affect end users. Many types of decisions can be made faster and more efficiently using algorithms. A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large amounts of varied data sets (i.e. big data), which can be paired with machine learning methods in order to infer statistical models directly from the data. The same properties of scale, complexity and autonomous model inference, however, are linked to increasing concerns that many of these systems are opaque to the people affected by their use and lack clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people's human rights (e.g. critical safety decisions in autonomous vehicles or the allocation of health and social service resources). This study develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. Based on a review and analysis of existing proposals for governance of algorithmic systems, four policy options are proposed, each of which addresses a different aspect of algorithmic transparency and accountability: 1. awareness raising: education, watchdogs and whistleblowers; 2. accountability in public-sector use of algorithmic decision-making; 3. regulatory oversight and legal liability; and 4. global coordination for algorithmic governance.
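
    To make the opacity concern concrete, here is a minimal Python sketch (assuming scikit-learn is available) of a statistical model inferred directly from data, together with one simple transparency measure: reporting the global coefficients of an interpretable model. The data and feature names are synthetic and purely illustrative.

        # Hedged sketch: a statistical model inferred directly from data, as the
        # report describes, plus one minimal transparency measure, i.e. a global
        # explanation via the coefficients of an interpretable model. The
        # feature names are invented for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))              # synthetic "big data" features
        y = (X @ np.array([1.5, -2.0, 0.5])
             + rng.normal(size=500) > 0).astype(int)

        model = LogisticRegression().fit(X, y)     # model inferred from the data

        for name, coef in zip(["income", "age", "region_score"], model.coef_[0]):
            print(f"{name:>12}: {coef:+.2f}")      # simple, auditable explanation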

    Algorithmic Fairness in Business Analytics: Directions for Research and Practice

    The extensive adoption of business analytics (BA) has brought financial gains and increased efficiencies. However, these advances have simultaneously drawn attention to rising legal and ethical challenges when BA informs decisions with fairness implications. As a response to these concerns, the emerging study of algorithmic fairness deals with algorithmic outputs that may result in disparate outcomes or other forms of injustice for subgroups of the population, especially those who have been historically marginalized. Fairness is relevant on the basis of legal compliance, social responsibility, and utility; if not adequately and systematically addressed, unfair BA systems may lead to societal harms and may also threaten an organization's own survival, its competitiveness, and overall performance. This paper offers a forward-looking, BA-focused review of algorithmic fairness. We first review the state-of-the-art research on sources and measures of bias, as well as bias mitigation algorithms. We then provide a detailed discussion of the utility-fairness relationship, emphasizing that the frequent assumption of a trade-off between these two constructs is often mistaken or short-sighted. Finally, we chart a path forward by identifying opportunities for business scholars to address impactful, open challenges that are key to the effective and responsible deployment of BA.
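
    As a concrete example of a bias measure, here is a minimal Python sketch of the statistical (demographic) parity difference, one of the common fairness metrics such reviews cover; the predictions and group labels below are invented stand-ins for BA model outputs.

        # Minimal sketch of one common fairness measure from the fairML
        # literature: statistical (demographic) parity difference between two
        # subgroups. Arrays are illustrative stand-ins for BA model outputs.

        import numpy as np

        def statistical_parity_difference(y_pred: np.ndarray,
                                          group: np.ndarray) -> float:
            """Difference in positive-outcome rates between group 1 and group 0.
            A value near 0 indicates parity; large magnitudes flag disparate
            outcomes."""
            rate_g1 = y_pred[group == 1].mean()
            rate_g0 = y_pred[group == 0].mean()
            return float(rate_g1 - rate_g0)

        y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # model's binary decisions
        group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # protected-attribute flag
        print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5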

    Facial Recognition for Preventive Purposes: the Human Rights Implications of Detecting Emotions in Public Spaces

    Police departments are increasingly relying on surveillance technologies to tackle public security issues in smart cities. Automated facial recognition is deployed in public spaces for real-time identification of suspects and warranted individuals. In some cases, law enforcement goes even further by also exploiting emotion recognition technologies. Indeed, in preventive operations, emotion facial recognition (EFR) is used to infer individuals’ inner affective states from traits such as facial muscle movements. In this way, law enforcement aims to obtain insightful hints about unknown persons acting suspiciously in public or strategic venues (e.g. train stations, airports). While the employment of such tools may still seem relegated to dystopian scenarios, it is already a reality in some parts of the world. Hence, a need emerges to explore their compatibility with the European human rights framework. This chapter undertakes that task and examines whether and how EFR can be considered compliant with the rights to privacy and data protection, the freedom of thought and the presumption of innocence.
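
    A heavily hedged Python sketch (assuming scikit-learn) of the inference step the chapter describes: a classifier maps facial-muscle-movement features, such as action-unit intensities, to an emotion label. Everything here, features, labels, and data, is hypothetical and synthetic; real EFR systems are far more complex, and the chapter's point is precisely that their deployment is legally contested.

        # Hypothetical sketch of EFR's inference step: a classifier maps
        # facial-muscle-movement features (e.g. action-unit intensities) to an
        # emotion label. Features, labels, and data are all synthetic.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        EMOTIONS = ["neutral", "anger", "fear"]        # illustrative label set
        rng = np.random.default_rng(1)
        au_features = rng.random((300, 5))             # stand-in AU intensities
        labels = rng.integers(0, len(EMOTIONS), 300)   # synthetic ground truth

        clf = RandomForestClassifier(n_estimators=50,
                                     random_state=0).fit(au_features, labels)
        probe = rng.random((1, 5))                     # one face's feature vector
        print(EMOTIONS[int(clf.predict(probe)[0])])    # inferred affective state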

    The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default

    In recent years, fairness in machine learning (ML) has emerged as a highly active area of research and development. Most work defines fairness in simple terms: reducing gaps in performance or outcomes between demographic groups while preserving as much of the accuracy of the original system as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures suffer from both fairness and performance degradation, or "levelling down," where fairness is achieved by making every group worse off, or by bringing better-performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms, through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. This paper examines the causes and prevalence of levelling down across fairML and explores possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. We propose a first step towards substantive equality in fairML: "levelling up" systems by design through the enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. We likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field and to push future discussion towards opportunities for substantive equality and away from strict egalitarianism by default. N.B. Shortened abstract; see the paper for the full abstract.
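
    The proposed "minimum rate constraints" can be pictured as a floor check rather than a gap check, as in the minimal Python sketch below; the threshold value and group rates are invented for illustration.

        # Sketch of the paper's "minimum rate constraint" idea: instead of
        # equalizing groups by dragging the best-off down, require every group's
        # rate (e.g. true positive rate) to clear a minimum acceptable threshold.
        # The threshold and rates below are illustrative assumptions.

        MIN_ACCEPTABLE_RATE = 0.70   # minimum acceptable harm threshold (assumed)

        def violates_minimum_rate(group_rates: dict[str, float],
                                  floor: float = MIN_ACCEPTABLE_RATE) -> list[str]:
            """Groups whose rate falls below the floor; the remedy is to level
            these groups up, not to lower better-performing groups."""
            return [g for g, r in group_rates.items() if r < floor]

        tpr_by_group = {"group_a": 0.91, "group_b": 0.64, "group_c": 0.78}
        print(violates_minimum_rate(tpr_by_group))  # ['group_b'] -> level up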