
    Fairness concerns in digital right management models

    Digital piracy threatens the global multimedia content industry, and coercive, blindly applied Digital Rights Management (DRM) policies do little more than legitimise that piracy. This paper presents a new software and hardware infrastructure aimed at reconciling the content providers' and consumers' points of view by enabling the development of fair business models, i.e., models that preserve the interests of both parties. The solution is based on tamper-resistant devices (smart cards) that securely store sensitive data (e.g., personal consumer data, or data expressing the terms of a B2C contract or licence) and perform the computation required by contract/licence activation. In other words, smart cards can be seen as tamper-resistant Service Level Agreement (SLA) enablers.
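
    One way to picture the on-card check the abstract describes is the minimal Python sketch below. The SmartCardLicence class, its fields, and the expiry/quota terms are hypothetical illustrations of a contract check running inside a tamper-resistant device, not the paper's actual on-card design.

        from datetime import date

        class SmartCardLicence:
            """Simulates a smart card holding the terms of a B2C licence."""
            def __init__(self, content_key: bytes, expiry: date, max_plays: int):
                self._content_key = content_key   # never leaves the card in the clear
                self._expiry = expiry             # contract term: last valid day
                self._plays_left = max_plays      # contract term: usage quota

            def activate(self, today: date) -> bytes:
                """Run the contract check on-card; release the key only if it passes."""
                if today > self._expiry:
                    raise PermissionError("licence expired")
                if self._plays_left <= 0:
                    raise PermissionError("usage quota exhausted")
                self._plays_left -= 1             # state update stays on the card
                return self._content_key

        card = SmartCardLicence(b"\x13" * 16, expiry=date(2026, 1, 1), max_plays=3)
        key = card.activate(date(2025, 6, 1))     # succeeds while the terms are met

    The point of the design is that both the sensitive state (key, quota) and the check itself live on the tamper-resistant device, so neither party can unilaterally rewrite the terms.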

    AI management: an exploratory survey of the influence of GDPR and FAT principles

    As organisations increasingly adopt AI technologies, a number of ethical issues arise. Much research focuses on algorithmic bias, but there are other important concerns arising from new uses of data and from the introduction of technologies that may impact individuals. This paper examines the interplay between AI, Data Protection and FAT (Fairness, Accountability and Transparency) principles. We review the potential impact of the GDPR and consider the importance of managing AI adoption. A survey of data protection experts is presented, the initial analysis of which provides early insights into the praxis of AI in operational contexts. The findings indicate that organisations are not fully compliant with the GDPR, and that there is limited understanding of the relevance of FAT principles as AI is introduced. Organisations which demonstrate greater GDPR compliance are likely to take a more cautious, risk-based approach to the introduction of AI.

    Logics and practices of transparency and opacity in real-world applications of public sector machine learning

    Machine learning systems are increasingly used to support public sector decision-making across a variety of sectors. Given concerns around accountability in these domains, and amidst accusations of intentional or unintentional bias, there have been increased calls for transparency of these technologies. Few, however, have considered how logics and practices concerning transparency are understood by those involved in the machine learning systems already being piloted and deployed in public bodies today. This short paper distils insights about transparency on the ground from interviews with 27 such actors, largely public servants and relevant contractors, across 5 OECD countries. Considering transparency and opacity in relation to trust and buy-in, better decision-making, and the avoidance of gaming, it seeks to provide insights for those hoping to develop socio-technical approaches to transparency that will be useful to practitioners on the ground. An extended, archival version of this paper is available as Veale M., Van Kleek M., & Binns R. (2018), "Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making", Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18), http://doi.org/10.1145/3173574.3174014. This paper was presented as a talk at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017), Halifax, Canada, August 14, 2017.

    Water Quality Trading and Agricultural Nonpoint Source Pollution: An Analysis of the Effectiveness and Fairness of EPA's Policy on Water Quality Trading

    Water quality problems continue to plague our nation, even though Congress passed the Clean Water Act (CWA) to "restore and maintain the chemical, physical, and biological integrity of the Nation's waters" more than three decades ago. During the past thirty years, the dominant sources of water pollution have changed, requiring us to seek new approaches to cleaning up our waters. Water quality trading has been heralded as an approach that can integrate market mechanisms into the effort to clean up our water. This Article examines the Environmental Protection Agency's (EPA) policy on water quality trading and the prospects for water quality trading to help improve water quality.

    Part II briefly describes our water quality problems and their causes. Part III examines the theoretical basis for trading and the EPA's Water Quality Trading Policy. Part IV discusses the potential impact of total maximum daily loads (TMDLs) on water quality trading, and Part V analyzes potential problems that water quality trading programs confront. Part VI addresses distributional and efficiency concerns that arise when considering trading and agricultural nonpoint source pollution. Part VII then examines issues relating to water quality trading and state laws before the Article reaches conclusions and recommendations in Part VIII.
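
    The theoretical case for trading rests on differences in marginal abatement costs between sources. A back-of-the-envelope Python sketch of why a point source might buy credits from a farm under a trading ratio follows; the costs and the 2:1 ratio are invented numbers for illustration, not figures from the Article or from EPA policy.

        required_reduction = 100          # lbs of nutrient the point source must cut
        point_cost = 50.0                 # $/lb marginal abatement cost at the plant
        farm_cost = 10.0                  # $/lb marginal abatement cost on the farm
        trading_ratio = 2.0               # lbs the farm must abate per lb credited,
                                          # reflecting uncertainty in nonpoint reductions

        cost_without_trade = required_reduction * point_cost
        cost_with_trade = required_reduction * trading_ratio * farm_cost
        savings = cost_without_trade - cost_with_trade

        print(f"abate on-site:    ${cost_without_trade:,.0f}")   # $5,000
        print(f"buy farm credits: ${cost_with_trade:,.0f}")      # $2,000
        print(f"savings:          ${savings:,.0f}")              # $3,000

    Under these assumptions the trade is attractive only while the farm's cost times the trading ratio stays below the plant's own abatement cost, which is why the choice of trading ratio matters so much to both the efficiency and the environmental integrity of a program.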

    Data analytics and algorithms in policing in England and Wales: Towards a new policy framework

    RUSI was commissioned by the Centre for Data Ethics and Innovation (CDEI) to conduct an independent study into the use of data analytics by police forces in England and Wales, with a focus on algorithmic bias. The primary purpose of the project is to inform CDEI's review of bias in algorithmic decision-making, which is focusing on four sectors, including policing, and working towards a draft framework for the ethical development and deployment of data analytics tools for policing.

    This paper focuses on advanced algorithms used by the police to derive insights, inform operational decision-making or make predictions. Biometric technology, including live facial recognition, DNA analysis and fingerprint matching, is outside the direct scope of this study, as are covert surveillance capabilities and digital forensics technology, such as mobile phone data extraction and computer forensics. However, because many of the policy issues discussed in this paper stem from general underlying data protection and human rights frameworks, they are also relevant to other police technologies, whose use must be considered in parallel to the tools examined here.

    The project involved engaging closely with senior police officers, government officials, academics, legal experts, regulatory and oversight bodies and civil society organisations. Sixty-nine participants took part in the research through semi-structured interviews, focus groups and roundtable discussions.

    The project has revealed widespread concern across the UK law enforcement community regarding the lack of official national guidance for the use of algorithms in policing, with respondents suggesting that this gap should be addressed as a matter of urgency. Any future policy framework should be principles-based and complement existing police guidance in a ‘tech-agnostic’ way. Rather than establishing prescriptive rules and standards for different data technologies, the framework should establish standardised processes to ensure that data analytics projects follow recommended routes for the empirical evaluation of algorithms within their operational context and are assessed against legal requirements and ethical standards. The new guidance should focus on ensuring multi-disciplinary legal, ethical and operational input from the outset of a police technology project; a standard process for model development, testing and evaluation; a clear focus on the human–machine interaction and the ultimate interventions a data-driven process may inform; and ongoing tracking and mitigation of discrimination risk.
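
    The "ongoing tracking and mitigation of discrimination risk" the paper calls for could, for instance, be operationalised as routine monitoring of selection-rate gaps across protected groups in a deployed model's decisions. The Python sketch below assumes a hypothetical decision log and an arbitrary 0.2 alert threshold; neither the metric choice nor the threshold comes from the RUSI/CDEI paper.

        from collections import defaultdict

        def selection_rates(decisions):
            """decisions: iterable of (group, flagged) pairs from a deployed model."""
            totals, flagged = defaultdict(int), defaultdict(int)
            for group, hit in decisions:
                totals[group] += 1
                flagged[group] += int(hit)
            return {g: flagged[g] / totals[g] for g in totals}

        def disparity_alert(decisions, max_gap=0.2):
            """Flag for human review if selection rates diverge by more than max_gap."""
            rates = selection_rates(decisions)
            gap = max(rates.values()) - min(rates.values())
            return gap, gap > max_gap

        log = [("A", True), ("A", False), ("B", True), ("B", True), ("B", False)]
        gap, needs_review = disparity_alert(log)   # gap ~0.17, below the 0.2 threshold

    Run periodically over fresh decision logs, such a check gives the standardised, repeatable evaluation step the paper argues a framework should require, while leaving the mitigation response to multi-disciplinary review.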

    The Bellagio Global Dialogues on Intellectual Property

    Reviews Rockefeller's conference series on intellectual property and its efforts to promote policies and institutional capacities that better serve the poor, with a focus on food security and public health. Discusses global policy, development, and trade.

    Anomalies: Ultimatums, Dictators and Manners

    Economics can be distinguished from other social sciences by the belief that most (all?) behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences in markets that (eventually) clear. An empirical result qualifies as an anomaly if it is difficult to "rationalize" or if implausible assumptions are necessary to explain it within the paradigm. This column will resume, after a long rest, the investigation of such anomalies.
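
    To make the notion of an anomaly concrete: in the ultimatum game, the standard paradigm predicts a responder accepts any positive offer (something beats nothing), yet experimental subjects routinely reject low offers. A stylised Python sketch follows; the 30% rejection threshold is an illustrative stand-in for that empirical regularity, not a figure from the column.

        def rational_responder(offer: float) -> bool:
            """Paradigm prediction: any positive offer beats rejecting and getting zero."""
            return offer > 0

        def observed_responder(offer: float, pie: float = 10.0) -> bool:
            """Stylised behavior: offers below ~30% of the pie tend to be rejected."""
            return offer >= 0.3 * pie

        for offer in (0.5, 2.0, 5.0):
            print(offer, rational_responder(offer), observed_responder(offer))
        # 0.5 and 2.0 should be accepted under the paradigm but are rejected in
        # the stylised data: that gap is exactly what qualifies as an anomaly.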