
    The Color of Algorithms: An Analysis and Proposed Research Agenda for Deterring Algorithmic Redlining


    Artificial Intelligence, the Law-Machine Interface, and Fair Use Automation

    From IBM Watson's success in Jeopardy! to Google DeepMind's victories in Go, the past decade has seen artificial intelligence advancing in leaps and bounds. Such advances have captured the attention of not only computer experts and academic commentators but also policymakers, the mass media and the public at large. In recent years, legal scholars have also actively explored how artificial intelligence will impact the law. Such exploration has resulted in a fast-growing body of scholarship. One area that has not received sufficient policy and scholarly attention concerns the law-machine interface in a hybrid environment in which both humans and intelligent machines will make legal decisions at the same time. To fill this void, the present article utilizes the case study of fair use automation to explore how legal standards can be automated and what this specific case study can teach us about the law-machine interface. Although this article utilizes an example generated from a specialized area of the law—namely, copyright or intellectual property law—its insights will apply to other situations involving the interplay of artificial intelligence and the law. The article begins by outlining the case study of fair use automation and examining three dominant arguments against such automation. Taking seriously the benefits provided by artificial intelligence, machine learning and big data analytics, this article then identifies three distinct pathways for legal automation: translation, approximation and self-determination. The second half of the article turns to key questions concerning the law-machine interface, the understanding of which will be important when automated systems are being designed to implement legal standards. Specifically, these questions focus on the allocation of decision-making power, the hierarchy of decisions and the legal effects of machine-made decisions. The article concludes by highlighting the wide-ranging ramifications of artificial intelligence for the law, the legislature, the bench, the bar and academe.

    Privacy and Accountability in Black-Box Medicine

    Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.

    The Body Politics of Data

    The PhD project The Body Politics of Data is an artistic, practice-based study exploring how feminist methodologies can create new ways to conceptualise digital embodiment within the field of art and technology. Because the field is highly influenced by scientific and technical methodologies, its discursive and artistic tools for examining data as a social concern are limited. The research draws on performance art from the 1960s, cyberfeminist practice, Object Oriented Feminism and intersectional perspectives on data to conceive of new models of practice that can account for the body-political operations of extractive big data technologies and Artificial Intelligence. The research is created through a body of individual and collective experimental artistic projects featured in the solo exhibition The Body Politics of Data at London Gallery West (2020). It includes work on maternity data and predictive products in relation to reproductive health in the UK, created in collaboration with Loes Bogers (2016-2017); workshops on “bodily bureaucracies” with Autonomous Tech Fetish (2013-2016); and Accumulative Care, a feminist model of care for labouring in the age of extractive digital technologies. This research offers an embodied feminist methodology through which artistic practice can investigate how processes of digitalisation produce adverse individual and collective effects, in order to identify and resist the forms of personal and collective risk emerging with data-driven technologies.

    Cyber Ethics 4.0: Serving Humanity with Values

    Cyberspace influences all sectors of life and society: artificial intelligence, robots, blockchain, self-driving cars and autonomous weapons, cyberbullying, telemedicine and cyber health, new methods in food production, destruction and conservation of the environment, Big Data as a new religion, the role of education and citizens’ rights, the need for legal regulations and international conventions. The 25 articles in this book cover this wide range of hot topics. Authors from many countries, including some holding positions in international (UN) organisations, look for solutions from an ethical perspective. Cyber Ethics aims to provide orientation on what is right and wrong, good and bad, in relation to cyberspace. The authors apply and modify fundamental values and virtues to the specific, new challenges arising from cyber technology and cyber society. The book serves as reading material for teachers, students, policy makers, politicians, businesses, hospitals, NGOs and religious organisations alike. It is an invitation to dialogue, debate and solutions.

    The Intuitive Appeal of Explainable Machines

    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

    The Algorithmic Divide and Equality in the Age of Artificial Intelligence

    In the age of artificial intelligence, highly sophisticated algorithms have been deployed to provide analysis, detect patterns, optimize solutions, accelerate operations, facilitate self-learning, minimize human errors and biases and foster improvements in technological products and services. Notwithstanding these tremendous benefits, algorithms and intelligent machines do not provide equal benefits to all. Just as the digital divide has separated those with access to the Internet, information technology and digital content from those without, an emerging and ever-widening algorithmic divide now threatens to take away the many political, social, economic, cultural, educational and career opportunities provided by machine learning and artificial intelligence. Although policy makers, commentators and the mass media have paid growing attention to algorithmic bias and the shortcomings of machine learning and artificial intelligence, the algorithmic divide has yet to attract much policy and scholarly attention. To fill this lacuna, this article draws on the digital divide literature to systematically analyze this new inequitable gap between the technology haves and have-nots. Utilizing an analytical framework that the Author developed in the early 2000s, the article begins by discussing the five attributes of the algorithmic divide: awareness, access, affordability, availability and adaptability. This article then turns to three major problems precipitated by an emerging and fast-expanding algorithmic divide: (1) algorithmic deprivation; (2) algorithmic discrimination; and (3) algorithmic distortion. While the first two problems affect primarily those on the unfortunate side of the divide, the last problem impacts individuals on both sides. This article concludes by proposing seven non-exhaustive clusters of remedial actions to help bridge this emerging and ever-widening algorithmic divide. Combining law, communications policy, ethical principles, institutional mechanisms and business practices, the article fashions a holistic response to help foster equality in the age of artificial intelligence.

    Fair and equitable AI in biomedical research and healthcare: Social science perspectives

    Artificial intelligence (AI) offers opportunities but also challenges for biomedical research and healthcare. This position paper shares the results of the international conference “Fair medicine and AI” (online 3–5 March 2021). Scholars from science and technology studies (STS), gender studies, and ethics of science and technology formulated opportunities, challenges, and research and development desiderata for AI in healthcare. AI systems and solutions, which are being rapidly developed and applied, may have undesirable and unintended consequences, including the risk of perpetuating health inequalities for marginalized groups. The socially robust development and implementation of AI in healthcare require urgent investigation. There is a particular dearth of studies in human-AI interaction and how this may best be configured to dependably deliver safe, effective and equitable healthcare. To address these challenges, we need to establish diverse and interdisciplinary teams equipped to develop and apply medical AI in a fair, accountable and transparent manner. We underscore the importance of including social science perspectives in the development of intersectionally beneficent and equitable AI for biomedical research and healthcare, in part by strengthening AI health evaluation.