    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost.
We argue that other parts of the GDPR related (i) to the right to erasure (the “right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.

    The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism

    Computer vision and other biometrics data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state' - in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks if there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.

    Configuring the Networked Citizen

    Among legal scholars of technology, it has become commonplace to acknowledge that the design of networked information technologies has regulatory effects. For the most part, that discussion has been structured by the taxonomy developed by Lawrence Lessig, which classifies code as one of four principal regulatory modalities, alongside law, markets, and norms. As a result of that framing, questions about the applicability of constitutional protections to technical decisions have taken center stage in legal and policy debates. Some scholars have pondered whether digital architectures unacceptably constrain fundamental liberties, and what public design obligations might follow from such a conclusion. Others have argued that code belongs firmly on the private side of the public/private divide because it originates in the innovative activity of private actors. In a forthcoming book, the author argues that the project of situating code within one or another part of the familiar constitutional landscape too often distracts legal scholars from more important questions about the quality of the regulation that networked digital architectures produce. The gradual, inexorable embedding of networked information technologies has the potential to alter, in largely invisible ways, the interrelated processes of subject formation and culture formation. Within legal scholarship, the prevailing conceptions of subjectivity tend to be highly individualistic, oriented around the activities of speech and voluntary affiliation. Subjectivity also tends to be understood as definitionally independent of culture. Yet subjectivity is importantly collective, formed by the substrate within which individuality emerges. People form their conceptions of the good in part by reading, listening, and watching—by engaging with the products of a common culture—and by interacting with one another. 
Those activities are socially and culturally mediated, shaped by the preexisting communities into which individuals are born and within which they develop. They are also technically mediated, shaped by the artifacts that individuals encounter in common use. The social and cultural patterns that mediate the activities of self-constitution are being reconfigured by the pervasive adoption of technical protocols and services that manage the activities of content delivery, search, and social interaction. In developed countries, a broad cross-section of the population routinely uses networked information technologies and communications devices in hundreds of mundane, unremarkable ways. We search for information, communicate with each other, and gain access to networked resources and services. For the most part, as long as our devices and technologies work as expected, we give little thought to how they work; those questions are understood to be technical questions. Such questions are better characterized as sociotechnical. As networked digital architectures increasingly mediate the ordinary processes of everyday life, they catalyze gradual yet fundamental social and cultural change. This chapter—originally published in Imagining New Legalities: Privacy and Its Possibilities in the 21st Century, edited by Austin Sarat, Lawrence Douglas, and Martha Merrill Umphrey (2012)—considers two interrelated questions that flow from understanding sociotechnical change as (re)configuring networked subjects. First, it revisits the way that legal and policy debates locate networked information technologies with respect to the public/private divide. The design of networked information technologies and communications devices is conventionally treated as a private matter; indeed, that designation has been the principal stumbling block encountered by constitutional theorists of technology. 
The classification of code as presumptively private has effects that reach beyond debates about the scope of constitutional guarantees, shaping views about the extent to which regulation of technical design decisions is normatively desirable. This chapter reexamines that discursive process, using lenses supplied by literatures on third-party liability and governance. Second, this chapter considers the relationship between sociotechnical change and understandings of citizenship. The ways that people think, form beliefs, and interact with one another are centrally relevant to the sorts of citizens that they become. The gradual embedding of networked information technologies into the practice of everyday life therefore has important implications for both the meaning and the practice of citizenship in the emerging networked information society. If design decisions are neither merely technical nor presumptively private, then they should be subject to more careful scrutiny with regard to the kind of citizen they produce. In particular, policy-makers cannot avoid engaging with the particular values that are encoded.

    AI management: an exploratory survey of the influence of GDPR and FAT principles

    As organisations increasingly adopt AI technologies, a number of ethical issues arise. Much research focuses on algorithmic bias, but there are other important concerns arising from the new uses of data and the introduction of technologies which may impact individuals. This paper examines the interplay between AI, Data Protection and FAT (Fairness, Accountability and Transparency) principles. We review the potential impact of the GDPR and consider the importance of the management of AI adoption. A survey of data protection experts is presented, the initial analysis of which provides some early insights into the praxis of AI in operational contexts. The findings indicate that organisations are not fully compliant with the GDPR, and that there is limited understanding of the relevance of FAT principles as AI is introduced. Those organisations which demonstrate greater GDPR compliance are likely to take a more cautious, risk-based approach to the introduction of AI.

    Subaltern imaginaries of localism: constructions of place, space and democracy in community-led housing organisations.

    The localism strategies of the UK government provide a suite of ‘rights’ for community organisations that licence place-based political imaginaries with the intent to construct the community as a proxy for a smaller state. Conflating place with participation and promising to devolve power, localism authorises a performative enactment of democracy, citizenship and the ‘public’ through the lived experience of space. In constituting the local as a metaphor for democracy and empowerment, however, community localism foregrounds the pivotal role played by place and scale in cementing social differentiation and in naturalising hierarchical power relations. This paper explores the subaltern strategies of localism that may emerge when the rights of localism are exercised by residents’ organisations in marginalised communities of social housing. Drawing on research with community-led housing organisations, it demonstrates how the spatial imaginations and spatial practices of localism can be implemented to assert new claims on democracy and citizenship. In particular, it identifies four spatial practices – the extension of domestic space, the invocation of locality, the construction of domestic scale, and the scalar reimagining of democracy – that subvert the reordering of political space that is localism’s regulatory intent.

    Artificial intelligence: opportunities and implications for the future of decision making

    Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government, but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This report sets out some possible approaches, and describes some of the ways government is already engaging with these issues.

    Thinking, Interthinking, and Technological Tools

    Language use is widely regarded as an important indicator of high-quality learning and reasoning ability. Yet this masks an irony: language is fundamentally a social, collaborative tool, but despite the widespread recognition of its importance in relation to learning, the role of dialogue is undervalued in learning contexts. In this chapter we argue that to see language as only a tool for individual thought presents a limited view of its transformative power. This power, we argue, lies in the ways in which dialogue is used to interthink – that is, to think together, to build knowledge co-constructively through our shared understanding. Technology can play an important role in resourcing thinking through the provision of information, and in providing a space to think alone. It can moreover provide significant support for learners to build shared representations together, particularly through giving learners access to a wealth of ‘given’ inter-related texts which resource the co-construction of knowledge.

    On Being Accountable in a Kaleidoscopic World

    In this lecture, I explore the concept of accountability in the changing world in which international law operates, and draw upon my own recent experience chairing the Inspection Panel at the World Bank. In doing so, I want especially to recognize the concerns of poor people and bring their plight into the discussion of accountability. The world today differs sharply from the one in which the United Nations was formed, some 65 years ago. In that world, there were only 51 states, few international organizations, a nascent global civil society, and only 2 billion people, many of whom lived under colonialism and in poverty; there was an emerging recognition of human rights, and the glimmerings of globalization. International environmental law, for the most part, did not exist.