
    Configuring the Networked Citizen

    Among legal scholars of technology, it has become commonplace to acknowledge that the design of networked information technologies has regulatory effects. For the most part, that discussion has been structured by the taxonomy developed by Lawrence Lessig, which classifies code as one of four principal regulatory modalities, alongside law, markets, and norms. As a result of that framing, questions about the applicability of constitutional protections to technical decisions have taken center stage in legal and policy debates. Some scholars have pondered whether digital architectures unacceptably constrain fundamental liberties, and what public design obligations might follow from such a conclusion. Others have argued that code belongs firmly on the private side of the public/private divide because it originates in the innovative activity of private actors. In a forthcoming book, the author argues that the project of situating code within one or another part of the familiar constitutional landscape too often distracts legal scholars from more important questions about the quality of the regulation that networked digital architectures produce. The gradual, inexorable embedding of networked information technologies has the potential to alter, in largely invisible ways, the interrelated processes of subject formation and culture formation. Within legal scholarship, the prevailing conceptions of subjectivity tend to be highly individualistic, oriented around the activities of speech and voluntary affiliation. Subjectivity also tends to be understood as definitionally independent of culture. Yet subjectivity is importantly collective, formed by the substrate within which individuality emerges. People form their conceptions of the good in part by reading, listening, and watching—by engaging with the products of a common culture—and by interacting with one another. 
Those activities are socially and culturally mediated, shaped by the preexisting communities into which individuals are born and within which they develop. They are also technically mediated, shaped by the artifacts that individuals encounter in common use. The social and cultural patterns that mediate the activities of self-constitution are being reconfigured by the pervasive adoption of technical protocols and services that manage the activities of content delivery, search, and social interaction. In developed countries, a broad cross-section of the population routinely uses networked information technologies and communications devices in hundreds of mundane, unremarkable ways. We search for information, communicate with each other, and gain access to networked resources and services. For the most part, as long as our devices and technologies work as expected, we give little thought to how they work; those questions are understood to be technical questions. Such questions are better characterized as sociotechnical. As networked digital architectures increasingly mediate the ordinary processes of everyday life, they catalyze gradual yet fundamental social and cultural change. This chapter—originally published in Imagining New Legalities: Privacy and Its Possibilities in the 21st Century, edited by Austin Sarat, Lawrence Douglas, and Martha Merrill Umphrey (2012)—considers two interrelated questions that flow from understanding sociotechnical change as (re)configuring networked subjects. First, it revisits the way that legal and policy debates locate networked information technologies with respect to the public/private divide. The design of networked information technologies and communications devices is conventionally treated as a private matter; indeed, that designation has been the principal stumbling block encountered by constitutional theorists of technology. 
The classification of code as presumptively private has effects that reach beyond debates about the scope of constitutional guarantees, shaping views about the extent to which regulation of technical design decisions is normatively desirable. This chapter reexamines that discursive process, using lenses supplied by literatures on third-party liability and governance. Second, this chapter considers the relationship between sociotechnical change and understandings of citizenship. The ways that people think, form beliefs, and interact with one another are centrally relevant to the sorts of citizens that they become. The gradual embedding of networked information technologies into the practice of everyday life therefore has important implications for both the meaning and the practice of citizenship in the emerging networked information society. If design decisions are neither merely technical nor presumptively private, then they should be subject to more careful scrutiny with regard to the kind of citizen they produce. In particular, policy-makers cannot avoid engaging with the particular values that are encoded

    Response to Privacy as a Public Good

    In the spirit of moving forward the theoretical and empirical scholarship on privacy as a public good, this response addresses four issues raised by Professors Fairfield and Engel’s article: first, their depiction of individuals in groups; second, suggestions for clarifying the concept of group; third, an explanation of why the platforms on which groups exist and interact need more analysis; and finally, the question of what kind of government intervention might be necessary to protect privacy as a public good.

    A Storm in an IoT Cup: The Emergence of Cyber-Physical Social Machines

    The concept of social machines is increasingly being used to characterise various socio-cognitive spaces on the Web. Social machines are human collectives using networked digital technology which initiate real-world processes and activities, including human communication, interactions and knowledge creation. As such, they continuously emerge and fade on the Web. The relationship between humans and machines is made more complex by the adoption of Internet of Things (IoT) sensors and devices. The scale, automation, continuous sensing, and actuation capabilities of these devices add an extra dimension to the relationship between humans and machines, making it difficult to understand their evolution at either the systemic or the conceptual level. This article describes these new socio-technical systems, which we term Cyber-Physical Social Machines, through different exemplars, and considers the associated challenges of security and privacy. (Comment: 14 pages, 4 figures)

    The Mundane Computer: Non-Technical Design Challenges Facing Ubiquitous Computing and Ambient Intelligence

    Interdisciplinary collaboration, to include those who are not natural scientists, engineers and computer scientists, is inherent in the idea of ubiquitous computing, as formulated by Mark Weiser in the late 1980s and early 1990s. However, ubiquitous computing has remained largely a computer science and engineering concept, and its non-technical side remains relatively underdeveloped. The aim of the article is, first, to clarify the kind of interdisciplinary collaboration envisaged by Weiser. Second, the difficulties of understanding the everyday and of weaving ubiquitous technologies into the fabric of everyday life until they are indistinguishable from it, as conceived by Weiser, are explored. The contributions of Anne Galloway, Paul Dourish and Philip Agre to creating an understanding of everyday life relevant to the development of ubiquitous computing are discussed, focusing on the notions of performative practice, embodied interaction and contextualisation. Third, it is argued that with the shift to the notion of ambient intelligence, the larger-scale socio-economic and socio-political dimensions of context become more explicit, in contrast to the focus on the smaller-scale anthropological study of social (mainly workplace) practices inherent in the concept of ubiquitous computing. This can be seen in the adoption of the concept of ambient intelligence within the European Union and in the focus on rebalancing (personal) privacy protection and (state) security in the wake of 11 September 2001. Fourth, the importance of adopting a futures-oriented approach to discussing the issues arising from the notions of ubiquitous computing and ambient intelligence is stressed, while the difficulty of trying to achieve societal foresight is acknowledged.

    Internet Utopianism and the Practical Inevitability of Law

    Writing at the dawn of the digital era, John Perry Barlow proclaimed cyberspace to be a new domain of pure freedom. Addressing the nations of the world, he cautioned that their laws, which were “based on matter,” simply did not speak to conduct in the new virtual realm. As both Barlow and the cyberlaw scholars who took up his call recognized, that was not so much a statement of fact as it was an exercise in deliberate utopianism. But it has proved prescient in a way that they certainly did not intend. The “laws” that increasingly have no meaning in online environments include not only the mandates of market regulators but also the guarantees that supposedly protect the fundamental rights of internet users, including the expressive and associational freedoms whose supremacy Barlow asserted. More generally, in the networked information era, protections for fundamental human rights — both on- and offline — have begun to fail comprehensively. Cyberlaw scholarship in the Barlowian mold isn’t to blame for the worldwide erosion of protections for fundamental rights, but it also hasn’t helped as much as it might have. In this essay, adapted from a forthcoming book on the evolution of legal institutions in the information era, I identify and briefly examine three intersecting flavors of internet utopianism in cyberlegal thought that are worth reexamining. It has become increasingly apparent that functioning legal institutions have indispensable roles to play in protecting and advancing human freedom. It has also become increasingly apparent, however, that the legal institutions we need are different from the ones we have.

    Cross-layer Approach for Designing Resilient (Sociotechnical, Cyber-Physical, Software-intensive and Systems of) Systems

    Our society’s critical infrastructures are sociotechnical cyber-physical systems (CPS) that increasingly use open networks for operation. The vulnerabilities of the software deployed in the new control system infrastructure will expose the control system to many potential risks and threats from attackers. This paper begins to develop an information systems design theory for resilient software-intensive systems (DT4RS) so that communities developing and operating different security technologies can share knowledge and best practices using a common frame of reference. Given a sound design theory, the outputs of these communities will combine to create more resilient systems, with fewer vulnerabilities and an improved stakeholder sense of security and welfare. The main element of DT4RS is a multi-layered reference architecture of the human, software (cyber) and platform (physical) layers of a cyber-physical system. The layered architecture can facilitate the understanding of the cross-layer interactions between the layers. Cyber security properties are leveraged to help analyze the interactions between these layers.
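    The layered architecture this abstract describes could be modeled as a small data structure so that cross-layer interactions can be enumerated. The following is a hypothetical sketch, not taken from the paper: only the three layer names (human, cyber, physical) come from the abstract, and all class, method, and interaction names are invented for illustration.

```python
from dataclasses import dataclass, field

# Layer names follow the abstract; everything else is illustrative.
LAYERS = ("human", "cyber", "physical")

@dataclass
class Interaction:
    source: str       # layer where the interaction originates
    target: str       # layer it affects
    description: str

@dataclass
class ReferenceArchitecture:
    interactions: list = field(default_factory=list)

    def add(self, source, target, description):
        # Only the three named layers are valid endpoints.
        assert source in LAYERS and target in LAYERS
        self.interactions.append(Interaction(source, target, description))

    def cross_layer(self):
        # Cross-layer interactions are those spanning two distinct layers.
        return [i for i in self.interactions if i.source != i.target]

arch = ReferenceArchitecture()
arch.add("human", "cyber", "operator issues a control command")
arch.add("cyber", "physical", "software actuates a field device")
arch.add("cyber", "cyber", "service-to-service API call")

print(len(arch.cross_layer()))  # → 2
```

    Making the interactions explicit in this way is one plausible reading of how a common frame of reference could let different security communities analyze the same system.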

    PRIMA — Privacy research through the perspective of a multidisciplinary mash up

    Based on a summary description of privacy protection research within three fields of inquiry, viz. social sciences, legal science, and computer and systems sciences, we discuss multidisciplinary approaches with regard to the difficulties and the risks that they entail as well as their possible advantages. The latter include the identification of relevant perspectives of privacy, increased expressiveness in the formulation of research goals, opportunities for improved research methods, and a boost in the utility of invested research efforts.

    Trust in social machines: the challenges

    The World Wide Web has ushered in a new generation of applications constructively linking people and computers to create what have been called ‘social machines.’ The ‘components’ of these machines are people and technologies. It has long been recognised that for people to participate in social machines, they have to trust the processes. However, the notions of trust often used tend to be imported from agent-based computing, and may be too formal, objective and selective to describe human trust accurately. This paper applies a theory of human trust to social machines research, and sets out some of the challenges facing system designers.