Information Privacy and the Inference Economy
Information privacy is in trouble. Contemporary information privacy protections emphasize individuals' control over their own personal information. But machine learning, the leading form of artificial intelligence, facilitates an inference economy that pushes this protective approach past its breaking point. Machine learning provides pathways to use data and make probabilistic predictions (inferences) that are inadequately addressed by the current regime. For one, seemingly innocuous or irrelevant data can generate machine learning insights, making it impossible for an individual to anticipate what kinds of data warrant protection. Moreover, it is possible to aggregate myriad individuals' data within machine learning models, identify patterns, and then apply the patterns to make inferences about other people who may or may not be part of the original dataset. The inferential pathways created by such models shift away from "your" data and towards a new category of "information that might be about you." And because our law assumes that privacy is about personal, identifiable information, we miss the privacy interests implicated when aggregated data that is neither personal nor identifiable can be used to make inferences about you, me, and others.
This Article contends that accounting for the power and peril of inferences requires reframing information privacy governance as a network of organizational relationships to manage, not merely a set of dataflows to constrain. The status quo magnifies the power of organizations that collect and process data, while disempowering the people who provide data and who are affected by data-driven decisions. It ignores the triangular relationship among collectors, processors, and people and, in particular, disregards the codependencies between organizations that collect data and organizations that process data to draw inferences. It is past time to rework the structure of our regulatory protections. This Article provides a framework to move forward. Accounting for organizational relationships reveals new sites for regulatory intervention and offers a more auspicious strategy to contend with the impact of data on human lives in our inference economy.
Public networks for public safety: a workshop on the present and future of mesh networks
This briefing document was developed in conjunction with "Public Networks for Public Safety: A Workshop on the Present and Future of Mesh Networking," which was held on March 30, 2012, at Harvard University.
The workshop was intended as a starting point for conversation about whether mesh networks can and should be adopted within consumer technologies to enhance public safety communications while also empowering and connecting the public. Attendees at the workshop included members of government agencies, academia, the telecommunications industry, and civil society organizations.
The day began with a series of extended introductions and lightning talks, which laid out some of the key issues facing the use of mesh generally and its application to public safety communications in particular. Later sessions included an assessment of the current state of play for these applications, a presentation on social factors that affect community adoption of distributed networking technologies, and a taxonomy of the differences among a variety of decentralized networking technologies.
After public safety officials reflected on the strengths and weaknesses of current public safety communication, the final session focused on translating insights presented at the workshop into a set of shared principles that could inform future efforts to advance the use of mesh, both as a networking technology and as a social construct, for public safety.
Artificial Intelligence in Strategic Context: An Introduction
Artificial intelligence (AI), particularly various methods of machine learning (ML), has achieved landmark advances over the past few years in applications as diverse as playing complex games, language processing, speech recognition and synthesis, image identification, and facial recognition. These breakthroughs have brought a surge of popular, journalistic, and policy attention to the field, including both excitement about anticipated advances and the benefits they promise, and concern about societal impacts and risks, potentially arising through whatever combination of accident, malicious or reckless use, or just social and political disruption from the scale and rapidity of change.
Replication Data for: Social Mobilization and the Networked Public Sphere: Mapping the SOPA-PIPA Debate
Data for the paper "Social Mobilization and the Networked Public Sphere: Mapping the SOPA-PIPA Debate", by Yochai Benkler, Bruce Etling, Rob Faris, Hal Roberts, Alicia Solow-Niederman. http://dx.doi.org/10.2139/ssrn.229595