Information Makes A Difference For Privacy Design
In the current information age, information can make a difference to all aspects of one’s life: emotionally, ethically, financially, or societally. Information privacy plays a key role in enabling this difference along many dimensions, such as trust, respect, reputation, security, resources, ability, and employment. The capability of information to make a difference to one’s life is a fundamental factor, and the privacy status of information is a key factor driving this difference. Understanding the impact of these two factors on one’s life within an IS context is an important research gap in the discipline. This paper studies “information + privacy”, ontologically and integrally, in making a difference to one’s life within the IS context. In recognition of the importance of the Privacy-by-Design approach to IS development, a methodology is proposed to understand the grounds of information and to model fundamental constructs for using the Privacy-by-Design approach to develop robust, privacy-friendly information systems.
Truthful Mechanisms for Agents that Value Privacy
Recent work has constructed economic mechanisms that are both truthful and
differentially private. In these mechanisms, privacy is treated separately from
the truthfulness; it is not incorporated in players' utility functions (and
doing so has been shown to lead to non-truthfulness in some cases). In this
work, we propose a new, general way of modelling privacy in players' utility
functions. Specifically, we only assume that if an outcome o has the property
that any report of player i would have led to o with approximately the same
probability, then o has small privacy cost to player i. We give three
mechanisms that are truthful with respect to our modelling of privacy: for an
election between two candidates, for a discrete version of the facility
location problem, and for a general social choice problem with discrete
utilities (via a VCG-like mechanism). As the number of players increases,
the social welfare achieved by our mechanisms approaches optimal (as a fraction
of n).
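For intuition, the two-candidate election setting can be sketched with a randomized-response mechanism: because any single report shifts the outcome distribution only slightly, each player's privacy cost is small, while a large electorate still elects the majority candidate with high probability. The sketch below is an illustration under those assumptions, not the paper's exact mechanism; the function name and parameters are hypothetical.

```python
import math
import random

def private_two_candidate_election(votes, epsilon=1.0):
    """Toy randomized-response election sketch (illustrative, not the paper's
    mechanism). Each binary vote is flipped with a small probability, so any
    single player's report changes the outcome distribution only slightly --
    the 'small privacy cost' property assumed in the abstract above."""
    # flip rate of the standard epsilon-DP randomized-response channel
    flip_prob = 1.0 / (1.0 + math.exp(epsilon))
    noisy = [v if random.random() > flip_prob else 1 - v for v in votes]
    # returns 1 if candidate 1 wins the noisy majority, else 0
    return int(sum(noisy) > len(noisy) / 2)
```

With many voters the per-vote noise averages out, which mirrors the abstract's claim that social welfare approaches optimal as the number of players grows.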
Privacy Games: Optimal User-Centric Data Obfuscation
In this paper, we design user-centric obfuscation mechanisms that impose the
minimum utility loss required to guarantee a user's privacy. We optimize utility
subject to a joint guarantee of differential privacy (indistinguishability) and
distortion privacy (inference error). This double shield of protection limits
the information leakage through the obfuscation mechanism as well as the posterior
inference. We show that the privacy achieved through joint
differential-distortion mechanisms against optimal attacks is as large as the
maximum privacy that can be achieved by either of these mechanisms separately.
Their utility cost is also not larger than what either of the differential or
distortion mechanisms imposes. We model the optimization problem as a
leader-follower game between the designer of the obfuscation mechanism and the
potential adversary, and design adaptive mechanisms that anticipate and protect
against optimal inference algorithms. Thus, the obfuscation mechanism is
optimal against any inference algorithm.
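The adversary's side of this leader-follower game can be made concrete: given a prior over secret locations and a candidate obfuscation channel, an optimal Bayesian adversary guesses so as to minimize expected distortion. The sketch below (function name and toy matrices are illustrative assumptions, not the paper's formulation) computes the resulting expected inference error, i.e. the distortion-privacy level the designer must anticipate.

```python
def adversary_expected_error(prior, channel, dist):
    """Expected inference error of an optimal Bayesian adversary (illustrative).
    prior[s]:       probability that the true location is s
    channel[s][o]:  probability the obfuscation mechanism outputs o given s
    dist[s][g]:     distortion incurred when the adversary guesses g for true s
    """
    n_s = len(prior)
    n_o = len(channel[0])
    err = 0.0
    for o in range(n_o):
        # unnormalized posterior over true locations given observation o;
        # the missing normalization is Pr[o], which re-enters as the weight below
        post = [prior[s] * channel[s][o] for s in range(n_s)]
        # the adversary picks the guess minimizing posterior expected distortion
        best = min(
            sum(post[s] * dist[s][g] for s in range(n_s)) for g in range(n_s)
        )
        err += best
    return err
```

An identity channel yields zero adversary error (no privacy), while a fully uniform channel over two equally likely locations forces error 0.5, the maximum for 0/1 distortion, illustrating the privacy/utility trade-off the mechanisms above optimize.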
More than just friends? Facebook, disclosive ethics and the morality of technology
Social networking sites have become increasingly popular destinations for people wishing to chat,
play games, make new friends or simply stay in touch. Furthermore, many organizations have
been quick to grasp the potential they offer for marketing, recruitment and economic activities.
Nevertheless, counterclaims depict such spaces as arenas where deception, social grooming and
the posting of defamatory content flourish. Much research in this area has focused on the ends to
which people deploy the technology, and the consequences arising, with a view to making policy
recommendations and ethical interventions. In this paper, we argue that tracing where morality
lies is more complex than these efforts suggest. Using the case of a popular social networking site,
and concepts about the morality of technology, we disclose the ethics of Facebook as diffuse and
multiple. In our conclusions we provide some reflections on the possibilities for action in light of
this disclosure.
Ethics and social networking sites: A disclosive analysis of Facebook
Paper has been accepted for publication in Information, Technology and People.
Purpose: This paper provides insights into the moral values embodied by a popular social networking site (SNS), Facebook. We adopt the position that technology, as well as humans, has a moral character, in order to disclose ethical concerns that are not transparent to users of the site.
Design/methodology/approach: This study is based upon qualitative fieldwork, involving participant observation, conducted over a two-year period.
Findings: Much research on the ethics of information systems has focused on the way that people deploy particular technologies, and the consequences arising, with a view to making policy recommendations and ethical interventions. By focusing on technology as a moral actor with reach across and beyond the Internet, we reveal the complex and diffuse nature of ethical responsibility in our case and the consequent implications for governance of SNS.
Research limitations/implications: We situate our research in a body of work known as disclosive ethics and argue for an ongoing process of evaluating SNS to reveal their moral importance. Along with other authors in the genre, our work is largely descriptive, but we engage with prior research by Brey and Introna to highlight the scope for theory development.
Practical implications: Governance measures that require the developers of social networking sites to revise their designs fail to address the diffuse nature of ethical responsibility in this case. Such technologies need to be opened up to scrutiny on a regular basis to increase public awareness of the issues and thereby disclose concerns to a wider audience. We suggest that there is value in studying the development and use of these technologies in their infancy, or if established, in the experiences of novice users. Furthermore, flash points in technological trajectories can prove useful sites of investigation.
Originality/value: Existing research on social networking sites either fails to address ethical concerns head on or adopts a tool view of the technologies so that the focus is on the ethical behaviour of users. We focus upon the agency, and hence the moral character, of technology to show both the possibilities for, and limitations of, ethical interventions in such cases.
When the Hammer Meets the Nail: Multi-Server PIR for Database-Driven CRN with Location Privacy Assurance
We show that it is possible to achieve information theoretic location privacy
for secondary users (SUs) in database-driven cognitive radio networks (CRNs)
with an end-to-end delay of less than a second, which is significantly better
than existing alternatives that offer only computational privacy. This
is achieved based on the observation that, by requirement of the Federal
Communications Commission (FCC), all certified spectrum databases synchronize
their records. Hence, the same copy of the spectrum database is available through
multiple (distinct) providers. We harness the synergy between multi-server
private information retrieval (PIR) and the database-driven CRN architecture to
offer an optimal level of privacy with high efficiency by exploiting this
observation. We demonstrate, analytically and experimentally with deployments
on actual cloud systems, that our adaptations of multi-server PIR outperform
the (currently) fastest single-server PIR by orders of magnitude while
providing information-theoretic security, collusion resiliency, and fault-tolerance
features. Our analysis indicates that multi-server PIR is an ideal
cryptographic tool to provide location privacy in database-driven CRNs, in
which the requirement of replicated databases is a natural part of the system
architecture, and therefore SUs can enjoy all advantages of multi-server PIR
without any additional architectural and deployment costs.
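The reason replicated, synchronized databases enable information-theoretic privacy can be illustrated with the classic two-server XOR-based PIR scheme, a standard textbook construction (the paper adapts multi-server PIR schemes, which need not be this exact one). Each server sees a uniformly random bit vector, so neither learns anything about the queried index unless the two collude:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pir_query(n, index):
    """Client side of classic 2-server XOR PIR over an n-record database.
    Each query alone is a uniformly random bit vector and reveals nothing
    about `index`; the two queries differ only at the retrieved position."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = list(q1)
    q2[index] ^= 1
    return q1, q2

def pir_answer(db, query):
    """Server side: XOR together the records selected by the query bits."""
    ans = bytes(len(db[0]))
    for rec, bit in zip(db, query):
        if bit:
            ans = xor_bytes(ans, rec)
    return ans

def pir_reconstruct(a1, a2):
    """Client side: all records selected by both queries cancel out,
    leaving exactly the record at the requested index."""
    return xor_bytes(a1, a2)
```

This matches the abstract's point: the FCC-mandated replication already provides the multiple identical database copies that multi-server PIR requires, so SUs pay no extra deployment cost for the stronger privacy guarantee.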
Evaluating the Contextual Integrity of Privacy Regulation: Parents' IoT Toy Privacy Norms Versus COPPA
Increased concern about data privacy has prompted new and updated data
protection regulations worldwide. However, there has been no rigorous way to
test whether the practices mandated by these regulations actually align with
the privacy norms of affected populations. Here, we demonstrate that surveys
based on the theory of contextual integrity provide a quantifiable and scalable
method for measuring the conformity of specific regulatory provisions to
privacy norms. We apply this method to the U.S. Children's Online Privacy
Protection Act (COPPA), surveying 195 parents and providing the first data that
COPPA's mandates generally align with parents' privacy expectations for
Internet-connected "smart" children's toys. Nevertheless, variations in the
acceptability of data collection across specific smart toys, information types,
parent ages, and other conditions emphasize the importance of detailed
contextual factors to privacy norms, which may not be adequately captured by
COPPA.
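A contextual-integrity survey of this kind is typically built by crossing contextual parameters (information type, recipient, transmission principle) into vignettes that respondents rate for acceptability. A minimal sketch of that instrument-construction step, with purely illustrative factor values rather than the study's actual survey items:

```python
from itertools import product

def ci_vignettes(info_types, recipients, transmission_principles):
    """Build contextual-integrity vignettes as the cross product of contextual
    parameters (illustrative; the study's actual factors and wording differ).
    Each vignette would be rated for acceptability by survey respondents."""
    template = "A smart toy records {info} and shares it with {rcpt} {tp}."
    return [
        template.format(info=i, rcpt=r, tp=t)
        for i, r, t in product(info_types, recipients, transmission_principles)
    ]
```

Comparing mean acceptability ratings across these parameter combinations is what lets the method quantify how closely a provision such as COPPA's tracks the population's privacy norms.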
Privacy-enhancing Aggregation of Internet of Things Data via Sensors Grouping
Big data collection practices using Internet of Things (IoT) pervasive
technologies are often privacy-intrusive and result in surveillance, profiling,
and discriminatory actions over citizens that in turn undermine the
participation of citizens to the development of sustainable smart cities.
Nevertheless, real-time data analytics and aggregate information from IoT
devices open up tremendous opportunities for managing smart city
infrastructures. The privacy-enhancing aggregation of distributed sensor data,
such as residential energy consumption or traffic information, is the research
focus of this paper. Citizens have the option to choose their privacy level by
reducing the quality of the shared data at a cost of a lower accuracy in data
analytics services. A baseline scenario is considered in which IoT sensor data
are shared directly with an untrustworthy central aggregator. A grouping
mechanism is introduced that improves privacy by first aggregating data at a
group level, as opposed to sharing data directly with the central
aggregator. Group-level aggregation obfuscates the sensor data of individuals, in a
similar fashion as differential privacy and homomorphic encryption schemes,
thus inference of privacy-sensitive information from single sensors becomes
computationally harder compared to the baseline scenario. The proposed system
is evaluated using real-world data from two smart city pilot projects. Privacy
under grouping increases while the accuracy of the baseline scenario is
preserved. Intra-group influences of one group member's privacy choice on the
other members are measured, and fairness in privacy is found to be maximized
between group members with similar privacy choices. Several grouping strategies are
compared. Grouping by proximity of privacy choices provides the highest privacy
gains. The implications of the strategy on the design of incentives mechanisms
are discussed.
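The grouping idea reduces to a simple pipeline: the central aggregator receives only per-group sums, never individual readings, yet the city-wide total is unchanged. A minimal sketch under that reading of the abstract (function and variable names are illustrative):

```python
def group_aggregates(readings, groups):
    """Sketch of group-level aggregation (illustrative, not the paper's exact
    system). `readings[i]` is sensor i's value; `groups` is a partition of
    sensor indices. The central aggregator sees only the per-group sums, so
    individual readings are masked within each group, while the overall total
    -- and hence aggregate analytics -- is preserved."""
    return [sum(readings[i] for i in members) for members in groups]
```

Summing the group sums reproduces the total of the raw readings, which mirrors the evaluation result above: privacy under grouping increases while baseline accuracy is preserved.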