    Trust and privacy management support for context-aware service platforms

    In a context-aware service platform, service providers adapt their services to the current situation of the service users, using context information retrieved from context information providers. In such a service provisioning platform, important trust and privacy issues arise, because different entities responsible for different tasks have to collaborate in the provisioning of the services. Context information is privacy-sensitive by nature, making the communication and processing of this information a potential privacy threat. The main goal of this thesis is to learn how to support users and providers of context-aware services in managing the trade-off between privacy protection and context-based service adaptation. More precise context information retrieved from trustworthy context information providers allows context-aware service providers to adapt their services more reliably. However, more precise context information also means a higher risk for the service users in case of a privacy violation
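
    A minimal sketch of the kind of trade-off the thesis studies, assuming location as the context type: the user (or a policy acting on their behalf) picks a precision level per provider, and coarser context both degrades service adaptation and lowers the privacy risk. The precision levels and function names below are illustrative assumptions, not the thesis's mechanism.

```python
# Illustrative sketch: trading context precision against privacy by
# generalizing a location before releasing it to a service provider.
# Levels and names are assumptions, not the thesis's design.

PRECISION_LEVELS = {
    "exact": 5,   # ~1 m   : best service adaptation, highest privacy risk
    "street": 3,  # ~100 m : coarser adaptation, lower risk
    "city": 1,    # ~10 km : weather/news-level adaptation only
}

def generalize_location(lat: float, lon: float, level: str) -> tuple[float, float]:
    """Round coordinates to the number of decimal places allowed by `level`."""
    digits = PRECISION_LEVELS[level]
    return round(lat, digits), round(lon, digits)

# The user picks a different level for each provider they trust differently:
print(generalize_location(52.23917, 6.85694, "exact"))  # (52.23917, 6.85694)
print(generalize_location(52.23917, 6.85694, "city"))   # (52.2, 6.9)
```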

    UNDERSTANDING EMERGENCE AND OUTCOMES OF INFORMATION PRIVACY CONCERNS: A CASE OF FACEBOOK

    Drawing on a content analysis of user responses to revisions in the Facebook Privacy Policy, this study develops a process model to explain the emergence and outcomes of users’ information privacy concerns in an online social networking context. The first phase of the model proposes three broad categories of informational practices associated with users’ information privacy concerns: collection and storage; processing and use; and dissemination of personal data. This phase also identifies the conditions under which the proposed practices are attributed as privacy issues. The second phase of the model describes the outcomes of perceived privacy issues by proposing users’ affective and behavioral responses. The findings provide evidence for (1) the important role of trigger conditions in the emergence of users’ information privacy concerns, (2) the gap between the privacy issues perceived by users and those identified by domain experts, and (3) the uniqueness of the online social networking context in posing distinct privacy challenges

    Privacy-preserving recommendations in context-aware mobile environments

    © Emerald Publishing Limited. Purpose - This paper aims to address privacy concerns that arise from the use of mobile recommender systems when processing contextual information relating to the user. Mobile recommender systems aim to solve the information overload problem by recommending products or services to users of Web services on mobile devices, such as smartphones or tablets, at any given point in time and in any possible location. They use recommendation methods, such as collaborative filtering or content-based filtering, and use a considerable amount of contextual information to provide relevant recommendations. However, because of privacy concerns, users are not willing to provide the required personal information that would allow their views to be recorded and make these systems usable. Design/methodology/approach - This work is focused on user privacy by providing a method for context privacy-preservation and privacy protection at the user interface level. Thus, a set of algorithms that are part of the method has been designed with privacy protection in mind, which is done by using realistic dummy parameter creation. To demonstrate the applicability of the method, a relevant context-aware data set has been used to run performance and usability tests. Findings - The proposed method has been experimentally evaluated using performance and usability evaluation tests, and it is shown that, with a small decrease in performance, user privacy can be protected. Originality/value - This is a novel research paper that proposes a method for protecting the privacy of mobile recommender systems users when context parameters are used
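
    The method's core ingredient, per the abstract, is realistic dummy parameter creation at the user interface level. The sketch below shows the general idea under stated assumptions: the client hides its real context value among k plausible dummies so the recommender cannot tell which query is genuine, then discards the dummy results locally. `fetch_recommendations` is a hypothetical stand-in for the actual service call, and the k-dummies scheme is a common pattern rather than the paper's exact algorithm.

```python
import random

def query_with_dummies(real_context, plausible_contexts, fetch_recommendations, k=3):
    """Hide the user's real context among k plausible dummy contexts.

    The recommender sees k+1 equally plausible queries and cannot tell
    which one is real; the client keeps only the real answer.
    """
    pool = [c for c in plausible_contexts if c != real_context]
    batch = random.sample(pool, k) + [real_context]
    random.shuffle(batch)                        # submission order leaks nothing
    results = {ctx: fetch_recommendations(ctx) for ctx in batch}
    return results[real_context]                 # dummy results are discarded

# Example: mask the user's true situation among believable alternatives
recs = query_with_dummies(
    real_context="gym",
    plausible_contexts=["gym", "office", "cafe", "park", "library"],
    fetch_recommendations=lambda ctx: f"recommendations for {ctx}",
)
```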

    Privacy, Ideology, and Technology: A Response to Jeffrey Rosen

    This essay reviews Jeffrey Rosen’s The Unwanted Gaze: The Destruction of Privacy in America (2000). Rosen offers a compelling (and often hair-raising) account of the pervasive dissolution of the boundary between public and private information. This dissolution is both legal and social; neither the law nor any other social institution seems to recognize many limits on the sorts of information that can be subjected to public scrutiny. The book also provides a rich, evocative characterization of the dignitary harms caused by privacy invasion. Rosen’s description of the sheer unfairness of being “judged out of context” rings instantly true. Privacy, Rosen concludes, is indispensable to human well-being and is at risk of being destroyed unless we act fast. The book is far less convincing, however, when it moves beyond description and attempts to identify the causes of the destruction of privacy and propose solutions. Why is privacy under siege today? The incidents that Rosen chooses as illustrations both reveal and obscure. From Monica Lewinsky’s unsent, deleted e-mails to the private online activities of corporate employees and the Dean of the Harvard Divinity School, the examples offer a rich stew of technology, corporate mind control, public scapegoating, and political intrigue. But for the most part, Rosen seems to think that it is sex that is primarily to blame for these developments—though how, exactly, Rosen cannot seem to decide. He suggests, variously, that we seek private information out of prurient fascination with other people’s intimate behavior, or to enforce upon others authoritarian notions of “correct” interpersonal behavior, or to inform moral judgments about others based on a hasty and ill-conceived equivalence between the personal and the political. Or perhaps Rosen is simply upset about the loss of privacy for a specific sort of (sexual or intimate) behavior, whatever the origin of society’s impulse to pry. Yet there are puzzling anomalies in Rosen’s account. Most notably, appended to Rosen’s excavation of recent sex-related privacy invasions is a chapter on privacy in cyberspace. This chapter sits uneasily in relation to the rest of the book. Its focus is not confined to sex-related privacy, and Rosen does not explain how the more varied information-gathering activities chronicled there bear on his earlier analysis. Rosen acknowledges as much and offers, instead, the explanation that intimate privacy and cyberspace privacy are simply two examples of the same problem: the risk of being judged out of context in a world of short attention spans, and the harms to dignity that follow. This explanation seems far too simple, and more than a bit circular. Why this rush to judge others out of context? Necessity is one answer—if attention spans are limited, we cannot avoid making decisions based on incomplete information—but where does the necessity to judge come from? And what do computers and digital networking technologies—factors that recur not only in the chapter on cyberspace privacy, but also in most of Rosen’s other examples—have to do with it? This Review Essay argues, first, that the use of personal information to sort and classify individuals is inextricably bound up with the fabric of our political economy. As Part II explains, the unfettered use of “true” information to predict risk and minimize uncertainty is a hallmark of the liberal state and its constituent economic and political markets. 
Not sex, but money, and more broadly an ideology about the predictive power of isolated facts, generate the perceived necessity to judge individuals based on incomplete profiles. The harms of this rush to judgment—harms not only to dignity, but also to economic welfare and more fundamentally to individual autonomy—may undermine liberal individualism (as Rosen argues), but they are products of it as well. Part III argues, further, that the problem of vanishing informational privacy in digital networked environments is not sui generis, but rather is central to understanding the destruction of privacy more generally. This is not simply because new technologies reduce the costs of collecting, exchanging, and processing the traditional sorts of consumer information. The profit-driven search for personal information via digital networks is also catalyzing an erosion of the privacy that individuals have customarily enjoyed in their homes, their private papers, and even their thoughts. This process is transforming not only the way we experience privacy, but also the way we understand it. Privacy is becoming not only harder to protect, but also harder to justify protecting. Part IV concludes that shifting these mutually reinforcing ideological and technological vectors will require more drastic intervention than Rosen suggests

    Privacy-Aware Processing of Biometric Templates by Means of Secure Two-Party Computation

    The use of biometric data for person identification and access control is gaining more and more popularity. Handling biometric data, however, requires particular care, since biometric data is indissolubly tied to the identity of the owner, hence raising important security and privacy issues. This chapter focuses on the latter, presenting an innovative approach that, by relying on tools borrowed from Secure Two-Party Computation (STPC) theory, makes it possible to process biometric data in encrypted form, thus eliminating any risk that private biometric information is leaked during an identification process. The basic concepts behind STPC are reviewed, together with the basic cryptographic primitives needed to achieve privacy-aware processing of biometric data in an STPC context. The two main approaches proposed so far, namely homomorphic encryption and garbled circuits, are discussed, and the way such techniques can be used to develop a full biometric matching protocol is described. Some general guidelines to be used in the design of a privacy-aware biometric system are given, so as to allow the reader to choose the most appropriate tools depending on the application at hand
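
    To make the homomorphic-encryption route concrete, the toy sketch below uses a Paillier-style additively homomorphic scheme so a server can compute the squared Euclidean distance between an encrypted probe and a stored template without ever seeing the probe in the clear. The tiny primes are wildly insecure and the protocol shape is an illustrative assumption, not the chapter's exact construction.

```python
import random
from math import gcd

# --- Toy Paillier cryptosystem (tiny, INSECURE primes; illustration only) ---
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)      # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: Enc(a)*Enc(b) -> a+b, Enc(a)^k -> k*a (mod n)

# --- Client: encrypt probe template x; the server never sees it ---
x = [3, 1, 4]                                    # probe feature vector
c_x = [enc(v) for v in x]
c_x2 = enc(sum(v * v for v in x))                # Enc(sum x_i^2)

# --- Server: holds template y, computes Enc(||x - y||^2) blindly ---
y = [2, 1, 5]
c_d = c_x2
for ci, yi in zip(c_x, y):
    c_d = c_d * pow(ci, (-2 * yi) % n, n2) % n2  # adds -2 * x_i * y_i
c_d = c_d * enc(sum(v * v for v in y)) % n2      # adds sum y_i^2

# --- Client: decrypt the distance and apply a match threshold ---
assert dec(c_d) == sum((a - b) ** 2 for a, b in zip(x, y))  # == 2
```

    In a deployed system the final threshold comparison would itself be done obliviously, for example with a garbled circuit, which is where the chapter's second technique comes in.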

    E-Commerce and Trans-Atlantic Privacy

    For almost a decade, the United States and Europe have anticipated a clash over the protection of personal information. Between the implementation in Europe of comprehensive legal protections pursuant to the directive on data protection and the continued reliance on industry self-regulation in the United States, trans-Atlantic privacy policies have been at odds with each other. The rapid growth in e-commerce is now sparking the long-anticipated trans-Atlantic privacy clash. This Article will first look at the context of American e-commerce and the disjuncture between citizens' privacy and business practices. The Article will then turn to the international context and explore the adverse impact, on the status quo in the United States, of European data protection law as harmonized by Directive 95/46/EC of the European Parliament and of the Council of 24 Oct. 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Following this analysis, the Article will show that the “safe harbor” agreement between the United States Department of Commerce and the European Commission--designed to alleviate the threat of disruption in trans-Atlantic data flows and, in particular, to mollify concerns for the stability of online data transfers--is only a weak, seriously flawed solution for e-commerce. In the end, extra-legal technical measures and contractual mechanisms might minimize privacy conflicts for e-commerce transactions, but an international treaty is likely the only sustainable solution for long-term growth in trans-border commercial interchange

    Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework

    Generative Artificial Intelligence (AI) tools based on Large Language Models (LLMs) use billions of parameters to extensively analyse large datasets and can extract critical private information such as context, specific details, and identifying information. This has raised serious threats to user privacy and reluctance to use such tools. This article proposes a conceptual model called PrivChatGPT, a privacy-preserving model for LLMs that consists of two main components: preserving user privacy (including private context) during data curation/pre-processing, and a private training process for large-scale data. To demonstrate its applicability, we show how a private mechanism could be integrated into the existing model for training LLMs to protect user privacy; specifically, we employ differential privacy and private training using Reinforcement Learning (RL). We measure the privacy loss and evaluate the measure of uncertainty or randomness once differential privacy is applied. The model further recursively evaluates the level of privacy guarantees and the measure of uncertainty of public databases and resources during each update, as new information is added for training purposes. To critically evaluate the use of differential privacy for private LLMs, we hypothetically compare it with other mechanisms, e.g., blockchain, private information retrieval (PIR), and randomisation, across various performance measures such as model performance and accuracy, computational complexity, and privacy vs. utility. We conclude that differential privacy, randomisation, and obfuscation can impact the utility and performance of trained models; conversely, the use of Tor, blockchain, and PIR may introduce additional computational complexity and high training latency. We believe that the proposed model could be used as a benchmark for proposing privacy-preserving LLMs for generative AI tools
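
    The private-training component rests on differential privacy. Below is a minimal sketch of the standard DP-SGD recipe (per-example gradient clipping plus calibrated Gaussian noise); it illustrates generic DP training rather than the PrivChatGPT pipeline itself, and every parameter name is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1):
    """One DP-SGD update: clip each example's gradient to bound its
    influence (sensitivity), average, then add Gaussian noise scaled
    to that bound. The noise_multiplier is what a privacy accountant
    turns into an (epsilon, delta) privacy-loss figure."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noisy_grad = mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * noisy_grad

# Toy usage: one private step on least-squares gradients for 4 examples
w = np.zeros(3)
X, t = rng.normal(size=(4, 3)), rng.normal(size=4)
grads = [2 * (xi @ w - ti) * xi for xi, ti in zip(X, t)]  # per-example grads
w = dp_sgd_step(w, grads)
```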

    “They’re All the Same!” Stereotypical Thinking and Systematic Errors in Users’ Privacy-Related Judgments About Online Services

    Given the ever-increasing volume of online services, it has become impractical for Internet users to study every company’s handling of information privacy separately and in detail. This challenges a central assumption held by most information privacy research to date—that users engage in deliberate information processing when forming their privacy-related beliefs about online services. In this research, we complement previous studies that emphasize the role of mental shortcuts when individuals assess how a service will handle their personal information. We investigate how a particular mental shortcut—users’ stereotypical thinking about providers’ handling of user information—can cause systematic judgment errors when individuals form their beliefs about an online service. In addition, we explore the effectiveness of counter-stereotypic privacy statements in preventing such judgment errors. Drawing on data collected at two points in time from a representative sample of smartphone users, we studied systematic errors caused by stereotypical thinking in the context of a mobile news app. We found evidence for stereotype-induced errors in users’ judgments regarding this provider, despite the presence of counter-stereotypic privacy statements. Our results further suggest that the tone of these statements makes a significant difference in mitigating the judgment errors caused by stereotypical thinking. Our findings contribute to emerging knowledge about the role of cognitive biases and systematic errors in the context of information privacy