23,450 research outputs found

    On the anonymity risk of time-varying user profiles

    Websites and applications use personalisation services to profile their users, collect their patterns and activities, and eventually use this data to provide tailored suggestions. User preferences and social interactions are therefore aggregated and analysed. Every time a user publishes a new post or creates a link with another entity, either another user or some online resource, new information is added to the user profile. Exposing private data not only reveals information about individual users' preferences, increasing their privacy risk, but can also expose more about their network than single actors intended. This mechanism is self-evident in social networks, where users receive suggestions based on their friends' activities. We propose an information-theoretic approach to measure the differential update of the anonymity risk of time-varying user profiles. This expresses how privacy is affected when new content is posted and how much third-party services get to know about users when a new activity is shared. We use actual Facebook data to show how our model can be applied to a real-world scenario.
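    The abstract does not spell out the model, but the idea of a differential anonymity-risk update can be illustrated with a simple information-theoretic sketch: treat the user profile as a distribution over interest categories, measure how far it diverges from a population baseline, and see how one new post shifts that divergence. The category names and numbers below are invented for illustration; this is not the paper's actual model.

    ```python
    import math

    def kl_divergence(p, q):
        """KL divergence D(p || q) in bits between two distributions over the same categories."""
        return sum(pi * math.log2(pi / q[c]) for c, pi in p.items() if pi > 0)

    def profile(counts):
        """Normalise raw activity counts into a probability distribution."""
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()}

    # Hypothetical population distribution over interest categories.
    population = {"sports": 0.4, "politics": 0.3, "music": 0.3}

    user_counts = {"sports": 2, "politics": 1, "music": 1}
    risk_before = kl_divergence(profile(user_counts), population)

    # The user posts one more "politics" item; the differential update is
    # the change in divergence from the population baseline.
    user_counts["politics"] += 1
    risk_after = kl_divergence(profile(user_counts), population)
    delta = risk_after - risk_before
    ```

    In this toy example the new post moves the profile further from the population average, so `delta` is positive: the post made the user more distinguishable.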

    Potential mass surveillance and privacy violations in proximity-based social applications

    Proximity-based social applications let users interact with people who are currently close to them by revealing some information about their preferences and whereabouts. This information is acquired through passive geo-localisation and used to build a sense of serendipitous discovery of people, places, and interests. Unfortunately, while this class of applications opens up new interaction possibilities for people in urban settings, obtaining access to certain identity information could allow a privacy attacker to identify a user and follow their movements over a specific period of time. The same information shared through the platform could also help an attacker link the victim's online profiles to their physical identity. We analyse a set of popular dating applications that share users' relative distances within a certain radius and show how, using the information shared on these platforms, it is possible to formalise a multilateration attack able to identify a user's actual position. The same attack can also be used to follow a user in all their movements within a certain period of time, thereby identifying their habits and points of interest across the city. Furthermore, we introduce a social attack which uses common Facebook likes to profile a person and ultimately identify their real identity.
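    The multilateration attack can be sketched in its simplest two-dimensional form: an attacker who obtains the victim's reported distance from three distinct observation points can solve for the victim's position at the intersection of the three circles. This is a hedged illustration of the geometric core only; a real attack on a dating app would also have to handle rounded or noisy distance readings.

    ```python
    def trilaterate(p1, d1, p2, d2, p3, d3):
        """Locate a point in the plane from three observer positions and
        their exact distances to the target.

        Subtracting the circle equation at p1 from those at p2 and p3
        cancels the quadratic terms and yields a 2x2 linear system
        A [x, y]^T = b, solved here by Cramer's rule.
        """
        x1, y1 = p1
        x2, y2 = p2
        x3, y3 = p3
        a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
        a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
        b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
        b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            raise ValueError("observer positions are collinear")
        x = (b1 * a22 - b2 * a12) / det
        y = (a11 * b2 - a21 * b1) / det
        return x, y
    ```

    For example, observers at (0, 0), (6, 0) and (0, 8), each reading a distance of 5, pinpoint the target at (3, 4). In the attack scenario, the "observers" are three spoofed positions the attacker reports to the platform while querying the victim's relative distance.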

    Privacy Implications of Health Information Seeking on the Web

    This article investigates privacy risks to those visiting health-related web pages. The population of pages analyzed is derived from the 50 top search results for 1,986 common diseases. This yielded a total population of 80,124 unique pages, which were analyzed for the presence of third-party HTTP requests. 91% of pages were found to make requests to third parties. Investigation of URIs revealed that 70% of HTTP Referer strings contained information exposing specific conditions, treatments, and diseases. This presents a risk to users in the form of personal identification and blind discrimination. An examination of extant government and corporate policies reveals that users are insufficiently protected from such risks.
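    A minimal sketch of the kind of Referer analysis the study describes might look as follows. The condition vocabulary and URL here are invented for illustration (the study matched against 1,986 disease names), and the matching is a naive substring check rather than the study's actual method.

    ```python
    from urllib.parse import urlparse

    # Hypothetical condition vocabulary; the study used 1,986 disease names.
    CONDITIONS = {"diabetes", "hiv", "depression"}

    def referer_leaks_condition(referer_url):
        """Return the set of condition terms exposed in a Referer URL's
        path or query string (simple substring matching for illustration)."""
        parts = urlparse(referer_url.lower())
        haystack = parts.path + "?" + parts.query
        return {c for c in CONDITIONS if c in haystack}

    leaked = referer_leaks_condition(
        "https://example.org/conditions/diabetes/treatment?q=insulin")
    ```

    Any third party receiving that Referer header learns the specific condition the visitor was reading about, which is the leakage the article quantifies.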

    A Human-centric Perspective on Digital Consenting: The Case of GAFAM

    According to different legal frameworks such as the European General Data Protection Regulation (GDPR), an end-user's consent constitutes one of the well-known legal bases for personal data processing. However, research has indicated that the majority of end-users have difficulty understanding what they are consenting to in the digital world. Moreover, it has been demonstrated that marginalized people are confronted with even more difficulties when dealing with their own digital privacy. In this research, we use an enactivist perspective from cognitive science to develop a basic human-centric framework for digital consenting. We argue that the action of consenting is a sociocognitive action that includes cognitive, collective, and contextual aspects. Based on the developed theoretical framework, we present our qualitative evaluation of the consent-obtaining mechanisms implemented and used by the five big tech companies, i.e., Google, Amazon, Facebook, Apple, and Microsoft (GAFAM). The evaluation shows that these companies fall short of empowering end-users, as their mechanisms do not consider the human-centric aspects of the action of consenting. We use this approach to argue that their consent-obtaining mechanisms violate principles of fairness, accountability, and transparency. We then suggest that our approach may raise doubts about the lawfulness of the obtained consent—particularly considering the basic requirements of lawful consent within the legal framework of the GDPR.

    Internet Giants as Quasi-Governmental Actors and the Limits of Contractual Consent

    Although the government’s data-mining program relied heavily on information and technology that the government received from private companies, relatively little of the public outrage generated by Edward Snowden’s revelations was directed at those private companies. We argue that the mystique of the Internet giants and the myth of contractual consent combine to mute criticisms that otherwise might be directed at the real data-mining masterminds. As a result, consumers are deemed to have consented to the use of their private information in ways that they would not agree to had they known the purposes to which their information would be put and the entities – including the federal government – with whom their information would be shared. We also call into question the distinction between governmental actors and private actors in this realm, as the Internet giants increasingly exploit contractual mechanisms to operate with quasi-governmental powers in their relations with consumers. As regulators and policymakers focus on how to better protect consumer data, we propose that solutions relying on consumer permission adopt a more exacting and limited concept of the consent required before private entities may collect or make use of consumers’ information where such uses touch upon privacy interests.

    On-line privacy behavior: using user interfaces to surface salient factors

    The problem of privacy in social networks is well documented in the literature: users have privacy concerns, yet they consistently disclose their sensitive information and leave it open to unintended third parties. While numerous causes of poor behaviour have been suggested by research, the role of the User Interface (UI) and the system itself is underexplored. The field of Persuasive Technology would suggest that social network systems persuade users to deviate from their normal or habitual behaviour. This paper makes the case that the UI can be used as the basis for user empowerment by informing users of their privacy at the point of interaction and reminding them of their privacy needs. The Theory of Planned Behaviour is introduced as a potential theoretical foundation for exploring the psychology behind privacy behaviour, as it describes the salient factors that influence intention and action. Based on these factors of personal attitude, subjective norms, and perceived control, a series of UIs are presented and implemented in controlled experiments examining their effect on personal information disclosure. This is combined with observations and interviews with the participants. Results from this initial pilot experiment suggest that groups shown embedded privacy-salient information exhibit less disclosure than the control group. This work reviews this approach as a method for exploring privacy behaviour and proposes the further work required.

    Data Scraping as a Cause of Action: Limiting Use of the CFAA and Trespass in Online Copying Cases

    In recent years, online platforms have used claims under the Computer Fraud and Abuse Act (“CFAA”) and trespass to curb data scraping, the copying of web content accomplished using robots or web crawlers. However, as the term “data scraping” implies, the content typically copied is data or information that is not protected by intellectual property law, and the means by which the copying occurs is not considered to be hacking. Trespass and the CFAA are both concerned with authorization, but in data scraping cases these claims are used in a way that implies that real property norms exist on the Internet, a misleading and harmful analogy. To correct this imbalance, the CFAA must be interpreted in its native context, that of computers, computer networks, and the Internet, and given contextual meaning. Alternatively, the CFAA should be amended. Because data scraping is fundamentally copying, copyright offers the correct means for litigating data scraping cases. This Note additionally offers proposals for creating enforceable terms of service online and for strengthening copyright to make it applicable to user-based online platforms.