5,808 research outputs found
Online advertising: analysis of privacy threats and protection approaches
Online advertising, the pillar of “free” content on the Web, has revolutionized the marketing business in recent years by creating a myriad of new opportunities for advertisers to reach potential customers. The current advertising model builds upon an intricate infrastructure composed of a variety of intermediary entities and technologies whose main aim is to deliver personalized ads. For this purpose, a wealth of user data is collected, aggregated, processed and traded behind the scenes at an unprecedented rate. Despite the enormous value of online advertising, however, the intrusiveness and ubiquity of these practices prompt serious privacy concerns. This article surveys the online advertising infrastructure and its supporting technologies, and presents a thorough overview of the underlying privacy risks and the solutions that may mitigate them. We first analyze the threats and potential privacy attackers in the online advertising scenario. In particular, we examine the main components of the advertising infrastructure in terms of tracking capabilities, data collection, aggregation level and privacy risk, and review the tracking and data-sharing technologies employed by these components. Then, we conduct a comprehensive survey of the most relevant privacy mechanisms, and classify and compare them on the basis of their privacy guarantees and impact on the Web. (Peer-reviewed; postprint, author's final draft.)
The Exploitation of Web Navigation Data: Ethical Issues and Alternative Scenarios
Nowadays, users' browsing activity on the Internet is not completely private, because many entities collect and use such data for either legitimate or illegitimate ends. The implications are serious, ranging from a person who unconsciously exposes private information to an unknown third party, to a company that cannot control how its information reaches the outside world. As a result, users have lost control over their private data on the Internet. In this paper, we present the entities involved in the collection and use of users' data. Then, we highlight the ethical issues that arise for users, companies, scientists and governments. Finally, we present some alternative scenarios and suggestions for these entities to address such ethical issues.
Comment: 11 pages, 1 figure
Web Tracking: Mechanisms, Implications, and Defenses
This article surveys the existing literature on the methods currently used by web services to track users online, as well as their purposes, implications, and possible user defenses. A significant majority of the reviewed articles and web resources are from the years 2012–2014. Privacy seems to be the Achilles' heel of today's web. Web services make continuous efforts to obtain as much information as they can about the things we search for, the sites we visit, the people we communicate with, and the products we buy. Tracking is usually performed for commercial purposes. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, or other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they tend to employ particularly creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is used and its possible implications for users (price discrimination, assessing financial credibility, determining insurance coverage, government surveillance, and identity theft). For each of the tracking methods, we present possible defenses. Apart from describing the methods and tools used for keeping personal data from being tracked, we also present several tools that were used for research purposes: their main goal is to discover how and by which entity users are being tracked on their desktop computers or smartphones, provide this information to the users, and visualize it in an accessible and easy-to-follow way. Finally, we present currently proposed future approaches to user tracking and show that they can potentially pose significant threats to users' privacy.
Comment: 29 pages, 212 references
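The fingerprinting methods this survey covers work by combining many individually weak browser attributes into one identifier that is stable across visits without any client-side storage. A minimal sketch of the core idea (in Python, with made-up attribute values; real fingerprinting scripts gather far more signals, such as canvas rendering, installed fonts, and WebGL parameters, in client-side JavaScript):

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine browser attributes into a stable identifier.

    Illustrative only: serializes the attributes in a deterministic
    order so the same browser configuration always yields the same hash.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attribute set, as a tracker might collect it.
browser_a = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0",
    "screen": "1920x1080",
    "timezone": "Europe/Warsaw",
    "language": "pl-PL",
}

print(fingerprint(browser_a))  # identical on every visit from this setup
```

The same mechanism also explains the main defense: randomizing or coarsening any one attribute (as anti-fingerprinting browsers do) changes the hash and breaks linkability across visits.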
On the Privacy Practices of Just Plain Sites
In addition to visiting high-profile sites such as Facebook and Google, web users often visit more modest sites, such as those operated by bloggers or by local organizations such as schools. Such sites, which we call "Just Plain Sites" (JPSs), are likely to inadvertently pose greater privacy risks than high-profile sites by virtue of being unable to afford privacy expertise. To assess the prevalence of the privacy risks to which JPSs may inadvertently be exposing their visitors, we analyzed a number of easily observed privacy practices of such sites. We found that many JPSs collect a great deal of information from their visitors, share a great deal of information about their visitors with third parties, permit a great deal of tracking of their visitors, and use deprecated or unsafe security practices. Our goal in this work is not to scold JPS operators, but to raise awareness of these facts among both JPS operators and visitors, possibly encouraging the operators of such sites to take greater care in their implementations, and visitors to take greater care in how, when, and what they share.
Comment: 10 pages, 7 figures, 6 tables, 5 authors, and a partridge in a pear tree
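The "easily observed privacy practices" this study measures, such as how much third-party tracking a site permits, can be approximated by scanning a page's HTML for resources loaded from foreign hosts. A rough sketch of that kind of check, using only the Python standard library (the hostnames below are hypothetical; the paper's actual methodology is more thorough):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyScanner(HTMLParser):
    """Collect hostnames of externally loaded scripts, images, and iframes."""

    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.third_parties = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).hostname
        # Relative URLs have no hostname and are first-party by definition.
        if host and host != self.first_party:
            self.third_parties.add(host)

# Hypothetical page from a "Just Plain Site".
page = """
<html><body>
  <script src="https://cdn.tracker.example/t.js"></script>
  <img src="/logo.png">
  <iframe src="https://ads.example/frame"></iframe>
</body></html>
"""

scanner = ThirdPartyScanner("blog.example")
scanner.feed(page)
print(sorted(scanner.third_parties))  # ['ads.example', 'cdn.tracker.example']
```

Each discovered host is a party that receives at least the visitor's IP address and the visited URL via the request and its Referer header, which is the sense in which a JPS "permits tracking" without collecting anything itself.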
A Human-centric Perspective on Digital Consenting: The Case of GAFAM
According to legal frameworks such as the European General Data Protection Regulation (GDPR), an end-user's consent constitutes one of the well-known legal bases for personal data processing. However, research has indicated that the majority of end-users have difficulty understanding what they are consenting to in the digital world. Moreover, it has been demonstrated that marginalized people face even greater difficulties when dealing with their own digital privacy. In this research, we use an enactivist perspective from cognitive science to develop a basic human-centric framework for digital consenting. We argue that the action of consenting is a sociocognitive action that includes cognitive, collective, and contextual aspects. Based on this theoretical framework, we present a qualitative evaluation of the consent-obtaining mechanisms implemented and used by the five big tech companies, i.e. Google, Amazon, Facebook, Apple, and Microsoft (GAFAM). The evaluation shows that these companies have failed to empower end-users by considering the human-centric aspects of the action of consenting. We use this approach to argue that their consent-obtaining mechanisms violate the principles of fairness, accountability and transparency. We then suggest that our approach may raise doubts about the lawfulness of the obtained consent, particularly considering the basic requirements of lawful consent within the legal framework of the GDPR.
Stop the Abuse of Gmail!
Gmail, a highly anticipated webmail application made by Google, was criticized by privacy advocates for breaching wiretapping laws even before its release from beta testing. Gmail's large storage space and automated processes, developed to scan the content of incoming messages and create advertisements based on the scanned terms, have enraged privacy groups on an international level. This iBrief will compare Gmail's practices with those of its peers and conclude that its practices and procedures are consistent with the standards of the webmail industry. The iBrief will then propose additional measures Gmail could institute to further protect webmail users and alleviate the concerns of privacy advocates.
An Automated Approach to Auditing Disclosure of Third-Party Data Collection in Website Privacy Policies
A dominant regulatory model for web privacy is "notice and choice". In this
model, users are notified of data collection and provided with options to
control it. To examine the efficacy of this approach, this study presents the
first large-scale audit of disclosure of third-party data collection in website
privacy policies. Data flows on one million websites are analyzed and over
200,000 websites' privacy policies are audited to determine if users are
notified of the names of the companies which collect their data. Policies from
25 prominent third-party data collectors are also examined to provide deeper
insights into the totality of the policy environment. Policies are additionally
audited to determine if the choice expressed by the "Do Not Track" browser
setting is respected.
Third-party data collection is widespread, but fewer than 15% of attributed data flows are disclosed. The third parties most likely to be disclosed are those with consumer services users may be aware of; those without consumer services are less likely to be mentioned. Policies are difficult to understand, and the average time required to read both a given site's policy and the associated third-party policies exceeds 84 minutes. Only 7% of first-party site policies mention the Do Not Track signal, and the majority of such mentions specify that the signal is ignored. Among the third-party policies examined, none offer unqualified support for the Do Not Track signal. The findings indicate that current implementations of "notice and choice" fail to provide notice or respect choice.
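The disclosure audit described above reduces, at each site, to checking whether the policy text names the third parties actually observed collecting data there. A simplified sketch of that matching step (the collector aliases and policy text below are invented for illustration; the study's real pipeline attributes data flows from network measurements before this step):

```python
# Map each observed collector to name variants a policy might use.
# "ExampleAds" is a hypothetical collector for this sketch.
COLLECTORS = {
    "Google": ["google", "doubleclick"],
    "Facebook": ["facebook"],
    "ExampleAds": ["exampleads"],
}

def disclosed_collectors(policy_text: str, observed: set) -> set:
    """Return the subset of observed collectors the policy names explicitly."""
    text = policy_text.lower()
    return {
        name for name in observed
        if any(alias in text for alias in COLLECTORS.get(name, []))
    }

policy = "We share usage data with Google Analytics and selected partners."
observed = {"Google", "Facebook", "ExampleAds"}

print(disclosed_collectors(policy, observed))  # {'Google'}
```

Under this kind of check, a vague phrase like "selected partners" counts as no disclosure at all, which is exactly how policies can leave more than 85% of attributed data flows unnamed.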