The Legal Fate of Internet Ad-Blocking
Ad-blocking services allow individual users to avoid the obtrusive advertising that both clutters and finances most Internet publishing. Ad-blocking's immense and growing popularity suggests the depth of Internet users' frustration with Internet advertising. But its potential to disrupt publishers' traditional Internet revenue model makes ad-blocking one of the most significant recent Internet phenomena. Unsurprisingly, publishers are not inclined to accept ad-blocking without a legal fight. While publishers are threatening suits in the United States, the issues presented by ad-blocking have been extensively litigated in German courts, where ad-blocking has consistently triumphed over claims that it represents a form of unfair competition. In this article, I survey the recent German ad-blocking cases and consider the claims publishers are likely to raise against ad-blocking in the imminent American litigation. I conclude that, when the American ad-blocking cases come, they are bound to meet the fate of their German counterparts. I argue that the relevant German and American legal frameworks reinforce a similar set of values, including respect for individual autonomy, recognition of the broad social benefits ad-blocking can generate, and an insistence that publishers accept ad-blocking as part of the free market in which they must evolve and innovate in order to compete.
FNDaaS: Content-agnostic Detection of Fake News sites
Automatic fake news detection is a challenging problem in the study of misinformation spreading, with substantial real-world political and social impact. Past studies have proposed machine learning-based methods for detecting such fake news, focusing on properties of the published news articles, such as linguistic characteristics of the actual content, which are limited by language barriers. Departing from such efforts, we propose FNDaaS, the first automatic, content-agnostic fake news detection method, which considers new and previously unstudied features such as the network and structural characteristics of each news website. The method can be deployed as a service, either at the ISP side for easier scalability and maintenance, or at the user side for better end-user privacy. We demonstrate the efficacy of our method using data crawled from existing lists of 637 fake and 1183 real news websites, and by building and testing a proof-of-concept system that materializes our proposal. Our analysis of data collected from these websites shows that the vast majority of fake news domains are very young and appear to retain a given IP address for shorter periods than real news domains. Through experiments with machine learning classifiers, we demonstrate that FNDaaS can achieve an AUC score of up to 0.967 on past sites, and 77-92% accuracy on newly flagged ones.
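The abstract's core signal, young domains with short-lived IP associations, can be illustrated with a toy scoring rule. This is a hypothetical sketch, not the FNDaaS classifier: the feature names and thresholds below are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class SiteFeatures:
    """Content-agnostic features of a news domain (illustrative subset)."""
    domain_age_days: int   # days since the domain was registered (assumed feature)
    ip_tenure_days: int    # days the current IP has served the domain (assumed feature)

def fake_news_score(f: SiteFeatures) -> float:
    """Toy scoring rule: young domains with short-lived IPs look suspicious.

    Not the FNDaaS classifier; it only illustrates how network and
    structural features could drive a content-agnostic decision.
    """
    score = 0.0
    if f.domain_age_days < 365:   # threshold is an arbitrary assumption
        score += 0.5
    if f.ip_tenure_days < 90:     # threshold is an arbitrary assumption
        score += 0.5
    return score

# A young domain whose IP changed recently scores as suspicious.
print(fake_news_score(SiteFeatures(domain_age_days=40, ip_tenure_days=10)))     # 1.0
print(fake_news_score(SiteFeatures(domain_age_days=4000, ip_tenure_days=900)))  # 0.0
```

In a real system these features would feed a trained classifier rather than fixed thresholds; the sketch only shows that the decision needs no article text.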
The digital-only media consumer: Key findings from a conversation with all-digital Millennials
This study offers insight on the digital-onlys, a sub-population of Millennials who only consume media through digital platforms. Based on informal group conversations with 16 to 34 year-olds, the study provides a snapshot of their daily media consumption and preliminary answers to why they regard digital content as the norm.
These findings reveal that some consumers today are not simply abandoning traditional platforms and turning towards digital content; they seem to know no other way to consume media than on digital platforms. For them, the biggest consumption change would actually be to watch cable television, listen to FM radio or read a printed newspaper or magazine. Digital-onlys may represent a new kind of consumer who views their media habits as completely normal and organic. Indeed, some are not even aware they belong to this digital group.
The participants shared common characteristics: an ability to adapt devices to their needs, an intrinsically digital lifestyle and a habit of bypassing traditional media to access a larger selection of content, even as they struggle with an overabundance of choice. Our conversations also revealed that digital-onlys are fully aware of the negative impact their media consumption habits can have on content creators, yet they cherish freedom above all else.
TOWARDS RELIABLE CIRCUMVENTION OF INTERNET CENSORSHIP
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world critically depend on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, a practice broadly known as Internet \emph{censorship}. As most countries improve their Internet infrastructure, they become able to implement more advanced censoring techniques; advances in applying machine learning to network traffic analysis have likewise enabled more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of Internet censorship and introduce new defenses and attacks in the Internet censorship literature.
Internet censorship techniques inspect users' communications and can interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from Internet communications. One of the major challenges for traffic analysis mechanisms is scaling to today's exploding volumes of network traffic: they impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call \emph{compressive traffic analysis}. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracy than before by leveraging emerging learning mechanisms. In particular, we design a system, called \deepcorr, that outperforms the state of the art by significant margins in correlating network connections. \deepcorr leverages an advanced deep learning architecture to \emph{learn} a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep neural network-based traffic analysis techniques by applying statistically undetectable \emph{adversarial perturbations} to the patterns of live network traffic.
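The flow correlation task that \deepcorr learns can be contrasted with the classic statistical baseline it outperforms: correlate the packet-size (or timing) sequences of two flows and threshold the statistic. A minimal sketch of that baseline follows; the packet sizes and the 0.9 threshold are synthetic assumptions, not values from the work.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def flows_correlated(entry_sizes, exit_sizes, threshold=0.9):
    """Classic statistical flow correlation: flag two flows as the same
    connection if their packet-size sequences correlate strongly.
    DeepCorr replaces this hand-picked statistic with a learned function."""
    return pearson(entry_sizes, exit_sizes) >= threshold

entry = [1500, 600, 1500, 40, 1500, 900]     # sizes seen at the entry point
exit_ = [1480, 590, 1470, 60, 1460, 880]     # same flow, slightly perturbed
print(flows_correlated(entry, exit_))        # True
```

The point of the learned approach is that real network flows are noisier than this toy example: packets merge, split, and shift in time, which is exactly where a fixed statistic degrades and a learned correlation function keeps working.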
We also design techniques to circumvent Internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than \emph{all} previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; we therefore call it \emph{downstream-only} decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools. This allows us to analyze the effect of different parameters and censoring behaviors on the performance of censorship circumvention tools, and we apply our methods to two fundamental problems in Internet censorship.
Finally, to bring our ideas to practice, we design a new censorship circumvention tool called \name. \name aims to increase the collateral damage of censorship by employing a ``mass'' of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.
Real-Time Client-Side Phishing Prevention
In recent decades, researchers and companies have worked to deploy effective solutions to steer users away from phishing websites. These solutions are typically based on servers or blacklisting systems. Such approaches have several drawbacks: they compromise user privacy, rely on off-line analysis, are not robust against adaptive attacks, and do not provide much guidance to users in their warnings. To address these limitations, we developed fast, real-time, client-side phishing prevention software that implements a phishing detection technique recently developed by Marchal et al. It extracts information from the visited webpage and detects whether it is a phish, in order to warn the user. It is also able to detect the website that the phish is trying to mimic and propose a redirection to the legitimate domain. Furthermore, to assess the validity of our solution, we performed two user studies evaluating the usability of the interface and the program's impact on user experience.
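One family of signals behind such client-side detection is whether the brand a page presents matches the domain actually serving it. The sketch below is an illustrative simplification, not the Marchal et al. technique: the crude two-label approximation of the registrable domain and the hypothetical `page_terms` input are assumptions.

```python
from urllib.parse import urlparse

def suspicious_mismatch(url: str, page_terms: list[str]) -> bool:
    """Flag a page whose dominant terms name a brand that does not appear
    in the registrable part of the serving domain.

    Crude approximation: take the last two DNS labels as the registrable
    domain (a real implementation would consult the Public Suffix List).
    """
    host = urlparse(url).netloc.lower()
    registrable = ".".join(host.split(".")[-2:])
    return not any(term.lower() in registrable for term in page_terms)

# Brand in a subdomain but not the registrable domain: classic phishing trick.
print(suspicious_mismatch("https://paypal.com.example.xyz/login", ["PayPal"]))  # True
# Brand matches the registrable domain: looks legitimate by this one signal.
print(suspicious_mismatch("https://www.paypal.com/signin", ["PayPal"]))         # False
```

A deployed detector combines many such features and also identifies the mimicked site, which is what enables the redirection to the legitimate domain described above.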
Enhancing System Transparency, Trust, and Privacy with Internet Measurement
While on the Internet, users participate in many systems designed to protect the security of their information. This protection can depend on several technical properties, including transparency, trust, and privacy. Preserving these properties is challenging due to the scale and distributed nature of the Internet; no single actor has control over them. Instead, systems must be designed to provide these properties even in the face of attackers. Internet measurement, however, can be used to better defend transparency, trust, and privacy: it allows observation of many behaviors of distributed, Internet-connected systems, and these observations can be used to better defend the systems they measure.
In this dissertation, I explore four contexts in which Internet measurement can aid end-users in Internet-centric, adversarial settings. First, I improve transparency into Internet censorship practices by developing new Internet measurement techniques. Then, I use Internet measurement to enable the deployment of end-to-middle censorship circumvention techniques to a half-million users. Next, I evaluate transparency and improve trust in the Web public-key infrastructure by combining Internet measurement techniques and using them to augment its core components. Finally, I evaluate browser extensions that provide privacy to users on the web, offering insight for designers and simple recommendations for end-users.
By focusing on end-user concerns in widely deployed systems critical to end-user security and privacy, Internet measurement enables improvements to transparency, trust, and privacy. (PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/163199/1/benvds_1.pd)
Data protection in the age of Big Data: legal challenges and responses in the context of online behavioural advertising
This thesis addresses the question of how data protection law should respond
to the challenges arising from the ever-increasing prevalence of big data. The
investigation is conducted with the case study of online behavioural
advertising (OBA) and within the EU data protection legal framework,
especially the General Data Protection Regulation (GDPR). It is argued that
data protection law should respond to the big data challenges by leveraging
the regulatory options that are either already in place in the current legal
regime or potentially available to policymakers.
Given the highly complex, powerful and opaque OBA network, in both technical and economic terms, the use of big data may pose fundamental threats to certain individualistic, collective or societal values. Despite a limited set of economic benefits, such as free access to online services and the growth of the digital market, the latent risks of OBA call for an effective regulatory regime for big data.
While the EU's GDPR represents the latest and most comprehensive legal framework regulating the use of personal data, it still falls short in certain important respects. The regulatory model characterised by individualised consent and the necessity test remains insufficient to fully protect data subjects as autonomous persons, consumers and citizens in the context of OBA.
There is thus a pressing need for policymakers to review their regulatory toolbox in the light of these potential threats. On the one hand, it is necessary to reconsider the possibility of blacklisting or whitelisting certain data uses, with mechanisms that are either already in place in the legal framework or can be introduced additionally. On the other hand, it is also necessary to realise the full range of policy options that can be adopted to assist individuals in making informed decisions in the age of big data.
Architectural support for message queue task parallelism
The scaling of threads is an attractive way to exploit task-level parallelism and boost performance. From a software programming perspective, many applications (e.g., network packet processing, SQL queries) can be composed of a set of small tasks. These tasks are arranged in a data flow graph, each task is undertaken by some threads, and message queues are often used to coordinate the tasks among the threads. On the hardware side, thread scaling aligns with the trend that modern Chip Multiprocessors (CMPs) contain more Processing Elements (PEs) than ever before: a single PE cannot simply run faster due to power and thermal limitations, so architects instead spend transistors on an increasing number of PEs to improve the overall computing power of a processor. Unfortunately, this paradigm of using message queues to drive parallel tasks sometimes yields diminishing performance returns due to issues in the architecture and system design. In particular, conventional coherent shared-memory architectures make task-parallel workloads suffer unnecessary synchronization overhead and load-to-use latency. For instance, when passing messages through queues, multiple threads can contend for exclusivity of the cacheline holding the shared queue data structure. The more threads, the more severe the contention, because every transition upgrading a cacheline from shared to exclusive state must invalidate more copies in the private caches of other cores and wait for acknowledgements from more cores. Such overhead hurts the scalability of threads synchronizing via message queues. Adding to the coherence overhead, the load-to-use latency (from a consumer requesting data until the data reaches the consumer for use) is often on the critical path, slowing down the computation.
This is because the cache hierarchy in modern processors creates layers of local storage that buffer data separately for different cores; serving message queue data in an on-demand manner therefore incurs longer load-to-use latency. It is also challenging to schedule message-driven tasks to use cores efficiently when arrival rate and service rate mismatch: a runtime system that leaves tasks blocked on full or empty message queues wastes CPU cycles, while switching tasks incurs additional scheduling overhead. Diverse system topologies further complicate the problem, as scheduling also needs to take data locality into consideration. This dissertation explores architectural support for enhancing the scalability of message queue task parallelism, reducing the load-to-use latency, and avoiding blocking. Specifically, this dissertation designs and evaluates a message queue architecture that lowers the overhead of synchronization on shared queue state, a speculation technique to hide the load-to-use latency, and a locality-aware message queue runtime system with low scheduling and buffer-resizing overhead. The first contribution of the dissertation is the Virtual-Link scalable message queue architecture (VL). Instead of having threads atomically access the shared queue state variables (i.e., head, tail, or lock), VL provides configurable hardware support for both data transfer and synchronization. Unlike other hardware queue architectures with a dedicated network, VL reuses the existing cache coherence network and delivers a virtualized channel as if there were a direct link (or route) between two arbitrary PEs.
VL facilitates efficient synchronized data movement between M:N producers and consumers with several benefits: (i) the number of sharers of synchronization primitives is reduced to zero, eliminating a primary bottleneck of traditional lock-free queues; (ii) memory spills, snoops, and invalidations are reduced; and (iii) data stays on the fast path (inside the interconnect) the majority of the time. Another contribution of the dissertation is the SPAMeR speculation mechanism. SPAMeR can speculatively push messages in anticipation of consumer message requests, so that the latency of moving data from the source to the consumer that needs it can be partially or fully overlapped with message processing time. Unlike prefetch approaches, which predict what addresses to fetch next, with a queue we know exactly what data is needed next but not when it is needed; SPAMeR proposes algorithms that learn from queue operation history to predict this. Finally, the dissertation contributes the ARMQ locality-aware runtime. ARMQ collects a set of approaches that avoid message queue blocking, ranging from the most general, yielding, to dynamically resizing the buffer, to spawning helper tasks. On one hand, ARMQ minimizes overheads (e.g., wasteful polling, context switches, memory allocation and copying) with techniques such as userspace threading and a chunk-based ring buffer. On the other hand, ARMQ schedules the message-driven tasks precisely and opportunely, to maximize the data locality preserved (in favor of the cache) and balance resource allocation. (Electrical and Computer Engineering)
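The shared queue state the dissertation targets, the head and tail indices contended between producer and consumer, can be seen in a minimal software ring buffer. This sketch only shows the software protocol whose coherence traffic VL moves into hardware; it is not VL itself, and the class and method names are invented for illustration.

```python
class SPSCQueue:
    """Single-producer single-consumer ring buffer coordinated solely by
    `head` and `tail` indices. On a coherent-cache CPU, the cachelines
    holding these counters ping-pong between the producer's and the
    consumer's cores on every operation, which is the contention the
    abstract describes."""

    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # next slot to read  (written by the consumer)
        self.tail = 0  # next slot to write (written by the producer)

    def push(self, item) -> bool:
        if (self.tail + 1) % self.capacity == self.head:
            return False  # full: producer must retry, yield, or resize
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % self.capacity
        return True

    def pop(self):
        if self.head == self.tail:
            return None  # empty: consumer polls or blocks, wasting cycles
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        return item

q = SPSCQueue(4)  # one slot is sacrificed to distinguish full from empty
for msg in ("a", "b", "c"):
    q.push(msg)
print([q.pop() for _ in range(3)])  # ['a', 'b', 'c']
```

Even in this single-producer form, every `push`/`pop` pair touches state written by the other side; with M:N producers and consumers the indices also need atomic updates, which is the bottleneck VL's zero-sharer design eliminates.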