Through a router darkly: how new American copyright enforcement initiatives may hinder economic development, net neutrality and creativity
On November 1, 2012, Russia enacted a law putatively aimed at protecting Russian children from pedophiles. This law authorizes deep packet inspection (DPI), a method for monitoring, filtering, and shaping internet traffic, and has heightened concerns among leading privacy groups about how the government will use such an intrusive method in prosecuting child predators. Central to this concern is DPI's capability to allow the Russian government to peer into any citizen's unencrypted internet traffic and to monitor, copy, or even alter that traffic as it moves to its destination. The unresolved question is whether the government's use of DPI will be restrained and utilized primarily to thwart child predators, or whether it will be expanded to lay the groundwork for a new era of national censorship. Although the United States has not yet adopted similar tactics in regulating its citizens' internet use, Russia's implementation of the new DPI monitoring and filtering system will provide an educational opportunity for both privacy advocates and policymakers.
One Bad Apple Spoils the Bunch: Exploiting P2P Applications to Trace and Profile Tor Users
Tor is a popular low-latency anonymity network. However, Tor does not protect against the exploitation of an insecure application to reveal the IP address of, or to trace, a TCP stream. In addition, because Tor streams sent together over a single circuit are linkable, tracing one stream sent over a circuit traces them all. Surprisingly, it is unknown whether this linkability can in practice be used to trace a significant number of streams originating from secure (i.e., proxied) applications. In this paper, we show that linkability allows us to trace 193% of additional streams, including 27% of HTTP streams possibly originating from "secure" browsers. In particular, we traced 9% of the Tor streams carried by our instrumented exit nodes. Using BitTorrent as the insecure application, we design two attacks that trace BitTorrent users on Tor. We ran these attacks in the wild for 23 days and revealed 10,000 IP addresses of Tor users. Using these IP addresses, we then profile not only the BitTorrent downloads but also the websites visited, per country of origin of the Tor users. We show that BitTorrent users on Tor are over-represented in some countries compared to BitTorrent users outside of Tor. By analyzing the type of content downloaded, we explain the observed behaviors by the higher concentration of pornographic content downloaded at the scale of a country. Finally, we present results suggesting the existence of an underground BitTorrent ecosystem on Tor.
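The circuit-level linkability the abstract relies on can be sketched as follows: once one stream on a circuit leaks its source IP (here, via a hypothetical insecure BitTorrent client), every stream multiplexed on that circuit can be attributed to the same source. All records below are invented illustrations, not data from the paper.

```python
# Sketch of circuit-level stream linkability on Tor (illustrative only).
# Once one stream on a circuit is deanonymized, all streams sharing the
# circuit can be attributed to the same source IP.
from collections import defaultdict

# Hypothetical records: (circuit_id, stream_protocol, source_ip_if_leaked)
observed_streams = [
    ("c1", "bittorrent", "203.0.113.7"),  # IP leaked by the insecure app
    ("c1", "http", None),                 # "secure" browser stream, same circuit
    ("c1", "http", None),
    ("c2", "http", None),                 # no leak on this circuit: stays untraced
]

def trace_by_linkability(streams):
    # First pass: note which circuits have a leaked source IP.
    leaked = {}
    for circuit, _, ip in streams:
        if ip is not None:
            leaked[circuit] = ip
    # Second pass: attribute every stream on a leaked circuit to that IP.
    traced = defaultdict(list)
    for circuit, proto, _ in streams:
        if circuit in leaked:
            traced[leaked[circuit]].append(proto)
    return dict(traced)

print(trace_by_linkability(observed_streams))
# → {'203.0.113.7': ['bittorrent', 'http', 'http']}
```

Note how the two HTTP streams on circuit `c1` are traced despite leaking nothing themselves; this is the amplification the abstract quantifies.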
Is Content Publishing in BitTorrent Altruistic or Profit-Driven?
BitTorrent is the most popular P2P content-delivery application, in which individual users share various types of content with tens of thousands of other users. The growing popularity of BitTorrent is primarily due to the availability of valuable content at no cost to consumers. However, apart from the resources required, publishing (sharing) valuable (and often copyrighted) content has serious legal implications for the users who publish the material (publishers). This raises the question of whether content publishers, at least the major ones, behave altruistically or have other incentives, such as financial gain. In this study, we identify the content publishers of more than 55k torrents in two major BitTorrent portals and examine their behavior. We demonstrate that a small fraction of publishers is responsible for 66% of the published content and 75% of the downloads. Our investigation reveals that these major publishers fall into two different profiles. On one hand, antipiracy agencies and malicious publishers publish large numbers of fake files, to protect copyrighted content and to spread malware respectively. On the other hand, content publishing in BitTorrent is largely driven by companies with a financial incentive. Therefore, if these companies lose interest or become unable to publish content, BitTorrent portals may disappear, or at least their associated traffic will be significantly reduced.
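The concentration claim (a small fraction of publishers accounting for 66% of content) can be measured with a simple cumulative-share computation. The publisher names and torrent counts below are invented for illustration; only the method is sketched.

```python
# Sketch: smallest fraction of publishers covering a target share of torrents.
# Publisher names and counts are hypothetical, not the paper's dataset.
def publishers_for_share(torrents_per_publisher, target_share):
    """Return the smallest fraction of publishers (ranked by output)
    whose torrents together cover at least `target_share` of the total."""
    counts = sorted(torrents_per_publisher.values(), reverse=True)
    total = sum(counts)
    covered = 0
    for i, c in enumerate(counts, start=1):
        covered += c
        if covered / total >= target_share:
            return i / len(counts)
    return 1.0

sample = {"pubA": 500, "pubB": 300, "pubC": 100, "pubD": 60, "pubE": 40}
print(publishers_for_share(sample, 0.66))  # → 0.4 (top 2 of 5 publishers)
```

In this toy dataset, 40% of publishers cover two thirds of the content; the paper reports a far more skewed distribution over its 55k torrents.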
Understanding collaboration in volunteer computing systems
Volunteer computing is a paradigm in which devices participating in a distributed environment share part of their resources to help others perform their activities. The effectiveness of this computing paradigm depends on the collaboration attitude adopted by the participating devices. Unfortunately for software designers, it is not clear how to contribute local resources to the shared environment without compromising resources that could later be required by the contributors. Therefore, many designers adopt a conservative position when defining the collaboration strategy to be embedded in volunteer computing applications. This position produces an underutilization of the devices' local resources and reduces the effectiveness of these solutions. This article presents a study that helps designers understand the impact of adopting a particular collaboration attitude when contributing local resources to the distributed shared environment. The study considers five collaboration strategies, which are analyzed in computing environments with both abundance and scarcity of resources. The obtained results indicate that collaboration strategies based on effort-based incentives work better than those using contribution-based incentives. These results also show that the use of effort-based incentives does not jeopardize the availability of local resources for local needs.
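The contrast between the two incentive families can be sketched with minimal scoring functions. The exact definitions below are assumptions for illustration (the paper evaluates five concrete strategies, not these): contribution-based scoring rewards absolute resources donated, while effort-based scoring rewards donation relative to a device's capacity.

```python
# Sketch contrasting two incentive scores for volunteer-computing devices.
# Both definitions are illustrative assumptions, not the paper's strategies.
def contribution_score(donated):
    # Contribution-based: absolute resources donated; favors powerful devices.
    return donated

def effort_score(donated, capacity):
    # Effort-based: donation relative to capacity; a weak device donating
    # half of what it has outscores a strong one donating a small fraction.
    return donated / capacity

devices = {"laptop": (2.0, 4.0), "server": (8.0, 64.0)}  # (donated, capacity)
for name, (donated, capacity) in devices.items():
    print(name, contribution_score(donated), effort_score(donated, capacity))
```

Under the contribution score the server dominates (8.0 vs 2.0); under the effort score the laptop does (0.5 vs 0.125), which is the mechanism by which effort-based incentives avoid penalizing resource-scarce participants.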
Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments
Decentralized systems are a subset of distributed systems in which multiple authorities control different components and no authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems, as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.
Clustering and Sharing Incentives in BitTorrent Systems
Peer-to-peer protocols play an increasingly instrumental role in Internet
content distribution. Consequently, it is important to gain a full
understanding of how these protocols behave in practice and how their
parameters impact overall performance. We present the first experimental
investigation of the peer selection strategy of the popular BitTorrent protocol
in an instrumented private torrent. By observing the decisions of more than 40
nodes, we validate three BitTorrent properties that, though widely believed to
hold, have not been demonstrated experimentally. These include the clustering
of similar-bandwidth peers, the effectiveness of BitTorrent's sharing
incentives, and the peers' high average upload utilization. In addition, our
results show that BitTorrent's new choking algorithm in seed state provides
uniform service to all peers, and that an underprovisioned initial seed leads
to the absence of peer clustering and less effective sharing incentives. Based
on our observations, we provide guidelines for seed provisioning by content
providers, and discuss a tracker protocol extension that addresses an
identified limitation of the protocol.
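The two peer-selection regimes the abstract validates can be sketched side by side. This is a heavily simplified illustration, not the full BitTorrent choking algorithm: in leecher state, a peer unchokes those currently uploading to it fastest (the tit-for-tat sharing incentive that produces bandwidth clustering), while the newer seed-state algorithm rotates its unchoke slots so all peers receive uniform service.

```python
# Simplified sketch of BitTorrent unchoking; rates and peer names are invented.
def leecher_unchoke(upload_rates, slots=4):
    """Tit-for-tat: unchoke the peers currently uploading to us fastest."""
    return sorted(upload_rates, key=upload_rates.get, reverse=True)[:slots]

def seed_unchoke(peers, round_no, slots=4):
    """Seed state (simplified): rotate unchoke slots across rounds so that
    every peer gets uniform service regardless of its upload rate."""
    start = (round_no * slots) % len(peers)
    return [peers[(start + i) % len(peers)] for i in range(slots)]

rates = {"p1": 50, "p2": 10, "p3": 80, "p4": 5, "p5": 30}  # KB/s, hypothetical
print(leecher_unchoke(rates))        # → ['p3', 'p1', 'p5', 'p2']
print(seed_unchoke(list(rates), 0))  # → ['p1', 'p2', 'p3', 'p4']
print(seed_unchoke(list(rates), 1))  # → ['p5', 'p1', 'p2', 'p3']
```

The rate-based rule is what clusters similar-bandwidth leechers together, while the rotation rule is why the paper observes uniform service from seeds; omitted here are optimistic unchoking and the 10-second rechoke timing.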
Enabling Social Applications via Decentralized Social Data Management
An unprecedented information wealth produced by online social networks,
further augmented by location/collocation data, is currently fragmented across
different proprietary services. Combined, it can accurately represent the
social world and enable novel socially-aware applications. We present
Prometheus, a socially-aware peer-to-peer service that collects social
information from multiple sources into a multigraph managed in a decentralized
fashion on user-contributed nodes, and exposes it through an interface
implementing non-trivial social inferences while complying with user-defined
access policies. Simulations and experiments on PlanetLab with emulated
application workloads show the system exhibits good end-to-end response time,
low communication overhead, and resilience to malicious attacks.
Comment: 27 pages, single ACM column, 9 figures; accepted in the Special Issue on Foundations of Social Computing, ACM Transactions on Internet Technology.
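The core data model the abstract describes, a multigraph of social information from multiple sources, exposed only through user-defined access policies, can be sketched minimally. The class, labels, and policy model below are illustrative assumptions, not the Prometheus API.

```python
# Sketch of a labeled social multigraph with a user-defined access policy
# gating a simple social inference. All names and labels are hypothetical.
from collections import defaultdict

class SocialMultigraph:
    def __init__(self):
        # edges[user] -> list of (neighbor, label, weight); parallel edges
        # with different labels model data aggregated from multiple sources.
        self.edges = defaultdict(list)
        self.policies = {}  # user -> set of principals allowed to query

    def add_edge(self, src, dst, label, weight):
        self.edges[src].append((dst, label, weight))

    def set_policy(self, user, allowed):
        self.policies[user] = set(allowed)

    def neighbors(self, user, label, requester):
        # The inference is answered only if the owner's policy permits it.
        if requester not in self.policies.get(user, set()):
            raise PermissionError("access denied by user policy")
        return [dst for dst, lbl, _ in self.edges[user] if lbl == label]

g = SocialMultigraph()
g.add_edge("alice", "bob", "friend", 0.9)
g.add_edge("alice", "bob", "co-located", 0.4)  # multigraph: parallel edge
g.add_edge("alice", "carol", "friend", 0.7)
g.set_policy("alice", {"bob"})
print(g.neighbors("alice", "friend", "bob"))   # → ['bob', 'carol']
```

A request from a principal outside the policy (say, `"eve"`) raises `PermissionError`, which is the policy-compliance property the abstract emphasizes; the real system additionally distributes this graph across user-contributed peers.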