Data-Efficient Quickest Outlying Sequence Detection in Sensor Networks
A sensor network is considered where at each sensor a sequence of random
variables is observed. At each time step, a processed version of the
observations is transmitted from the sensors to a common node called the fusion
center. At some unknown point in time the distribution of observations at an
unknown subset of the sensor nodes changes. The objective is to detect the
outlying sequences as quickly as possible, subject to constraints on the false
alarm rate, the cost of observations taken at each sensor, and the cost of
communication between the sensors and the fusion center. Minimax formulations
are proposed for the above problem, along with algorithms that are shown to be
asymptotically optimal for these formulations as the false alarm rate goes to
zero. It is also shown, via numerical studies, that the proposed
algorithms perform significantly better than those based on fractional
sampling, in which the classical algorithms from the literature are used and
the constraint on the cost of observations is met by using the outcome of a
sequence of biased coin tosses, independent of the observation process.
Comment: Submitted to IEEE Transactions on Signal Processing, Nov 2014. arXiv
admin note: text overlap with arXiv:1408.474
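The fractional-sampling baseline described above can be sketched as a standard change-detection recursion gated by a biased coin. This is a minimal illustration, not the paper's algorithm: the CUSUM-style statistic and the function names here are assumptions introduced for the example, and the paper's proposed schemes adapt sampling to the data rather than tossing a coin independently of it.

```python
import random

def fractional_sampling_cusum(observations, log_lr, p_sample, threshold):
    """CUSUM-style detector under fractional sampling: each observation is
    taken only when an independent biased coin (probability p_sample) says so,
    so the expected observation cost is a fraction p_sample of full sampling.
    log_lr is a hypothetical log-likelihood-ratio function for one sample."""
    stat = 0.0
    for t, x in enumerate(observations, start=1):
        if random.random() < p_sample:          # coin toss, independent of the data
            stat = max(0.0, stat + log_lr(x))   # standard CUSUM update on sampled data
        # when the coin says "skip", the statistic is simply held constant
        if stat > threshold:
            return t                            # declare a change at time t
    return None                                 # no change declared
```

With p_sample = 1 this reduces to ordinary CUSUM; lowering p_sample trades detection delay for observation cost, which is exactly the trade-off the paper's adaptive schemes are shown to handle better.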
FilteredWeb: A Framework for the Automated Search-Based Discovery of Blocked URLs
Various methods have been proposed for creating and maintaining lists of
potentially filtered URLs to allow for measurement of ongoing internet
censorship around the world. Whilst testing a known resource for evidence of
filtering can be relatively simple, given appropriate vantage points,
discovering previously unknown filtered web resources remains an open
challenge.
We present a new framework for automating the process of discovering filtered
resources through the use of adaptive queries to well-known search engines. Our
system applies information retrieval algorithms to isolate characteristic
linguistic patterns in known filtered web pages; these are then used as the
basis for web search queries. The results of these queries are then checked for
evidence of filtering, and newly discovered filtered resources are fed back
into the system to detect further filtered content.
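The feedback loop described above can be sketched as follows. This is a simplified illustration under stated assumptions: the term-weighting here is a crude frequency-versus-background ratio standing in for the paper's information-retrieval algorithms, and the `search` and `is_filtered` callables are hypothetical placeholders for a search-engine API and a vantage-point filtering test.

```python
import re
from collections import Counter

def characteristic_terms(pages, background_freq, top_k=5):
    """Score terms by their frequency in known-filtered pages relative to a
    background frequency table, and return the top-scoring terms as queries."""
    counts = Counter(w for page in pages for w in re.findall(r"\w+", page.lower()))
    score = {w: c / (1 + background_freq.get(w, 0)) for w, c in counts.items()}
    return sorted(score, key=score.get, reverse=True)[:top_k]

def discovery_loop(seed_pages, search, is_filtered, background_freq, rounds=3):
    """Iteratively: extract query terms from filtered pages, issue web searches,
    test the results for filtering, and feed confirmed pages back into the corpus."""
    corpus = list(seed_pages)
    discovered = set()
    for _ in range(rounds):
        for term in characteristic_terms(corpus, background_freq):
            for url, text in search(term):
                if url not in discovered and is_filtered(url):
                    discovered.add(url)
                    corpus.append(text)   # feedback: newly found page seeds later rounds
    return discovered
```

The key design point, as in the paper, is the feedback edge: each confirmed-filtered page enlarges the corpus from which the next round's queries are derived, letting the search adapt toward the censored topic space.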
Our implementation of this framework, applied to China as a case study, shows
that this approach is demonstrably effective at detecting significant numbers
of previously unknown filtered web pages, making a significant contribution to
the ongoing detection of internet filtering as it develops.
Our tool is currently deployed and has been used to discover 1355 domains
that are poisoned within China as of Feb 2017 - 30 times more than are
contained in the most widely-used public filter list. Of these, 759 are outside
of the Alexa Top 1000 domains list, demonstrating the capability of this
framework to find more obscure filtered content. Further, our initial analysis
of filtered URLs, and the search terms that were used to discover them, gives
further insight into the nature of the content currently being blocked in
China.
Comment: To appear in "Network Traffic Measurement and Analysis Conference
2017" (TMA2017)
Distributed Detection of Cycles
Distributed property testing in networks has been introduced by Brakerski and
Patt-Shamir (2011), with the objective of detecting the presence of large dense
sub-networks in a distributed manner. Recently, Censor-Hillel et al. (2016)
have shown how to detect 3-cycles in a constant number of rounds by a
distributed algorithm. In a follow up work, Fraigniaud et al. (2016) have shown
how to detect 4-cycles in a constant number of rounds as well. However, the
techniques in these latter works were shown not to generalize to larger cycles
C_k with k ≥ 5. In this paper, we completely settle the problem of cycle
detection, by establishing the following result. For every k ≥ 3, there
exists a distributed property testing algorithm for C_k-freeness, performing
in a constant number of rounds. All these results hold in the classical CONGEST
model for distributed network computing. Our algorithm is 1-sided error. Its
round-complexity is O(1/ε), where ε ∈ (0,1) is the property
testing parameter measuring the gap between legal and illegal instances.
A Churn for the Better: Localizing Censorship using Network-level Path Churn and Network Tomography
Recent years have seen the Internet become a key vehicle for citizens around
the globe to express political opinions and organize protests. This fact has
not gone unnoticed, with countries around the world repurposing network
management tools (e.g., URL filtering products) and protocols (e.g., BGP, DNS)
for censorship. However, repurposing these products can have unintended
international impact, which we refer to as "censorship leakage". While there
have been anecdotal reports of censorship leakage, there has yet to be a
systematic study of censorship leakage at a global scale. In this paper, we
combine a global censorship measurement platform (ICLab) with a general-purpose
technique -- boolean network tomography -- to identify which AS on a network
path is performing censorship. At a high-level, our approach exploits BGP churn
to narrow down the set of potential censoring ASes by over 95%. We exactly
identify 65 censoring ASes and find that the anomalies introduced by 24 of the
65 censoring ASes have an impact on users located in regions outside the
jurisdiction of the censoring AS, resulting in the leaking of regional
censorship policies.
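The boolean-tomography idea the paper leverages can be sketched in a few lines. This is a minimal illustration of the inference step only, with assumed inputs: each observation is a traversed AS path labeled censored or clean, and path churn matters because it supplies many distinct paths to the same destinations, which is what makes the candidate set shrink.

```python
def localize_censors(path_observations):
    """Boolean network tomography over AS paths. An AS that appears on an
    uncensored path cannot be the censor, so the candidate set is the
    intersection of all censored paths minus every AS seen on a clean path."""
    suspects = None
    cleared = set()
    for path, censored in path_observations:
        if censored:
            # every censored path must contain the censor: intersect
            suspects = set(path) if suspects is None else suspects & set(path)
        else:
            # a clean path exonerates every AS on it
            cleared.update(path)
    return (suspects or set()) - cleared
```

In practice a single destination rarely pins down one AS; it is the churn-induced diversity of paths, aggregated across many measurements, that narrows the candidates by the 95% figure reported above.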
On Modeling the Costs of Censorship
We argue that the evaluation of censorship evasion tools should depend upon
economic models of censorship. We illustrate our position with a simple model
of the costs of censorship. We show how this model makes suggestions for how to
evade censorship. In particular, from it, we develop evaluation criteria. We
examine how our criteria compare to the traditional methods of evaluation
employed in prior works.