Relieving the Wireless Infrastructure: When Opportunistic Networks Meet Guaranteed Delays
Major wireless operators are nowadays facing network capacity issues in
striving to meet the growing demands of mobile users. At the same time,
3G-enabled devices increasingly benefit from ad hoc radio connectivity (e.g.,
Wi-Fi). In this context of hybrid connectivity, we propose Push-and-Track, a
content dissemination framework that harnesses ad hoc communication
opportunities to minimize the load on the wireless infrastructure while
guaranteeing tight delivery delays. It achieves this through a control loop
that collects user-sent acknowledgements to determine if new copies need to be
reinjected into the network through the 3G interface. Push-and-Track includes
multiple strategies to determine how many copies of the content should be
injected, when, and to whom. The short delay-tolerance of common content, such
as news or road traffic updates, makes it suitable for such a system. Based on
a realistic large-scale vehicular dataset from the city of Bologna composed of
more than 10,000 vehicles, we demonstrate that Push-and-Track consistently
meets its delivery objectives while reducing the use of the 3G network by over
90%. Comment: Accepted at IEEE WoWMoM 2011 conference.
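The control loop described above can be sketched in a few lines. Everything below (node counts, the linear delivery objective, the epidemic contact model, and all names) is an illustrative assumption, not the paper's actual protocol:

```python
import random

def push_and_track(n_nodes, deadline, contact_prob=0.1, target=1.0, seed_copies=5):
    """Toy sketch of a Push-and-Track-style control loop (hypothetical model).

    The infrastructure seeds a few copies over 3G, then lets opportunistic
    (ad hoc) contacts spread the content. At each time step it compares the
    acknowledged fraction against a linear delivery objective and reinjects
    a copy over 3G only when dissemination falls behind schedule.
    """
    random.seed(0)
    has_copy = set(random.sample(range(n_nodes), seed_copies))
    injections_3g = seed_copies
    for t in range(1, deadline + 1):
        # Opportunistic spreading: each holder may meet one random node.
        for u in list(has_copy):
            v = random.randrange(n_nodes)
            if random.random() < contact_prob:
                has_copy.add(v)
        # Control loop: acknowledgements report the current coverage.
        acked = len(has_copy) / n_nodes
        objective = target * t / deadline  # linear objective (an assumption)
        if acked < objective:
            # Reinject one copy over 3G to a node still missing the content.
            missing = set(range(n_nodes)) - has_copy
            if missing:
                has_copy.add(random.choice(tuple(missing)))
                injections_3g += 1
    return len(has_copy) / n_nodes, injections_3g
```

Because the epidemic contacts do most of the spreading, the number of 3G injections stays far below the node count, which is the effect the paper quantifies.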
A Quantitative Flavour of Robust Reachability
Many software analysis techniques attempt to determine whether bugs are
reachable, but for security purpose this is only part of the story as it does
not indicate whether the bugs found could be easily triggered by an attacker.
The recently introduced notion of robust reachability aims at filling this gap
by distinguishing the inputs controlled by the attacker from those that are not.
Yet, this qualitative notion may be too strong in practice, leaving aside bugs
that are mostly but not fully replicable. Here we propose a
quantitative version of robust reachability, more flexible and still amenable
to automation. We propose quantitative robustness, a metric expressing how
easily an attacker can trigger a bug while taking into account that they can only
influence part of the program input, together with a dedicated quantitative
symbolic execution technique (QRSE). Interestingly, QRSE relies on a variant of
model counting (namely, functional E-MAJSAT) unseen so far in formal
verification, but which has been studied in AI domains such as Bayesian
networks, knowledge representation, and probabilistic planning. Yet, the existing
solving methods from these fields turn out to be unsatisfactory for formal
verification purposes, leading us to propose a novel parametric method. These
results have been implemented and evaluated over two security-relevant case
studies, allowing us to demonstrate the feasibility and relevance of our ideas.
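As a toy illustration of the quantitative notion (not of QRSE itself, which works symbolically), quantitative robustness over small bit-vector inputs can be brute-forced by maximising over the attacker-controlled bits and counting over the rest, mirroring the functional E-MAJSAT pattern. The bug predicate and input sizes below are made up:

```python
from itertools import product

def quantitative_robustness(bug, n_controlled, n_uncontrolled):
    """Brute-force sketch of quantitative robustness (illustrative only).

    Quantitative robustness is the best success probability an attacker can
    achieve: maximise over the controlled input a, count over the uncontrolled
    input x (the functional E-MAJSAT pattern: existential variables maximised,
    model counting over the remaining ones). Inputs are bit-vectors here.
    """
    best = 0.0
    total = 2 ** n_uncontrolled
    for a in product([0, 1], repeat=n_controlled):
        hits = sum(1 for x in product([0, 1], repeat=n_uncontrolled) if bug(a, x))
        best = max(best, hits / total)
    return best

# Hypothetical bug: triggered when the attacker's bit matches the first
# uncontrolled bit and a second uncontrolled bit happens to be set.
bug = lambda a, x: a[0] == x[0] and x[1] == 1
```

Here the best the attacker can do is a 1/4 success probability, since one uncontrolled bit must be guessed and another must independently be set.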
Algorithms to Approximate Column-Sparse Packing Problems
Column-sparse packing problems arise in several contexts in both
deterministic and stochastic discrete optimization. We present two unifying
ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain
improved approximation algorithms for some well-known families of such
problems. As three main examples, we attain the integrality gap, up to
lower-order terms, for known LP relaxations for k-column sparse packing integer
programs (Bansal et al., Theory of Computing, 2012) and stochastic k-set
packing (Bansal et al., Algorithmica, 2012), and go "half the remaining
distance" to optimal for a major integrality-gap conjecture of Furedi, Kahn and
Seymour on hypergraph matching (Combinatorica, 1993). Comment: Extended abstract appeared in SODA 2018. Full version in ACM
Transactions on Algorithms.
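To give a flavour of the attenuation idea, here is a textbook-style attenuate-and-alter rounding for a column-sparse packing program. It uses a uniform attenuation factor of 1/(2k), whereas the paper's contribution is a more refined non-uniform attenuation, so this is background rather than the paper's algorithm:

```python
import random

def attenuated_rounding(x_frac, A, b, k, seed=0):
    """Generic attenuate-and-alter rounding for a k-column-sparse packing
    program (a standard sketch, not the paper's non-uniform attenuation).

    Given a fractional solution x with Ax <= b (A nonnegative, each column
    with at most k nonzeros), keep each item independently with probability
    x_j / (2k), then alter: drop items until every constraint holds.
    """
    random.seed(seed)
    n = len(x_frac)
    chosen = [j for j in range(n) if random.random() < x_frac[j] / (2 * k)]

    def violated():
        # Return the index of some violated packing constraint, or None.
        for i, row in enumerate(A):
            if sum(row[j] for j in chosen) > b[i]:
                return i
        return None

    # Alteration step: repair violated constraints by dropping items.
    while (i := violated()) is not None:
        chosen.remove(next(j for j in chosen if A[i][j] > 0))
    return chosen
```

The attenuation keeps each constraint satisfied with constant probability, so the alteration step discards only a small fraction of the sampled value in expectation.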
Approximating the Held-Karp Bound for Metric TSP in Nearly Linear Time
We give a nearly linear time randomized approximation scheme for the
Held-Karp bound [Held and Karp, 1970] for metric TSP. Formally, given an
undirected edge-weighted graph G on m edges and ε > 0, the
algorithm outputs in nearly linear time, with high probability, a
(1+ε)-approximation to the Held-Karp bound on the metric TSP instance
induced by the shortest path metric on G. The algorithm can also be used to
output a corresponding solution to the Subtour Elimination LP. We substantially
improve upon the running time achieved previously
by Garg and Khandekar. The LP solution can be used to obtain a fast randomized
(3/2+ε)-approximation for metric TSP, which improves
upon the running time of previous implementations of Christofides' algorithm.
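For reference, the Subtour Elimination LP, whose optimal value is the Held-Karp bound, has the standard form below, where δ(S) denotes the set of edges with exactly one endpoint in S:

```latex
\begin{align*}
\min \quad & \sum_{e \in E} c_e x_e \\
\text{s.t.} \quad & x(\delta(v)) = 2 && \forall v \in V, \\
& x(\delta(S)) \ge 2 && \forall\, \emptyset \ne S \subsetneq V, \\
& 0 \le x_e \le 1 && \forall e \in E.
\end{align*}
```

The degree constraints force every vertex to be visited, while the exponentially many cut constraints rule out fractional solutions concentrated on subtours.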
Structured random measurements in signal processing
Compressed sensing and its extensions have recently triggered interest in
randomized signal acquisition. A key finding is that random measurements
provide sparse signal reconstruction guarantees for efficient and stable
algorithms with a minimal number of samples. While this was first shown for
(unstructured) Gaussian random measurement matrices, applications require
certain structure of the measurements leading to structured random measurement
matrices. Near optimal recovery guarantees for such structured measurements
have been developed over the past years in a variety of contexts. This article
surveys the theory in three scenarios: compressed sensing (sparse recovery),
low rank matrix recovery, and phaseless estimation. The random measurement
matrices to be considered include random partial Fourier matrices, partial
random circulant matrices (subsampled convolutions), matrix completion, and
phase estimation from magnitudes of Fourier type measurements. The article
concludes with a brief discussion of the mathematical techniques for the
analysis of such structured random measurements. Comment: 22 pages, 2 figures.
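A partial random circulant measurement, i.e. a subsampled convolution, can be sketched as follows. The Gaussian generator and the function name are illustrative choices; the theory also covers other generators such as Rademacher vectors:

```python
import numpy as np

def partial_circulant_measurement(x, m, seed=0):
    """Sketch of a partial random circulant measurement.

    The signal x is circularly convolved with a random generator vector and
    m of the n output coordinates are kept, all in O(n log n) time via the
    FFT. The Gaussian generator is an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    g = rng.standard_normal(n)                  # random generator of the circulant
    # Circular convolution via the convolution theorem.
    conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)).real
    idx = rng.choice(n, size=m, replace=False)  # random subsampling of rows
    return conv[idx] / np.sqrt(m)               # normalised measurements
```

Structurally, this is a random row-submatrix of a circulant matrix, which is why recovery guarantees for it require different tools than unstructured Gaussian matrices.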
PopArt: Ranked Testing Efficiency
Too often, programmers are under pressure to maximize their confidence in the correctness of their code with a tight testing budget. Should they spend some of that budget on finding "interesting" inputs or spend their entire testing budget on test executions? Work on testing efficiency has explored two competing approaches to answer this question: systematic partition testing (ST), which defines a testing partition and tests its parts, and random testing (RT), which directly samples inputs with replacement. A consensus as to which is better when has yet to emerge. We present Probability Ordered Partition Testing (POPART), a new systematic partition-based testing strategy that visits the parts of a testing partition in decreasing probability order and in doing so leverages any non-uniformity over that partition. We show how to construct a homogeneous testing partition, a requirement for systematic testing, by using an executable oracle and the path partition. A program's path partition is a naturally occurring testing partition that is usually skewed for the simple reason that some paths execute more frequently than others. To confirm this conventional wisdom, we instrument programs from the Codeflaws repository and find that 80% of them have a skewed path probability distribution. We then compare POPART with RT to characterise the configuration space in which each is more efficient. We show that, when simulating Codeflaws, POPART outperforms RT after 100,000 executions. Our results reaffirm RT's power for very small testing budgets but also show that for any application requiring high (above 90%) probability-weighted coverage, POPART should be preferred.
In such cases, despite paying more for each test execution, we prove that POPART outperforms RT: it traverses parts whose cumulative probability bounds that of random testing, showing that sampling without replacement pays for itself, given a non-uniform probability distribution over a testing partition.
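A minimal simulation of the comparison, assuming each test execution reveals which part of the partition it exercised; the part probabilities and budget below are made up:

```python
import random

def compare_popart_rt(part_probs, budget, seed=0):
    """Toy comparison of POPART-style ordered partition testing vs random
    testing (RT), returning the probability-weighted coverage of each.

    POPART spends one execution per part, visiting parts in decreasing
    probability order (sampling without replacement); RT samples parts with
    replacement according to their probability.
    """
    random.seed(seed)
    # POPART: cover the `budget` most probable parts, best-first.
    ranked = sorted(part_probs, reverse=True)
    popart_cov = sum(ranked[:budget])
    # RT: sample with replacement; repeated hits on a part add no coverage.
    rt_parts = random.choices(range(len(part_probs)),
                              weights=part_probs, k=budget)
    rt_cov = sum(part_probs[i] for i in set(rt_parts))
    return popart_cov, rt_cov

# Hypothetical skewed partition: a few hot paths carry most of the mass.
probs = [0.5, 0.25, 0.12, 0.06, 0.03] + [0.04 / 20] * 20
```

Since RT can cover at most `budget` distinct parts and POPART always picks the `budget` most probable ones, RT's probability-weighted coverage never exceeds POPART's in this model, matching the cumulative-probability bound stated above.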