Evolutionary Approaches to Minimizing Network Coding Resources
We wish to minimize the resources used for network coding while achieving the
desired throughput in a multicast scenario. Because the problem is NP-hard, we
employ evolutionary approaches, based on a genetic algorithm, that sidestep the
computational cost of an exact solution. Our experiments show substantial
improvements over the sub-optimal solutions of prior methods. Our new
algorithms improve over our
previously proposed algorithm in three ways. First, whereas the previous
algorithm can be applied only to acyclic networks, our new method works also
with networks with cycles. Second, we enrich the set of components used in the
genetic algorithm, which improves the performance. Third, we develop a novel
distributed framework. Combining distributed random network coding with our
distributed optimization yields a network coding protocol where the resources
used for coding are optimized in the setup phase by running our evolutionary
algorithm at each node of the network. We demonstrate the effectiveness of our
approach by carrying out simulations on a number of different sets of network
topologies.
Comment: 9 pages, 6 figures, accepted to the 26th Annual IEEE Conference on Computer Communications (INFOCOM 2007).
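The abstract does not spell out the genetic algorithm itself; as a minimal illustrative sketch, one can assume chromosomes are bitstrings marking which links perform coding, with a stand-in feasibility predicate in place of the real throughput check (the predicate and all parameters below are hypothetical):

```python
import random

def genetic_minimize(n_links, feasible, generations=200, pop_size=30, seed=0):
    """Minimize the number of active coding links (1-bits) subject to a
    feasibility predicate, via a simple elitist genetic algorithm."""
    rng = random.Random(seed)

    def fitness(chrom):
        # Infeasible chromosomes are penalized beyond any feasible score.
        return sum(chrom) if feasible(chrom) else n_links + sum(chrom) + 1

    pop = [[rng.randint(0, 1) for _ in range(n_links)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_links)
            child = a[:cut] + b[cut:]             # one-point crossover
            child[rng.randrange(n_links)] ^= 1    # point mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Toy feasibility stand-in: the multicast rate is met iff >= 3 links code.
best = genetic_minimize(12, lambda c: sum(c) >= 3)
```

In the paper's distributed variant, each node would run such a search during the setup phase; here everything is centralized for brevity.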
On the Limited Communication Analysis and Design for Decentralized Estimation
This paper pertains to the analysis and design of decentralized estimation
schemes that make use of limited communication. Briefly, these schemes equip
the sensors with scalar states that iteratively merge the measurements and the
state of other sensors to be used for state estimation. In contrast to commonly
used distributed estimation schemes, the only information exchanged consists of
scalars, there is a single common time-scale for communication and estimation,
and the retrieval of the state of the system and sensors is achieved in
finite-time. We extend previous work to a more general setup and provide
necessary and sufficient conditions required for the communication between the
sensors that enable the use of limited communication decentralized
estimation schemes. Additionally, we discuss the cases where the sensors are
memoryless, and where the sensors might not have the capacity to discern the
contributions of other sensors. Based on these conditions and the fact that
communication channels incur a cost, we cast the problem of finding the minimum
cost communication graph that enables limited communication decentralized
estimation schemes as an integer programming problem.
Comment: Updates on the paper in CDC 201
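The abstract's finite-time retrieval from scalar exchanges can be illustrated with a small linear-iteration sketch (the weight matrix and sensor count below are hypothetical, not taken from the paper): sensors iterate scalar states, one sensor reports its scalar to the fusion center each step, and the center inverts the resulting observability matrix to recover every initial measurement exactly after n steps.

```python
import numpy as np

# 3 sensors, each holding one scalar measurement; they exchange only scalars
# according to a (hypothetical) weight matrix W over the communication graph.
W = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
z0 = np.array([1.0, -2.0, 4.0])   # initial sensor measurements

n = W.shape[0]
e1 = np.eye(n)[0]                 # sensor 1 reports its scalar each step
outputs = []
z = z0.copy()
for _ in range(n):
    outputs.append(e1 @ z)        # scalar sent to the fusion center
    z = W @ z                     # one synchronized scalar-exchange step

# Finite-time recovery: stack e1^T W^k into an observability-style matrix
# and solve for the initial measurements from n scalar reports.
O = np.vstack([e1 @ np.linalg.matrix_power(W, k) for k in range(n)])
recovered = np.linalg.solve(O, np.array(outputs))
```

Recovery succeeds only when the stacked matrix is invertible, which mirrors the paper's point that the communication graph must satisfy certain conditions.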
Foundations, Properties, and Security Applications of Puzzles: A Survey
Cryptographic algorithms have been used not only to create robust ciphertexts
but also to generate cryptograms that, contrary to the classic goal of
cryptography, are meant to be broken. These cryptograms, generally called
puzzles, require the use of a certain amount of resources to be solved, hence
introducing a cost that is often regarded as a time delay---though it could
involve other metrics as well, such as bandwidth. These powerful features have
made puzzles the core of many security protocols, acquiring increasing
importance in the IT security landscape. The concept of a puzzle has
subsequently been extended to other types of schemes that do not use
cryptographic functions, such as CAPTCHAs, which are used to discriminate
humans from machines. Overall, puzzles have experienced a renewed interest with
the advent of Bitcoin, which uses a CPU-intensive puzzle as proof of work. In
this paper, we provide a comprehensive study of the most important puzzle
construction schemes available in the literature, categorizing them according
to several attributes, such as resource type, verification type, and
applications. We have redefined the term puzzle by collecting and integrating
the scattered notions used in different works, to cover all the existing
applications. Moreover, we provide an overview of the possible applications,
identifying key requirements and different design approaches. Finally, we
highlight the features and limitations of each approach, providing a useful
guide for the future development of new puzzle schemes.
Comment: This article has been accepted for publication in ACM Computing Surveys.
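The survey's central object, a cryptogram that is costly to solve but cheap to verify, is easiest to see in a Hashcash/Bitcoin-style hash puzzle: find a nonce so that the hash of the data plus the nonce falls below a difficulty target. A minimal sketch (function names and the 8-byte nonce encoding are illustrative choices):

```python
import hashlib

def solve_puzzle(data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(data || nonce) starts with
    `difficulty_bits` zero bits: ~2^difficulty_bits trials on average."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_puzzle(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification needs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_puzzle(b"example-request", 12)
```

The asymmetry between the loop in `solve_puzzle` and the single hash in `verify_puzzle` is exactly the resource cost the survey classifies across puzzle types.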
Compressive Privacy for a Linear Dynamical System
We consider a linear dynamical system in which the state vector consists of
both public and private states. One or more sensors make measurements of the
state vector and send information to a fusion center, which performs the final
state estimation. To achieve an optimal tradeoff between the utility of
estimating the public states and protection of the private states, the
measurements at each time step are linearly compressed into a lower dimensional
space. Under the centralized setting where all measurements are collected by a
single sensor, we propose an optimization problem and an algorithm to find the
best compression matrix. Under the decentralized setting where measurements are
made separately at multiple sensors, each sensor optimizes its own local
compression matrix. We propose methods to separate the overall optimization
problem into multiple sub-problems that can be solved locally at each sensor.
We consider both the case where there is no message exchange between the
sensors and the case where the sensors take turns transmitting messages to one
another.
Simulations and empirical experiments demonstrate the efficiency of our
proposed approach in allowing the fusion center to estimate the public states
with good accuracy while preventing it from estimating the private states
accurately.
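The abstract's optimal compression matrix is problem-specific, but the core idea of linearly compressing measurements so the private state is unobservable can be sketched in the centralized setting. Assuming (for illustration only) a static state with two public and one private entries and a random measurement matrix, one choice of compression matrix annihilates the private column of the measurement map:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, -3.0])      # first 2 entries public, last private
H = rng.standard_normal((4, 3))     # sensor takes 4 raw readings y = H x
y = H @ x

# Pick C whose rows span the left null space of the private column of H:
# then z = C y = (C H_pub) x_pub carries no trace of the private state.
H_priv = H[:, 2:]
U, s, Vt = np.linalg.svd(H_priv, full_matrices=True)
C = U[:, 1:].T                      # 3 x 4 compression matrix
z = C @ y                           # lower-dimensional, privacy-preserving report

# The fusion center estimates the public states by least squares from z alone.
A = C @ H[:, :2]
x_pub_hat, *_ = np.linalg.lstsq(A, z, rcond=None)
```

Here privacy is absolute because the private direction is projected out entirely; the paper instead optimizes a utility/privacy tradeoff, and extends the idea to dynamics and multiple sensors.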
Actors vs Shared Memory: two models at work on Big Data application frameworks
This work aims at analyzing how two different concurrency models, namely the
shared memory model and the actor model, can influence the development of
applications that manage huge masses of data, distinctive of Big Data
applications. The paper compares the two models by analyzing two concrete
projects, based on the MapReduce and Bulk Synchronous Parallel algorithmic
schemes. Each project is implemented on two concrete platforms: Akka Cluster
and Managed X10. The result is both a conceptual
comparison of models in the Big Data Analytics scenario, and an experimental
analysis based on concrete executions on a cluster platform.
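The contrast the paper studies at framework scale can be shown in miniature (this toy counter example is an illustration, not code from either project): under shared memory, many threads mutate one value guarded by a lock; under the actor model, a single actor owns the state and others only send it messages.

```python
import threading, queue

# Shared-memory model: threads mutate one counter guarded by a lock.
counter = 0
lock = threading.Lock()
def shared_worker(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

# Actor model: one actor owns the state; senders never touch it directly.
inbox = queue.Queue()
actor_total = 0
def actor():
    global actor_total
    while True:
        msg = inbox.get()
        if msg is None:          # poison pill: stop the actor
            break
        actor_total += msg       # state mutated only by the actor thread

a = threading.Thread(target=actor)
a.start()
threads = [threading.Thread(target=shared_worker, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
for _ in range(4):
    inbox.put(1000)              # messages replace direct mutation
inbox.put(None)
a.join()
```

Both styles reach the same total; the difference is where the synchronization lives: in the lock, or in the actor's mailbox.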