
    Decentralized Erasure Codes for Distributed Networked Storage

    We consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k < n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce Decentralized Erasure Codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. We show that decentralized erasure codes are optimally sparse and lead to reduced communication, storage, and computation cost compared with random linear coding.
    Comment: to appear in IEEE Transactions on Information Theory, Special Issue: Networking and Information Theory
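The abstract does not spell out the code construction, so the following is only a minimal sketch of the general idea of randomized linear storage codes over a bipartite source-to-storage graph, worked over a small prime field for simplicity: each of the k sources pushes its symbol to a few randomly chosen storage nodes, each storage node keeps one random linear combination of what it received, and a collector that reads any k nodes tries to invert the resulting k-by-k linear system. The field, the fan-out of three, and all function names are illustrative assumptions, not the paper's parameters.

```python
import random

P = 2003  # a small prime; the field GF(P) is an arbitrary choice for this sketch

def solve_mod_p(A, b, p=P):
    """Solve A x = b over GF(p) by Gaussian elimination; return None if singular."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], p - 2, p)  # modular inverse via Fermat's little theorem
        M[col] = [(v * inv) % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [(M[r][c] - f * M[col][c]) % p for c in range(n + 1)]
    return [M[i][n] for i in range(n)]

def decentralized_encode(data, n_nodes, fanout=3):
    """Each source routes its symbol to `fanout` random storage nodes; every
    storage node stores one random linear combination of what it received."""
    k = len(data)
    coeffs = [[0] * k for _ in range(n_nodes)]  # one coefficient row per storage node
    values = [0] * n_nodes
    for src, symbol in enumerate(data):
        for node in random.sample(range(n_nodes), fanout):
            c = random.randrange(1, P)          # random nonzero coefficient
            coeffs[node][src] = (coeffs[node][src] + c) % P
            values[node] = (values[node] + c * symbol) % P
    return coeffs, values

def collect(coeffs, values, k):
    """A data collector queries k random storage nodes and tries to decode."""
    nodes = random.sample(range(len(values)), k)
    return solve_mod_p([coeffs[i] for i in nodes], [values[i] for i in nodes])

random.seed(1)
data = [random.randrange(P) for _ in range(5)]        # k = 5 source symbols
coeffs, values = decentralized_encode(data, n_nodes=20)
trials = [collect(coeffs, values, k=5) == data for _ in range(200)]
print(sum(trials) / len(trials))  # fraction of random k-node queries that decode
```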

    OneMax in Black-Box Models with Several Restrictions

    Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics. Different models exist, each designed to analyze a different aspect of typical heuristics, such as the memory size or the variation operators in use. While most previous works focus on one particular such aspect, in this work we consider how the combination of several algorithmic restrictions influences the black-box complexity. Our testbed is the class of so-called OneMax functions, a classical set of test functions that is intimately related to classic coin-weighing problems and to the board game Mastermind. In particular, we analyze the combined memory-restricted ranking-based black-box complexity of OneMax for different memory sizes. While its memory-restricted black-box complexity in isolation, as well as its ranking-based black-box complexity, for bit strings of length n is only of order n/log n, the combined model does not allow for algorithms faster than linear in n, as can be seen by standard information-theoretic considerations. We show that this linear bound is indeed asymptotically tight. Similar results are obtained for other memory and offspring sizes. Our results also apply to the (Monte Carlo) complexity of OneMax in the recently introduced elitist model, in which only the best-so-far solution can be kept in memory. Finally, we also provide improved lower bounds for the complexity of OneMax in the models considered. Our result enlivens the quest for natural evolutionary algorithms optimizing OneMax in o(n log n) iterations.
    Comment: This is the full version of a paper accepted to GECCO 201
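For a concrete reference point on the bounds discussed above, here is a minimal sketch of the OneMax function and of the textbook (1+1) evolutionary algorithm, whose expected optimization time on OneMax is of order n log n; the closing question in the abstract asks whether natural heuristics can do better. The mutation rate 1/n, the bit-string length, and the function names below are the usual defaults chosen for illustration, not parameters taken from the paper.

```python
import random

def onemax(x):
    """OneMax simply counts the number of 1-bits."""
    return sum(x)

def one_plus_one_ea(n, seed=0):
    """(1+1) EA: flip each bit independently with probability 1/n and keep the
    offspring if it is at least as good.  Returns the number of fitness
    evaluations until the all-ones string is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evals = onemax(x), 1
    while fx < n:
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy, evals = onemax(y), evals + 1
        if fy >= fx:
            x, fx = y, fy
    return evals

n = 100
runs = [one_plus_one_ea(n, seed=s) for s in range(20)]
print(sum(runs) / len(runs))  # empirically close to e * n * ln(n), about 1252 for n = 100
```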

    Authenticated Key Distribution: When the Coupon Collector is Your Enemy

    We introduce new authenticated key exchange protocols which, on the one hand, do not resort to standard public-key setups with corresponding assumptions of computationally hard problems, but, on the other hand, are more efficient than distributing symmetric keys among the participants. To this end, we rely on a trusted central authority distributing key material whose size is independent of the total number of users and which allows the users to obtain shared secret keys. We analyze the security of our construction, taking into account various attack models. Importantly, only symmetric primitives are needed in the protocol, making it an alternative to quantum-safe key exchange protocols that rely on hardness assumptions.
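The abstract does not describe the construction itself, so the sketch below only illustrates the coupon-collector effect alluded to in the title under a simplistic assumption: if each observed session leaks one uniformly random key out of a pool of n, an eavesdropper needs about n·H_n ≈ n ln n observations before it has seen the whole pool. The pool model and all names are assumptions made purely for illustration, not the paper's protocol.

```python
import random

def expected_coupon_collector(n):
    """Exact expectation n * H_n of the time to collect all n distinct coupons."""
    return n * sum(1.0 / i for i in range(1, n + 1))

def simulate_eavesdropper(n, seed=0):
    """Count observations until every key in a pool of size n has been seen,
    when each observation reveals one uniformly random key."""
    rng = random.Random(seed)
    seen, observations = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        observations += 1
    return observations

n = 1000
print(expected_coupon_collector(n))            # about 7485.5 for n = 1000
sims = [simulate_eavesdropper(n, seed=s) for s in range(50)]
print(sum(sims) / len(sims))                   # concentrates around n * H_n
```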

    Toward a complexity theory for randomized search heuristics : black-box models

    Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them with the classical methods of theoretical computer science is a growing field. While several strong runtime bounds exist, a powerful complexity theory for such algorithms is yet to be developed. We contribute to this goal in several respects. In a first step, we analyze existing black-box complexity models. Our results indicate that these models are not restrictive enough. This remains true if we restrict the memory of the algorithms under consideration. These results motivate us to enrich the existing notions of black-box complexity by the additional restriction that not the actual objective values, but only the relative quality of the previously evaluated solutions, may be taken into account by the algorithms. Many heuristics belong to this class of algorithms. We show that our ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold. Surprisingly, our results also have an interesting game-theoretic aspect. We show that analyzing the black-box complexity of the OneMax_n function class, a class often used to study how heuristics progress in easy parts of the search space, is the same as analyzing optimal winning strategies for the generalized Mastermind game with 2 colors and length-n codewords. This connection has apparently been overlooked so far in the search heuristics community.
    Randomized search heuristics are versatile algorithms which, owing to their high flexibility, are widely used not only in industrial contexts. Despite numerous successful applications, the runtime analysis of such heuristics is still in its infancy. In particular, we lack a good understanding of the situations in which problem-independent heuristics can deliver good solutions within a short runtime. A complexity theory similar to the one available in classical algorithmics would be desirable. With this thesis we contribute to the development of such a complexity theory for search heuristics. Using several examples, we show that existing models do not always capture the difficulty of a problem satisfactorily. We therefore propose an additional model. In our ranking-based black-box model, the algorithms do not learn exact function values but only the ranking of the search points queried so far. For some problems this model gives a better assessment of the difficulty. However, we also show that even in the new model there are problems whose complexity is judged too low. Our results also have a game-theoretic aspect: optimal winning strategies for the codebreaker in the Mastermind game (also known as SuperHirn) with n positions correspond exactly to optimal algorithms for maximizing OneMax_n functions. This connection has apparently been overlooked so far. This thesis is written in English.
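The Mastermind correspondence mentioned above can be made concrete in a few lines: a generalized OneMax instance with hidden target z answers each query x with the number of positions in which x and z agree, which is exactly the black-peg answer in 2-color Mastermind with secret code z. The sketch below, with arbitrarily chosen names and sizes, simply checks this identity.

```python
import random

def onemax_z(x, z):
    """Generalized OneMax: number of positions in which x agrees with the hidden
    target z (classical OneMax is the special case z = 1...1)."""
    return sum(xi == zi for xi, zi in zip(x, z))

def mastermind_black_pegs(guess, secret):
    """Black-peg count in Mastermind: positions where the guess matches the
    secret exactly."""
    return sum(g == s for g, s in zip(guess, secret))

rng = random.Random(0)
n = 16
z = [rng.randint(0, 1) for _ in range(n)]        # hidden target / secret code
for _ in range(5):
    x = [rng.randint(0, 1) for _ in range(n)]    # a query / a guess
    assert onemax_z(x, z) == mastermind_black_pegs(x, z)
print("OneMax_z queries and 2-color Mastermind black-peg answers coincide.")
```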

    Provable and practical approximations for the degree distribution using sublinear graph samples

    The degree distribution is one of the most fundamental properties used in the analysis of massive graphs. There is a large literature on graph sampling, where the goal is to estimate properties (especially the degree distribution) of a large graph through a small random sample. Estimating the degree distribution poses a significant challenge, due to its heavy-tailed nature and the large variance in degrees. We design a new algorithm, SADDLES, for this problem, using recent mathematical techniques from the field of sublinear algorithms. The SADDLES algorithm gives provably accurate outputs for all values of the degree distribution. For the analysis, we define two fatness measures of the degree distribution, called the h-index and the z-index. We prove that SADDLES is sublinear in the graph size when these indices are large. A corollary of this result is a provably sublinear algorithm for any degree distribution bounded below by a power law. We deploy our new algorithm on a variety of real datasets and demonstrate its excellent empirical behavior. In all instances, we get extremely accurate approximations for all values in the degree distribution by observing at most 1% of the vertices. This is a major improvement over the state-of-the-art sampling algorithms, which typically sample more than 10% of the vertices to give comparable results. We also observe that the h- and z-indices of real graphs are large, validating our theoretical analysis.
    Comment: Longer version of the WWW 2018 submission
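For background on the fatness measures mentioned above: the h-index of a degree sequence is the largest h such that at least h vertices have degree at least h (the z-index is defined in the paper and is not reproduced here). The sketch below is not SADDLES; it only computes the h-index of a toy heavy-tailed degree sequence and shows the naive uniform-vertex-sampling estimator of the degree tail that such work improves upon. The toy sequence and all names are assumptions for illustration.

```python
import random

def h_index(degrees):
    """Largest h such that at least h vertices have degree >= h."""
    degs = sorted(degrees, reverse=True)
    h = 0
    while h < len(degs) and degs[h] >= h + 1:
        h += 1
    return h

def sampled_tail_fraction(degrees, d, sample_size, seed=0):
    """Naive estimator of the fraction of vertices with degree >= d, from a
    uniform sample of vertices (not the SADDLES estimator)."""
    rng = random.Random(seed)
    sample = [degrees[rng.randrange(len(degrees))] for _ in range(sample_size)]
    return sum(deg >= d for deg in sample) / sample_size

# Toy heavy-tailed degree sequence, purely for illustration.
rng = random.Random(1)
degrees = [max(1, int(1000 / (rng.random() * 999 + 1))) for _ in range(100_000)]

print("h-index:", h_index(degrees))
for d in (2, 10, 100):
    true_frac = sum(deg >= d for deg in degrees) / len(degrees)
    est_frac = sampled_tail_fraction(degrees, d, sample_size=1000)
    print(d, round(true_frac, 4), round(est_frac, 4))
# Uniform sampling tracks the head of the distribution reasonably well but
# becomes unreliable in the heavy tail, which is the gap SADDLES targets.
```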

    PINT: Probabilistic In-band Network Telemetry

    © 2020 ACM. Commodity network devices support adding in-band telemetry measurements into data packets, enabling a wide range of applications, including network troubleshooting, congestion control, and path tracing. However, including such information in packets adds significant overhead that impacts both flow completion times and application-level performance. We introduce PINT, an in-band network telemetry framework that bounds the amount of information added to each packet. PINT encodes the requested data over multiple packets, allowing per-packet overhead limits that can be as low as one bit. We analyze PINT and prove performance bounds, including cases where multiple queries run simultaneously. PINT is implemented in P4 and can be deployed on network devices. Using real topologies and traffic characteristics, we show that PINT concurrently enables applications such as congestion control, path tracing, and computing tail latencies, using only sixteen bits per packet, with performance comparable to the state of the art.
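PINT's actual encoding schemes are not reproduced here; the sketch below only illustrates the general idea of amortizing telemetry over many packets with a tiny per-packet field. Each hop overwrites the field with probability 1/i (reservoir sampling over the path), so every packet delivers one uniformly random on-path switch ID, and the collector recovers the set of traversed switches from a modest number of packets. The field layout, names, and reconstruction rule are assumptions for illustration only.

```python
import random

def forward_packet(path, rng):
    """Simulate one packet: hop i on the path overwrites the telemetry field
    with probability 1/i (reservoir sampling), so the field ends up holding a
    uniformly random on-path switch ID."""
    field = None
    for i, switch_id in enumerate(path, start=1):
        if rng.random() < 1.0 / i:
            field = switch_id
    return field

def collect_path(path, num_packets, seed=0):
    """Collector side: gather the sampled hop IDs carried by many packets."""
    rng = random.Random(seed)
    return {forward_packet(path, rng) for _ in range(num_packets)}

path = [101, 7, 42, 13, 88]   # switch IDs along the route (toy values)
print(collect_path(path, num_packets=50) == set(path))   # True with high probability
```

This toy recovers only the set of on-path switches, not their order; the framework described in the abstract does considerably more within the same per-packet budget.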

    Exploiting random walks for robust, scalable, structure-free data aggregation and routing in mobile ad-hoc networks (MANETs)

    The focus of this thesis is the design of scalable data aggregation protocols for mobile ad-hoc networks (MANETs). Data aggregation protocols that rely on network structures such as trees or backbones are not well suited for MANETs because the underlying topology of a MANET is constantly changing. On the other hand, unstructured techniques such as flooding and gossiping have a high messaging overhead and take a long time to finish. Therefore, in this thesis we explore the use of random walks as a structure-free alternative for data aggregation in MANETs.
    The basic idea is to introduce one or more tokens that successively visit each node in a MANET by executing a random walk and compute the aggregate state. While random walks are simple, robust, and overhead-free, plain random walks tend to be slow in visiting all nodes because the token can get stuck in regions of already visited nodes. Therefore, we first introduce self-repelling random walks (SRRW), in which at each step the token chooses a neighbor that has been visited the least number of times. While SRRW significantly speeds up random walks in the initial stages, a slowdown is observed toward the end, when a significant fraction of the nodes have already been visited. To address this shortcoming, we then develop two complementary strategies that speed up data aggregation.
    First, we introduce gradient-biased random walks (a pull-based strategy), where short temporary multi-hop gradients are used to pull the tokens toward unvisited nodes. We prove that gradient-biased random walks achieve a cover time of O(N) and a message overhead of O(N log N), where N is the number of nodes in the network. Next, we introduce a push-based strategy in which self-repelling random walks are complemented by a single-step push phase before the random walk phase, in which each node broadcasts its information to its neighbors. We show that this small push goes a long way in speeding up data aggregation: push-based random walks finish data aggregation in O(N) messages and time. Finally, we describe a hierarchical extension of the push-based protocol which can produce multi-resolution aggregates at each node using only O(N log N) messages.
    All our results are validated using ns-3 simulations in networks ranging from 100 to 4000 nodes under different network densities, node speeds, and mobility models.
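As a small, self-contained illustration of the self-repelling idea described above (not the thesis' MANET protocol), the sketch below compares the cover time of a plain random walk with that of a walk that always steps to a least-visited neighbor, on a toy grid graph. The grid topology and all names are assumptions chosen for illustration.

```python
import random

def grid_graph(side):
    """Adjacency list of a side-by-side grid; nodes are numbered 0..side*side-1."""
    adj = {v: [] for v in range(side * side)}
    for r in range(side):
        for c in range(side):
            v = r * side + c
            if c + 1 < side:              # right neighbor
                adj[v].append(v + 1)
                adj[v + 1].append(v)
            if r + 1 < side:              # bottom neighbor
                adj[v].append(v + side)
                adj[v + side].append(v)
    return adj

def cover_time(adj, self_repelling, seed=0):
    """Steps until a single token has visited every node.  The plain walk picks
    a uniform neighbor; the self-repelling walk picks a least-visited neighbor
    (ties broken uniformly at random)."""
    rng = random.Random(seed)
    visits = {v: 0 for v in adj}
    unvisited = set(adj) - {0}
    current, steps = 0, 0
    visits[current] = 1
    while unvisited:
        nbrs = adj[current]
        if self_repelling:
            least = min(visits[u] for u in nbrs)
            nbrs = [u for u in nbrs if visits[u] == least]
        current = rng.choice(nbrs)
        visits[current] += 1
        unvisited.discard(current)
        steps += 1
    return steps

adj = grid_graph(20)   # 400 nodes
print("plain random walk:  ", cover_time(adj, self_repelling=False))
print("self-repelling walk:", cover_time(adj, self_repelling=True))
# The self-repelling walk typically covers the grid in far fewer steps.
```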