
    How to Secure Matchings Against Edge Failures

    Suppose we are given a bipartite graph that admits a perfect matching, and an adversary may delete any edge from the graph with the intention of destroying all perfect matchings. We consider the task of adding a minimum-cost edge set to the graph such that the adversary never wins. We show that this problem is equivalent to covering a digraph with non-trivial strongly connected components at minimal cost. We provide efficient exact and approximation algorithms for this task. In particular, for the unit-cost problem, we give a (log_2 n)-factor approximation algorithm and a polynomial-time algorithm for chordal bipartite graphs. Furthermore, we give a fixed-parameter algorithm for the problem parameterized by the treewidth of the input graph. For general non-negative weights we give tight upper and lower approximation bounds relative to the Directed Steiner Forest problem. Additionally, we prove a dichotomy theorem characterizing minor-closed graph classes which allow for a polynomial-time algorithm. To obtain our results, we exploit a close relation to the classical Strong Connectivity Augmentation problem as well as to directed Steiner problems.
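    The stated reduction can be made concrete. The sketch below assumes an explicit perfect matching is given, contracts each matched pair into one node of an auxiliary digraph, and adds an arc for every non-matching edge; a matched edge lies in some other perfect matching only if its contracted node sits in a non-trivial strongly connected component, so the nodes in trivial components are exactly the ones any added edges must cover. The function name and input format are illustrative, not taken from the paper.

```python
import networkx as nx

def fragile_pairs(edges, matching):
    """edges: iterable of (u, v) pairs of the bipartite graph, with u taken
    from the left class; matching: dict mapping each left vertex u to its
    matched right vertex. Returns the matched pairs whose matched edge lies
    in every perfect matching (so deleting it destroys them all)."""
    pair_of = {}                              # vertex -> contracted pair id
    for u, v in matching.items():
        pair_of[u] = u                        # use the left endpoint as id
        pair_of[v] = u

    D = nx.DiGraph()
    D.add_nodes_from(matching.keys())
    for u, v in edges:
        if matching.get(u) == v:
            continue                          # matched edges are contracted
        D.add_edge(pair_of[v], pair_of[u])    # one fixed orientation

    # A pair is safe iff its node lies on a directed cycle, i.e. in a
    # non-trivial SCC; trivial (singleton) SCCs are the fragile pairs that
    # an augmenting edge set has to cover.
    return [next(iter(c)) for c in nx.strongly_connected_components(D)
            if len(c) == 1]

# A 4-cycle admits two perfect matchings, so no pair is fragile.
print(fragile_pairs([("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")],
                    {"a": "x", "b": "y"}))    # -> []
```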

    FLOCK: Fast, Lightweight, and Scalable Allocation for Decentralized Services on Blockchain

    Many decentralized services have recently emerged on top of blockchain, offering benefits like privacy and allowing any node in the network to share its resources. To be a competitive alternative to their centralized counterparts, their performance needs to match up. Specifically, service allocation remains a performance bottleneck for many decentralized services. In this paper we present FLOCK, an allocation system which is highly scalable, fast, and lightweight. Furthermore, it allows nodes to indicate their preference for clients/sellers without needing to submit bids, by using stable matching algorithms. We decouple price discovery and outsource this function to a smart contract on the blockchain. Additionally, another smart contract is used to orchestrate the allocation and take care of service discovery, while trusted execution environments securely compute allocation solutions, and off-chain payment networks are used to send rewards. Evaluation of FLOCK shows that gas costs are manageable and improve upon other solutions that leverage auctions, and that our instance of the stable matching algorithm greatly improves run-time and throughput over its auction counterparts. Finally, our discussion outlines practical improvements to further increase performance.
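    As a concrete reference point, the sketch below implements the classic deferred-acceptance (Gale-Shapley) procedure that stable-matching allocation of this kind builds on. The names "clients" and "providers" and the preference-list format are illustrative assumptions, not FLOCK's actual data structures or on-chain interface.

```python
def stable_match(client_prefs, provider_prefs):
    """client_prefs / provider_prefs: dict mapping each id to a ranked list
    of ids on the other side. Returns a stable assignment provider -> client."""
    rank = {p: {c: i for i, c in enumerate(prefs)}
            for p, prefs in provider_prefs.items()}
    next_choice = {c: 0 for c in client_prefs}          # next provider to try
    engaged = {}                                        # provider -> client
    free = list(client_prefs)

    while free:
        c = free.pop()
        p = client_prefs[c][next_choice[c]]             # best untried provider
        next_choice[c] += 1
        if p not in engaged:                            # provider is free
            engaged[p] = c
        elif rank[p][c] < rank[p][engaged[p]]:          # provider prefers c
            free.append(engaged[p])
            engaged[p] = c
        else:                                           # c stays unmatched
            free.append(c)
    return engaged

# Example with two clients and two providers.
print(stable_match({"c1": ["p1", "p2"], "c2": ["p1", "p2"]},
                   {"p1": ["c2", "c1"], "p2": ["c1", "c2"]}))
# -> {'p1': 'c2', 'p2': 'c1'}
```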

    Graph Theoretic Modeling: Case Studies In Redundant Arrays Of Independent Disks And Network Defense

    Graph theoretic modeling has served as an invaluable tool for solving a variety of problems since its introduction in Euler's paper on the Bridges of Königsberg in 1736. Two problems of contemporary interest are the modeling of Redundant Arrays of Inexpensive Disks (RAID) and the identification of network attacks. While the former is vital to the protection and uninterrupted availability of data, the latter is crucial to the integrity of systems comprising networks. Both are of practical importance due to the continuing growth of data and the demand for it at an increasing number of geographically distributed locations through the use of networks such as the Internet. The popularity of RAID has soared because of the enhanced I/O bandwidths and large capacities they offer at low cost. However, the demand for bigger capacities has led to the use of larger arrays with an increased probability of random disk failures. This has motivated the need for RAID systems to tolerate two or more disk failures without sacrificing performance or storage space. To this end, we shall first perform a comparative study of the existing techniques that achieve this objective. Next, we shall devise novel graph-theoretic algorithms for placing data and parity in arrays of n disks (n ≥ 3) that can recover from two random disk failures, for n = p - 1, n = p and n = 2p - 2, where p is a prime number. Each shall be shown to utilize an optimal ratio of space for storing parity. We shall also show how to extend the algorithms to arrays with an arbitrary number of disks, albeit with non-optimal values for the aforementioned ratio. The growth of the Internet has led to the increased proliferation of malicious applications seeking to breach the security of networked systems. Hence, considerable effort has been focused on detecting and predicting the attacks they perpetrate. However, the enormity of the Internet poses a challenge to representing and analyzing them using scalable models. Furthermore, forecasting the systems that they are likely to exploit in the future is difficult due to the unavailability of complete information on network vulnerabilities. We shall present a technique that identifies attacks on large networks using a scalable model, while filtering for false positives and negatives. Furthermore, it also forecasts the propagation of security failures proliferated by attacks over time and their likely targets in the future.
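    For orientation, the sketch below shows the single-parity baseline that double-fault tolerance has to go beyond: one XOR parity block (RAID-5 style) recovers any single lost disk, but two simultaneous failures leave one equation for two unknowns and need a second, independent parity, which is what the graph-theoretic placements developed in this work provide at optimal parity overhead. The layout and block contents are illustrative, not the thesis's construction.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def encode_parity(data_disks):
    """Single XOR parity block over the data disks (RAID-5 style)."""
    return xor_blocks(data_disks)

def recover_single(surviving_data, parity):
    """Rebuild the one missing data disk from the survivors plus parity."""
    return xor_blocks(surviving_data + [parity])

data = [b"alpha", b"bravo", b"delta"]       # three equally sized data blocks
parity = encode_parity(data)
rebuilt = recover_single([data[0], data[2]], parity)
assert rebuilt == data[1]                   # one failure: recoverable
# With a single parity, losing two data disks is unrecoverable; tolerating
# two random failures requires a second, independent parity set.
```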

    A System for Distributed Mechanisms: Design, Implementation and Applications

    We describe here a structured system for distributed mechanism design appropriate for both Intranet and Internet applications. In our approach the players dynamically form a network in which they know neither their neighbours nor the size of the network, and interact to jointly take decisions. The only assumption concerning the underlying communication layer is that for each pair of processes there is a path of neighbours connecting them. This allows us to deal with arbitrary network topologies. We also discuss the implementation of this system, which consists of a sequence of layers. The lower layers deal with the operations that implement the basic primitives of distributed computing, namely low-level communication and distributed termination, while the upper layers use these primitives to implement high-level communication among players, including broadcasting and multicasting, and distributed decision making. This yields a highly flexible distributed system whose specific applications are realized as instances of its top layer. This design is implemented in Java. The system supports fault-tolerance at various levels and includes a provision for distributed policing, the purpose of which is to exclude 'dishonest' players. Also, it can be used for the repeated creation of dynamically formed networks of players interested in joint decision making implemented by means of a tax-based mechanism. We illustrate its flexibility by discussing a number of implemented examples. Comment: 36 pages; revised and expanded version.

    Fast Discrete Consensus Based on Gossip for Makespan Minimization in Networked Systems

    In this paper we propose a novel algorithm to solve the discrete consensus problem, i.e., the problem of evenly distributing a set of tokens of arbitrary weight among the nodes of a networked system. Tokens are tasks to be executed by the nodes, and the proposed distributed algorithm monotonically minimizes the makespan of the assigned tasks. The algorithm is based on gossip-like asynchronous local interactions between the nodes. The convergence time of the proposed algorithm improves on the state of the art in discrete and quantized consensus by at least a factor of O(n), in both theoretical and empirical comparisons.
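    The sketch below illustrates one gossip-like interaction of the kind described: two neighbouring nodes pool their weighted tokens and accept a greedy re-split only if it does not increase their local makespan, so the global makespan never grows. The longest-processing-time split and the simulation loop are illustrative choices, not the paper's exact local rule.

```python
import random

def gossip_step(tokens_i, tokens_j):
    """Pool the two token lists, try a greedy longest-processing-time
    re-split, and keep it only if the local makespan does not grow."""
    pool = sorted(tokens_i + tokens_j, reverse=True)
    a, b = [], []
    for w in pool:                                   # always load the
        (a if sum(a) <= sum(b) else b).append(w)     # currently lighter node
    if max(sum(a), sum(b)) <= max(sum(tokens_i), sum(tokens_j)):
        return a, b
    return tokens_i, tokens_j                        # reject a worse split

def simulate(loads, edges, rounds=1000, seed=0):
    """loads: list of per-node token lists; edges: list of node-index pairs
    along which gossip interactions may happen."""
    rng = random.Random(seed)
    for _ in range(rounds):
        i, j = rng.choice(edges)                     # asynchronous pairwise
        loads[i], loads[j] = gossip_step(loads[i], loads[j])
    return max(sum(tokens) for tokens in loads)      # resulting makespan

loads = [[7, 5, 3], [9], [2, 2], [1]]                # weighted tokens
print(simulate(loads, [(0, 1), (1, 2), (2, 3)]))     # line topology
```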

    Load-Balance and Fault-Tolerance for Massively Parallel Phylogenetic Inference


    Communication-Efficient BFT Using Small Trusted Hardware to Tolerate Minority Corruption

    Agreement protocols for partially synchronous networks tolerate fewer than one-third of the parties being Byzantine. If parties are equipped with trusted hardware that prevents equivocation, then fault tolerance can be improved to fewer than one-half, but typically at the cost of increased communication complexity. In this work, we present results that use small trusted hardware without worsening communication complexity, assuming the adversary controls less than one-half of the network. In particular, we show a version of HotStuff that retains linear communication complexity in each view, leveraging trusted hardware to tolerate a minority of corruptions. Our result uses expander graph techniques to achieve efficient communication in a manner that may be of independent interest.
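    The sketch below illustrates the kind of small trusted hardware this line of work relies on: a monotonic attested counter that binds each counter value to at most one message, so a corrupted host cannot equivocate by presenting two conflicting messages for the same position. The HMAC-based attestation and the class interface are illustrative assumptions, not the paper's actual primitive.

```python
import hmac, hashlib

class TrustedCounter:
    """Runs inside the trusted hardware; the host only sees attestations."""
    def __init__(self, secret_key: bytes):
        self._key = secret_key        # verifiable by the receiving parties
        self._counter = 0

    def attest(self, message: bytes) -> tuple[int, bytes]:
        """Bind the next counter value to `message`, return (value, tag)."""
        self._counter += 1
        tag = hmac.new(self._key, f"{self._counter}|".encode() + message,
                       hashlib.sha256).digest()
        return self._counter, tag

def verify(key: bytes, counter: int, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, f"{counter}|".encode() + message,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"device-secret"
tc = TrustedCounter(key)
c1, t1 = tc.attest(b"vote for block A in view 7")
# Any later message necessarily carries a strictly larger counter value, so
# two conflicting messages can never be attested for the same position.
c2, t2 = tc.attest(b"vote for block B in view 7")
assert verify(key, c1, b"vote for block A in view 7", t1) and c2 > c1
```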

    Computer Science & Technology Series : XXI Argentine Congress of Computer Science. Selected papers

    CACIC’15 was the 21st Congress in the CACIC series. It was organized by the School of Technology at the UNNOBA (North-West of Buenos Aires National University) in Junín, Buenos Aires. The Congress included 13 Workshops with 131 accepted papers, 4 Conferences, 2 invited tutorials, different meetings related to Computer Science Education (Professors, PhD students, Curricula) and an International School with 6 courses. CACIC 2015 was organized following the traditional Congress format, with 13 Workshops covering a diversity of dimensions of Computer Science Research. Each topic was supervised by a committee of 3-5 chairs from different Universities. The call for papers attracted a total of 202 submissions. An average of 2.5 review reports were collected for each paper, for a grand total of 495 review reports that involved about 191 different reviewers. A total of 131 full papers, involving 404 authors and 75 Universities, were accepted, and 24 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI).

    Freely Scalable Quantum Technologies using Cells of 5-to-50 Qubits with Very Lossy and Noisy Photonic Links

    Exquisite quantum control has now been achieved in small ion traps, in nitrogen-vacancy centres and in superconducting qubit clusters. We can regard such a system as a universal cell with diverse technological uses from communication to large-scale computing, provided that the cell is able to network with others and overcome any noise in the interlinks. Here we show that loss-tolerant entanglement purification makes quantum computing feasible with the noisy and lossy links that are realistic today: with a modestly complex cell design, and using a surface code protocol with a network noise threshold of 13.3%, we find that interlinks which attempt entanglement at a rate of 2 MHz but suffer 98% photon loss can result in kilohertz computer clock speeds (i.e., the rate of high-fidelity stabilizer measurements). Improved links would dramatically increase the clock speed. Our simulations employed local gates of a fidelity already achieved in ion trap devices. Comment: corrected typos, additional references, additional figures.
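    The quoted numbers can be sanity-checked with a back-of-envelope calculation, sketched below. The purification overhead used here is an assumed order of magnitude, not a figure from the paper; it only illustrates why 2 MHz attempts with 98% photon loss land in the kilohertz clock-speed regime.

```python
attempt_rate_hz = 2e6          # entanglement attempts per second per link
photon_loss = 0.98             # fraction of attempts lost
raw_pair_rate = attempt_rate_hz * (1 - photon_loss)    # ~4.0e4 raw pairs/s

# Assumed overhead: tens of raw Bell pairs consumed per purified,
# high-fidelity pair (illustrative value, not taken from the paper).
purification_overhead = 30
purified_rate = raw_pair_rate / purification_overhead  # ~1.3e3 pairs/s

print(f"raw Bell pairs: {raw_pair_rate:.0f}/s, "
      f"purified pairs: {purified_rate:.0f}/s (kHz-scale clock)")
```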