
    Secured Distributed Algorithms Without Hardness Assumptions

    We study algorithms in the distributed message-passing model that produce secured output for an input graph G. Specifically, each vertex computes its own part of the output, the entire output is correct, and, with a certain probability, no vertex can discover the output of the other vertices. This is motivated by the high-performance processors that are nowadays embedded in a large variety of devices, and by sensor networks deployed to monitor physical areas for scientific research, smart-city control, and other purposes. In such settings it no longer makes sense, and in many cases is not feasible, to leave the whole processing task to a single computer or even a group of central computers. While extensive research in the field of distributed algorithms has yielded efficient decentralized algorithms for many classic problems, the security of distributed algorithms has been somewhat neglected. Many protocols and algorithms have been devised in the research area of secure multi-party computation (MPC or SMC), but the notions and terminology of these protocols are quite different from those of classic distributed algorithms. As a consequence, those protocols focus on working for every function f, at the expense of increased round complexity or the need for several computational assumptions. In this work, we present a novel approach which, rather than turning existing algorithms into secure ones, identifies and develops algorithms that are inherently secure, meaning they require no further constructions. This approach yields efficient secure algorithms for various locality problems, such as coloring, network decomposition, forest decomposition, and a variety of additional labeling problems. Remarkably, our approach requires no hardness assumption, only a private randomness generator in each vertex. This is in contrast to previously known techniques in this setting, which are based on public-key encryption schemes.
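
    The abstract does not spell out the algorithms themselves, so the following is only a minimal sketch of the general idea: a standard randomized trial-coloring loop in which every vertex draws candidate colors from its own private randomness generator, so a vertex's final color depends on random bits its neighbors never see. The conflict-resolution rule and all names below are our assumptions, not the paper's construction.

```python
import random

def randomized_coloring(adj):
    """Toy synchronous (Delta+1)-coloring driven by private randomness.

    adj: dict mapping each vertex to the set of its neighbours.
    Each uncoloured vertex repeatedly proposes a colour drawn from its own
    private generator; a proposal is kept only if no neighbour proposed or
    already holds that colour.  Illustrative sketch only -- NOT the secure
    algorithm from the paper, whose details the abstract does not give.
    """
    delta = max((len(n) for n in adj.values()), default=0)
    palette = {v: set(range(delta + 1)) for v in adj}  # Delta+1 colours suffice
    rng = {v: random.Random() for v in adj}            # private generator per vertex
    color, uncolored = {}, set(adj)

    while uncolored:
        # round 1: each uncoloured vertex privately draws a candidate colour
        proposal = {v: rng[v].choice(sorted(palette[v])) for v in uncolored}
        # round 2: exchange candidates; keep a candidate iff no neighbour clashes
        for v in sorted(uncolored):
            if all(proposal.get(u) != proposal[v] and color.get(u) != proposal[v]
                   for u in adj[v]):
                color[v] = proposal[v]
        uncolored -= set(color)
        for v in uncolored:  # colours fixed by neighbours leave the palette
            palette[v] -= {color[u] for u in adj[v] if u in color}
    return color
```

    In a real message-passing execution the two marked steps would each be one communication round; the sketch simulates them centrally.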

    Cryptographic techniques used to provide integrity of digital content in long-term storage

    The main objective of the project was to obtain advanced mathematical methods for verifying that a required level of data integrity is maintained in long-term storage. The secondary objective was to provide methods for the evaluation of data loss and recovery. Additionally, we set the following initial constraints for the problem: a limit on additional storage space, a minimal threshold for the desired level of data integrity, and a given probability of single-bit corruption. With regard to the main objective, the study group focused on exploring methods based on hash values. It was found that under the tight constraints suggested by PWPW, no method based only on hash values is possible: the high probability of bit corruption leads to an unacceptably large number of broken hashes, which in turn contradicts the limit on additional storage space. However, after loosening the initial constraints to some extent, the study group proposed two methods that use only hash values. The first method, based on a simple scheme of subdividing the data into disjoint subsets, is provided as a benchmark for the other methods discussed in this report. The second method (the "hypercube" method), introduced as an instance of the wider class of clever-subdivision methods, is built on the concept of rewriting the data stream into an n-dimensional hypercube and calculating hash values for particular (overlapping) sections of the cube. We obtained interesting results by combining hash-value methods with error-correction techniques. The proposed framework, based on BCH codes, appears to have promising properties, hence further research in this field is strongly recommended. As part of the report, we have also presented features of secret-sharing methods for the benefit of novel distributed data-storage scenarios, providing an overview of some interesting aspects of secret-sharing techniques and several examples of possible applications.
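
    As an illustration of the overlapping-sections idea, here is a minimal two-dimensional instance of the "hypercube" method described above: blocks are laid out in a grid and every row and column is hashed, so each block is covered by two overlapping hashes. The block size, the choice of SHA-256, and the localization rule are our assumptions; the report's n-dimensional construction generalizes the same pattern.

```python
import hashlib

def grid_hashes(blocks, width):
    """Two-dimensional instance of the overlapping-sections idea: lay the
    blocks out in a width-column grid and hash every row and every column,
    so each block is covered by exactly two overlapping hashes."""
    rows = [blocks[i:i + width] for i in range(0, len(blocks), width)]
    cols = [[r[c] for r in rows if c < len(r)] for c in range(width)]
    row_h = [hashlib.sha256(b"".join(r)).hexdigest() for r in rows]
    col_h = [hashlib.sha256(b"".join(c)).hexdigest() for c in cols]
    return row_h, col_h

def locate_corruption(blocks, width, row_h, col_h):
    """A single corrupted block breaks one row hash and one column hash,
    which pins down its grid coordinates (with several corruptions the
    cartesian product below may contain false positives)."""
    new_row_h, new_col_h = grid_hashes(blocks, width)
    bad_rows = [i for i, h in enumerate(row_h) if h != new_row_h[i]]
    bad_cols = [j for j, h in enumerate(col_h) if h != new_col_h[j]]
    return [(i, j) for i in bad_rows for j in bad_cols]

blocks = [bytes([b]) * 16 for b in range(12)]      # twelve toy 16-byte blocks
row_h, col_h = grid_hashes(blocks, width=4)
blocks[5] = b"\xff" * 16                           # corrupt one block
print(locate_corruption(blocks, 4, row_h, col_h))  # -> [(1, 1)]
```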

    Optimal redundancy against disjoint vulnerabilities in networks

    Redundancy is commonly used to guarantee continued functionality in networked systems. However, many nodes are often vulnerable to the same failure or adversary; a "backup" path is not sufficient if both paths depend on nodes that share a vulnerability. For example, if two nodes of the Internet cannot be connected without using routers belonging to a given untrusted entity, then all of their communication, regardless of the specific paths used, can be intercepted by that entity. In this and many other cases, the vulnerabilities affecting the network are disjoint: each node has exactly one vulnerability, but the same vulnerability can affect many nodes. To discover optimal redundancy in this scenario, we describe each vulnerability as a color and develop a "color-avoiding percolation" framework that uncovers a hidden color-avoiding connectivity. We present algorithms for color-avoiding percolation on general networks and an analytic theory for random graphs with uniformly distributed colors, including critical phenomena. We demonstrate our theory by uncovering the hidden color-avoiding connectivity of the Internet. We find that less well-connected countries are more likely to be able to communicate securely through optimally redundant paths than highly connected countries such as the US. Our results reveal a new layer of hidden structure in complex systems and can enhance security and robustness through optimal redundancy in a wide range of systems, including biological, economic, and communication networks.
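
    A direct, unoptimized way to state the underlying notion: two nodes are color-avoiding connected if, for every color, there is a path between them whose interior nodes all avoid that color (the endpoints trust themselves). The sketch below checks this with one BFS per color; the paper's algorithms are more efficient, and the function names are ours.

```python
from collections import deque

def _reachable_avoiding(adj, color, banned, s, t):
    """BFS from s to t that never enters an interior vertex of the banned
    colour; the endpoints themselves are trusted."""
    if s == t:
        return True
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u == t:
                return True
            if u not in seen and color[u] != banned:
                seen.add(u)
                queue.append(u)
    return False

def color_avoiding_connected(adj, color, s, t):
    """s and t are colour-avoiding connected iff for EVERY colour there is
    an s-t path whose interior vertices all avoid that colour."""
    return all(_reachable_avoiding(adj, color, c, s, t)
               for c in set(color.values()))
```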

    On the Rainbow Connection Number for Snowflake Graph

    Let G be an arbitrary non-trivial connected graph. An edge-colored graph G is called rainbow connected if any two vertices are connected by a path whose edges have distinct colors; such a path is called a rainbow path. The smallest number of colors required to make G rainbow connected is called the rainbow connection number of G, denoted by rc(G). A snowflake graph is a graph obtained by representing one of the snowflake shapes as vertices and edges so that it forms a simple graph. We consider the generalized snowflake graph, parameterized by the number of stem paths, pairs of outer leaves, middle circles, and pairs of inner leaves. In this paper we determine the rainbow connection number of the generalized snowflake graph.
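
    As an aid for small instances, the brute-force checker below verifies whether a given edge coloring makes a graph rainbow connected by searching over (vertex, set-of-used-colors) states; rc(G) is then the least number of colors for which some coloring passes the check. This only illustrates the definition, not the paper's constructive argument.

```python
from collections import deque
from itertools import combinations

def is_rainbow_connected(vertices, edge_color):
    """Check whether an edge colouring makes the graph rainbow connected.

    edge_color: dict mapping frozenset({u, v}) -> colour of edge uv.
    Searches over states (vertex, frozenset of colours used so far), so the
    cost is exponential in the number of colours -- small graphs only.
    """
    adj = {v: [] for v in vertices}
    for e, c in edge_color.items():
        u, v = tuple(e)
        adj[u].append((v, c))
        adj[v].append((u, c))

    def rainbow_path_exists(s, t):
        start = (s, frozenset())
        seen, queue = {start}, deque([start])
        while queue:
            v, used = queue.popleft()
            if v == t:
                return True
            for u, c in adj[v]:
                if c not in used and (u, used | {c}) not in seen:
                    seen.add((u, used | {c}))
                    queue.append((u, used | {c}))
        return False

    return all(rainbow_path_exists(u, v) for u, v in combinations(vertices, 2))
```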

    Tunable Security for Deployable Data Outsourcing

    Security mechanisms like encryption negatively affect other software quality characteristics, such as efficiency. To cope with such trade-offs, it is preferable to build approaches that allow the trade-offs to be tuned after the design and implementation phases. This book introduces a methodology that can be used to build such tunable approaches, and shows how it can be applied in the domains of database outsourcing, identity management, and credential management.
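
    The book's methodology is not reproduced in this summary, so the sketch below only illustrates the kind of trade-off being tuned, using a common pattern from encrypted-database work (not necessarily the book's): a per-column knob choosing between deterministic, SIV-style encryption (equality-searchable, hence efficient, but it leaks repeated values) and randomized encryption (stronger, but unsearchable). It assumes the third-party `cryptography` package.

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

class TunableField:
    """Per-column encryption knob, settable after the schema is designed.

    level 0: deterministic, SIV-style nonce -> equal values give equal
             ciphertexts, so the server can still answer equality queries.
    level 1: random nonce -> maximally hiding, but no server-side search.
    """
    def __init__(self, level):
        self.level = level
        self.enc_key = AESGCM.generate_key(bit_length=128)
        self.nonce_key = os.urandom(32)   # separate key for nonce derivation

    def encrypt(self, value: bytes) -> bytes:
        if self.level == 0:   # deterministic: nonce derived from the value
            nonce = hmac.new(self.nonce_key, value, hashlib.sha256).digest()[:12]
        else:                 # randomized: fresh nonce for every encryption
            nonce = os.urandom(12)
        return nonce + AESGCM(self.enc_key).encrypt(nonce, value, None)

searchable = TunableField(level=0)
assert searchable.encrypt(b"alice") == searchable.encrypt(b"alice")  # equality works
private = TunableField(level=1)
assert private.encrypt(b"alice") != private.encrypt(b"alice")        # repeats hidden
```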

    Dagstuhl Reports: Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture

    Massively Parallel Computation (MPC) is a model of computation widely believed to best capture realistic parallel computing architectures such as large-scale MapReduce and Hadoop clusters. Motivated by the fact that many data analytics tasks performed on these platforms involve sensitive user data, we initiate the theoretical exploration of how to leverage MPC architectures to enable efficient, privacy-preserving computation over massive data. Clearly, if a computation task does not lend itself to an efficient implementation on MPC even without security, then we cannot hope to compute it efficiently on MPC with security. We show, on the other hand, that any task that can be efficiently computed on MPC can also be securely computed with comparable efficiency. Specifically, we show the following results:
    - any MPC algorithm can be compiled to a communication-oblivious counterpart while asymptotically preserving its round and space complexity, where communication-obliviousness ensures that any network intermediary observing the communication patterns learns no information about the secret inputs;
    - assuming the existence of Fully Homomorphic Encryption with a suitable notion of compactness and other standard cryptographic assumptions, any MPC algorithm can be compiled to a secure counterpart that defends against an adversary who controls not only intermediate network routers but additionally up to a 1/3 - ε fraction of machines (for an arbitrarily small constant ε); moreover, this compilation preserves the round complexity tightly, and preserves the space complexity up to a multiplicative blowup related to the security parameter.
    As an initial exploration of this important direction, our work suggests new definitions and proposes novel protocols that blend algorithmic and cryptographic techniques.
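
    To make the first result concrete, here is a toy simulation of what communication-obliviousness means at the traffic level: in every round, every ordered pair of machines exchanges exactly one fixed-size packet, real or dummy, so an intermediary watching who talks to whom, and how much, learns nothing about the inputs. This models only the traffic shape; the paper's compiler additionally handles routing and payload encryption, and all constants below are our assumptions.

```python
import os

MSG_LEN = 64  # every packet on the wire has this exact size (our assumption)

def oblivious_round(machines, real_msgs):
    """Simulate one communication-oblivious round: every ordered pair of
    machines exchanges exactly one MSG_LEN-byte packet, real or dummy, so
    the pattern an eavesdropper sees is independent of the inputs.
    real_msgs: dict (src, dst) -> payload bytes shorter than MSG_LEN.
    In a real compiler the payloads would also be encrypted; this sketch
    models only the traffic shape."""
    wire = {}
    for s in machines:
        for d in machines:
            if s == d:
                continue
            payload = real_msgs.get((s, d), b"")   # empty payload = dummy packet
            assert len(payload) < MSG_LEN
            pad = os.urandom(MSG_LEN - 1 - len(payload))
            wire[(s, d)] = bytes([len(payload)]) + payload + pad  # 1-byte length header
    return wire

def receive(packet):
    """Strip the header and padding; an empty result marks a dummy packet."""
    return packet[1:1 + packet[0]]

wire = oblivious_round(range(3), {(0, 1): b"secret share"})
assert all(len(p) == MSG_LEN for p in wire.values())  # uniform traffic pattern
assert receive(wire[(0, 1)]) == b"secret share"
assert receive(wire[(1, 2)]) == b""                   # dummy packet
```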