
    Secure multi-party protocols under a modern lens

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 263-272).
    A secure multi-party computation (MPC) protocol for computing a function f allows a group of parties to jointly evaluate f over their private inputs, such that a computationally bounded adversary who corrupts a subset of the parties cannot learn anything beyond the inputs of the corrupted parties and the output of the function f. General MPC completeness theorems in the 1980s showed that every efficiently computable function can be evaluated securely in this fashion [Yao86, GMW87, CCD87, BGW88], assuming the existence of cryptography. In the following decades, progress has been made toward making MPC protocols efficient enough to be deployed in real-world applications. However, recent technological developments have brought with them a slew of new challenges, from new security threats to the question of whether protocols can scale up with the demands of distributed computations on massive data. Before one can make effective use of MPC, these challenges must be addressed. In this thesis, we focus on two lines of research toward this goal:
    * Protocols resilient to side-channel attacks. We consider a strengthened adversarial model where, in addition to corrupting a subset of parties, the adversary may leak partial information on the secret states of honest parties during the protocol. In the presence of such an adversary, we first focus on preserving the correctness guarantees of MPC computations. We then proceed to address security guarantees, using cryptography. We provide two results: an MPC protocol whose security provably "degrades gracefully" with the amount of leakage obtained by the adversary, and a second protocol which provides complete security assuming a (necessary) one-time preprocessing phase during which leakage cannot occur.
    * Protocols with scalable communication requirements. We devise MPC protocols with communication locality: namely, each party only needs to communicate with a small (polylog) number of dynamically chosen parties. Our techniques use digital signatures and extend particularly well to the case when the function f is a sublinear algorithm whose execution depends on o(n) of the n parties' inputs.
    by Elette Chantae Boyle. Ph.D.
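
    A minimal, hedged sketch (not from the thesis) of the textbook building block behind many MPC protocols, additive secret sharing: each party splits its input into random shares that individually reveal nothing, yet summing everyone's shares reconstructs the sum of the inputs without exposing any single input. All names and the modulus below are illustrative assumptions.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus, an assumption (not from the thesis)

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n_parties additive shares modulo P.
    Any n_parties - 1 of them are uniformly random and reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(inputs: list[int]) -> int:
    """Each party shares its input; party j adds up the j-th shares it
    receives and publishes only that partial sum, so only the total leaks."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    return sum(partial_sums) % P

if __name__ == "__main__":
    print(secure_sum([10, 20, 12]))  # -> 42, with no party seeing another's input
```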

    Complexity in Economic and Social Systems

    There is no term that better describes the essential features of human society than complexity. On various levels, from the decision-making processes of individuals, through the interactions between individuals leading to the spontaneous formation of groups and social hierarchies, up to the collective, herding processes that reshape whole societies, all these features share the property of irreducibility, i.e., they require a holistic, multi-level approach formed by researchers from different disciplines. This Special Issue aims to collect research studies that, by exploiting the latest advances in physics, economics, complex networks, and data science, take a step towards understanding these economic and social systems. The majority of submissions are devoted to financial market analysis and modeling, including the stock and cryptocurrency markets during the COVID-19 pandemic, systemic risk quantification and control, wealth condensation, the innovation-related performance of companies, and more. Looking more at societies, there are papers that deal with regional development, land speculation, and fake-news-fighting strategies, issues which are of central interest in contemporary society. On top of this, one of the contributions proposes a new, improved complexity measure.

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks of bounded size while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge. The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once sufficiently small, an initial partition is computed. Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level.
    An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time. The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem. Existing algorithms are either slow, sequential, and of high solution quality, or simple, fast, easy to parallelize, and of low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, but employed in two different ways. Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof.
    We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic and one based on label propagation. For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster-join conflicts on the fly. Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, and recursive partitioning with work-stealing, we obtain our first parallel multilevel framework. It is the fastest partitioner known and achieves medium-high quality, beating all parallel partitioners and coming close to the highest-quality sequential partitioner.
    Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential. We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening and, later, a fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms and also use the preprocessing and portfolio. This scheme is highly scalable and achieves the same quality as the highest-quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to the fine-grained uncoarsening.
    The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts. Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel. Beyond striving for the highest quality, we present a deterministically parallel partitioning framework, with deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small.
    All of our claims are validated through extensive experiments comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar. While it seems inevitable that, with ever-increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense. Similarly, techniques for shared-memory parallelism are important, both as soon as a coarse graph fits into memory and as local building blocks in the distributed algorithm.
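
    As a hedged illustration (not code from the dissertation or Mt-KaHyPar), the connectivity objective described above can be evaluated directly from a partition: for each hyperedge count the number of distinct blocks it touches, its connectivity lambda, and sum lambda - 1 over all hyperedges. The function and variable names below are hypothetical.

```python
def connectivity_metric(hyperedges: list[list[int]], block_of: list[int]) -> int:
    """Sum of (lambda(e) - 1) over all hyperedges, where lambda(e) is the number
    of distinct blocks spanned by hyperedge e. An edge contained in one block
    contributes 0; an edge spanning three blocks contributes 2."""
    total = 0
    for edge in hyperedges:
        blocks = {block_of[v] for v in edge}
        total += len(blocks) - 1
    return total

# Tiny example: vertices 0-4, with blocks {0,1,2} -> block 0 and {3,4} -> block 1.
edges = [[0, 1, 2], [2, 3], [0, 3, 4]]
print(connectivity_metric(edges, [0, 0, 0, 1, 1]))  # -> 0 + 1 + 1 = 2
```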

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Sparse Probabilistic Models: Phase Transitions and Solutions via Spatial Coupling

    This thesis is concerned with a number of novel uses of spatial coupling, applied to a class of probabilistic graphical models. These models include error correcting codes, random constraint satisfaction problems (CSPs), and statistical physics models called diluted spin systems. Spatial coupling is a technique initially developed for channel coding, which provides a recipe to transform a class of sparse linear codes into codes that are longer but more robust at high noise levels. In fact, it was observed that for coupled codes there are efficient algorithms whose decoding threshold is the optimal one, a phenomenon called threshold saturation. The main aim of this thesis is to explore alternative applications of spatial coupling. The goal is to study properties of uncoupled probabilistic models (not just coding) through the use of the corresponding spatially coupled models. The methods employed range from the mathematically rigorous to the purely experimental.
    We first explore spatial coupling as a proof technique in the realm of LDPC codes. The Maxwell conjecture states that for arbitrary BMS channels the optimal (MAP) threshold of the standard (uncoupled) LDPC codes is given by the Maxwell construction. We are able to prove the Maxwell conjecture for any smooth family of BMS channels by using (i) the fact that coupled codes perform optimally (which was already proved) and (ii) the fact that the optimal thresholds of the coupled and uncoupled LDPC codes coincide. The method is used to derive two more results, namely the equality of GEXIT curves above the MAP threshold and the exactness of the averaged Bethe free energy formula derived under the RS cavity method from statistical physics.
    As a second application of spatial coupling, we show how to derive novel bounds on the phase transitions in random constraint satisfaction problems, and possibly a general class of diluted spin systems. In the case of coloring, we investigate what happens to the dynamic and freezing thresholds. The phenomenon of threshold saturation is present also in this case, with the dynamic threshold moving to the condensation threshold and the freezing threshold moving to the colorability threshold. These claims are supported by experimental evidence, but in some cases, such as the saturation of the freezing threshold, it is possible to make part of the claim more rigorous. This allows in principle for the computation of thresholds by use of spatial coupling. The proof is in the spirit of the potential method introduced by Kumar, Young, Macris, and Pfister for LDPC codes.
    Finally, we explore how to find solutions in (uncoupled) probabilistic models. To test this, we start with a typical instance of random K-SAT (the base problem) and build a spatially coupled structure that locally inherits the structure of the base problem. The goal is to run an algorithm to find a suitable solution on the coupled structure and then "project" this solution to obtain a solution for the base problem. Experimental evidence points to the fact that it is indeed possible to use a form of unit-clause propagation (UCP), a simple algorithm, to achieve this goal. This approach also works in regimes where standard UCP fails on the base problem.
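
    As a hedged sketch of the "simple algorithm" mentioned above (standard unit-clause propagation, not the thesis's spatially coupled variant), the function below repeatedly satisfies clauses that have a single unassigned literal left and simplifies the formula; the names and clause encoding are illustrative assumptions.

```python
def unit_clause_propagation(clauses):
    """Clauses are lists of nonzero ints: literal v means variable v is True,
    -v means variable v is False. Repeatedly assign the literal forced by each
    unit clause; return the partial assignment, or None on a conflict."""
    assignment = {}
    changed = True
    while changed:
        changed = False
        remaining = []
        for clause in clauses:
            # Skip clauses already satisfied under the current assignment.
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue
            # Drop falsified literals (their variable is assigned the other way).
            reduced = [l for l in clause if abs(l) not in assignment]
            if not reduced:
                return None  # empty clause: conflict
            if len(reduced) == 1:  # unit clause forces its literal
                lit = reduced[0]
                assignment[abs(lit)] = lit > 0
                changed = True
            else:
                remaining.append(reduced)
        clauses = remaining
    return assignment

# (x1) AND (-x1 OR x2) AND (-x2 OR x3) forces x1 = x2 = x3 = True.
print(unit_clause_propagation([[1], [-1, 2], [-2, 3]]))
```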

    Permutation Admissibility in Shuffle-Exchange Networks with Arbitrary Number of Stages

    The set of input-output permutations that are routable through a multistage interconnection network without any conflict (known as the admissible set) plays an important role in determining the capability of the network. Recent works on the permutation admissibility problem of shuffle-exchange networks (SEN) of size N × N deal with (n + k) stages, where n = log2 N and k denotes the number of extra stages. For k = 0 or 1, O(Nn) algorithms exist to check whether a permutation is admissible, but for k ≥ 2 a polynomial-time solution is not yet known. The more general problem of finding the minimum number m of shuffle-exchange stages required to realize an arbitrary permutation, 1 ≤ m ≤ 2n − 1, is also open. In this paper, we present an O(Nn) algorithm that checks whether a given permutation P is admissible in an m-stage SEN, 1 ≤ m ≤ n, and determines in O(Nn log n) time the minimum number of stages m of shuffle-exchange required to realize P. Thus, a single-stage shuffle-..
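
    As a hedged illustration of the network structure under discussion (not the paper's admissibility algorithm), one stage of an N × N shuffle-exchange network applies the perfect shuffle, a left rotation of the n-bit destination address, optionally followed by an exchange that flips the least significant bit; the helper names below are hypothetical.

```python
def perfect_shuffle(addr: int, n_bits: int) -> int:
    """Perfect shuffle on an n_bits-bit address: rotate the bits left by one,
    so b_{n-1} b_{n-2} ... b_0 becomes b_{n-2} ... b_0 b_{n-1}."""
    msb = (addr >> (n_bits - 1)) & 1
    return ((addr << 1) & ((1 << n_bits) - 1)) | msb

def exchange(addr: int, cross: bool) -> int:
    """Optional exchange: flip the least significant bit when the 2x2
    switch handling this address pair is set to 'cross'."""
    return addr ^ 1 if cross else addr

# One shuffle-exchange stage of an 8 x 8 (n = 3) network, all switches 'straight':
n = 3
for src in range(1 << n):
    print(src, "->", exchange(perfect_shuffle(src, n), cross=False))
```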