    Shuffling and Unshuffling

    We consider various shuffling and unshuffling operations on languages and words and examine their closure properties. Although the main goal is to provide good and novel exercises and examples for undergraduate formal language theory classes, we also present some new results and pose some open problems.
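
    To make the central operation concrete, here is a minimal Python sketch of the binary shuffle of two words (our own illustration, not code from the paper): shuffle(u, v) enumerates every interleaving of u and v that preserves the internal symbol order of each word. The shuffle of two languages is then the union of shuffle(u, v) over all pairs of words drawn from them.

        def shuffle(u: str, v: str) -> set[str]:
            """All interleavings of u and v that keep each word's own symbol order."""
            if not u:
                return {v}
            if not v:
                return {u}
            # The first symbol of any interleaving comes from u or from v;
            # recursively shuffle whatever remains in either case.
            return ({u[0] + w for w in shuffle(u[1:], v)} |
                    {v[0] + w for w in shuffle(u, v[1:])})

        # The shuffle of "ab" and "cd" has C(4, 2) = 6 elements:
        print(sorted(shuffle("ab", "cd")))
        # ['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']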

    Maximum Likelihood Estimation and Graph Matching in Errorfully Observed Networks

    Given a pair of graphs with the same number of vertices, the inexact graph matching problem consists of finding a correspondence between the vertices of the two graphs that minimizes the total number of induced edge disagreements. We study this problem within a statistical framework in which one of the graphs is an errorfully observed copy of the other. We introduce a corrupting-channel model and show that, in this framework, the solution to the graph matching problem is a maximum likelihood estimator. We present necessary and sufficient conditions for consistency of this MLE, as well as a relaxed notion of consistency in which a negligible fraction of the vertices need not be matched correctly. These results are used to study matchability in several families of random graphs, including edge-independent models, random regular graphs, and small-world networks. We also use them to introduce measures of matching feasibility, and we validate the results experimentally on simulated and real-world networks.
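
    The objective is easy to state in code. The following brute-force Python sketch (ours, purely illustrative; the paper's estimator is not a brute-force search) counts the edge disagreements induced by a vertex correspondence and minimizes over all permutations, which is feasible only at toy sizes.

        import itertools
        import numpy as np

        def edge_disagreements(A, B, perm):
            """Edge disagreements between A and B after relabeling B's vertices by perm."""
            P = np.eye(len(perm), dtype=int)[list(perm)]    # permutation matrix
            # For undirected graphs each disagreeing pair is counted twice in the sum.
            return int(np.abs(A - P @ B @ P.T).sum()) // 2

        def brute_force_match(A, B):
            """Exhaustive inexact graph matching: O(n!) permutations, toy sizes only."""
            return min(itertools.permutations(range(A.shape[0])),
                       key=lambda p: edge_disagreements(A, B, p))

        # Sanity check: B is A with vertices relabeled, so a perfect match exists.
        rng = np.random.default_rng(0)
        A = np.triu(rng.integers(0, 2, (5, 5)), 1)
        A = A + A.T
        sigma = rng.permutation(5)
        B = A[np.ix_(sigma, sigma)]
        print(brute_force_match(A, B))   # a zero-disagreement correspondence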

    Divide-and-Conquer Distributed Learning: Privacy-Preserving Offloading of Neural Network Computations

    Machine learning has become a widely used technology for decision making on high-dimensional data. As dataset sizes have grown, so too have the neural networks needed to learn the complex patterns hidden within them, to the degree that it may be infeasible to train a model on a single device because of the computational or memory limitations of the underlying hardware. Purpose-built computing clusters for training large models are commonplace, while networks of heterogeneous devices are typically more accessible. With the rise of 5G networks and the growth of computation at the edge, and inspired by the success of the Folding@home project in harnessing crowdsourced computation, we find the scenario of crowdsourcing the computation required to train a neural network particularly appealing. Distributed learning promises to bridge the widening gap between single-device performance and the computational requirements of large-scale models, but current distributed learning techniques do not maintain the privacy of both the model and the input without an accuracy or computational tradeoff. In response, we present Divide-and-Conquer Learning (DCL), an approach that provides quantifiable privacy guarantees while offloading the computational burden of training to a network of devices. A user can divide the training computation of a neural network into neuron-sized computation tasks and distribute them to devices according to their available resources. The results are returned to the user and aggregated in an iterative process to obtain the final neural network model. To protect the privacy of the user's data and model, both the data and the neural network model are shuffled before the computation tasks are distributed to devices. Our strict adherence to the order of operations allows a user to verify the correctness of performed computations by assigning a task to multiple devices and cross-validating their results. This protects against network churn and detects faulty or misbehaving devices.
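
    The redundancy-based verification lends itself to a short illustration. This Python sketch is our own (all names are hypothetical, not DCL's actual interface): tasks are shuffled, each is handed to several devices, and a result is accepted only when all replicas agree.

        import random

        def shuffle_and_assign(tasks, devices, redundancy=2, seed=None):
            """Shuffle neuron-sized tasks; assign each to `redundancy` distinct devices."""
            rng = random.Random(seed)
            order = list(tasks)
            rng.shuffle(order)                  # hide task structure from the workers
            return {task: rng.sample(devices, redundancy) for task in order}

        def cross_validate(results):
            """results maps task -> {device: value}; accept only unanimous replicas."""
            verified, suspect = {}, []
            for task, replies in results.items():
                values = set(replies.values())
                if len(values) == 1:
                    verified[task] = values.pop()
                else:
                    suspect.append(task)        # replicas disagree: reissue, flag devices
            return verified, suspect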

    An Efficient Secure Three-Party Sorting Protocol with an Honest Majority

    We present a novel three-party sorting protocol secure against passive adversaries in the honest-majority setting. The protocol can easily be combined with other secure protocols that work on shared data, enabling different data analysis tasks such as data deduplication, set intersection, and computing percentiles. The new sorting protocol is based on radix sort. It is asymptotically better than previous sorting protocols because it does not need to shuffle the entire length of the items after each comparison step. We further improve the concrete efficiency with optimizations as well as novel subprotocols that are of independent interest. We implemented our sorting protocol with these optimizations and subprotocols, and our experiments show that the implementation is concretely fast. For example, sorting one million 20-bit items takes 4.6 seconds over a 1 Gbps connection. This enables a new set of applications on large-scale datasets, since previously known implementations handle only thousands of items in about 10 seconds.
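
    For reference, the plaintext radix sort the protocol builds on can be sketched in a few lines of Python (our own sketch; the secure protocol replaces each stable per-bit pass with operations on secret-shared values rather than sorting in the clear).

        def radix_sort(items, bits=20):
            """Least-significant-bit-first radix sort on `bits`-bit integer keys."""
            for b in range(bits):
                zeros = [x for x in items if not (x >> b) & 1]
                ones = [x for x in items if (x >> b) & 1]
                items = zeros + ones            # stable partition on bit b
            return items

        assert radix_sort([5, 3, 7, 0, 2], bits=3) == [0, 2, 3, 5, 7]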