
    Load-Balanced Fractional Repetition Codes

    We introduce load-balanced fractional repetition (LBFR) codes, a strengthening of fractional repetition (FR) codes. LBFR codes have the additional property that multiple node failures can be sequentially repaired by downloading no more than one block from any other node. This allows for better use of the network and can additionally reduce the number of disk reads needed to repair multiple nodes. We characterize LBFR codes in terms of their adjacency graphs, and use this characterization to present explicit constructions of LBFR codes with storage capacity comparable to that of existing FR codes. Surprisingly, in some parameter regimes, our constructions of LBFR codes match the parameters of the best constructions of FR codes.
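
    To make the load-balancing property concrete, here is a minimal sketch (not the paper's construction) of a toy replication-factor-2 FR code in its adjacency-graph view: storage nodes are vertices, coded blocks are edges, and each node stores the blocks incident to it. The graph, repair routine, and example failure pattern are all invented for illustration.

```python
# Toy fractional repetition code with replication factor rho = 2:
# nodes are the vertices of a graph, blocks are its edges, and each
# node stores the alpha blocks incident to it. Example graph: a 6-cycle
# (alpha = 2). Purely illustrative; not a construction from the paper.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
nodes = range(6)
stores = {v: [e for e in edges if v in e] for v in nodes}

def sequential_repair_load(failed_order):
    """Repair the failed nodes one at a time, downloading each lost block
    from the other node that stores it, and count how many blocks every
    surviving node serves."""
    downloads = {v: 0 for v in nodes}
    repaired = set()
    for f in failed_order:
        for e in stores[f]:
            helper = e[0] if e[1] == f else e[1]
            # assume the helper is alive (or already repaired); a block
            # shared by two simultaneously failed nodes would need the
            # outer erasure code, which this toy sketch ignores
            assert helper not in set(failed_order) - repaired
            downloads[helper] += 1
        repaired.add(f)
    return downloads

# Repairing the non-adjacent nodes 0 and 3 touches four distinct helpers,
# each serving exactly one block -- the access pattern LBFR codes guarantee.
print(sequential_repair_load([0, 3]))   # {0: 0, 1: 1, 2: 1, 3: 0, 4: 1, 5: 1}
```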

    Constructions of Batch Codes via Finite Geometry

    A primitive k-batch code encodes a string x of length n into a string y of length N, such that each multiset of k symbols from x has k mutually disjoint recovering sets in y. We develop new explicit and random coding constructions of linear primitive batch codes based on finite geometry. In some parameter regimes, our proposed codes have lower redundancy than previously known batch codes. Comment: 7 pages, 1 figure, 1 table
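
    As a rough illustration of the primitive batch-code property (not one of the paper's finite-geometry constructions), the sketch below takes the classical toy encoding y = (x1, x2, x1+x2) over GF(2) and brute-force-verifies that every multiset of k = 2 requested message symbols has k mutually disjoint recovering sets. All code and parameter choices here are hypothetical.

```python
from itertools import combinations, combinations_with_replacement

# Generator matrix of a tiny linear code over GF(2): y = (x1, x2, x1+x2).
# Check the primitive k-batch property for k = 2: every multiset of 2
# requested message symbols must have 2 mutually disjoint recovering sets
# among the N = 3 stored symbols. Illustrative toy example only.
G = [[1, 0, 1],
     [0, 1, 1]]          # column j is stored symbol y_j as a GF(2) vector
n, N, k = 2, 3, 2

def recovers(subset, i):
    """True if the stored symbols indexed by `subset` sum (over GF(2))
    to the i-th unit vector, i.e. they recover message symbol x_i."""
    total = [0] * n
    for j in subset:
        for r in range(n):
            total[r] ^= G[r][j]
    return total == [1 if r == i else 0 for r in range(n)]

def _disjoint_assignments(request, available):
    """Yield one list of mutually disjoint recovering sets per feasible
    assignment for the requested symbols, using only `available` indices."""
    if not request:
        yield []
        return
    i, rest = request[0], request[1:]
    for size in range(1, len(available) + 1):
        for subset in combinations(sorted(available), size):
            if recovers(subset, i):
                for tail in _disjoint_assignments(rest, available - set(subset)):
                    yield [set(subset)] + tail

def is_k_batch():
    for request in combinations_with_replacement(range(n), k):
        if next(_disjoint_assignments(request, frozenset(range(N))), None) is None:
            return False
    return True

print(is_k_batch())   # True: (x1, x2, x1+x2) is a primitive 2-batch code
```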

    Multilevel Artificial Neural Network Training for Spatially Correlated Learning

    Multigrid modeling algorithms are a technique used to accelerate relaxation models running on a hierarchy of similar graph-like structures. We introduce and demonstrate a new method for training neural networks that uses multilevel methods. Using an objective function derived from a graph-distance metric, we perform orthogonally constrained optimization to find optimal prolongation and restriction maps between graphs. We compare and contrast several methods for performing this numerical optimization, and additionally present new theoretical results on upper bounds for this type of objective function. Once calculated, these optimal maps between graphs form the core of Multiscale Artificial Neural Network (MsANN) training, a new procedure we present that simultaneously trains a hierarchy of neural network models of varying spatial resolution. Parameter information is passed between members of this hierarchy according to standard coarsening and refinement schedules from the multiscale modeling literature. In our machine learning experiments, these models learn faster than default training, achieving a comparable level of error with an order of magnitude fewer training examples. Comment: Manuscript (24 pages) and Supplementary Material (4 pages). Updated January 2019 to reflect the new formulation of the MsANN structure and the new training procedure.
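
    For readers unfamiliar with multilevel parameter transfer, the NumPy sketch below shows the general mechanics on a single dense layer: a prolongation matrix P lifts coarse-model weights to the fine model, and its transpose acts as the restriction. The matrices, layer sizes, and schedule are invented for illustration; the paper's MsANN procedure obtains its maps by orthogonally constrained optimization of a graph-distance objective, which this sketch does not perform.

```python
import numpy as np

# Minimal sketch of multilevel weight transfer for one dense layer.
# Fine layer: 8 -> 8 weights; coarse layer: 4 -> 4 weights.
# P is a simple "each coarse unit feeds two fine units" prolongation with
# orthonormal columns; the real MsANN maps are learned, not hand-picked.
rng = np.random.default_rng(0)

P = np.kron(np.eye(4), np.ones((2, 1))) / np.sqrt(2.0)   # shape (8, 4)
R = P.T                                                   # restriction = transpose of prolongation

W_fine = rng.standard_normal((8, 8)) * 0.1
W_coarse = rng.standard_normal((4, 4)) * 0.1

def restrict(W):
    """Project fine-level weights onto the coarse level (coarsening step)."""
    return R @ W @ P

def prolong(W):
    """Lift coarse-level weights back to the fine level (refinement step)."""
    return P @ W @ R

# One V-cycle-like pass: push the fine weights down, stand in for training
# the cheap coarse model, then add the coarse correction back to the fine model.
W_coarse = restrict(W_fine)
W_coarse -= 0.01 * rng.standard_normal(W_coarse.shape)    # placeholder for coarse training updates
W_fine = W_fine + prolong(W_coarse - restrict(W_fine))    # coarse-grid correction
print(W_fine.shape)   # (8, 8): fine model updated using coarse-level information
```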