
    On the existence of perfect codes


    Solution generating in scalar-tensor theories with a massless scalar field and stiff perfect fluid as a source

    We present a method for generating solutions in some scalar-tensor theories with a minimally coupled massless scalar field or irrotational stiff perfect fluid as a source. The method is based on the group of symmetries of the dilaton-matter sector in the Einstein frame. In the case of Barker's theory the dilaton-matter sector possesses an SU(2) group of symmetries. In the case of Brans-Dicke theory and the theory with "conformal coupling", the dilaton-matter sector has SL(2,R) as a group of symmetries. We describe an explicit algorithm for generating exact scalar-tensor solutions from solutions of the Einstein-minimally-coupled-scalar-field equations by employing the nonlinear action of the symmetry group of the dilaton-matter sector. In the general case, when the Einstein frame dilaton-matter sector may not possess nontrivial symmetries, we also present a solution generating technique which allows us to construct exact scalar-tensor solutions starting with the solutions of the Einstein-minimally-coupled-scalar-field equations. As an illustration of the general techniques, examples of explicit exact solutions are constructed. In particular, we construct inhomogeneous cosmological scalar-tensor solutions whose curvature invariants are everywhere regular in space-time. A generalization of the method for scalar-tensor-Maxwell gravity is outlined.
    Comment: 10 pages, RevTeX; v2 extended version, new parts added and some parts rewritten, results presented more concisely, some simple examples of homogeneous solutions replaced with new regular inhomogeneous solutions, typos corrected, references and acknowledgements added, accepted for publication in Phys.Rev.
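    As a generic illustration of the kind of symmetry the abstract invokes (the specific field parametrization below is a hypothetical choice for exposition, not necessarily the paper's), an SL(2,R) symmetry of a dilaton-matter sigma model typically acts by fractional-linear transformations on a complex combination of the dilaton and the matter potential, while leaving the Einstein-frame metric fixed:

    ```latex
    % Schematic SL(2,R) solution-generating action (illustrative parametrization):
    % \varphi is the Einstein-frame dilaton, \sigma the irrotational stiff-fluid
    % (or massless scalar) potential; \tau packages them into one complex field.
    \tau = \sigma + i\, e^{-2\varphi},
    \qquad
    \tau \longrightarrow \tau' = \frac{a\tau + b}{c\tau + d},
    \qquad
    \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{R}),
    \quad ad - bc = 1 .
    ```

    Because the Einstein-frame metric is invariant under such a transformation, each seed solution of the Einstein-minimally-coupled-scalar-field equations is mapped to a multi-parameter family of new exact solutions, which is the mechanism behind the generating technique described above.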

    A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication

    This paper has two contributions. First, we propose a novel coded matrix multiplication technique called Generalized PolyDot codes that advances on existing methods for coded matrix multiplication under storage and communication constraints. This technique uses "garbage alignment," i.e., aligning computations in coded computing that are not a part of the desired output. Generalized PolyDot codes bridge between Polynomial codes and MatDot codes, trading off between recovery threshold and communication costs. Second, we demonstrate that Generalized PolyDot can be used for training large Deep Neural Networks (DNNs) on unreliable nodes prone to soft-errors. This requires us to address three additional challenges: (i) prohibitively large overhead of coding the weight matrices in each layer of the DNN at each iteration; (ii) nonlinear operations during training, which are incompatible with linear coding; and (iii) not assuming presence of an error-free master node, requiring us to architect a fully decentralized implementation without any "single point of failure." We allow all primary DNN training steps, namely, matrix multiplication, nonlinear activation, Hadamard product, and update steps as well as the encoding/decoding to be error-prone. We consider the case of mini-batch size B = 1, as well as B > 1, leveraging coded matrix-vector products and matrix-matrix products respectively. The problem of DNN training under soft-errors also motivates an interesting, probabilistic error model under which a real-number (P, Q) MDS code is shown to correct P - Q - 1 errors with probability 1, as compared to ⌊(P - Q)/2⌋ for the more conventional, adversarial error model.
    We also demonstrate that our proposed strategy can provide unbounded gains in error tolerance over a competing replication strategy and a preliminary MDS-code-based strategy for both these error models.
    Comment: Presented in part at the IEEE International Symposium on Information Theory 2018 (Submission Date: Jan 12 2018); Currently under review at the IEEE Transactions on Information Theory
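    The quantitative claim in the abstract is easy to check with arithmetic. The sketch below (helper names are ours, not the paper's) compares the number of correctable errors for a (P, Q) MDS code under the two error models the abstract contrasts:

    ```python
    # Error-tolerance comparison from the abstract: a real-number (P, Q) MDS
    # code corrects up to P - Q - 1 errors with probability 1 under the
    # probabilistic error model, versus floor((P - Q) / 2) under the
    # conventional adversarial model.

    def adversarial_tolerance(P: int, Q: int) -> int:
        """Worst-case correctable errors for a (P, Q) MDS code."""
        return (P - Q) // 2

    def probabilistic_tolerance(P: int, Q: int) -> int:
        """Errors correctable with probability 1 under the probabilistic model."""
        return P - Q - 1

    # Example: P = 12 coded nodes for a computation of dimension Q = 4.
    P, Q = 12, 4
    print(adversarial_tolerance(P, Q))    # -> 4
    print(probabilistic_tolerance(P, Q))  # -> 7
    ```

    The gap between the two thresholds, (P - Q - 1) - ⌊(P - Q)/2⌋, grows linearly in the redundancy P - Q, which is what makes the probabilistic model attractive for soft-error-prone hardware.
    
    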