    Unifying metric approach to the triple parity

    Abstract: The even-odd parity problem is hard for neural networks to handle because they assume a finite-dimensional vector space; typically, the size of the network must grow as the size of the problem grows. The triple parity problem is harder still. In this paper, a method is proposed for supervised and unsupervised learning that classifies bit strings of arbitrary length by their triple parity. The learner is modeled by two formal concepts: a transformation system and stability optimization. Even though only a small set of short examples was used in the training stage, all bit strings of any length were classified correctly in the online recognition stage. The proposed learner successfully devised a way, by means of metric calculations, to classify bit strings of any length according to their triple parity. The system acquired the concept of counting, dividing, and then taking the remainder by autonomously evolving a set of string-editing rules, along with appropriate weights, to solve this difficult problem.
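
    The target concept can be stated compactly. A minimal sketch, assuming "triple parity" means the count of 1-bits modulo 3 (by analogy with even-odd parity); this shows only the classification the learner converges on, not the paper's evolved string-editing system, and the function name is hypothetical:

        # Minimal sketch of the target concept, not the paper's learner:
        # triple parity = (number of 1-bits) mod 3, defined for any length.
        def triple_parity(bits: str) -> int:
            # Count the 1-bits, divide by 3, keep the remainder, i.e. the
            # "counting, dividing, taking the remainder" concept above.
            return bits.count("1") % 3

        # Works for bit strings of arbitrary length:
        assert [triple_parity(s) for s in ("", "1", "101", "111")] == [0, 1, 2, 0]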

    Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments

    Data centres that use consumer-grade disk drives and distributed peer-to-peer systems are unreliable environments in which to archive data without enough redundancy. Most redundancy schemes are not completely effective at providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements, and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. The two other parameters increase fault tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability and durability, and offer additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical codes. Remarkably, they excel at code locality; hence, they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios.
    Comment: 12 pages and 13 figures. This work was partially supported by Swiss National Science Foundation SNSF Doc.Mobility 162014. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)
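
    The full construction is in the paper; below is a minimal, hypothetical sketch of the basic entanglement step on a single strand (alpha and the two other parameters add further strands and are not modeled here). Each parity is the XOR of a new data block with the previous parity, so a lost block can be rebuilt from its two neighbouring parities; adding strands multiplies such repair paths:

        # Hypothetical single-strand sketch, not the full alpha entanglement code.
        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def entangle(blocks: list[bytes]) -> list[bytes]:
            # p[i] = d[i] XOR p[i-1], seeded with an all-zero parity.
            parities, prev = [], bytes(len(blocks[0]))
            for d in blocks:
                prev = xor(d, prev)
                parities.append(prev)
            return parities

        def recover(i: int, parities: list[bytes]) -> bytes:
            # d[i] = p[i-1] XOR p[i]: a lost data block is repaired from its
            # neighbouring parities alone (good code locality).
            prev = bytes(len(parities[0])) if i == 0 else parities[i - 1]
            return xor(prev, parities[i])

        blocks = [b"data-a", b"data-b", b"data-c"]
        ps = entangle(blocks)
        assert recover(1, ps) == b"data-b"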

    Strong Homotopy Lie Algebras, Generalized Nahm Equations and Multiple M2-branes

    We review various generalizations of the notion of Lie algebras, in particular those appearing in the recently proposed Bagger-Lambert-Gustavsson model, and study their interrelations. We find that Filippov's n-Lie algebras are a special case of strong homotopy Lie algebras. Furthermore, we define a class of homotopy Maurer-Cartan equations which contains both the Nahm and the Basu-Harvey equations as special cases. Finally, we show how the super Yang-Mills equations describing a Dp-brane and the Bagger-Lambert-Gustavsson equations supposedly describing M2-branes can likewise be rewritten as homotopy Maurer-Cartan equations.
    Comment: 1+28 pages
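
    For orientation, the generic homotopy Maurer-Cartan equation of an L-infinity algebra with higher brackets \mu_n can be written as below, in one common sign convention and up to normalization; the Nahm and Basu-Harvey equations then correspond to binary and ternary brackets respectively. The paper's precise conventions are not reproduced here:

        % Homotopy Maurer-Cartan equation for a degree-1 element a:
        \sum_{n \geq 1} \frac{(-1)^{n(n+1)/2}}{n!}\, \mu_n(a, \dots, a) = 0

        % Binary-bracket special case: the Nahm equation,
        \frac{dX^i}{ds} = \tfrac{1}{2}\, \epsilon^{ijk}\, [X^j, X^k]

        % Ternary (3-Lie algebra) special case: the Basu-Harvey equation,
        \frac{dX^i}{ds} + \tfrac{1}{3!}\, \epsilon^{ijkl}\, [X^j, X^k, X^l] = 0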

    Is Quantum Gravity a Chern-Simons Theory?

    We propose a model of quantum gravity in arbitrary dimensions defined in terms of the BV quantization of a supersymmetric, infinite-dimensional matrix model. This gives an (AKSZ-type) Chern-Simons theory whose gauge algebra is the space of observables of a quantum mechanical Hilbert space H. The model is motivated by previous attempts to formulate gravity in terms of non-commutative, phase-space field theories, as well as by the Fefferman-Graham curved analog of Dirac spaces for conformally invariant wave equations. The field equations are flat connection conditions, amounting to zero-curvature and parallelism conditions on operators acting on H. This matrix-type model may give a better-defined setting for a quantum gravity path integral. We demonstrate that its underlying physics is a summation over Hamiltonians labeled by a conformal class of metrics, and thus a sum over causal structures. This in turn gives a model summing over fluctuating metrics plus a tower of additional modes; we speculate that these could yield improved UV behavior.
    Comment: 22 pages, LaTeX, 3 figures, references added, version to appear in PR
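
    Schematically, the "flat connection conditions" are the standard Chern-Simons equations of motion; a hedged sketch of the AKSZ-type structure follows, with signs and normalizations as in ordinary Chern-Simons theory rather than as fixed by the paper:

        % Chern-Simons-type action for a connection A valued, in this model,
        % in the algebra of observables on H (schematic normalization):
        S[A] = \int \langle A, dA \rangle + \tfrac{1}{3} \langle A, [A, A] \rangle

        % Its field equation is the zero-curvature (flat connection) condition:
        F = dA + \tfrac{1}{2} [A, A] = 0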

    Noncommutative geometry, Lorentzian structures and causality

    The theory of noncommutative geometry provides an interesting mathematical background for developing new physical models. In particular, it allows one to describe the classical Standard Model coupled to Euclidean gravity. However, noncommutative geometry has mainly been developed using the Euclidean signature, and the typical Lorentzian aspects of space-time, the causal structure in particular, are not taken into account. We present an extension of noncommutative geometry à la Connes suitable for the accommodation of Lorentzian structures. In this context, we show that it is possible to recover the notion of causality from purely algebraic data. We explore the causal structure of a simple toy model based on an almost commutative geometry, and we show that the coupling between the space-time and an internal noncommutative space establishes a new 'speed of light constraint'.
    Comment: 24 pages, review article, in 'Mathematical Structures of the Universe', eds. M. Eckstein, M. Heller, S.J. Szybka, CCPress 201
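
    The Riemannian prototype for recovering geometry from purely algebraic data is Connes' spectral distance formula; the Lorentzian programme described in this abstract replaces it with a characterization of the causal order. Schematically, and with the precise Lorentzian definitions deferred to the paper:

        % Riemannian prototype: geodesic distance from a spectral triple (A, H, D),
        d(p, q) = \sup \{\, |a(p) - a(q)| : a \in \mathcal{A},\ \|[D, a]\| \leq 1 \,\}

        % Lorentzian analogue (schematic): two states are causally ordered iff
        % they agree in order on a distinguished cone C of "causal" elements,
        \omega \preceq \eta \iff \omega(a) \leq \eta(a) \quad \forall\, a \in \mathcal{C}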