26,822 research outputs found

    Non-traditional Calculations of Elementary Mathematical Operations: Part 1. Multiplication and Division

    Various computing systems are sets of functional units and processors that work together and exchange data when required. In most cases, data transmission is organized so that any node of the system can be connected to any other node. A computer system therefore consists of components that perform arithmetic operations and an integrated data communication system that provides information exchange between the nodes and combines them into a single whole. When designing such systems, problems arise if:
    – the computing nodes cannot start and finish data processing simultaneously within a given time interval;
    – the data-processing procedures in the nodes do not start and do not end at a fixed time;
    – the numbers of computational nodes at the inputs and outputs of the system differ.
    This article proposes an unconventional approach to constructing a mathematical model of adaptive-quantum computation of the arithmetic operations of multiplication and division, using the principle of predetermined random self-organization proposed by Ashby in 1966 together with the method of the dynamics of averages and adaptive integration of a system of logical-differential equations for the number-average states of the particle sets S1 and S2. This would make it easier to solve some of the problems listed above.
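
    The article's equations are not given in this abstract, so the sketch below is only a generic, hypothetical illustration of the method of the dynamics of averages it invokes: two particle sets S1 and S2 whose number-average populations evolve under assumed transition intensities lam12 and lam21, integrated with a simple forward-Euler step. None of the numerical values are taken from the article.

```python
import numpy as np

# Hypothetical illustration of the "dynamics of averages": two particle sets
# S1 and S2 with ASSUMED transition intensities lam12 (S1 -> S2) and
# lam21 (S2 -> S1). The number-average populations N1(t), N2(t) then obey
#   dN1/dt = -lam12*N1 + lam21*N2,   dN2/dt = lam12*N1 - lam21*N2.
# Rates, initial values and the Euler step are illustrative, not from the paper.
lam12, lam21 = 0.4, 0.1
N = np.array([100.0, 0.0])           # initial averages of S1 and S2
dt, steps = 0.01, 2000

for _ in range(steps):
    dN1 = -lam12 * N[0] + lam21 * N[1]
    N += dt * np.array([dN1, -dN1])  # total population is conserved

print("steady-state averages:", N)   # tends to the lam21/(lam12+lam21) split
```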

    Numerical analysis of least squares and perceptron learning for classification problems

    This work presents a study of regularized and non-regularized versions of perceptron learning and least squares algorithms for classification problems. Fréchet derivatives for the regularized least squares and perceptron learning algorithms are derived. Different Tikhonov regularization techniques for choosing the regularization parameter are discussed. Decision boundaries obtained by the non-regularized algorithms for classifying simulated and experimental data sets are analyzed.
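
    As a hedged, concrete illustration of the two families of methods compared here (not the authors' code, and using a synthetic two-class set in place of their simulated and experimental data), the NumPy sketch below fits a Tikhonov-regularized least squares classifier and a classic perceptron; the regularization parameter lam and learning rate eta are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data in 2D (stand-in for the paper's data sets).
n = 100
X = np.vstack([rng.normal(loc=[-1.0, -1.0], scale=0.7, size=(n, 2)),
               rng.normal(loc=[+1.0, +1.0], scale=0.7, size=(n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])
Xb = np.hstack([X, np.ones((2 * n, 1))])          # append a bias column

# Tikhonov-regularized least squares: w = (X^T X + lam*I)^{-1} X^T y
lam = 0.1
w_ls = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

# Classic perceptron rule: update only on misclassified points.
w_p, eta = np.zeros(Xb.shape[1]), 0.1
for _ in range(100):                              # fixed number of epochs
    for xi, yi in zip(Xb, y):
        if yi * (xi @ w_p) <= 0:
            w_p += eta * yi * xi

for name, w in [("regularized LS", w_ls), ("perceptron", w_p)]:
    print(name, "training accuracy:", np.mean(np.sign(Xb @ w) == y))
```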

    A New Approach to Numerical Quantum Field Theory

    In this note we present a new numerical method for solving Lattice Quantum Field Theory. This Source Galerkin Method is fundamentally different in concept and application from Monte Carlo based methods, which have been the primary mode of numerical solution in Quantum Field Theory. Source Galerkin is not probabilistic and treats fermions and bosons in an equivalent manner. Comment: 10 pages, LaTeX, BROWN-HET-908

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages
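
    As a small, hedged illustration of the tensor train (TT) format discussed in this monograph (a textbook TT-SVD with a fixed maximum rank, not the authors' algorithms or their error-controlled truncation), the NumPy sketch below decomposes a low-rank test tensor into TT cores and checks the reconstruction error.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Sequential-SVD (TT-SVD) decomposition of a dense tensor into TT cores.

    Core k has shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1; singular values
    beyond `max_rank` are discarded at every unfolding.
    """
    shape, d = tensor.shape, tensor.ndim
    cores, r_prev, mat = [], 1, tensor
    for k in range(d - 1):
        mat = mat.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = S[:r, None] * Vt[:r, :]             # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a dense tensor (to check the error)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape(full.shape[1:-1])

# Hypothetical test tensor with TT ranks <= 2: a sum of two rank-1 terms.
rng = np.random.default_rng(0)
vecs = [rng.normal(size=(2, n)) for n in (6, 7, 8, 9)]
T = np.einsum('ai,aj,ak,al->ijkl', *vecs)

cores = tt_svd(T, max_rank=2)
err = np.linalg.norm(tt_to_full(cores) - T) / np.linalg.norm(T)
print("TT ranks:", [c.shape[2] for c in cores[:-1]], "relative error:", err)
```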
