5,153 research outputs found

    Constructing exact representations of quantum many-body systems with deep neural networks

    Full text link
    We develop a constructive approach to generating artificial neural networks that represent the exact ground states of a large class of many-body lattice Hamiltonians. It is based on the deep Boltzmann machine architecture, in which two layers of hidden neurons mediate quantum correlations among the physical degrees of freedom in the visible layer. The approach reproduces the exact imaginary-time Hamiltonian evolution and is completely deterministic; compact and exact network representations of the ground states are therefore obtained without stochastic optimization of the network parameters. The number of neurons grows linearly with the system size and with the total imaginary time. Physical quantities can be measured by sampling configurations of both the physical and the neuron degrees of freedom. We provide specific examples for the transverse-field Ising and Heisenberg models by implementing efficient sampling. As a compact, classical representation of many-body quantum systems, our approach is an alternative to the standard path integral, and it is also potentially useful for systematically improving numerical approaches based on the restricted Boltzmann machine architecture.
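    This is not the paper's constructive algorithm, but a minimal sketch of the underlying object: a two-hidden-layer (deep) Boltzmann machine defining an unnormalized wavefunction amplitude over visible spins, evaluated here by brute-force summation over hidden configurations for a toy system. All names (`dbm_amplitude`, the weight matrices `W1`, `W2`, the biases `a`, `b`, `c`) are illustrative assumptions, not the paper's notation.

```python
import itertools
import numpy as np

def dbm_amplitude(v, a, W1, W2, b, c):
    """Unnormalized DBM amplitude psi(v) for a visible configuration v.

    Sums exp(a.v + h.W1.v + b.h + h.W2.d + c.d) over the first hidden
    layer h and the deep hidden layer d (binary units). Brute force and
    exponential in the hidden-layer sizes: pedagogy only, not the
    deterministic construction described in the abstract.
    """
    nh, nd = W1.shape[0], W2.shape[1]
    total = 0.0
    for h_bits in itertools.product([0, 1], repeat=nh):
        h = np.array(h_bits, dtype=float)
        for d_bits in itertools.product([0, 1], repeat=nd):
            d = np.array(d_bits, dtype=float)
            energy = a @ v + h @ W1 @ v + b @ h + h @ W2 @ d + c @ d
            total += np.exp(energy)
    return total
```

For example, with all couplings set to zero every one of the 2^(nh+nd) hidden configurations contributes exp(0) = 1, so the amplitude is simply 2^(nh+nd) for any visible configuration.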

    Active Calculus 1.0

    Get PDF
    Active Calculus is different from most existing calculus texts in at least the following ways: the text is free for download by students and instructors in PDF format; in the electronic format, graphics are in full color and there are live HTML links to Java applets; the text is open source, and interested instructors can gain access to the original source files upon request; the style of the text requires students to be active learners, with very few worked examples and instead three to four activities per section that engage students in connecting ideas, solving problems, and developing understanding of key calculus concepts; each section begins with motivating questions, a brief introduction, and a preview activity, all designed to be read and completed before class; and the exercises are few in number and challenging in nature.

    Theoretical Interpretations and Applications of Radial Basis Function Networks

    Get PDF
    Medical applications often use Radial Basis Function Networks (RBFNs) simply as Artificial Neural Networks. However, RBFNs are Knowledge-Based Networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
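    As a minimal illustration of the shared structure behind these interpretations, the sketch below fits an RBFN in its simplest regression reading: a Gaussian design matrix followed by a linear least-squares solve for the output weights. The function names and the choice of centers-at-training-points are assumptions for the example, not from the survey.

```python
import numpy as np

def rbf_design(X, centers, gamma):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_rbfn(X, y, centers, gamma):
    """Least-squares fit of the linear output weights on the RBF features."""
    Phi = rbf_design(X, centers, gamma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbfn(X, centers, gamma, w):
    """Evaluate the fitted RBFN at new inputs."""
    return rbf_design(X, centers, gamma) @ w
```

Viewed as a Regularization Network or Kernel Estimator, only the choice of centers, the width `gamma`, and the penalty on the weights change; the two-stage structure (nonlinear localized features, then a linear readout) stays the same.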

    Renormalization: an advanced overview

    Full text link
    We present several approaches to renormalization in QFT: the multi-scale analysis in perturbative renormalization, the functional methods à la Wetterich equation, and the loop-vertex expansion in non-perturbative renormalization. While each of these is quite well established, they go beyond standard QFT textbook material and may be little known to specialists of the other approaches. This review is aimed at bridging this gap. Comment: Review, 130 pages, 33 figures; v2: misprints corrected, refs. added, minor improvements; v3: some changes to sect. 5, refs. added.

    The Hyperdimensional Transform for Distributional Modelling, Regression and Classification

    Full text link
    Hyperdimensional computing (HDC) is an increasingly popular computing paradigm with immense potential for future intelligent applications. Although the main ideas already took form in the 1990s, HDC recently gained significant attention, especially in the field of machine learning and data science. Next to efficiency, interoperability, and explainability, HDC offers attractive properties for generalization, as it can be seen as an attempt to combine connectionist ideas from neural networks with symbolic aspects. In recent work, we introduced the hyperdimensional transform, revealing deep theoretical foundations for representing functions and distributions as high-dimensional holographic vectors. Here, we present the power of the hyperdimensional transform to a broad data science audience. We use the hyperdimensional transform as a theoretical basis and provide insight into state-of-the-art HDC approaches for machine learning. We show how existing algorithms can be modified and how this transform can lead to a novel, well-founded toolbox. Next to the standard regression and classification tasks of machine learning, our discussion includes various aspects of statistical modelling, such as representation, learning and deconvolving distributions, sampling, Bayesian inference, and uncertainty estimation.
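    The hyperdimensional transform itself is beyond this listing, but the core HDC primitives it builds on are easy to sketch: random bipolar hypervectors are quasi-orthogonal in high dimensions, elementwise multiplication binds two vectors, and elementwise addition bundles (superposes) a set so that each member stays recognizable by cosine similarity. The dimension and function names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimension makes independent random vectors quasi-orthogonal

def hv():
    """Random bipolar hypervector with entries in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise product; the result is dissimilar to both inputs."""
    return a * b

def bundle(*vs):
    """Bundling: elementwise sum; the result stays similar to each input."""
    return np.sum(vs, axis=0)

def cos(a, b):
    """Cosine similarity between two hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

Bundling three independent hypervectors gives each member a similarity of roughly 1/sqrt(3) ≈ 0.58 to the bundle, while an unrelated random vector scores near zero; this gap is what makes holographic set representations robust.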