13,119 research outputs found

    AI Feynman: a Physics-Inspired Method for Symbolic Regression

    A core challenge for both physics and artificial intelligence (AI) is symbolic regression: finding a symbolic expression that matches data from an unknown function. Although this problem is likely to be NP-hard in principle, functions of practical interest often exhibit symmetries, separability, compositionality and other simplifying properties. In this spirit, we develop a recursive multidimensional symbolic regression algorithm that combines neural network fitting with a suite of physics-inspired techniques. We apply it to 100 equations from the Feynman Lectures on Physics, and it discovers all of them, while previous publicly available software cracks only 71; for a more difficult test set, we improve the state-of-the-art success rate from 15% to 90%.
    Comment: 15 pages, 2 figs. Our code is available at https://github.com/SJ001/AI-Feynman and our Feynman Symbolic Regression Database for benchmarking can be downloaded at https://space.mit.edu/home/tegmark/aifeynman.htm
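
    The abstract's key lever is that simplifying properties such as separability can be detected numerically and used to split the regression into smaller subproblems. Below is a minimal, hypothetical sketch of one such check (not the authors' released code, which lives at the GitHub link above): additive separability f(x, y) = g(x) + h(y) holds exactly when f(x1, y1) + f(x2, y2) = f(x1, y2) + f(x2, y1) for all probe points. In an AI Feynman-style pipeline the probed function would be a neural network fitted to the data; here a toy function stands in.

```python
# Minimal sketch of one physics-inspired simplification for symbolic
# regression: testing for additive separability f(x, y) = g(x) + h(y).
# Hypothetical example, not the authors' implementation: in AI Feynman a
# neural-network fit of the data would be probed instead of a known function.
import numpy as np

def is_additively_separable(f, n_probes=1000, tol=1e-6, seed=0):
    """Check f(x1,y1) + f(x2,y2) == f(x1,y2) + f(x2,y1) at random probe
    points; the identity holds everywhere iff f(x, y) = g(x) + h(y)."""
    rng = np.random.default_rng(seed)
    x1, x2, y1, y2 = rng.uniform(-1, 1, size=(4, n_probes))
    residual = f(x1, y1) + f(x2, y2) - f(x1, y2) - f(x2, y1)
    return np.max(np.abs(residual)) < tol

# Separable: the problem splits into two 1-D regressions, g(x) and h(y).
print(is_additively_separable(lambda x, y: np.sin(x) + y**2))   # True
# Not separable: the recursion must try a different simplification.
print(is_additively_separable(lambda x, y: np.sin(x * y)))      # False
```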

    Noise-enhanced computation in a model of a cortical column

    A variety of sensory systems use noise to enhance the detection of weak signals. It has been conjectured in the literature that this effect, known as stochastic resonance, may take place in central cognitive processes such as the memory retrieval of arithmetical multiplication. We show, in a simplified model of cortical tissue, that complex arithmetical calculations can be carried out and are enhanced in the presence of a stochastic background. The performance is shown to be positively correlated with the susceptibility of the network, defined as its sensitivity to a variation of the mean of its inputs. For nontrivial arithmetic tasks such as multiplication, stochastic resonance is an emergent property of the microcircuitry of the model network.
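
    To make the stochastic resonance effect concrete, here is a minimal, hypothetical sketch (not the paper's cortical-column model): a weak subthreshold sine signal is passed through a hard threshold, and a crude detection score peaks at an intermediate noise level rather than at zero noise.

```python
# Minimal sketch of stochastic resonance (a toy threshold detector, not the
# paper's cortical model): a weak signal crosses a hard threshold only with
# the help of noise, and detection quality peaks at intermediate noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 5000)
signal = 0.4 * np.sin(2 * np.pi * t)    # weak signal, well below threshold
threshold = 1.0                         # hard detection threshold

for sigma in [0.05, 0.4, 3.0]:          # too little, intermediate, too much
    noise = sigma * rng.normal(size=t.size)
    spikes = (signal + noise > threshold).astype(float)
    # Correlation of the spike train with the signal as a crude detection
    # score (defined as 0.0 when there are no spikes at all).
    score = 0.0 if spikes.std() == 0 else np.corrcoef(spikes, signal)[0, 1]
    print(f"noise sigma = {sigma:4.2f}: detection score = {score:.3f}")
```

    With these (assumed) settings, near-zero noise produces no threshold crossings at all, very large noise produces crossings uncorrelated with the signal, and the intermediate level scores best, which is the signature of stochastic resonance.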

    Variational Hamiltonian Monte Carlo via Score Matching

    Traditionally, the field of computational Bayesian statistics has been divided into two main subfields: variational methods and Markov chain Monte Carlo (MCMC). In recent years, however, several methods have been proposed that combine variational Bayesian inference and MCMC simulation in order to improve their overall accuracy and computational efficiency. This marriage of fast evaluation and flexible approximation provides a promising means of designing scalable Bayesian inference methods. In this paper, we explore the possibility of incorporating variational approximation into a state-of-the-art MCMC method, Hamiltonian Monte Carlo (HMC), to reduce the required gradient computation in the simulation of Hamiltonian flow, which is the bottleneck for many applications of HMC in big-data problems. To this end, we use a "free-form" approximation induced by a fast and flexible surrogate function based on single-hidden-layer feedforward neural networks. The surrogate provides a sufficiently accurate approximation while allowing for fast exploration of the parameter space, resulting in an efficient approximate inference algorithm. We demonstrate the advantages of our method on both synthetic and real data problems.
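
    The core idea is that the leapfrog integrator can query a cheap surrogate for the gradient of the potential energy while the Metropolis accept/reject step still uses the exact energy; since leapfrog remains reversible and volume-preserving for any gradient field, a poor surrogate only lowers the acceptance rate, not the correctness of the stationary distribution. The sketch below illustrates this with a standard 2-D Gaussian target and a stand-in surrogate gradient (hypothetical names throughout; the paper's surrogate is a trained single-hidden-layer feedforward network, not shown here).

```python
# Minimal sketch of the surrogate-gradient idea behind this class of methods,
# not the paper's algorithm: leapfrog uses a cheap approximate grad_U, while
# the accept/reject step evaluates the exact potential U. Here the "surrogate"
# is simply the exact gradient of a standard Gaussian, as a stand-in for a
# trained single-hidden-layer network.
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Standard leapfrog integration of Hamiltonian dynamics."""
    p = p - 0.5 * step * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p - step * grad_U(q)
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)
    return q, p

def hmc(U, grad_U, q0, n_samples=1000, step=0.2, n_steps=10, seed=0):
    """HMC with unit-mass momenta; grad_U may be an approximation, since the
    Metropolis correction below uses the exact energy."""
    rng = np.random.default_rng(seed)
    q, samples = np.asarray(q0, dtype=float), []
    for _ in range(n_samples):
        p = rng.normal(size=q.shape)
        q_new, p_new = leapfrog(q, p, grad_U, step, n_steps)
        # Accept/reject with the exact energy, correcting surrogate error.
        dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            q = q_new
        samples.append(q.copy())
    return np.array(samples)

U = lambda q: 0.5 * q @ q          # potential of a standard 2-D Gaussian
surrogate_grad = lambda q: q       # stand-in for a learned surrogate
samples = hmc(U, surrogate_grad, q0=np.zeros(2))
print(samples.mean(axis=0), samples.std(axis=0))   # ~[0, 0] and ~[1, 1]
```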
    • …