    Quantum dynamics in transverse-field Ising models from classical networks

    The efficient representation of quantum many-body states with classical resources is a key challenge in quantum many-body theory. In this work we analytically construct classical networks for the description of quantum dynamics in transverse-field Ising models that can be solved efficiently using Monte Carlo techniques. Our perturbative construction encodes time-evolved quantum states of spin-1/2 systems in a network of classical spins with local couplings and can be directly generalized to other spin systems and higher spins. Using this construction we compute, with Monte Carlo algorithms, the transient dynamics in one, two, and three dimensions, including local observables, entanglement production, and Loschmidt amplitudes, and we demonstrate the accuracy of the approach by comparison to exact results. We also include a mapping to equivalent artificial neural networks, which were recently introduced to provide a universal structure for classical network wave functions.
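    The idea of a classical-network wave function can be sketched in a few lines. The toy ansatz below (not the paper's perturbative construction) encodes amplitudes of N spin-1/2 sites via local fields and nearest-neighbour couplings, psi(s) ~ exp(a.s + s.W.s); all coupling values are hypothetical illustration numbers, and normalisation is done by brute-force enumeration, which is only feasible for small N.

```python
import numpy as np

def amplitudes(a, W):
    """Enumerate all 2^N spin configurations and return normalised amplitudes
    of the classical-network ansatz psi(s) = exp(a.s + s.W.s)."""
    n = len(a)
    configs = np.array([[1 if (i >> k) & 1 else -1 for k in range(n)]
                        for i in range(2 ** n)])
    log_psi = configs @ a + np.einsum('ci,ij,cj->c', configs, W, configs)
    psi = np.exp(log_psi)
    return configs, psi / np.linalg.norm(psi)

n = 6
a = 0.1 * np.ones(n)                  # local fields (hypothetical values)
W = np.zeros((n, n))
for i in range(n):                    # nearest-neighbour couplings on a ring
    W[i, (i + 1) % n] = 0.05
configs, psi = amplitudes(a, W)
print(np.sum(psi ** 2))               # normalisation check: sums to 1.0
```

    In the Monte Carlo setting the point is that such amplitudes can be sampled without the exponential enumeration used here.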

    Over-parameterisation, a major obstacle to the use of artificial neural networks in hydrology?

    Recently, Feed-Forward Artificial Neural Networks (FNN) have been gaining popularity for stream-flow forecasting. However, despite the promising results presented in recent papers, their use is questionable. In theory, their "universal approximator" property guarantees that, if a sufficient number of neurons is selected, good performance of the models for interpolation purposes can be achieved. But the choice of a more complex model does not ensure a better prediction: models with many parameters have a high capacity to fit the noise and the particularities of the calibration dataset, at the cost of diminished generalisation capacity. In support of the principle of model parsimony, a model selection method based on the validation performance of the models, "traditionally" used in the context of conceptual rainfall-runoff modelling, was adapted to the choice of an FNN structure. This method was applied to two different case studies: river flow prediction based on knowledge of upstream flows, and rainfall-runoff modelling. The predictive powers of the selected neural networks are compared to the results obtained with a linear model and a conceptual model (GR4j). In both case studies, the method leads to the selection of neural network structures with a limited number of neurons in the hidden layer (two or three). Moreover, the validation results of the selected FNN and of the linear model are very close. The conceptual model, specifically dedicated to rainfall-runoff modelling, appears to outperform the other two approaches. These conclusions, drawn from specific case studies using a particular evaluation method, add to the debate on the usefulness of Artificial Neural Networks in hydrology. Keywords: forecasting; stream-flow; rainfall-runoff; Artificial Neural Network
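    The selection principle the study applies to hidden-layer size can be sketched generically: pick the complexity that minimises error on a held-out validation split, not the most complex model. The example below uses polynomial degree as the complexity knob and synthetic data standing in for flow records (both are illustrative assumptions, not the paper's setup).

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(60)  # noisy synthetic signal

x_cal, y_cal = x[::2], y[::2]      # calibration (training) split
x_val, y_val = x[1::2], y[1::2]    # validation split

def validation_mse(degree):
    """Fit on the calibration split, score on the validation split."""
    coef = np.polyfit(x_cal, y_cal, degree)
    return np.mean((np.polyval(coef, x_val) - y_val) ** 2)

degrees = list(range(1, 11))
errors = [validation_mse(d) for d in degrees]
best = degrees[int(np.argmin(errors))]
print(best)   # a parsimonious degree is selected, not the most complex one
```

    Swapping "degree" for "number of hidden neurons" gives exactly the parsimony argument made in the abstract.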

    Large statistical learning models effectively forecast diverse chaotic systems

    Chaos and unpredictability are traditionally synonymous, yet recent advances in statistical forecasting suggest that large machine learning models can derive unexpected insight from extended observation of complex systems. Here, we study the forecasting of chaos at scale, by performing a large-scale comparison of 24 representative state-of-the-art multivariate forecasting methods on a crowdsourced database of 135 distinct low-dimensional chaotic systems. We find that large, domain-agnostic time series forecasting methods based on artificial neural networks consistently exhibit strong forecasting performance, in some cases producing accurate predictions lasting for dozens of Lyapunov times. Best-in-class results for forecasting chaos are achieved by recently-introduced hierarchical neural basis function models, though even generic transformers and recurrent neural networks perform strongly. However, physics-inspired hybrid methods like neural ordinary differential equations and reservoir computers contain inductive biases conferring greater data efficiency and lower training times in data-limited settings. We observe consistent correlation across all methods despite their widely-varying architectures, as well as universal structure in how predictions decay over long time intervals. Our results suggest that a key advantage of modern forecasting methods stems not from their architectural details, but rather from their capacity to learn the large-scale structure of chaotic attractors. Comment: 5 pages, 3 figures

    Machine Learning for the Prediction of Converged Energies from Ab Initio Nuclear Structure Calculations

    The prediction of nuclear observables beyond the finite model spaces that are accessible through modern ab initio methods, such as the no-core shell model, poses a challenging task in nuclear structure theory. It requires reliable tools for the extrapolation of observables to infinite many-body Hilbert spaces along with reliable uncertainty estimates. In this work we present a universal machine learning tool capable of capturing observable-specific convergence patterns independent of nucleus and interaction. We show that, once trained on few-body systems, artificial neural networks can produce accurate predictions for a broad range of light nuclei. In particular, we discuss neural-network predictions of ground-state energies from no-core shell model calculations for 6Li, 12C and 16O based on training data for 2H, 3H and 4He and compare them to classical extrapolations. Comment: 7 pages, 5 figures, 1 table
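    A minimal example of the kind of classical extrapolation such neural networks are compared against is the Aitken/Shanks transformation, which recovers the limit of a geometrically converging sequence exactly from three consecutive terms. The numbers below are synthetic illustration values, not NCSM output.

```python
def shanks(e1, e2, e3):
    """Aitken/Shanks transformation: exact limit for a purely geometric
    sequence e_n = E_inf + a * r**n."""
    return (e1 * e3 - e2 ** 2) / (e1 + e3 - 2.0 * e2)

# Synthetic convergence pattern (all numbers hypothetical):
E_inf, a, r = -32.0, 5.0, 0.6
seq = [E_inf + a * r ** n for n in range(3)]
print(shanks(*seq))   # recovers -32.0 exactly for a pure geometric sequence
```

    Real convergence patterns deviate from pure geometric behaviour, which is precisely the gap a learned, observable-specific extrapolation aims to close.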

    MI-NODES multiscale models of metabolic reactions, brain connectome, ecological, epidemic, world trade, and legal-social networks

    [Abstract] Complex systems and networks appear in almost all areas of reality, ranging from protein residue networks to Protein Interaction Networks (PINs). Chemical reactions form Metabolic Reaction Networks (MRNs) in living beings, or atmospheric reaction networks on planets and moons. Networks of neurons appear in the worm C. elegans, in the human brain connectome, and in Artificial Neural Networks (ANNs). Infection-spreading networks exist for contagious outbreaks in humans, and in malware epidemiology for infection with viral software in internet or wireless networks. Social-legal networks with different rules have evolved from swarm intelligence, to hunter-gatherer societies, to citation networks of the U.S. Supreme Court. In all these cases the same question arises: can we predict the links based on structural information? We propose to solve the problem using Quantitative Structure-Property Relationship (QSPR) techniques commonly used in chemo-informatics. In so doing, we need software able to transform all types of networks/graphs (drug structures, drug-target interactions, protein structures, protein interactions, metabolic reactions, brain connectomes, or social networks) into numerical parameters. Consequently, we need to process multi-target, multi-scale, and multiplexed information in alignment-free mode. We then seek the QSPR model with Machine Learning techniques. MI-NODES is this type of software. Here we review the evolution of the software from chemoinformatics to bioinformatics and systems biology, as an effort to develop a universal tool to study structure-property relationships in complex systems.
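    The link-prediction question can be made concrete with one of the simplest topological descriptors a QSPR-style model could consume: the number of common neighbours of a node pair. The tiny graph below is made up for illustration; MI-NODES computes far richer numerical parameters.

```python
# Build an adjacency map for a small undirected toy graph.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def common_neighbours(u, v):
    """Count shared neighbours of u and v: a basic link-prediction score."""
    return len(adj.get(u, set()) & adj.get(v, set()))

print(common_neighbours(0, 3))   # 1 (via node 2): a plausible missing link
print(common_neighbours(0, 4))   # 0: a less plausible one
```

    A QSPR workflow would compute many such descriptors per node pair and feed them to a machine-learning classifier that predicts whether the link exists.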

    Artificial Neural Network in Cosmic Landscape

    In this paper we propose that artificial neural networks, the basis of machine learning, are useful for generating the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically require exponential complexity when the number of fields is large. However, a basic application of artificial neural networks can solve the problem, based on the universal approximation theorem for the multilayer perceptron. As an example of such an application, a toy model of inflation with multiple light fields is investigated numerically. Comment: v2, some new content added
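    The universal approximation idea can be demonstrated cheaply: a single hidden layer of random tanh features, with only the output weights fit by least squares, already approximates a smooth one-dimensional stand-in "landscape" function well. This is an illustration of the theorem, not the paper's model, and the target function is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)[:, None]
target = np.sin(3.0 * x) + 0.5 * x ** 2        # hypothetical stand-in potential

# One hidden layer of 50 random tanh units (weights and biases fixed at random),
# then a linear read-out fit in closed form by least squares.
hidden = np.tanh(x @ rng.normal(size=(1, 50)) + rng.normal(size=50))
w, *_ = np.linalg.lstsq(hidden, target, rcond=None)
max_err = np.max(np.abs(hidden @ w - target))
print(max_err)   # small: the random hidden layer spans the target well
```

    Training all weights by backpropagation only improves on this; the point is that a modest hidden layer already carries enough expressive power, which is what makes landscape generation with networks tractable.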

    Morphological Network: How Far Can We Go with Morphological Neurons?

    In recent years, the idea of using morphological operations as networks has received much attention. Mathematical morphology provides very efficient and useful image processing and image analysis tools based on basic operators like dilation and erosion, defined in terms of kernels. Many other morphological operations are built up from the dilation and erosion operations. Although learning the structuring elements of operations such as dilation or erosion using the backpropagation algorithm is not new, the order and the way these morphological operations are used is not standard. In this paper, we theoretically analyze the use of morphological operations for processing 1D feature vectors and show that this extends to the 2D case in a simple manner. Our theoretical results show that a morphological block represents a sum of hinge functions. Hinge functions are used in many places for classification and regression tasks (Breiman (1993)). We also prove a universal approximation theorem: a stack of two morphological blocks can approximate any continuous function over arbitrary compact sets. To experimentally validate the efficacy of this network in real-life applications, we evaluate its performance on satellite image classification datasets, since morphological operations are very sensitive to geometrical shapes and structures. We also show results on a few tasks like segmentation of blood vessels from fundus images, segmentation of lungs from chest X-rays, and image dehazing. The results are encouraging and further establish the potential of morphological networks. Comment: 35 pages, 19 figures, 7 tables
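    The grey-scale operators the paper builds on are easy to state for 1D feature vectors: dilation is a max-plus correlation with the structuring element and erosion a min-minus one, so each output is a maximum (or minimum) of hinge-like terms. The sketch below uses a hypothetical flat structuring element on a made-up signal.

```python
def dilation(f, se):
    """Grey-scale dilation of 1D signal f by structuring element se:
    out[i] = max_j (f[i-j] + se[j]), a max-plus operation."""
    n, m = len(f), len(se)
    return [max(f[i - j] + se[j] for j in range(m) if 0 <= i - j < n)
            for i in range(n)]

def erosion(f, se):
    """Grey-scale erosion: out[i] = min_j (f[i+j] - se[j])."""
    n, m = len(f), len(se)
    return [min(f[i + j] - se[j] for j in range(m) if 0 <= i + j < n)
            for i in range(n)]

f = [0, 1, 3, 2, 0]
se = [0, 0, 0]                 # flat structuring element of length 3
print(dilation(f, se))         # [0, 1, 3, 3, 3]: running max over the window
print(erosion(f, se))          # [0, 1, 0, 0, 0]: running min over the window
```

    With a learned (non-flat) structuring element, each dilation output becomes a maximum of shifted-and-offset inputs, which is exactly the sum-of-hinge-functions structure the paper analyzes.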