
    Representation Learning: A Review and New Perspectives

    The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms that implement such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, about how to compute representations (i.e., inference), and about the geometrical connections between representation learning, density estimation, and manifold learning.
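
    As a concrete, hedged illustration of the unsupervised feature learning the review surveys, the sketch below trains a minimal single-hidden-layer auto-encoder with NumPy. The toy data, layer sizes, and learning rate are assumptions made for illustration, not details taken from the paper.

        # Minimal auto-encoder sketch (illustrative; hyperparameters are assumed).
        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        X = rng.random((200, 20))                    # toy data: 200 samples, 20 dims
        n_in, n_hidden = X.shape[1], 5               # compress 20 dims to a 5-dim code
        W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))  # encoder weights
        W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))  # decoder weights
        lr = 0.5

        for epoch in range(500):
            H = sigmoid(X @ W1)        # hidden code: the learned representation
            X_hat = sigmoid(H @ W2)    # reconstruction of the input
            err = X_hat - X
            d_out = err * X_hat * (1 - X_hat)     # backprop through decoder
            d_hid = (d_out @ W2.T) * H * (1 - H)  # backprop through encoder
            W2 -= lr * (H.T @ d_out) / len(X)
            W1 -= lr * (X.T @ d_hid) / len(X)

        print("final reconstruction MSE:", float(np.mean(err ** 2)))

    The hidden activations H play the role of the learned representation: training only asks the network to reconstruct its input, so any structure captured in the code is discovered without labels.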

    Neural network setups for a precise detection of the many-body localization transition: finite-size scaling and limitations

    Determining phase diagrams and phase transitions semi-automatically with machine learning has received a lot of attention recently, with results in good agreement with more conventional approaches in most cases. When it comes to more quantitative predictions, such as identifying the universality class or precisely determining critical points, the task is more challenging. As an exacting test bed, we study the Heisenberg spin-1/2 chain in a random external field, which is known to display a transition from a many-body localized to a thermalizing regime whose nature is not entirely characterized. We introduce different neural network structures and dataset setups to achieve a finite-size scaling analysis with the least possible physical bias (no assumed knowledge of the phase transition, and wave-function coefficients fed in directly), using state-of-the-art input data simulating chains of sizes up to L=24. In particular, we use domain-adversarial techniques to ensure that the network learns scale-invariant features. We find that the output results vary with network and training parameters, leading to relatively large uncertainties on the final estimates of the critical point and correlation-length exponent, which tend to be larger than the values obtained from conventional approaches. We put the emphasis on interpretability throughout the paper and discuss what the network appears to learn for the various architectures used. Our findings show that a quantitative analysis of phase transitions of unknown nature remains a difficult task for neural networks when using minimally engineered physical input.
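
    The domain-adversarial idea mentioned above is commonly implemented with a gradient reversal layer (in the style of Ganin and Lempitsky's DANN). The PyTorch sketch below is a generic version of that technique, not the authors' code: it treats the system size L as the "domain", so the feature extractor is penalized for encoding scale. All class names and layer sizes are illustrative assumptions.

        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            """Identity in the forward pass; negated, scaled gradient backward."""
            @staticmethod
            def forward(ctx, x, lam):
                ctx.lam = lam
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad_output):
                return -ctx.lam * grad_output, None

        class DANN(nn.Module):
            def __init__(self, n_in, n_feat, n_domains):
                super().__init__()
                self.features = nn.Sequential(nn.Linear(n_in, n_feat), nn.ReLU())
                self.phase_head = nn.Linear(n_feat, 2)           # localized vs. thermal
                self.domain_head = nn.Linear(n_feat, n_domains)  # which system size L

            def forward(self, x, lam=1.0):
                f = self.features(x)
                # The domain head sees reversed gradients, so minimizing its loss
                # pushes the features to become uninformative about system size.
                return self.phase_head(f), self.domain_head(GradReverse.apply(f, lam))

    Training minimizes the sum of both cross-entropy losses; because of the reversal, features that help classify the phase while hiding the system size are favored, which is one way to encourage the scale-invariant features the abstract describes.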

    Transformations in the Scale of Behaviour and the Global Optimisation of Constraints in Adaptive Networks

    The natural energy-minimisation behaviour of a dynamical system can be interpreted as a simple optimisation process that finds a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this approach seems to require top-down domain knowledge, and is not one amenable to the spontaneous energy-minimisation behaviour of a natural dynamical system. In this paper, however, we investigate the ability of distributed dynamical systems to improve their constraint-resolution ability over time through self-organisation. We use a ‘self-modelling’ Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can transform the system into a new one that is a low-dimensional caricature of the original. The energy-minimisation behaviour of this new system is significantly more effective at globally resolving the original system's constraints. The model uses only very simple, fully distributed positive-feedback mechanisms that are relevant to other ‘active linking’ and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behaviour in various non-neural adaptive networks, such as social, genetic, and ecological networks.
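
    A minimal sketch of the mechanism described above, under standard Hopfield assumptions: the network repeatedly relaxes to an attractor of its fixed constraint weights plus a slowly learned associative term, and a weak Hebbian update reinforces the correlations of each visited attractor, so later relaxations resolve the original constraints better. The network size, learning rate, and random constraint matrix are illustrative choices, not values taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 40

        # Fixed random symmetric constraints (the "original system").
        W = rng.choice([-1.0, 1.0], size=(N, N))
        W = np.triu(W, 1)
        W = W + W.T

        A = np.zeros((N, N))   # slow associative weights, learned over relaxations
        eta = 0.001            # Hebbian learning rate (assumed value)

        def relax(weights, steps=2000):
            """Asynchronous energy-minimising updates from a random initial state."""
            s = rng.choice([-1.0, 1.0], size=N)
            for _ in range(steps):
                i = rng.integers(N)
                s[i] = 1.0 if weights[i] @ s >= 0 else -1.0
            return s

        def energy(s):
            return -0.5 * s @ W @ s   # always measured on the ORIGINAL constraints

        for epoch in range(200):
            s = relax(W + A)             # dynamics follow original + learned weights
            A += eta * np.outer(s, s)    # Hebbian update on the visited attractor
            np.fill_diagonal(A, 0.0)

        print("energy of an attractor after learning:", energy(relax(W + A)))

    Because the Hebbian term accumulates correlations across many locally optimal attractors, it gradually builds an associative model of the constraint structure, which is the sense in which the system becomes a low-dimensional caricature of itself.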