
    A mathematical theory of semantic development in deep neural networks

    An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: what are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to these learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep learning dynamics to give rise to these regularities.
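
    The central analytical object in this line of work is the singular value decomposition of the input-output correlation matrix: each singular mode is learned along a sigmoidal trajectory whose speed scales with its singular value. The sketch below illustrates this with a made-up hierarchical item-feature matrix and the exact two-layer solution; the dataset, feature labels, initial condition, and time constant are illustrative assumptions, not the paper's.

```python
# Minimal sketch (not the authors' code): exact sigmoidal learning curve of a
# singular mode in a two-layer linear network,
#   a(t) = s * e^{2st/tau} / (e^{2st/tau} - 1 + s/a0),
# applied to a toy hierarchical item-feature matrix.
import numpy as np

# Hypothetical items (rows) and features (columns: grows, moves, flies, swims).
Y = np.array([
    [1, 1, 1, 0],   # canary
    [1, 1, 1, 0],   # robin
    [1, 1, 0, 1],   # salmon
    [1, 1, 0, 1],   # shark
], dtype=float)

X = np.eye(4)                      # one-hot item inputs
Sigma = Y.T @ X / 4                # input-output correlation matrix E[y x^T]
_, s, _ = np.linalg.svd(Sigma)     # singular values set the learning timescales
s = s[s > 1e-9]

def mode_strength(s, t, a0=1e-3, tau=1.0):
    """Exact trajectory of one mode's effective singular value from a small init a0."""
    e = np.exp(2 * s * t / tau)
    return s * e / (e - 1 + s / a0)

for t in [0.0, 2.0, 4.0, 8.0]:
    print(f"t={t:>4}:", np.round(mode_strength(s, t), 3))
# Modes with larger singular values are learned first, giving stage-like
# transitions: broader distinctions emerge before finer ones.
```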

    Exact solutions to the nonlinear dynamics of learning in deep linear neural networks

    Despite the widespread practical success of deep learning methods, our theoretical understanding of the dynamics of learning in deep neural networks remains quite sparse. We attempt to bridge the gap between the theory and practice of deep learning by systematically analyzing learning dynamics for the restricted case of deep linear neural networks. Despite the linearity of their input-output map, such networks have nonlinear gradient descent dynamics on weights that change with the addition of each new hidden layer. We show that deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions. We provide an analytical description of these phenomena by finding new exact solutions to the nonlinear dynamics of deep learning. Our theoretical analysis also reveals the surprising finding that as the depth of a network approaches infinity, learning speed can nevertheless remain finite: for a special class of initial conditions on the weights, very deep networks incur only a finite, depth-independent delay in learning speed relative to shallow networks. We show that, under certain conditions on the training data, unsupervised pretraining can find this special class of initial conditions, while scaled random Gaussian initializations cannot. We further exhibit a new class of random orthogonal initial conditions on weights that, like unsupervised pretraining, enjoys depth-independent learning times. We also show that these initial conditions lead to faithful propagation of gradients even in deep nonlinear networks, as long as they operate in a special regime known as the edge of chaos. Comment: Submission to ICLR2014. Revised based on reviewer feedback.
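
    The contrast between scaled Gaussian and random orthogonal initializations can be seen directly in the composite map of a deep linear network. The following is a minimal numerical sketch under assumed dimensions and depth (not the paper's experiments): a product of orthogonal layers is itself orthogonal, so every singular value stays at 1, whereas the product of scaled Gaussian layers develops a highly anisotropic spectrum in which most singular values collapse toward zero.

```python
# Minimal sketch: end-to-end map of a deep linear network under two random
# initializations. Dimensions, depth, and seed are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
dim, depth = 100, 50

def random_orthogonal(n):
    # QR decomposition of a Gaussian matrix gives a random orthogonal matrix.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

W_gauss, W_orth = np.eye(dim), np.eye(dim)
for _ in range(depth):
    W_gauss = (rng.standard_normal((dim, dim)) / np.sqrt(dim)) @ W_gauss
    W_orth = random_orthogonal(dim) @ W_orth

s_gauss = np.linalg.svd(W_gauss, compute_uv=False)
s_orth = np.linalg.svd(W_orth, compute_uv=False)
print(f"Gaussian   product: max sv {s_gauss.max():.2e}, median sv {np.median(s_gauss):.2e}")
print(f"orthogonal product: max sv {s_orth.max():.2e}, median sv {np.median(s_orth):.2e}")
# The orthogonal composite treats all directions equally (an isometry); the
# Gaussian composite does not, which is why depth hurts signal and gradient
# propagation under scaled Gaussian initialization.
```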

    Testing multi-alternative decision models with non-stationary evidence

    Recent research has investigated the process of integrating perceptual evidence toward a decision, converging on a number of sequential sampling choice models, such as variants of race and diffusion models and the non-linear leaky competing accumulator (LCA) model. Here we study extensions of these models to multi-alternative choice, considering how well they can account for data from a psychophysical experiment in which the evidence supporting each of the alternatives changes dynamically during the trial, in a way that creates temporal correlations. We find that participants exhibit a tendency to choose an alternative whose evidence profile is temporally anti-correlated with (or dissimilar from) those of the other alternatives. This advantage of the anti-correlated alternative is well accounted for in the LCA, and provides constraints that challenge several other models of multi-alternative choice.
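
    For reference, the LCA dynamics take the form dx_i = (rho_i(t) - k*x_i - beta*sum_{j != i} x_j) dt + noise, with activations rectified at zero. Below is a minimal simulation sketch with illustrative parameters and a made-up time-varying evidence profile; it shows the model's form, not a fit to the experiment reported here.

```python
# Minimal LCA sketch for multi-alternative choice with time-varying evidence.
# Leak k, inhibition beta, noise level, and the stimulus are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def lca_trial(evidence, k=0.2, beta=0.2, noise=0.1, dt=0.01):
    """evidence: array of shape (T, n_alternatives) giving the input per time step."""
    T, n = evidence.shape
    x = np.zeros(n)
    for t in range(T):
        inhibition = beta * (x.sum() - x)             # lateral inhibition from the other units
        dx = (evidence[t] - k * x - inhibition) * dt
        x = np.maximum(x + dx + noise * np.sqrt(dt) * rng.standard_normal(n), 0.0)
    return int(np.argmax(x))                          # read out the choice at trial end (no threshold)

# Three alternatives with equal mean evidence; the third is anti-correlated
# in time with the other two (a hypothetical stimulus profile).
T = 1000
phase = np.sin(np.linspace(0, 8 * np.pi, T))
evidence = np.column_stack([1 + 0.5 * phase, 1 + 0.5 * phase, 1 - 0.5 * phase])

choices = [lca_trial(evidence) for _ in range(200)]
print("choice proportions:", np.bincount(choices, minlength=3) / 200)
```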

    GaAs monolithic frequency doublers with series connected varactor diodes

    GaAs monolithic frequency doublers using series connected varactor diodes have been fabricated for the first time. Output powers of 150 mW at 36.9 GHz with 24% efficiency and 300 mW at 24.8 GHz with 18% efficiency have been obtained. Peak efficiencies of 35% at output power levels near 100 mW have been achieved at both frequencies. Both K-band and Ka-band frequency doublers are derived from a lower-power, single-diode design by series connection of two diodes and scaling to achieve different power and frequency specifications. Their fabrication was accomplished using the same process sequence.
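
    As a rough back-of-envelope check (assuming the quoted efficiencies are conversion efficiencies, P_out / P_in), the reported output powers imply the following input drive levels:

```python
# Hypothetical implied drive power, assuming efficiency = P_out / P_in.
for p_out_mw, eff, freq in [(150, 0.24, "36.9 GHz"), (300, 0.18, "24.8 GHz")]:
    print(f"{freq}: {p_out_mw} mW out at {eff:.0%} efficiency -> ~{p_out_mw / eff:.0f} mW drive")
```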

    Performance of alkaline battery cells used in emergency locator transmitters

    The characteristics of battery power supplies for emergency locator transmitters (ELTs) were investigated by testing alkaline zinc/manganese dioxide cells of the type typically used in ELTs. Cells from four manufacturers were tested. The cells were subjected to simulated environmental and load conditions representative of those required for survival and operation. Battery cell characteristics that may contribute to ELT malfunctions and limitations were evaluated. Experimental results from the battery cell study are discussed, and an evaluation of ELT performance while operating under a representative worst-case environmental condition is presented.

    Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks

    Transformer networks have seen great success in natural language processing and machine vision, where task objectives such as next-word prediction and image classification benefit from nuanced context sensitivity across high-dimensional inputs. However, there is an ongoing debate about how and when transformers can acquire highly structured behavior and achieve systematic generalization. Here, we explore how well a causal transformer can perform a set of algorithmic tasks, including copying, sorting, and hierarchical compositions of these operations. We demonstrate strong generalization to sequences longer than those used in training by replacing the standard positional encoding typically used in transformers with labels arbitrarily paired with items in the sequence. We search for the layer and head configuration sufficient to solve these tasks, then probe for signs of systematic processing in latent representations and attention patterns. We show that two-layer transformers learn reliable solutions to multi-level problems, develop signs of task decomposition, and encode input items in a way that encourages the exploitation of shared computation across related tasks. These results provide key insights into how attention layers support structured computation both within a task and across multiple tasks. Comment: 18 pages.
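
    The label-based replacement for positional encoding can be sketched as follows; this is an assumed reading of the setup (vocabulary size, label range, and the exact pairing are illustrative, not taken from the paper). Each item is paired with a randomly drawn but order-preserving label, so a model that attends to label order rather than absolute position has a chance of generalizing to longer sequences.

```python
# Minimal data-construction sketch: items paired with random, increasing
# position labels instead of consecutive positions 0..L-1. Sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MAX_LABEL = 20, 64

def make_copy_example(seq_len):
    items = rng.integers(0, VOCAB, size=seq_len)
    # Random labels, sorted so that relative order is still encoded.
    labels = np.sort(rng.choice(MAX_LABEL, size=seq_len, replace=False))
    source = np.stack([items, labels], axis=1)   # (item, label) pairs fed to the model
    target = items                               # copy task: reproduce the items in order
    return source, target

src, tgt = make_copy_example(seq_len=8)
print(src)
print(tgt)
```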