
    Do correlations create an energy gap in electronic bilayers? Critical analysis of different approaches

    This paper investigates the effect of correlations in electronic bilayers on the longitudinal collective mode structure. We employ the dielectric permeability constructed by means of the classical theory of moments. It is shown that neglecting damping processes overestimates the role of correlations. We conclude that a correct account of damping processes leads to the absence of an energy gap.

    Super Linear Learning in Back Propagation Neural Nets

    The key feature of this work is the combination of minimizing a function with desirable properties and using the conjugate gradient method (CGM). The method has resulted in significant improvements on both easy and difficult training tasks. Two major problems slow the rate at which large back-propagation neural networks (BPNNs) can be taught. First is the linear convergence of the gradient descent used by the modified steepest descent method (MSDM). Second is the abundance of saddle points that arise from minimizing the sum of squared errors. This work offers a solution to both difficulties. The CGM, which is superlinearly convergent, replaces gradient descent. Dividing each squared error term by its derivative and then summing the terms produces a minimization function with a significantly reduced number of saddle points.
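
    As a rough, hypothetical illustration of the approach described above (not the authors' implementation), the sketch below fits a tiny back-propagation-style network by handing its sum-of-squared-errors function to a conjugate-gradient minimizer instead of plain gradient descent; the network size, the XOR toy data, and the use of scipy.optimize.minimize(method="CG") are all assumptions made for the example.

```python
# Hypothetical sketch: train a tiny feedforward network on XOR by minimizing
# its sum-of-squared-errors with a conjugate-gradient optimizer.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
sizes = [(2, 4), (1, 4), (4, 1), (1, 1)]          # W1, b1, W2, b2 shapes

def unpack(w):
    """Split the flat parameter vector into weight and bias matrices."""
    parts, i = [], 0
    for r, c in sizes:
        parts.append(w[i:i + r * c].reshape(r, c))
        i += r * c
    return parts

def sse(w):
    """Sum of squared errors of a 2-4-1 tanh/sigmoid network."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.sum((y - T) ** 2)

rng = np.random.default_rng(0)
w0 = rng.normal(scale=0.5, size=sum(r * c for r, c in sizes))

# Conjugate gradients replaces steepest descent; the gradient is estimated
# numerically here, whereas the paper's CGM would use exact derivatives.
result = minimize(sse, w0, method="CG")
print("final sum of squared errors:", result.fun)
```

    Because the optimizer only needs the loss (and optionally its gradient) as a callable, swapping conjugate gradients in for steepest descent leaves the rest of the training setup unchanged.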

    In Search of Miniature Books

    A Collection of References Pertaining to Miniature Books: annotated bibliographic citations of books, bibliographies, catalogues, pamphlets, periodicals, articles, newsletters, book lists with prices, foreign-language references, and miscellaneous newspaper clippings.

    Preserving the Past: Historic Preservation Regulations and the Taking Clause


    Time Variability While Training a Parallel Neural Network

    The algorithmic analysis, data collection, and statistical analysis required to isolate the cause of the time variability observed while an Elman-style recurrent neural network is trained in parallel on a twenty-processor SPARCcenter 2000 are described in detail. Correlations of system metrics indicate that the operating system scheduler, or an interaction of kernel processes, is the most probable explanation for the variability.

    Magnetometry via a double-pass continuous quantum measurement of atomic spin

    We argue that it is possible in principle to reduce the uncertainty of an atomic magnetometer by double-passing a far-detuned laser field through the atomic sample as it undergoes Larmor precession. Numerical simulations of the quantum Fisher information suggest that, despite the lack of explicit multi-body coupling terms in the system's magnetic Hamiltonian, the parameter estimation uncertainty in such a physical setup scales better than the conventional Heisenberg uncertainty limit over a specified but arbitrary range of particle number N. Using the methods of quantum stochastic calculus and filtering theory, we demonstrate numerically an explicit parameter estimator (called a quantum particle filter) whose observed scaling follows that of our calculated quantum Fisher information. Moreover, the quantum particle filter quantitatively surpasses the uncertainty limit calculated from the quantum Cramér-Rao inequality based on a magnetic coupling Hamiltonian with only single-body operators. We also show that a quantum Kalman filter is insufficient to obtain super-Heisenberg scaling, and present evidence that such scaling necessitates going beyond the manifold of Gaussian atomic states.
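
    The quantum particle filter itself is beyond a short sketch, but the underlying idea of sequential Bayesian parameter estimation can be illustrated with a purely classical bootstrap particle filter that infers an unknown precession frequency from noisy scalar measurements. Everything below (signal model, noise level, particle count) is invented for the illustration and is only a loose classical analogue of the estimator discussed in the paper.

```python
# Toy classical bootstrap particle filter: estimate an unknown precession
# frequency omega from noisy sinusoidal measurements (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
true_omega, dt, n_steps, noise = 2.0, 0.05, 200, 0.2
n_particles = 2000

# Each particle is a hypothesis about omega, drawn from a broad prior.
particles = rng.uniform(0.5, 4.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for k in range(1, n_steps + 1):
    t = k * dt
    y = np.sin(true_omega * t) + rng.normal(scale=noise)      # measurement
    # Reweight by the Gaussian likelihood of the measurement under each particle.
    weights *= np.exp(-0.5 * ((y - np.sin(particles * t)) / noise) ** 2)
    weights /= weights.sum()
    # Resample (with a little jitter) when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx] + rng.normal(scale=0.01, size=n_particles)
        weights = np.full(n_particles, 1.0 / n_particles)

print("posterior mean estimate of omega:", np.sum(weights * particles))
```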

    Tail-Recursive Distributed Representations and Simple Recurrent Networks

    Representation poses important challenges to connectionism. The ability to structurally compose representations is critical in achieving the capability considered necessary for cognition. We are investigating distributed patterns that represent structure as part of a larger effort to develop a natural language processor. Recursive Auto-Associative Memory (RAAM) representations show unusual promise as a general vehicle for representing classical symbolic structures in a way that supports compositionality. However, RAAMs are limited to representations of fixed-valence structures and can often be difficult to train. We provide a technique for mapping any ordered collection (forest) of hierarchical structures (trees) into a set of training patterns which can be used effectively in training a simple recurrent network (SRN) to develop RAAM-style distributed representations. The advantages of our technique are threefold: first, the fixed-valence restriction on structures represented by patterns trained with RAAMs is removed; second, the representations resulting from training correspond to ordered forests of labeled trees, thereby extending what can be represented in this fashion; and third, training can be accomplished with an auto-associative SRN, making training a much more straightforward process and one which optimally utilizes the n-dimensional space of patterns.
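
    The paper's exact mapping is not reproduced here, but the sketch below shows one plausible (assumed) way to flatten an ordered forest of labeled trees into a token sequence that could be presented to an auto-associative SRN; the tree encoding and the (label, depth) output format are illustrative choices, not the authors' algorithm.

```python
# Assumed, illustrative encoding (not the paper's algorithm): flatten an
# ordered forest of labeled trees into a (label, depth) token sequence that
# could be presented to an auto-associative simple recurrent network.
from typing import Sequence, Union

Tree = Union[str, tuple]      # a leaf label, or (label, child, child, ...)

def linearize(forest: Sequence[Tree]) -> list:
    """Walk each tree depth-first and emit (label, depth) tokens in order."""
    tokens = []
    def walk(node: Tree, depth: int) -> None:
        if isinstance(node, str):          # leaf: emit its label
            tokens.append((node, depth))
        else:                              # internal node: label, then children
            tokens.append((node[0], depth))
            for child in node[1:]:
                walk(child, depth + 1)
    for tree in forest:
        walk(tree, 0)
    return tokens

# A small forest whose trees have different valences.
forest = [("S", ("NP", "the", "cat"), ("VP", "sat")), ("NP", "dogs")]
print(linearize(forest))
```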

    Strategies for the Parallel Training of Simple Recurrent Neural Networks

    Two concurrent implementations of the method of conjugate gradients for training Elman networks are discussed. The parallelism is obtained in the computation of the error gradient, and the method is therefore applicable to any gradient-descent training technique for this form of network. The experimental results were obtained on a Sun SPARCcenter 2000 multiprocessor. The SPARCcenter 2000 is a shared-memory machine well suited to coarse-grained distributed computations, but the concurrency could be extended to other architectures as well.
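
    A minimal sketch of the kind of data parallelism described above, under assumed details: because the error gradient is a sum over training patterns, each worker can evaluate the gradient on its shard of the data and the partial results are summed. A toy linear model stands in for the Elman network, and multiprocessing stands in for the shared-memory threads used on the SPARCcenter.

```python
# Sketch of data-parallel gradient evaluation (assumed setup, not the
# authors' code): each worker computes the error gradient over its shard of
# the training patterns, and the partial gradients are summed.
import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    """Gradient of the summed squared error over one shard of the data."""
    w, X, T = args
    residual = X @ w - T                  # toy linear model for illustration
    return 2.0 * X.T @ residual

def parallel_gradient(w, X, T, n_workers=4):
    shards = zip(np.array_split(X, n_workers), np.array_split(T, n_workers))
    with Pool(n_workers) as pool:
        parts = pool.map(partial_gradient, [(w, Xs, Ts) for Xs, Ts in shards])
    return sum(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))
    T = X @ rng.normal(size=8)
    w = np.zeros(8)
    g = parallel_gradient(w, X, T)
    # Only the gradient is parallelized, so the scheme drops into any
    # gradient-based optimizer, including conjugate gradients.
    print(np.allclose(g, 2.0 * X.T @ (X @ w - T)))
```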

    High-Performance Training of Feedforward & Simple Recurrent Networks

    TRAINREC is a system for training feedforward and recurrent neural networks that incorporates several ideas. It uses the conjugate-gradient method, which is demonstrably more efficient than traditional backward error propagation. We assume epoch-based training and derive a new error function having several desirable properties absent from the traditional sum-of-squared-error function. We argue for skip (shortcut) connections where appropriate and a preference for a sigmoidal activation yielding values over the [-1, 1] interval. The input feature space is often over-analyzed, but by using singular value decomposition, input patterns can be conditioned for better learning, often with a reduced number of input units. Recurrent networks, in their most general form, require special handling and cannot be treated as a simple re-wiring of the architecture without a corresponding revision of the derivative calculations. A careful balance is required among the network architecture (specifically, hidden and feedback units), the amount of training applied, and the ability of the network to generalize. These issues often hinge on selecting the proper stopping criterion. Discovering methods that work in theory as well as in practice is difficult, and we have spent a substantial amount of effort evaluating and testing these ideas on real problems to determine their value. This paper encapsulates a number of such ideas, ranging from those motivated by a desire for efficiency of training to those motivated by correctness and accuracy of the result. While this paper is intended to be self-contained, several references are provided to other work upon which many of our claims are based.
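
    A minimal sketch of the SVD-based input conditioning mentioned above, assuming synthetic data and a 99% variance cutoff (both invented for the example): the input patterns are projected onto their leading singular directions, giving fewer, decorrelated input units.

```python
# Sketch of SVD-based input conditioning with synthetic, partly redundant
# patterns: keep only the leading singular directions as network inputs.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.normal(size=(500, 40))          # 500 patterns, 40 raw features
patterns[:, 20:] = 0.5 * patterns[:, :20]      # make half the features redundant

centered = patterns - patterns.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep enough singular directions to capture 99% of the variance (assumed cutoff).
k = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), 0.99)) + 1
conditioned = centered @ Vt[:k].T              # reduced set of input units
print(f"kept {k} of {patterns.shape[1]} input units")
```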