
    On lunar "temperatures"

    Thermal measurements of the lunar surface at radio wavelengths.

    Numerical Implementation of Gradient Algorithms

    A numerical method for the computational implementation of gradient dynamical systems is presented. The method builds on geometric integration, a family of numerical methods that aim to preserve the dynamical properties of the original ordinary differential equation under discretization. In particular, the proposed method belongs to the class of discrete gradient methods, which substitute the gradient of the continuous equation with a discrete gradient, leading to a map that possesses the same Lyapunov function as the continuous dynamical system and thus preserves its qualitative properties regardless of the step size. In this work, we apply a discrete gradient method to the implementation of Hopfield neural networks. Unlike most geometric integration methods, the proposed algorithm can be rewritten in explicit form, which considerably improves its performance and stability. Simulation results show that preserving the Lyapunov function leads to improved performance compared to conventional discretization.
    Funding: Spanish Government project no. TIN2010-16556; Junta de Andalucía project no. P08-TIC-04026; Agencia Española de Cooperación Internacional para el Desarrollo project no. A2/038418/1.
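
    A minimal sketch of the discrete gradient idea, not the paper's exact algorithm: for a gradient flow with a quadratic Lyapunov function, the midpoint (Gonzalez) discrete gradient yields an update whose Lyapunov function decreases at every step, for any step size. The matrix A, vector b, and step size below are illustrative assumptions.

```python
import numpy as np

# Sketch: discrete gradient integration of the gradient flow
#   dy/dt = -grad V(y),  V(y) = 0.5 * y @ A @ y - b @ y  (A SPD).
# The midpoint discrete gradient  DG(y, y') = A @ (y + y') / 2 - b
# satisfies V(y') - V(y) = DG @ (y' - y), so the update
#   y' = y - h * DG(y, y')
# gives V(y') - V(y) = -h * ||DG||^2 <= 0 for ANY step size h.

rng = np.random.default_rng(0)
n = 10
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                  # symmetric positive definite
b = rng.standard_normal(n)
I = np.eye(n)

def V(y):
    return 0.5 * y @ A @ y - b @ y

def step(y, h):
    # The implicit update is linear in y', so it solves in closed form:
    # (I + h*A/2) y' = y - h * (A @ y / 2 - b).
    return np.linalg.solve(I + 0.5 * h * A, y - h * (0.5 * A @ y - b))

y = rng.standard_normal(n)
for _ in range(500):
    y_next = step(y, h=10.0)             # very large step, still stable
    assert V(y_next) <= V(y) + 1e-9      # Lyapunov function never increases
    y = y_next
print("residual ||A y - b|| =", np.linalg.norm(A @ y - b))
```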

    An Information-Based Neural Approach to Constraint Satisfaction

    A novel artificial neural network approach to constraint satisfaction problems is presented. Based on information-theoretical considerations, it differs from a conventional mean-field approach in the form of the resulting free energy. The method, implemented as an annealing algorithm, is numerically explored on a testbed of K-SAT problems. Its performance shows a dramatic improvement over that of a conventional mean-field approach and is comparable to that of a state-of-the-art dedicated heuristic (Gsat+Walk). The real strength of the method, however, lies in its generality: with minor modifications it is applicable to arbitrary types of discrete constraint satisfaction problems.
    Comment: 13 pages, 3 figures (to appear in Neural Computation).
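
    For context, a hedged sketch of the conventional mean-field annealing baseline that the abstract compares against; the paper's information-based free energy is not reproduced here, and the instance generator and annealing schedule are illustrative choices.

```python
import math
import random

# Sketch of conventional mean-field annealing on random 3-SAT (the
# baseline approach, NOT the paper's information-based free energy).
# v[i] is a soft truth value P(x_i = True); a clause is violated with
# probability prod over its literals of P(literal is False).

def random_3sat(n_vars, n_clauses, seed=1):
    rng = random.Random(seed)
    return [[(var, rng.choice([True, False]))        # (variable, positive?)
             for var in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def unsat_prob(clause, v, skip=None):
    p = 1.0
    for var, positive in clause:
        if var != skip:
            p *= (1.0 - v[var]) if positive else v[var]
    return p

def sigmoid(z):                        # numerically stable logistic
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def mean_field_anneal(clauses, n_vars, T=2.0, cooling=0.98, sweeps=400):
    v = [0.5] * n_vars
    occurs = [[] for _ in range(n_vars)]
    for c in clauses:
        for var, positive in c:
            occurs[var].append((c, positive))
    for _ in range(sweeps):
        for i in range(n_vars):
            # dE/dv[i]: raising v[i] helps clauses where x_i is positive
            # (derivative -rest) and hurts clauses where x_i is negated.
            dE = sum((-1.0 if positive else 1.0) * unsat_prob(c, v, skip=i)
                     for c, positive in occurs[i])
            v[i] = sigmoid(-dE / T)    # mean-field update at temperature T
        T *= cooling
    return [vi > 0.5 for vi in v]

clauses = random_3sat(50, 210)         # near the hard clause ratio ~4.2
x = mean_field_anneal(clauses, 50)
sat = sum(any(x[var] == positive for var, positive in c) for c in clauses)
print(f"{sat}/{len(clauses)} clauses satisfied")
```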

    Dense Associative Memory for Pattern Recognition

    A model of associative memory is studied which stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models can be constructed that smoothly interpolates between two limiting cases: one referred to as the feature-matching mode of pattern recognition, the other as the prototype regime. On the deep learning side, this family corresponds to feedforward neural networks with one hidden layer and various activation functions that transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes logistic functions, rectified linear units, and rectified polynomials of higher degrees. The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze the computational properties of neural networks with unusual activation functions, such as the higher rectified polynomials, which until now have not been used in deep learning. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set.
    Comment: Accepted for publication at NIPS 2016.
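
    A minimal sketch of the energy-based picture, assuming the standard dense-associative-memory form E(s) = -Σ_μ F(ξ_μ · s) with a rectified polynomial F(x) = max(x, 0)^n; the network size, pattern count, and power n below are illustrative. Retrieval flips each spin to whichever sign lowers the energy.

```python
import numpy as np

# Sketch: dense associative memory with energy
#   E(s) = -sum_mu F(xi_mu @ s),   F(x) = max(x, 0)**n,
# retrieved by asynchronous updates that flip each spin to the sign
# that lowers E. Sizes and the power n are illustrative.

rng = np.random.default_rng(0)
N, K, n = 64, 40, 3                    # neurons, stored patterns, power
xi = rng.choice([-1, 1], size=(K, N))  # random +/-1 patterns

def F(x):
    return np.maximum(x, 0.0) ** n     # rectified polynomial

def retrieve(s, sweeps=5):
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            h = xi @ s - xi[:, i] * s[i]   # overlaps excluding neuron i
            up = F(h + xi[:, i]).sum()     # -E contribution if s_i = +1
            dn = F(h - xi[:, i]).sum()     # -E contribution if s_i = -1
            s[i] = 1 if up >= dn else -1
    return s

# Corrupt 15 of 64 bits of a stored pattern, then recover it.
probe = xi[0].copy()
probe[rng.choice(N, size=15, replace=False)] *= -1
print("overlap with stored pattern:", retrieve(probe) @ xi[0] / N)
```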

    Dense Associative Memory is Robust to Adversarial Inputs

    Deep neural networks (DNN) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small perturbation, often imperceptible to human vision, so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNN and humans classify patterns, and raise the question of designing learning algorithms that mimic human perception more accurately than existing methods do. Our paper examines these questions within the framework of Dense Associative Memory (DAM) models. These models are defined by an energy function with higher-order (higher than quadratic) interactions between the neurons. We show that in the limit when the power of the interaction vertex in the energy function is sufficiently large, these models have the following three properties. First, the minima of the objective function are free from rubbish images, so that each minimum is a semantically meaningful pattern. Second, artificial patterns poised precisely at the decision boundary look ambiguous to human subjects and share aspects of both classes that are separated by that decision boundary. Third, adversarial images constructed by models with a small power of the interaction vertex, which are equivalent to DNN with rectified linear units (ReLU), fail to transfer to and fool the models with higher-order interactions. This opens up the possibility of using higher-order models for detecting and stopping malicious adversarial attacks. The presented results suggest that DAM with higher-order energy functions are closer to human visual perception than DNN with ReLUs.
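
    Written out, the family in question takes the standard DAM form below (the notation follows the companion paper above; treating F as a rectified polynomial is an assumption of this sketch):

```latex
% Energy of a dense associative memory with interaction power n;
% sigma_i are binary neurons, xi^mu the stored patterns.
E(\sigma) \;=\; -\sum_{\mu=1}^{K} F\!\Big(\sum_{i=1}^{N} \xi^{\mu}_{i}\,\sigma_{i}\Big),
\qquad
F(x) \;=\; \max(x,0)^{\,n}
```

    In the dual feedforward network the activation function is f(x) = F'(x), so n = 2 recovers a rectified linear unit up to scale, while large n gives the higher-order models that, per the abstract, resist transferred adversarial images.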

    Airline Crew Scheduling with Potts Neurons

    A Potts feedback neural network approach for finding good solutions to resource allocation problems with a non-fixed topology is presented. As a target application, the airline crew scheduling problem is chosen. The topological complication is handled by means of a propagator defined in terms of Potts neurons. The approach is tested on artificial random problems tuned to resemble real-world conditions. Very good results are obtained for a variety of problem sizes. The computer time demand for the approach grows only like (number of flights)^3. A realistic problem is typically solved within minutes, partly due to a prior reduction of the problem size based on an analysis of the local arrival/departure structure at the individual airports.
    Comment: 9 pages LaTeX, 3 postscript figures, uufiles format.
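
    A hedged sketch of the Potts-neuron idea on a toy assignment problem; the random cost matrix and load-balance penalty are stand-ins for crew-pairing data, and the propagator machinery for non-fixed topologies is not reproduced. Each row of v is a Potts neuron on the simplex, updated by a softmax in its local field while the temperature is annealed.

```python
import numpy as np

# Toy mean-field Potts annealing: assign n_items to n_classes so that
# per-assignment costs stay low and class loads stay balanced. Costs
# and the penalty weight are illustrative, not airline data or the
# paper's energy function.

rng = np.random.default_rng(0)
n_items, n_classes = 12, 4
cost = rng.random((n_items, n_classes))
penalty = 2.0                           # weight of the load-balance term
target = n_items / n_classes

v = np.full((n_items, n_classes), 1.0 / n_classes)   # Potts neurons
T = 1.0
for _ in range(300):
    for i in range(n_items):
        load = v.sum(axis=0) - v[i]     # expected class loads without item i
        # Local field = dE/dv[i]; the +0.5 roughly counts item i itself.
        field = cost[i] + penalty * (load + 0.5 - target)
        w = np.exp(-(field - field.min()) / T)       # stabilized softmax
        v[i] = w / w.sum()              # keeps each neuron on the simplex
    T *= 0.98

assignment = v.argmax(axis=1)
print("assignment:", assignment)
print("class loads:", np.bincount(assignment, minlength=n_classes))
```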

    Earthquake cycles and neural reverberations

    Driven systems of interconnected blocks with stick-slip friction capture the main features of earthquake processes. The microscopic dynamics closely resemble those of spiking nerve cells. We analyze the differences in the collective behavior and introduce a class of solvable models. We prove that the models exhibit rapid phase locking, a phenomenon of particular interest to both geophysics and neurobiology. We study the dependence upon initial conditions and system parameters, and discuss implications for earthquake modeling and neural computation.
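
    A minimal sketch of the shared mechanism, using illustrative Mirollo-Strogatz-style pulse coupling rather than the paper's solvable models: all-to-all pulse-coupled threshold units, standing in for both stick-slip blocks and spiking neurons, lock together into system-wide firing events.

```python
import numpy as np

# Threshold units driven to failure and coupled by pulses: when a unit
# reaches threshold 1 it resets to 0 and kicks every other unit by eps.
# Units that fire in the same event all end at 0, so they stay locked
# forever and clusters only grow -- a cartoon of rapid phase locking.
# All parameters are illustrative.

rng = np.random.default_rng(1)
N, eps = 100, 0.01
x = rng.random(N)                       # "strain" / membrane potential

def next_event(x):
    x += 1.0 - x.max()                  # drive the most-loaded unit to 1
    fired = np.zeros(x.size, dtype=bool)
    front = x >= 1.0
    while front.any():                  # pulses may trigger further units
        fired |= front
        x[front] = 0.0
        x[~fired] += eps * front.sum()  # each firing kicks all the others
        front = (x >= 1.0) & ~fired     # each unit fires once per event
    return int(fired.sum())

sizes = [next_event(x) for _ in range(400)]
print("early event sizes:", sizes[:5])  # small, scattered avalanches
print("late event sizes: ", sizes[-5:]) # locked, system-wide events
```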

    Neural network computation by in vitro transcriptional circuits

    The structural similarity of neural networks and genetic regulatory networks to digital circuits, and hence to each other, was noted from the very beginning of their study [1, 2]. In this work, we propose a simple biochemical system whose architecture mimics that of genetic regulation and whose components allow for in vitro implementation of arbitrary circuits. We use only two enzymes in addition to DNA and RNA molecules: RNA polymerase (RNAP) and ribonuclease (RNase). We develop a rate equation for in vitro transcriptional networks, and derive a correspondence with general neural network rate equations [3]. As proof-of-principle demonstrations, an associative memory task and a feedforward network computation are shown by simulation. A difference between the neural network and biochemical models is also highlighted: global coupling of rate equations through enzyme saturation can lead to global feedback regulation, allowing a simple network without explicit mutual inhibition to perform a winner-take-all computation. The full complexity of the cell is therefore not necessary for biochemical computation: a wide range of functional behaviors can be achieved with a small set of biochemical components.
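
    A sketch of the stated neural-network correspondence, assuming a rate equation of continuous Hopfield form; the weights, biases, and rate constants below are illustrative, not measured enzyme parameters.

```python
import numpy as np

# Sketch of the correspondence: RNA concentrations x_i obeying
#   dx_i/dt = k_prod * sigmoid(sum_j W_ij x_j + b_i) - k_deg * x_i,
# a continuous Hopfield-style rate equation (production by RNAP through
# a sigmoidal switch response, degradation by RNase).

def simulate(W, b, x0, k_prod=1.0, k_deg=0.5, dt=0.01, steps=5000):
    x = x0.astype(float).copy()
    for _ in range(steps):
        activation = 1.0 / (1.0 + np.exp(-(W @ x + b)))
        x += dt * (k_prod * activation - k_deg * x)   # Euler step
    return x

# Toy two-switch network with mutual inhibition: a bistable circuit,
# the building block of associative-memory dynamics.
W = np.array([[ 0.0, -4.0],
              [-4.0,  0.0]])
b = np.array([2.0, 2.0])
print(simulate(W, b, np.array([1.0, 0.0])))   # settles with switch 0 on
print(simulate(W, b, np.array([0.0, 1.0])))   # settles with switch 1 on
```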