Tropospheric range error parameters: Further studies
Improved parameters are presented for predicting the tropospheric effect on electromagnetic range measurements from surface meteorological data. More geographic locations have been added to the earlier list. Parameters are given for computing the dry component of the zenith radio range effect from surface pressure alone with an rms error of 1 to 2 mm, or the total range effect from the dry and wet components of the surface refractivity and a two-part quartic profile model. The new parameters are obtained, as before, from meteorological balloon data but with improved procedures, including the conversion of the geopotential heights of the balloon data to actual or geometric heights before using the data. The revised values of the parameter k (dry component of vertical radio range effect per unit pressure at the surface) show more latitude variation than is accounted for by the variation of g, the acceleration of gravity.
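As an illustration of the kind of relation this abstract describes, the sketch below computes a dry zenith range effect proportional to surface pressure. The numeric constant and the latitude/height correction are the commonly quoted Saastamoinen-type values, used here only as stand-ins; they are not the refitted k parameters of the study.

```python
# Minimal sketch: dry (hydrostatic) zenith range effect from surface pressure.
# The proportionality "delay = k * pressure" is what the parameter k expresses;
# the constant and the latitude/height factor below are the widely used
# Saastamoinen-type values, shown only for illustration -- NOT the paper's
# refitted parameters.
import math

def dry_zenith_delay_mm(pressure_hpa, latitude_deg=45.0, height_km=0.0):
    """Approximate dry zenith range effect in millimetres."""
    # Effective k varies slightly with latitude and station height because
    # gravity (the weight of the air column) varies.
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(latitude_deg)) - 0.00028 * height_km
    k = 2.2768 / f                      # mm of range effect per hPa of surface pressure
    return k * pressure_hpa

print(dry_zenith_delay_mm(1013.25))     # roughly 2.3 m, expressed in mm, at sea level
```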
Dense Associative Memory for Pattern Recognition
A model of associative memory is studied, which stores and reliably retrieves
many more patterns than the number of neurons in the network. We propose a
simple duality between this dense associative memory and neural networks
commonly used in deep learning. On the associative memory side of this duality,
a family of models that smoothly interpolates between two limiting cases can be
constructed. One limit is referred to as the feature-matching mode of pattern
recognition, and the other one as the prototype regime. On the deep learning
side of the duality, this family corresponds to feedforward neural networks
with one hidden layer and various activation functions, which transmit the
activities of the visible neurons to the hidden layer. This family of
activation functions includes the logistic function, rectified linear units, and rectified
polynomials of higher degrees. The proposed duality makes it possible to apply
energy-based intuition from associative memory to analyze computational
properties of neural networks with unusual activation functions - the higher
rectified polynomials which until now have not been used in deep learning. The
utility of the dense memories is illustrated for two test cases: the logical
gate XOR and the recognition of handwritten digits from the MNIST data set.
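A minimal sketch of the associative-memory side of this duality, assuming the rectified-polynomial interaction function F(x) = max(x, 0)^n; the asynchronous update below never increases the energy E = -sum_mu F(overlap of pattern mu with the state). Pattern counts, sizes, and the corruption level are toy assumptions, not the paper's experimental setup.

```python
# Sketch of a dense associative memory with rectified-polynomial interactions.
# Energy: E(sigma) = -sum_mu F(<xi_mu, sigma>), with F(x) = max(x, 0)**n.
# Each asynchronous spin update picks the value that lowers (or keeps) this
# energy, so retrieval is a descent on E.
import numpy as np

def F(x, n):
    return np.maximum(x, 0.0) ** n

def dam_retrieve(sigma, patterns, n, sweeps=5, rng=None):
    rng = rng or np.random.default_rng()
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in rng.permutation(sigma.size):
            rest = patterns @ sigma - patterns[:, i] * sigma[i]   # overlaps without spin i
            drive = F(patterns[:, i] + rest, n) - F(-patterns[:, i] + rest, n)
            sigma[i] = 1.0 if drive.sum() >= 0 else -1.0
    return sigma

rng = np.random.default_rng(0)
N, K, n = 100, 40, 3                            # neurons, stored patterns, interaction power
patterns = rng.choice([-1.0, 1.0], size=(K, N))
probe = patterns[0] * rng.choice([1.0, -1.0], size=N, p=[0.9, 0.1])  # ~10% corrupted copy
recalled = dam_retrieve(probe, patterns, n, rng=rng)
print("overlap with the stored pattern:", recalled @ patterns[0] / N)
```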
Dense Associative Memory is Robust to Adversarial Inputs
Deep neural networks (DNN) trained in a supervised way suffer from two known
problems. First, the minima of the objective function used in learning
correspond to data points (also known as rubbish examples or fooling images)
that lack semantic similarity with the training data. Second, a clean input can
be changed by a small perturbation, often imperceptible to human vision,
so that the resulting deformed input is misclassified by the network. These
findings emphasize the differences between the ways DNNs and humans classify
patterns, and raise the question of how to design learning algorithms that
mimic human perception more accurately than existing methods do.
Our paper examines these questions within the framework of Dense Associative
Memory (DAM) models. These models are defined by the energy function, with
higher order (higher than quadratic) interactions between the neurons. We show
that in the limit when the power of the interaction vertex in the energy
function is sufficiently large, these models have the following three
properties. First, the minima of the objective function are free from rubbish
images, so that each minimum is a semantically meaningful pattern. Second,
artificial patterns poised precisely at the decision boundary look ambiguous to
human subjects and share aspects of both classes that are separated by that
decision boundary. Third, adversarial images constructed by models with small
power of the interaction vertex, which are equivalent to DNN with rectified
linear units (ReLU), fail to transfer to and fool the models with higher order
interactions. This opens up the possibility of using higher order models for
detecting and stopping malicious adversarial attacks. The presented results
suggest that DAMs with higher order energy functions are closer to human visual
perception than DNNs with ReLUs.
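For concreteness, the sketch below shows the feedforward side of the duality referred to above: a one-hidden-layer network whose hidden units apply a rectified polynomial max(x, 0)^n, so n = 1 recovers an ordinary ReLU network while larger n corresponds to the higher-order models reported to resist transferred adversarial images. The weight shapes and random inputs are illustrative assumptions only.

```python
# Minimal sketch of the one-hidden-layer feedforward network in the duality:
# hidden units apply f(x) = max(x, 0)**n, so n = 1 is a standard ReLU network
# and larger n gives the higher-order models.  Shapes and the toy forward pass
# are assumptions of this sketch, not the paper's architecture.
import numpy as np

def rectified_poly(x, n):
    return np.maximum(x, 0.0) ** n

def forward(x, W_hidden, W_out, n):
    """x: (d,) input; W_hidden: (h, d) visible-to-hidden weights (the stored
    memories in the associative-memory picture); W_out: (c, h) readout."""
    h = rectified_poly(W_hidden @ x, n)
    return W_out @ h                          # class scores

rng = np.random.default_rng(1)
x = rng.normal(size=784)                      # e.g. a flattened MNIST-sized input
W_hidden = rng.normal(size=(64, 784)) * 0.05
W_out = rng.normal(size=(10, 64)) * 0.1
for n in (1, 3, 5):                           # ReLU vs. higher rectified polynomials
    print(n, forward(x, W_hidden, W_out, n)[:3])
```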
Numerical Implementation of Gradient Algorithms
A numerical method for computational implementation of gradient dynamical systems is presented. The method is based upon the development of geometric integration numerical methods, which aim at preserving the dynamical properties of the original ordinary differential
equation under discretization. In particular, the proposed method belongs to the class of discrete gradient methods, which substitute the gradient of the continuous equation with a discrete gradient, leading to a map that possesses the same Lyapunov function as the dynamical system,
thus preserving the qualitative properties regardless of the step size. In this work, we apply a discrete gradient method to the implementation of Hopfield neural networks. Contrary to most geometric integration
methods, the proposed algorithm can be rewritten in explicit form, which considerably improves its performance and stability. Simulation results show that the preservation of the Lyapunov function leads to improved performance compared to the conventional discretization.
Funding: Spanish Government project no. TIN2010-16556; Junta de Andalucía project no. P08-TIC-04026; Agencia Española de Cooperación Internacional para el Desarrollo project no. A2/038418/1.
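A minimal sketch of the discrete-gradient idea, assuming a quadratic Lyapunov function V(x) = 0.5 x^T A x, for which the midpoint (Gonzalez) discrete gradient gives a closed-form update; V then decreases at every iteration for any step size, which is the qualitative property the abstract emphasizes. The paper's explicit scheme for Hopfield networks is more elaborate and is not reproduced here.

```python
# Discrete-gradient step for the gradient flow x' = -grad V(x) with quadratic
# V(x) = 0.5 * x^T A x.  The midpoint discrete gradient A (x + y) / 2 yields
# the update (I + h/2 A) x_next = (I - h/2 A) x, and V decreases at every step
# for ANY h > 0 -- an illustration of the principle, not the paper's scheme.
import numpy as np

def discrete_gradient_step(x, A, h):
    n = len(x)
    return np.linalg.solve(np.eye(n) + 0.5 * h * A, (np.eye(n) - 0.5 * h * A) @ x)

def V(x, A):
    return 0.5 * x @ A @ x

rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))
A = M @ M.T + np.eye(5)              # symmetric positive definite
x = rng.normal(size=5)
for h in (0.1, 1.0, 10.0):           # even very large steps keep V decreasing
    y = x.copy()
    values = [V(y, A)]
    for _ in range(20):
        y = discrete_gradient_step(y, A, h)
        values.append(V(y, A))
    print(h, all(b <= a for a, b in zip(values, values[1:])))
```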
An Information-Based Neural Approach to Constraint Satisfaction
A novel artificial neural network approach to constraint satisfaction
problems is presented. Based on information-theoretical considerations, it
differs from a conventional mean-field approach in the form of the resulting
free energy. The method, implemented as an annealing algorithm, is numerically
explored on a testbed of K-SAT problems. The performance shows a dramatic
improvement over that of a conventional mean-field approach, and is comparable to
that of a state-of-the-art dedicated heuristic (Gsat+Walk). The real strength
of the method, however, lies in its generality -- with minor modifications it
is applicable to arbitrary types of discrete constraint satisfaction problems.
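For orientation, the sketch below implements the conventional mean-field annealing baseline that the abstract compares against (not the paper's information-based free energy): soft variables are updated with a tanh of the local field derived from the clause-violation energy while the inverse temperature is slowly raised. Problem sizes, clause generation, and the annealing schedule are arbitrary toy choices.

```python
# Conventional mean-field annealing on random 3-SAT (the baseline, not the
# paper's information-based variant).  Energy: E = sum_c prod_{i in c} (1 - l*v_i)/2,
# which counts (softly) the unsatisfied clauses.
import numpy as np

rng = np.random.default_rng(3)
n_vars, n_clauses = 50, 175
vars_ = np.array([rng.choice(n_vars, size=3, replace=False) for _ in range(n_clauses)])
signs = rng.choice([-1, 1], size=(n_clauses, 3))     # +1: variable appears un-negated

def unsat_count(s):
    lits = signs * s[vars_]                          # +1 where the literal is true
    return int(np.sum(np.all(lits == -1, axis=1)))

v = np.zeros(n_vars)                                 # mean-field averages <s_i>
for beta in np.geomspace(0.1, 20.0, 60):             # anneal: slowly raise beta
    for i in rng.permutation(n_vars):
        field = 0.0
        for c in range(n_clauses):
            if i not in vars_[c]:
                continue
            prod, l_i = 1.0, 0.0
            for j, l in zip(vars_[c], signs[c]):
                if j == i:
                    l_i = l
                else:
                    prod *= (1.0 - l * v[j]) / 2.0   # prob. the other literal is false
            field += 0.5 * l_i * prod                # -dE/dv_i from this clause
        v[i] = np.tanh(beta * field)

assignment = np.where(v > 0, 1, -1)
print("unsatisfied clauses:", unsat_count(assignment), "of", n_clauses)
```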
Airline Crew Scheduling with Potts Neurons
A Potts feedback neural network approach for finding good solutions to
resource allocation problems with a non-fixed topology is presented. As a
target application the airline crew scheduling problem is chosen. The
topological complication is handled by means of a propagator defined in terms
of Potts neurons. The approach is tested on artificial random problems tuned to
resemble real-world conditions. Very good results are obtained for a variety of
problem sizes. The computer time demand for the approach only grows like
(number of flights)^3. A realistic problem is typically solved within
minutes, partly due to a prior reduction of the problem size, based on an
analysis of the local arrival/departure structure at the individual airports.
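A minimal sketch of the mean-field Potts neurons the method is built from: each neuron is a probability vector over K options (a soft one-of-K assignment), updated by a softmax of local costs and sharpened by annealing in the inverse temperature. The toy cost matrix and load penalty stand in for the crew-scheduling costs; the propagator that handles the non-fixed route topology is not reproduced.

```python
# Mean-field Potts neurons: each "neuron" is a probability vector over K
# options, updated with a softmax of the local costs and sharpened as beta
# grows.  The costs and the load penalty are toy stand-ins for the real
# crew-scheduling objective.
import numpy as np

def softmax(u):
    u = u - u.max()
    e = np.exp(u)
    return e / e.sum()

rng = np.random.default_rng(4)
n_items, K = 12, 4                          # e.g. flights and crews (toy sizes)
cost = rng.random((n_items, K))             # cost of assigning item i to option a
v = np.full((n_items, K), 1.0 / K)          # Potts neurons: soft assignments

for beta in np.geomspace(0.5, 50.0, 100):   # annealing: low beta = soft, high = crisp
    for i in range(n_items):
        load = v.sum(axis=0) - v[i]         # how loaded each option already is
        u = -cost[i] - 0.5 * load           # prefer low cost, discourage overload
        v[i] = softmax(beta * u)

print("hard assignment:", v.argmax(axis=1))
print("option loads   :", np.bincount(v.argmax(axis=1), minlength=K))
```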
Seeing chips : analog VLSI circuits for computer vision
Vision is simple. We open our eyes and, instantly, the world surrounding us is perceived in all its splendor. Yet Artificial Intelligence has been trying, with very limited success, for over 20 years to endow machines with similar abilities. A large van, filled with computers, driving unguided at a mile per hour across gently sloping hills in Colorado, and using a laser-range system to “see”, is the most we have accomplished so far. On the other hand, computers can play a decent game of chess or prove simple mathematical theorems. It is ironic that we are unable to reproduce perceptual abilities that we share with most animals, while some of the features distinguishing us from even our closest cousins, chimpanzees, can be carried out by machines. Vision is difficult.
Thermodynamic and Kinetic Analysis of Sensitivity Amplification in Biological Signal Transduction
Based on a thermodynamic analysis of the kinetic model for the protein
phosphorylation-dephosphorylation cycle, we study the ATP (or GTP) energy
utilization of this ubiquitous biological signal transduction process. It is
shown that the free energy from ATP hydrolysis inside cells, ΔG (the
phosphorylation potential), controls the amplification and sensitivity of the
switch-like cellular module; the response coefficient of the sensitivity
amplification approaches the optimal value of 1, and the Hill coefficient
increases with increasing ΔG. We discover that zero-order ultrasensitivity is
mathematically equivalent to allosteric cooperativity. Furthermore, we show
that the high amplification in ultrasensitivity is mechanistically related to
the proofreading kinetics for protein biosynthesis. Both utilize multiple
kinetic cycles in time to gain temporal cooperativity, in contrast to
allosteric cooperativity that utilizes multiple subunits in a protein.
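The phosphorylation-dephosphorylation cycle analyzed here is classically summarized by the Goldbeter-Koshland function; the sketch below uses that function to show how the effective Hill coefficient grows as the enzymes become saturated (zero-order ultrasensitivity). The thermodynamic part of the abstract, the dependence on the phosphorylation potential ΔG, is not reproduced, and the EC90/EC10 Hill estimate is a standard convention rather than the paper's definition.

```python
# Steady-state phosphorylated fraction of the classic Goldbeter-Koshland
# phosphorylation-dephosphorylation cycle, plus an effective Hill coefficient
# from the response curve (n_H = ln 81 / ln(EC90/EC10)).  Illustrates zero-order
# ultrasensitivity as the enzymes saturate (J -> 0); the abstract's
# thermodynamic (ΔG-dependent) analysis is not reproduced here.
import numpy as np

def gk_fraction(v1, v2, J1, J2):
    """Goldbeter-Koshland steady-state fraction of phosphorylated substrate."""
    B = v2 - v1 + J1 * v2 + J2 * v1
    return 2.0 * v1 * J2 / (B + np.sqrt(B * B - 4.0 * (v2 - v1) * v1 * J2))

def hill_coefficient(J):
    """Effective Hill coefficient of the response to the kinase/phosphatase ratio."""
    ratio = np.geomspace(0.1, 10.0, 20001)          # stimulus: v1 / v2
    w = gk_fraction(ratio, 1.0, J, J)               # monotone in the stimulus
    ec10 = ratio[np.searchsorted(w, 0.1)]
    ec90 = ratio[np.searchsorted(w, 0.9)]
    return np.log(81.0) / np.log(ec90 / ec10)

for J in (1.0, 0.1, 0.01, 0.001):                   # smaller J = more saturated enzymes
    print(f"J = {J:>6}: effective Hill coefficient ~ {hill_coefficient(J):.1f}")
```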
