Morphological feature extraction for statistical learning with applications to solar image data
Abstract: Many areas of science are generating large volumes of digital image data. In order to take full advantage of the high-resolution and high-cadence images modern technology is producing, methods to automatically process and analyze large batches of such images are needed. This involves reducing complex images to simple representations such as binary sketches or numerical summaries that capture embedded scientific information. Using techniques derived from mathematical morphology, we demonstrate how to reduce solar images into simple 'sketch' representations and numerical summaries that can be used for statistical learning. We demonstrate our general techniques on two specific examples: classifying sunspot groups and recognizing coronal loop structures. Our methodology reproduces manual classifications at an overall rate of 90% on a set of 119 magnetogram and white light images of sunspot groups. We also show that our methodology is competitive with other automated algorithms at producing coronal loop tracings and demonstrate robustness through noise simulations.
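The threshold-then-morphology reduction described in the abstract can be illustrated with a minimal sketch. This is not the authors' pipeline: the synthetic image, the threshold, and the 3x3 structuring element are all illustrative choices, using SciPy's standard morphology routines.

```python
# Minimal sketch: reduce an image to a binary "sketch" plus numerical
# summaries via basic mathematical morphology (illustrative, not the
# authors' actual solar-image pipeline).
import numpy as np
from scipy import ndimage

# Synthetic "solar image": a bright disk plus sparse salt noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] = 1.0  # disk of radius ~10
img[rng.random((64, 64)) < 0.01] = 1.0            # isolated noise pixels

# Step 1: threshold to a binary image.
binary = img > 0.5

# Step 2: morphological opening (erosion then dilation) removes the
# isolated noise pixels while roughly preserving the disk's shape.
sketch = ndimage.binary_opening(binary, structure=np.ones((3, 3)))

# Step 3: numerical summaries usable as features for statistical learning.
labels, n_regions = ndimage.label(sketch)
area = int(sketch.sum())
centroid = ndimage.center_of_mass(sketch)
print(n_regions, area)
```

The single-pixel noise cannot survive erosion by the 3x3 element, so only the disk remains; region count, area, and centroid are then simple scalar features.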
Saccade learning with concurrent cortical and subcortical basal ganglia loops
The basal ganglia are a central structure involved in multiple cortical and
subcortical loops. Some of these loops are believed to be responsible for
saccade target selection. We study here how the very specific structural
relationships of these saccadic loops affect the ability to learn spatial
and feature-based tasks.
We propose a model of saccade generation with reinforcement learning
capabilities based on our previous basal ganglia and superior colliculus
models. It is structured around the interactions of two parallel cortico-basal
loops and one tecto-basal loop. The two cortical loops separately deal with
spatial and non-spatial information to select targets in a concurrent way. The
subcortical loop is used to make the final target selection leading to the
production of the saccade. These different loops may work in concert or
interfere with one another with respect to reward maximization. The
interactions between these loops and their learning capabilities are tested
on different saccade tasks.
The results show the ability of this model to correctly learn basic target
selection based on different criteria (spatial or not). Moreover, the model
reproduces and explains training-dependent express saccades toward targets
based on a spatial criterion.
Finally, the model predicts that, in the absence of prefrontal control, the
spatial loop should dominate.
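The concurrent-loop arrangement can be caricatured with a tiny reinforcement-learning sketch. This is emphatically not the authors' basal ganglia model: it is a stripped-down stand-in in which two value-learning "loops" see the same candidate targets and a final selection stage sums their preferences, with all parameters chosen for illustration.

```python
# Caricature of two concurrent value-learning "loops" feeding one final
# selection stage (illustrative stand-in, not the authors' model).
import random

random.seed(0)
n_targets = 4
rewarded = 2  # the spatially defined correct target for this toy task

q_spatial = [0.0] * n_targets  # "spatial loop" value estimates
q_feature = [0.0] * n_targets  # "feature loop" value estimates
alpha, eps = 0.1, 0.1          # learning rate, exploration rate

for trial in range(2000):
    # Final ("subcortical") selection: sum the two loops' preferences.
    combined = [qs + qf for qs, qf in zip(q_spatial, q_feature)]
    if random.random() < eps:
        choice = random.randrange(n_targets)   # exploratory saccade
    else:
        choice = combined.index(max(combined))  # greedy saccade
    reward = 1.0 if choice == rewarded else 0.0
    # Both loops learn from the same reward signal.
    q_spatial[choice] += alpha * (reward - q_spatial[choice])
    q_feature[choice] += alpha * (reward - q_feature[choice])

best = max(range(n_targets), key=lambda i: q_spatial[i] + q_feature[i])
print(best)
```

In this purely spatial task both loops converge together; the interference the abstract describes would arise when the two loops' criteria disagree about which target is rewarded.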
Learning to Infer Graphics Programs from Hand-Drawn Images
We introduce a model that learns to convert simple hand drawings into
graphics programs written in a subset of \LaTeX. The model combines techniques
from deep learning and program synthesis. We learn a convolutional neural
network that proposes plausible drawing primitives that explain an image. These
drawing primitives are like a trace of the set of primitive commands issued by
a graphics program. We learn a model that uses program synthesis techniques to
recover a graphics program from that trace. These programs have constructs like
variable bindings, iterative loops, or simple kinds of conditionals. With a
graphics program in hand, we can correct errors made by the deep network,
measure similarity between drawings by use of similar high-level geometric
structures, and extrapolate drawings. Taken together, these results are a step
towards agents that induce useful, human-readable programs from perceptual
input.
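The trace-to-program step can be illustrated with a toy synthesizer, far simpler than the paper's system and entirely hypothetical in its names: it checks whether a trace of proposed primitives is explained by a single constant-stride loop and, if so, emits the more compact looping program.

```python
# Toy illustration of recovering a loop construct from a trace of drawing
# primitives (hypothetical sketch, not the paper's program synthesizer).
def synthesize(trace):
    """trace: list of (shape, x, y) tuples proposed by a perception stage."""
    if len(trace) < 2:
        return trace
    shape = trace[0][0]
    dx = trace[1][1] - trace[0][1]
    dy = trace[1][2] - trace[0][2]
    # The trace is loop-shaped iff every primitive matches the pattern
    # (shape, x0 + i*dx, y0 + i*dy).
    uniform = all(
        t == (shape, trace[0][1] + i * dx, trace[0][2] + i * dy)
        for i, t in enumerate(trace)
    )
    if uniform:
        return [("for", len(trace), shape, trace[0][1], trace[0][2], dx, dy)]
    return trace  # no compressing program found; keep the raw trace

# Three circles spaced 10 units apart collapse to one loop construct.
prog = synthesize([("circle", 0, 0), ("circle", 10, 0), ("circle", 20, 0)])
print(prog)  # → [('for', 3, 'circle', 0, 0, 10, 0)]
```

Once such a program is recovered, the benefits mentioned above follow naturally: a noisy fourth circle can be corrected to the loop's prediction, and evaluating the loop for more iterations extrapolates the drawing.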
Dynamics and Performance of Susceptibility Propagation on Synthetic Data
We study the performance and convergence properties of the Susceptibility
Propagation (SusP) algorithm for solving the Inverse Ising problem. We first
study how the temperature parameter (T) in a Sherrington-Kirkpatrick model
generating the data influences the performance and convergence of the
algorithm. We find that in the high-temperature regime (T>4), the algorithm
performs well and its quality is limited only by the quality of the supplied
data. In the low-temperature regime (T<4), we find that the algorithm typically
does not converge, yielding diverging values for the couplings. However, we
show that by stopping the algorithm at the right time, before the divergence
becomes serious, good reconstruction can be achieved down to T~2. We then show
that dense connectivity, loopiness of the connectivity, and high absolute
magnetization all degrade the performance of the algorithm. When absolute
magnetization is high, we show that other methods can work better than SusP.
Finally, we show that for neural data with high absolute magnetization, SusP
performs less well than TAP inversion.
Comment: 9 pages, 7 figures
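As a point of reference for the inverse Ising problem, a mean-field baseline is easy to sketch. This is naive mean-field (nMF) inversion, a simpler relative of the TAP inversion mentioned in the abstract, not the SusP algorithm itself; the two-spin test data are an illustrative choice.

```python
# Naive mean-field inversion for the inverse Ising problem (a simple
# baseline; not SusP, and simpler than TAP inversion).
import numpy as np

def nmf_couplings(samples):
    """samples: (n_samples, n_spins) array of +/-1 spins.
    nMF estimates the couplings as minus the off-diagonal part of the
    inverse connected correlation matrix."""
    m = samples.mean(axis=0)
    C = samples.T @ samples / len(samples) - np.outer(m, m)
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)  # self-couplings are not defined here
    return J

# Toy data: two spins sampled exactly from P(s1, s2) ∝ exp(J_true * s1 * s2),
# so <s1 s2> = tanh(J_true) and both magnetizations are zero.
rng = np.random.default_rng(1)
J_true = 0.2
p_align = np.exp(J_true) / (np.exp(J_true) + np.exp(-J_true))
s1 = rng.choice([-1, 1], size=100_000)
s2 = np.where(rng.random(100_000) < p_align, s1, -s1)
samples = np.stack([s1, s2], axis=1)

J_est = nmf_couplings(samples)
print(J_est[0, 1])  # close to J_true at this weak coupling
```

At weak coupling (high temperature) the nMF estimate c/(1-c²), with c = tanh(J), is close to the true J, mirroring the abstract's observation that reconstruction is easy at high T; at strong coupling the mean-field approximation degrades.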