Contractive De-noising Auto-encoder
An auto-encoder is a special kind of neural network based on reconstruction. The de-noising auto-encoder (DAE) is an improved auto-encoder that is made robust to the input by first corrupting the original data and then reconstructing the original input by minimizing a reconstruction error function. The contractive auto-encoder (CAE) is another improved auto-encoder that learns robust features by penalizing the Frobenius norm of the Jacobian matrix of the learned features with respect to the original input. In this paper, we combine the de-noising auto-encoder and the contractive auto-encoder and propose another improved auto-encoder, the contractive de-noising auto-encoder (CDAE), which is robust to both the original input and the learned feature. We stack CDAEs to extract more abstract features and apply an SVM for classification. Experimental results on the benchmark MNIST dataset show that the proposed CDAE performs better than both the DAE and the CAE, demonstrating the effectiveness of our method.
Comment: Figures edited
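The combined objective described above can be sketched numerically. The following is a minimal toy sketch, not the paper's implementation: it assumes tied weights, a sigmoid encoder, Gaussian input corruption, and an illustrative penalty weight `lam`; the closed-form Jacobian penalty used below is the standard one for sigmoid units.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cdae_loss(x, W, b, c, noise_level=0.1, lam=0.1):
    """One-sample CDAE objective: reconstruction error on a corrupted
    input plus the contractive (Jacobian) penalty on the learned feature."""
    # De-noising part: corrupt the input, then encode and decode it.
    x_tilde = x + noise_level * rng.standard_normal(x.shape)
    h = sigmoid(W @ x_tilde + b)           # learned feature
    x_hat = sigmoid(W.T @ h + c)           # reconstruction (tied weights)
    recon = np.sum((x - x_hat) ** 2)       # squared reconstruction error
    # Contractive part: squared Frobenius norm of dh/dx_tilde.  For a
    # sigmoid encoder this is sum_j (h_j (1 - h_j))^2 * ||W_j||^2.
    jac_frob2 = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))
    return recon + lam * jac_frob2

x = rng.random(784)                        # one MNIST-sized input
W = 0.01 * rng.standard_normal((64, 784))  # 64 hidden units (illustrative)
b = np.zeros(64)
c = np.zeros(784)
loss = cdae_loss(x, W, b, c)
```

Setting `noise_level=0` recovers a plain CAE term, and `lam=0` recovers a plain DAE, which is how the two components combine in the CDAE objective.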
Energy-based temporal neural networks for imputing missing values
Imputing missing values in high dimensional time series is a difficult problem. There have been some approaches to the problem [11,8] where neural architectures were trained as probabilistic models of the data. However, we argue that this approach is not optimal. We propose to view temporal neural networks with latent variables as energy-based models and train them for missing value recovery directly. In this paper we introduce two energy-based models. The first model is based on a one dimensional convolution and the second model utilizes a recurrent neural network. We demonstrate how ideas from the energy-based learning framework can be used to train these models to recover missing values. The models are evaluated on a motion capture dataset
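The core idea, recovering missing values by descending an energy function directly in the missing coordinates, can be illustrated with a toy example. The quadratic smoothness energy below is an assumption for the sketch only; the paper's models are convolutional and recurrent networks, not this hand-picked energy.

```python
import numpy as np

def energy(x):
    # Toy energy: penalize large second differences, so low energy
    # corresponds to a smooth trajectory.
    return np.sum(np.diff(x, n=2) ** 2)

def impute(x, missing, steps=500, lr=0.1):
    """Recover missing entries by gradient descent on the energy,
    updating only the missing coordinates."""
    x = x.copy()
    for _ in range(steps):
        for i in missing:
            # Central-difference numerical gradient w.r.t. entry i.
            eps = 1e-4
            xp, xm = x.copy(), x.copy()
            xp[i] += eps
            xm[i] -= eps
            g = (energy(xp) - energy(xm)) / (2 * eps)
            x[i] -= lr * g
    return x

t = np.linspace(0, 1, 20)
truth = np.sin(2 * np.pi * t)       # a smooth motion-like signal
obs = truth.copy()
missing = [7, 8, 9]
obs[missing] = 0.0                  # corrupt three consecutive entries
rec = impute(obs, missing)
```

With a learned energy the gradient would come from backpropagation through the network rather than numerical differentiation, but the recovery loop is the same shape.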
Low-cost representation for restricted Boltzmann machines
This paper presents a method for extracting a low-cost representation from restricted Boltzmann machines. The new representation can be considered a compression of the network, requiring much less storage while reasonably preserving the network's performance at feature learning. We show that the compression can be done by converting the weight matrix of real numbers into a matrix of three values {-1, 0, 1} associated with a score vector of real numbers. This set of values is similar enough to Boolean values to let us further translate the representation into logical rules. In the experiments reported in this paper, we evaluate the performance of our compression method on image datasets, obtaining promising results. Experiments on the MNIST handwritten digit classification dataset, for example, have shown that a 95% saving in memory can be achieved with no significant drop in accuracy.
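One plausible way to realize a {-1, 0, 1} matrix plus a real score vector is magnitude thresholding followed by a least-squares scale per row. This is a sketch under assumptions (per-row threshold `delta * mean|W|`, least-squares score); the paper's exact conversion rule may differ.

```python
import numpy as np

def ternarize(W, delta=0.5):
    """Compress a real weight matrix into a {-1, 0, 1} matrix T plus a
    per-row score s, so that s[i] * T[i] approximates row W[i]."""
    # Per-row magnitude threshold (assumed heuristic for this sketch).
    thresh = delta * np.abs(W).mean(axis=1, keepdims=True)
    T = np.where(W > thresh, 1, np.where(W < -thresh, -1, 0))
    # Least-squares scale: argmin_s ||W[i] - s * T[i]||^2.
    num = np.sum(W * T, axis=1)
    den = np.maximum(np.sum(T * T, axis=1), 1)
    score = num / den
    return T.astype(np.int8), score

rng = np.random.default_rng(1)
W = rng.standard_normal((100, 784))        # e.g. an RBM weight matrix
T, s = ternarize(W)
W_hat = s[:, None] * T
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Storing `T` as 2-bit codes plus one float per row is where the memory saving comes from; the sign pattern in `T` is also what makes a rule-based (Boolean) reading of each hidden unit possible.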
A study of the economic benefits of meteorological satellite data
Satellite data, while most useful in data-poor areas, also serves to fine-tune forecasts in data-rich areas. This has a significant economic benefit because, as previously stated, even one improved forecast per client per year can save each client thousands of dollars. Multiplied by several hundred clients, the dollar savings are sizeable. The great educational value of experience with satellite data undoubtedly leads to improved forecasts. Any future satellite data delivery system should take into account the needs and facilities of the user community to make it most useful.
A simple model of unbounded evolutionary versatility as a largest-scale trend in organismal evolution
The idea that there are any large-scale trends in the evolution of biological organisms is highly controversial. It is commonly believed, for example, that there is a large-scale trend in evolution towards increasing complexity, but empirical and theoretical arguments undermine this belief. Natural selection results in organisms that are well adapted to their local environments, but it is not clear how local adaptation can produce a global trend. In this paper, I present a simple computational model in which local adaptation to a randomly changing environment results in a global trend towards increasing evolutionary versatility. In this model, for evolutionary versatility to increase without bound, the environment must be highly dynamic. The model also shows that unbounded evolutionary versatility implies an accelerating evolutionary pace. I believe that unbounded increase in evolutionary versatility is a large-scale trend in evolution. I discuss some of the testable predictions about organismal evolution that are suggested by the model.
New results from H.E.S.S. observations of galaxy clusters
Clusters of galaxies are believed to contain a significant population of
cosmic rays. From the radio and probably hard X-ray bands it is known that
clusters are the spatially most extended emitters of non-thermal radiation in
the Universe. Due to their content of cosmic rays, galaxy clusters are also
potential sources of VHE (>100 GeV) gamma rays. Recently, the massive, nearby
cluster Abell 85 has been observed with the H.E.S.S. experiment in VHE gamma
rays with a very deep exposure as part of an ongoing campaign. No significant
gamma-ray signal has been found at the position of the cluster. The
non-detection of this object with H.E.S.S. constrains the total energy of
cosmic rays in this system. For a hard spectral index of the cosmic rays of
-2.1 and if the cosmic-ray energy density follows the large scale gas density
profile, the limit on the fraction of energy in these non-thermal particles
with respect to the total thermal energy of the intra-cluster medium is 8% for
this particular cluster. This value lies at the lower bound of model
predictions.
Comment: 4 pages, one figure; invited talk at the 2nd Heidelberg workshop
"High-Energy Gamma-rays and Neutrinos from Extra-Galactic Sources", January
13-16, 2009; to be published in Int. J. Mod. Phys.
Comparing Probabilistic Models for Melodic Sequences
Modelling the real world complexity of music is a challenge for machine
learning. We address the task of modelling melodic sequences from the same music
genre. We perform a comparative analysis of two probabilistic models: a
Dirichlet Variable Length Markov Model (Dirichlet-VMM) and a Time Convolutional
Restricted Boltzmann Machine (TC-RBM). We show that the TC-RBM learns
descriptive music features, such as underlying chords and typical melody
transitions and dynamics. We assess the models for future prediction and
compare their performance to a VMM, which is the current state of the art in
melody generation. We show that both models perform significantly better than
the VMM, with the Dirichlet-VMM marginally outperforming the TC-RBM. Finally,
we evaluate the short order statistics of the models, using the
Kullback-Leibler divergence between test sequences and model samples, and show
that our proposed methods match the statistics of the music genre significantly
better than the VMM.
Comment: in Proceedings of ECML-PKDD 2011, Lecture Notes in Computer Science,
vol. 6913, pp. 289-304, Springer (2011).
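The short-order-statistics evaluation, comparing n-gram distributions of test sequences and model samples via KL divergence, can be sketched as follows. The melodies, the bigram order, and the additive smoothing constant are illustrative assumptions; the paper's pitch representation and smoothing are not specified here.

```python
import numpy as np
from collections import Counter

def ngram_dist(seq, n):
    """Empirical distribution over length-n subsequences of a melody,
    represented here as a list of MIDI pitch numbers."""
    grams = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) with additive smoothing inside the log, so n-grams
    unseen under q do not make the divergence infinite."""
    support = set(p) | set(q)
    return sum(p.get(g, 0.0) * np.log((p.get(g, 0.0) + eps) /
                                      (q.get(g, 0.0) + eps))
               for g in support)

test_melody  = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
model_sample = [60, 62, 64, 62, 64, 62, 60, 62, 64, 62, 60]
score = kl_divergence(ngram_dist(test_melody, 2),
                      ngram_dist(model_sample, 2))
```

A model whose samples match the genre's short-order statistics yields a low `score`; identical bigram distributions give exactly zero.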