Classifying Amharic News Text Using Self-Organizing Maps
The paper addresses the use of artificial neural networks for the classification of Amharic news items. Amharic is the language of countrywide communication in Ethiopia and has its own writing system containing extensive systematic redundancy. It is quite dialectally diversified and probably representative of the languages of a continent that has so far received little attention within the language-processing field.
The experiments investigated document clustering around user queries using Self-Organizing Maps, an unsupervised-learning neural network strategy. The best ANN model showed a precision of 60.0% when clustering unseen data and of 69.5% when classifying it.
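The clustering approach described can be illustrated with a minimal Self-Organizing Map. The sketch below is a from-scratch NumPy toy, not the authors' setup: two synthetic Gaussian clusters stand in for document vectors, and the grid size, decay schedules, and function names are all illustrative choices.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self-Organizing Map using plain NumPy."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each map node, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            lr = lr0 * (1 - t / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 0.5  # shrinking neighbourhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            neigh = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian neighbourhood
            weights += lr * neigh[:, None] * (x - weights)
            t += 1
    return weights

def bmu_of(weights, x):
    """Index of the map node closest to vector x."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))

# Two well-separated synthetic "document vector" clusters.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.05, size=(20, 8))
b = rng.normal(1.0, 0.05, size=(20, 8))
som = train_som(np.vstack([a, b]))
# Documents from different clusters should map to different nodes.
print(bmu_of(som, a[0]) != bmu_of(som, b[0]))
```

After training, cluster membership of a new document is read off from its best-matching unit, which is the sense in which an unsupervised map can be used for classification.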
Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net
We propose a novel method to directly learn a stochastic transition operator
whose repeated application provides generated samples. Traditional undirected
graphical models approach this problem indirectly by learning a Markov chain
model whose stationary distribution obeys detailed balance with respect to a
parameterized energy function. The energy function is then modified so the
model and data distributions match, with no guarantee on the number of steps
required for the Markov chain to converge. Moreover, the detailed balance
condition is highly restrictive: energy based models corresponding to neural
networks must have symmetric weights, unlike biological neural circuits. In
contrast, we develop a method for directly learning arbitrarily parameterized
transition operators capable of expressing non-equilibrium stationary
distributions that violate detailed balance, thereby enabling us to learn more
biologically plausible asymmetric neural networks and more general non-energy
based dynamical systems. The proposed training objective, which we derive via
principled variational methods, encourages the transition operator to "walk
back" in multi-step trajectories that start at data-points, as quickly as
possible back to the original data points. We present a series of experimental
results illustrating the soundness of the proposed approach, Variational
Walkback (VW), on the MNIST, CIFAR-10, SVHN and CelebA datasets, demonstrating
superior samples compared to earlier attempts to learn a transition operator.
We also show that although each rapid training trajectory is limited to a
finite but variable number of steps, our transition operator continues to
generate good samples well past the length of such trajectories, thereby
demonstrating the match of its non-equilibrium stationary distribution to the
data distribution. Source code: http://github.com/anirudh9119/walkback_nips17
Comment: To appear at NIPS 2017
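The walk-away/walk-back training idea can be sketched in a few lines. The toy below is a heavily simplified, assumption-laden reading of the method, not the authors' implementation: a tiny NumPy net serves as the transition mean, the operator noise is fixed rather than temperature-annealed, and maximizing the Gaussian log-likelihood of the reverse chain reduces to mean-squared-error regression of each heated state onto its predecessor. The 2-D ring data and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_data(n):
    """Toy data distribution: points on the unit circle."""
    ang = rng.uniform(0, 2 * np.pi, n)
    return np.stack([np.cos(ang), np.sin(ang)], axis=1)

# One-hidden-layer net as the (assumed) transition mean f(x).
W1 = rng.normal(0, 0.3, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, (32, 2)); b2 = np.zeros(2)

def f(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

sigma = 0.1          # fixed operator noise (a simplification; VW anneals it)
K, lr = 5, 0.05
for step in range(2000):
    x = sample_data(64)
    traj = [x]
    for t in range(K):                       # "walk away": heated trajectory
        mu, _ = f(traj[-1])
        traj.append(mu + sigma * (t + 1) * rng.normal(size=mu.shape))
    for t in range(K, 0, -1):                # "walk back": regress onto predecessor
        mu, h = f(traj[t])
        g = (mu - traj[t - 1]) / len(x)      # gradient of the MSE w.r.t. mu
        gW2 = h.T @ g; gb2 = g.sum(0)
        gh = g @ W2.T * (1 - h ** 2)
        gW1 = traj[t].T @ gh; gb1 = gh.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Sampling: repeatedly apply the learned operator starting from noise.
samples = rng.normal(size=(500, 2))
for _ in range(50):
    mu, _ = f(samples)
    samples = mu + sigma * rng.normal(size=samples.shape)
r = np.linalg.norm(samples, axis=1)
```

Note that the transition mean here has no symmetric-weight constraint, which is the point of dropping detailed balance: the sampler is just repeated application of an arbitrary learned operator.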
Neural networks as surrogate models for nonlinear, transient aerodynamics within an aeroelastic coupling-scheme in the time domain
In this paper the creation of a nonlinear, transient surrogate model is described
that can be used within an aeroelastic coupling scheme in the transonic range. The
method is based on the theory of artificial neural networks as well as the
autoregressive moving-average (ARMA) method. It is shown that the method is able to
approximate the nonlinear aeroelastic behaviour of the NLR7301 airfoil; limit-cycle
oscillations can also be approximated with acceptable accuracy.
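Combining a neural network with ARMA-style lagged regressors amounts to a NARX-type model: the next output is predicted from lagged outputs and lagged inputs. A minimal sketch under that assumption, on synthetic data rather than the NLR7301 case; the system, lag count, and network size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "aerodynamic" response: a nonlinear system driven by input u.
n = 2000
u = np.sin(0.05 * np.arange(n)) + 0.1 * rng.normal(size=n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = 0.6 * y[k-1] - 0.2 * y[k-2] + 0.5 * np.tanh(u[k-1]) + 0.1 * u[k-2]

# ARMA-style regressors: two lags of the output and of the input
# feed a small network (a NARX surrogate).
X = np.stack([y[1:-1], y[:-2], u[1:-1], u[:-2]], axis=1)
T = y[2:]

# One-hidden-layer net trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    P = (H @ W2 + b2).ravel()
    g = (P - T)[:, None] / len(T)            # gradient of the MSE
    gW2 = H.T @ g; gb2 = g.sum(0)
    gH = g @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

H = np.tanh(X @ W1 + b1)
P = (H @ W2 + b2).ravel()
rmse = float(np.sqrt(np.mean((P - T) ** 2)))
```

In a coupling scheme, the trained surrogate would replace the expensive CFD call: at each time step the structural solver supplies the new input lag and the surrogate returns the aerodynamic load, feeding its own past predictions back in as the output lags.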
Learning Neural Implicit Representations with Surface Signal Parameterizations
Neural implicit surface representations have recently emerged as a popular
alternative to explicit 3D object encodings, such as polygonal meshes,
tabulated points, or voxels. While significant work has improved the geometric
fidelity of these representations, much less attention has been given to their final
appearance. Traditional explicit object representations commonly couple the 3D
shape data with auxiliary surface-mapped image data, such as diffuse color
textures and fine-scale geometric details in normal maps that typically require
a mapping of the 3D surface onto a plane, i.e., a surface parameterization;
implicit representations, on the other hand, cannot be easily textured due to
lack of configurable surface parameterization. Inspired by this digital content
authoring methodology, we design a neural network architecture that implicitly
encodes the underlying surface parameterization suitable for appearance data.
As such, our model remains compatible with existing mesh-based digital content
with appearance data. Motivated by recent work that overfits compact networks
to individual 3D objects, we present a new weight-encoded neural implicit
representation that extends the capability of neural implicit surfaces to
enable various common and important applications of texture mapping. Our method
outperforms reasonable baselines and state-of-the-art alternatives.
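The core idea, an implicit function that returns a uv coordinate alongside the signed distance so that an ordinary texture image can be sampled, can be illustrated without any training. The sketch below substitutes the analytic unit-sphere SDF and a spherical uv mapping for the learned network; `implicit_with_uv` and the checkerboard texture are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

def implicit_with_uv(p):
    """Return (signed distance, uv) for 3-D points p.

    Stand-in for a learned network: analytic unit-sphere SDF plus
    a spherical surface parameterization mapped into [0, 1]^2.
    """
    r = np.linalg.norm(p, axis=-1)
    sdf = r - 1.0
    theta = np.arccos(np.clip(p[..., 2] / np.maximum(r, 1e-9), -1.0, 1.0))
    phi = np.arctan2(p[..., 1], p[..., 0])
    uv = np.stack([phi / (2 * np.pi) + 0.5, theta / np.pi], axis=-1)
    return sdf, uv

# A tiny checkerboard standing in for a diffuse color texture.
tex = np.indices((8, 8)).sum(0) % 2

def shade(p):
    """Appearance lookup: surface point -> uv -> texel."""
    _, uv = implicit_with_uv(p)
    ij = np.minimum((uv * 8).astype(int), 7)
    return tex[ij[..., 1], ij[..., 0]]

p = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # points on the sphere
sdf, uv = implicit_with_uv(p)
print(np.allclose(sdf, 0.0), shade(p).shape)
```

The point of the construction is the second output head: because the implicit function yields a uv per surface point, existing mesh-authored texture assets can be reused without extracting an explicit surface first.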
Feed-Forward Propagation of Temporal and Rate Information between Cortical Populations during Coherent Activation in Engineered In Vitro Networks.
Transient propagation of information across neuronal assemblies is thought to underlie many cognitive processes. However, the nature of the neural code embedded within these transmissions remains uncertain. Much of our understanding of how information is transmitted among these assemblies has been derived from computational models. While such models have been instrumental in understanding these processes, they often make simplifying assumptions about the biophysical properties of neurons that may influence the nature and properties of the code being expressed. To address this issue we created an in vitro analog of a feed-forward network composed of two small populations (also referred to as assemblies or layers) of living dissociated rat cortical neurons. The populations were separated by, and communicated through, a microelectromechanical systems (MEMS) device containing a strip of microscale tunnels. Culturing one population in the first layer and the second a few days later induced unidirectional growth of axons through the microtunnels, resulting in primarily feed-forward communication between the two small neural populations. In this study we systematically manipulated the number of tunnels connecting the layers and hence the number of axons providing communication between those populations. We then assessed the effect that reducing the number of tunnels has on between-layer communication capacity and on the fidelity of neural transmission among spike trains transmitted across and within layers. Based on Victor-Purpura's and van Rossum's spike-train similarity metrics, we show evidence for both rate and temporal information embedded within these transmissions, whose fidelity increased during communication both between and within layers as the number of tunnels was increased.
We also provide evidence reinforcing the role of synchronized activity in transmission fidelity during the spontaneous synchronized network burst events that propagated between layers, and we highlight the potential of these MEMS devices as a tool for further investigation of structural and functional dynamics among neural populations.
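Of the two spike-train similarity metrics named above, the van Rossum distance is the easier to sketch: each spike train is convolved with a causal exponential kernel and the L2 distance between the filtered traces is taken (the Victor-Purpura metric is instead an edit distance over spike shifts, insertions, and deletions). A minimal NumPy version under one common normalization; the time grid and kernel time constant are illustrative choices:

```python
import numpy as np

def van_rossum_distance(t1, t2, tau=0.01, dt=0.0005, t_max=0.5):
    """van Rossum spike-train distance.

    Each train is turned into a continuous trace by convolving its
    spikes with a causal exponential kernel exp(-t/tau); the distance
    is the L2 norm of the trace difference, scaled by 1/tau.
    """
    t = np.arange(0.0, t_max, dt)

    def trace(spikes):
        f = np.zeros_like(t)
        for s in spikes:
            f += (t >= s) * np.exp(-(t - s) / tau)
        return f

    d2 = np.sum((trace(t1) - trace(t2)) ** 2) * dt / tau
    return float(np.sqrt(d2))

# Identical trains have distance 0; moving one spike by several tau
# makes the pair nearly independent, giving a distance near 1.
a = [0.1, 0.2, 0.3]
b = [0.1, 0.2, 0.35]
print(van_rossum_distance(a, a) == 0.0, van_rossum_distance(a, b) > 0.5)
```

Small `tau` makes the metric sensitive to precise spike timing while large `tau` washes timing out and leaves mainly rate information, which is how such metrics can separate temporal from rate coding in transmissions like those studied here.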