TopologyNet: Topology based deep convolutional neural networks for biomolecular property predictions
Although deep learning approaches have had tremendous success in image, video
and audio processing, computer vision, and speech recognition, their
applications to three-dimensional (3D) biomolecular structural data sets have
been hindered by the entangled geometric complexity and biological complexity.
We introduce topology, i.e., element-specific persistent homology (ESPH), to
untangle geometric complexity and biological complexity. ESPH represents 3D
complex geometry by one-dimensional (1D) topological invariants and retains
crucial biological information via a multichannel image representation. It is
able to reveal hidden structure-function relationships in biomolecules. We
further integrate ESPH and convolutional neural networks to construct a
multichannel topological neural network (TopologyNet) for the predictions of
protein-ligand binding affinities and protein stability changes upon mutation.
To overcome the limitations to deep learning arising from small and noisy
training sets, we present a multitask topological convolutional neural network
(MT-TCNN). We demonstrate that the present TopologyNet architectures outperform
other state-of-the-art methods in the predictions of protein-ligand binding
affinities, globular protein mutation impacts, and membrane protein mutation
impacts.
Comment: 20 pages, 8 figures, 5 tables
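The abstract above does not spell out how persistent homology is computed; as a minimal, hypothetical illustration (not the authors' ESPH pipeline), the sketch below computes dimension-0 persistence of a Vietoris-Rips filtration, where the finite component deaths coincide with minimum-spanning-tree edge lengths:

```python
from itertools import combinations
import math

def persistence_dim0(points):
    """0-dimensional persistence of a Vietoris-Rips filtration.

    Every point is a component born at scale 0; when two components
    merge at pairwise distance d, one of them dies, giving a (0, d)
    persistence pair.  The deaths are exactly the edge lengths of a
    minimum spanning tree (Kruskal's algorithm with union-find).
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)       # one component dies at scale d
    # n-1 finite pairs plus one essential class that never dies
    return [(0.0, d) for d in deaths] + [(0.0, math.inf)]
```

On two well-separated 1D clusters, e.g. `[(0.0,), (1.0,), (10.0,), (11.0,)]`, the long-lived pair born at 0 and dying at 9.0 reveals the two-cluster structure, which is the kind of hidden shape information a distance measure alone misses.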
Entropy-based parametric estimation of spike train statistics
We consider the evolution of a network of neurons, focusing on the asymptotic
behavior of spikes dynamics instead of membrane potential dynamics. The spike
response is not sought as a deterministic response in this context, but as a
conditional probability: "Reading out the code" consists of inferring such a
probability. This probability is computed from empirical raster plots, by using
the framework of thermodynamic formalism in ergodic theory. This gives us a
parametric statistical model where the probability has the form of a Gibbs
distribution. In this respect, this approach generalizes the seminal and
profound work of Schneidman and collaborators. A minimal presentation of the
formalism is reviewed here, while a general algorithmic estimation method is
proposed, yielding fast, convergent implementations. It is also made explicit how
several spike observables (entropy, rate, synchronizations, correlations) are
given in closed form from the parametric estimation. This paradigm not only
allows us to estimate the spike statistics, given a design choice, but also to
compare different models, thus answering comparative questions about the
neural code such as: "are correlations (or time synchrony, or a given set of
spike patterns, ...) significant with respect to rate coding only?" A numerical
validation of the method is proposed and the perspectives regarding spike-train
code analysis are also discussed.
Comment: 37 pages, 8 figures, submitted
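To make the Gibbs form concrete, here is a toy brute-force sketch (an assumption of this edit, not the authors' thermodynamic-formalism estimator): spike words w in {0,1}^n get probability proportional to exp(H(w)) under a pairwise potential, and observables such as firing rates and entropy follow in closed form by enumeration:

```python
from itertools import product
import math

def gibbs_spike_model(n, h, J):
    """Toy Gibbs distribution over binary spike words w in {0,1}^n.

    Potential: H(w) = sum_i h[i]*w_i + sum_{i<j} J[i][j]*w_i*w_j.
    Returns (probabilities, firing rates, entropy), all by brute-force
    enumeration -- only feasible for small n.
    """
    words = list(product((0, 1), repeat=n))
    energies = []
    for w in words:
        e = sum(h[i] * w[i] for i in range(n))
        e += sum(J[i][j] * w[i] * w[j]
                 for i in range(n) for j in range(i + 1, n))
        energies.append(e)
    z = sum(math.exp(e) for e in energies)            # partition function
    probs = {w: math.exp(e) / z for w, e in zip(words, energies)}
    # closed-form observables from the parametric model
    rates = [sum(p for w, p in probs.items() if w[i]) for i in range(n)]
    entropy = -sum(p * math.log(p) for p in probs.values() if p > 0)
    return probs, rates, entropy
```

With all parameters zero the model reduces to independent fair coins, so each rate is 0.5 and the entropy is n·ln 2; nonzero J terms then let one test whether pairwise correlations improve the fit over rate coding alone.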
Visualization of AE's Training on Credit Card Transactions with Persistent Homology
Auto-encoders are among the most popular neural network architectures for
dimension reduction. They are composed of two parts: the encoder which maps the
model distribution to a latent manifold and the decoder which maps the latent
manifold to a reconstructed distribution. However, auto-encoders are known to
scatter data chaotically across the latent manifold, resulting in an
incomplete reconstructed distribution. Current distance
measures fail to detect this problem because they are not able to acknowledge
the shape of the data manifolds, i.e. their topological features, and the scale
at which the manifolds should be analyzed. We propose Persistent Homology for
Wasserstein Auto-Encoders, called PHom-WAE, a new methodology to assess and
measure the data distribution of a generative model. PHom-WAE minimizes the
Wasserstein distance between the true distribution and the reconstructed
distribution and uses persistent homology, the study of the topological
features of a space at different spatial resolutions, to compare the nature of
the latent manifold and the reconstructed distribution. Our experiments
underline the potential of persistent homology for Wasserstein Auto-Encoders in
comparison to Variational Auto-Encoders, another type of generative model. The
experiments are conducted on a real-world data set particularly challenging for
traditional distance measures and auto-encoders. PHom-WAE is the first
methodology to propose a topological distance measure, the bottleneck
distance, for Wasserstein Auto-Encoders, used to compare high-quality decoded
samples in the context of credit card transactions.
Comment: arXiv admin note: substantial text overlap with arXiv:1905.0989
Network synchronization: Optimal and Pessimal Scale-Free Topologies
By employing a recently introduced optimization algorithm, we explicitly
design optimally synchronizable (unweighted) networks for any given scale-free
degree distribution. We explore how the optimization process affects
degree-degree correlations and observe a generic tendency towards
disassortativity. Still, we show that there is not a one-to-one correspondence
between synchronizability and disassortativity. On the other hand, we study the
nature of optimally un-synchronizable networks, that is, networks whose
topology minimizes the range of stability of the synchronous state. The
resulting "pessimal networks" turn out to have a highly assortative
string-like structure. We also derive a rigorous lower bound for the Laplacian
eigenvalue ratio controlling synchronizability, which helps to explain the
impact of degree correlations on network synchronizability.
Comment: 11 pages, 4 figs, submitted to J. Phys. A (proceedings of Complex
Networks 2007)
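The eigenvalue ratio mentioned above can be illustrated with a small pure-Python sketch (a generic computation, not the authors' optimization algorithm): build the graph Laplacian L = D - A and take the ratio lambda_N / lambda_2 of its largest to its smallest nonzero eigenvalue, computed here with classical Jacobi rotations:

```python
import math

def laplacian(adj):
    """Graph Laplacian L = D - A from a symmetric adjacency matrix."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j]
             for j in range(n)] for i in range(n)]

def jacobi_eigenvalues(a, max_rotations=200):
    """Eigenvalues of a symmetric matrix via classical Jacobi rotations
    (pure Python; fine for small matrices)."""
    n = len(a)
    a = [row[:] for row in a]
    for _ in range(max_rotations):
        # locate the largest off-diagonal element
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(a[i][j]) > big:
                    p, q, big = i, j, abs(a[i][j])
        if big < 1e-12:
            break
        # rotation angle that annihilates a[p][q]
        theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                       # rows: G^T A
            apk, aqk = a[p][k], a[q][k]
            a[p][k] = c * apk - s * aqk
            a[q][k] = s * apk + c * aqk
        for k in range(n):                       # columns: (G^T A) G
            akp, akq = a[k][p], a[k][q]
            a[k][p] = c * akp - s * akq
            a[k][q] = s * akp + c * akq
    return sorted(a[i][i] for i in range(n))

def sync_eigenratio(adj):
    """Ratio lambda_N / lambda_2 of Laplacian eigenvalues of a
    connected graph; a smaller ratio means a wider stability range
    for the synchronous state."""
    ev = jacobi_eigenvalues(laplacian(adj))
    return ev[-1] / ev[1]
```

For the path graph on three nodes the Laplacian spectrum is {0, 1, 3}, giving a ratio of 3, while the complete graph K3 has spectrum {0, 3, 3} and the optimal ratio 1, matching the intuition that denser, better-mixed topologies synchronize more easily.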
One-class classifiers based on entropic spanning graphs
One-class classifiers offer valuable tools to assess the presence of outliers
in data. In this paper, we propose a design methodology for one-class
classifiers based on entropic spanning graphs. Our approach can also handle
non-numeric data by means of an embedding procedure. The spanning graph is
learned on the embedded input data, and the resulting partition of vertices
defines the classifier. The final partition is
derived by exploiting a criterion based on mutual information minimization.
Here, we compute the mutual information by using a convenient formulation
provided in terms of the α-Jensen difference. Once training is
completed, in order to associate a confidence level with the classifier
decision, a graph-based fuzzy model is constructed. The fuzzification process
is based only on topological information of the vertices of the entropic
spanning graph. As such, the proposed one-class classifier is also suitable for
data characterized by complex geometric structures. We provide experiments on
well-known benchmarks containing both feature vectors and labeled graphs. In
addition, we apply the method to the protein solubility recognition problem by
considering several representations for the input samples. Experimental results
demonstrate the effectiveness and versatility of the proposed method with
respect to other state-of-the-art approaches.
Comment: Extended and revised version of the paper "One-Class Classification
Through Mutual Information Minimization" presented at the 2016 IEEE IJCNN,
Vancouver, Canada
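As a loose, hypothetical illustration of graph-based one-class partitioning (far simpler than the mutual-information-minimization criterion in the abstract above): build the Euclidean minimum spanning tree over the samples, cut its single longest edge, and flag the smaller of the two resulting components as outliers:

```python
from itertools import combinations
import math

def one_class_partition(points):
    """Toy one-class partition via a spanning graph.

    Kruskal's algorithm builds the Euclidean MST; cutting the longest
    MST edge splits the vertices into exactly two components, and the
    smaller one is flagged as the outlier class.  Returns the pair
    (inlier_indices, outlier_indices).
    """
    n = len(points)

    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Kruskal: collect the n-1 MST edges in increasing length order
    parent = list(range(n))
    mst = []
    for d, i, j in sorted((math.dist(points[i], points[j]), i, j)
                          for i, j in combinations(range(n), 2)):
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[ri] = rj
            mst.append((d, i, j))
    # cut the longest MST edge, then re-collect the two components
    parent = list(range(n))
    for d, i, j in mst[:-1]:
        parent[find(parent, i)] = find(parent, j)
    comps = {}
    for v in range(n):
        comps.setdefault(find(parent, v), []).append(v)
    small, large = sorted(comps.values(), key=len)
    return set(large), set(small)
```

On four tightly grouped points plus one distant sample, the longest MST edge is the one reaching the distant sample, so it alone is flagged; the real method additionally attaches a fuzzy confidence level derived from the vertex topology.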