Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs
Laplacian mixture models identify overlapping regions of influence in
unlabeled graph and network data in a scalable and computationally efficient
way, yielding useful low-dimensional representations. By combining Laplacian
eigenspace and finite mixture modeling methods, they provide probabilistic or
fuzzy dimensionality reductions or domain decompositions for a variety of input
data types, including mixture distributions, feature vectors, and graphs or
networks. Provable optimal recovery using the algorithm is analytically shown
for a nontrivial class of cluster graphs. Heuristic approximations for scalable
high-performance implementations are described and empirically tested.
Connections to PageRank and community detection in network analysis demonstrate
the wide applicability of this approach. The origins of fuzzy spectral methods,
beginning with generalized heat or diffusion equations in physics, are reviewed
and summarized. Comparisons to other dimensionality reduction and clustering
methods for challenging unsupervised machine learning problems are also
discussed.
Comment: 13 figures, 35 references
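The pipeline the abstract describes, embedding nodes in a Laplacian eigenspace and then fitting a soft mixture for fuzzy memberships, can be sketched as follows. This is an illustrative approximation only: the function name, the use of fuzzy c-means as the mixture step, and the farthest-point seeding are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def laplacian_fuzzy_clusters(adjacency, k, n_iter=50, m=2.0):
    """Hypothetical sketch of a Laplacian-eigenspace + fuzzy-mixture
    pipeline. Returns an (n_nodes, k) matrix of soft memberships."""
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Eigenvectors of the smallest eigenvalues give a low-dim embedding
    _, vecs = np.linalg.eigh(L)
    X = vecs[:, :k]
    # Deterministic farthest-point seeding of k cluster centers
    C = X[[0]]
    for _ in range(1, k):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).min(axis=1)
        C = np.vstack([C, X[d2.argmax()]])
    # Fuzzy c-means iterations: soft memberships U, weighted centers C
    for _ in range(n_iter):
        D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        P = D ** (-2.0 / (m - 1.0))
        U = P / P.sum(axis=1, keepdims=True)  # each row sums to 1
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]
    return U
```

On a graph of two cliques joined by a single bridge edge, the rows of `U` give probabilistic memberships rather than hard labels, which is the "fuzzy domain decomposition" the abstract refers to.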
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems from computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to provide an overview of different
examples of geometric deep learning problems and to present available
solutions, key difficulties, applications, and future research directions in
this nascent field.
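One widely used instance of generalizing convolutions to graphs is the graph convolutional layer of Kipf and Welling, which propagates node features through a degree-normalized adjacency matrix. The NumPy sketch below is illustrative of that construction in general, not of any single method surveyed here.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W), where
    A_hat = D^{-1/2} (A + I) D^{-1/2} (Kipf-Welling normalization).
    A: (n, n) adjacency, H: (n, f_in) features, W: (f_in, f_out) weights."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(axis=1)                      # degrees (>= 1, so no /0)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation
```

Each output row mixes a node's own features with those of its neighbors, which is how such models exploit non-Euclidean graph structure in place of the fixed grid neighborhoods of ordinary convolutions.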
Correlations between synapses in pairs of neurons slow down dynamics in randomly connected neural networks
Networks of randomly connected neurons are among the most popular models in
theoretical neuroscience. The connectivity between cortical neurons is,
however, not fully random; the simplest and most prominent deviation from
randomness found in experimental data is the overrepresentation of
bidirectional connections among pyramidal cells. Using numerical and analytical
methods, we investigated the effects of partially symmetric connectivity on
dynamics in networks of rate units. We considered the two dynamical regimes
exhibited by random neural networks: the weak-coupling regime, where the firing
activity decays to a single fixed point unless the network is stimulated, and
the strong-coupling or chaotic regime, characterized by internally generated
fluctuating firing rates. In the weak-coupling regime, we analytically
computed the autocorrelation of network activity in the presence of external
noise for an arbitrary degree of symmetry. In the chaotic regime, we performed simulations to
determine the timescale of the intrinsic fluctuations. In both cases, symmetry
increases the characteristic asymptotic decay time of the autocorrelation
function and therefore slows down the dynamics in the network.
Comment: 17 pages, 7 figures
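The kind of simulation the abstract describes can be sketched as a standard rate network, tau dx/dt = -x + J tanh(x), with a partially symmetric Gaussian coupling matrix. The parametrization of the symmetry degree eta below and both function names are illustrative assumptions; the paper's conventions may differ.

```python
import numpy as np

def simulate_rate_network(n=200, g=2.0, eta=0.5, T=200.0, dt=0.1, seed=1):
    """Euler-integrate tau*dx/dt = -x + J @ tanh(x) (tau = 1) for a
    partially symmetric random coupling matrix (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, n))
    # eta interpolates from fully asymmetric (0) to symmetric (1);
    # the 1/sqrt(1 + eta^2) factor keeps the coupling variance fixed.
    J = g / np.sqrt(n) * (Z + eta * Z.T) / np.sqrt(1.0 + eta ** 2)
    np.fill_diagonal(J, 0.0)
    x = rng.standard_normal(n)
    steps = int(round(T / dt))
    traj = np.empty((steps, n))
    for t in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))
        traj[t] = x
    return traj

def mean_autocorrelation(traj, max_lag=200):
    """Population-averaged activity autocorrelation, normalized to 1 at lag 0."""
    x = traj - traj.mean(axis=0)
    ac = np.array([(x[: len(x) - lag] * x[lag:]).mean()
                   for lag in range(max_lag)])
    return ac / ac[0]
```

With g above the chaotic transition, the intrinsic fluctuation timescale can be read off from how slowly `mean_autocorrelation` decays; comparing runs at different eta is how one would probe the slowing-down effect the abstract reports.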