Environment-Aware Dynamic Graph Learning for Out-of-Distribution Generalization
Dynamic graph neural networks (DGNNs) are increasingly used to exploit
spatio-temporal patterns on dynamic graphs. However, existing works
fail to generalize under distribution shifts, which are common in real-world
scenarios. As the generation of dynamic graphs is heavily influenced by latent
environments, investigating their impacts on the out-of-distribution (OOD)
generalization is critical. However, this direction remains unexplored and
poses two major challenges: (1) How to properly model and infer the complex
environments on dynamic graphs with distribution shifts? (2) How to discover
invariant patterns given inferred spatio-temporal environments? To solve these
challenges, we propose a novel Environment-Aware dynamic Graph LEarning (EAGLE)
framework for OOD generalization by modeling complex coupled environments and
exploiting spatio-temporal invariant patterns. Specifically, we first design
the environment-aware EA-DGNN to model environments by multi-channel
environments disentangling. Then, we propose an environment instantiation
mechanism for environment diversification with inferred distributions. Finally,
we discriminate spatio-temporal invariant patterns for out-of-distribution
prediction by the invariant pattern recognition mechanism and perform
fine-grained, node-wise causal interventions with a mixture of instantiated
environment samples. Experiments on real-world and synthetic dynamic graph
datasets demonstrate the superiority of our method against state-of-the-art
baselines under distribution shifts. To the best of our knowledge, we are the
first to study OOD generalization on dynamic graphs from the environment
learning perspective.
Comment: Accepted by the 37th Conference on Neural Information Processing
Systems (NeurIPS 2023).
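The environment instantiation and node-wise intervention steps described above can be illustrated with a minimal NumPy sketch. The per-node Gaussian environment distributions, channel count, and Dirichlet mixing scheme below are illustrative assumptions for exposition, not EAGLE's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: K latent environment channels per node,
# each summarized by an inferred Gaussian (mu, sigma).
num_nodes, K, dim = 4, 3, 8
mu = rng.normal(size=(num_nodes, K, dim))
sigma = np.abs(rng.normal(size=(num_nodes, K, dim)))

def instantiate_environments(mu, sigma, rng):
    """Sample one concrete environment per node and channel
    from the inferred distributions (reparameterization-style)."""
    eps = rng.normal(size=mu.shape)
    return mu + sigma * eps

def node_wise_intervention(env_samples, rng):
    """Toy causal intervention: replace each node's environment with a
    random convex mixture of all nodes' instantiated environments."""
    n = env_samples.shape[0]
    weights = rng.dirichlet(np.ones(n), size=n)  # (n, n) mixing weights
    return np.einsum('ij,jkd->ikd', weights, env_samples)

envs = instantiate_environments(mu, sigma, rng)
mixed = node_wise_intervention(envs, rng)
print(mixed.shape)  # (4, 3, 8)
```

A model whose predictions are stable under such mixtures of environment samples is, by construction, relying on environment-invariant patterns.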
Generating Infinite-Resolution Texture using GANs with Patch-by-Patch Paradigm
In this paper, we introduce a novel approach for generating texture images of
infinite resolutions using Generative Adversarial Networks (GANs) based on a
patch-by-patch paradigm. Existing texture synthesis techniques often rely on
generating a large-scale texture in a single forward pass through the
generator, which limits the scalability and flexibility of the generated
images. In
contrast, the proposed approach trains GAN models on a single texture image to
generate relatively small patches that are locally correlated and can be
seamlessly concatenated to form a larger image while using a constant GPU
memory footprint. Our method learns the local texture structure and is able to
generate arbitrary-size textures, while also maintaining coherence and
diversity. The proposed method relies on local padding in the generator to
ensure consistency between patches and utilizes spatial stochastic modulation
to allow for local variations and diversity within the large-scale image.
Experimental results demonstrate superior scalability compared to existing
approaches while maintaining the visual coherence of the generated textures.
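The constant-memory, patch-by-patch synthesis loop can be sketched as follows. The random-noise "generator", patch size, and border-copying scheme are stand-in assumptions for the paper's trained GAN and its local-padding mechanism; only one patch is held in memory per step regardless of output size.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH, OVERLAP = 16, 4  # hypothetical sizes; the paper's scheme differs

def generate_patch(left_ctx, top_ctx, rng):
    """Stand-in for a GAN generator call: produce one patch conditioned
    on the overlapping borders of already-generated neighbours."""
    patch = rng.random((PATCH, PATCH))
    if left_ctx is not None:
        patch[:, :OVERLAP] = left_ctx  # reuse left border for continuity
    if top_ctx is not None:
        patch[:OVERLAP, :] = top_ctx   # reuse top border
    return patch

def synthesize(rows, cols, rng):
    """Stitch locally correlated patches into an arbitrarily large canvas."""
    step = PATCH - OVERLAP
    canvas = np.zeros((step * rows + OVERLAP, step * cols + OVERLAP))
    for r in range(rows):
        for c in range(cols):
            y, x = r * step, c * step
            left = canvas[y:y + PATCH, x:x + OVERLAP] if c else None
            top = canvas[y:y + OVERLAP, x:x + PATCH] if r else None
            canvas[y:y + PATCH, x:x + PATCH] = generate_patch(left, top, rng)
    return canvas

texture = synthesize(3, 5, rng)
print(texture.shape)  # (40, 64)
```

Because `rows` and `cols` are unbounded while the per-step footprint stays one patch, the same loop scales to arbitrarily large (in the limit, "infinite-resolution") outputs.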
Interpretable machine learning of amino acid patterns in proteins: a statistical ensemble approach
Explainable and interpretable unsupervised machine learning helps understand
the underlying structure of data. We introduce an ensemble analysis of machine
learning models to consolidate their interpretation. Its application shows that
restricted Boltzmann machines consistently compress the information stored in
a sequence of five amino acids at the start or end of α-helices or β-sheets
into a few bits. The weights learned by the machines reveal unexpected
properties of the amino acids and the secondary structure of proteins: (i)
His and Thr have a negligible contribution to the amphiphilic pattern of
α-helices; (ii) there is a class of α-helices particularly rich in Ala at
their end; (iii) Pro most often occupies slots otherwise occupied by polar or
charged amino acids, and its presence at the start of helices is relevant;
(iv) Glu and especially Asp on one side, and Val, Leu, Ile, and Phe on the
other, display the strongest tendency to mark amphiphilic patterns, i.e.,
extreme values of an "effective hydrophobicity", though they are not the most
strongly (non-)hydrophobic amino acids.
Comment: 15 pages, 9 figures
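A minimal sketch of how an RBM compresses a five-residue window into a few hidden "bits": the one-hot residue encoding is standard, but the hidden-unit count and the random weights below are illustrative stand-ins for the paper's fitted machines.

```python
import numpy as np

rng = np.random.default_rng(0)

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
N_HIDDEN = 3  # "a few bits": hypothetical hidden-unit count

def one_hot(window):
    """Encode a 5-residue window as a 5x20 one-hot matrix, flattened."""
    x = np.zeros((len(window), len(AMINO_ACIDS)))
    for i, aa in enumerate(window):
        x[i, AMINO_ACIDS.index(aa)] = 1.0
    return x.ravel()

# Random weights stand in for a trained RBM.
W = rng.normal(scale=0.1, size=(N_HIDDEN, 5 * 20))
b = np.zeros(N_HIDDEN)

def hidden_code(window):
    """P(h_j = 1 | v): the compressed representation of the window."""
    v = one_hot(window)
    return 1.0 / (1.0 + np.exp(-(W @ v + b)))  # sigmoid activation

code = hidden_code("ADKLV")
print(code.shape)  # (3,)
```

Inspecting the rows of a trained `W` is what reveals which residues drive each hidden bit, e.g. the amphiphilic pattern discussed above.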
Spontaneous activity patterns in human motor cortex replay evoked activity patterns for hand movements
Spontaneous brain activity, measured with resting state fMRI (R-fMRI), is correlated among regions that are co-activated by behavioral tasks. It is unclear, however, whether spatial patterns of spontaneous activity within a cortical region correspond to spatial patterns of activity evoked by specific stimuli, actions, or mental states. The current study investigated the hypothesis that spontaneous activity in motor cortex represents motor patterns commonly occurring in daily life. To test this hypothesis, 15 healthy participants were scanned while performing four different hand movements. Three movements (Grip, Extend, Pinch) were ecological, involving common grip and grasp actions; one control movement (Shake), a rotation of the wrist, was non-ecological and infrequent. Participants were also scanned at rest before and after the execution of the motor tasks (resting-state scans). Using the task data, we identified movement-specific patterns in the primary motor cortex. These task-defined patterns were compared to resting-state patterns in the same motor region. We also performed a control analysis within the primary visual cortex. We found that spontaneous activity patterns in the primary motor cortex were more similar to task patterns for ecological than control movements. In contrast, there was no difference between ecological and control hand movements in the primary visual area. These findings provide evidence that spontaneous activity in human motor cortex forms fine-scale, patterned representations associated with behaviors that frequently occur in daily life.
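The core comparison, correlating resting-state spatial patterns with task-defined patterns, can be sketched on synthetic data. The voxel count and the construction of the resting frame below are illustrative assumptions, not the study's actual data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200  # hypothetical voxel count within the motor ROI

# Toy task-evoked spatial patterns for the four movements.
task = {m: rng.normal(size=n_voxels)
        for m in ["Grip", "Extend", "Pinch", "Shake"]}

# Toy resting-state frame built to resemble the ecological patterns.
rest_frame = (0.6 * task["Grip"] + 0.2 * task["Extend"]
              + rng.normal(scale=1.0, size=n_voxels))

def pattern_similarity(a, b):
    """Pearson correlation between two spatial activity patterns."""
    return float(np.corrcoef(a, b)[0, 1])

for name, pattern in task.items():
    print(name, round(pattern_similarity(rest_frame, pattern), 3))
```

The study's logic corresponds to comparing such similarity scores between ecological movements (Grip, Extend, Pinch) and the control movement (Shake), in motor versus visual cortex.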
Regularization, early-stopping and dreaming: a Hopfield-like setup to address generalization and overfitting
In this work we approach attractor neural networks from a machine learning
perspective: we look for optimal network parameters by applying a gradient
descent over a regularized loss function. Within this framework, the optimal
neuron-interaction matrices turn out to be a class of matrices which correspond
to Hebbian kernels revised by a reiterated unlearning protocol. Remarkably, the
extent of such unlearning is proved to be related to the regularization
hyperparameter of the loss function and to the training time. Thus, we can
design strategies to avoid overfitting that are formulated in terms of
regularization and early-stopping tuning. The generalization capabilities of
these attractor networks are also investigated: analytical results are obtained
for random synthetic datasets; next, the emerging picture is corroborated by
numerical experiments that highlight the existence of several regimes (i.e.,
overfitting, failure and success) as the dataset parameters are varied.
Comment: 29 pages, 10 figures, 4 appendices
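The connection between Hebbian learning and reiterated unlearning can be sketched with the closed-form "dreaming" kernel used in this line of work. Treating the parameter t as a proxy for training time / inverse regularization strength, as the abstract suggests, is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 5  # neurons, stored patterns (toy sizes)
xi = rng.choice([-1.0, 1.0], size=(P, N))  # random binary patterns

J_hebb = xi.T @ xi / N  # classical Hebbian coupling matrix

def dreamed_kernel(J, t):
    """Hebbian kernel revised by 'dreaming' (reiterated unlearning);
    larger t means more unlearning, which the paper relates to the
    regularization hyperparameter and the training time."""
    I = np.eye(J.shape[0])
    return (1.0 + t) * J @ np.linalg.inv(I + t * J)

J0 = dreamed_kernel(J_hebb, 0.0)   # no unlearning: recovers Hebb
Jd = dreamed_kernel(J_hebb, 10.0)  # heavy unlearning
print(np.allclose(J0, J_hebb))     # True
```

In this picture, choosing t (via early stopping or the regularization hyperparameter) interpolates between the overfitting-prone Hebbian kernel and a heavily unlearned one.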
Hierarchical Consistent Contrastive Learning for Skeleton-Based Action Recognition with Growing Augmentations
Contrastive learning has been proven beneficial for self-supervised
skeleton-based action recognition. Most contrastive learning methods utilize
carefully designed augmentations to generate different movement patterns of
skeletons for the same semantics. However, applying strong augmentations
remains an open issue, as they distort the images'/skeletons' structures,
cause semantic loss, and make training unstable. In this paper, we
investigate the potential of adopting strong augmentations and propose a
general hierarchical consistent contrastive learning framework (HiCLR) for
skeleton-based action recognition. Specifically, we first design a gradual
growing augmentation policy to generate multiple ordered positive pairs, which
guide the model toward consistent representations of the same semantics
across different views. Then, an asymmetric loss is proposed to enforce the hierarchical
consistency via a directional clustering operation in the feature space,
pulling the representations from strongly augmented views closer to those from
weakly augmented views for better generalizability. Meanwhile, we propose and
evaluate three kinds of strong augmentations for 3D skeletons to demonstrate
the effectiveness of our method. Extensive experiments show that HiCLR
outperforms the state-of-the-art methods notably on three large-scale datasets,
i.e., NTU60, NTU120, and PKU-MMD.
Comment: Accepted by AAAI 2023. Project page:
https://jhang2020.github.io/Projects/HiCLR/HiCLR.htm
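The directional pull of strongly augmented views toward weakly augmented ones can be sketched as a negative-cosine loss with a stop-gradient target. This is a simplified stand-in illustrating the asymmetry, not HiCLR's exact loss.

```python
import numpy as np

def normalize(z):
    """Project embeddings onto the unit hypersphere."""
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def directional_loss(z_strong, z_weak):
    """Asymmetric loss sketch: pull the strong view's embedding toward
    the weak view's embedding, treated as a fixed (stop-gradient)
    target; in an autograd framework z_weak would be detached here.
    Returns the mean negative cosine similarity."""
    target = normalize(z_weak)
    pred = normalize(z_strong)
    return float(-(pred * target).sum(axis=-1).mean())

rng = np.random.default_rng(0)
z_weak = rng.normal(size=(8, 128))                   # batch of 8 embeddings
z_strong = z_weak + 0.1 * rng.normal(size=(8, 128))  # nearby strong view
print(round(directional_loss(z_strong, z_weak), 3))
```

Because gradients flow only through the strong branch, the strongly augmented representations are clustered toward the weakly augmented ones, never the reverse, which is what keeps training stable under strong augmentations.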
Deep Learning Techniques for Music Generation -- A Survey
This paper is a survey and an analysis of different ways of using deep
learning (deep artificial neural networks) to generate musical content. We
propose a methodology based on five dimensions for our analysis:
- Objective: What musical content is to be generated? Examples are: melody,
  polyphony, accompaniment or counterpoint. For what destination and for what
  use? To be performed by a human (in the case of a musical score) or by a
  machine (in the case of an audio file).
- Representation: What are the concepts to be manipulated? Examples are:
  waveform, spectrogram, note, chord, meter and beat. What format is to be
  used? Examples are: MIDI, piano roll or text. How will the representation
  be encoded? Examples are: scalar, one-hot or many-hot.
- Architecture: What type(s) of deep neural network is (are) to be used?
  Examples are: feedforward network, recurrent network, autoencoder or
  generative adversarial networks.
- Challenge: What are the limitations and open challenges? Examples are:
  variability, interactivity and creativity.
- Strategy: How do we model and control the process of generation? Examples
  are: single-step feedforward, iterative feedforward, sampling or input
  manipulation.
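The one-hot versus many-hot encoding choices named under the Representation dimension can be sketched as follows; the pitch range and time-step layout are illustrative assumptions.

```python
import numpy as np

# Hypothetical encoding: MIDI pitches 60-71 (one octave), one column
# per pitch, one row per time step (a piano-roll layout).
PITCH_RANGE = range(60, 72)

def melody_to_one_hot(melody):
    """One-hot encoding: each time step activates exactly one pitch."""
    roll = np.zeros((len(melody), len(PITCH_RANGE)))
    for t, pitch in enumerate(melody):
        roll[t, pitch - PITCH_RANGE.start] = 1.0
    return roll

def chords_to_many_hot(chords):
    """Many-hot encoding: each time step may activate several pitches,
    as needed for polyphony."""
    roll = np.zeros((len(chords), len(PITCH_RANGE)))
    for t, chord in enumerate(chords):
        for pitch in chord:
            roll[t, pitch - PITCH_RANGE.start] = 1.0
    return roll

melody = [60, 62, 64, 65]              # C D E F
chords = [[60, 64, 67], [62, 65, 69]]  # C major and D minor triads
print(melody_to_one_hot(melody).sum(axis=1))   # one active pitch per step
print(chords_to_many_hot(chords).sum(axis=1))  # three active pitches per step
```

The choice matters downstream: one-hot rows pair naturally with a softmax output layer, while many-hot rows require independent sigmoid outputs.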
For each dimension, we conduct a comparative analysis of various models and
techniques, and we propose a tentative multidimensional typology. This
typology is bottom-up, based on the analysis of many existing deep-learning
based systems for music generation selected from the relevant literature. These
systems are described and are used to exemplify the various choices of
objective, representation, architecture, challenge and strategy. The last
section includes some discussion and some prospects.
Comment: 209 pages. This paper is a simplified version of the book: J.-P.
Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music
Generation, Computational Synthesis and Creative Systems, Springer, 201
Unveiling the intrinsic dynamics of biological and artificial neural networks: from criticality to optimal representations
Deciphering the underpinnings of the dynamical processes leading to
information transmission, processing, and storing in the brain is a crucial
challenge in neuroscience. An inspiring but speculative theoretical idea is
that such dynamics should operate at the brink of a phase transition, i.e., at
the edge between different collective phases, to entail a rich dynamical
repertoire and optimize functional capabilities. In recent years, research
guided by the advent of high-throughput data and new theoretical developments
has contributed to a quantitative validation of this hypothesis. Here
we review recent advances in this field, stressing our contributions. In
particular, we use data from thousands of individually recorded neurons in the
mouse brain and tools such as a phenomenological renormalization group
analysis, theory of disordered systems, and random matrix theory. These
combined approaches provide novel evidence of quasi-universal scaling and
near-critical behavior emerging in different brain regions. Moreover, we design
artificial neural networks under the reservoir-computing paradigm and show that
their internal dynamical states become near critical when we tune the networks
for optimal performance. These results open new perspectives not only for
understanding the ultimate principles guiding brain function but also for the
development of brain-inspired, neuromorphic computation.
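Tuning a reservoir toward the stability edge, as in the reservoir-computing experiments mentioned above, can be sketched by rescaling a random coupling matrix to a target spectral radius; the radius value and reservoir size here are illustrative, not the review's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # reservoir size (toy)

def make_reservoir(n, rho, rng):
    """Random reservoir coupling matrix rescaled to spectral radius rho.
    rho near 1 places the autonomous linearized dynamics close to the
    stability edge, the near-critical regime the review associates
    with optimal performance."""
    W = rng.normal(size=(n, n)) / np.sqrt(n)
    radius = float(np.max(np.abs(np.linalg.eigvals(W))))
    return W * (rho / radius)

W = make_reservoir(N, 0.95, rng)
radius = float(np.max(np.abs(np.linalg.eigvals(W))))
print(round(radius, 3))  # 0.95
```

Sweeping `rho` across 1.0 and measuring task performance is the standard way to probe whether the best-performing reservoirs sit near the critical point.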