Ischemic axonal injury up-regulates MARK4 in cortical neurons and primes tau phosphorylation and aggregation.
Ischemic injury to white matter tracts is increasingly recognized to play a key role in age-related cognitive decline, vascular dementia, and Alzheimer's disease. Knowledge of the effects of ischemic axonal injury on cortical neurons is limited yet critical to identifying molecular pathways that link neurodegeneration and ischemia. Using a mouse model of subcortical white matter ischemic injury coupled with retrograde neuronal tracing, we employed magnetic affinity cell sorting with fluorescence-activated cell sorting to capture layer-specific cortical neurons and performed RNA-sequencing. With this approach, we identified a role for microtubule reorganization within stroke-injured neurons acting through the regulation of tau. We find that subcortical stroke-injured Layer 5 cortical neurons up-regulate the microtubule affinity-regulating kinase, Mark4, in response to axonal injury. Stroke-induced up-regulation of Mark4 is associated with selective remodeling of the apical dendrite after stroke and the phosphorylation of tau in vivo. In a cell-based tau biosensor assay, Mark4 promotes the aggregation of human tau in vitro. Increased expression of Mark4 after ischemic axonal injury in deep layer cortical neurons provides new evidence for synergism between axonal and neurodegenerative pathologies by priming of tau phosphorylation and aggregation.
Internal Cross-layer Gradients for Extending Homogeneity to Heterogeneity in Federated Learning
Federated learning (FL) inevitably confronts the challenge of system
heterogeneity in practical scenarios. We propose a training scheme that
extends most model-homogeneous FL methods to cope with this
challenge.
In this paper, we commence our study with a detailed exploration of homogeneous
and heterogeneous FL settings and discover three key observations: (1) a
positive correlation between client performance and layer similarities, (2)
higher similarities in the shallow layers in contrast to the deep layers, and
(3) smoother gradient distributions indicate higher layer
similarities. Building upon these observations, we propose InCo Aggregation
that leverages internal cross-layer gradients, a mixture of gradients from
shallow and deep layers within a server model, to augment the similarity in the
deep layers without requiring additional communication between clients.
Furthermore, our methods can be tailored to accommodate model-homogeneous FL
methods such as FedAvg, FedProx, FedNova, Scaffold, and MOON, to expand their
capabilities to handle system heterogeneity. Extensive experimental results
validate the effectiveness of InCo Aggregation, spotlighting internal
cross-layer gradients as a promising avenue to enhance the performance in
heterogeneous FL.
Comment: Preprint. Under review
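The abstract does not spell out InCo Aggregation's exact mixing rule, so the following is only a minimal toy sketch of the idea of blending shallow-layer gradients into deep-layer gradients on the server side. The function `inco_mix` and the blend coefficient `alpha` are hypothetical names, and equal-shaped per-layer gradients are assumed so they can be combined directly:

```python
import numpy as np

def inco_mix(grads, alpha=0.5):
    """Toy sketch of internal cross-layer gradient mixing (hypothetical).

    grads: list of per-layer gradient arrays, ordered shallow -> deep,
    assumed here to share one shape so they can be blended directly.
    Each deeper layer's gradient is mixed with the shallowest layer's
    gradient, nudging deep-layer updates toward the (empirically more
    similar) shallow-layer direction, with no extra client communication.
    """
    shallow = grads[0]
    mixed = [shallow]
    for g in grads[1:]:
        mixed.append(alpha * g + (1.0 - alpha) * shallow)
    return mixed
```

Under these assumptions, `alpha = 1.0` recovers the original deep-layer gradients, while smaller values pull them toward the shallow layer's direction.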
Set Aggregation Network as a Trainable Pooling Layer
Global pooling, such as max- or sum-pooling, is one of the key ingredients in
deep neural networks used for processing images, texts, graphs and other types
of structured data. Based on the recent DeepSets architecture proposed by
Zaheer et al. (NIPS 2017), we introduce a Set Aggregation Network (SAN) as an
alternative global pooling layer. In contrast to typical pooling operators, SAN
can embed a given set of features into a vector representation of arbitrary
size. We show that by adjusting the embedding size, SAN is capable of
preserving all the information from the input. In experiments, we demonstrate
that replacing the global pooling layer with SAN improves
classification accuracy. Moreover, SAN is less prone to overfitting and can be
used as a regularizer.
Comment: ICONIP 201
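The abstract does not give SAN's exact parameterization, so here is only a minimal NumPy sketch of the underlying DeepSets-style idea: a shared per-element embedding followed by a permutation-invariant sum, producing a fixed-size vector whose dimensionality is chosen freely. The function name `san_pool` and the parameters `W`, `b` are illustrative assumptions, not the paper's API:

```python
import numpy as np

def san_pool(X, W, b):
    """Sketch of set-aggregation pooling (hypothetical parameterization).

    X: (n, d) array holding a set of n feature vectors.
    Each element is mapped by a shared affine transform + ReLU, then the
    embeddings are summed, yielding a vector of size k = W.shape[1]
    regardless of the set's cardinality n.
    """
    H = np.maximum(X @ W + b, 0.0)  # per-element embedding, shape (n, k)
    return H.sum(axis=0)            # permutation-invariant sum, shape (k,)
```

Because the aggregation is a sum over elements, the output is invariant to reordering the set, and the embedding size `k` controls how much information about the input set can be preserved.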
Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks
Information fusion is an essential part of numerous engineering systems and
biological functions, e.g., human cognition. Fusion occurs at many levels,
ranging from the low-level combination of signals to the high-level aggregation
of heterogeneous decision-making processes. While the last decade has witnessed
an explosion of research in deep learning, fusion in neural networks has not
observed the same revolution. Specifically, most neural fusion approaches are
ad hoc, are not well understood, are distributed versus localized, and/or
explainability is low (if present at all). Herein, we prove that the fuzzy
Choquet integral (ChI), a powerful nonlinear aggregation function, can be
represented as a multi-layer network, referred to hereafter as ChIMP. We also
put forth an improved ChIMP (iChIMP) that leads to a stochastic gradient
descent-based optimization in light of the exponential number of ChI inequality
constraints. An additional benefit of ChIMP/iChIMP is that it enables
eXplainable AI (XAI). Synthetic validation experiments are provided and iChIMP
is applied to the fusion of a set of heterogeneous architecture deep models in
remote sensing. We show an improvement in model accuracy and our previously
established XAI indices shed light on the quality of our data, model, and its
decisions.
Comment: IEEE Transactions on Fuzzy Systems
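The discrete Choquet integral that ChIMP represents as a network is a standard nonlinear aggregation operator. A small Python sketch of the integral itself (the fuzzy measure `g` below is a hypothetical toy example, not a measure learned by ChIMP/iChIMP):

```python
def choquet_integral(x, g):
    """Discrete Choquet integral of inputs x w.r.t. a fuzzy measure g.

    g maps frozensets of input indices to [0, 1], with g(empty) = 0,
    g(all indices) = 1, and monotonicity: A subset of B implies
    g(A) <= g(B). These are the constraints iChIMP must respect.
    """
    n = len(x)
    # Visit inputs in descending order of value.
    order = sorted(range(n), key=lambda i: x[i], reverse=True)
    total, prev_g, subset = 0.0, 0.0, set()
    for i in order:
        subset.add(i)
        g_now = g[frozenset(subset)]
        total += x[i] * (g_now - prev_g)  # weight by the measure increment
        prev_g = g_now
    return total
```

Choosing `g` additive recovers a weighted mean, while setting `g(A) = 1` for every nonempty subset recovers the maximum, which is why the ChI subsumes many common fusion operators.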