44 research outputs found

    Sparse Representation of Brain Aging: Extracting Covariance Patterns from Structural MRI

    An enhanced understanding of how normal aging alters brain structure is urgently needed for the early diagnosis and treatment of age-related mental diseases. Structural magnetic resonance imaging (MRI) is a reliable technique for detecting age-related changes in the human brain. Multivariate pattern analysis (MVPA) currently enables the exploration of subtle and distributed changes in structural MRI data. In this study, a new MVPA approach based on sparse representation was employed to investigate the anatomical covariance patterns of normal aging. Two groups of participants (group 1: 290 participants; group 2: 56 participants) were evaluated, scanned on two 1.5 T MRI machines. In the first group, we obtained the discriminative patterns using a t-test filter followed by a sparse representation step. Using only a few voxels of the discriminative patterns, we distinguished the young cohort from the old with very high accuracy (group 1: 98.4%; group 2: 96.4%). The experimental results showed that the selected voxels can be categorized into two components corresponding to the two steps of the proposed method. The first component focuses on the precentral and postcentral gyri and the caudate nucleus, which play an important role in sensorimotor tasks; the strongest age-related volume reduction was observed in these clusters. The second component is mainly distributed over the cerebellum, thalamus, and right inferior frontal gyrus. These regions are critical nodes of both the sensorimotor and the cognitive circuitry, although their volume shows relative resilience against aging. Given the voxel selection procedure, we suggest that the aging of the sensorimotor and cognitive brain regions identified in this study has a covarying relationship.
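The t-test filter step of the voxel selection described above can be sketched as follows. Everything here is synthetic and purely illustrative (subject counts, voxel counts, and effect sizes are invented), and the paper's subsequent sparse representation step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel" data: 40 young vs 40 old subjects, 500 voxels.
# A handful of voxels carry a real group difference (hypothetical data).
n_young, n_old, n_vox = 40, 40, 500
young = rng.normal(0.0, 1.0, (n_young, n_vox))
old = rng.normal(0.0, 1.0, (n_old, n_vox))
old[:, :10] += 1.5  # simulated age-related volume change in 10 voxels

def t_filter(a, b, k):
    """Two-sample t statistic per voxel; keep the k most discriminative."""
    ma, mb = a.mean(0), b.mean(0)
    va, vb = a.var(0, ddof=1), b.var(0, ddof=1)
    t = (ma - mb) / np.sqrt(va / len(a) + vb / len(b))
    return np.argsort(-np.abs(t))[:k]

keep = t_filter(young, old, k=20)
print(sorted(int(v) for v in keep[:10]))
```

With a strong simulated effect, the filter recovers the informative voxels; the surviving voxels would then feed the sparse representation step.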

    Uniform pooling for graph networks

    Abstract The graph convolution network has received much attention because it extends convolution to non-Euclidean domains. However, graph pooling, which learns coarse graph embeddings to facilitate graph classification, has received less attention. Previous pooling methods assign a score to each node and pool only the highest-scoring nodes, which may discard whole neighbourhoods of nodes and the information they carry. Here, we propose UGPool, a novel pooling method with a new point of view on node selection. UGPool learns node scores from node features and pools nodes uniformly across the score space rather than taking only the top nodes, resulting in a uniformly coarsened graph. On multiple graph classification tasks, including protein graphs, biological graphs, and brain connectivity graphs, we demonstrate that UGPool outperforms other graph pooling methods while maintaining high efficiency. Moreover, UGPool can be integrated with multiple graph convolution networks to effectively improve performance compared to no pooling.
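The difference between top-k selection and uniform selection in score space can be illustrated with a minimal sketch (the scores and function names below are invented, not UGPool's actual implementation):

```python
import numpy as np

def top_k_pool(scores, k):
    """Baseline: keep only the k highest-scoring nodes (may drop whole regions)."""
    return np.sort(np.argsort(-scores)[:k])

def uniform_pool(scores, k):
    """UGPool-style idea (sketch): sort nodes by score, then sample them
    uniformly along the score axis so every score range stays represented."""
    order = np.argsort(-scores)
    idx = np.linspace(0, len(scores) - 1, k).round().astype(int)
    return np.sort(order[idx])

scores = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
print(top_k_pool(scores, 3))    # clusters at the high-score end
print(uniform_pool(scores, 3))  # spread across the whole score range
```

Top-k keeps only the three highest-scoring nodes, while the uniform variant retains one node from each region of the score distribution, so low-scoring neighbourhoods still contribute to the coarsened graph.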

    Transferable discriminative feature mining for unsupervised domain adaptation

    Abstract Unsupervised Domain Adaptation (UDA) aims to learn an effective model for an unlabeled target domain by leveraging knowledge from a labeled source domain with a related but different distribution. Many existing approaches ignore the underlying discriminative features of the target data and the discrepancy between conditional distributions. To address these two issues simultaneously, this paper presents a Transferable Discriminative Feature Mining (TDFM) approach for UDA, which naturally unifies the mining of domain-invariant discriminative features and the alignment of class-wise features in a single framework. Specifically, to obtain domain-invariant discriminative features, TDFM jointly learns a shared encoding representation for two tasks: supervised classification of labeled source data and discriminative clustering of unlabeled target data. It then conducts class-wise alignment by decreasing intra-class variation and increasing inter-class differences across domains, encouraging the emergence of transferable discriminative features. When combined, the two procedures are mutually beneficial. Comprehensive experiments verify that TDFM obtains remarkable margins over state-of-the-art domain adaptation methods.
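The class-wise alignment idea (pull same-class centroids of the two domains together, push different classes apart) can be sketched with a toy loss; the exact form and the weighting factor below are hypothetical, not TDFM's actual objective:

```python
import numpy as np

def class_alignment_loss(src, src_y, tgt, tgt_y):
    """Sketch: per-class centroid distance across domains (intra-class term)
    minus a class-separation term (inter-class); lower is better aligned."""
    classes = np.unique(src_y)
    cs = np.stack([src[src_y == c].mean(0) for c in classes])
    ct = np.stack([tgt[tgt_y == c].mean(0) for c in classes])
    intra = np.sum((cs - ct) ** 2)  # same class, across domains
    inter = sum(np.sum((cs[i] - cs[j]) ** 2)
                for i in range(len(classes))
                for j in range(i + 1, len(classes)))
    return intra - 0.1 * inter  # hypothetical trade-off weight

rng = np.random.default_rng(1)
src = rng.normal(0, 0.1, (20, 2)) + np.repeat([[0, 0], [5, 5]], 10, axis=0)
src_y = np.repeat([0, 1], 10)
tgt_close = src + 0.05  # target nearly aligned with source
tgt_far = src + 3.0     # target shifted away from source
print(class_alignment_loss(src, src_y, tgt_close, src_y))
print(class_alignment_loss(src, src_y, tgt_far, src_y))
```

A well-aligned target yields a much smaller loss than a shifted one; in the real method the target labels would be pseudo-labels from the clustering branch.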

    Coarse-to-fine pseudo supervision guided meta-task optimization for few-shot object classification

    Abstract Few-Shot Learning (FSL) is a challenging and practical learning paradigm that aims to solve a target task with only a few labeled examples. The field of FSL has made great progress, but largely in the supervised setting, where a large auxiliary labeled dataset is required for offline training. The unsupervised FSL (UFSL) problem, in which the auxiliary dataset is fully unlabeled, has seldom been investigated despite its significant value. This paper focuses on the more general and challenging UFSL problem and presents a novel method named Coarse-to-Fine Pseudo Supervision-guided Meta-Learning (C2FPS-ML) for unsupervised few-shot object classification. It first obtains prior knowledge from an unlabeled auxiliary dataset during unsupervised meta-training and then uses that knowledge to assist the downstream few-shot classification task. The coarse-to-fine pseudo supervisions in C2FPS-ML optimize the meta-task sampling process in the unsupervised meta-training stage, one of the dominant factors in the performance of meta-learning-based FSL algorithms. Humans can learn new concepts progressively or hierarchically, in a coarse-to-fine manner. Simulating this behaviour, we develop two versions of C2FPS-ML for two scenarios: natural object datasets and other kinds of datasets (e.g., handwritten character datasets). For natural object datasets, we exploit the potential hierarchical semantics of the unlabeled auxiliary dataset to build a tree-like structure of visual concepts. For the other scenario, progressive pseudo supervision is obtained by forming clusters under different aspects of similarity and is represented by a pyramid-like structure. The obtained structure serves as supervision for constructing meta-tasks in the meta-training stage, and prior knowledge from the unlabeled auxiliary dataset is learned from the coarse-grained level to the fine-grained level. The proposed method sets a new state of the art on the gold-standard miniImageNet and achieves remarkable results on Omniglot while simultaneously increasing efficiency.
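One way to obtain nested coarse-to-fine pseudo labels from unlabeled features is to cut a single clustering hierarchy at different granularities; the synthetic features and cluster counts below are illustrative only, not the paper's actual pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Unlabeled auxiliary features (synthetic): three tight groups in 2-D.
feats = np.concatenate([rng.normal(c, 0.1, (10, 2))
                        for c in ([0, 0], [5, 0], [0, 5])])

# Build one dendrogram; coarser cuts give broad pseudo-classes,
# finer cuts refine them, and the partitions are nested by construction.
tree = linkage(feats, method="ward")
coarse = fcluster(tree, t=2, criterion="maxclust")  # 2 pseudo-classes
fine = fcluster(tree, t=3, criterion="maxclust")    # 3 pseudo-classes
print(len(set(coarse)), len(set(fine)))
```

Meta-tasks can then be sampled first against the coarse labels and later against the fine ones, mimicking the progressive coarse-to-fine supervision described above.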

    Deep ladder-suppression network for unsupervised domain adaptation

    Abstract Unsupervised domain adaptation (UDA) aims to learn a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. Most existing approaches learn domain-invariant features by adapting the entire information content of the images. However, forcing adaptation of domain-specific variations undermines the effectiveness of the learned features. To address this problem, we propose a novel yet elegant module, the deep ladder-suppression network (DLSN), designed to better learn cross-domain shared content by suppressing domain-specific variations. The proposed DLSN is an autoencoder with lateral connections from the encoder to the decoder. By this design, the domain-specific details, which are necessary only for reconstructing the unlabeled target data, are fed directly to the decoder to complete the reconstruction task, relieving the later layers of the shared encoder from learning domain-specific variations. As a result, DLSN allows the shared encoder to focus on learning cross-domain shared content while ignoring domain-specific variations. Notably, DLSN can be used as a standard module integrated with various existing UDA frameworks to further boost performance. Without bells and whistles, extensive experimental results on four gold-standard domain adaptation datasets, namely 1) Digits, 2) Office31, 3) Office-Home, and 4) VisDA-C, demonstrate that the proposed DLSN consistently and significantly improves the performance of various popular UDA frameworks.
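The ladder-style lateral connection can be sketched with a tiny untrained autoencoder; layer sizes and weight shapes here are invented for illustration and do not reflect DLSN's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(x):
    return np.maximum(x, 0)

# Toy ladder autoencoder (random, untrained weights): the decoder receives a
# lateral copy of the early encoder activation, so fine detail need not be
# squeezed through the bottleneck code.
d, h, z = 16, 8, 4
W1, W2 = rng.normal(0, 0.3, (d, h)), rng.normal(0, 0.3, (h, z))
V2, V1 = rng.normal(0, 0.3, (z, h)), rng.normal(0, 0.3, (2 * h, d))

def forward(x):
    h1 = relu(x @ W1)      # early encoder layer (carries detail)
    code = relu(h1 @ W2)   # shared bottleneck (carries content)
    h2 = relu(code @ V2)   # decoder layer
    # lateral connection: concatenate skipped detail with decoded features
    return np.concatenate([h2, h1], axis=-1) @ V1

x = rng.normal(0, 1, (5, d))
print(forward(x).shape)
```

Because reconstruction detail reaches the decoder through `h1`, training pressure on the bottleneck to memorize domain-specific variation is reduced, which is the intuition behind the suppression effect described above.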

    Dynamic group convolution for accelerating convolutional neural networks

    Abstract Replacing normal convolutions with group convolutions can significantly increase the computational efficiency of modern deep convolutional networks, and this has been widely adopted in compact network architecture design. However, existing group convolutions undermine the original network structure by cutting off some connections permanently, resulting in significant accuracy degradation. In this paper, we propose dynamic group convolution (DGC), which adaptively selects which input channels to connect within each group for individual samples on the fly. Specifically, we equip each group with a small feature selector that automatically selects the most important input channels conditioned on the input image. Multiple groups can adaptively capture abundant and complementary visual/semantic features for each input image. DGC preserves the original network structure while retaining computational efficiency similar to conventional group convolution. Extensive experiments on multiple image classification benchmarks, including CIFAR-10, CIFAR-100, and ImageNet, demonstrate its superiority over existing group convolution techniques and dynamic execution methods. The code is available at https://github.com/zhuogege1943/dgc.
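The per-sample channel gating idea can be sketched as follows; the selector (global average pooling plus a linear map) and all shapes are simplifying assumptions, not DGC's exact design (see the linked repository for that):

```python
import numpy as np

def dynamic_group_select(x, selector_w, groups, keep):
    """Sketch of DGC-style gating: score input channels per sample with a
    tiny selector, then let each group keep its `keep` most salient channels."""
    n, c = x.shape[:2]
    pooled = x.reshape(n, c, -1).mean(-1)  # global average pool -> (n, c)
    scores = pooled @ selector_w           # hypothetical linear selector
    per = c // groups
    kept = []
    for g in range(groups):
        s = scores[:, g * per:(g + 1) * per]
        kept.append(np.argsort(-s, axis=1)[:, :keep] + g * per)
    return np.concatenate(kept, axis=1)    # per-sample kept channel indices

rng = np.random.default_rng(4)
x = rng.normal(0, 1, (2, 8, 4, 4))  # (batch, channels, H, W)
W = rng.normal(0, 1, (8, 8))        # hypothetical selector weights
idx = dynamic_group_select(x, W, groups=2, keep=2)
print(idx.shape)  # two kept channels per group, per sample
```

Each sample ends up with its own channel subset per group, so no connection is severed permanently, which is the key contrast with static group convolution.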

    Joint clustering and discriminative feature alignment for unsupervised domain adaptation

    Abstract Unsupervised Domain Adaptation (UDA) aims to learn a classifier for an unlabeled target domain by leveraging knowledge from a labeled source domain with a different but related distribution. Many existing approaches learn a domain-invariant representation space by directly matching the marginal distributions of the two domains. However, they neither explore the underlying discriminative features of the target data nor align the cross-domain discriminative features, which may lead to suboptimal performance. To tackle these two issues simultaneously, this paper presents a Joint Clustering and Discriminative Feature Alignment (JCDFA) approach for UDA, which naturally unifies the mining of discriminative features and the alignment of class-discriminative features in a single framework. Specifically, to mine the intrinsic discriminative information of the unlabeled target data, JCDFA jointly learns a shared encoding representation for two tasks: supervised classification of labeled source data and discriminative clustering of unlabeled target data, where classification on the source domain guides the clustering of the target domain toward the object categories. We then conduct cross-domain discriminative feature alignment by separately optimizing two new metrics: 1) an extended supervised contrastive learning, i.e., semi-supervised contrastive learning, and 2) an extended Maximum Mean Discrepancy (MMD), i.e., conditional MMD, explicitly minimizing the intra-class dispersion and maximizing the inter-class separation. When the two procedures, discriminative feature mining and alignment, are integrated into one framework, they tend to benefit each other, enhancing the final performance from a cooperative learning perspective. Experiments are conducted on four real-world benchmarks (Office-31, ImageCLEF-DA, Office-Home, and VisDA-C). All the results demonstrate that JCDFA obtains remarkable margins over state-of-the-art domain adaptation methods. Comprehensive ablation studies also verify the importance of each key component of the proposed algorithm and the effectiveness of combining the two learning strategies in one framework.
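A conditional MMD, i.e. an MMD averaged over per-class subsets of the two domains, can be sketched as below; the Gaussian kernel bandwidth and the synthetic data are illustrative assumptions:

```python
import numpy as np

def mmd(a, b, gamma=0.5):
    """Biased estimate of squared MMD with a Gaussian kernel."""
    def k(u, v):
        d = ((u[:, None, :] - v[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def conditional_mmd(src, src_y, tgt, tgt_y):
    """Sketch of conditional MMD: average per-class MMD between the domains."""
    classes = np.unique(src_y)
    return np.mean([mmd(src[src_y == c], tgt[tgt_y == c]) for c in classes])

rng = np.random.default_rng(5)
src = np.concatenate([rng.normal(0, .2, (15, 2)), rng.normal(4, .2, (15, 2))])
y = np.repeat([0, 1], 15)
tgt_same = src + rng.normal(0, .05, src.shape)  # classes match across domains
tgt_swap = np.concatenate([src[15:], src[:15]])  # classes mismatched
print(conditional_mmd(src, y, tgt_same, y))
print(conditional_mmd(src, y, tgt_swap, y))
```

Matching the class-conditional distributions drives the first value toward zero, while mismatched classes inflate it; in JCDFA the target labels would come from the clustering branch as pseudo-labels.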

    Informative class-conditioned feature alignment for unsupervised domain adaptation

    Abstract The goal of unsupervised domain adaptation (UDA) is to learn a task classifier that performs well on the unlabeled target domain by borrowing rich knowledge from a well-labeled source domain. Although remarkable breakthroughs have been achieved in learning transferable representations across domains, two bottlenecks remain. First, many existing approaches focus primarily on adapting the entire image, ignoring that not all features are transferable and informative for the object classification task. Second, the features of the two domains are typically aligned without considering class labels, which can make the resulting representations domain-invariant but non-discriminative with respect to category. To overcome these issues, we present a novel Informative Class-Conditioned Feature Alignment (IC2FA) approach for UDA, a twofold method combining informative feature disentanglement and class-conditioned feature alignment, designed to address the two challenges, respectively. More specifically, to surmount the first drawback, we cooperatively disentangle the two domains to obtain informative transferable features, employing a Variational Information Bottleneck (VIB) to encourage the learning of task-related semantic representations and suppress task-unrelated information. With regard to the second bottleneck, we optimize a new metric, termed the Conditional Sliced Wasserstein Distance (CSWD), which explicitly estimates the intra-class discrepancy and the inter-class margin. The intra-class and inter-class CSWDs are minimized and maximized, respectively, to yield domain-invariant discriminative features. IC2FA equips class-conditioned feature alignment with informative feature disentanglement so that the two procedures work cooperatively, facilitating the adaptation of informative discriminative features. Extensive experimental results on three domain adaptation datasets confirm the superiority of IC2FA.
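The sliced Wasserstein distance underlying CSWD can be sketched with a Monte-Carlo estimate; the projection count and the equal-sample-size simplification are assumptions of this sketch, not of the paper:

```python
import numpy as np

def sliced_wasserstein(a, b, n_proj=64, rng=None):
    """Monte-Carlo sliced Wasserstein-2 distance: project both samples onto
    random unit directions and average the 1-D transport costs. Assumes a and
    b have the same number of samples (sorted values pair up directly)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = a.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)  # random direction on the sphere
        pa, pb = np.sort(a @ theta), np.sort(b @ theta)
        total += np.mean((pa - pb) ** 2)  # 1-D W2^2 via sorted pairing
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(6)
a = rng.normal(0, 1, (50, 3))
print(sliced_wasserstein(a, a))        # identical samples: zero distance
print(sliced_wasserstein(a, a + 5))    # shifted samples: large distance
```

The conditional variant would evaluate this distance per class, minimizing it within classes and maximizing it between classes, as described in the abstract.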