
    Multi-modal feature selection with self-expression topological manifold for end-stage renal disease associated with mild cognitive impairment

    Selecting discriminative brain regions from multi-modal neuroimages is an effective way to reveal the neuropathological mechanism of end-stage renal disease associated with mild cognitive impairment (ESRDaMCI). Existing multi-modal feature selection methods usually rely on Euclidean distance to measure similarity between data points, which ignores the underlying data manifold. To address this issue, a self-expression topological manifold based multi-modal feature selection method (SETMFS) is proposed. First, a dynamic brain functional network is constructed from functional magnetic resonance imaging (fMRI), the betweenness centrality of its nodes is extracted, and the fMRI feature matrix is built from this centrality measure. Second, the arterial spin labeling (ASL) feature matrix is constructed by extracting cerebral blood flow (CBF). Topological relationship matrices are then built by computing the topological relationship between data points within each of the two feature matrices, capturing the intrinsic similarity between features. Graph regularization is subsequently used to embed the self-expression model into topological manifold learning, yielding a linear self-expression of the features. Finally, the selected well-represented feature vectors are fed into a multi-kernel support vector machine (MKSVM) for classification. Experimental results show that SETMFS significantly outperforms several state-of-the-art feature selection methods: its classification accuracy reaches 86.10%, at least 4.34% higher than the comparison methods. The method fully accounts for the topological correlation among multi-modal features and provides a reference for computer-aided ESRDaMCI diagnosis.
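
    A minimal sketch of the pipeline described in this abstract, assuming NumPy/NetworkX stand-ins rather than the authors' implementation: betweenness-centrality features from thresholded fMRI connectivity, followed by a feature self-expression model with a k-NN graph Laplacian used as a crude substitute for the paper's topological-manifold regularizer. The connectivity threshold, the proximal-gradient solver, and all parameter values are assumptions.

```python
# A minimal sketch, not the authors' implementation: NumPy/NetworkX stand-ins
# for the SETMFS pipeline. The connectivity threshold, the k-NN graph
# Laplacian used as the "topological" term, the proximal-gradient solver,
# and all parameter values are assumptions.
import numpy as np
import networkx as nx

def betweenness_features(connectivity, threshold=0.3):
    """One feature vector per subject: betweenness centrality of every node
    in the thresholded functional connectivity graph."""
    feats = []
    for A in connectivity:                       # A: (n_rois, n_rois)
        G = nx.from_numpy_array((np.abs(A) > threshold).astype(float))
        bc = nx.betweenness_centrality(G)
        feats.append([bc[i] for i in range(A.shape[0])])
    return np.asarray(feats)                     # (n_subjects, n_rois)

def self_expression_select(X, alpha=0.1, beta=0.1, n_neighbors=5,
                           n_iter=300, lr=1e-3, k=20):
    """Score features by how well they linearly reconstruct one another while
    respecting a k-NN graph over features (a stand-in for the paper's
    topological-manifold regularizer); return the indices of the top-k."""
    X = (X - X.mean(0)) / (X.std(0) + 1e-12)     # standardize columns
    d = X.shape[1]
    # k-NN similarity graph over features and its Laplacian
    D2 = np.square(X.T[:, None, :] - X.T[None, :, :]).sum(-1)
    S = np.zeros((d, d))
    for i in range(d):
        for j in np.argsort(D2[i])[1:n_neighbors + 1]:
            S[i, j] = S[j, i] = np.exp(-D2[i, j] / (D2.mean() + 1e-12))
    L = np.diag(S.sum(1)) - S
    # proximal gradient on ||X - XW||_F^2 + beta*tr(W^T L W) + alpha*||W||_1
    W = np.zeros((d, d))
    for _ in range(n_iter):
        grad = X.T @ (X @ W - X) + beta * (L + L.T) @ W
        W -= lr * grad
        W = np.sign(W) * np.maximum(np.abs(W) - lr * alpha, 0.0)  # soft-threshold
    scores = np.linalg.norm(W, axis=1)           # row norm = feature importance
    return np.argsort(scores)[::-1][:k]
```

    In the paper, the fMRI and ASL feature matrices each get their own topological relationship matrix; the selected features from both modalities would then be passed to an MKSVM classifier, which is omitted here.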

    Mining Brain Networks using Multiple Side Views for Neurological Disorder Identification

    Mining discriminative subgraph patterns from graph data has attracted great interest in recent years and has a wide variety of applications in disease diagnosis, neuroimaging, and beyond. Most research on subgraph mining focuses on the graph representation alone. In many real-world applications, however, side information is available along with the graph data. For neurological disorder identification, for example, hundreds of clinical, immunologic, serologic, and cognitive measures may be documented for each subject in addition to the brain networks derived from neuroimaging data. These measures constitute multiple side views that encode a large amount of supplemental information useful for diagnosis, yet they are often ignored. In this paper, we study the problem of discriminative subgraph selection using multiple side views and propose a novel solution that finds an optimal set of subgraph features for graph classification by exploiting a plurality of side views. We derive a feature evaluation criterion, named gSide, that estimates the usefulness of subgraph patterns based on the side views. We then develop a branch-and-bound algorithm, called gMSV, that searches efficiently for optimal subgraph features by integrating the subgraph mining process with the procedure of discriminative feature selection. Empirical studies on graph classification tasks for neurological disorders using brain networks demonstrate that the subgraph patterns selected by this multi-side-view guided approach effectively boost graph classification performance and are relevant to disease diagnosis.
    Comment: in Proceedings of IEEE International Conference on Data Mining (ICDM) 201
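
    The abstract does not give gSide's exact form, so the snippet below is only a hedged illustration of the general idea: a candidate subgraph feature (a 0/1 indicator over subjects' brain networks) is scored by class separation minus a penalty for disagreeing with subjects that are similar in the clinical/cognitive side views. The Gaussian similarity, the trade-off parameter mu, and the function name are assumptions; the actual criterion and the gMSV branch-and-bound search are defined in the paper.

```python
# Hedged illustration only, not the paper's gSide: score a candidate subgraph
# feature by class separation minus a side-view disagreement penalty.
import numpy as np

def side_view_score(feature, side_views, labels, mu=1.0):
    """feature: (n,) 0/1 vector, 1 if subject i's brain network contains the
    candidate subgraph; side_views: list of (n, d_v) clinical/cognitive
    measure matrices; labels: (n,) array of +1/-1 diagnoses."""
    f = feature.astype(float)
    # supervised part: how well the feature separates the two classes
    sep = abs(f[labels == 1].mean() - f[labels == -1].mean())
    # side-view part: penalize disagreement between side-view-similar subjects
    penalty = 0.0
    for V in side_views:
        D2 = np.square(V[:, None, :] - V[None, :, :]).sum(-1)
        S = np.exp(-D2 / (D2.mean() + 1e-12))
        penalty += (S * np.square(f[:, None] - f[None, :])).mean()
    return sep - mu * penalty
```

    In a full system, candidate subgraphs enumerated by a frequent-subgraph miner would each be scored this way, with a branch-and-bound step pruning candidates whose score bound cannot beat the current best, which is the role gMSV plays in the paper.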

    Ranking to Learn: Feature Ranking and Selection via Eigenvector Centrality

    In an era where accumulating data is easy and storing it is inexpensive, feature selection plays a central role in reducing the high dimensionality of huge amounts of otherwise meaningless data. In this paper, we propose a graph-based feature selection method that ranks features by identifying the most important ones within an arbitrary set of cues. Mapping the problem onto an affinity graph, where features are the nodes, the solution is obtained by assessing the importance of nodes through an indicator of centrality, in particular Eigenvector Centrality (EC). The gist of EC is to estimate the importance of a feature as a function of the importance of its neighbors. Ranking central nodes singles out candidate features that turn out to be effective from a classification point of view, as demonstrated in a thorough experimental section. Our approach has been tested on 7 diverse datasets from the recent literature (including biological data and object recognition, among others) and compared against filter, embedded, and wrapper methods. The results are remarkable in terms of accuracy, stability, and low execution time.
    Comment: Preprint version - Lecture Notes in Computer Science - Springer 201
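
    A minimal sketch of the ranking idea, assuming NumPy only: build an affinity graph whose nodes are features, then rank features by the dominant eigenvector of the adjacency matrix (eigenvector centrality), computed here by power iteration. The particular affinity used (absolute feature correlation weighted by a Fisher-like class-relevance score) is an illustrative assumption, not necessarily the paper's construction.

```python
# A minimal sketch of eigenvector-centrality feature ranking. The affinity
# graph construction below is an assumption made for illustration.
import numpy as np

def ec_feature_ranking(X, y, n_power_iters=200):
    """X: (n_samples, n_features); y: binary labels (0/1).
    Returns feature indices sorted from most to least central."""
    Xs = (X - X.mean(0)) / (X.std(0) + 1e-12)
    # pairwise affinity between features: absolute Pearson correlation
    A = np.nan_to_num(np.abs(np.corrcoef(Xs, rowvar=False)))
    # weight edges by how class-relevant their endpoint features are
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    var0, var1 = X[y == 0].var(0), X[y == 1].var(0)
    rel = (mu0 - mu1) ** 2 / (var0 + var1 + 1e-12)
    A *= np.sqrt(np.outer(rel, rel))
    np.fill_diagonal(A, 0.0)
    # eigenvector centrality via power iteration on the non-negative matrix A
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(n_power_iters):
        v = A @ v
        v /= np.linalg.norm(v) + 1e-12
    return np.argsort(v)[::-1]
```

    A downstream classifier would then be trained on the top-ranked features, with the cut-off chosen by cross-validation.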