
    The Incremental Multiresolution Matrix Factorization Algorithm

    Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices -- an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices. We describe how this multiscale analysis goes much farther than what a direct global factorization of the data can identify. We evaluate the efficacy of the resulting factorizations for relative leveraging within regression tasks using medical imaging data. We also use the factorization on representations learned by popular deep networks, providing evidence of their ability to infer semantic relationships even when they are not explicitly trained to do so. We show that this algorithm can be used as an exploratory tool to improve the network architecture, and within numerous other settings in vision. Comment: Computer Vision and Pattern Recognition (CVPR) 2017, 10 pages
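    To make the idea concrete, below is a minimal sketch of the kind of operation a multiresolution factorization of a symmetric matrix builds on: greedy Givens (Jacobi) rotations that annihilate the largest off-diagonal entries, exposing block structure level by level. This is an illustrative toy under that assumption, not the authors' incremental MMF algorithm.

```python
import numpy as np

def givens_level(A, n_rotations, tol=1e-12):
    """Greedily annihilate the largest off-diagonal entries of symmetric A
    with Jacobi/Givens rotations; returns the rotated matrix and rotations."""
    A = A.copy()
    rotations = []
    for _ in range(n_rotations):
        off = np.abs(np.triu(A, k=1))
        i, j = np.unravel_index(np.argmax(off), A.shape)
        if off[i, j] < tol:
            break
        # rotation angle that zeros A[i, j] under G @ A @ G.T
        theta = 0.5 * np.arctan2(2 * A[i, j], A[i, i] - A[j, j])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(A.shape[0])
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s
        A = G @ A @ G.T
        rotations.append((i, j, theta))
    return A, rotations

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A_sym = B @ B.T                             # symmetric test matrix
A_rot, rots = givens_level(A_sym, n_rotations=10)
print(np.round(A_rot, 2))                   # off-diagonal mass reduced
```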

    Space-by-time non-negative matrix factorization for single-trial decoding of M/EEG activity

    We develop a novel methodology for the single-trial analysis of multichannel time-varying neuroimaging signals. We introduce the space-by-time M/EEG decomposition, based on Non-negative Matrix Factorization (NMF), which describes single-trial M/EEG signals using a set of non-negative spatial and temporal components that are linearly combined with signed scalar activation coefficients. We illustrate the effectiveness of the proposed approach on an EEG dataset recorded during the performance of a visual categorization task. Our method extracts three temporal and two spatial functional components, achieving a compact yet full representation of the underlying structure, which validates and succinctly summarizes results from previous studies. Furthermore, we introduce a decoding analysis that allows us to determine the distinct functional role of each component and to relate each to experimental conditions and task parameters. In particular, we demonstrate that the presented stimulus and the task difficulty of each trial can be reliably decoded using specific combinations of components from the identified space-by-time representation. When comparing with a sliding-window linear discriminant algorithm, we show that our approach yields more robust decoding performance across participants. Overall, our findings suggest that the proposed space-by-time decomposition is a meaningful low-dimensional representation that carries the relevant information of single-trial M/EEG signals.
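    As a rough illustration of the space-by-time idea, the sketch below approximates each trial X_k (time x channels, non-negative) as W_tem @ H_k @ W_spa, learning temporal and spatial modules with off-the-shelf scikit-learn NMF and recovering signed per-trial coefficients by least squares. The shapes and the two-stage fitting are assumptions made for this toy; the paper's sample-based algorithm differs.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_trials, n_time, n_chan = 20, 50, 32
trials = [np.abs(rng.standard_normal((n_time, n_chan))) for _ in range(n_trials)]

# Temporal modules: NMF on trials stacked along the channel axis.
X_time = np.hstack(trials)                       # (n_time, n_trials*n_chan)
W_tem = NMF(n_components=3, max_iter=500, random_state=0).fit_transform(X_time)

# Spatial modules: NMF on trials stacked along the time axis.
X_space = np.vstack(trials)                      # (n_trials*n_time, n_chan)
W_spa = NMF(n_components=2, max_iter=500, random_state=0).fit(X_space).components_

# Signed per-trial coefficients H_k by least squares, using
# vec(W_tem @ H_k @ W_spa) = (W_spa.T kron W_tem) @ vec(H_k).
K = np.kron(W_spa.T, W_tem)                      # (n_chan*n_time, 6)
H = [np.linalg.lstsq(K, X.reshape(-1, order='F'), rcond=None)[0]
       .reshape(3, 2, order='F')
     for X in trials]
print(H[0])                                      # activation of trial 0
```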

    Sparse and Non-Negative BSS for Noisy Data

    Non-negative blind source separation (BSS) has raised interest in various fields of research, as evidenced by the extensive literature on the topic of non-negative matrix factorization (NMF). In this context, it is fundamental that the sources to be estimated present some diversity in order to be efficiently retrieved. Sparsity is known to enhance such contrast between the sources while producing very robust approaches, especially to noise. In this paper we introduce a new algorithm to tackle the blind separation of non-negative sparse sources from noisy measurements. We first show that sparsity and non-negativity constraints have to be carefully applied to the sought-after solution. In fact, improperly constrained solutions are unlikely to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA (non-negative Generalized Morphological Component Analysis), makes use of proximal calculus techniques to provide properly constrained solutions. The performance of nGMCA compared to other state-of-the-art algorithms is demonstrated by numerical experiments encompassing a wide variety of settings, with negligible parameter tuning. In particular, nGMCA is shown to provide robustness to noise and performs well on synthetic mixtures of real NMR spectra. Comment: 13 pages, 18 figures, to be published in IEEE Transactions on Signal Processing
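    The core computational ingredient, proximal updates that enforce sparsity and non-negativity together, can be sketched as follows. This alternating proximal/projected gradient loop illustrates the principle only; it is not the authors' nGMCA with its specific thresholding strategy.

```python
import numpy as np

def prox_sparse_nonneg(Z, lam):
    """prox of lam*||.||_1 + indicator(. >= 0): non-negative soft threshold."""
    return np.maximum(Z - lam, 0.0)

def ngmca_sketch(X, r, lam=0.1, n_iter=200, seed=0):
    """Alternating proximal-gradient updates for X ~ A @ S, S sparse, A, S >= 0."""
    rng = np.random.default_rng(seed)
    A = np.abs(rng.standard_normal((X.shape[0], r)))
    S = np.abs(rng.standard_normal((r, X.shape[1])))
    for _ in range(n_iter):
        L = np.linalg.norm(A.T @ A, 2) + 1e-12       # Lipschitz constant
        S = prox_sparse_nonneg(S - (A.T @ (A @ S - X)) / L, lam / L)
        L = np.linalg.norm(S @ S.T, 2) + 1e-12
        A = np.maximum(A - ((A @ S - X) @ S.T) / L, 0.0)
        norms = np.maximum(np.linalg.norm(A, axis=0), 1e-12)
        A /= norms                                    # fix scale ambiguity
        S *= norms[:, None]
    return A, S

rng = np.random.default_rng(1)
A0 = np.abs(rng.standard_normal((20, 3)))
S0 = np.maximum(rng.standard_normal((3, 100)), 0)     # sparse non-negative sources
X = A0 @ S0 + 0.01 * rng.standard_normal((20, 100))   # noisy mixtures
A, S = ngmca_sketch(X, r=3)
print(np.linalg.norm(X - A @ S) / np.linalg.norm(X))  # relative residual
```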

    Non-Negative Blind Source Separation Algorithm Based on Minimum Aperture Simplicial Cone

    We address the problem of Blind Source Separation (BSS) when the hidden sources are non-negative (N-BSS). In this case, the scatter plot of the mixed data is contained within the simplicial cone generated by the columns of the mixing matrix. The proposed method, termed SCSA-UNS for Simplicial Cone Shrinking Algorithm for Unmixing Non-negative Sources, aims at estimating the mixing matrix and the sources by fitting a Minimum Aperture Simplicial Cone (MASC) to the cloud of mixed data points. SCSA-UNS is evaluated on both independent and correlated synthetic data and compared to other N-BSS methods. Simulations are also performed on real Liquid Chromatography-Mass Spectrometry (LC-MS) data for the metabolomic analysis of a chemical sample, and on real dynamic Positron Emission Tomography (PET) images, in order to study the pharmacokinetics of the [18F]-FDG (FluoroDeoxyGlucose) tracer in the brain.
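    The geometric picture can be illustrated in a few lines: with non-negative sources, every observation lies in the simplicial cone spanned by the mixing-matrix columns, so (when some samples are nearly pure) the extreme rays of the data cloud estimate those columns. The two-source toy below conveys that intuition only; it is not the SCSA-UNS shrinking procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2],
                   [0.1, 0.8]])                  # mixing matrix (columns = rays)
S = rng.exponential(size=(2, 2000))              # non-negative sources
X = A_true @ S                                   # observations lie in cone(A_true)

# Scale-invariant view: project each observation onto the unit simplex.
P = X / X.sum(axis=0, keepdims=True)

# With 2 sources the simplex is a segment; its endpoints estimate the
# (normalized) mixing columns.
a1 = X[:, np.argmax(P[0])]                       # ray closest to column 1
a2 = X[:, np.argmin(P[0])]                       # ray closest to column 2
A_hat = np.column_stack([a1, a2])
A_hat /= A_hat.sum(axis=0, keepdims=True)
print(A_hat)
print(A_true / A_true.sum(axis=0, keepdims=True))  # ground truth for comparison
```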

    Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data

    Due to their causal semantics, Bayesian networks (BNs) have been widely employed to discover underlying data relationships in exploratory studies, such as brain research. Despite its success in modeling the probability distribution of variables, a BN is naturally a generative model, which is not necessarily discriminative. This may cause subtle but critical network changes, of investigative value across populations, to be overlooked. In this paper, we propose to improve the discriminative power of BN models for continuous variables from two different perspectives. This yields two general discriminative learning frameworks for Gaussian Bayesian networks (GBNs). In the first framework, we employ the Fisher kernel to bridge the generative models of GBNs and the discriminative classifiers of SVMs, and convert GBN parameter learning to Fisher kernel learning by minimizing a generalization error bound of SVMs. In the second framework, we employ the max-margin criterion and build it directly upon GBN models to explicitly optimize the classification performance of the GBNs. The advantages and disadvantages of the two frameworks are discussed and experimentally compared. Both demonstrate strong power in learning discriminative parameters of GBNs for neuroimaging-based brain network analysis, as well as maintaining reasonable representation capacity. The contributions of this paper also include a new Directed Acyclic Graph (DAG) constraint, with a theoretical guarantee, to ensure the graph validity of the GBN. Comment: 16 pages and 5 figures for the article (excluding appendix)
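    The Fisher-kernel bridge in the first framework can be sketched compactly: embed each sample by the gradient of a generative model's log-likelihood with respect to its parameters, then classify the embeddings with a linear SVM. A toy diagonal Gaussian stands in for the GBN below; the data and model are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def fisher_scores(X, mu, var):
    """Gradients of log N(x; mu, diag(var)) w.r.t. mu and var, per sample."""
    d_mu = (X - mu) / var
    d_var = ((X - mu) ** 2 - var) / (2 * var ** 2)
    return np.hstack([d_mu, d_var])

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 5))          # class 0 samples
X1 = rng.normal(0.5, 1.5, size=(100, 5))          # class 1 samples
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(100)]

mu, var = X.mean(axis=0), X.var(axis=0)           # fit the generative model
Phi = fisher_scores(X, mu, var)                   # Fisher score embedding
clf = SVC(kernel='linear').fit(Phi, y)            # discriminative classifier
print("train accuracy:", clf.score(Phi, y))
```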

    Generative-Discriminative Low Rank Decomposition for Medical Imaging Applications

    In this thesis, we propose a method that can be used to extract biomarkers from medical images toward early diagnosis of abnormalities. The surge of demand for biomarkers and the availability of medical images in recent years call for accurate, repeatable, and interpretable approaches to extracting meaningful imaging features. However, extracting such information from medical images is a challenging task because the number of pixels (voxels) in a typical image is on the order of millions, while even a large sample size in a medical imaging dataset does not usually exceed a few hundred. Nevertheless, depending on the nature of an abnormality, only a parsimonious subset of voxels is typically relevant to the disease; therefore, various notions of sparsity are exploited in this thesis to improve the generalization performance of the prediction task. We propose a novel discriminative dimensionality reduction method that yields good classification performance on various datasets without compromising the clinical interpretability of the results. This is achieved by combining the modelling strength of the generative learning framework and the classification performance of the discriminative learning paradigm. Clinical interpretability can be viewed as an additional measure of evaluation and is also helpful in designing methods that account for clinical priors, such as the association of certain brain areas with a particular cognitive task or the connectivity of some brain regions via neural fibres. We formulate our method as a large-scale optimization problem to solve a constrained matrix factorization. Finding an optimal solution of the large-scale matrix factorization renders off-the-shelf solvers computationally prohibitive; therefore, we designed an efficient algorithm based on the proximal method to address the computational bottleneck of the optimization problem. Our formulation is readily extended to different scenarios, such as cases where a large cohort of subjects has uncertain or no class labels (semi-supervised learning) or where each subject has a battery of imaging channels (multi-channel), etc. We show that by using various notions of sparsity as feasible sets of the optimization problem, we can encode different forms of prior knowledge, ranging from brain parcellation to brain connectivity.
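    Proximal methods of the kind invoked here rest on closed-form proximal operators for the regularizers. The sketch below shows two standard ones, soft-thresholding for the l1 sparsity penalty and singular-value thresholding for the nuclear-norm low-rank penalty; these are generic building blocks, not the thesis' full constrained formulation.

```python
import numpy as np

def prox_l1(Z, lam):
    """prox of lam*||.||_1: entrywise soft-thresholding (encodes sparsity)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)

def prox_nuclear(Z, lam):
    """prox of lam*||.||_*: soft-threshold the singular values (low rank)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 40))
D_lowrank = prox_nuclear(X, 2.0)  # minimizer of 0.5*||X-D||_F^2 + 2*||D||_*
D_sparse = prox_l1(X, 0.5)        # minimizer of 0.5*||X-D||_F^2 + 0.5*||D||_1
print(np.linalg.matrix_rank(D_lowrank), float(np.mean(D_sparse == 0)))
```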

    Visualisation of multi-dimensional medical images with application to brain electrical impedance tomography

    Medical imaging plays an important role in modern medicine. With the increasing complexity and information content of medical images, visualisation is vital for medical research and clinical applications to interpret the information presented in these images. The aim of this research is to investigate improvements to medical image visualisation, particularly for multi-dimensional medical image datasets. A recently developed medical imaging technique known as Electrical Impedance Tomography (EIT) is presented as a demonstration. To fulfil this aim, three main efforts are included in this work. First, a novel scheme for the processing of brain EIT data with SPM (Statistical Parametric Mapping) to detect ROIs (Regions of Interest) in the data is proposed based on a theoretical analysis. To evaluate the feasibility of this scheme, two types of experiments are carried out: one is implemented with simulated EIT data, and the other is performed with human brain EIT data under visual stimulation. The experimental results demonstrate that SPM is able to localise the expected ROI in EIT data correctly, and that it is reasonable to use the balloon hemodynamic change model to simulate the impedance change during brain function activity. Secondly, to deal with the absence of human morphology information in EIT visualisation, an innovative landmark-based registration scheme is developed to register brain EIT images with a standard anatomical brain atlas. Finally, a new task typology model is derived for task exploration in medical image visualisation, and a task-based system development methodology is proposed for the visualisation of multi-dimensional medical images. As a case study, a prototype visualisation system, named EIT5DVis, has been developed following this methodology to visualise five-dimensional brain EIT data. The EIT5DVis system is able to accept visualisation tasks through a graphical user interface; apply appropriate methods to analyse tasks, which include the ROI detection approach and registration scheme mentioned above; and produce various visualisations.
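    A landmark-based registration step of the kind described can be sketched with the orthogonal Procrustes (Kabsch) solution: given matched landmark coordinates in EIT space and atlas space, estimate the rigid transform aligning them. The landmark arrays below are hypothetical, and this illustrates only the alignment principle, not the EIT5DVis implementation.

```python
import numpy as np

def rigid_from_landmarks(P, Q):
    """Rotation R and translation t minimizing ||R @ P + t - Q|| over pairs.
    P, Q: (3, n) matched landmark coordinates (EIT space, atlas space)."""
    p_bar, q_bar = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - q_bar) @ (P - p_bar).T)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = q_bar - R @ p_bar
    return R, t

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 6))                   # landmarks in EIT space
R_true = np.linalg.qr(rng.standard_normal((3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                            # make it a proper rotation
Q = R_true @ P + np.array([[1.0], [2.0], [3.0]])  # same landmarks in atlas space
R, t = rigid_from_landmarks(P, Q)
print(np.allclose(R @ P + t, Q))                  # True: transform recovered
```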

    Hybrid Approach for Alzheimer’s Disease Diagnosis For 3D Brain MR Image

    Owing to advances in deep learning and medical imaging technology, several researchers now use convolutional neural networks (CNNs) to extract deep-level features from medical images in order to classify Alzheimer's disease (AD) more accurately and to predict clinical scores. A small-scale deep learning network called PCANet uses principal component analysis (PCA) to build multilayer filter banks for integrated learning of the data. Blockwise histograms are created after binarization to obtain image features. PCANet is less flexible than other frameworks because the multilayer filter banks are built from sample data and the generated features have dimensionality in the tens or even hundreds of thousands. To overcome these issues, we present in this study a PCANet-based, data-free network called the nonnegative matrix factorization tensor decomposition network (NMF-TDNet). To produce the final image features, we first construct higher-order tensors and use tensor decomposition (TD) to achieve data dimensionality reduction. Specifically, we develop multilevel filter banks for sample learning using nonnegative matrix factorization (NMF) instead of PCA. These features serve as input to the support vector machine (SVM) that our method employs to diagnose AD, predict clinical scores, and classify AD.
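    One NMF filter-bank stage of the kind PCANet-style networks stack can be sketched as follows: learn convolution filters by running NMF on vectorized image patches (in place of PCA), then filter the image with the learned basis. This is a toy single-image, single-stage version; the paper's network stacks such stages and pools blockwise histograms after binarization before the SVM.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.signal import convolve2d
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
img = np.abs(rng.standard_normal((32, 32)))       # stand-in non-negative image
k = 5                                             # patch / filter size

# Collect all k x k patches as rows of a non-negative matrix.
patches = sliding_window_view(img, (k, k)).reshape(-1, k * k)

# NMF basis vectors become the stage's convolution filters.
n_filters = 4
W = NMF(n_components=n_filters, max_iter=500, random_state=0).fit(patches).components_
filters = W.reshape(n_filters, k, k)

# Feature maps: convolve the image with each learned filter.
maps = np.stack([convolve2d(img, f, mode='same') for f in filters])
print(maps.shape)                                 # (4, 32, 32)
```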