LOCUS: A Novel Decomposition Method for Brain Network Connectivity Matrices using Low-rank Structure with Uniform Sparsity
Network-oriented research has been increasingly popular in many scientific
areas. In neuroscience research, imaging-based network connectivity measures
have become key to understanding brain organization, potentially serving
as individual neural fingerprints. There are major challenges in analyzing
connectivity matrices including the high dimensionality of brain networks,
unknown latent sources underlying the observed connectivity, and the large
number of brain connections leading to spurious findings. In this paper, we
propose a novel blind source separation method with low-rank structure and
uniform sparsity (LOCUS) as a fully data-driven decomposition method for
network measures. Compared with the existing method that vectorizes
connectivity matrices ignoring brain network topology, LOCUS achieves more
efficient and accurate source separation for connectivity matrices using
low-rank structure. We propose a novel angle-based uniform sparsity
regularization that demonstrates better performance than the existing sparsity
controls for low-rank tensor methods. We propose a highly efficient iterative
Node-Rotation algorithm that exploits the block multi-convexity of the
objective function to solve the non-convex optimization problem for learning
LOCUS. We illustrate the advantage of LOCUS through extensive simulation
studies. Application of LOCUS to the Philadelphia Neurodevelopmental Cohort
neuroimaging study reveals biologically insightful connectivity traits that
are not found using the existing method.
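The low-rank structure that LOCUS exploits can be illustrated with a minimal sketch: a symmetric connectivity matrix written as a sum of rank-1 outer products x_l x_l^T, one per latent source, instead of being vectorized. The eigendecomposition below is only an illustration of that structure, not the LOCUS estimator itself (which adds angle-based uniform sparsity and the Node-Rotation algorithm).

```python
import numpy as np

def rank1_connectivity_sources(conn, n_sources):
    """Approximate a symmetric connectivity matrix as a sum of rank-1
    terms x_l x_l^T -- the low-rank structure LOCUS builds on.
    Illustration via eigendecomposition, not the LOCUS method itself."""
    vals, vecs = np.linalg.eigh(conn)
    # keep the components with the largest absolute eigenvalues
    order = np.argsort(np.abs(vals))[::-1][:n_sources]
    loadings = [np.sqrt(abs(vals[k])) * vecs[:, k] for k in order]
    recon = sum(np.sign(vals[k]) * np.outer(x, x)
                for k, x in zip(order, loadings))
    return loadings, recon
```

For a V-node network, each source is described by a length-V loading vector instead of the V(V-1)/2 free entries of a vectorized connectivity matrix, which is the efficiency gain the abstract refers to.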
Covariance-domain Dictionary Learning for Overcomplete EEG Source Identification
We propose an algorithm targeting the identification of more sources than
channels for electroencephalography (EEG). Our overcomplete source
identification algorithm, Cov-DL, leverages dictionary learning methods applied
in the covariance-domain. Assuming that EEG sources are uncorrelated within
moving time-windows and the scalp mixing is linear, the forward problem can be
transferred to the covariance domain which has higher dimensionality than the
original EEG channel domain. This allows for learning the overcomplete mixing
matrix that generates the scalp EEG even when there may be more sources than
sensors active at any time segment, i.e. when there are non-sparse sources.
This is contrary to straightforward dictionary learning methods, which are
based on an assumption of sparsity that is not satisfied in the case of
low-density EEG systems. We present two different learning strategies for
Cov-DL, determined by the size of the target mixing matrix. We demonstrate that
Cov-DL outperforms existing overcomplete ICA algorithms under various scenarios
of EEG simulations and real EEG experiments.
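The covariance-domain lift behind Cov-DL can be sketched briefly. Under the stated assumptions (linear mixing, sources uncorrelated within each window), each windowed channel covariance has the form A diag(p_t) A^T, and its vectorized upper triangle lives in an M(M+1)/2-dimensional space, larger than the M-dimensional channel space; that extra dimensionality is what makes overcomplete identification possible. The snippet below only computes the windowed covariance features, not the Cov-DL dictionary learning step itself.

```python
import numpy as np

def windowed_covariances(eeg, win):
    """Vectorized upper-triangular channel covariances of an
    (channels x samples) EEG array, one row per non-overlapping window.
    These are the covariance-domain observations Cov-DL learns from;
    the dictionary-learning stage itself is not shown here."""
    m, n = eeg.shape
    covs = []
    for start in range(0, n - win + 1, win):
        seg = eeg[:, start:start + win]
        seg = seg - seg.mean(axis=1, keepdims=True)   # de-mean per window
        covs.append(seg @ seg.T / win)
    iu = np.triu_indices(m)                            # M(M+1)/2 entries
    return np.stack([c[iu] for c in covs])
```

For M = 4 channels the covariance domain already has 10 dimensions, so a mixing matrix with more than 4 columns becomes identifiable in principle.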
Sparse Signal Processing Concepts for Efficient 5G System Design
As it becomes increasingly apparent that 4G will not be able to meet the
emerging demands of future mobile communication systems, the question of
what could make up a 5G system, what the crucial challenges are, and what
the key drivers will be is the subject of intensive, ongoing discussion.
Partly due to the advent of
compressive sensing, methods that can optimally exploit sparsity in signals
have received tremendous attention in recent years. In this paper we will
describe a variety of scenarios in which signal sparsity arises naturally in 5G
wireless systems. Signal sparsity and the associated rich collection of tools
and algorithms will thus be a viable source for innovation in 5G wireless
system design. We will describe applications of this sparse signal processing
paradigm in MIMO random access, cloud radio access networks, compressive
channel-source network coding, and embedded security. We will also highlight
important open problems that may arise in 5G system design, in whose solution
sparsity will potentially play a key role.
Comment: 18 pages, 5 figures, accepted for publication in IEEE Access
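A representative tool from the sparse-recovery toolbox the paper draws on is greedy support identification, e.g. for sparse channel estimation from few measurements. The Orthogonal Matching Pursuit routine below is a generic textbook algorithm, not one taken from the paper; it recovers a k-sparse x from y = Ax by greedily picking the column most correlated with the residual.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with
    y = A x. A standard sparse-recovery routine of the kind the sparse
    signal processing paradigm relies on; illustration only."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # re-fit y on all selected atoms by least squares
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

With a well-conditioned measurement matrix, k iterations suffice to recover a k-sparse signal exactly, which is why far fewer measurements than unknowns can be enough.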
Voxel selection in fMRI data analysis based on sparse representation
Multivariate pattern analysis approaches to detecting brain regions from fMRI data have been gaining attention recently. In this study, we introduce an iterative sparse-representation-based algorithm for detecting voxels in functional MRI (fMRI) data that carry task-relevant information. In each iteration of the algorithm, a linear programming problem is solved and a sparse weight vector is subsequently obtained. The final weight vector is the mean of those obtained in all iterations. The characteristics of our algorithm are as follows: 1) the weight vector (output) is sparse; 2) the magnitude of each entry of the weight vector represents the significance of its corresponding variable or feature in a classification or regression problem; and 3) due to the convergence of this algorithm, a stable weight vector is obtained. To demonstrate the validity of our algorithm and illustrate its application, we apply it to the Pittsburgh Brain Activity Interpretation Competition 2007 fMRI dataset to select the voxels most relevant to the subjects' tasks. Based on this dataset, the aforementioned characteristics of our algorithm are analyzed, and our method is compared with univariate general-linear-model-based statistical parametric mapping. Using our method, a combination of voxels is selected based on the principle of effective/sparse representation of a task. The data analysis results in this paper show that this combination of voxels is suitable for decoding tasks and demonstrate the effectiveness of our method.
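The iterate-and-average scheme the abstract describes can be sketched with a generic basis-pursuit linear program: minimize ||w||_1 subject to Xw = y via the standard split w = u - v with u, v >= 0, then average the sparse solutions over random subsamples of trials. The LP formulation and the subsampling scheme here are illustrative assumptions; the paper's exact linear program and iteration rule may differ.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_weights_lp(X, y):
    """One LP step: minimize ||w||_1 subject to X w = y, via the standard
    split w = u - v, u, v >= 0. A generic basis-pursuit LP in the spirit
    of the paper's per-iteration step (their exact LP may differ)."""
    n, p = X.shape
    c = np.ones(2 * p)                 # objective: sum(u) + sum(v) = ||w||_1
    A_eq = np.hstack([X, -X])          # X(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

def averaged_sparse_weights(X, y, n_iter=20, frac=0.8, seed=0):
    """Mean of the sparse per-iteration weight vectors over random
    subsamples of trials, mirroring the 'final weight vector is the
    mean of those obtained in all iterations' output."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    ws = []
    for _ in range(n_iter):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        ws.append(sparse_weights_lp(X[idx], y[idx]))
    return np.mean(ws, axis=0)
```

Averaging over iterations trades a little sparsity for the stability property the abstract emphasizes: voxels selected in many subsamples keep large mean weights.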
Source Separation in Chemical Analysis : Recent Achievements and Perspectives
Source separation is one of the most relevant estimation problems found in chemistry. Indeed, dealing with mixtures is paramount in different kinds of chemical analysis. For instance, there are some cases where the analyte is a chemical mixture of different components, e.g., in the analysis of rocks and heterogeneous materials through spectroscopy. Moreover, a mixing process can also take place even when the components are not chemically mixed. For instance, in ionic analysis of liquid samples, the ions are not chemically connected, but, due to the lack of selectivity of the chemical sensors, the acquired responses may be influenced by ions other than the desired ones. Finally, there are some situations where the pure components cannot be isolated chemically, since they appear only in the presence of other components. In this case, BSS may provide these components, which cannot be retrieved otherwise. In this paper, our aim is to shed some light on the use of BSS in chemical analysis. In this context, we first provide a brief overview of source separation (Section II), with particular attention to the classes of linear and nonlinear mixing models (Sections III and IV, respectively). Then, in Section V, we give some conclusions and focus on challenging aspects found in chemical analysis. Although it deals with a relatively new field of application, this article is not an exhaustive survey of source separation methods and algorithms, since there are solutions originating in closely related domains (e.g., remote sensing and hyperspectral imaging) that suit several problems found in chemical analysis well. Moreover, we do not discuss supervised source separation methods, which are basically the multivariate regression techniques one can find in chemometrics.
A review of blind source separation in NMR spectroscopy
The Fourier transform is the data processing naturally associated with most NMR experiments. Notable exceptions are Pulsed Field Gradient and relaxation analysis, whose structure is only partially suitable for FT. With the revamp of NMR of complex mixtures, fueled by analytical challenges such as metabolomics, alternative and more apt mathematical methods for data processing have been sought, with the aim of decomposing the NMR signal into simpler bits. Blind source separation is a very broad term regrouping several classes of mathematical methods for complex signal decomposition that use no hypothesis on the form of the data. Developed outside NMR, these algorithms have been increasingly tested on spectra of mixtures. In this review, we provide a historical overview of the application of blind source separation methodologies to NMR, including methods specifically designed for the particularities of this spectroscopy.
Blind source separation using statistical nonnegative matrix factorization
PhD thesis. Blind Source Separation (BSS) attempts to automatically extract and track a signal of interest in real-world scenarios where other signals are present. BSS addresses the problem of recovering the original signals from an observed mixture without relying on training knowledge. This research studied three novel approaches to the BSS problem, based on extensions of the non-negative matrix factorization model and sparsity regularization methods.
1) A framework amalgamating pruning and Bayesian-regularized cluster nonnegative tensor factorization with Itakura-Saito divergence for separating sources mixed in a stereo channel format: the sparse regularization term was adaptively tuned using a hierarchical Bayesian approach to yield the desired sparse decomposition. A modified Gaussian prior was formulated to express the correlation between different basis vectors. This algorithm automatically detected the optimal number of latent components for each individual source.
2) A factorization for single-channel BSS that decomposes an information-bearing matrix into a set of complex-valued factor matrices representing the spectral dictionary and temporal codes: a variational Bayesian approach was developed for computing the sparsity parameters that optimize the matrix factorization. This approach combined the advantages of both complex matrix factorization (CMF) and variational sparse analysis.
3) An imitated-stereo mixture model developed by weighting and time-shifting the original single-channel mixture, where the source signals are modelled as AR processes. The proposed mixture is analogous to a stereo signal created by two microphones, one real and one virtual. The imitated-stereo model employed nonnegative tensor factorization to separate the observed mixture. The separability analysis of the imitated-stereo mixture was derived using Wiener masking.
All algorithms were tested on real audio signals. Source separation performance was assessed by measuring the distortion between the original source and the estimated one according to the signal-to-distortion ratio (SDR). The experimental results demonstrate that the proposed uninformed audio separation algorithms surpass the conventional BSS methods, i.e. the IS-cNTF, SNMF and CMF methods, with average SDR improvements ranging from 2.6 dB to 6.4 dB per source.
Payap University
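The common baseline underlying the three thesis contributions is sparse nonnegative factorization of a magnitude spectrogram. The sketch below is a minimal multiplicative-update NMF with an L1 penalty on the activations; it is a generic baseline of the family the thesis extends, not its Bayesian or tensor variants.

```python
import numpy as np

def sparse_nmf(V, rank, sparsity=0.1, n_iter=200, seed=0):
    """Multiplicative-update NMF V ~= W H with an L1 penalty on the
    activations H. A minimal sparse-NMF baseline of the family the
    thesis extends (its Bayesian/tensor variants are more elaborate)."""
    rng = np.random.default_rng(seed)
    f, t = V.shape
    W = rng.random((f, rank)) + 1e-3
    H = rng.random((rank, t)) + 1e-3
    eps = 1e-9
    for _ in range(n_iter):
        # sparsity enters the H update as an additive penalty in the denominator
        H *= (W.T @ V) / (W.T @ (W @ H) + sparsity + eps)
        W *= (V @ H.T) / ((W @ H) @ H.T + eps)
        W /= W.sum(axis=0, keepdims=True) + eps  # keep scaling in H
    return W, H
```

Each column of W acts as a spectral dictionary atom and each row of H as its temporal code; grouping atoms by source and masking the mixture spectrogram is then the usual separation step.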