Greedy low-rank algorithm for spatial connectome regression
Recovering brain connectivity from tract tracing data is an important
computational problem in the neurosciences. Mesoscopic connectome
reconstruction was previously formulated as a structured matrix regression
problem (Harris et al., 2016), but existing techniques do not scale to the
whole-brain setting. The corresponding matrix equation is challenging to solve
due to large scale, ill-conditioning, and a general form that lacks a
convergent splitting. We propose a greedy low-rank algorithm for the connectome
reconstruction problem in very high dimensions. The algorithm approximates the
solution by a sequence of rank-one updates which exploit the sparse and
positive definite problem structure. The algorithm was described previously
(Kressner and Sirković, 2015) but had not been implemented for this connectome
problem, which posed a number of challenges. We had to design judicious
stopping criteria and employ efficient solvers for the three main sub-problems
of the algorithm, including an efficient GPU implementation that alleviates the
main bottleneck for large datasets. The performance of the method is evaluated
on three examples: an artificial "toy" dataset and two whole-cortex instances
using data from the Allen Mouse Brain Connectivity Atlas. We find that the
method is significantly faster than previous methods and that moderate ranks
yield good approximations. This speedup allows for the estimation of
increasingly large-scale connectomes across taxa as these data become available
from tracing experiments. The data and code are available online.
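The greedy rank-one strategy described above can be sketched for a simple Sylvester-type matrix equation A X + X Bᵀ = C: each step fits a rank-one correction u vᵀ to the current residual by alternating least squares and adds it to the solution. This is a dense, illustrative sketch only, not the paper's implementation, which exploits sparsity and positive definiteness and runs the main bottleneck on a GPU; the function name and problem sizes are hypothetical.

```python
import numpy as np

def greedy_low_rank_sylvester(A, B, C, rank=6, inner=15, seed=0):
    """Approximate the solution X of A X + X B^T = C by a sum of
    rank-one terms u v^T, each fitted greedily to the residual with
    alternating least squares (illustrative sketch, not the authors'
    solver, which exploits sparse/positive definite structure)."""
    rng = np.random.default_rng(seed)
    n, m = C.shape
    X = np.zeros((n, m))
    for _ in range(rank):
        R = C - A @ X - X @ B.T            # current residual
        r = R.ravel(order="F")             # vec(R), column-stacked
        v = rng.standard_normal(m)
        for _ in range(inner):
            # vec(A u v^T + u (B v)^T) = (v (x) A + (B v) (x) I) u
            Mu = np.kron(v[:, None], A) + np.kron((B @ v)[:, None], np.eye(n))
            u = np.linalg.lstsq(Mu, r, rcond=None)[0]
            # vec((A u) v^T + u v^T B^T) = (I (x) (A u) + B (x) u) v
            Mv = np.kron(np.eye(m), (A @ u)[:, None]) + np.kron(B, u[:, None])
            v = np.linalg.lstsq(Mv, r, rcond=None)[0]
        X += np.outer(u, v)
    return X
```

After a few rank-one updates on a problem whose true solution has low rank, the residual norm drops well below its initial value, which is the "moderate ranks offer good approximation" behavior the abstract reports.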
A Unique "Nonnegative" Solution to an Underdetermined System: from Vectors to Matrices
This paper investigates the uniqueness of a nonnegative vector solution and
the uniqueness of a positive semidefinite matrix solution to underdetermined
linear systems. A vector solution is the unique solution to an underdetermined
linear system only if the measurement matrix has a row-span intersecting the
positive orthant. Focusing on two types of binary measurement matrices,
Bernoulli 0-1 matrices and adjacency matrices of general expander graphs, we
show that, in both cases, the support size of a unique nonnegative solution can
grow linearly, namely O(n), with the problem dimension n. We also provide
closed-form characterizations of the ratio of this support size to the signal
dimension. For the matrix case, we give a necessary and sufficient condition
on the linear compressed observations operator under which the positive
semidefinite matrix solution to the compressed linear observations is unique.
We further show that a randomly generated Gaussian linear
compressed observations operator will satisfy this condition with
overwhelmingly high probability.
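The vector case can be illustrated numerically: draw a Bernoulli 0-1 measurement matrix (one of the two types the paper analyzes), measure a sparse nonnegative signal, and solve the underdetermined system with nonnegative least squares. This small demo assumes SciPy's `nnls` solver; the dimensions and seed are arbitrary. When the nonnegative solution is unique, any nonnegative vector that fits the measurements exactly must coincide with the true signal.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
m, n, k = 40, 80, 5                       # measurements, dimension, support size

# Bernoulli 0-1 measurement matrix
A = rng.integers(0, 2, size=(m, n)).astype(float)

# sparse nonnegative signal and its compressed measurements
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
b = A @ x_true

# nonnegative least squares finds a nonnegative vector fitting b exactly;
# when uniqueness holds for this instance, x_hat equals x_true
x_hat, rnorm = nnls(A, b)
```

Since an exact nonnegative solution exists, the optimal residual is zero, so `rnorm` is numerically tiny; for instances where the uniqueness condition holds, `x_hat` recovers `x_true` itself.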
Sequential Dimensionality Reduction for Extracting Localized Features
Linear dimensionality reduction techniques are powerful tools for image
analysis as they allow the identification of important features in a data set.
In particular, nonnegative matrix factorization (NMF) has become very popular
as it is able to extract sparse, localized and easily interpretable features by
imposing an additive combination of nonnegative basis elements. Nonnegative
matrix underapproximation (NMU) is a closely related technique that has the
advantage of identifying features sequentially. In this paper, we propose a
variant of NMU that is particularly well suited for image analysis as it
incorporates spatial information, that is, it takes into account the fact
that neighboring pixels are more likely to be contained in the same features,
and favors the extraction of localized features by looking for sparse basis
elements. We show that our new approach competes favorably with comparable
state-of-the-art techniques on synthetic, facial and hyperspectral image data
sets.
Comment: 24 pages, 12 figures. New numerical experiments on synthetic data
sets, discussion about the convergence.
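The sequential extraction idea behind NMU can be sketched as follows: fit a nonnegative rank-one factor u vᵀ to the current residual, subtract it, clip at zero, and repeat. This is a simplified illustration, not the paper's algorithm, which enforces the underapproximation constraint u vᵀ ≤ M via Lagrangian duality and additionally promotes spatial coherence and sparse basis elements; the function name is hypothetical.

```python
import numpy as np

def sequential_rank_one(M, r=3, inner=50):
    """Extract r nonnegative rank-one features one at a time, NMU-style:
    fit u v^T to the residual, subtract it, clip at zero, repeat.
    Simplified sketch; the real NMU enforces u v^T <= M, and the paper's
    variant adds spatial priors and sparsity on the basis elements."""
    R = np.asarray(M, dtype=float).copy()
    factors = []
    for _ in range(r):
        v = np.ones(R.shape[1])
        for _ in range(inner):          # alternating nonnegative updates
            u = np.maximum(R @ v, 0.0) / (v @ v + 1e-12)
            v = np.maximum(R.T @ u, 0.0) / (u @ u + 1e-12)
        factors.append((u.copy(), v.copy()))
        R = np.maximum(R - np.outer(u, v), 0.0)   # nonnegative residual
    return factors
```

Because each factor is fitted to what earlier factors left unexplained, the features come out one at a time in order of importance, which is what makes the sequential formulation convenient for extracting localized parts.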