3,336 research outputs found
Sparse deconvolution using support vector machines
Sparse deconvolution is a classical subject in digital signal processing with many practical applications. Support vector machine (SVM) algorithms exhibit several characteristics, such as sparsity of the solution and implicit regularization, that make them attractive for solving sparse deconvolution problems. Here, a sparse deconvolution algorithm based on the SVM framework for signal processing is presented and analyzed, including comparative evaluations of its performance in terms of estimation and detection capabilities, and of robustness with respect to non-Gaussian additive noise.
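As an illustration of the general idea (not necessarily the paper's exact algorithm), deconvolution can be cast as a regression with an ε-insensitive loss: the observed signal is modeled as a convolution matrix applied to the unknown sequence, and the SVM fit yields a sparse set of dual coefficients (few support vectors), which is the sense of sparsity exploited in this line of work. A minimal sketch using scikit-learn's linear-kernel SVR as a stand-in solver; the impulse response h, noise model, and sizes are made-up example values:

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Made-up example: a sparse input sequence and a known impulse response h.
n = 200
x_true = np.zeros(n)
x_true[rng.choice(n, size=8, replace=False)] = rng.normal(0.0, 2.0, size=8)
h = np.array([1.0, 0.7, 0.3, 0.1])

# Lower-triangular Toeplitz convolution matrix H, so that y = H @ x + noise.
H = toeplitz(np.r_[h, np.zeros(n - len(h))], np.r_[h[0], np.zeros(n - 1)])
y = H @ x_true + 0.1 * rng.standard_t(df=3, size=n)  # heavy-tailed noise

# Fitting w to minimize ||w||^2/2 + C * sum eps_insensitive(y_i - w @ H[i])
# mirrors an SVM deconvolution primal; sparsity appears in the dual
# coefficients (few support vectors), not directly in the entries of w.
svr = SVR(kernel="linear", epsilon=0.05, C=10.0)
svr.fit(H, y)
x_hat = svr.coef_.ravel()

print("support vectors used:", len(svr.support_), "of", n)
print("largest |x_hat| indices:", np.argsort(-np.abs(x_hat))[:8])
```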
Sparse Predictive Structure of Deconvolved Functional Brain Networks
The functional and structural representation of the brain as a complex network is marked by the fact that comparing noisy, intrinsically correlated, high-dimensional structures between experimental conditions or groups precludes typical mass-univariate methods. Furthermore, most network estimation methods cannot distinguish real correlation from spurious correlation arising from the convolution due to the nodes' interactions, which introduces additional noise into the data. We propose a machine learning pipeline aimed at identifying multivariate differences between brain networks associated with different experimental conditions. The pipeline (1) leverages the deconvolved individual contribution of each edge and (2) maps the task into a sparse classification problem in order to construct the associated "sparse deconvolved predictive network", i.e., a graph with the same nodes as those compared but whose edge weights are defined by their relevance for out-of-sample predictions in classification. We present an application of the proposed method by decoding the covert attention direction (left or right) based on the single-trial functional connectivity matrix extracted from high-frequency magnetoencephalography (MEG) data. Our results demonstrate how network deconvolution combined with sparse classification methods outperforms typical approaches for MEG decoding.
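The two pipeline stages can be illustrated with a short sketch. Here, network deconvolution uses the closed form popularized by Feizi et al., G_dir = G_obs (I + G_obs)^{-1} applied eigenvalue-wise, and the sparse classifier is an L1-regularized logistic regression; the array shapes, random data, and regularization strength are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def deconvolve(G_obs):
    """Network deconvolution: G_dir = G_obs @ inv(I + G_obs),
    computed eigenvalue-wise on a symmetric connectivity matrix."""
    w, U = np.linalg.eigh(G_obs)
    w_dir = w / (1.0 + w)                 # assumes eigenvalues > -1
    return (U * w_dir) @ U.T

def edge_vector(G):
    """Vectorize the upper triangle (each edge once)."""
    iu = np.triu_indices_from(G, k=1)
    return G[iu]

# Made-up data: 120 trials, 40 nodes, binary labels (left/right attention).
rng = np.random.default_rng(0)
n_trials, n_nodes = 120, 40
trials = rng.normal(size=(n_trials, n_nodes, n_nodes))
trials = (trials + trials.transpose(0, 2, 1)) / 2     # symmetrize
labels = rng.integers(0, 2, size=n_trials)

X = np.stack([edge_vector(deconvolve(G)) for G in trials])

# Sparse (L1) classification; the surviving nonzero weights define the
# "sparse deconvolved predictive network" over the same nodes.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, labels)
print("edges retained:", np.count_nonzero(clf.coef_))
```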
New convergence results for the scaled gradient projection method
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP, equipped with a suitable choice of the scaling matrix, is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak, though very general, convergence theorem has been provided, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the sole assumptions that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, provided that the sequence of scaling matrices satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we also prove an O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule also performs well from a computational point of view.
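For concreteness, the SGP iteration x_{k+1} = x_k + lambda_k (P(x_k - alpha_k D_k grad f(x_k)) - x_k) can be sketched for the simple case of a convex quadratic over the nonnegative orthant, where the projection P is a clamp. The diagonal scaling, step size, and line-search constants below are illustrative assumptions, not the paper's rule; the condition on the scaling matrices is mimicked by forcing the eigenvalues of D_k toward 1 as k grows:

```python
import numpy as np

def sgp(grad, f, x0, n_iter=500, alpha=1e-2, beta=1e-4, sigma=0.5):
    """Scaled gradient projection sketch: nonnegativity constraints,
    diagonal scaling D_k shrinking toward the identity, and an
    Armijo-type backtracking line search along the feasible direction."""
    x = np.maximum(x0, 0.0)
    for k in range(n_iter):
        g = grad(x)
        # Diagonal scaling with entries in [1/(1+zeta_k), 1+zeta_k],
        # zeta_k summable, echoing the kind of condition the abstract cites.
        zeta = 1.0 / (k + 1.0) ** 1.1
        d = np.clip(np.where(x > 0, x, 1.0), 1.0 / (1.0 + zeta), 1.0 + zeta)
        y = np.maximum(x - alpha * d * g, 0.0)    # projected scaled step
        direction = y - x
        lam = 1.0
        while f(x + lam * direction) > f(x) + beta * lam * g @ direction:
            lam *= sigma                          # backtracking
        x = x + lam * direction
    return x

# Illustrative problem: min 0.5 * ||A x - b||^2 subject to x >= 0.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
b = rng.normal(size=50)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

x_star = sgp(grad, f, x0=np.ones(20))
print("objective:", f(x_star), "min entry:", x_star.min())
```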
- …