Group Iterative Spectrum Thresholding for Super-Resolution Sparse Spectral Selection
Recently, sparsity-based algorithms have been proposed for super-resolution
spectrum estimation. However, to achieve adequately high resolution in
real-world signal analysis, the dictionary atoms have to be close to each other
in frequency, thereby resulting in a coherent design. The popular convex
compressed sensing methods break down in the presence of high coherence and large
noise. We propose a new regularization approach to handle model collinearity
and obtain parsimonious frequency selection simultaneously. It takes advantage
of the pairing structure of sine and cosine atoms in the frequency dictionary.
A probabilistic spectrum screening is also developed for fast computation in
high dimensions. A data-resampling version of high-dimensional Bayesian
Information Criterion is used to determine the regularization parameters.
Experiments show the efficacy and efficiency of the proposed algorithms in
challenging situations with small sample size, high frequency resolution, and
low signal-to-noise ratio.
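To make the pairing idea concrete, here is a minimal proximal-gradient (ISTA-style) sketch in which each frequency's cosine/sine coefficient pair is shrunk jointly by a group soft-threshold. The function names, fixed iteration count, and step-size rule are illustrative assumptions, not the authors' published algorithm.

```python
# Illustrative sketch only: group (pairwise) soft-thresholding inside an
# ISTA-style loop. Names and defaults are assumptions, not the paper's code.
import numpy as np

def group_soft_threshold(a, b, lam):
    # Shrink the (cosine, sine) coefficient pair of one frequency jointly,
    # so both members of the pair enter or leave the model together.
    norm = np.hypot(a, b)
    if norm <= lam:
        return 0.0, 0.0
    scale = 1.0 - lam / norm
    return scale * a, scale * b

def iterative_spectrum_thresholding(y, C, S, lam, step=None, iters=500):
    # C[:, k] = cos(2*pi*f_k*t), S[:, k] = sin(2*pi*f_k*t): a coherent
    # dictionary when the frequency grid f_k is fine.
    n, K = C.shape
    if step is None:
        A = np.hstack([C, S])
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the gradient step
    a = np.zeros(K)
    b = np.zeros(K)
    for _ in range(iters):
        r = y - C @ a - S @ b                    # current residual
        a_half = a + step * (C.T @ r)            # gradient step per block
        b_half = b + step * (S.T @ r)
        for k in range(K):                       # joint shrinkage per pair
            a[k], b[k] = group_soft_threshold(a_half[k], b_half[k], step * lam)
    return a, b
```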
Fully-Automatic Multiresolution Idealization for Filtered Ion Channel Recordings: Flickering Event Detection
We propose a new model-free segmentation method, JULES, which combines recent
statistical multiresolution techniques with local deconvolution for
idealization of ion channel recordings. The multiresolution criterion takes
into account scales down to the sampling rate, enabling the detection of
flickering events, i.e., events on small temporal scales, even below the filter
frequency. For such small scales the deconvolution step allows for a precise
determination of dwell times and, in particular, of amplitude levels, a task
which is not possible with common thresholding methods. This is confirmed
theoretically and in a comprehensive simulation study. In addition, JULES can
be applied as a preprocessing method for a refined hidden Markov analysis. Our
new methodology allows us to show that gramicidin A flickering events have the
same amplitude as the slow gating events. JULES is available as the R function
jules in the package clampSeg.
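For intuition about the multiresolution criterion, the toy sketch below scans dyadic window lengths down to single samples and flags windows whose mean exceeds a scale-dependent threshold; the sqrt(2 log(n/h)) penalty is a standard multiscale calibration assumed here for illustration. This is not the JULES algorithm itself, which additionally performs local deconvolution and is provided by the R function jules in clampSeg.

```python
# Toy multiscale scan, for intuition only; not JULES.
import numpy as np

def multiscale_flags(x, sigma):
    # Flag windows, down to single-sample scale, whose mean is too large to
    # be noise; sqrt(2 * log(n / h)) is a standard scale penalty (assumed).
    n = len(x)
    flagged = []
    h = 1
    while h <= n:
        thresh = sigma * (np.sqrt(2 * np.log(n / h)) + 1.0) / np.sqrt(h)
        for start in range(0, n - h + 1, h):
            m = x[start:start + h].mean()
            if abs(m) > thresh:
                flagged.append((start, start + h, m))
        h *= 2
    return flagged
```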
Practical recommendations for gradient-based training of deep architectures
Learning algorithms related to artificial neural networks, and deep learning
in particular, may seem to involve many bells and whistles, called
hyper-parameters. This chapter is meant as a practical guide with
recommendations for some of the most commonly used hyper-parameters, in
particular in the context of learning algorithms based on back-propagated
gradient and gradient-based optimization. It also discusses how to deal with
the fact that more interesting results can be obtained when one is allowed to
adjust many hyper-parameters. Overall, it describes elements of the practice
used to successfully and efficiently train and debug large-scale and often deep
multi-layer neural networks. It closes with open questions about the training
difficulties observed with deeper architectures.
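As a rough illustration of the knobs such a guide covers, the sketch below is a generic minibatch SGD loop with momentum, learning-rate decay, and early stopping; the default values are common starting points assumed for illustration, not recommendations quoted from the chapter.

```python
# Generic minibatch SGD with momentum and early stopping, annotated with the
# hyper-parameters such a guide typically covers. Defaults are assumptions.
import numpy as np

def sgd_train(params, grad_fn, val_loss_fn, data,
              lr=0.01,        # initial learning rate: often the key knob
              momentum=0.9,   # classical momentum coefficient
              batch_size=128, # trades gradient noise for throughput
              lr_decay=0.99,  # multiplicative decay per epoch
              max_epochs=100,
              patience=10):   # early stopping: epochs to wait for improvement
    velocity = [np.zeros_like(p) for p in params]
    best_val, wait = np.inf, 0
    for epoch in range(max_epochs):
        np.random.shuffle(data)
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            grads = grad_fn(params, batch)       # back-propagated gradients
            for p, v, g in zip(params, velocity, grads):
                v *= momentum
                v -= lr * g
                p += v                           # in-place parameter update
        lr *= lr_decay
        val = val_loss_fn(params)                # monitor held-out loss
        if val < best_val:
            best_val, wait = val, 0
        else:
            wait += 1
            if wait >= patience:                 # stop when validation stalls
                break
    return params
```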
Covariance Eigenvector Sparsity for Compression and Denoising
Sparsity in the eigenvectors of signal covariance matrices is exploited in
this paper for compression and denoising. Dimensionality reduction (DR) and
quantization modules, present in many practical compression schemes such as
transform codecs, are designed to capitalize on this form of sparsity and
achieve improved reconstruction performance compared to existing
sparsity-agnostic codecs. Using training data that may be noisy, a novel
sparsity-aware linear DR scheme is developed to fully exploit sparsity in the
covariance eigenvectors and form noise-resilient estimates of the principal
covariance eigenbasis. Sparsity is effected via norm-one regularization, and
the associated minimization problems are solved using computationally efficient
coordinate descent iterations. The resulting eigenspace estimator is shown
capable of identifying a subset of the unknown support of the eigenspace basis
vectors even when the observation noise covariance matrix is unknown, as long
as the noise power is sufficiently low. It is proved that the sparsity-aware
estimator is asymptotically normal, and the probability to correctly identify
the signal subspace basis support approaches one, as the number of training
data grows large. Simulations using synthetic data and images corroborate that
the proposed algorithms achieve improved reconstruction quality relative to
alternatives.
Comment: IEEE Transactions on Signal Processing, 2012 (to appear).
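The sketch below illustrates the two generic ingredients named in the abstract, norm-one regularization and coordinate descent, via a rank-one sparse-PCA-style alternation (in the spirit of Zou, Hastie, and Tibshirani's SPCA); it is a swapped-in illustration, not the paper's estimator.

```python
# Illustrative sketch: sparse leading-eigenvector estimation where the l1
# (norm-one) subproblem is solved by coordinate descent. Not the paper's
# exact criterion or algorithm.
import numpy as np

def soft(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, sweeps=50):
    # Coordinate descent for min_b 0.5*||y - X b||^2 + lam*||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                       # maintained residual y - X b
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j from the fit
            b[j] = soft(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * b[j]        # add it back with the new value
    return b

def sparse_leading_eigvec(X, lam, outer=20):
    # Alternate a dense direction a with a sparse loading b (cf. SPCA).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    a = Vt[0]                          # dense start: leading singular vector
    for _ in range(outer):
        b = lasso_cd(X, X @ a, lam)    # sparse regression of X a on X
        v = X.T @ (X @ b)
        a = v / np.linalg.norm(v)      # update the dense direction
    return b / (np.linalg.norm(b) or 1.0)
```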
Multi-view Regularized Gaussian Processes
Gaussian processes (GPs) have proven to be powerful tools in various
areas of machine learning. However, there are very few applications of GPs in
the scenario of multi-view learning. In this paper, we present a new GP model
for multi-view learning. Unlike existing methods, it combines multiple views by
regularizing marginal likelihood with the consistency among the posterior
distributions of latent functions from different views. Moreover, we give a
general point selection scheme for multi-view learning and use this criterion
to improve the proposed model. Experimental results on multiple real-world
data sets verify the effectiveness of the proposed model and demonstrate the
performance improvement obtained by employing this novel point selection scheme.
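A conceptual sketch of such an objective: per-view GP log marginal likelihoods combined with a penalty on disagreement between the views' posterior means. The RBF kernel, squared-difference penalty, and weight gamma below are assumptions made for illustration, not the paper's exact model.

```python
# Conceptual sketch of a multi-view GP objective: sum of per-view marginal
# likelihoods minus a consistency penalty. Kernel, penalty form, and gamma
# are illustrative assumptions.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_fit(X, y, noise=0.1, lengthscale=1.0):
    # Exact GP regression: posterior mean at the inputs and log marginal
    # likelihood, via a Cholesky factorization of the noisy kernel matrix.
    n = len(y)
    Kf = rbf_kernel(X, X, lengthscale)
    L = np.linalg.cholesky(Kf + noise ** 2 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    log_ml = (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
              - 0.5 * n * np.log(2 * np.pi))
    return Kf @ alpha, log_ml          # posterior mean of f, evidence

def multi_view_objective(views, y, gamma=1.0):
    # views: list of per-view input matrices X_v sharing the targets y.
    means, log_mls = zip(*(gp_fit(Xv, y) for Xv in views))
    consistency = sum(((m1 - m2) ** 2).sum()
                      for i, m1 in enumerate(means)
                      for m2 in means[i + 1:])
    return sum(log_mls) - gamma * consistency  # maximize over kernel params
```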