Group Iterative Spectrum Thresholding for Super-Resolution Sparse Spectral Selection
Recently, sparsity-based algorithms have been proposed for super-resolution
spectrum estimation. However, to achieve adequately high resolution in
real-world signal analysis, the dictionary atoms have to be close to each other
in frequency, resulting in a coherent design. The popular convex
compressed sensing methods break down in the presence of high coherence and
large noise. We propose a new regularization approach that handles model
collinearity and obtains parsimonious frequency selection simultaneously. It takes advantage
of the pairing structure of sine and cosine atoms in the frequency dictionary.
A probabilistic spectrum screening is also developed for fast computation in
high dimensions. A data-resampling version of high-dimensional Bayesian
Information Criterion is used to determine the regularization parameters.
Experiments show the efficacy and efficiency of the proposed algorithms in
challenging situations with small sample size, high frequency resolution, and
low signal-to-noise ratio.
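The pairing structure mentioned above can be exploited by shrinking each frequency's sine and cosine coefficients jointly, so a frequency enters or leaves the model as a unit. The sketch below is a minimal illustration of one such group soft-thresholding step, not the authors' full iterative algorithm; the function name, input pairs, and threshold value are all hypothetical.

```python
import math

def group_soft_threshold(pairs, lam):
    """Jointly shrink each (cosine, sine) coefficient pair toward zero.

    A pair survives only if its joint magnitude exceeds the threshold lam,
    so each frequency is selected or discarded as a group.
    """
    out = []
    for a, b in pairs:
        r = math.hypot(a, b)                      # joint magnitude of the pair
        scale = max(0.0, 1.0 - lam / r) if r > 0 else 0.0
        out.append((scale * a, scale * b))
    return out

# A strong frequency survives thresholding; a weak one is zeroed out entirely.
coeffs = [(3.0, 4.0), (0.2, 0.1)]
print(group_soft_threshold(coeffs, lam=1.0))
```

Thresholding the pair's Euclidean norm, rather than each coefficient separately, is what keeps the sine and cosine atoms of a frequency tied together.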
Complex wavelet regularization for solving inverse problems in remote sensing
Many problems in remote sensing can be modeled as the minimization of the sum of a data term and a prior term. We propose to use a new complex-wavelet-based prior and an efficient scheme to solve these problems. We show some results on a problem of image reconstruction with noise, irregular sampling and blur. We also show a comparison between two widely used priors in image processing: sparsity and regularity priors.
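A data-plus-prior objective of this kind is commonly minimized with a proximal-gradient (ISTA-style) iteration: a gradient step on the data term followed by the proximal operator of the prior. The sketch below is a toy stand-in, not the authors' scheme: the wavelet transform is replaced by the identity, and the matrix `A`, step size, and regularization weight are arbitrary choices for illustration.

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of an l1 (sparsity) prior."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def ista(A, y, lam, step, iters):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient descent.

    In a wavelet-regularized problem the l1 norm would be taken on
    transform coefficients; here the transform is the identity for brevity.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = soft([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# With A = identity, one pass reduces to plain soft-thresholding of y.
print(ista([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.1], lam=0.5, step=1.0, iters=20))
```

In practice `A` would model the blur and irregular sampling, and the proximal step would act on complex wavelet coefficients rather than on the image directly.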
STEER: Simple Temporal Regularization For Neural ODEs
Training Neural Ordinary Differential Equations (ODEs) is often
computationally expensive. Indeed, computing the forward pass of such models
involves solving an ODE which can become arbitrarily complex during training.
Recent works have shown that regularizing the dynamics of the ODE can partially
alleviate this. In this paper we propose a new regularization technique:
randomly sampling the end time of the ODE during training. The proposed
regularization is simple to implement, has negligible overhead and is effective
across a wide variety of tasks. Further, the technique is orthogonal to several
other methods proposed to regularize the dynamics of ODEs and as such can be
used in conjunction with them. We show through experiments on normalizing
flows, time series models and image recognition that the proposed
regularization can significantly decrease training time and even improve
performance over baseline models.
Comment: NeurIPS 202
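The proposed regularization amounts to replacing the fixed integration horizon with one drawn at random each training step. The pure-Python sketch below assumes a uniform distribution over [T − b, T + b] and fits a toy scalar ODE dx/dt = θx by hand-derived gradient descent; everything beyond the random-end-time idea is an illustrative assumption.

```python
import random

def integrate(f, x0, t_end, steps=100):
    """Fixed-step Euler solve of dx/dt = f(x) from time 0 to t_end."""
    x, dt = x0, t_end / steps
    for _ in range(steps):
        x = x + dt * f(x)
    return x

def steer_end_time(T, b, rng=random):
    """Sample the ODE end time uniformly from [T - b, T + b]."""
    return rng.uniform(T - b, T + b)

# Toy training loop (hypothetical setup): fit theta so the ODE maps
# x0 = 1 to a target of 2 at a randomly perturbed end time, rather
# than always integrating to exactly T.
random.seed(0)
theta, lr, T, b = 0.0, 0.05, 1.0, 0.25
for _ in range(200):
    t_end = steer_end_time(T, b)
    pred = integrate(lambda x: theta * x, 1.0, t_end)
    # For dx/dt = theta*x, d(pred)/d(theta) ~ pred * t_end (chain rule).
    grad = 2 * (pred - 2.0) * pred * t_end
    theta -= lr * grad
print(theta)  # settles near ln 2 ~ 0.69 despite the randomized horizon
```

Only the call to `steer_end_time` changes relative to ordinary training, which is why the technique composes with other dynamics regularizers.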
Closed-form Continuous-Depth Models
Continuous-depth neural models, where the derivative of the model's hidden
state is defined by a neural network, have enabled strong sequential data
processing capabilities. However, these models rely on advanced numerical
differential equation (DE) solvers resulting in a significant overhead both in
terms of computational cost and model complexity. In this paper, we present a
new family of models, termed Closed-form Continuous-depth (CfC) networks, that
are simple to describe and at least one order of magnitude faster while
exhibiting equally strong modeling abilities compared to their ODE-based
counterparts. The models are hereby derived from the analytical closed-form
solution of an expressive subset of time-continuous models, thus alleviating
the need for complex DE solvers all together. In our experimental evaluations,
we demonstrate that CfC networks outperform advanced, recurrent models over a
diverse set of time-series prediction tasks, including those with long-term
dependencies and irregularly sampled data. We believe our findings open new
opportunities to train and deploy rich, continuous neural models in
resource-constrained settings, which demand both performance and efficiency.
Comment: 17 page
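The closed-form idea can be illustrated at the scalar level: a time-dependent sigmoid gate blends two head outputs, so the state at any time t is evaluated directly instead of by running a DE solver. In the sketch below, `f`, `g`, and `h` are plain numbers standing in for learned networks; this is a hedged illustration of that gating structure, not the CfC architecture itself.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cfc_cell(t, f, g, h):
    """Scalar sketch of a closed-form continuous-depth update.

    A sigmoid gate, modulated by time t and a learned rate f, mixes two
    head outputs g and h; no numerical ODE integration is needed.
    """
    gate = sigmoid(-f * t)
    return gate * g + (1.0 - gate) * h

# At t = 0 the gate is 0.5, so the output is the average of the heads;
# as t grows (with f > 0) the output moves toward h.
print(cfc_cell(0.0, f=1.0, g=2.0, h=4.0))  # -> 3.0
```

Evaluating the state at an arbitrary t with one closed-form expression, rather than many solver steps, is the source of the claimed order-of-magnitude speedup.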
Probabilistic Imputation for Time-series Classification with Missing Data
Multivariate time series data for real-world applications typically contain a
significant amount of missing values. The dominant approach for classification
with such missing values is to impute them heuristically with specific values
(zero, mean, values of adjacent time-steps) or learnable parameters. However,
these simple strategies do not take the data generative process into account,
and more importantly, do not effectively capture the uncertainty in prediction
due to the multiple possibilities for the missing values. In this paper, we
propose a novel probabilistic framework for classification with multivariate
time series data with missing values. Our model consists of two parts: a deep
generative model for missing value imputation and a classifier. Extending the
existing deep generative models to better capture structures of time-series
data, our deep generative model part is trained to impute the missing values in
multiple plausible ways, effectively modeling the uncertainty of the
imputation. The classifier part takes the time series data along with the
imputed missing values and classifies signals, and is trained to capture the
predictive uncertainty due to the multiple possibilities of imputations.
Importantly, we show that naïvely combining the generative model and the
classifier could result in trivial solutions where the generative model does
not produce meaningful imputations. To resolve this, we present a novel
regularization technique that can promote the model to produce useful
imputation values that help classification. Through extensive experiments on
real-world time series data with missing values, we demonstrate the
effectiveness of our method.
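The two-part structure above can be sketched as Monte Carlo imputation followed by prediction averaging: draw several plausible completions of the missing entries, classify each, and read uncertainty off the spread of the predictions. Everything in the sketch is a toy stand-in; the sampler and the "classifier" below are hypothetical placeholders, not the paper's learned deep generative model or trained classifier.

```python
import math
import random
import statistics

def impute_samples(x, sampler, k=10, rng=random):
    """Draw k plausible completions of x, filling None entries with
    samples from a (toy) generative model `sampler`."""
    return [[v if v is not None else sampler(rng) for v in x] for _ in range(k)]

def predict_proba(x_complete):
    """Toy classifier: probability grows with the mean of the signal.
    Stands in for a learned classifier head."""
    m = statistics.fmean(x_complete)
    return 1.0 / (1.0 + math.exp(-m))

def classify_with_uncertainty(x, sampler, k=50, rng=random):
    """Average the classifier over k imputations; the spread across
    imputations reflects predictive uncertainty due to missingness."""
    probs = [predict_proba(c) for c in impute_samples(x, sampler, k, rng)]
    return statistics.fmean(probs), statistics.pstdev(probs)

random.seed(0)
mean_p, std_p = classify_with_uncertainty(
    [1.0, None, 0.5, None], lambda rng: rng.gauss(0.0, 1.0))
print(0.0 <= mean_p <= 1.0, std_p > 0.0)  # prints: True True
```

If the sampler collapsed to a single trivial imputation, `std_p` would be zero and no uncertainty would be captured, which is the failure mode the paper's regularization is designed to avoid.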