Computationally Efficient Estimation of Multi-Dimensional Spectral Lines
In this work, we propose a computationally efficient algorithm for estimating multi-dimensional spectral lines. The method treats the data tensor's dimensions separately, yielding frequency estimates for each dimension. In a second step, the estimates are paired across dimensions, forming the resulting multi-dimensional parameter estimates. For high-dimensional data, the proposed method offers statistically efficient estimates at moderate to high signal-to-noise ratios, at a computational cost substantially lower than that of typical non-parametric Fourier-transform-based periodogram solutions, as well as that of state-of-the-art parametric estimators.
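As a rough illustration of the dimension-wise strategy this abstract describes, the sketch below estimates one frequency per tensor dimension from averaged 1-D periodograms and then pairs them; the signal sizes, frequency values, and peak-picking rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
f1, f2 = 0.12, 0.31           # assumed true normalized frequencies
n1, n2 = 64, 64
t1 = np.arange(n1)[:, None]
t2 = np.arange(n2)[None, :]
# 2-D spectral line in noise
x = np.exp(2j * np.pi * (f1 * t1 + f2 * t2)) + 0.1 * rng.standard_normal((n1, n2))

def peak_freq(y, axis):
    # 1-D periodogram along `axis`, averaged over the other dimension
    p = np.abs(np.fft.fft(y, axis=axis)) ** 2
    p = p.mean(axis=1 - axis)
    return np.argmax(p) / y.shape[axis]

# Step 1: per-dimension frequency estimates
f1_hat = peak_freq(x, axis=0)
f2_hat = peak_freq(x, axis=1)
# Step 2 (pairing the estimates across dimensions) is trivial with one line per dimension
```

With a single spectral line the pairing step is immediate; with several lines, that second step is where the per-dimension estimates must be matched into multi-dimensional tuples.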
A fast and accurate basis pursuit denoising algorithm with application to super-resolving tomographic SAR
L1 regularization is used for finding sparse solutions to an underdetermined linear system. As sparse signals are widely expected in remote sensing, this type of regularization scheme and its extensions have been widely employed in many remote sensing problems, such as image fusion, target detection, and image super-resolution, and have led to promising results. However, solving such sparse reconstruction problems is computationally expensive, which limits their practical use. In this paper, we propose a novel efficient algorithm for solving the complex-valued L1-regularized least squares problem. Taking high-dimensional tomographic synthetic aperture radar (TomoSAR) as a practical example, we carried out extensive experiments, with both simulated and real data, to demonstrate that the proposed approach retains the accuracy of second-order methods while speeding up the processing by one to two orders of magnitude. Although we have chosen TomoSAR as the example, the proposed method can be generally applied to any spectral estimation problem.
Comment: 11 pages, IEEE Transactions on Geoscience and Remote Sensing
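The complex-valued L1-regularized least-squares problem at the core of basis pursuit denoising can be sketched with a plain first-order solver (ISTA with complex soft-thresholding); the problem sizes and regularization weight below are illustrative assumptions, and this is not the paper's accelerated algorithm.

```python
import numpy as np

def soft(z, t):
    # complex soft-thresholding: shrink the magnitude, preserve the phase
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0)

def ista(A, y, lam, n_iter=500):
    # minimize 0.5*||A x - y||^2 + lam*||x||_1 over complex x
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        x = soft(x - (A.conj().T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = (rng.standard_normal((40, 100)) + 1j * rng.standard_normal((40, 100))) / np.sqrt(80)
x_true = np.zeros(100, dtype=complex)
x_true[[5, 42]] = [2 + 1j, -1.5]         # assumed 2-sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

Each ISTA iteration costs only two matrix-vector products, which is the kind of per-iteration cheapness that first-order sparse solvers trade against the faster convergence of second-order methods.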
Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods
Training neural networks is a challenging non-convex optimization problem,
and backpropagation or gradient descent can get stuck in spurious local optima.
We propose a novel algorithm based on tensor decomposition for guaranteed
training of two-layer neural networks. We provide risk bounds for our proposed
method, with a polynomial sample complexity in the relevant parameters, such as
input dimension and number of neurons. While learning arbitrary target
functions is NP-hard, we provide transparent conditions on the function and the
input for learnability. Our training method is based on tensor decomposition,
which provably converges to the global optimum, under a set of mild
non-degeneracy conditions. It consists of simple embarrassingly parallel linear
and multi-linear operations, and is competitive with standard stochastic
gradient descent (SGD), in terms of computational complexity. Thus, we propose
a computationally efficient method with guaranteed risk bounds for training
neural networks with one hidden layer.
Comment: The tensor decomposition analysis is expanded, and an analysis of ridge regression is added for recovering the parameters of the last layer of the neural network.
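The kind of tensor decomposition behind such guarantees can be illustrated with the tensor power method on a symmetric tensor with orthonormal components; this toy sketch is an assumed stand-in for the decomposition step only, not the paper's full training pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 8, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q[:, :k]                              # orthonormal true components a_i
lam = np.array([3.0, 2.0, 1.0])
# symmetric 3rd-order tensor T = sum_i lam_i * a_i (x) a_i (x) a_i
T = np.einsum('i,ai,bi,ci->abc', lam, A, A, A)

est = []
for _ in range(k):
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(100):                  # power iteration: v <- T(I, v, v) / norm
        v = np.einsum('abc,b,c->a', T, v, v)
        v /= np.linalg.norm(v)
    w = np.einsum('abc,a,b,c->', T, v, v, v)       # eigenvalue estimate
    est.append(v)
    T = T - w * np.einsum('a,b,c->abc', v, v, v)   # deflate the found component
est = np.stack(est, axis=1)
```

Under the orthogonality (non-degeneracy) assumption used here, each power iteration converges to one of the true components, and deflation peels them off one by one; the per-step operations are the simple multi-linear contractions the abstract refers to.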
An Efficient Approximation to the Likelihood for Gravitational Wave Stochastic Background Detection Using Pulsar Timing Data
Direct detection of gravitational waves by pulsar timing arrays will become
feasible over the next few years. In the low-frequency regime ( Hz --
Hz), we expect that a superposition of gravitational waves from many
sources will manifest itself as an isotropic stochastic gravitational wave
background. Currently, a number of techniques exist to detect such a signal;
however, many detection methods are computationally challenging. Here we
introduce an approximation to the full likelihood function for a pulsar timing
array that results in computational savings proportional to the square of the
number of pulsars in the array. Through a series of simulations we show that
the approximate likelihood function reproduces results obtained from the full
likelihood function. We further show, both analytically and through
simulations, that, on average, this approximate likelihood function gives
unbiased parameter estimates for astrophysically realistic stochastic
background amplitudes.
Comment: 10 pages, 3 figures
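The quoted savings can be sanity-checked with a back-of-the-envelope cost model (assumed here for illustration, not taken from the paper): a dense Gaussian likelihood over N_p pulsars with N_t observations each requires factoring an (N_p*N_t)-dimensional covariance at O((N_p*N_t)^3) cost, while a likelihood that decouples into per-pulsar blocks costs O(N_p*N_t^3), a ratio of N_p^2.

```python
# Assumed cost model: cubic cost of a Cholesky factorization,
# full joint covariance vs. per-pulsar block covariances.
N_p, N_t = 20, 500                # illustrative array size and data length
full_cost = (N_p * N_t) ** 3      # one (N_p*N_t)-dim factorization
approx_cost = N_p * N_t ** 3      # N_p separate N_t-dim factorizations
ratio = full_cost // approx_cost  # -> N_p**2 = 400
```

This is exactly the "savings proportional to the square of the number of pulsars" scaling claimed in the abstract.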