A weakly stable algorithm for general Toeplitz systems
We show that a fast algorithm for the QR factorization of a Toeplitz or
Hankel matrix A is weakly stable in the sense that R^T.R is close to A^T.A.
Thus, when the algorithm is used to solve the semi-normal equations R^T.Rx =
A^Tb, we obtain a weakly stable method for the solution of a nonsingular
Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the
solution of the full-rank Toeplitz or Hankel least squares problem.
Comment: 17 pages. An old Technical Report with postscript added. For further details, see http://wwwmaths.anu.edu.au/~brent/pub/pub143.htm
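The semi-normal-equations solve described above can be sketched in a few lines. This is only an illustration of the algebra (a dense QR stands in for the paper's fast Toeplitz QR; the test matrix and sizes are made up for the example):

```python
import numpy as np
from scipy.linalg import toeplitz

# Sketch: solve a nonsingular Toeplitz system Ax = b via the semi-normal
# equations R^T.R x = A^T b, where R is the triangular factor of A = QR.
# A dense QR is used here in place of the fast structured factorization.
rng = np.random.default_rng(0)
n = 8
A = toeplitz(rng.standard_normal(n), rng.standard_normal(n))
b = rng.standard_normal(n)

R = np.linalg.qr(A, mode="r")          # only R is needed; Q is never formed
x = np.linalg.solve(R.T @ R, A.T @ b)  # semi-normal equations R^T.R x = A^T b

assert np.allclose(A @ x, b)
```

Note that Q is never needed, which is exactly what makes the semi-normal-equations route attractive when only a fast way of computing R is available.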
Fast computation of the matrix exponential for a Toeplitz matrix
The computation of the matrix exponential is a ubiquitous operation in
numerical mathematics, and for a general, unstructured n-by-n matrix it
can be computed in O(n^3) operations. An interesting problem arises
if the input matrix is a Toeplitz matrix, for example as the result of
discretizing integral equations with a time invariant kernel. In this case it
is not obvious how to take advantage of the Toeplitz structure, as the
exponential of a Toeplitz matrix is, in general, not a Toeplitz matrix itself.
The main contributions of this work are fast algorithms for the computation of
the Toeplitz matrix exponential. The algorithms have provable quadratic
complexity if the spectrum is real, or sectorial, or more generally, if the
imaginary parts of the rightmost eigenvalues do not vary too much. They may be
efficient even outside these spectral constraints. They are based on the
scaling and squaring framework, and their analysis connects classical results
from rational approximation theory to matrices of low displacement rank. As an
example, the developed methods are applied to Merton's jump-diffusion model for
option pricing.
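The structural difficulty the abstract points to is easy to see numerically: the exponential of a Toeplitz matrix is generally not Toeplitz. The sketch below uses a dense O(n^3) baseline (`scipy.linalg.expm`), not the paper's fast displacement-rank methods, and the test matrix is an arbitrary example:

```python
import numpy as np
from scipy.linalg import expm, toeplitz

# Dense baseline for the matrix exponential of a Toeplitz matrix.
# Illustrates that exp(A) loses the Toeplitz structure of A.
A = toeplitz([2.0, 1.0, 0.5, 0.25], [2.0, -1.0, -0.5, -0.25])
E = expm(A)

def is_toeplitz(M, tol=1e-8):
    """True if every diagonal of M is constant (up to tol)."""
    n = M.shape[0]
    return all(
        abs(M[i + 1, j + 1] - M[i, j]) < tol
        for i in range(n - 1) for j in range(n - 1)
    )

assert is_toeplitz(A)
assert not is_toeplitz(E)   # structure is not preserved by the exponential
```

This is why the fast algorithms must work with a more flexible invariant, low displacement rank, rather than with the Toeplitz pattern itself.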
Learning detectors quickly using structured covariance matrices
The computer vision community is increasingly interested in the rapid
estimation of object detectors. Canonical hard negative mining strategies are slow as they
require multiple passes of the large negative training set. Recent work has
demonstrated that if the distribution of negative examples is assumed to be
stationary, then Linear Discriminant Analysis (LDA) can learn comparable
detectors without ever revisiting the negative set. Even with this insight,
however, the time to learn a single object detector can still be on the order
of tens of seconds on a modern desktop computer. This paper proposes to
leverage the resulting structured covariance matrix to obtain detectors with
identical performance in orders of magnitude less time and memory. We elucidate
an important connection to the correlation filter literature, demonstrating
that correlation filters can also be trained without ever revisiting the negative set.
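The LDA idea behind this line of work can be sketched compactly. Everything below is illustrative (made-up dimensions, a stand-in autocovariance): under the stationarity assumption the negative-set covariance is structured (Toeplitz), so it is built once and reused for every new detector via w = S^{-1}(mu_pos - mu_neg):

```python
import numpy as np
from scipy.linalg import toeplitz

# Sketch of stationary-LDA detector learning: the background covariance S
# is Toeplitz under stationarity, estimated once, and shared by all classes.
rng = np.random.default_rng(1)
d = 16
autocov = 0.9 ** np.arange(d)      # stand-in stationary autocovariance
S = toeplitz(autocov)              # structured covariance of negatives
S += 1e-3 * np.eye(d)              # regularization for a stable solve

mu_neg = np.zeros(d)               # stationary background mean
mu_pos = rng.standard_normal(d)    # mean feature of the positive class

w = np.linalg.solve(S, mu_pos - mu_neg)   # LDA detector weights

# The detector scores the positive mean above the background mean.
assert w @ mu_pos > w @ mu_neg
```

Learning a new detector then costs one structured solve rather than repeated passes over a large negative set.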
Rates of convergence for the approximation of dual shift-invariant systems in l^2(Z)
A shift-invariant system is a collection of functions {g_{m,n}} of the
form g_{m,n}(k) = g_m(k - an). Such systems play an important role in
time-frequency analysis and digital signal processing. A principal problem is
to find a dual system gamma_{m,n}(k) = gamma_m(k - an) such that each
function f can be written as f = sum_{m,n} <f, gamma_{m,n}> g_{m,n}. The
mathematical theory usually addresses this problem in infinite dimensions
(typically in L^2(R) or l^2(Z)), whereas numerical methods have to operate
with a finite-dimensional model. Exploiting the link between the frame operator
and Laurent operators with matrix-valued symbol, we apply the finite section
method to show that the dual functions obtained by solving a finite-dimensional
problem converge to the dual functions of the original infinite-dimensional
problem in l^2(Z). For compactly supported g_{m,n} (FIR filter banks) we
prove an exponential rate of convergence and derive explicit expressions for
the involved constants. Further, we investigate under which conditions one can
replace the discrete model of the finite section method by the periodic
discrete model, which is used in many numerical procedures. Again we provide
explicit estimates for the speed of convergence. Some remarks on tight frames
complete the paper.
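The finite-dimensional model that numerical methods actually work with has a simple linear-algebra core, sketched below. This is only the textbook finite-frame analogue (random example vectors, not the paper's finite section analysis): for a frame {g_j} of R^n with frame operator S = sum_j g_j g_j^T, the canonical duals are gamma_j = S^{-1} g_j, and reconstruction f = sum_j <f, gamma_j> g_j is exact:

```python
import numpy as np

# Canonical dual frame in finite dimensions: columns of G are the frame
# vectors g_j, S = G G^T is the frame operator, and the dual vectors are
# gamma_j = S^{-1} g_j (columns of Gamma).
rng = np.random.default_rng(2)
n, m = 6, 10                        # m >= n generic vectors form a frame of R^n
G = rng.standard_normal((n, m))     # frame vectors as columns
S = G @ G.T                         # frame operator (symmetric positive definite)
Gamma = np.linalg.solve(S, G)       # canonical dual vectors as columns

f = rng.standard_normal(n)
f_rec = G @ (Gamma.T @ f)           # sum_j <f, gamma_j> g_j

assert np.allclose(f, f_rec)        # exact reconstruction
```

The convergence question in the abstract is precisely how well the duals of such truncated (or periodized) models approximate the duals of the infinite-dimensional problem.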