A weakly stable algorithm for general Toeplitz systems
We show that a fast algorithm for the QR factorization of a Toeplitz or
Hankel matrix A is weakly stable in the sense that R^T.R is close to A^T.A.
Thus, when the algorithm is used to solve the semi-normal equations R^T.Rx =
A^Tb, we obtain a weakly stable method for the solution of a nonsingular
Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the
solution of the full-rank Toeplitz or Hankel least squares problem.

Comment: 17 pages. An old Technical Report with PostScript added. For further
details, see http://wwwmaths.anu.edu.au/~brent/pub/pub143.htm
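Purely as an illustration of the semi-normal-equations step described above (and not of the paper's fast Toeplitz-specific QR algorithm, which is replaced here by a dense QR), a small NumPy/SciPy sketch with made-up data:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_triangular

# Illustrative data only: a small nonsingular Toeplitz system A x = b.
rng = np.random.default_rng(0)
n = 8
col = rng.standard_normal(n)
row = np.concatenate(([col[0]], rng.standard_normal(n - 1)))
A = toeplitz(col, row)
b = rng.standard_normal(n)

# Obtain R with R^T.R close to A^T.A (dense QR here; the paper obtains R
# from a fast Toeplitz-specific factorization instead).
R = np.linalg.qr(A, mode="r")

# Semi-normal equations R^T.R x = A^T b, solved by two triangular solves.
y = solve_triangular(R.T, A.T @ b, lower=True)
x = solve_triangular(R, y)

print(np.allclose(A @ x, b))
```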
Learning detectors quickly using structured covariance matrices
The computer vision community is increasingly interested in the rapid
estimation of object detectors. Canonical hard-negative-mining strategies are
slow because they require multiple passes over the large negative training
set. Recent work has
demonstrated that if the distribution of negative examples is assumed to be
stationary, then Linear Discriminant Analysis (LDA) can learn comparable
detectors without ever revisiting the negative set. Even with this insight,
however, the time to learn a single object detector can still be on the order
of tens of seconds on a modern desktop computer. This paper proposes to
leverage the resulting structured covariance matrix to obtain detectors with
identical performance in orders of magnitude less time and memory. We elucidate
an important connection to the correlation filter literature, demonstrating
that correlation filters, too, can be trained without ever revisiting the negative set.
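A toy one-dimensional sketch of the stationary-covariance idea follows; the variable names, the scalar features, and the small ridge term are assumptions made for the demo and are not the paper's implementation:

```python
import numpy as np
from scipy.linalg import toeplitz, solve

rng = np.random.default_rng(0)
d = 32                                  # detector length (toy 1-D "template")

# Under the stationarity assumption, the negative set is summarised once by a
# mean and an autocovariance function; it is never revisited afterwards.
neg = rng.standard_normal(100_000)      # stand-in stationary negative signal
mu_neg = neg.mean()
acov = np.array([np.mean((neg[:neg.size - k] - mu_neg) * (neg[k:] - mu_neg))
                 for k in range(d)])

# Structured (Toeplitz) covariance assembled from the autocovariance,
# plus a small ridge purely for numerical conditioning.
S = toeplitz(acov) + 1e-3 * np.eye(d)

# LDA-style detector from a single positive template: w = S^{-1}(x_pos - mu_neg).
x_pos = rng.standard_normal(d) + 0.5    # stand-in positive example
w = solve(S, x_pos - mu_neg, assume_a="pos")
print(w.shape)                          # one detector, no hard negative mining
```

The Toeplitz structure above is just the 1-D analogue of the structured covariance that the paper exploits to cut the remaining time and memory costs.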
A fast algorithm for LR-2 factorization of Toeplitz matrices
In this paper a new order-recursive algorithm for the efficient QR^-1 factorization of Toeplitz matrices is described. The proposed algorithm can be seen as a fast modified Gram-Schmidt method which recursively computes the orthonormal columns q_i, i = 1, 2, ..., p, of Q, as well as the elements of R^-1, of a Toeplitz matrix with dimensions L × p. The Q factor estimation requires 8Lp MADS (multiplications and divisions). Matrix R^-1 is subsequently estimated using 3p^2 MADS. A faster algorithm, based on a mixed Q and R^-1 updating scheme, is also derived. It requires 7Lp + 3.5p^2 MADS. The algorithm can be efficiently applied to batch least squares FIR filtering and system identification. When determination of the optimal filter is the desired task, it can be utilized to compute the least squares filter in an order-recursive way. The algorithm operates directly on the experimental data, overcoming the need for covariance estimates. An orthogonalized version of the proposed QR^-1 algorithm is derived. Matlab code implementing the algorithm is also supplied
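For orientation only, the sketch below uses a plain modified Gram-Schmidt QR (costing O(Lp^2) operations rather than the 8Lp MADS of the fast order-recursive scheme) to show what the factorization is used for in batch least squares FIR filtering; the data and names are illustrative:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_triangular

def mgs_qr(X):
    """Plain modified Gram-Schmidt: orthonormal columns Q and upper-triangular R."""
    L, p = X.shape
    Q = X.astype(float).copy()
    R = np.zeros((p, p))
    for i in range(p):
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
        for j in range(i + 1, p):
            R[i, j] = Q[:, i] @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
    return Q, R

# Toy batch least squares FIR identification (illustrative data):
# X is the L x p Toeplitz data matrix built from the input signal u.
rng = np.random.default_rng(0)
L, p = 200, 8
u = rng.standard_normal(L + p - 1)
X = toeplitz(u[p - 1:], u[p - 1::-1])
h_true = rng.standard_normal(p)
d = X @ h_true + 0.01 * rng.standard_normal(L)

Q, R = mgs_qr(X)
h = solve_triangular(R, Q.T @ d)   # least squares filter via the action of R^-1
print(np.allclose(h, h_true, atol=0.05))
```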
Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections
Block projections have been used, in [Eberly et al. 2006], to obtain an
efficient algorithm to find solutions for sparse systems of linear equations. A
bound of softO(n^(2.5)) machine operations is obtained assuming that the input
matrix can be multiplied by a vector with constant-sized entries in softO(n)
machine operations. Unfortunately, the correctness of this algorithm depends on
the existence of efficient block projections, and this has been conjectured. In
this paper we establish the correctness of the algorithm from [Eberly et al.
2006] by proving the existence of efficient block projections over sufficiently
large fields. We demonstrate the usefulness of these projections by deriving
improved bounds for the cost of several matrix problems, considering, in
particular, "sparse" matrices that can be multiplied by a vector using
softO(n) field operations. We show how to compute the inverse of a sparse
matrix over a field F using an expected number of softO(n^(2.27)) operations in
F. A basis for the null space of a sparse matrix, and a certification of its
rank, are obtained at the same cost. An application to Kaltofen and Villard's
Baby-Steps/Giant-Steps algorithms for the determinant and Smith Form of an
integer matrix yields algorithms requiring softO(n^(2.66)) machine operations.
The derived algorithms are all probabilistic of the Las Vegas type.
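The algorithms above work over a field F and rely on the block projections whose existence the paper establishes; the sketch below illustrates only the underlying black-box access model (the matrix is touched solely through matrix-vector products) in ordinary floating point, with a diagonally dominant test matrix and GMRES chosen purely so the demo converges:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 500

# A sparse matrix whose matrix-vector product costs roughly softO(n) operations.
# (Made diagonally dominant only so that this demo's solver converges quickly.)
A = 10.0 * sp.eye(n, format="csr") + sp.random(n, n, density=3.0 / n,
                                               random_state=0, format="csr")

# Black-box access model: the solver only ever calls the matvec, never reads entries.
black_box = LinearOperator((n, n), matvec=lambda v: A @ v)

b = rng.standard_normal(n)
x, info = gmres(black_box, b)

print(info == 0, np.linalg.norm(A @ x - b) / np.linalg.norm(b) < 1e-4)
```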
- …