Fast Computation of Smith Forms of Sparse Matrices Over Local Rings
We present algorithms to compute the Smith Normal Form of matrices over two
families of local rings.
The algorithms use the \emph{black-box} model which is suitable for sparse
and structured matrices. The algorithms depend on a number of tools, such as
matrix rank computation over finite fields, for which the best-known time- and
memory-efficient algorithms are probabilistic.
For an $n \times n$ matrix over the ring $\mathbb{F}[z]/(f^e)$, where $f^e$ is a power of an
irreducible polynomial $f \in \mathbb{F}[z]$ of degree $d$, our algorithm requires
$O(\eta d e^2 n)$ operations in $\mathbb{F}$, where our black-box is assumed to
require $O(\eta)$ operations in $\mathbb{F}$ to compute a matrix-vector product by
a vector over $\mathbb{F}[z]/(f^e)$ (and $\eta$ is assumed greater than $den$). The
algorithm only requires additional storage for $O(den)$ elements of $\mathbb{F}$.
In particular, if $\eta = \tilde{O}(den)$, then our algorithm requires only
$\tilde{O}(n^2 d^2 e^3)$ operations in $\mathbb{F}$, which is an improvement on known dense
methods for small $d$ and $e$.
For the ring $\mathbb{Z}/p^e\mathbb{Z}$, where $p$ is a prime, we give an algorithm which
is time- and memory-efficient when the number of nontrivial invariant factors
is small. We describe a method for dimension reduction while preserving the
invariant factors. The time complexity is essentially linear in
$\mu n r e \log p$, where $\mu$ is the number of operations in $\mathbb{Z}/p\mathbb{Z}$ to evaluate the
black-box (assumed greater than $n$) and $r$ is the total number of non-zero
invariant factors.
To avoid the practical cost of conditioning, we give a Monte Carlo
certificate which, at low cost, provides either a high probability of success
or a proof of failure. The quest for a time- and memory-efficient solution
without restrictions on the number of nontrivial invariant factors remains
open. We offer a conjecture which may contribute toward that end.
Comment: Preliminary version to appear at ISSAC 201
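The black-box model used above can be made concrete with a small sketch. This is an illustrative example only, not the paper's algorithm: the class name `SparseBlackBox` and its interface are hypothetical, but the core idea matches the abstract, namely that the matrix is accessible only through matrix-vector products whose cost is proportional to the number $\eta$ of nonzero entries.

```python
# Illustrative sketch of the black-box model over Z/pZ: the matrix is
# reachable only through matrix-vector products, at a cost proportional
# to the number eta of nonzero entries. Names here are hypothetical.

class SparseBlackBox:
    """An n x n matrix over Z/pZ, stored as a dict {(i, j): value}."""

    def __init__(self, n, p, nonzeros):
        self.n, self.p, self.nonzeros = n, p, dict(nonzeros)

    def apply(self, v):
        """Return A*v mod p; cost is O(eta) ring operations."""
        out = [0] * self.n
        for (i, j), a in self.nonzeros.items():
            out[i] = (out[i] + a * v[j]) % self.p
        return out

# A 3x3 sparse matrix over Z/5Z with three nonzero entries.
A = SparseBlackBox(3, 5, {(0, 1): 2, (1, 2): 1, (2, 0): 3})
print(A.apply([1, 1, 1]))  # [2, 1, 3]
```

Algorithms in this model (e.g. Wiedemann-style rank computation) only ever call `apply`, which is what makes them suitable for sparse and structured matrices.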
An Incidence Geometry approach to Dictionary Learning
We study the Dictionary Learning (aka Sparse Coding) problem of obtaining a
sparse representation of data points, by learning \emph{dictionary vectors}
upon which the data points can be written as sparse linear combinations. We
view this problem from a geometry perspective as the spanning set of a subspace
arrangement, and focus on understanding the case when the underlying hypergraph
of the subspace arrangement is specified. For this Fitted Dictionary Learning
problem, we completely characterize the combinatorics of the associated
subspace arrangements (i.e.\ their underlying hypergraphs). Specifically, a
combinatorial rigidity-type theorem is proven for a type of geometric incidence
system. The theorem characterizes the hypergraphs of subspace arrangements that
generically yield (a) at least one dictionary (b) a locally unique dictionary
(i.e.\ at most a finite number of isolated dictionaries) of the specified size.
We are unaware of prior application of combinatorial rigidity techniques in the
setting of Dictionary Learning, or even in machine learning. We also provide a
systematic classification of problems related to Dictionary Learning together
with various algorithms, their assumptions, and their performance.
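The Fitted Dictionary Learning setting can be illustrated with a toy computation. The following sketch is not the paper's method; it only shows, under illustrative choices, what it means for a data point to be written as a sparse combination of dictionary vectors, with the chosen support playing the role of a hyperedge of the subspace arrangement.

```python
# Toy illustration (not the paper's algorithm): a data point in R^2 is
# written as a sparse combination of s = 2 of the 4 dictionary vectors.
# The support (which vectors are used) is a hyperedge of the underlying
# hypergraph of the subspace arrangement.

def solve2x2(d1, d2, x):
    """Coefficients (a, b) with a*d1 + b*d2 = x, via Cramer's rule."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    a = (x[0] * d2[1] - x[1] * d2[0]) / det
    b = (d1[0] * x[1] - d1[1] * x[0]) / det
    return a, b

dictionary = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, -1.0)]
x = (3.0, 2.0)
support = (0, 2)  # hyperedge: data point uses dictionary vectors 0 and 2
a, b = solve2x2(dictionary[support[0]], dictionary[support[1]], x)
print(a, b)  # 1.0 2.0, since x = 1*(1,0) + 2*(1,1)
```

The Fitted variant asks the reverse question: given only the supports (the hypergraph), when does a dictionary realizing them generically exist, and when is it locally unique?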
Sampling algebraic sets in local intrinsic coordinates
Numerical data structures for positive dimensional solution sets of
polynomial systems are sets of generic points cut out by random planes of
complementary dimension. We may represent the linear spaces defined by those
planes either by explicit linear equations or in parametric form. These
descriptions are respectively called extrinsic and intrinsic representations.
While intrinsic representations lower the cost of the linear algebra
operations, we observe worse condition numbers. In this paper we describe the
local adaptation of intrinsic coordinates to improve the numerical conditioning
of sampling algebraic sets. Local intrinsic coordinates also lead to a better
stepsize control. We illustrate our results with Maple experiments and
computations with PHCpack on some benchmark polynomial systems.
Comment: 13 pages, 2 figures, 2 algorithms, 2 tables
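The extrinsic/intrinsic distinction can be made concrete in the simplest case, a slicing line in the plane. The function below is purely illustrative (not PHCpack code): extrinsically the line is one linear equation $ax + by = c$; intrinsically it is a base point plus a direction, so sampling reduces to choosing a parameter value.

```python
# Illustrative sketch of extrinsic vs. intrinsic representations of a
# slicing line in the plane. Extrinsic: the equation a*x + b*y = c.
# Intrinsic: a parametric point p + t*v on the same line.

def to_intrinsic(a, b, c):
    """Convert a*x + b*y = c into a base point p and a direction v."""
    d = a * a + b * b
    p = (a * c / d, b * c / d)  # point on the line closest to the origin
    v = (-b, a)                 # direction vector along the line
    return p, v

p, v = to_intrinsic(1.0, 1.0, 2.0)   # the line x + y = 2
t = 3.0                              # one choice of intrinsic coordinate
x, y = p[0] + t * v[0], p[1] + t * v[1]
print(x + y)  # 2.0: the sample stays on the line
```

With $k$ intrinsic coordinates instead of $n$ ambient ones the linear algebra is cheaper, which is the trade-off the abstract describes: lower cost per operation, but potentially worse conditioning unless the coordinates are adapted locally.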
Numerical Schubert calculus
We develop numerical homotopy algorithms for solving systems of polynomial
equations arising from the classical Schubert calculus. These homotopies are
optimal in that generically no paths diverge. For problems defined by
hypersurface Schubert conditions we give two algorithms based on extrinsic
deformations of the Grassmannian: one is derived from a Gr\"obner basis for the
Pl\"ucker ideal of the Grassmannian and the other from a SAGBI basis for its
projective coordinate ring. The more general case of special Schubert
conditions is solved by delicate intrinsic deformations, called Pieri
homotopies, which first arose in the study of enumerative geometry over the
real numbers. Computational results are presented and applications to control
theory are discussed.
Comment: 24 pages, LaTeX 2e with 2 figures, uses epsf.sty
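The basic mechanism behind all such homotopy algorithms can be sketched in one variable. The following is a generic predictor-corrector tracker for $H(z,t) = (1-t)\,g(z) + t\,f(z)$, shown only as an illustration of path tracking; the Gröbner, SAGBI, and Pieri homotopies of the abstract use much more carefully constructed deformations.

```python
# Minimal sketch of homotopy continuation in one variable (illustrative,
# not the paper's Pieri/SAGBI homotopies): follow a known root of the
# start system g to a root of the target system f along
# H(z, t) = (1 - t)*g(z) + t*f(z), correcting with Newton's method.

def track(g, f, dg, df, z, steps=100, newton_iters=5):
    """Continue the root z of g to a root of f as t goes from 0 to 1."""
    for k in range(steps):
        t = (k + 1) / steps
        for _ in range(newton_iters):       # Newton corrector on H(., t)
            H = (1 - t) * g(z) + t * f(z)
            dH = (1 - t) * dg(z) + t * df(z)
            z -= H / dH
    return z

# Start system z^2 - 1 with known root z = 1; target system z^2 - 4.
root = track(lambda z: z * z - 1, lambda z: z * z - 4,
             lambda z: 2 * z, lambda z: 2 * z, 1.0)
print(root)  # approximately 2.0
```

"Optimal" in the abstract means the deformations are built so that, generically, every tracked path ends at a solution and none diverge.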
Slider-pinning Rigidity: a Maxwell-Laman-type Theorem
We define and study slider-pinning rigidity, giving a complete combinatorial
characterization. This is done via direction-slider networks, which are a
generalization of Whiteley's direction networks.
Comment: Accepted, to appear in Discrete and Computational Geometry
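For context, the classical Maxwell-Laman counting condition that such theorems generalize can be checked directly. The sketch below is illustrative only (and exponential in the number of vertices); the paper's characterization works with direction-slider networks rather than this bar-joint count.

```python
# Illustrative check of the classical Maxwell-Laman condition for generic
# minimal rigidity of a bar-joint framework in the plane: |E| = 2|V| - 3,
# and every vertex subset of size k >= 2 spans at most 2k - 3 edges.
# (Exponential-time sketch, not the paper's slider-pinning algorithm.)

from itertools import combinations

def is_laman(n, edges):
    """Return True iff the graph on n vertices satisfies the Laman counts."""
    if len(edges) != 2 * n - 3:
        return False
    for k in range(2, n + 1):
        for sub in combinations(range(n), k):
            s = set(sub)
            spanned = sum(1 for (u, v) in edges if u in s and v in s)
            if spanned > 2 * k - 3:
                return False
    return True

# A triangle is minimally rigid in the plane; a 4-cycle flexes.
print(is_laman(3, [(0, 1), (1, 2), (0, 2)]))          # True
print(is_laman(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # False
```

Slider-pinning rigidity replaces these pure edge counts with counts on direction-slider networks, which is what the combinatorial characterization in the paper captures.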