Riemannian simplices and triangulations
We study a natural intrinsic definition of geometric simplices in Riemannian
manifolds of arbitrary dimension, and exploit these simplices to obtain
criteria for triangulating compact Riemannian manifolds. These geometric
simplices are defined using Karcher means. Given a finite set of vertices in a
convex set on the manifold, the point that minimises the weighted sum of
squared distances to the vertices is the Karcher mean relative to the weights.
Using barycentric coordinates as the weights, we obtain a smooth map from the
standard Euclidean simplex to the manifold. A Riemannian simplex is defined as
the image of this barycentric coordinate map. In this work we articulate
criteria that guarantee that the barycentric coordinate map is a smooth
embedding. If it is not, we say the Riemannian simplex is degenerate. Quality
measures for the "thickness" or "fatness" of Euclidean simplices can be adapted
to apply to these Riemannian simplices. For manifolds of dimension 2, the
simplex is non-degenerate if it has a positive quality measure, as in the
Euclidean case. However, when the dimension is greater than two, non-degeneracy
can be guaranteed only when the quality exceeds a positive bound that depends
on the size of the simplex and local bounds on the absolute values of the
sectional curvatures of the manifold. An analysis of the geometry of
non-degenerate Riemannian simplices leads to conditions which guarantee that a
simplicial complex is homeomorphic to the manifold.
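The Karcher-mean construction above can be made concrete on the unit 2-sphere, where the exponential and logarithm maps have closed forms. This is a minimal sketch under our own naming (`sphere_log`, `sphere_exp`, `karcher_mean`); fixed-step gradient descent is just one simple way to find the minimiser, not the paper's method.

```python
import numpy as np

def sphere_log(p, q):
    # Log map at p on the unit sphere: tangent vector at p pointing
    # toward q, with length equal to the geodesic distance.
    v = q - np.dot(p, q) * p
    nv = np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return theta * v / nv if nv > 1e-12 else np.zeros_like(p)

def sphere_exp(p, v):
    # Exp map at p: follow the geodesic from p in tangent direction v.
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def karcher_mean(points, weights, iters=300, step=0.5):
    # Minimise the weighted sum of squared geodesic distances by the
    # fixed-point iteration x <- exp_x(step * sum_i w_i log_x(p_i)),
    # i.e. gradient descent on the Karcher energy.
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        g = sum(w * sphere_log(x, p) for w, p in zip(weights, points))
        x = sphere_exp(x, step * g)
    return x
```

Feeding barycentric coordinates in as the `weights` turns this minimiser into the barycentric coordinate map from the standard Euclidean simplex to the sphere.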
Density of Spherically-Embedded Stiefel and Grassmann Codes
The density of a code is the fraction of the coding space covered by packing
balls centered around the codewords. This paper investigates the density of
codes in the complex Stiefel and Grassmann manifolds equipped with the chordal
distance. The choice of distance enables the treatment of the manifolds as
subspaces of Euclidean hyperspheres. In this geometry, the densest packings are
not necessarily equivalent to maximum-minimum-distance codes. A code's
density is determined by two quantities: i) the normalized volume of a metric
ball and ii) the kissing radius, i.e., the radius of the largest balls one can pack
around the codewords without overlapping. First, the normalized volume of a
metric ball is evaluated by asymptotic approximations. The volume of a small
ball can be well-approximated by the volume of a locally-equivalent tangential
ball. In order to properly normalize this approximation, the precise volumes of
the manifolds induced by their spherical embedding are computed. For larger
balls, a hyperspherical cap approximation is used, which is justified by a
volume comparison theorem showing that the normalized volume of a ball in the
Stiefel or Grassmann manifold is asymptotically equal to the normalized volume
of a ball in its embedding sphere as the dimension grows to infinity. Then,
bounds on the kissing radius are derived alongside corresponding bounds on the
density. Unlike for spherical codes or codes in flat spaces, the kissing radius of
a Grassmann or Stiefel code cannot be exactly determined from its minimum
distance. It is nonetheless possible to derive bounds on density as functions
of the minimum distance. Stiefel and Grassmann codes have larger density than
their image spherical codes when dimensions tend to infinity. Finally, the
bounds on density lead to refinements of the standard Hamming bounds for
Stiefel and Grassmann codes.

Comment: Two-column version (24 pages, 6 figures, 4 tables). To appear in IEEE Transactions on Information Theory.
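The chordal distance underlying the spherical embedding can be illustrated directly. The sketch below (function names are ours) uses one common convention: the chordal distance between two subspaces is the scaled Frobenius distance between their projection matrices, which equals the Euclidean distance in the embedding space of Hermitian matrices.

```python
import numpy as np

def chordal_distance(A, B):
    # Chordal distance between the subspaces spanned by the orthonormal
    # columns of A and B, computed from their projection matrices.
    # Equals sqrt(sum of squared sines of the principal angles).
    PA = A @ A.conj().T
    PB = B @ B.conj().T
    return np.linalg.norm(PA - PB, 'fro') / np.sqrt(2)

def min_distance(code):
    # Minimum pairwise chordal distance of a Grassmann code,
    # given as a list of orthonormal bases.
    return min(chordal_distance(code[i], code[j])
               for i in range(len(code)) for j in range(i + 1, len(code)))
```

For one-dimensional subspaces (lines), this reduces to the sine of the angle between them, so two orthogonal lines are at the maximal chordal distance 1.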
Approximating Hereditary Discrepancy via Small Width Ellipsoids
The Discrepancy of a hypergraph is the minimum attainable value, over
two-colorings of its vertices, of the maximum absolute imbalance of any
hyperedge. The Hereditary Discrepancy of a hypergraph, defined as the maximum
discrepancy of a restriction of the hypergraph to a subset of its vertices, is
a measure of its complexity. Lovasz, Spencer and Vesztergombi (1986) related
the natural extension of this quantity to matrices to rounding algorithms for
linear programs, and gave a determinant based lower bound on the hereditary
discrepancy. Matousek (2011) showed that this bound is tight up to a
polylogarithmic factor, leaving open the question of actually computing this
bound. Recent work by Nikolov, Talwar and Zhang (2013) showed a polynomial-time
approximation to hereditary discrepancy, as a by-product
of their work in differential privacy. In this paper, we give a direct and simple
approximation algorithm for this problem. We show that up to
this approximation factor, the hereditary discrepancy of a matrix is
characterized by the optimal value of a simple geometric convex program that
seeks to minimize the largest norm of any point in an ellipsoid
containing the columns of the matrix. This characterization promises to be a useful
tool in discrepancy theory.
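The two definitions above can be made concrete by brute force, feasible only for tiny matrices since it enumerates all 2^n colourings and all column subsets; the function names below are ours.

```python
import itertools
import numpy as np

def discrepancy(A):
    # disc(A): minimum over +-1 colourings x of the columns of the
    # maximum absolute imbalance, i.e. the infinity norm of A x.
    n = A.shape[1]
    best = float('inf')
    for signs in itertools.product([-1, 1], repeat=n):
        best = min(best, np.max(np.abs(A @ np.array(signs))))
    return best

def hereditary_discrepancy(A):
    # herdisc(A): maximum of disc over all restrictions of A to a
    # nonempty subset of its columns (vertex subsets of the hypergraph).
    n = A.shape[1]
    best = 0
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            best = max(best, discrepancy(A[:, cols]))
    return best
```

Restricting to subsets matters: a matrix can have zero discrepancy overall while some column restriction is badly imbalanced, which is exactly what hereditary discrepancy captures.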
On choice of preconditioner for minimum residual methods for nonsymmetric matrices
Existing convergence bounds for Krylov subspace methods such as GMRES for nonsymmetric linear systems give little mathematical guidance for the choice of preconditioner. Here, we establish a desirable mathematical property of a preconditioner which guarantees that convergence of a minimum residual method will essentially depend only on the eigenvalues of the preconditioned system, as is true in the symmetric case. Our theory covers only a subset of nonsymmetric coefficient matrices, but computations indicate that it might be more generally applicable.
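A small experiment in the spirit of this discussion, using SciPy's GMRES with an incomplete-LU preconditioner; the ILU choice is purely a generic example of a preconditioner, not the property-satisfying preconditioner the paper is concerned with, and the test matrix is our own.

```python
import numpy as np
from scipy.sparse import csc_matrix, diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Nonsymmetric test system: a 1-D convection-diffusion-style
# tridiagonal matrix (unequal off-diagonals make it nonsymmetric).
n = 200
A = csc_matrix(diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

# Incomplete-LU factorisation, applied as M^{-1} v in each iteration
# through a LinearOperator wrapper.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

# Solve with and without the preconditioner; info == 0 on convergence.
x_plain, info_plain = gmres(A, b)
x_prec, info_prec = gmres(A, b, M=M)
```

Left preconditioning replaces the system Ax = b by M^{-1}Ax = M^{-1}b; the point of the paper's analysis is to identify when the eigenvalues of the preconditioned operator alone govern the minimum residual convergence.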
Robust Localization from Incomplete Local Information
We consider the problem of localizing wireless devices in an ad-hoc network
embedded in a d-dimensional Euclidean space. Obtaining a good estimation of
where wireless devices are located is crucial in wireless network applications
including environment monitoring, geographic routing and topology control. When
the positions of the devices are unknown and only local distance information is
given, we need to infer the positions from these local distance measurements.
This problem is particularly challenging when we only have access to
measurements that have limited accuracy and are incomplete. We consider the
extreme case of this limitation on the available information, namely only the
connectivity information is available, i.e., we only know whether a pair of
nodes is within a fixed detection range of each other or not, and no
information is known about how far apart they are. Further, to account for
detection failures, we assume that even if a pair of devices is within the
detection range, it fails to detect the presence of one another with some
probability and this probability of failure depends on how far apart those
devices are. Given this limited information, we investigate the performance of
a centralized positioning algorithm MDS-MAP introduced by Shang et al., and a
distributed positioning algorithm, introduced by Savarese et al., called
HOP-TERRAIN. In particular, for a network consisting of n devices positioned
randomly, we provide a bound on the resulting error for both algorithms. We
show that the error is bounded, decreasing at a rate that is proportional to
R/Rc, where Rc is the critical detection range when the resulting random
network starts to be connected, and R is the detection range of each device.

Comment: 40 pages, 13 figures.
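The core MDS-MAP idea (hop-count shortest paths as crude distance estimates, followed by classical multidimensional scaling) can be sketched as follows. This is a simplified sketch under our own naming, not the authors' implementation, and it ignores the probabilistic detection failures analysed in the paper.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def mds_map(adjacency, R, dim=2):
    # Estimate each pairwise distance as (hop count) * R, where R is the
    # detection range, then recover coordinates (up to a rigid motion)
    # with classical multidimensional scaling.
    H = shortest_path(adjacency, unweighted=True)   # hop counts
    D = (R * H) ** 2                                # squared distance estimates
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n             # centring matrix
    B = -0.5 * J @ D @ J                            # Gram matrix estimate
    vals, vecs = np.linalg.eigh(B)                  # eigenvalues ascending
    top = np.argsort(vals)[::-1][:dim]              # keep the dim largest
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

Since only connectivity is observed, hop counts systematically overestimate short distances and the recovered layout is only defined up to rotation, reflection, and translation; comparisons against ground truth are therefore made after an optimal rigid alignment.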