Average Interpolation Under the Maximum Angle Condition
Interpolation error estimates needed in common finite element applications
using simplicial meshes typically impose restrictions on both the
smoothness of the interpolated functions and the shape of the simplices. While
the simplest theory can be generalized to admit less smooth functions (e.g.,
functions in H^1(\Omega) rather than H^2(\Omega)) and more general shapes
(e.g., the maximum angle condition rather than the minimum angle condition),
existing theory does not allow these extensions to be performed simultaneously.
By localizing over a well-shaped auxiliary spatial partition, error estimates
are established under minimal function smoothness and mesh regularity. This
construction is especially important in two cases: L^p(\Omega) estimates for
data in W^{1,p}(\Omega) hold for meshes without any restrictions on simplex
shape, and W^{1,p}(\Omega) estimates for data in W^{2,p}(\Omega) hold under a
generalization of the maximum angle condition which previously required p>2 for
standard Lagrange interpolation.
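For reference (standard notation, not taken from this abstract), the classical Lagrange interpolation estimate that the paper relaxes assumes both full smoothness and shape regularity:

```latex
% Classical estimate for the Lagrange interpolant $I_h$ under the
% minimum angle condition (shape-regular simplices), $u \in W^{2,p}(\Omega)$:
\| u - I_h u \|_{L^p(\Omega)} + h \, | u - I_h u |_{W^{1,p}(\Omega)}
  \le C \, h^2 \, | u |_{W^{2,p}(\Omega)}
```

The contribution described above is that the two hypotheses, smoothness of $u$ and shape regularity of the simplices, can be weakened at the same time rather than one at a time.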
Operational one-to-one mapping between coherence and entanglement measures
We establish a general operational one-to-one mapping between coherence
measures and entanglement measures: Any entanglement measure of bipartite pure
states is the minimum of a suitable coherence measure over product bases. Any
coherence measure of pure states, with extension to mixed states by convex
roof, is the maximum entanglement generated by incoherent operations acting on
the system and an incoherent ancilla. Remarkably, the generalized CNOT gate is
the universal optimal incoherent operation. In this way, all convex-roof
coherence measures, including the coherence of formation, are endowed with
(additional) operational interpretations. By virtue of this connection, many
results on entanglement can be translated to the coherence setting, and vice
versa. As applications, we provide tight observable lower bounds for
generalized entanglement concurrence and coherence concurrence, which enable
experimentalists to quantify entanglement and coherence of the maximal
dimension in real experiments.
Comment: 14 pages, 1 figure, new results added, published in PR
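Schematically (notation assumed here for illustration, not drawn from the paper), the first direction of the mapping can be written as a minimization over product bases:

```latex
% E   = entanglement measure on bipartite pure states
% C_B = coherence measure evaluated with respect to a product basis B
E(|\psi\rangle_{AB}) \;=\; \min_{B\ \text{product basis}} \, C_B(|\psi\rangle_{AB})
```

The converse direction then recovers a coherence measure as the maximum entanglement that incoherent operations can generate between the system and an incoherent ancilla.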
Heegaard gradient and virtual fibers
We show that if a closed hyperbolic 3-manifold has infinitely many finite
covers of bounded Heegaard genus, then it is virtually fibered. This
generalizes a theorem of Lackenby, removing restrictions on the regularity of
the covers. Furthermore, we can replace the assumption that the
covers have bounded Heegaard genus with the weaker hypotheses that the Heegaard
splittings for the covers have Heegaard gradient zero, and also bounded width,
in the sense of Scharlemann-Thompson thin position for Heegaard splittings.Comment: Published by Geometry and Topology at
http://www.maths.warwick.ac.uk/gt/GTVol9/paper51.abs.htm
Faster Geometric Algorithms via Dynamic Determinant Computation
The computation of determinants or their signs is the core procedure in many
important geometric algorithms, such as convex hull, volume and point location.
As the dimension of the computation space grows, a higher percentage of the
total computation time is consumed by these computations. In this paper we
study the sequences of determinants that appear in geometric algorithms. The
computation of a single determinant is accelerated by using the information
from the previous computations in that sequence.
We propose two dynamic determinant algorithms with quadratic arithmetic
complexity when employed in convex hull and volume computations, and with
linear arithmetic complexity when used in point location problems. We implement
the proposed algorithms and perform an extensive experimental analysis. On one
hand, our analysis serves as a performance study of state-of-the-art
determinant algorithms and implementations. On the other hand, we demonstrate
the superiority of our methods over state-of-the-art implementations of
determinant and geometric algorithms. Our experimental results include 20x and
78x speed-ups in volume and point location computations in dimensions 6 and
11, respectively.
Comment: 29 pages, 8 figures, 3 tables
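The dynamic idea can be illustrated with a minimal sketch (an assumption-laden illustration, not the paper's implementation): by maintaining both A^{-1} and det(A), replacing a single column of A, as happens when a geometric pivot swaps one point, updates the determinant in O(n^2) via the matrix determinant lemma and a Sherman-Morrison rank-one inverse update, instead of O(n^3) from scratch.

```python
import numpy as np

class DynamicDeterminant:
    """Maintain det(A) and A^{-1} under single-column replacements.

    Replacing column j costs O(n^2) using the matrix determinant lemma
    plus a Sherman-Morrison rank-one update of the inverse, versus
    O(n^3) to recompute the determinant from scratch.
    """

    def __init__(self, A):
        self.A = np.array(A, dtype=float)
        self.inv = np.linalg.inv(self.A)
        self.det = float(np.linalg.det(self.A))

    def replace_column(self, j, v):
        v = np.asarray(v, dtype=float)
        w = self.inv @ v              # O(n^2) matrix-vector product
        factor = w[j]                 # det(A') = det(A) * (A^{-1} v)_j
        if abs(factor) < 1e-12:
            raise ValueError("update makes the matrix (nearly) singular")
        self.det *= factor
        # Sherman-Morrison: A' = A + (v - a_j) e_j^T, and
        # A^{-1}(v - a_j) = w - e_j, so subtract 1 from entry j.
        w[j] -= 1.0
        self.inv -= np.outer(w, self.inv[j]) / factor
        self.A[:, j] = v
        return self.det
```

A convex-hull or point-location loop would call `replace_column` once per pivot, so the per-step cost stays quadratic (or lower) in the dimension, which is the effect the experiments above quantify.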
On implicit-factorization constraint preconditioners
Recently Dollar and Wathen [14] proposed a class of incomplete factorizations for saddle-point problems, based upon earlier work by Schilders [40]. In this paper, we generalize this class of preconditioners and examine the spectral implications of our approach. Numerical tests indicate the efficacy of our preconditioners.
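For context (standard notation, not drawn from the cited papers), a constraint preconditioner for a saddle-point system reproduces the constraint blocks exactly and replaces only the (1,1) block:

```latex
\mathcal{K} =
\begin{pmatrix} H & A^{T} \\ A & 0 \end{pmatrix},
\qquad
\mathcal{P} =
\begin{pmatrix} G & A^{T} \\ A & 0 \end{pmatrix},
\qquad G \approx H
```

An implicit factorization builds $\mathcal{P}$ from factors that are cheap to invert rather than forming and factorizing $\mathcal{P}$ explicitly, which is what makes the approach practical at scale.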
Dynamic Factorization in Large-Scale Optimization
Mathematical Programming, 64, pp. 17-51.
Factorization of linear programming (LP) models enables a large portion of the LP tableau to be
represented implicitly and generated from the remaining explicit part. Dynamic factorization admits algebraic elements which change in dimension during the course of solution. A unifying mathematical framework for dynamic row factorization is presented with three algorithms which derive from different LP model row structures: generalized upper bound rows, pure network rows, and generalized network rows. Each of these structures is a generalization of its predecessors, and each corresponding algorithm exhibits just enough additional richness to accommodate the structure at hand within the
unified framework. Implementation and computational results are presented for a variety of real-world models. These results suggest that each of these algorithms is superior to the traditional, non-factorized approach, with the degree of improvement depending upon the size and quality of the row factorization identified
Analysis of large scale linear programming problems with embedded network structures: Detection and solution algorithms
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Linear programming (LP) models that contain a (substantial) network structure frequently
arise in many real-life applications. In this thesis, we investigate two main questions: i) how an embedded network structure can be detected, and ii) how the network structure can be exploited to create improved sparse simplex solution algorithms. In order to extract an embedded pure network structure from a general LP problem we
develop two new heuristics. The first heuristic is an alternative multi-stage generalised upper bounds (GUB) based approach which finds as many GUB subsets as possible. In order to identify a GUB subset two different approaches are introduced; the first is based on the notion of Markowitz merit count and the second exploits an independent set in the corresponding graph. The second heuristic is based on the generalised signed graph of the coefficient matrix. This heuristic determines whether the given LP problem is an entirely pure network; this is in contrast to all previously known heuristics. Using generalised signed graphs, we prove that the problem of detecting the maximum size embedded network structure within an LP problem is NP-hard. The two detection
algorithms perform very well computationally and make positive contributions to the
known body of results for the embedded network detection. For computational solution
a decomposition based approach is presented which solves a network problem with side constraints. In this approach, the original coefficient matrix is partitioned into the network and the non-network parts. For the partitioned problem, we investigate two alternative decomposition techniques namely, Lagrangean relaxation and Benders decomposition. Active variables identified by these procedures are then used to create
an advanced basis for the original problem. The computational results of applying these techniques to a selection of Netlib models are encouraging. The development and computational investigation of this solution algorithm constitute further contribution
made by the research reported in this thesis.
This study was funded by the Turkish Educational Council and Mugla University.