Regular polynomial interpolation and approximation of global solutions of linear partial differential equations
We consider regular polynomial interpolation algorithms on recursively
defined sets of interpolation points which approximate global solutions of
arbitrary well-posed systems of linear partial differential equations.
Convergence of the 'limit' of the recursively constructed family of
polynomials to the solution and error estimates are obtained from a priori
estimates for some standard classes of linear partial differential equations,
e.g. elliptic and hyperbolic equations. Another variation of the algorithm
allows us to construct polynomial interpolations which preserve systems of linear
partial differential equations at the interpolation points. We show how this
can be applied in order to compute higher order terms of WKB-approximations of
fundamental solutions of a large class of linear parabolic equations. The error
estimates are sensitive to the regularity of the solution. Our method is
compatible with recent developments for the solution of higher-dimensional partial
differential equations, e.g. (adaptive) sparse grids and weighted Monte Carlo methods,
and has obvious applications to mathematical finance and physics.
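The equation-preserving variation can be illustrated with a minimal one-dimensional collocation sketch. This is not the paper's recursive construction; the function name, test problem, and polynomial degree are invented for the example. The idea shown is only that a polynomial can be chosen so that a linear differential equation holds exactly at the interpolation points:

```python
import numpy as np

# Illustrative 1-D analogue (not the paper's recursive scheme): find a
# polynomial u(x) = sum_j c_j x^j that satisfies the linear ODE
# u''(x) = f(x) exactly at a set of collocation points, plus boundary
# conditions, i.e. the interpolant "preserves" the equation there.

def collocation_poly(f, a, b, ua, ub, degree, n_points):
    xs = np.linspace(a, b, n_points)           # collocation points
    rows, rhs = [], []
    for x in xs:                               # enforce u''(x) = f(x)
        row = [j * (j - 1) * x ** (j - 2) if j >= 2 else 0.0
               for j in range(degree + 1)]
        rows.append(row)
        rhs.append(f(x))
    for x, u in ((a, ua), (b, ub)):            # boundary conditions
        rows.append([x ** j for j in range(degree + 1)])
        rhs.append(u)
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.polynomial.Polynomial(coeffs)

# u'' = -sin(x) with u(0) = u(pi) = 0 has exact solution u(x) = sin(x)
u = collocation_poly(lambda x: -np.sin(x), 0.0, np.pi, 0.0, 0.0,
                     degree=10, n_points=9)
print(abs(u(np.pi / 2) - 1.0))  # small approximation error
```

Raising the degree and the number of collocation points drives the error down as long as the solution is smooth, mirroring the abstract's remark that the error estimates are sensitive to the regularity of the solution.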
Schur complement preconditioners for surface integral-equation formulations of dielectric problems solved with the multilevel fast multipole algorithm
Surface integral-equation methods accelerated with the multilevel fast multipole algorithm (MLFMA) provide a suitable mechanism for electromagnetic analysis of real-life dielectric problems. Unlike the perfect-electric-conductor case, discretizations of surface formulations of dielectric problems yield 2 x 2 partitioned linear systems. Among various surface formulations, the combined tangential formulation (CTF) is the closest to the category of first-kind integral equations, and hence it yields the most accurate results, particularly when the dielectric constant is high and/or the dielectric problem involves sharp edges and corners. However, matrix equations of CTF are highly ill-conditioned, and their iterative solutions require powerful preconditioners for convergence. Second-kind surface integral-equation formulations yield better-conditioned systems, but their conditioning degrades significantly when real-life problems involve high dielectric constants. In this paper, for the first time in the context of surface integral-equation methods for dielectric objects, we propose Schur complement preconditioners to increase their robustness and efficiency. First, we approximate the dense system matrix by a sparse near-field matrix, which is formed naturally by MLFMA. The Schur complement preconditioning requires approximate solutions of systems involving the (1,1) partition and the Schur complement. We approximate the inverse of the (1,1) partition with a sparse approximate inverse (SAI) based on Frobenius-norm minimization. For the Schur complement, we first approximate it via incomplete sparse matrix-matrix multiplications, and then we generate its approximate inverse with the same SAI technique. Numerical experiments on sphere, lens, and photonic crystal problems demonstrate the effectiveness of the proposed preconditioners.
In particular, the results for the photonic crystal problem, which has both a surface singularity and a high dielectric constant, show that accurate CTF solutions for such problems can be obtained even faster than with second-kind integral-equation formulations, thanks to the acceleration provided by the proposed Schur complement preconditioners.
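The block structure of a Schur complement preconditioner can be sketched on a toy dense system. The diagonal inverse below merely stands in for the Frobenius-norm SAI, the Schur complement is inverted exactly rather than approximately, and all names, sizes, and data are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
# Toy 2x2 partitioned system K = [[A, B], [C, D]], standing in for the
# discretized CTF system (sizes and entries are illustrative only).
A = 4 * np.eye(n) + 0.2 * rng.standard_normal((n, n))
D = 4 * np.eye(n) + 0.2 * rng.standard_normal((n, n))
B = 0.1 * rng.standard_normal((n, n))
C = 0.1 * rng.standard_normal((n, n))
K = np.block([[A, B], [C, D]])

# Schur complement preconditioner: approximate inv(A) (here by the crude
# diagonal inverse, playing the role of the SAI), form the approximate
# Schur complement S = D - C*Ainv*B, and apply a block LU solve.
Ainv = np.diag(1.0 / np.diag(A))
S = D - C @ Ainv @ B
Sinv = np.linalg.inv(S)   # in practice this inverse is itself approximated

def apply_prec(r):
    r1, r2 = r[:n], r[n:]
    y2 = Sinv @ (r2 - C @ (Ainv @ r1))   # solve the Schur complement block
    y1 = Ainv @ (r1 - B @ y2)            # back-substitute for the (1,1) block
    return np.concatenate([y1, y2])

b = rng.standard_normal(2 * n)
x = np.zeros(2 * n)
res0 = np.linalg.norm(b)
for _ in range(30):                      # preconditioned Richardson iteration
    x = x + apply_prec(b - K @ x)
print(np.linalg.norm(b - K @ x) / res0)  # residual drops by many orders
```

In practice the same apply_prec operator would be handed to a Krylov solver such as GMRES; the fixed-point iteration is used here only to show that the preconditioned system is close to the identity.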
A Fast Active Set Block Coordinate Descent Algorithm for ℓ1-regularized least squares
The problem of finding sparse solutions to underdetermined systems of linear
equations arises in several applications (e.g. signal and image processing,
compressive sensing, statistical inference). A standard tool for dealing with
sparse recovery is the ℓ1-regularized least-squares approach, which has
recently been attracting the attention of many researchers. In this paper, we
describe an active set estimate (i.e. an estimate of the indices of the zero
variables in the optimal solution) for the considered problem that tries to
quickly identify as many active variables as possible at a given point, while
guaranteeing that some approximate optimality conditions are satisfied. A
relevant feature of the estimate is that it gives a significant reduction of
the objective function when setting to zero all those variables estimated
active. This makes it easy to embed the estimate into a given globally convergent
algorithmic framework. In particular, we include our estimate in a block
coordinate descent algorithm for ℓ1-regularized least squares, analyze
the convergence properties of this new active set method, and prove that its
basic version converges with linear rate. Finally, we report some numerical
results showing the effectiveness of the approach.
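A compact cyclic coordinate descent for ℓ1-regularized least squares, with a crude zero-variable test, can sketch the idea. The 0.9*lam gradient test below is a stand-in, not the paper's active-set estimate, and all names and data are invented for the example:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the prox of t*|.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(A, b, lam, iters=200):
    """Cyclic coordinate descent for min 0.5*||Ax-b||^2 + lam*||x||_1.
    Coordinates currently at zero whose partial gradient is safely below
    lam are skipped as estimated zeros; since |g| < lam implies the exact
    update returns zero anyway, the skip is lossless here."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)
    r = b - A @ x
    for _ in range(iters):
        for j in range(n):
            g = A[:, j] @ r                  # negative partial gradient
            if x[j] == 0.0 and abs(g) <= 0.9 * lam:
                continue                     # estimated zero variable
            x_new = soft(x[j] + g / col_sq[j], lam / col_sq[j])
            r += A[:, j] * (x[j] - x_new)    # incremental residual update
            x[j] = x_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[:5] = [3.0, -2.0, 4.0, 1.5, -3.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x = cd_lasso(A, b, lam=0.5)
print(np.count_nonzero(x))  # a sparse solution near the 5 true nonzeros
```

Each coordinate update exactly minimizes the objective along that coordinate, so the objective is monotonically nonincreasing, which is the property that lets such an estimate be embedded in a globally convergent framework.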
Parallel Newton Method for High-Speed Viscous Separated Flowfields. G.U. Aero Report 9210
This paper presents a new technique for parallelizing Newton's method for locally
conical, approximate laminar Navier-Stokes solutions on a distributed-memory parallel
computer. The method uses Newton's method for nonlinear systems of equations to find
steady-state solutions. The parallelization is based on a parallel iterative solver for large
sparse non-symmetric linear systems. Distributed storage of the matrix data
induces the corresponding geometric domain decomposition. The large sparse Jacobian
matrix is then generated distributively in each subdomain. Since the numerical algorithms
on the global domain are unchanged, the convergence and the accuracy of the original
sequential scheme are maintained, and no inner boundary conditions are needed.
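The outer Newton loop described above can be sketched serially. The paper's contribution is distributing the sparse Jacobian and the inner linear solve; both are kept serial and dense here for clarity, and the example system is invented:

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=20):
    """Newton's method for a nonlinear system F(x) = 0.
    The inner linear solve J(x) dx = -F(x) is the step that the paper
    replaces with a parallel iterative solver over a domain decomposition."""
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(J(x), -Fx)  # dense stand-in for the sparse solve
        x += dx
    return x

# Example: a tiny nonlinear system with an analytic Jacobian
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1] ** 2 + 1.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])
root = newton(F, J, np.array([1.0, 1.0]))
print(root, np.linalg.norm(F(root)))
```

Because only the storage of the Jacobian and the linear solve are distributed, the iterates of this loop are bitwise the same as in the sequential scheme, which is why convergence and accuracy carry over unchanged.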
Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations
We show how to solve directed Laplacian systems in nearly-linear time. Given
a linear system in an n x n Eulerian directed Laplacian with m nonzero
entries, we show how to compute an ε-approximate solution in time nearly
linear in m. Through reductions from [Cohen et al.
FOCS'16], this gives the first nearly-linear time algorithms for computing
ε-approximate solutions to row or column diagonally dominant linear
systems (including arbitrary directed Laplacians) and computing
ε-approximations to various properties of random walks on directed
graphs, including stationary distributions, personalized PageRank vectors,
hitting times, and escape probabilities. These bounds improve upon the recent
almost-linear time algorithms of [Cohen et al. STOC'17] for solving Eulerian
Laplacian systems.
To achieve our results, we provide a structural result that we believe is of
independent interest. We show that the Laplacians of all strongly connected
directed graphs have sparse approximate LU-factorizations. That is, for every
such directed Laplacian, there are a lower triangular matrix L and an upper
triangular matrix U, each with a nearly-linear number of nonzero entries, such
that their product LU spectrally approximates the Laplacian in an appropriate
norm. This claim can be viewed as an analogue of recent work on sparse Cholesky
factorizations of Laplacians of undirected graphs. We show how to construct
such factorizations in nearly-linear time and prove that, once constructed,
they yield nearly-linear time algorithms for solving directed Laplacian
systems. Appeared in FOCS 201
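The objects involved can be sketched concretely. The construction below is dense and the solve is cubic-time, purely to illustrate what an Eulerian directed Laplacian and its system look like; the paper's point is doing this in nearly-linear time for sparse graphs. The graph and right-hand side are invented for the example:

```python
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n):                        # directed cycle: Eulerian by design
    A[i, (i + 1) % n] = 1.0
A[0, 4] += 1.0                            # an extra 2-cycle between nodes 0
A[4, 0] += 1.0                            # and 4 keeps in-degree = out-degree

# Directed Laplacian L = D_out - A^T; Eulerian means both L @ 1 = 0
# (in-degree equals out-degree) and 1^T L = 0.
L = np.diag(A.sum(axis=1)) - A.T
ones = np.ones(n)
print(np.allclose(L @ ones, 0), np.allclose(ones @ L, 0))

b = np.zeros(n)
b[0], b[3] = 1.0, -1.0                    # zero-sum right-hand side
x, *_ = np.linalg.lstsq(L, b, rcond=None) # min-norm solution of L x = b
print(np.linalg.norm(L @ x - b))          # ~0: b lies in range(L)
```

Because the graph is strongly connected, L has rank n - 1 with kernel spanned by the all-ones vector, so any zero-sum right-hand side is solvable; this is the setting in which the sparse approximate LU-factorizations apply.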
On the hardness of learning sparse parities
This work investigates the hardness of computing sparse solutions to systems
of linear equations over F_2. Consider the k-EvenSet problem: given a
homogeneous system of linear equations over F_2 on n variables, decide if there
exists a nonzero solution of Hamming weight at most k (i.e. a k-sparse
solution). While there is a simple O(n^{k/2})-time algorithm for it,
establishing fixed parameter intractability for k-EvenSet has been a notorious
open problem. Towards this goal, we show that unless k-Clique can be solved in
n^{o(k)} time, k-EvenSet has no poly(n)2^{o(sqrt{k})} time algorithm and no
polynomial time algorithm when k = (log n)^{2+eta} for any eta > 0.
Our work also shows that the non-homogeneous generalization of the problem --
which we call k-VectorSum -- is W[1]-hard on instances where the number of
equations is O(k log n), improving on previous reductions which produced
Omega(n) equations. We also show that for any constant eps > 0, given a system
of O(exp(O(k))log n) linear equations, it is W[1]-hard to decide if there is a
k-sparse linear form satisfying all the equations or if every function on at
most k-variables (k-junta) satisfies at most (1/2 + eps)-fraction of the
equations. In the setting of computational learning, this shows hardness of
approximate non-proper learning of k-parities. In a similar vein, we use the
hardness of k-EvenSet to show that for any constant d, unless k-Clique can
be solved in n^{o(k)} time, there is no poly(m, n)2^{o(sqrt{k})} time algorithm
to decide whether a given set of m points in F_2^n satisfies: (i) there exists
a non-trivial k-sparse homogeneous linear form evaluating to 0 on all the
points, or (ii) any non-trivial degree d polynomial P supported on at most k
variables evaluates to zero on approximately a Pr_{z ~ F_2^n}[P(z) = 0]
fraction of the points, i.e., P is fooled by the set of points.
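A brute-force reference implementation of k-EvenSet makes the parameterization concrete. It enumerates all supports of size at most k, so it runs in roughly n^k time; the point of the hardness results above is that nothing dramatically better than the known n^{k/2} algorithm is expected. The function name and example matrix are invented for the sketch:

```python
import numpy as np
from itertools import combinations

def has_k_sparse_solution(A, k):
    """Decide whether the homogeneous system A x = 0 over F_2 has a
    nonzero solution of Hamming weight at most k, by brute force over
    all candidate supports (each nonzero coordinate is 1 over F_2)."""
    n = A.shape[1]
    for w in range(1, k + 1):
        for support in combinations(range(n), w):
            x = np.zeros(n, dtype=int)
            x[list(support)] = 1
            if not (A @ x % 2).any():     # all equations satisfied mod 2
                return True
    return False

# Example: columns 0 and 2 are equal, so e0 + e2 is a 2-sparse solution,
# while no single column is zero, so there is no 1-sparse solution.
A = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])
print(has_k_sparse_solution(A, 1), has_k_sparse_solution(A, 2))
```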