Partially Penalized Immersed Finite Element Methods for Elliptic Interface Problems
This article presents new immersed finite element (IFE) methods for solving
the popular second-order elliptic interface problems on structured Cartesian
meshes, even when the involved interfaces have nontrivial geometries. These IFE
methods contain extra stabilization terms, introduced only at interface edges,
that penalize the discontinuity in IFE functions. With the enhanced stability
due to the added penalty, not only can these IFE methods be proven to have the
optimal convergence rate in the H1-norm provided that the exact solution has
sufficient regularity, but numerical results also indicate that their
convergence rates in both the H1-norm and the L2-norm do not deteriorate as the
mesh is refined, a shortcoming of the classic IFE methods in some situations.
Trace inequalities are established for both linear and bilinear IFE functions;
these inequalities are not only critical for the error analysis of the new IFE
methods but also have great potential to be useful in the error analysis of
other IFE methods.
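As a rough sketch of the structure typical of such partially penalized schemes
(the interface edge set, the penalty parameter sigma_e, and the symmetrization
parameter epsilon below are notation introduced here for illustration and need
not match the article), the discrete bilinear form adds edge terms only on
interface edges:

a_h(u_h, v_h) = \sum_{T \in \mathcal{T}_h} \int_T \beta \nabla u_h \cdot \nabla v_h \, dx
              - \sum_{e \in \mathcal{E}_h^{i}} \int_e \{\beta \nabla u_h \cdot \mathbf{n}_e\} [v_h] \, ds
              + \epsilon \sum_{e \in \mathcal{E}_h^{i}} \int_e \{\beta \nabla v_h \cdot \mathbf{n}_e\} [u_h] \, ds
              + \sum_{e \in \mathcal{E}_h^{i}} \frac{\sigma_e}{|e|} \int_e [u_h] [v_h] \, ds,

where \mathcal{E}_h^{i} is the set of interface edges, [.] and {.} denote the
jump and average across an edge, and the last sum is the penalty that controls
the discontinuity of IFE functions at interface edges.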
Completing Low-Rank Matrices with Corrupted Samples from Few Coefficients in General Basis
Subspace recovery from corrupted and missing data is crucial for various
applications in signal processing and information theory. To complete missing
values and detect column corruptions, existing robust Matrix Completion (MC)
methods mostly concentrate on recovering a low-rank matrix from a few corrupted
coefficients w.r.t. the standard basis, which, however, does not apply to more
general bases, e.g., the Fourier basis. In this paper, we prove that the range
space of a low-rank matrix can be exactly recovered from a few coefficients
w.r.t. a general basis, even when the rank and the number of corrupted samples
are both high. Our model covers previous ones as special cases, and robust MC
can recover the intrinsic matrix with a higher rank. Moreover, we suggest a
universal choice of the regularization parameter. By our
filtering algorithm, which has theoretical guarantees, we can
further reduce the computational cost of our model. As an application, we also
find that the solutions to extended robust Low-Rank Representation and to our
extended robust MC are mutually expressible, so both our theory and algorithm
can be applied to the subspace clustering problem with missing values under
certain conditions. Experiments verify our theories.
Comment: To appear in IEEE Transactions on Information Theory
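As a loose illustration only (the paper's general-basis sampling, exact-recovery
guarantees, and filtering algorithm are not reproduced here), a robust MC model
of this flavor, nuclear norm plus a column-wise l2,1 penalty subject to
agreement on the observed entries, can be prototyped with CVXPY; the problem
sizes, sampling mask, and regularization weight lam below are illustrative
assumptions.

import numpy as np
import cvxpy as cp

# Toy data: a low-rank matrix plus a few fully corrupted columns,
# observed on a random subset of entries (standard basis only).
rng = np.random.default_rng(0)
m, n, r = 30, 30, 2
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((m, n))
S_true[:, :3] = rng.standard_normal((m, 3))        # 3 corrupted columns
M = L_true + S_true
mask = (rng.random((m, n)) < 0.6).astype(float)    # observed-entry indicator

lam = 1.0 / np.sqrt(np.log(n))                     # placeholder weight, not necessarily the paper's choice
L = cp.Variable((m, n))                            # low-rank part
S = cp.Variable((m, n))                            # column-sparse corruption part
objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.norm(S, 2, axis=0)))
constraints = [cp.multiply(mask, L + S) == cp.multiply(mask, M)]
cp.Problem(objective, constraints).solve()

print("recovered rank:", np.linalg.matrix_rank(L.value, tol=1e-4))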
