The Gauss-Bonnet-Chern mass of conformally flat manifolds
In this paper we show positive mass theorems and Penrose-type inequalities
for the Gauss-Bonnet-Chern mass, which was introduced recently in \cite{GWW},
for asymptotically flat conformally flat (CF) manifolds, together with a rigidity result.
Comment: 17 pages, references added, the statement of Prop. 4.6 corrected
Hyperbolic Alexandrov-Fenchel quermassintegral inequalities I
In this paper we prove the following geometric inequality in the hyperbolic
space $\mathbb{H}^n$, which is a hyperbolic Alexandrov-Fenchel inequality:
\[
\int_\Sigma \sigma_4 \, d\mu \ge C_{n-1}^4\,\omega_{n-1}\left\{\left(\frac{|\Sigma|}{\omega_{n-1}}\right)^{\frac{1}{2}} + \left(\frac{|\Sigma|}{\omega_{n-1}}\right)^{\frac{1}{2}\frac{n-5}{n-1}}\right\}^2,
\]
provided that $\Sigma$ is a horospherically convex hypersurface. Equality holds
if and only if $\Sigma$ is a geodesic sphere in $\mathbb{H}^n$.
Comment: 18 pages
A new mass for asymptotically flat manifolds
In this paper we introduce a mass for asymptotically flat manifolds by using
the Gauss-Bonnet curvature. We first prove that the mass is well-defined and is
a geometric invariant, provided that the Gauss-Bonnet curvature is integrable and
the decay order satisfies a suitable lower bound. Then we show a positive
mass theorem for asymptotically flat graphs. Moreover, we
also obtain Penrose-type inequalities in this case.
Comment: 32 pages. arXiv:1211.7305 was integrated into this new version as an application
Proving Expected Sensitivity of Probabilistic Programs with Randomized Variable-Dependent Termination Time
The notion of program sensitivity (aka Lipschitz continuity) specifies that
changes in the program input result in proportional changes to the program
output. For probabilistic programs the notion is naturally extended to expected
sensitivity. A previous approach develops a relational program logic framework
for proving expected sensitivity of probabilistic while loops, where the number
of iterations is fixed and bounded. In this work, we consider probabilistic
while loops where the number of iterations is not fixed, but randomized and
depends on the initial input values. We present a sound approach for proving
expected sensitivity of such programs. Our sound approach is martingale-based
and can be automated through existing martingale-synthesis algorithms.
Furthermore, our approach is compositional for sequential composition of while
loops under a mild side condition. We demonstrate the effectiveness of our
approach on several classical examples, including Gambler's Ruin, stochastic hybrid
systems, and stochastic gradient descent. We also present experimental results
showing that our automated approach can handle various probabilistic programs
from the literature.
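As a small illustrative sketch of the setting described above (my own construction, not the paper's formalism), the program below is a probabilistic while loop whose number of iterations is randomized and depends on the initial input: Gambler's Ruin with a fair coin. For the fair coin the stake is a martingale, so the expected output equals the initial stake, and a unit perturbation of the input shifts the expected output by about one, i.e. the program is expected-sensitive with constant roughly 1. The function names and parameters are illustrative choices.

```python
import random

def gamblers_ruin(x, p=0.5, target=10):
    """Probabilistic while loop: a +/-1 random walk on {0, ..., target}.
    The number of iterations is randomized and depends on the input x."""
    while 0 < x < target:
        x += 1 if random.random() < p else -1
    return x

def expected_output(x, trials=20000):
    """Monte Carlo estimate of E[output | initial stake x]."""
    return sum(gamblers_ruin(x) for _ in range(trials)) / trials
```

The martingale structure of the loop body (the expectation of the stake is preserved at every iteration) is exactly the kind of certificate that martingale-synthesis algorithms look for.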
Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution
Recent years have seen a flurry of activities in designing provably efficient
nonconvex procedures for solving statistical estimation problems. Due to the
highly nonconvex nature of the empirical loss, state-of-the-art procedures
often require proper regularization (e.g. trimming, regularized cost,
projection) in order to guarantee fast convergence. For vanilla procedures such
as gradient descent, however, prior theory either recommends highly
conservative learning rates to avoid overshooting, or completely lacks
performance guarantees.
This paper uncovers a striking phenomenon in nonconvex optimization: even in
the absence of explicit regularization, gradient descent enforces proper
regularization implicitly under various statistical models. In fact, gradient
descent follows a trajectory staying within a basin that enjoys nice geometry,
consisting of points incoherent with the sampling mechanism. This "implicit
regularization" feature allows gradient descent to proceed in a far more
aggressive fashion without overshooting, which in turn results in substantial
computational savings. Focusing on three fundamental statistical estimation
problems, i.e. phase retrieval, low-rank matrix completion, and blind
deconvolution, we establish that gradient descent achieves near-optimal
statistical and computational guarantees without explicit regularization. In
particular, by marrying statistical modeling with generic optimization theory,
we develop a general recipe for analyzing the trajectories of iterative
algorithms via a leave-one-out perturbation argument. As a byproduct, for noisy
matrix completion, we demonstrate that gradient descent achieves near-optimal
error control, measured both entrywise and in the spectral norm, which might
be of independent interest.
Comment: accepted to Foundations of Computational Mathematics (FOCM)
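As a hedged sketch of the vanilla procedure the abstract describes (dimensions, step size, and iteration count are my own illustrative choices, not the paper's), the code below runs plain gradient descent on a real-valued phase retrieval instance, with a spectral initialization and no trimming, projection, or other explicit regularization, and converges to the ground truth up to global sign:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 300                       # signal dimension, number of measurements
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)     # ground truth, unit norm
A = rng.standard_normal((m, n))      # Gaussian sensing vectors a_i
y = (A @ x_true) ** 2                # phaseless measurements y_i = (a_i^T x)^2

# Spectral initialization: leading eigenvector of (1/m) * sum_i y_i a_i a_i^T.
Y = (A.T * y) @ A / m
eigvals, eigvecs = np.linalg.eigh(Y)
z = eigvecs[:, -1] * np.sqrt(y.mean())

# Vanilla gradient descent on f(z) = (1/4m) * sum_i ((a_i^T z)^2 - y_i)^2,
# with a constant step size and no regularization step of any kind.
eta = 0.1
for _ in range(500):
    r = A @ z
    z -= eta * (A.T @ ((r ** 2 - y) * r)) / m

# Phase retrieval can only recover x up to a global sign.
dist = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true))
```

Despite the absence of explicit regularization, the iterates stay in a benign region around the truth and the error contracts geometrically, which is the "implicit regularization" phenomenon the paper analyzes.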
Spectral Method and Regularized MLE Are Both Optimal for Top-$K$ Ranking
This paper is concerned with the problem of top-$K$ ranking from pairwise
comparisons. Given a collection of items and a few pairwise comparisons
across them, one wishes to identify the set of $K$ items that receive the
highest ranks. To tackle this problem, we adopt the logistic parametric model
--- the Bradley-Terry-Luce model, where each item is assigned a latent
preference score, and where the outcome of each pairwise comparison depends
solely on the relative scores of the two items involved. Recent works have made
significant progress towards characterizing the performance (e.g. the mean
square error for estimating the scores) of several classical methods, including
the spectral method and the maximum likelihood estimator (MLE). However, where
they stand regarding top-$K$ ranking remains unsettled.
We demonstrate that under a natural random sampling model, the spectral
method alone, or the regularized MLE alone, is minimax optimal in terms of the
sample complexity --- the number of paired comparisons needed to ensure exact
top-$K$ identification, for the fixed dynamic range regime. This is
accomplished via optimal control of the entrywise error of the score estimates.
We complement our theoretical studies by numerical experiments, confirming that
both methods yield low entrywise errors for estimating the underlying scores.
Our theory is established via a novel leave-one-out trick, which proves
effective for analyzing both iterative and non-iterative procedures. Along the
way, we derive an elementary eigenvector perturbation bound for probability
transition matrices, which parallels the Davis-Kahan theorem for
symmetric matrices. This also allows us to close the gap between the
error upper bound for the spectral method and the minimax lower limit.
Comment: Added discussions on the setting of the general condition number
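The spectral method in this line of work builds a Markov chain from the empirical pairwise win fractions whose stationary distribution is (approximately) proportional to the latent BTL preference scores, in the spirit of Rank Centrality. The sketch below (scores, sample sizes, and $K$ are my own illustrative choices, and this is a simplification of the methods the paper studies) simulates comparisons under the Bradley-Terry-Luce model and recovers the top-$K$ set from the stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([6.0, 5.0, 3.0, 2.0, 1.5, 1.0])  # latent BTL preference scores
n, L, K = len(w), 500, 2                      # items, comparisons per pair, top-K

# Empirical transition matrix: item j beats item i w.p. w_j / (w_i + w_j).
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            wins_j = rng.binomial(L, w[j] / (w[i] + w[j]))
            P[i, j] = wins_j / L / n          # 1/n scaling keeps rows substochastic
    P[i, i] = 1.0 - P[i].sum()                # lazy self-loop fills the remainder

# In the population limit, detailed balance w_i P[i,j] = w_j P[j,i] holds,
# so the stationary distribution is proportional to the score vector w.
pi = np.full(n, 1.0 / n)
for _ in range(1000):                         # power iteration to stationarity
    pi = pi @ P
top_k = set(np.argsort(pi)[-K:])
```

Ranking by the stationary distribution then amounts to ranking by (noisy estimates of) the scores, which is why entrywise control of the score estimates is the key to exact top-$K$ identification.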
