Higher Order Derivatives in Costa's Entropy Power Inequality
Let $X$ be an arbitrary continuous random variable and $Z$ be an independent
Gaussian random variable with zero mean and unit variance. For $t > 0$, Costa
proved that $e^{2h(X+\sqrt{t}Z)}$ is concave in $t$, where the proof hinged on
the first and second order derivatives of $h(X+\sqrt{t}Z)$. Specifically, these
two derivatives are signed, i.e., $\frac{\partial}{\partial t} h(X+\sqrt{t}Z) \geq 0$ and $\frac{\partial^2}{\partial t^2} h(X+\sqrt{t}Z) \leq 0$. In this
paper, we show that the third order derivative of $h(X+\sqrt{t}Z)$ is
nonnegative, which implies that the Fisher information $J(X+\sqrt{t}Z)$ is
convex in $t$. We further show that the fourth order derivative of
$h(X+\sqrt{t}Z)$ is nonpositive. Following the first four derivatives, we make
two conjectures on $h(X+\sqrt{t}Z)$: the first is that $\frac{\partial^n}{\partial t^n} h(X+\sqrt{t}Z)$
is nonnegative in $t$ if $n$
is odd, and nonpositive otherwise; the second is that $\log J(X+\sqrt{t}Z)$ is
convex in $t$. The first conjecture can be rephrased in the context of
completely monotone functions: $J(X+\sqrt{t}Z)$ is completely monotone in $t$.
The history of the first conjecture may date back to a problem in mathematical
physics studied by McKean in 1966. Apart from these results, we provide a
geometrical interpretation of the covariance-preserving transformation and
study the concavity of $h(\sqrt{t}X+\sqrt{1-t}Z)$, revealing its connection
with Costa's EPI.

Comment: Second version submitted. https://sites.google.com/site/chengfancuhk
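The signed-derivative pattern described in this abstract can be restated compactly. Writing $Y_t = X + \sqrt{t}\,Z$ (this shorthand is ours, not the paper's), the two known signs, the two new ones, and the first conjecture read:

```latex
% Known (Costa) and new (this paper) derivative signs:
\frac{\partial}{\partial t} h(Y_t) \ge 0, \qquad
\frac{\partial^2}{\partial t^2} h(Y_t) \le 0, \qquad
\frac{\partial^3}{\partial t^3} h(Y_t) \ge 0, \qquad
\frac{\partial^4}{\partial t^4} h(Y_t) \le 0.

% The first conjecture extrapolates the alternating pattern to all orders:
(-1)^{n+1} \, \frac{\partial^n}{\partial t^n} h(Y_t) \ge 0
\quad \text{for all } n \ge 1.
```

The equivalence with complete monotonicity follows from de Bruijn's identity, $\frac{\partial}{\partial t} h(Y_t) = \frac{1}{2} J(Y_t)$: alternating signs for all derivatives of $h(Y_t)$ from order one onward are exactly the statement that $J(Y_t)$ is completely monotone in $t$.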
Asymmetry Helps: Eigenvalue and Eigenvector Analyses of Asymmetrically Perturbed Low-Rank Matrices
This paper is concerned with the interplay between statistical asymmetry and
spectral methods. Suppose we are interested in estimating a rank-1 and
symmetric matrix $\mathbf{M}^{\star} \in \mathbb{R}^{n \times n}$, yet only a
randomly perturbed version $\mathbf{M} = \mathbf{M}^{\star} + \mathbf{H}$ is observed. The noise matrix
$\mathbf{H}$ is composed of zero-mean independent (but not
necessarily homoscedastic) entries and is, therefore, not symmetric in general.
This might arise, for example, when we have two independent samples for each
entry of $\mathbf{M}^{\star}$ and arrange them into an {\em asymmetric} data
matrix $\mathbf{M}$. The aim is to estimate the leading eigenvalue and
eigenvector of $\mathbf{M}^{\star}$. We demonstrate that the leading eigenvalue
of the data matrix $\mathbf{M}$ can be $\sqrt{n}$ times more accurate --- up
to some log factor --- than its (unadjusted) leading singular value in
eigenvalue estimation. Further, the perturbation of any linear form of the
leading eigenvector of $\mathbf{M}$ --- say, entrywise eigenvector perturbation
--- is provably well-controlled. This eigen-decomposition approach is fully
adaptive to heteroscedasticity of the noise without the need for careful bias
correction or any prior knowledge about the noise variance. We also provide
partial theory for the more general rank-$r$ case. The takeaway message is
this: arranging the data samples in an asymmetric manner and performing
eigen-decomposition could sometimes be beneficial.

Comment: accepted to Annals of Statistics, 2020. 37 pages
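The phenomenon described in this abstract is easy to observe numerically. The sketch below is our own illustration, not the paper's experiment: it builds a rank-1 symmetric ground truth, perturbs it with independent (hence asymmetric) noise, and compares the leading eigenvalue of the perturbed matrix against its unadjusted leading singular value as estimators of the true leading eigenvalue. All dimensions and noise levels are illustrative choices.

```python
# Sketch: leading eigenvalue vs. leading singular value of an
# asymmetrically perturbed rank-1 matrix. Assumed setup (ours, for
# illustration): M* = lam * u u^T with unit-norm u, noise H with
# i.i.d. zero-mean entries of standard deviation sigma.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sigma = 2.0                        # noise level (illustrative choice)
lam = 4 * sigma * np.sqrt(n)       # true leading eigenvalue, above the noise level

u = rng.standard_normal(n)
u /= np.linalg.norm(u)             # unit-norm leading eigenvector
M_star = lam * np.outer(u, u)      # rank-1 symmetric ground truth

H = sigma * rng.standard_normal((n, n))  # independent entries, NOT symmetric
M = M_star + H                     # observed asymmetric data matrix

# Leading eigenvalue of the asymmetric matrix: take the eigenvalue of
# largest modulus and keep its real part (essentially real in this regime).
eigvals = np.linalg.eigvals(M)
lam_eig = eigvals[np.argmax(np.abs(eigvals))].real

# Unadjusted leading singular value of the same matrix.
lam_sv = np.linalg.svd(M, compute_uv=False)[0]

eig_err = abs(lam_eig - lam)
sv_err = abs(lam_sv - lam)
print(f"eigenvalue error:     {eig_err:.3f}")
print(f"singular-value error: {sv_err:.3f}")
```

In this regime the singular value carries a systematic upward bias (on the order of $n\sigma^2 / \lambda$), which is what the eigenvalue of the asymmetric matrix avoids without any explicit bias correction.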
- …