Positive Definite Penalized Estimation of Large Covariance Matrices
The thresholding covariance estimator has nice asymptotic properties for
estimating sparse large covariance matrices, but it often has negative
eigenvalues when used in real data analysis. To simultaneously achieve sparsity
and positive definiteness, we develop a positive-definite ℓ1-penalized
covariance estimator for estimating sparse large covariance matrices. An
efficient alternating direction method is derived to solve the challenging
optimization problem, and its convergence properties are established. Under weak
regularity conditions, non-asymptotic statistical theory is also established
for the proposed estimator. The competitive finite-sample performance of our
proposal is demonstrated by both simulations and real applications.
Comment: accepted by JASA, August 2012
Alternating Direction Methods for Latent Variable Gaussian Graphical Model Selection
Chandrasekaran, Parrilo and Willsky (2010) proposed a convex optimization
problem to characterize graphical model selection in the presence of unobserved
variables. This convex optimization problem aims to estimate an inverse
covariance matrix that can be decomposed into a sparse matrix minus a low-rank
matrix from sample data. Solving this convex optimization problem is very
challenging, especially for large problems. In this paper, we propose two
alternating direction methods for solving this problem. The first method is to
apply the classical alternating direction method of multipliers to solve the
problem as a consensus problem. The second method is a proximal gradient based
alternating direction method of multipliers. Our methods exploit and take
advantage of the special structure of the problem and thus can solve large
problems very efficiently. Global convergence results are established for the
proposed methods. Numerical results on both synthetic data and gene expression
data show that our methods usually solve problems with one million variables in
one to two minutes, and are usually five to thirty-five times faster than a
state-of-the-art Newton-CG proximal point algorithm.
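The special structure these methods exploit is that both terms of the sparse-minus-low-rank decomposition have cheap proximal maps: entrywise soft-thresholding for the sparse (ℓ1-penalized) part and singular value thresholding for the low-rank (nuclear-norm-penalized) part. A minimal sketch of the two operators, on a toy matrix (not the authors' full ADMM scheme):

```python
import numpy as np

def prox_l1(A, tau):
    """Proximal map of tau * ||.||_1: entrywise soft-thresholding."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def prox_nuclear(A, tau):
    """Proximal map of tau * ||.||_*: shrink the singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

A = np.array([[4.0, 0.1],
              [0.1, 2.0]])
S_step = prox_l1(A, 0.5)       # zeros out small entries -> sparsity
L_step = prox_nuclear(A, 2.5)  # zeros out small singular values -> low rank
```

Inside an ADMM iteration each subproblem reduces to one of these maps, which is why the overall method scales to the million-variable problems mentioned above.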
Utilizing the Updated Gamma-Ray Bursts and Type Ia Supernovae to Constrain the Cardassian Expansion Model and Dark Energy
We update gamma-ray burst (GRB) luminosity relations among certain spectral
and light-curve features with 139 GRBs. The distance moduli of 82 high-redshift
GRBs can be calibrated with the low-redshift sample by applying the cubic
spline interpolation method to the Union2.1 Type Ia supernovae (SNe Ia) set.
We investigate the joint constraints on the Cardassian expansion model and dark
energy with the 580 Union2.1 SNe Ia sample and the 82 calibrated GRBs. In
ΛCDM, we find that adding the 82 high-z GRBs to the 580 SNe Ia significantly
improves the constraints on the model parameter plane. In the Cardassian
expansion model, the best-fit parameters are consistent with the ΛCDM
cosmology within the confidence region. We also discuss two dark energy models
in which the equation of state is parametrized in two different forms. Based on
our analysis, we see that our Universe at higher redshift is consistent with
the concordance model within the confidence level.
Comment: 17 pages, 6 figures, 2 tables; accepted for publication in Advances
in Astronomy, special issue on Gamma-Ray Burst in Swift and Fermi Era. arXiv
admin note: text overlap with arXiv:0802.4262, arXiv:0706.0938 by other
authors
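The cubic-spline calibration step can be sketched as follows: fit a spline to the distance modulus versus redshift relation of the low-redshift SN sample, then evaluate it at GRB redshifts inside the anchored range. The data below are a synthetic stand-in (the toy mu(z) formula and all values are hypothetical; the real analysis uses the Union2.1 compilation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic stand-in for the low-redshift SN Ia anchor sample.
z_sn = np.linspace(0.01, 1.4, 50)
mu_sn = 5 * np.log10(4285.7 * z_sn * (1 + z_sn)) + 25  # toy distance moduli

# Fit a cubic spline mu(z) to the SN sample ...
spline = CubicSpline(z_sn, mu_sn)

# ... and read off calibrated distance moduli at GRB redshifts
# that fall inside the SN-anchored range.
z_grb = np.array([0.5, 1.0, 1.3])
mu_grb = spline(z_grb)
```

Because the spline is only interpolated inside the SN redshift range, this calibration avoids assuming a cosmological model for the GRBs, which is the point of the method.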