The MM Alternative to EM
The EM algorithm is a special case of a more general algorithm called the MM
algorithm. Specific MM algorithms often have nothing to do with missing data.
The first M step of an MM algorithm creates a surrogate function that is
optimized in the second M step. In minimization, MM stands for
majorize--minimize; in maximization, it stands for minorize--maximize. This
two-step process always drives the objective function in the right direction.
Construction of MM algorithms relies on recognizing and manipulating
inequalities rather than calculating conditional expectations. This survey
walks the reader through the construction of several specific MM algorithms.
The potential of the MM algorithm in solving high-dimensional optimization and
estimation problems is its most attractive feature. Our applications to random
graph models, discriminant analysis and image restoration showcase this
ability.

Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the
Institute of Mathematical Statistics (http://www.imstat.org) at
http://dx.doi.org/10.1214/08-STS264
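As a concrete illustration of the majorize--minimize idea (an elementary textbook example, not one of the applications in the survey), the sketch below minimizes the sum of absolute deviations by majorizing each |theta - y_i| at the current iterate with a quadratic surrogate; the function name mm_median and all numerical settings are illustrative assumptions.

```python
import numpy as np

def mm_median(y, n_iter=100, eps=1e-8):
    # MM iteration for minimizing sum_i |theta - y_i|.
    # At the current iterate theta_k, each |theta - y_i| is majorized by
    #   (theta - y_i)**2 / (2*|theta_k - y_i|) + |theta_k - y_i| / 2,
    # which touches it at theta = theta_k.  Minimizing this quadratic
    # surrogate gives a weighted-mean update, and the MM descent property
    # guarantees the original objective never increases.
    theta = float(np.mean(y))                          # any starting value works
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(theta - y), eps)   # surrogate weights
        theta = float(np.sum(w * y) / np.sum(w))       # minimize the surrogate
    return theta

y = np.array([1.0, 2.0, 3.5, 10.0, 11.0])
print(mm_median(y))   # converges toward the sample median, 3.5
```

The surrogate is tangent to the objective at the current iterate and lies above it everywhere else, which is exactly the inequality manipulation the survey emphasizes in place of conditional expectations.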
Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization
Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure
for finding sparse solutions of underdetermined linear systems. This method has
been shown to have strong theoretical guarantees and impressive numerical
performance. In this paper, we generalize HTP from compressive sensing to a
generic problem setup of sparsity-constrained convex optimization. The proposed
algorithm iterates between a standard gradient descent step and a hard
thresholding step with or without debiasing. We prove that our method enjoys
strong guarantees analogous to those of HTP in terms of convergence rate and
parameter estimation accuracy. Numerical evidence shows that our method is
superior to state-of-the-art greedy selection methods on sparse logistic
regression and sparse precision matrix estimation tasks.
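A minimal sketch of the two-step iteration described above, assuming a sparse least-squares objective for concreteness; the names grad_htp and debias, the step size, and the problem sizes are illustrative choices, not the paper's implementation.

```python
import numpy as np

def grad_htp(grad, dim, k, step, n_iter=200, debias=None):
    # Gradient hard thresholding pursuit for min f(x) s.t. ||x||_0 <= k:
    # take a gradient step, keep the k largest-magnitude coordinates,
    # and optionally re-fit ("debias") on the selected support.
    x = np.zeros(dim)
    for _ in range(n_iter):
        z = x - step * grad(x)                 # gradient descent step
        support = np.argsort(np.abs(z))[-k:]   # indices of k largest entries
        x = np.zeros(dim)
        x[support] = z[support]                # hard thresholding
        if debias is not None:
            x[support] = debias(support)       # optional debiasing on support
    return x

# Hypothetical usage on f(x) = 0.5 * ||Ax - b||^2 with a k-sparse ground truth.
rng = np.random.default_rng(0)
n, d, k = 100, 50, 5
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:k] = rng.standard_normal(k)
b = A @ x_true
grad = lambda x: A.T @ (A @ x - b)
debias = lambda S: np.linalg.lstsq(A[:, S], b, rcond=None)[0]
x_hat = grad_htp(grad, d, k, step=1.0 / np.linalg.norm(A, 2) ** 2, debias=debias)
print(np.linalg.norm(x_hat - x_true))   # should be near zero in this noiseless setup
```

Passing debias=None gives the variant without debiasing; for other losses (e.g. logistic), only the grad and debias callables change.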
Simplified dark matter models in the light of AMS-02 antiproton data
In this work we perform an analysis of the recent AMS-02 antiproton flux and
the antiproton-to-proton ratio in the framework of simplified dark matter
models. To predict the AMS-02 observables we adopt the propagation and
injection parameters determined by the observed fluxes of nuclei. We assume
that the dark matter particle is a Dirac fermion, with a
leptophobic pseudoscalar or axialvector mediator that couples only to Standard
Model quarks and dark matter particles. We find that the AMS-02 observations
are consistent with the dark matter hypothesis within the uncertainties. The
antiproton data prefer a dark matter (mediator) mass in the 700 GeV--5 TeV
region for annihilation with the pseudoscalar mediator, and greater than 700 GeV
(200 GeV--1 TeV) for annihilation with the axialvector mediator, at about the
68% confidence level. The AMS-02 data require an effective dark matter
annihilation cross section in the region of 1x10^{-25} -- 1x10^{-24}
(1x10^{-25} -- 4x10^{-24}) cm^3/s for the simplified model with pseudoscalar
(axialvector) mediator. The constraints from the LHC and Fermi-LAT are also
discussed.

Comment: 16 pages, 6 figures, 1 table. arXiv admin note: text overlap with
arXiv:1509.0221
Structure Identifiability of an NDS with LFT Parametrized Subsystems
This paper establishes requirements on the subsystems of a linear
time-invariant (LTI) networked dynamic system (NDS) under which subsystem
interconnections can be estimated from external output measurements. In this
NDS, subsystems may have distinctive dynamics, and subsystem interconnections
are arbitrary. It is assumed that system matrices of each subsystem depend on
its (pseudo) first principle parameters (FPPs) through a linear fractional
transformation (LFT). It has been proven that if in each subsystem, the
transfer function matrix (TFM) from its internal inputs to its external outputs
is of full normal column rank (FNCR), while the TFM from its external inputs to
its internal outputs is of full normal row rank (FNRR), then the NDS is
structurally identifiable. Moreover, in some particular situations, such as
when there is no direct information transmission from an internal input to an
internal output in each subsystem, a necessary and sufficient condition is
established for NDS structure identifiability. A matrix valued polynomial (MVP)
rank based equivalent condition is further derived, which depends affinely on
subsystem (pseudo) FPPs and can be independently verified for each subsystem.
From this condition, some necessary conditions are obtained for both subsystem
dynamics and its (pseudo) FPPs, using the Kronecker canonical form (KCF) of a
matrix pencil.

Comment: 16 pages
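For readers who want to probe the FNCR/FNRR rank conditions numerically for a given subsystem realization, the sketch below estimates the normal rank of a transfer function matrix by evaluating it at a few random complex frequencies (the rank of G(s) at a generic point equals its normal rank with probability one). The state-space realization and the helper normal_rank are hypothetical examples, not taken from the paper.

```python
import numpy as np

def normal_rank(tfm, n_samples=5, seed=0, tol=1e-8):
    # Estimate the normal rank of G(s): evaluate the callable tfm at a few
    # random complex points and take the largest numerical rank found.
    rng = np.random.default_rng(seed)
    ranks = []
    for _ in range(n_samples):
        s = complex(rng.standard_normal(), rng.standard_normal())
        ranks.append(np.linalg.matrix_rank(tfm(s), tol=tol))
    return max(ranks)

# Hypothetical subsystem realization (A, B, C, D) with G(s) = C (sI - A)^{-1} B + D,
# standing in for the TFM from internal inputs to external outputs.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))
G = lambda s: C @ np.linalg.solve(s * np.eye(2) - A, B) + D

r = normal_rank(G)
print("full normal column rank:", r == B.shape[1])   # FNCR: rank equals column count
```

The FNRR check for the TFM from external inputs to internal outputs is analogous, comparing the estimated normal rank against the number of rows instead of columns.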
Decoupling MSSM Higgs Sector and Heavy Higgs Decay
The decoupling limit in the MSSM Higgs sector is the most likely scenario in
light of the Higgs discovery. This scenario is further constrained by MSSM
Higgs search bounds and flavor observables. We perform a comprehensive scan of
MSSM parameters and update the constraints on the decoupling MSSM Higgs sector
in terms of 8 TeV LHC data. We highlight the effect of a light SUSY spectrum on
the heavy neutral Higgs decays in the decoupling limit. We find that the
chargino and neutralino decay modes can reach branching ratios of at most 40%
and 20%, respectively. In particular, the invisible decay mode BR(H^0(A^0) ->
\tilde{\chi}^0_1\tilde{\chi}^0_1) increases with increasing Bino LSP mass and
is between 10% and 15% (20%) for 30 GeV < m_{\tilde{\chi}^0_1} < 100 GeV. The leading
branching fraction of heavy Higgs decays into sfermions can be as large as 80%
for H^0 -> \tilde{t}_1\tilde{t}_1^\ast and 60% for H^0/A^0 ->
\tilde{\tau}_1\tilde{\tau}_2^\ast+\tilde{\tau}_1^\ast\tilde{\tau}_2. The
branching fractions are less than 10% for H^0 -> h^0h^0 and 1% for A^0 -> h^0Z
for m_A>400 GeV. The charged Higgs decays to neutralino plus chargino and
sfermions with branching ratios as large as 40% and 60%, respectively. Moreover,
the exclusion limit of the leading MSSM Higgs search channel, namely
gg, b\bar{b} -> H^0, A^0 -> tau^+ tau^-, is extrapolated to the 14 TeV LHC with
high luminosities. It turns out that the tau tau mode can essentially exclude
the regime with tan\beta>20 for L=300 fb^{-1} and tan\beta>15 for L=3000 fb^{-1}.

Comment: 20 pages, 14 figures