An Ensemble EM Algorithm for Bayesian Variable Selection
We study the Bayesian approach to variable selection in the context of linear
regression. Motivated by recent work by Rockova and George (2014), we propose
an EM algorithm that returns the MAP estimate of the set of relevant variables.
Due to its particular updating scheme, our algorithm can be implemented
efficiently without inverting a large matrix in each iteration and therefore
can scale up with big data. We also show that the MAP estimate returned by our
EM algorithm achieves variable selection consistency even when the number of
covariates diverges with the sample size. In practice, our algorithm could get
stuck at local modes, a common
problem with EM algorithms. To address this issue, we propose an ensemble EM
algorithm, in which we repeatedly apply the EM algorithm on a subset of the
samples with a subset of the covariates, and then aggregate the variable
selection results across those bootstrap replicates. Empirical studies have
demonstrated the superior performance of the ensemble EM algorithm.
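The ensemble scheme described above (repeated subsampling of rows and columns, then aggregation of per-replicate selections) can be sketched as follows. The per-replicate selector here is a simple correlation threshold standing in for the paper's EM-based MAP estimate, and all parameter names and defaults are illustrative:

```python
import numpy as np

def select_once(X, y, threshold=0.3):
    # Stand-in for one EM run: flag covariates whose absolute correlation
    # with y exceeds a threshold (hypothetical rule; the paper's EM step
    # returns a MAP estimate of the relevant set instead).
    r = np.abs(X.T @ y) / (np.linalg.norm(X, axis=0) * np.linalg.norm(y) + 1e-12)
    return r > threshold

def ensemble_select(X, y, n_boot=50, row_frac=0.7, col_frac=0.7, vote=0.5, seed=0):
    """Aggregate variable-selection flags across bootstrap replicates,
    each run on a random subset of samples and covariates."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    votes = np.zeros(p)   # how often each covariate was flagged
    counts = np.zeros(p)  # how often each covariate was in a replicate
    for _ in range(n_boot):
        rows = rng.choice(n, size=int(row_frac * n), replace=False)
        cols = rng.choice(p, size=int(col_frac * p), replace=False)
        flags = select_once(X[np.ix_(rows, cols)], y[rows])
        votes[cols] += flags
        counts[cols] += 1
    freq = votes / np.maximum(counts, 1)
    return freq >= vote   # majority vote across the replicates a covariate joined
```

The voting threshold trades off false discoveries against misses: variables flagged only by a few replicates (e.g. those riding on one local mode) are filtered out.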
A Variational Algorithm for Bayesian Variable Selection
There has been intense development on the estimation of sparse
regression coefficient vectors in statistics, machine learning, and related
fields. In this paper, we focus on the Bayesian approach to this problem, where
sparsity is incorporated by the so-called spike-and-slab prior on the
coefficients. Instead of relying on MCMC for posterior inference, we propose a
fast and scalable algorithm based on variational approximation to the posterior
distribution. The updating scheme employed by our algorithm is different from
the one proposed by Carbonetto and Stephens (2012). Those changes seem crucial
for us to show that our algorithm can achieve asymptotic consistency even when
the feature dimension diverges exponentially fast with the sample size.
Empirical results have demonstrated the effectiveness and efficiency of the
proposed algorithm.
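A minimal coordinate-ascent sketch of variational inference under a spike-and-slab prior is shown below. The updates follow the generic Carbonetto-and-Stephens-style scheme the abstract contrasts against, not the modified scheme this paper proposes, and all hyperparameter values are illustrative:

```python
import numpy as np

def cavi_spike_slab(X, y, sigma2=1.0, slab_var=10.0, prior_pi=0.1, n_iter=50):
    """Coordinate-ascent variational approximation for linear regression
    with a spike-and-slab prior: each coefficient gets a posterior
    inclusion probability alpha[j] and a slab mean mu[j]."""
    n, p = X.shape
    xtx = np.sum(X**2, axis=0)
    alpha = np.full(p, prior_pi)             # inclusion probabilities
    mu = np.zeros(p)                         # slab means
    s2 = sigma2 / (xtx + sigma2 / slab_var)  # slab variances (fixed per coord)
    Xb = X @ (alpha * mu)                    # current fitted values
    logit_pi = np.log(prior_pi / (1 - prior_pi))
    for _ in range(n_iter):
        for j in range(p):
            Xb -= X[:, j] * (alpha[j] * mu[j])   # residualize coordinate j
            mu[j] = s2[j] / sigma2 * (X[:, j] @ (y - Xb))
            logit_a = (logit_pi
                       + 0.5 * np.log(s2[j] / slab_var)
                       + mu[j]**2 / (2 * s2[j]))
            alpha[j] = 1 / (1 + np.exp(-logit_a))
            Xb += X[:, j] * (alpha[j] * mu[j])
    return alpha, mu
```

Each coordinate update is closed-form, which is what makes this family of algorithms fast and scalable relative to MCMC.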
Implications of the first AMS-02 measurement for dark matter annihilation and decay
In light of the first measurement of the positron fraction by the AMS-02
experiment, we perform a detailed global analysis on the interpretation of the
latest data of PAMELA, Fermi-LAT, and AMS-02 in terms of dark matter (DM)
annihilation and decay in various propagation models. The allowed regions for
the DM particle mass and annihilation cross section or decay life-time are
obtained for channels with leptonic final states: $2e$, $2\mu$, $2\tau$,
$4\mu$, and $4\tau$. We show that for the conventional astrophysical
background the AMS-02 positron fraction data alone favour a DM particle mass
$\sim 500\ (800)$ GeV for the $2\mu\ (4\mu)$ channel, with an annihilation
cross section $\sim 10^{-23}\ \text{cm}^3\,\text{s}^{-1}$. In all the
considered leptonic channels, the current
data favour the scenario of DM annihilation over DM decay. In the decay
scenario, the charge asymmetric DM decay is slightly favoured.
Comment: 27 pages, 12 figures, 3 tables, in-depth discussions on the
uncertainties in backgrounds and propagation models added, version to appear
in JCAP
Tree-Structured Reinforcement Learning for Sequential Object Localization
Existing object proposal algorithms usually search for possible object
regions over multiple locations and scales separately, ignoring the
interdependency among objects and deviating from the human perception
procedure. To incorporate global interdependency between objects into object
localization, we propose an effective Tree-structured Reinforcement Learning
(Tree-RL) approach to sequentially search for objects by fully exploiting both
the current observation and historical search paths. The Tree-RL approach
learns multiple searching policies through maximizing the long-term reward that
reflects localization accuracies over all the objects. Starting from the
entire image as the initial proposal, the Tree-RL approach allows the agent to
sequentially discover multiple objects via a tree-structured traversal scheme.
Allowing multiple near-optimal policies, Tree-RL offers more diversity in
search paths and is able to find multiple objects with a single feed-forward
pass. Therefore, Tree-RL can better cover objects of various scales, which is
quite appealing in the context of object proposal. Experiments on PASCAL VOC
2007 and 2012 validate the effectiveness of Tree-RL, which achieves recalls
comparable to those of current object proposal algorithms with far fewer
candidate windows.
Comment: Advances in Neural Information Processing Systems 2016
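The tree-structured traversal can be illustrated with a toy sketch in which each node emits its current window as a proposal and branches into two child windows, so a single traversal yields many proposals. The two fixed child transformations below are stand-ins for the scaling and local-translation actions that the learned policies would choose:

```python
def tree_proposals(window, depth):
    """Collect object proposals from a binary tree of windows.
    window is (x, y, w, h); depth is the remaining tree depth."""
    x, y, w, h = window
    proposals = [window]          # every visited node emits its window
    if depth == 0:
        return proposals
    # Two hypothetical actions: zoom into the top-left region, or
    # translate toward the bottom-right (illustrative stand-ins for the
    # action groups a trained Tree-RL policy would select per node).
    zoom = (x, y, 0.7 * w, 0.7 * h)
    shift = (x + 0.3 * w, y + 0.3 * h, 0.7 * w, 0.7 * h)
    for child in (zoom, shift):
        proposals.extend(tree_proposals(child, depth - 1))
    return proposals
```

A depth-$d$ traversal emits $2^{d+1}-1$ windows in one pass, which is the structural reason Tree-RL can cover multiple objects without restarting the search per object.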
Distributions of Gamma-Ray Bursts and Blazars in the $L_{\rm p}$-$E_{\rm p}$ Plane and Possible Implications for their Radiation Physics
We present a spectral analysis for a sample of GRBs with known redshifts
observed with {\em Fermi}/GBM. Together with the results derived from our
systematic spectral energy distribution modeling with the leptonic models for
a {\em Fermi}/LAT blazar sample, we compare the distributions of the GRBs and
the blazars by plotting the synchrotron peak luminosity ($L_{\rm s}$) and the
corresponding peak photon energy ($E_{\rm s}$) of the blazars in the
$L_{\rm p}$-$E_{\rm p}$ plane of the GRBs, where $L_{\rm p}$ and $E_{\rm p}$
are the peak luminosity and peak photon energy of the GRB time-integrated
spectrum, respectively. The GRBs lie in the high-$L_{\rm p}$,
high-$E_{\rm p}$ corner of the plane, and a tight $L_{\rm p}$-$E_{\rm p}$
relation is found. Both FSRQs and LBLs are clustered in the low-$L_{\rm s}$,
low-$E_{\rm s}$ corner. IBLs and HBLs have higher $E_{\rm s}$, but no
dependence of $L_{\rm s}$ on $E_{\rm s}$ is found. We show that the tight
$L_{\rm p}$-$E_{\rm p}$ relation of GRBs is potentially explained with the
synchrotron radiation of fast-cooling electrons in a highly magnetized
ejecta, and the weak anti-correlation of $L_{\rm s}$ with $E_{\rm s}$ for
FSRQs and LBLs may be attributed to synchrotron radiation of slow-cooling
electrons in a moderately magnetized ejecta. The distributions of IBLs and
HBLs in the $L_{\rm s}$-$E_{\rm s}$ plane may be interpreted with synchrotron
radiation of fast-cooling electrons in a matter-dominated ejecta. These
results may present a unified picture for the radiation physics of
relativistic jets in GRBs and blazars within the framework of the leptonic
synchrotron radiation models.
Comment: 23 pages, 2 tables, 2 figures. Accepted for publication in ApJ
A Fast Differential Grouping Algorithm for Large Scale Black-Box Optimization
Decomposition plays a significant role in cooperative co-evolution which
shows great potential in large scale black-box optimization. However, current
popular decomposition algorithms generally need to sample and evaluate a
large number of solutions for interdependency detection, which is very
time-consuming. To address this issue, this study proposes a new decomposition
algorithm named fast differential grouping (FDG). FDG first identifies the type
of an instance by detecting the interdependencies of a few pairs of variable
subsets selected according to certain rules, and thus can rapidly complete the
decomposition of a fully separable or nonseparable instance. For an identified
partially separable instance, FDG converts the key decomposition process into a
search process in a binary tree by taking corresponding variable subsets as
tree nodes. This enables it to directly deduce the interdependency related to a
child node by reusing the solutions sampled for the corresponding parent and
sibling nodes. To support the above operations, this study designs a normalized
variable-subset-oriented interdependency indicator, which can adaptively
generate decomposition thresholds according to its distribution and thus
enhances decomposition accuracy. Computational complexity analysis and
experimental results verify that FDG outperforms popular decomposition
algorithms. Further tests indicate that FDG embedded in a cooperative
co-evolution framework can achieve highly competitive optimization results as
compared with some state-of-the-art algorithms for large scale black-box
optimization.
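The core interdependency test behind differential-grouping methods can be sketched as follows: two variable subsets interact if perturbing one changes the objective differently depending on whether the other has also been perturbed. FDG's actual indicator is normalized and its decomposition thresholds are adaptive, so the fixed perturbation and epsilon here are simplifications:

```python
def subsets_interact(f, x, s1, s2, delta=1.0, eps=1e-6):
    """Differential-grouping-style test: subsets s1 and s2 (index lists)
    of objective f interact at base point x if the change from perturbing
    s1 depends on whether s2 was perturbed too."""
    def perturb(base, idx):
        z = list(base)
        for i in idx:
            z[i] += delta
        return z
    d1 = f(perturb(x, s1)) - f(x)                         # effect of s1 alone
    d2 = f(perturb(perturb(x, s1), s2)) - f(perturb(x, s2))  # effect of s1 given s2
    return abs(d1 - d2) > eps
```

Applying this test to variable *subsets* rather than single variables is what lets FDG walk a binary tree of subsets and reuse parent/sibling evaluations, instead of testing all variable pairs.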
Single Photon Source Driver Designed in ASIC
The single photon source is an important part of the quantum key distribution
(QKD) system. At present, single photon sources are large and structurally
complex because they are built from many discrete components, and
miniaturization of the photon source is the trend in QKD systems. We
integrate all laser-driver electronics into a single ASIC chip, which
can drive a 1550 nm DFB laser in random pulse mode and
greatly reduces the volume of the single photon source. We present the design of
the chip named LSD2018 and simulation results before the tape-out. The LSD2018
is fabricated with a 130 nm CMOS process and consists of a discriminator, an
adjustable pulse generator, a bandgap reference, an SPI bus, and an
amplitude-adjustable current pulse driver. The random electronic pulses from
the driver range from 20 mA to 120 mA in amplitude and from 400 ps to 4 ns in
pulse width. These parameters can be set via the SPI bus.
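Host-side configuration of such a driver amounts to mapping physical settings onto SPI register codes within the supported ranges. The sketch below assumes a linear 8-bit encoding; the register layout, bit width, and scaling are illustrative assumptions, since LSD2018's actual register map is not given in the abstract:

```python
def encode_pulse_settings(amplitude_ma, width_ps,
                          amp_range=(20, 120), width_range=(400, 4000),
                          n_bits=8):
    """Map a requested drive amplitude (mA) and pulse width (ps) onto
    hypothetical n-bit SPI register codes, rejecting out-of-range values.
    Linear scaling over the supported ranges is assumed for illustration."""
    def to_code(value, lo, hi):
        if not lo <= value <= hi:
            raise ValueError(f"{value} outside supported range [{lo}, {hi}]")
        return round((value - lo) / (hi - lo) * (2**n_bits - 1))
    return {"amp_code": to_code(amplitude_ma, *amp_range),
            "width_code": to_code(width_ps, *width_range)}
```

Validating against the hardware ranges before writing over SPI avoids silently clipping the laser drive current.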
GeV excess in the Milky Way: The Role of Diffuse Galactic Gamma-Ray Emission Templates
Several groups have analyzed the publicly-available Fermi-LAT data and
reported a spatially extended gamma-ray excess at energies of a few GeV from
the region surrounding the Galactic Center that might originate from the
annihilation of dark matter particles with a rest mass of tens of GeV. In this work
we examine the role the diffuse Galactic gamma-ray emission (DGE) templates
play in suppressing the GeV excess. For such a purpose, we adopt in total 128
background templates that have been generated by Ackermann et al.
\cite{FermiLAT:2012aa} in the study of the {Fermi-LAT} observations of the
diffuse gamma ray emission considering the effects of cosmic rays and the
interstellar medium. The possible GeV excess, assumed to follow the spatial
distribution of the prompt gamma-rays produced in the annihilation of dark
matter particles following a generalized NFW profile with an inner slope
$\gamma$, has been analyzed in some regions of interest. The introduction
of such an additional component centered at the Galactic center is found to
have improved the goodness of fit to the data significantly in all background
template models regardless of whether the excess spectrum is fixed or not. Our
results thus suggest that the presence of a statistically significant GeV
excess in the inner Galaxy is robust, though its spectrum depends on the DGE
model adopted in the analysis. The possible physical origin of the GeV excess
component is discussed, and in the dark matter model the annihilation cross
section of such particles is evaluated.
Comment: 14 pages, 9 figures. Accepted for publication in PRD, moderate
revision but main conclusions unchanged
Cascade energy optimization for waste heat recovery in distributed energy systems
The efficiency of distributed energy systems can be significantly increased through waste heat recovery from industry or power generation. The technologies used for this process typically depend on the quality and temperature grades of the waste heat. To maximize the efficiency of cascade heat utilization, it is important to optimize the choice of waste heat recovery technologies and their operation. In this paper, a detailed mixed integer linear programming optimization model is proposed for waste heat recovery in a district-scale microgrid. The model can distinguish waste heat quality for planning and operation optimization of distributed energy systems. Heat utilization technologies are formulated in this developed model and categorized into different temperature grades. The developed model is validated using four typical cases under different settings of system operation and business models. It is found that the optimization model, by distinguishing waste heat temperature grades, can increase energy cost savings by around 5% compared to models that do not consider them. Additionally, the results indicate that the developed model can provide more realistic system configurations and technology dispatch.
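The temperature-grade matching idea behind cascade utilization can be illustrated with a toy allocation: serve the technology with the highest minimum-temperature requirement first, using the hottest available heat stream. The paper formulates this as a full MILP with operational constraints; the greedy sketch below, with made-up stream and technology data, only captures the grade-matching logic:

```python
def allocate_waste_heat(streams, technologies):
    """Toy cascade allocation of waste heat to recovery technologies.
    streams: list of (temperature_C, heat_kW) waste heat sources.
    technologies: list of (name, min_temperature_C, capacity_kW).
    Returns a dispatch plan of (name, stream_temperature_C, heat_kW)."""
    streams = sorted(streams, reverse=True)                 # hottest first
    technologies = sorted(technologies, key=lambda t: -t[1])  # strictest first
    plan = []
    for name, t_min, cap in technologies:
        for k, (temp, q) in enumerate(streams):
            if temp >= t_min and q > 0 and cap > 0:
                used = min(q, cap)                # serve as much as allowed
                plan.append((name, temp, used))
                streams[k] = (temp, q - used)     # deplete the stream
                cap -= used
    return plan
```

Reserving high-temperature heat for high-grade technologies is exactly what a temperature-blind model cannot do, which is the source of the cost-saving gap the paper quantifies.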
Search for a gamma-ray line feature from a group of nearby Galaxy clusters with Fermi LAT Pass 8 data
Galaxy clusters are the largest gravitationally bound objects in the universe
and may be suitable targets for indirect dark matter searches. With 85 months
of Fermi-LAT Pass 8 publicly available data, we analyze the gamma-ray emission
in the directions of 16 nearby Galaxy Clusters with an unbinned likelihood
analysis. No globally statistically significant gamma-ray line feature is
identified, although a tentative line signal may be present. The 95\%
confidence level upper limits on the velocity-averaged cross section of dark
matter particles annihilating into a photon pair (i.e.,
$\langle\sigma v\rangle_{\chi\chi\to\gamma\gamma}$) are derived. Unless very
optimistic boost factors of dark matter annihilation in these galaxy clusters
are assumed, such constraints are much weaker than the bounds set by the
Galactic gamma-ray data.
Comment: The version published in Phys. Rev. D, minor revision (10 pages
including 4 eps figures)
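A line search of this kind boils down to a likelihood-ratio test: compare the best fit with a line component of free normalization against the background-only fit. The sketch below is a crude binned, grid-scan stand-in for the unbinned likelihood analysis used in such work, with all data shapes invented for illustration:

```python
import math

def poisson_loglike(counts, mu):
    # Poisson log-likelihood up to a constant (log(c!) cancels in ratios).
    return sum(c * math.log(m) - m for c, m in zip(counts, mu))

def line_ts(counts, background, line_template, s_grid=None):
    """Test statistic TS = 2 * [max_s lnL(b + s*line) - lnL(b)], maximized
    over a coarse grid of non-negative signal normalizations s."""
    if s_grid is None:
        s_grid = [0.5 * k for k in range(1, 201)]  # s in (0, 100]
    l0 = poisson_loglike(counts, background)
    best = l0                                      # s = 0 is always allowed
    for s in s_grid:
        mu = [b + s * t for b, t in zip(background, line_template)]
        best = max(best, poisson_loglike(counts, mu))
    return 2.0 * (best - l0)
```

Large TS values over many trial energies must then be corrected for the look-elsewhere effect before quoting a global significance, which is why a locally tentative signal can remain globally insignificant.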