7,951 research outputs found

    An Ensemble EM Algorithm for Bayesian Variable Selection

    We study the Bayesian approach to variable selection in the context of linear regression. Motivated by recent work of Rockova and George (2014), we propose an EM algorithm that returns the MAP estimate of the set of relevant variables. Due to its particular updating scheme, our algorithm can be implemented efficiently without inverting a large matrix in each iteration and can therefore scale up to big data. We also show that the MAP estimate returned by our EM algorithm achieves variable selection consistency even when $p$ diverges with $n$. In practice, our algorithm can get stuck at local modes, a common problem with EM algorithms. To address this issue, we propose an ensemble EM algorithm, in which we repeatedly apply the EM algorithm to a subset of the samples with a subset of the covariates, and then aggregate the variable selection results across those bootstrap replicates. Empirical studies demonstrate the superior performance of the ensemble EM algorithm.
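
    The resampling-and-aggregation step described above can be sketched as follows. The base selector here is a toy correlation screen standing in for the paper's EM algorithm, and the subsampling fractions, threshold, and function names are illustrative assumptions, not the authors' settings:

    ```python
    import numpy as np

    def ensemble_select(X, y, base_selector, n_boot=50, row_frac=0.8,
                        col_frac=0.5, threshold=0.5, rng=None):
        """Aggregate variable-selection results over bootstrap replicates.

        Each replicate runs `base_selector` (a stand-in for the EM algorithm
        of the abstract) on a random subset of samples and covariates; the
        final set keeps variables selected in more than `threshold` of the
        replicates in which they appeared."""
        rng = np.random.default_rng(rng)
        n, p = X.shape
        hits = np.zeros(p)    # times each variable was selected
        tried = np.zeros(p)   # times each variable was in the candidate pool
        for _ in range(n_boot):
            rows = rng.choice(n, size=int(row_frac * n), replace=True)
            cols = rng.choice(p, size=max(1, int(col_frac * p)), replace=False)
            tried[cols] += 1
            sel = base_selector(X[np.ix_(rows, cols)], y[rows])
            hits[cols[sel]] += 1
        freq = np.divide(hits, tried, out=np.zeros(p), where=tried > 0)
        return np.flatnonzero(freq > threshold), freq

    def corr_screen(X, y, keep=2):
        """Toy base selector: keep the `keep` covariates most correlated with y."""
        score = np.abs(X.T @ (y - y.mean()))
        return np.argsort(score)[-keep:]
    ```

    With a strong two-variable signal, the selection frequencies of the true covariates approach one while noise covariates rarely clear the threshold.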

    A Variational Algorithm for Bayesian Variable Selection

    There has been intense development on the estimation of sparse regression coefficient vectors in statistics, machine learning and related fields. In this paper, we focus on the Bayesian approach to this problem, where sparsity is incorporated through the so-called spike-and-slab prior on the coefficients. Instead of relying on MCMC for posterior inference, we propose a fast and scalable algorithm based on a variational approximation to the posterior distribution. The updating scheme employed by our algorithm differs from the one proposed by Carbonetto and Stephens (2012). Those changes turn out to be crucial for showing that our algorithm achieves asymptotic consistency even when the feature dimension diverges exponentially fast with the sample size. Empirical results demonstrate the effectiveness and efficiency of the proposed algorithm.
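
    A generic mean-field coordinate-ascent (CAVI) update for a spike-and-slab linear model looks roughly like the following. This is the standard scheme with known noise and slab variances; the abstract states that its actual updating scheme differs from Carbonetto and Stephens (2012), and those details are not reproduced here:

    ```python
    import numpy as np

    def cavi_spike_slab(X, y, pi=0.1, sigma2=1.0, sigma2_b=1.0,
                        n_iter=100, tol=1e-8):
        """Mean-field coordinate ascent for spike-and-slab linear regression.

        q(beta_j) = alpha_j * N(mu_j, s2_j) + (1 - alpha_j) * delta_0.
        Noise variance `sigma2` and slab variance `sigma2_b` are treated as
        known for simplicity; a generic sketch, not the paper's exact scheme."""
        n, p = X.shape
        xtx = np.einsum('ij,ij->j', X, X)        # ||x_j||^2 for each column
        s2 = sigma2 / (xtx + sigma2 / sigma2_b)  # posterior slab variances
        alpha = np.full(p, pi)
        mu = np.zeros(p)
        r = y - X @ (alpha * mu)                 # residual with all effects in
        logit_pi = np.log(pi / (1 - pi))
        for _ in range(n_iter):
            alpha_old = alpha.copy()
            for j in range(p):
                r = r + X[:, j] * (alpha[j] * mu[j])      # remove effect j
                mu[j] = s2[j] / sigma2 * (X[:, j] @ r)
                logit = (logit_pi + 0.5 * np.log(s2[j] / sigma2_b)
                         + mu[j] ** 2 / (2 * s2[j]))
                alpha[j] = 1.0 / (1.0 + np.exp(-logit))
                r = r - X[:, j] * (alpha[j] * mu[j])      # add effect j back
            if np.max(np.abs(alpha - alpha_old)) < tol:
                break
        return alpha, mu
    ```

    The residual is updated in place as each coordinate changes, so a full sweep costs O(np) rather than requiring any matrix inversion.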

    Implications of the first AMS-02 measurement for dark matter annihilation and decay

    In light of the first measurement of the positron fraction by the AMS-02 experiment, we perform a detailed global analysis on the interpretation of the latest data of PAMELA, Fermi-LAT, and AMS-02 in terms of dark matter (DM) annihilation and decay in various propagation models. The allowed regions for the DM particle mass and annihilation cross section or decay lifetime are obtained for channels with leptonic final states: $2e$, $2\mu$, $2\tau$, $4e$, $4\mu$ and $4\tau$. We show that for the conventional astrophysical background the AMS-02 positron fraction data alone favour a DM particle mass $\sim 500\ (800)$ GeV if DM particles annihilate dominantly into $2\mu\ (4\mu)$ final states, which is significantly lower than that favoured by the Fermi-LAT data of the total flux of electrons and positrons. The allowed regions by the two experiments do not overlap at a high confidence level ($99.99999\%$ C.L.). We consider a number of propagation models with different halo heights $Z_{h}$, diffusion parameters $D_{0}$ and $\delta_{1/2}$, and power indices of primary nucleon sources $\gamma_{p1/p2}$. The normalization and the slope of the electron background are also allowed to vary. We find that the tension between the two experiments can only be slightly reduced in the propagation model with large $Z_{h}$ and $D_{0}$. The consistency of the fit is improved for annihilation channels with $2\tau$ and $4\tau$ final states, which favour TeV-scale DM particles with large cross sections above $\sim 10^{-23}\ \text{cm}^3\,\text{s}^{-1}$. In all the considered leptonic channels, the current data favour the scenario of DM annihilation over DM decay. In the decay scenario, charge-asymmetric DM decay is slightly favoured.
    Comment: 27 pages, 12 figures, 3 tables, in-depth discussions on the uncertainties in backgrounds and propagation models added, version to appear in JCA

    Tree-Structured Reinforcement Learning for Sequential Object Localization

    Existing object proposal algorithms usually search for possible object regions over multiple locations and scales separately, ignoring the interdependency among different objects and deviating from the human perception procedure. To incorporate global interdependency between objects into object localization, we propose an effective Tree-structured Reinforcement Learning (Tree-RL) approach that sequentially searches for objects by fully exploiting both the current observation and historical search paths. The Tree-RL approach learns multiple search policies by maximizing a long-term reward that reflects localization accuracy over all the objects. Starting with the entire image as a proposal, the Tree-RL agent sequentially discovers multiple objects via a tree-structured traversal scheme. By allowing multiple near-optimal policies, Tree-RL offers more diversity in search paths and is able to find multiple objects with a single feed-forward pass. Tree-RL can therefore better cover objects of various scales, which is quite appealing in the context of object proposal. Experiments on PASCAL VOC 2007 and 2012 validate the effectiveness of Tree-RL, which achieves recalls comparable to those of current object proposal algorithms with far fewer candidate windows.
    Comment: Advances in Neural Information Processing Systems 201
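
    The tree-structured traversal can be illustrated with fixed, hand-coded branching actions in place of the learned policies. The two child actions below (a scale action that shrinks the window in place and a shift action that moves it toward the bottom-right) and the tree depth are arbitrary placeholders, not the actions learned by Tree-RL:

    ```python
    from collections import deque

    def tree_proposals(width, height, depth=3):
        """Enumerate candidate windows (x, y, w, h) via a binary tree
        traversal: each node spawns a 'scale' child and a 'shift' child,
        and every node visited is emitted as a proposal."""
        root = (0, 0, width, height)
        proposals, queue = [root], deque([(root, 0)])
        while queue:
            (x, y, w, h), d = queue.popleft()
            if d == depth:
                continue
            children = (
                (x, y, 3 * w // 4, 3 * h // 4),                  # scale action
                (x + w // 4, y + h // 4, 3 * w // 4, 3 * h // 4) # shift action
            )
            for child in children:
                proposals.append(child)
                queue.append((child, d + 1))
        return proposals
    ```

    A depth-3 tree yields 15 windows from a single traversal, all contained within the image bounds.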

    Distributions of Gamma-Ray Bursts and Blazars in the $L_{\rm p}$-$E_{\rm p}$ Plane and Possible Implications for their Radiation Physics

    We present a spectral analysis for a sample of redshift-known GRBs observed with {\em Fermi}/GBM. Together with the results derived from our systematic spectral energy distribution modeling with leptonic models for a {\em Fermi}/LAT blazar sample, we compare the distributions of the GRBs and the blazars by plotting the synchrotron peak luminosity ($L_{\rm s}$) and the corresponding peak photon energy ($E_{\rm s}$) of blazars in the $L_{\rm p}$-$E_{\rm p}$ plane of GRBs, where $L_{\rm p}$ and $E_{\rm p}$ are the peak luminosity and peak photon energy of the GRB time-integrated $\nu f_\nu$ spectrum, respectively. The GRBs are in the high-$L_{\rm p}$, high-$E_{\rm p}$ corner of the plane and a tight $L_{\rm p}$-$E_{\rm p}$ relation is found, i.e., $L_{\rm p}\propto E_{\rm p}^{2.13^{+0.54}_{-0.46}}$. Both FSRQs and LBLs are clustered in the low-$E_{\rm p}$, low-$L_{\rm p}$ corner. IBLs and HBLs have $E_{\rm s}\sim 2\times 10^{-3}-10^{2}$ keV and $L_{\rm s}\sim 10^{44}-10^{47}$ erg s$^{-1}$, but no dependence of $L_{\rm s}$ on $E_{\rm s}$ is found. We show that the tight $L_{\rm p}$-$E_{\rm p}$ relation of GRBs is potentially explained by the synchrotron radiation of fast-cooling electrons in a highly magnetized ejecta, and the weak anti-correlation of $L_{\rm s}$-$E_{\rm s}$ for FSRQs and LBLs may be attributed to synchrotron radiation of slow-cooling electrons in a moderately magnetized ejecta. The distributions of IBLs and HBLs in the $L_{\rm p}$-$E_{\rm p}$ plane may be interpreted as synchrotron radiation of fast-cooling electrons in a matter-dominated ejecta. These results may present a unified picture for the radiation physics of relativistic jets in GRBs and blazars within the framework of leptonic synchrotron radiation models.
    Comment: 23 pages, 2 tables, 2 figures. Accepted for publication in Ap

    A Fast Differential Grouping Algorithm for Large Scale Black-Box Optimization

    Decomposition plays a significant role in cooperative co-evolution, which shows great potential in large scale black-box optimization. However, current popular decomposition algorithms generally require sampling and evaluating a large number of solutions for interdependency detection, which is very time-consuming. To address this issue, this study proposes a new decomposition algorithm named fast differential grouping (FDG). FDG first identifies the type of an instance by detecting the interdependencies of a few pairs of variable subsets selected according to certain rules, and can thus rapidly complete the decomposition of a fully separable or fully nonseparable instance. For an identified partially separable instance, FDG converts the key decomposition process into a search in a binary tree by taking the corresponding variable subsets as tree nodes. This enables it to directly deduce the interdependency related to a child node by reusing the solutions sampled for the corresponding parent and sibling nodes. To support these operations, this study designs a normalized variable-subset-oriented interdependency indicator, which can adaptively generate decomposition thresholds according to its distribution and thus enhances decomposition accuracy. Computational complexity analysis and experimental results verify that FDG outperforms popular decomposition algorithms. Further tests indicate that FDG embedded in a cooperative co-evolution framework can achieve highly competitive optimization results compared with some state-of-the-art algorithms for large scale black-box optimization.
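
    The interdependency test underlying differential-grouping methods can be sketched at the pairwise level. This is the classic finite-difference check between two single variables, not FDG's subset-level indicator or its adaptive thresholds:

    ```python
    import numpy as np

    def interact(f, dim, i, j, x=None, delta=1.0, eps=1e-6):
        """Differential-grouping style interdependency test between variables
        i and j of a black-box function f: compare the effect of perturbing
        x_i before and after perturbing x_j. For a (additively) separable
        pair, the two effects are identical and the test returns False."""
        x = np.zeros(dim) if x is None else np.asarray(x, dtype=float)
        xi = x.copy()
        xi[i] += delta
        xj = x.copy()
        xj[j] += delta
        xij = xj.copy()
        xij[i] += delta
        d1 = f(xi) - f(x)     # effect of moving x_i at the base point
        d2 = f(xij) - f(xj)   # same move after x_j has changed
        return abs(d1 - d2) > eps
    ```

    Each pairwise check costs four function evaluations; the cost FDG saves comes from testing whole variable subsets at once and reusing samples along the tree.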

    Single Photon Source Driver Designed in ASIC

    The single photon source is an important part of a quantum key distribution (QKD) system. At present, single photon sources are large and structurally complex because of the many discrete components they use, and miniaturization is the trend in QKD systems. We integrate the entire laser-driver electronics into a single ASIC chip, which can drive a 1550 nm DFB laser in random pulse mode and greatly reduces the volume of the single photon source. We present the design of the chip, named LSD2018, and simulation results obtained before tape-out. The LSD2018 is fabricated in a 130 nm CMOS process and consists of a discriminator, an adjustable pulse generator, a bandgap reference, an SPI bus, and an amplitude-adjustable current pulse driver. The electronic random pulse from the driver ranges from 20 mA to 120 mA in amplitude and from 400 ps to 4 ns in pulse width. The parameters can be set over the SPI bus.
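
    Setting the driver parameters over SPI might look like the sketch below. The register map, bit widths, and code scaling are entirely hypothetical; the abstract only specifies the adjustable ranges (20-120 mA, 400 ps-4 ns) and that an SPI bus is used:

    ```python
    def spi_words(amp_ma, width_ps):
        """Pack hypothetical driver settings into 16-bit SPI words
        (address byte in the high byte, 8-bit data code in the low byte).
        Register addresses 0x01/0x02 and the linear code scaling are
        illustrative assumptions, not the LSD2018 register map."""
        if not 20 <= amp_ma <= 120:
            raise ValueError("amplitude out of the 20-120 mA range")
        if not 400 <= width_ps <= 4000:
            raise ValueError("width out of the 400 ps-4 ns range")
        amp_code = round((amp_ma - 20) * 255 / 100)        # 8-bit DAC code
        width_code = round((width_ps - 400) * 255 / 3600)  # 8-bit delay code
        return [(0x01 << 8) | amp_code, (0x02 << 8) | width_code]
    ```

    Range checks at the host side mirror the hardware limits, so out-of-range requests fail before any word is shifted out.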

    GeV excess in the Milky Way: The role of diffuse Galactic gamma-ray emission templates

    Several groups have analyzed the publicly available Fermi-LAT data and reported a spatially extended $\gamma$-ray excess of around $1-3$ GeV from the region surrounding the Galactic Center that might originate from the annihilation of dark matter particles with a rest mass $m_\chi \sim 30-40$ GeV. In this work we examine the role the diffuse Galactic gamma-ray emission (DGE) templates play in suppressing the GeV excess. For this purpose, we adopt in total 128 background templates generated by Ackermann et al. \cite{FermiLAT:2012aa} in the study of the Fermi-LAT observations of the diffuse gamma-ray emission, which consider the effects of cosmic rays and the interstellar medium. The possible GeV excess, assumed to follow the spatial distribution of the prompt gamma rays produced in the annihilation of dark matter particles with a generalized NFW profile with an inner slope $\alpha=1.2$, has been analyzed in several regions of interest. The introduction of such an additional component centered at the Galactic Center is found to improve the goodness of fit to the data significantly in all background template models, regardless of whether the excess spectrum is fixed or not. Our results thus suggest that the presence of a statistically significant GeV excess in the inner Galaxy is robust, though its spectrum depends on the DGE model adopted in the analysis. The possible physical origin of the GeV excess component is discussed, and in the dark matter model the annihilation cross section of such particles is evaluated.
    Comment: 14 pages, 9 figures. Accepted for publication in PRD, moderate revision but main conclusions unchanged
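
    The template-fitting step can be illustrated with a toy per-pixel Poisson likelihood-ratio test: fit the normalization of an additional (dark-matter-like) template on top of a fixed background template and report the improvement in the fit. The grid search and all numbers are illustrative, not the paper's analysis pipeline:

    ```python
    import numpy as np

    def test_statistic(data, bkg, sig, norms=np.linspace(0.0, 5.0, 501)):
        """TS = 2 * (max_n lnL(bkg + n*sig) - lnL(bkg)) for per-pixel
        Poisson counts; returns the TS and the best-fit signal norm.
        A toy stand-in for the binned template analysis of the abstract."""
        def lnL(mu):
            mu = np.clip(mu, 1e-12, None)
            return np.sum(data * np.log(mu) - mu)  # Poisson lnL up to a constant
        ll = np.array([lnL(bkg + n * sig) for n in norms])
        best = np.argmax(ll)
        return 2.0 * (ll[best] - lnL(bkg)), norms[best]
    ```

    When the data contain an injected signal component, the fitted normalization recovers it and the TS quantifies how much the extra template improves the fit.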

    Search for a gamma-ray line feature from a group of nearby Galaxy clusters with Fermi LAT Pass 8 data

    Galaxy clusters are the largest gravitationally bound objects in the universe and may be suitable targets for indirect dark matter searches. With 85 months of Fermi-LAT Pass 8 publicly available data, we analyze the gamma-ray emission in the directions of 16 nearby galaxy clusters with an unbinned likelihood analysis. No globally statistically significant $\gamma$-ray line feature is identified, while a tentative line signal may be present at $\sim 43$ GeV. The 95\% confidence level upper limits on the velocity-averaged cross section of dark matter particles annihilating into a pair of $\gamma$ rays (i.e., $\langle \sigma v \rangle_{\chi\chi\rightarrow \gamma\gamma}$) are derived. Unless very optimistic boost factors of dark matter annihilation in these galaxy clusters are assumed, such constraints are much weaker than the bounds set by the Galactic $\gamma$-ray data.
    Comment: The version published in Phys. Rev. D, minor revision (10 pages including 4 eps figures)
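
    Converting a line-flux upper limit into a limit on the annihilation cross section uses the standard annihilation flux formula. The sketch below assumes the usual convention dPhi/dE = <sigma v> J (dN/dE) / (8 pi m^2) with two photons per chi chi -> gamma gamma annihilation; the numbers in the usage are illustrative, not values from the paper:

    ```python
    import math

    def sigmav_upper_limit(flux_ul, m_chi, j_factor):
        """Upper limit on <sigma v> (cm^3 s^-1) for chi chi -> gamma gamma,
        given a line flux upper limit (ph cm^-2 s^-1), m_chi in GeV, and a
        J-factor in GeV^2 cm^-5.  With N_gamma = 2, the limit reduces to
        4 * pi * m_chi^2 * flux_ul / J."""
        return 4.0 * math.pi * m_chi ** 2 * flux_ul / j_factor
    ```

    For a hypothetical flux limit of 1e-10 ph cm^-2 s^-1 at the tentative 43 GeV mass and a J-factor of 1e17 GeV^2 cm^-5, the limit lands around a few times 1e-23 cm^3 s^-1, which illustrates why large boost factors are needed for cluster constraints to compete with Galactic ones.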