
    Combining perturbation theories with halo models

    We investigate the building of unified models that can predict the matter-density power spectrum and the two-point correlation function from very large to small scales, being consistent with perturbation theory at low k and with halo models at high k. We use a Lagrangian framework to re-interpret the halo model and to decompose the power spectrum into "2-halo" and "1-halo" contributions, related to "perturbative" and "non-perturbative" terms. We describe a simple implementation of this model and present a detailed comparison with numerical simulations, from k \sim 0.02 up to 100 h Mpc^{-1}, and from x \sim 0.02 up to 150 h^{-1} Mpc. We show that the 1-halo contribution contains a counterterm that ensures a k^2 tail at low k, which is important in order not to spoil the predictions on the scales probed by baryon acoustic oscillations, k \sim 0.02 to 0.3 h Mpc^{-1}. On the other hand, we show that standard perturbation theory is inadequate for the 2-halo contribution, because higher-order terms grow too fast at high k, so that resummation schemes must be used. We describe a simple implementation, based on a 1-loop "direct steepest-descent" resummation for the 2-halo contribution, that allows fast numerical computations, and we check that we obtain a good match to simulations at low and high k. Our simple implementation already fares better than standard 1-loop perturbation theory on large scales and than simple fits to the power spectrum at high k, with a typical accuracy of 1% on large scales and 10% on small scales. We obtain similar results for the two-point correlation function. However, there remains room for improvement on the transition scale between the 2-halo and 1-halo contributions, which may be the most difficult regime to describe.
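    As a schematic summary of the decomposition above (notation ours, not necessarily the paper's exact expressions), the power spectrum splits as

        P(k) = P_{2-halo}(k) + P_{1-halo}(k),

    where the 2-halo term reduces to the (resummed) perturbative prediction on large scales, while the counterterm in the 1-halo term enforces

        P_{1-halo}(k) \propto k^2   as   k \to 0,

    instead of a shot-noise-like constant that would otherwise contaminate the baryon-acoustic-oscillation range k \sim 0.02 to 0.3 h Mpc^{-1}.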

    Combining forecasts from nested models

    Motivated by the common finding that linear autoregressive models forecast better than models that incorporate additional information, this paper presents analytical, Monte Carlo, and empirical evidence on the effectiveness of combining forecasts from nested models. In our analytics, the unrestricted model is true, but as the sample size grows, the DGP converges to the restricted model. This approach captures the practical reality that the predictive content of variables of interest is often low. We derive MSE-minimizing weights for combining the restricted and unrestricted forecasts. In the Monte Carlo and empirical analyses, we compare the effectiveness of our combination approach against related alternatives, such as Bayesian estimation.

    Combining forecasts from nested models

    Motivated by the common finding that linear autoregressive models often forecast better than models that incorporate additional information, this paper presents analytical, Monte Carlo, and empirical evidence on the effectiveness of combining forecasts from nested models. In our analytics, the unrestricted model is true, but a subset of the coefficients is treated as being local-to-zero. This approach captures the practical reality that the predictive content of variables of interest is often low. We derive MSE-minimizing weights for combining the restricted and unrestricted forecasts. Monte Carlo and empirical analyses verify the practical effectiveness of our combination approach.
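    The role of MSE-minimizing combination weights can be illustrated with a small generic sketch in Python (this uses the textbook empirical weight estimated from past forecast errors, not the paper's analytical local-to-zero weights; all function names below are ours):

        import numpy as np

        def combination_weight(e_restricted, e_unrestricted):
            """Weight on the restricted forecast that minimizes in-sample MSE.

            The combined forecast is y_c = w * y_R + (1 - w) * y_U, so its error
            is w * e_R + (1 - w) * e_U.  Minimizing the sample mean of the squared
            combined error over w gives the closed form below.
            """
            e_r = np.asarray(e_restricted, dtype=float)
            e_u = np.asarray(e_unrestricted, dtype=float)
            d = e_r - e_u
            denom = np.mean(d ** 2)
            if denom == 0.0:                     # identical forecasts: weight is arbitrary
                return 0.5
            w = np.mean(e_u * (e_u - e_r)) / denom
            return float(np.clip(w, 0.0, 1.0))   # keep the weight in [0, 1]

        def combine(y_restricted, y_unrestricted, w):
            """Convex combination of the restricted and unrestricted point forecasts."""
            return w * y_restricted + (1.0 - w) * y_unrestricted

        # Hypothetical usage with past one-step-ahead forecast errors:
        # w = combination_weight(past_errors_ar, past_errors_big_model)
        # y_hat = combine(next_ar_forecast, next_big_model_forecast, w)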

    Combining Thesaurus Knowledge and Probabilistic Topic Models

    In this paper we present an approach for introducing thesaurus knowledge into probabilistic topic models. The main idea is based on the assumption that the frequencies of semantically related words and phrases that occur in the same texts should be enhanced, which increases their contribution to the topics found in these texts. We have conducted experiments with several thesauri and found that, for improving topic models, it is useful to utilize domain-specific knowledge. If a general thesaurus such as WordNet is used, the thesaurus-based improvement of topic models can be achieved by excluding hyponymy relations in the combined topic models.
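    A minimal sketch of the general idea in Python (illustrative only, not the authors' exact model; the boost rule and all names below are ours): before fitting a topic model, increase the weight of words that co-occur in a document with thesaurus-related words.

        # Illustrative pre-processing step: boost counts of thesaurus-related words
        # that co-occur in the same document, then fit any count-based topic model
        # (e.g., LDA) on the modified weights.  The boost factor is a free choice.

        from collections import Counter

        def boost_related_counts(doc_tokens, thesaurus, boost=2.0):
            """Return a token -> weight map with related co-occurring words enhanced.

            doc_tokens: list of tokens in one document.
            thesaurus:  dict mapping a word to a set of semantically related words.
            """
            counts = Counter(doc_tokens)
            present = set(counts)
            weights = {}
            for word, c in counts.items():
                related_here = thesaurus.get(word, set()) & (present - {word})
                # Enhance the word's weight if a related word occurs in the same text.
                weights[word] = c * boost if related_here else float(c)
            return weights

        # Hypothetical usage:
        # thesaurus = {"car": {"vehicle", "automobile"}, "vehicle": {"car"}}
        # boost_related_counts(["car", "vehicle", "road"], thesaurus)
        # -> {"car": 2.0, "vehicle": 2.0, "road": 1.0}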

    Combining Models of Approximation with Partial Learning

    In Gold's framework of inductive inference, the model of partial learning requires the learner to output exactly one correct index for the target object, and only the target object, infinitely often. Since infinitely many of the learner's hypotheses may be incorrect, it is not obvious whether a partial learner can be modified to "approximate" the target object. Fulk and Jain (Approximate inference and scientific method. Information and Computation 114(2):179--191, 1994) introduced a model of approximate learning of recursive functions. The present work extends their research and solves an open problem of Fulk and Jain by showing that there is a learner which approximates and partially identifies every recursive function by outputting a sequence of hypotheses which, in addition, are almost all finite variants of the target function. The subsequent study is dedicated to the question of how these findings generalise to the learning of r.e. languages from positive data. Here, three variants of approximate learning are introduced and investigated with respect to whether they can be combined with partial learning. Following the line of Fulk and Jain's research, further investigations provide conditions under which partial language learners can eventually output only finite variants of the target language. The combinability of other partial learning criteria is also briefly studied.
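    For reference, the partial-learning requirement described above can be stated as follows (a standard formulation in our notation; the abstract's own phrasing is the authority here):

        A learner M partially learns a recursive function f if, when fed the graph of f,
        M outputs a sequence of hypotheses (indices) e_1, e_2, e_3, ... in which exactly one
        index e occurs infinitely often, and that index is a correct program for f, i.e.
        \varphi_e = f.  All other indices may be wrong, but each appears only finitely often.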

    Rho meson properties from combining QCD-based models

    Aiming at the calculation of the properties of rho mesons, non-perturbative QCD-based methods are discussed with regard to their potential as well as their shortcomings. The latter are overcome by combining these techniques. The utilized methods are (i) the chiral constituent quark model deduced from the instanton vacuum model and large-N_c arguments, (ii) chiral perturbation theory unitarized by the inverse amplitude method, and (iii) QCD sum rules. Particular advantages of combining these methods are the absence of unphysical quark-production thresholds and parameter-free results. Already in the chiral limit and in leading order in 1/N_c one obtains a reasonable result for the mass of the rho meson, namely m_rho = 790 \pm 30 MeV. Using the KSFR relation, the universality of the rho-meson coupling is recovered. The latter is found to be g = 6.0 \pm 0.3.
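    As a quick consistency check (assuming the standard KSFR form m_rho^2 = 2 g^2 f_pi^2 and a pion decay constant f_pi \approx 93 MeV, both our assumptions rather than values quoted above):

        g = m_rho / (\sqrt{2} f_pi) \approx 790 / (1.414 \times 93) \approx 6.0,

    consistent with the quoted coupling g = 6.0 \pm 0.3.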

    When topic models disagree: keyphrase extraction with multiple topic models

    We explore how the unsupervised extraction of topic-related keywords benefits from combining multiple topic models. We show that averaging multiple topic models, inferred from different corpora, leads to more accurate keyphrases than using a single topic model or other state-of-the-art techniques. The experiments confirm the intuitive idea that a prerequisite for a significant benefit from combining multiple models is that the models should be sufficiently different, i.e., they should provide distinct contexts in terms of topical word importance.
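    A minimal sketch of the averaging idea in Python (illustrative only; the scoring rule and names below are ours, not the paper's exact method): average each word's topical importance across models, then rank candidate keyphrases by the mean importance of their words.

        def average_importance(models):
            """models: list of dicts mapping word -> topical importance (one per model)."""
            words = set().union(*models)
            return {w: sum(m.get(w, 0.0) for m in models) / len(models) for w in words}

        def rank_keyphrases(candidates, avg_importance):
            """candidates: list of keyphrases, each a non-empty list of words."""
            def score(phrase):
                return sum(avg_importance.get(w, 0.0) for w in phrase) / len(phrase)
            return sorted(candidates, key=score, reverse=True)

        # Hypothetical usage with two toy "models":
        # models = [{"neural": 0.9, "network": 0.8}, {"neural": 0.7, "graph": 0.6}]
        # rank_keyphrases([["neural", "network"], ["graph"]], average_importance(models))
        # -> [["neural", "network"], ["graph"]]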