
    Robust Inference of Trees

    This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the strength of dependency between random variables by their mutual information. In this paper, reliability is achieved through Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in a posterior interval expectation for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers this substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees. Comment: 26 pages, 7 figures
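    A minimal sketch of the central quantity involved: it approximates lower and upper posterior estimates of mutual information under an imprecise Dirichlet model by evaluating the plug-in MI at the extreme posterior-mean tables (all s pseudo-counts placed in a single joint cell). The function names, this vertex approximation, and the choice s=2 are illustrative assumptions, not the exact interval expectations or the O(m^4) tree algorithm derived in the paper.

```python
import numpy as np

def mi(p):
    """Plug-in mutual information of a 2-D joint probability table p."""
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / (px * py)), 0.0)
    return terms.sum()

def idm_mi_interval(counts, s=1.0):
    """Crude [lower, upper] MI estimates from a 2-D contingency table.
    The IDM's extreme priors put all prior strength s on one cell; we take
    the min/max of the plug-in MI over those posterior-mean tables, which
    only approximates the exact interval expectation in the paper."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    values = []
    for idx in np.ndindex(counts.shape):
        shifted = counts.copy()
        shifted[idx] += s                      # concentrate prior mass on one cell
        values.append(mi(shifted / (n + s)))
    return min(values), max(values)

# Example: a weakly dependent pair observed on scarce data (invented counts).
table = np.array([[6, 1],
                  [2, 4]])
lo, hi = idm_mi_interval(table, s=2.0)
print(f"MI interval under the IDM (s=2): [{lo:.3f}, {hi:.3f}]")
# An edge would belong to the robust substructure only if it appears in every
# maximum-weight spanning tree consistent with such intervals.
```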

    Measurement errors in body size of sea scallops (Placopecten magellanicus) and their effect on stock assessment models

    Body-size measurement errors are usually ignored in stock assessments, but they may be important when body-size data (e.g., from visual surveys) are imprecise. We used experiments and models to quantify measurement errors and their effects on assessment models for sea scallops (Placopecten magellanicus). Errors in size data obscured modes from strong year classes and increased the frequency and size of the largest and smallest size classes, potentially biasing growth, mortality, and biomass estimates. Modeling techniques developed for errors in age data proved useful for errors in size data. In terms of goodness of fit of the model to the assessment data, it was more important to accommodate variance than bias. Models that accommodated size errors fitted the size data substantially better. We recommend experimental quantification of errors along with a modeling approach that accommodates measurement errors, because a direct algebraic approach was not robust and because error parameters were difficult to estimate in our assessment model. The importance of measurement errors depends on many factors and should be evaluated on a case-by-case basis.
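    As a rough illustration of how size-measurement error can be accommodated (the technique borrowed from the treatment of ageing error that the abstract mentions), the sketch below maps a "true" size composition through a measurement-error matrix before it would be compared with observed size frequencies. The bin edges, the normal error assumption, and the standard deviation are illustrative assumptions, not values estimated in the study.

```python
import numpy as np
from scipy.stats import norm

bins = np.arange(60, 161, 5)                  # shell-height bins (mm), illustrative
mids = 0.5 * (bins[:-1] + bins[1:])

def error_matrix(mids, sd=4.0, bias=0.0):
    """E[i, j] = P(measured in bin i | true size in bin j), columns sum to 1."""
    E = norm.pdf(mids[:, None], loc=mids[None, :] + bias, scale=sd)
    return E / E.sum(axis=0, keepdims=True)

E = error_matrix(mids)

# A hypothetical "true" composition with two year-class modes.
true_comp = norm.pdf(mids, 95, 6) + 0.6 * norm.pdf(mids, 120, 7)
true_comp /= true_comp.sum()

# Expected observed composition: modes are flattened and the tails inflated,
# mimicking the effect of measurement error described in the abstract.
observed_expected = E @ true_comp
print(np.round(observed_expected, 3))
```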

    A Bayesian Framework for Combining Valuation Estimates

    Obtaining more accurate equity value estimates is the starting point for stock selection, value-based indexing in a noisy market, and beating benchmark indices through tactical style rotation. Unfortunately, discounted cash flow, the method of comparables, and fundamental analysis typically yield discrepant valuation estimates. Moreover, the valuation estimates typically disagree with the market price. Can one form a superior valuation estimate by averaging over the individual estimates, including market price? This article suggests a Bayesian framework for combining two or more estimates into a superior valuation estimate. The framework justifies the common practice of averaging over several estimates to arrive at a final point estimate. Comment: Citations at http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=240309; Review of Quantitative Finance and Accounting, 30.3 (2008), forthcoming
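    One simple Bayesian combination rule consistent with the averaging intuition described here, though not necessarily the article's exact framework: treat each estimate, including market price, as an independent, unbiased Gaussian signal of the unknown value; under a diffuse prior the posterior mean is the precision-weighted average of the estimates. The figures and standard deviations below are invented for illustration.

```python
def combine(estimates, sds):
    """Posterior mean and sd of value given independent Gaussian estimates."""
    precisions = [1.0 / s**2 for s in sds]
    total = sum(precisions)
    mean = sum(e * w for e, w in zip(estimates, precisions)) / total
    return mean, total ** -0.5

# Hypothetical DCF, comparables, and market-price signals with judged reliabilities.
estimates = [48.0, 55.0, 60.0]
sds       = [8.0, 6.0, 4.0]
mean, sd = combine(estimates, sds)
print(f"combined value estimate: {mean:.2f} +/- {sd:.2f}")
```

    Less reliable estimates receive smaller weights, so the combined estimate is pulled toward the signals judged most precise rather than toward a simple arithmetic mean.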

    Bayesian Linear Regression

    The paper is concerned with Bayesian analysis under prior-data conflict, i.e. the situation in which the observed data are rather unexpected under the prior (and the sample size is not large enough to eliminate the influence of the prior). Two approaches to Bayesian linear regression modeling based on conjugate priors are considered in detail, namely the standard approach also described in Fahrmeir, Kneib & Lang (2007) and an alternative adaptation of the general construction procedure for exponential family sampling models. We recognize that, in contrast to some standard i.i.d. models such as the scaled normal model and the Beta-Binomial / Dirichlet-Multinomial model, where prior-data conflict is completely ignored, these regression models may show some reaction to prior-data conflict, albeit in a rather unspecific way. Finally, we briefly sketch the extension to a corresponding imprecise probability model in which, by considering sets of prior distributions instead of a single prior, prior-data conflict can be handled in a very appealing and intuitive way.
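    For reference, a minimal sketch of the textbook conjugate (normal-inverse-gamma) update that the standard approach to Bayesian linear regression builds on. The prior hyperparameters and the simulated data are illustrative, and the sketch does not implement the paper's alternative construction or its imprecise-probability extension.

```python
import numpy as np

def nig_posterior(X, y, m0, V0, a0, b0):
    """Posterior (m_n, V_n, a_n, b_n) for the conjugate model
    beta | sigma^2 ~ N(m0, sigma^2 V0), sigma^2 ~ Inv-Gamma(a0, b0)."""
    V0_inv = np.linalg.inv(V0)
    Vn_inv = V0_inv + X.T @ X
    Vn = np.linalg.inv(Vn_inv)
    mn = Vn @ (V0_inv @ m0 + X.T @ y)
    an = a0 + len(y) / 2.0
    bn = b0 + 0.5 * (y @ y + m0 @ V0_inv @ m0 - mn @ Vn_inv @ mn)
    return mn, Vn, an, bn

# Simulated data: intercept 1.0, slope 2.0, moderate noise (illustrative).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=20)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=20)

mn, Vn, an, bn = nig_posterior(X, y, m0=np.zeros(2), V0=np.eye(2), a0=2.0, b0=1.0)
print("posterior mean of beta:   ", np.round(mn, 3))
print("posterior mean of sigma^2:", round(bn / (an - 1), 3))
```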

    Constructing a knowledge economy composite indicator with imprecise data.

    This paper focuses on the construction of a composite indicator for the knowledge-based economy using imprecise data. Specifically, for some indicators we only have information on the bounds of the interval within which the true value is believed to lie. The proposed approach builds on a recent development in the Data Envelopment Analysis literature. Given the setting of evaluating countries, the paper distinguishes a ‘strong country in a weak environment’ scenario and a ‘weak country in a strong environment’ scenario, resulting in an upper and a lower bound, respectively, on each country's performance. Accordingly, we derive a classification of ‘benchmark countries’, ‘potential benchmark countries’, and ‘countries open to improvement’. Keywords: knowledge economy indicators; composite indicators; multiple imputation; benefit of the doubt; weight restrictions; Data Envelopment Analysis; data impreciseness.
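    A small sketch of a ‘benefit of the doubt’ (DEA-style) linear program with interval data, in the spirit of the bounds described above: for the upper bound the evaluated country takes the upper ends of its own intervals while its peers take the lower ends, and vice versa for the lower bound. The indicator values, country set, and the omission of weight restrictions are illustrative assumptions, not the paper's actual data or model.

```python
import numpy as np
from scipy.optimize import linprog

# Lower / upper bounds on normalized sub-indicators (rows = countries), invented.
lo = np.array([[0.6, 0.7, 0.5],
               [0.4, 0.9, 0.6],
               [0.8, 0.5, 0.7]])
hi = lo + 0.1

def bod_score(evaluated, peers):
    """max_w  w @ evaluated  s.t.  w @ row <= 1 for every row in peers, w >= 0."""
    res = linprog(c=-evaluated, A_ub=peers, b_ub=np.ones(len(peers)),
                  bounds=[(0, None)] * len(evaluated), method="highs")
    return -res.fun

for c in range(len(lo)):
    # 'strong country in a weak environment': own data at upper bounds, peers at lower
    upper = bod_score(hi[c], np.vstack([hi[c], np.delete(lo, c, axis=0)]))
    # 'weak country in a strong environment': own data at lower bounds, peers at upper
    lower = bod_score(lo[c], np.vstack([lo[c], np.delete(hi, c, axis=0)]))
    print(f"country {c}: composite score in [{lower:.2f}, {upper:.2f}]")
```

    A country whose lower-bound score already reaches 1 would be a ‘benchmark country’, one whose upper bound reaches 1 a ‘potential benchmark country’, and the rest are ‘countries open to improvement’.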