
    Reconstructing pedigrees using probabilistic analysis of ISSR amplification

    Data obtained from ISSR amplification are readily extracted but only indicate, for each gene, whether a specific allele is present. From this partial information we provide a probabilistic method to reconstruct the pedigree of families of diploid cultivars. The method determines, for each individual, the most likely pair of parents amongst all older individuals, according to a probability measure. The construction of this measure relies on the fact that the probability of observing the specific alleles in the child, given the status of the parents, does not depend on the generation and is the same for each gene. This assumption is justified by a convergence result for gene frequencies, which is proved here. Our reconstruction method is applied to a family of 85 living accessions of the common broom Cytisus scoparius.
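
    A minimal sketch of the parent-pair search this describes, assuming binary band-presence data; the conditional probability table P_CHILD is a hypothetical stand-in for the probability measure constructed in the paper:

```python
# Sketch: likelihood-based parent-pair assignment from dominant (band
# present/absent) ISSR data.  P_CHILD is a hypothetical stand-in for the
# probability measure constructed in the paper; by the paper's assumption
# it depends on neither the marker nor the generation.
from itertools import combinations
import numpy as np

P_CHILD = {(0, 0): 0.05, (0, 1): 0.45, (1, 0): 0.45, (1, 1): 0.80}

def log_likelihood(child, p1, p2):
    """Sum over markers of log P(child band status | parents' band status)."""
    ll = 0.0
    for c, a, b in zip(child, p1, p2):
        p = P_CHILD[(a, b)]
        ll += np.log(p if c == 1 else 1.0 - p)
    return ll

def best_parent_pair(bands, child, older):
    """Most likely unordered parent pair for `child` among `older` individuals.

    bands[i][g] is 1 if individual i shows a band at marker g, else 0.
    """
    return max(combinations(older, 2),
               key=lambda ab: log_likelihood(bands[child],
                                             bands[ab[0]], bands[ab[1]]))
```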

    Effective medical surplus recovery

    We analyze not-for-profit Medical Surplus Recovery Organizations (MSROs) that manage the recovery of surplus (unused or donated) medical products to fulfill the needs of underserved healthcare facilities in the developing world. Our work is inspired by an award-winning North American non-governmental organization (NGO) that matches the uncertain supply of medical surplus with the receiving parties’ needs. In particular, this NGO adopts a recipient-driven resource allocation model, which grants recipients access to an inventory database, and each recipient selects products of limited availability to fill a container based on its preferences. We first develop a game theoretic model to investigate the effectiveness of this approach. This analysis suggests that the recipient-driven model may induce competition among recipients and lead to a loss in value provision through premature orders. Further, contrary to the common wisdom from traditional supply chains, full inventory visibility in our setting may accelerate premature orders and lead to a loss of effectiveness. Accordingly, we identify operational mechanisms to help MSROs deal with this problem. These are: (i) appropriately selecting container capacities while limiting the inventory availability visible to recipients and increasing the acquisition volumes of supplies, (ii) eliminating recipient competition through exclusive single-recipient access to MSRO inventory, and (iii) focusing on learning recipient needs as opposed to providing them with supply information, and switching to a provider-driven resource allocation model. We use real data from the NGO that inspired the study and show that the proposed improvements can substantially increase the value provided to recipients.
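
    A toy simulation, not the paper's game-theoretic model, that illustrates the competition effect described above: with a rival drawing from the same inventory, a recipient that holds out for well-matched items risks never filling its container, so less selective (premature) ordering can dominate. All quantities here are invented for illustration:

```python
# Toy model (invented for illustration): policies for filling a container of
# fixed capacity from a shared stream of items whose match value is
# Uniform(0, 1).  A competitor may claim any arriving item first.
import numpy as np

rng = np.random.default_rng(0)
CAPACITY, HORIZON = 10, 50  # hypothetical container size and time limit

def shipped_value(threshold, competitor=True):
    """Mean match value per item, or 0 if the container never fills."""
    load = []
    for _ in range(HORIZON):
        item = rng.uniform()
        if competitor and rng.random() < 0.5:
            continue                      # the rival claimed this item
        if item >= threshold:
            load.append(item)
        if len(load) == CAPACITY:
            return float(np.mean(load))   # container full: ship it
    return 0.0                            # unfinished container ships nothing

for thr in (0.0, 0.5, 0.8):
    avg = np.mean([shipped_value(thr) for _ in range(5000)])
    print(f"acceptance threshold {thr:.1f}: mean shipped value {avg:.3f}")
# Under competition the patient policy (0.8) rarely fills a container, so
# recipients are pushed toward earlier, lower-value ("premature") orders.
```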

    Unsupervised empirical Bayesian multiple testing with external covariates

    In an empirical Bayesian setting, we provide a new multiple testing method for situations where an additional covariate is available that influences the probability of each null hypothesis being true. We measure the posterior significance of each test conditionally on the covariate and the data, leading to greater power. Using covariate-based prior information in an unsupervised fashion, we produce a list of significant hypotheses which differs in length and order from the list obtained by methods that do not take covariate information into account. Covariate-modulated posterior probabilities of each null hypothesis are estimated using a fast approximate algorithm. The new method is applied to expression quantitative trait loci (eQTL) data.
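
    A minimal sketch of one way such covariate-modulated posterior probabilities can be computed, assuming a two-groups normal model with a binned covariate; the fixed alternative density and the EM scheme here are illustrative simplifications, not the paper's algorithm:

```python
# Sketch: covariate-modulated empirical Bayes testing via a two-groups model
# in which the null probability pi0 varies with a binned external covariate.
# The alternative density f1 is fixed here for simplicity.
import numpy as np
from scipy.stats import norm

def covariate_local_fdr(z, x, n_bins=5, sigma1=3.0, n_iter=200):
    """Posterior null probabilities P(H0 | z, x) with bin-wise pi0(x)."""
    f0 = norm.pdf(z)                      # null density of the z-scores
    f1 = norm.pdf(z, scale=sigma1)        # assumed alternative density
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(x, edges)          # covariate bin of each test
    pi0 = np.full(n_bins, 0.9)            # initial guess
    for _ in range(n_iter):               # EM over the bin-wise pi0
        p0 = pi0[bins] * f0
        post = p0 / (p0 + (1 - pi0[bins]) * f1)    # E-step
        for b in range(n_bins):                    # M-step
            pi0[b] = post[bins == b].mean()
    p0 = pi0[bins] * f0
    return p0 / (p0 + (1 - pi0[bins]) * f1)

# Ranking by 1 - posterior gives a significance list that can differ in both
# length and order from a p-value-based list.
```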

    Forecasting Expected Shortfall: An Extreme Value Approach

    We compare estimates of Value at Risk and Expected Shortfall from AR(1)-GARCH(1,1)-type models (standard GARCH, GJR-GARCH, Component GARCH) to estimates produced using the Peaks Over Threshold method on the residuals of these models. We find that the conditional volatility model matters less than the choice of distribution for the innovations in the loss process, for which we compare the normal and the t-distribution. The Peaks Over Threshold estimates are found to improve upon the estimates of the original models, particularly in the case of normally distributed innovations.
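
    A sketch of the Peaks Over Threshold step under the usual GPD tail approximation, taking as given the standardized loss residuals z from a fitted GARCH-type model; the threshold choice and quantile levels are illustrative:

```python
# Sketch of Peaks Over Threshold VaR/ES for standardized losses z (large
# positive = bad), following the standard GPD tail approximation.
import numpy as np
from scipy.stats import genpareto

def pot_var_es(z, q=0.99, u_quantile=0.90):
    """Return (VaR_q, ES_q) of the standardized loss distribution."""
    u = np.quantile(z, u_quantile)               # ad hoc threshold choice
    exceed = z[z > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0)  # GPD shape xi, scale beta
    n, n_u = len(z), len(exceed)
    var = u + (beta / xi) * (((1 - q) * n / n_u) ** (-xi) - 1.0)
    es = var / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)  # requires xi < 1
    return var, es

# A day-t forecast then rescales by the model's conditional moments:
# VaR_t = mu_t + sigma_t * var  and  ES_t = mu_t + sigma_t * es.
```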

    Backtesting Parametric Value-at-Risk Estimates in the S&P500 Index

    Thanks to its wide diffusion in the industry, Value-at-Risk (VaR) has become a cornerstone of the growing and complex regulation of capital requirements (Basel Accords). For this reason, despite the theoretical limitations of VaR, studying how to improve the performance of this risk measure remains fundamental. This thesis concerns the parametric method used to estimate Value-at-Risk and the evaluation of such estimates. The accuracy in predicting future risks depends strictly on how the measure is calculated. The chosen method is the parametric approach based on various extensions of the ARCH-GARCH models, combined with different assumed distributions for the returns. The ARCH-GARCH models should be able to fit time series that show time-varying volatility (heteroskedasticity), while distributions more leptokurtic than the normal (such as Student’s t and GED), and their skewed versions, should provide better tail forecasts and hence better VaR estimates. The primary objective of this work is the evaluation of the estimates obtained from the models described above. For this purpose, several backtesting methods were performed and their results compared. Backtesting is a statistical procedure in which actual profits and losses are systematically compared to the corresponding VaR estimates. The backtesting methods considered here can be broadly divided into two categories: tests that evaluate a single VaR level (e.g. 1% or 5%), such as Kupiec’s Unconditional Coverage test, Christoffersen’s Conditional Coverage test, the Mixed Kupiec test and the Duration test; and tests that evaluate multiple VaR levels (and hence the entire density forecast), namely the Crnkovic-Drachman test, the Q-test and the Berkowitz test. The results are then compared in light of the strengths and weaknesses of each approach. A substantial heterogeneity emerged among the outcomes of these tests, especially between backtesting methods based on a single VaR level and those based on multiple VaR levels. This empirical work builds on the framework of Angelidis, Benos and Degiannakis (2003); however, different volatility models, distributions and backtesting methods were employed. For these reasons, a comparison between the results of the two studies is also provided.
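
    As an illustration of the first category, a minimal sketch of Kupiec's Unconditional Coverage test, which checks whether the observed VaR violation rate matches the nominal level:

```python
# Sketch of Kupiec's Unconditional Coverage test: under a correct VaR model
# the violation indicators are i.i.d. Bernoulli(p).
import numpy as np
from scipy.stats import chi2

def kupiec_uc(violations, p=0.01):
    """Likelihood-ratio test of violation rate = p; returns (LR, p-value).

    violations[t] is 1 if the day-t loss exceeded the day-t VaR forecast.
    Assumes 0 < sum(violations) < len(violations).
    """
    n = len(violations)
    x = int(np.sum(violations))
    pi_hat = x / n
    lr = -2.0 * ((n - x) * np.log(1 - p) + x * np.log(p)
                 - (n - x) * np.log(1 - pi_hat) - x * np.log(pi_hat))
    return lr, chi2.sf(lr, df=1)   # asymptotically chi-squared, 1 df
```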

    Bayesian nonparametric tests via sliced inverse modeling

    We study the problem of independence and conditional independence tests between categorical covariates and a continuous response variable, which has an immediate application in genetics. Instead of estimating the conditional distribution of the response given values of the covariates, we model the conditional distribution of the covariates given the discretized response (aka "slices"). By assigning a prior probability to each possible discretization scheme, we can efficiently compute a Bayes factor (BF) statistic for the independence (or conditional independence) test using a dynamic programming algorithm. Asymptotic and finite-sample properties such as the power and null distribution of the BF statistic are studied, and a stepwise variable selection method based on the BF statistic is further developed. We compare the BF statistic with some existing classical methods and demonstrate its statistical power through extensive simulation studies. We apply the proposed method to a mouse genetics data set aiming to detect quantitative trait loci (QTLs) and obtain promising results.
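
    A heavily simplified sketch of the slicing-plus-dynamic-programming idea, not the paper's exact algorithm: order the data by the response, score each contiguous slice by a Dirichlet-multinomial marginal likelihood of the covariate counts, and sum over all slicings with a per-slice prior penalty:

```python
# Simplified sketch: Bayes factor for dependence between an integer-coded
# categorical covariate x and a continuous response y, summing over all
# contiguous slicings of the response-ordered data by dynamic programming.
import numpy as np
from scipy.special import gammaln, logsumexp

def dm_logmarg(counts, a=1.0):
    """Dirichlet-multinomial log marginal likelihood of category counts."""
    k, m = len(counts), counts.sum()
    return (gammaln(k * a) - gammaln(k * a + m)
            + np.sum(gammaln(counts + a) - gammaln(a)))

def log_bf(x, y, n_cat, log_rho=np.log(0.1)):
    """log BF of the slicing model versus a single slice (independence)."""
    x = np.asarray(x)[np.argsort(y)]            # order the covariate by y
    n = len(x)
    cum = np.zeros((n + 1, n_cat))              # cumulative category counts
    for i, xi in enumerate(x):
        cum[i + 1] = cum[i]
        cum[i + 1, xi] += 1
    score = lambda i, j: dm_logmarg(cum[j] - cum[i])
    L = np.full(n + 1, -np.inf)                 # L[j]: evidence for x[:j]
    L[0] = 0.0
    for j in range(1, n + 1):                   # log_rho penalizes each slice
        L[j] = logsumexp([L[i] + log_rho + score(i, j) for i in range(j)])
    return L[n] - score(0, n)
```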

    Econometric Methods and Monte Carlo Simulations for Financial Risk Management

    Value-at-Risk (VaR) forecasting in the context of Monte Carlo simulations is evaluated. A range of parametric models is considered, namely the traditional Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model, the exponential GARCH and the GJR-GARCH, which are put in the context of the Gaussian and Student-t distributions. The returns of the S&P 500 provide the basis for the study. Monte Carlo simulations are then applied in the estimation and forecasting of index returns. Two forecasting periods are employed with respect to the Global Financial Crisis (GFC). The forecasting accuracy of the various models is evaluated in order to determine the applicability of these VaR estimation techniques in different market conditions. Results reveal that: (i) no model performs consistently in both volatile and stable market conditions; (ii) asymmetric volatility models offer better performance in the post-crisis forecasting period; (iii) all models underestimate risk in highly unstable market conditions.
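
    A sketch of the Monte Carlo step for one family of models, a GARCH(1,1) with Student-t innovations; the parameter values are hypothetical placeholders rather than estimates fitted to S&P 500 returns:

```python
# Sketch: Monte Carlo VaR from a GARCH(1,1) with unit-variance Student-t
# innovations.  Parameter values are hypothetical placeholders; in practice
# they come from maximum likelihood on the return series.
import numpy as np

rng = np.random.default_rng(42)
omega, alpha, beta, nu = 1e-6, 0.09, 0.90, 6.0

def mc_var(sigma2_t, r_t, horizon=10, n_paths=100_000, q=0.99):
    """q-level VaR of the cumulative return over `horizon` days."""
    sig2 = np.full(n_paths, sigma2_t)
    r = np.full(n_paths, r_t)
    total = np.zeros(n_paths)
    scale = np.sqrt((nu - 2.0) / nu)          # rescale t to unit variance
    for _ in range(horizon):
        sig2 = omega + alpha * r**2 + beta * sig2   # GARCH(1,1) recursion
        r = np.sqrt(sig2) * scale * rng.standard_t(nu, n_paths)
        total += r
    return -np.quantile(total, 1 - q)         # loss at the q-th percentile

print(mc_var(sigma2_t=1.2e-4, r_t=-0.01))     # e.g. 10-day 99% VaR
```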

    Reporting and Interpretation in Genome-Wide Association Studies

    In the context of genome-wide association studies we critique a number of methods that have been suggested for flagging associations for further investigation. The p-value is by far the most commonly used measure, but requires careful calibration when the a priori probability of an association is small, and discards information by not considering the power associated with each test. The q-value is a frequentist method by which the false discovery rate (FDR) may be controlled. We advocate the use of the Bayes factor as a summary of the information in the data with respect to the comparison of the null and alternative hypotheses, and describe a recently proposed approach to the calculation of the Bayes factor that is easily implemented. The combination of data across studies is straightforward using the Bayes factor approach, as are power calculations. The Bayes factor and the q-value provide complementary information and, when used in addition to the p-value, may reduce the number of reported findings that are subsequently not reproduced.
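
    A sketch of an easily implemented approximate Bayes factor of the kind advocated here (in the style of Wakefield's ABF), which needs only each SNP's effect estimate and standard error; the prior variance W below is an assumed default, not a value from the article:

```python
# Sketch of an approximate Bayes factor in favour of the null, in the style
# of Wakefield's ABF.  W (the prior variance of the log odds ratio under the
# alternative) is an assumed default.
import numpy as np

def approx_bf_null(beta_hat, se, W=0.21**2):
    """BF_01 from a SNP's effect estimate and standard error."""
    V = se**2                                  # sampling variance
    z2 = (beta_hat / se) ** 2
    return np.sqrt((V + W) / V) * np.exp(-0.5 * z2 * W / (V + W))

# With prior probability pi that the SNP is associated, the posterior odds
# of the null are approx_bf_null(b, s) * (1 - pi) / pi, which makes both the
# small-pi calibration of findings and the combination of independent
# studies (multiply the BFs) straightforward.
```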
