20 research outputs found

    Spatial matching of M configurations of points with a bioinformatics application

    In this paper, we present a model for the problem of matching M objects or configurations of points, generalizing the model proposed by Green and Mardia (2006). As a direct and simple application, we consider the case of three configurations, both with labelled and with unlabelled points. In both cases, we use data from a microarray experiment on gorilla, bonobo and human cultured fibroblasts published by Karaman et al. (2003). We find the matchings and the best affine transformation between the projections of the genes into a two-dimensional space, obtained by a multidimensional scaling technique.
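    The labelled case can be sketched in a few lines: project points into the plane with classical multidimensional scaling, then fit the best least-squares affine map between two labelled configurations. This is an illustrative sketch on synthetic data, not the paper's model or data:

    ```python
    import numpy as np

    def classical_mds(D, k=2):
        """Project a distance matrix D into k dimensions (classical MDS)."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
        B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
        w, v = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:k]                # top-k eigenpairs
        return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

    def fit_affine(X, Y):
        """Least-squares affine map (A, b) such that X @ A + b ~ Y (labelled points)."""
        Xh = np.hstack([X, np.ones((X.shape[0], 1))])  # homogeneous coordinates
        M, *_ = np.linalg.lstsq(Xh, Y, rcond=None)
        return M[:-1], M[-1]                           # A (2x2), b (2,)

    # toy check: recover a known affine transformation between two configurations
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 2))
    A_true, b_true = np.array([[1.2, 0.3], [-0.1, 0.9]]), np.array([0.5, -1.0])
    Y = X @ A_true + b_true
    A, b = fit_affine(X, Y)
    print(np.allclose(A, A_true), np.allclose(b, b_true))  # True True
    ```

    In the unlabelled case the correspondence between points must be inferred jointly with the transformation, which is where the Bayesian matching model comes in.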

    Bayesian non-linear matching of pairwise microarray gene expressions

    In this paper, we present a Bayesian non-linear model to analyze matched pairs of microarray expression data. The model generalizes standard linear matching models in terms of neural networks. As a practical application, we analyze data from patients with Acute Lymphoblastic Leukemia and find the best neural network model relating the expression levels of two cytogenetically different types of samples from these patients.
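    The idea of replacing a linear matching model with a small feed-forward network can be sketched as follows (synthetic data and a plain gradient-descent fit, not the paper's Bayesian model):

    ```python
    import numpy as np

    def fit_one_hidden_net(x, y, hidden=8, lr=0.05, epochs=3000, seed=0):
        """Fit y ~ w2 @ tanh(x*w1 + b1) + b2 by full-batch gradient descent:
        a one-hidden-layer network generalizing a linear matching model."""
        rng = np.random.default_rng(seed)
        w1 = rng.normal(0, 1, hidden); b1 = np.zeros(hidden)
        w2 = rng.normal(0, 0.1, hidden); b2 = 0.0
        n = len(x)
        for _ in range(epochs):
            h = np.tanh(np.outer(x, w1) + b1)          # (n, hidden)
            err = h @ w2 + b2 - y                      # residuals
            gw2 = h.T @ err / n; gb2 = err.mean()
            dh = np.outer(err, w2) * (1 - h ** 2)      # back-prop through tanh
            gw1 = (dh * x[:, None]).sum(axis=0) / n; gb1 = dh.mean(axis=0)
            w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2
        return lambda z: np.tanh(np.outer(z, w1) + b1) @ w2 + b2

    # hypothetical paired expression levels with a non-linear relation
    rng = np.random.default_rng(8)
    x = rng.uniform(-2, 2, 200)
    y = np.tanh(1.5 * x) + rng.normal(0, 0.05, 200)
    net = fit_one_hidden_net(x, y)
    print(float(np.mean((y - net(x)) ** 2)))  # residual MSE after training
    ```

    The paper places priors on the network weights and infers them, rather than fitting by point estimation as above.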

    Multiple hypothesis testing and clustering with mixtures of non-central t-distributions applied in microarray data analysis

    Multiple testing analysis based on clustering methodologies is usually applied in microarray data analysis for comparisons between pairs of groups. In this paper, we generalize this methodology to deal with multiple comparisons among more than two groups obtained from microarray gene expressions. Assuming normal data, we define a statistic, depending on the sample means and sample variances, that follows a non-central t-distribution. Since we consider multiple comparisons among groups, a mixture of non-central t-distributions is derived. The components of the mixture are estimated via a Bayesian approach, and the model is applied to a multiple comparison problem from a microarray experiment on gorilla, bonobo and human cultured fibroblasts.
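    The mixture of non-central t-distributions can be illustrated by simulation (a sketch with hypothetical degrees of freedom, non-centralities and weights, not the paper's estimated model). A non-central t variate is a standard normal shifted by the non-centrality, divided by the square root of an independent scaled chi-squared:

    ```python
    import math
    import numpy as np

    def sample_nct(df, nc, size, rng):
        """Draw non-central t variates: (Z + nc) / sqrt(chi2_df / df)."""
        z = rng.normal(size=size)
        v = rng.chisquare(df, size=size)
        return (z + nc) / np.sqrt(v / df)

    def sample_nct_mixture(dfs, ncs, weights, size, rng):
        """Mixture of non-central t components chosen with the given weights."""
        comp = rng.choice(len(weights), size=size, p=weights)
        out = np.empty(size)
        for k, (df, nc) in enumerate(zip(dfs, ncs)):
            mask = comp == k
            out[mask] = sample_nct(df, nc, mask.sum(), rng)
        return out

    rng = np.random.default_rng(1)
    x = sample_nct_mixture(dfs=[10, 10], ncs=[0.0, 3.0], weights=[0.7, 0.3],
                           size=200_000, rng=rng)
    # mean of a non-central t (df > 1): nc * sqrt(df/2) * Gamma((df-1)/2) / Gamma(df/2)
    c = math.sqrt(10 / 2) * math.gamma(4.5) / math.gamma(5.0)
    print(x.mean(), 0.3 * 3.0 * c)  # sample mean close to theoretical mixture mean
    ```

    In the paper the component weights and non-centralities are unknown and inferred in a Bayesian way; here they are fixed for illustration.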

    Using Weibull mixture distributions to model heterogeneous survival data

    In this article we use Bayesian methods to fit a Weibull mixture model with an unknown number of components to possibly right-censored survival data. This is done using the recently developed birth-death MCMC algorithm. We also show how to estimate the survivor function and the expected hazard rate from the MCMC output.
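    Given any one set of mixture weights, shapes and scales (e.g. one posterior draw), the survivor function and hazard rate follow in closed form. A sketch with hypothetical parameter values:

    ```python
    import numpy as np

    def weibull_mix_survival(t, weights, shapes, scales):
        """S(t) = sum_k w_k * exp(-(t/scale_k)^shape_k) for a Weibull mixture."""
        t = np.asarray(t, dtype=float)
        S = np.zeros_like(t)
        for w, a, b in zip(weights, shapes, scales):
            S += w * np.exp(-(t / b) ** a)
        return S

    def weibull_mix_hazard(t, weights, shapes, scales):
        """h(t) = f(t) / S(t), with mixture density f(t) = sum_k w_k * f_k(t)."""
        t = np.asarray(t, dtype=float)
        f = np.zeros_like(t)
        for w, a, b in zip(weights, shapes, scales):
            f += w * (a / b) * (t / b) ** (a - 1) * np.exp(-(t / b) ** a)
        return f / weibull_mix_survival(t, weights, shapes, scales)

    t = np.linspace(0.1, 5.0, 50)
    S = weibull_mix_survival(t, [0.6, 0.4], [1.5, 0.8], [1.0, 2.0])
    h = weibull_mix_hazard(t, [0.6, 0.4], [1.5, 0.8], [1.0, 2.0])
    print(S[0], S[-1])  # survival decreases from near 1 toward 0
    ```

    Averaging these curves over the MCMC draws gives posterior estimates of the survivor function and expected hazard rate, as in the paper.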

    Bayesian hierarchical modelling of bacteria growth

    Bacterial growth models are commonly used in food safety. Such models permit the prediction of microbial safety and of the shelf life of perishable foods. In this paper, we study the problem of modelling bacterial growth when multiple experimental results are observed under identical environmental conditions. We develop a hierarchical version of the Gompertz equation to account for replicated experiments, and we show how it can be fitted using a fully Bayesian approach. The approach is illustrated on experimental data on the growth of Listeria monocytogenes, and the results are compared with alternative models. Model selection is undertaken throughout using an appropriate version of the deviance information criterion and the posterior predictive loss criterion. Models are fitted using WinBUGS via R2WinBUGS.
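    The hierarchical structure can be sketched numerically. The code below (not from the paper; all parameter values are hypothetical) uses the modified Gompertz parameterization common in predictive microbiology and draws replicate-level parameters around shared population-level means, which is the essence of the hierarchical model:

    ```python
    import numpy as np

    def gompertz(t, A, mu, lam):
        """Modified Gompertz curve (Zwietering parameterization):
        asymptotic log-count increase A, maximum growth rate mu, lag time lam."""
        return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

    # hierarchical sketch: each replicate gets its own parameters drawn
    # around common population-level means (hypothetical values)
    rng = np.random.default_rng(2)
    t = np.linspace(0, 48, 25)                      # hours
    A0, mu0, lam0 = 8.0, 0.4, 5.0                   # population-level means
    curves = [gompertz(t,
                       A0 + rng.normal(0, 0.3),
                       mu0 + rng.normal(0, 0.03),
                       lam0 + rng.normal(0, 0.5)) for _ in range(3)]
    print([float(c[-1]) for c in curves])           # each replicate plateaus near A
    ```

    In the fully Bayesian version, priors are placed on the population-level means and on the between-replicate variances, and all are inferred jointly.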

    Data cloning for a threshold asymmetric stochastic volatility model

    In this paper, we propose a new asymmetric stochastic volatility model whose asymmetry parameter can change depending on the intensity of the shock and is modeled as a threshold function whose threshold depends on past returns. We study the model in terms of leverage and propagation using a concept that has recently appeared in the literature, and we find that the new model can generate more leverage and propagation than a well-known asymmetric volatility model. We also propose to estimate the model parameters by data cloning, a general technique for computing maximum likelihood estimators and their asymptotic variances using a Markov chain Monte Carlo (MCMC) method. Comparing finite-sample estimates from data cloning and from a Bayesian approach, we find that data cloning is often more accurate. The empirical application shows that the new model often improves the fit compared to the benchmark model. Finally, the new proposal together with data cloning estimation often leads to more accurate 1-day and 10-day volatility forecasts, especially for return series with high volatility.
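    The data cloning mechanism can be illustrated exactly in a conjugate toy model (a sketch, not the paper's stochastic volatility model): with K clones of the data, the "posterior" is proportional to the prior times the likelihood raised to the K-th power, so as K grows the posterior mean converges to the MLE and K times the posterior variance converges to the MLE's asymptotic variance.

    ```python
    import numpy as np

    # Toy model: y_i ~ Normal(mu, 1) with a Normal(0, 10^2) prior on mu.
    # The cloned posterior is conjugate, so the mechanism is visible exactly:
    # posterior mean -> sample mean (the MLE), K * posterior variance -> 1/n.
    rng = np.random.default_rng(3)
    y = rng.normal(2.0, 1.0, size=50)
    n, prior_var = len(y), 100.0

    def cloned_posterior(K):
        """Conjugate normal update treating the data as repeated K times."""
        post_var = 1.0 / (1.0 / prior_var + K * n)
        post_mean = post_var * (K * n * y.mean())
        return post_mean, post_var

    for K in (1, 10, 100):
        m, v = cloned_posterior(K)
        print(K, m, K * v)          # K * v approaches 1/n = 0.02
    ```

    In realistic models the cloned posterior is not available in closed form, and the same limits are obtained from MCMC output on the cloned data.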

    Non-linear models of disability and age applied to census data

    It is usually assumed that the proportion of disabled people grows with age: that is, the older a person is, the greater their level of disability. However, empirical evidence shows that this assumption is not always true, or at least not in the Spanish population. This study assesses the impact of age on disability in Spain. It is divided into three parts. The first describes how disability is measured in this work; we use an index previously defined by the authors that distinguishes between men and women. The second reviews the literature on the methods used in this paper, with emphasis on local regression, feed-forward neural networks and BARS. In the last section, the estimations are carried out. Several methods are used and, consequently, there are considerable differences in the results, not only among the methodologies but also between genders.
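    Of the methods listed, local regression is the simplest to sketch. Below is a Nadaraya-Watson kernel smoother, one basic form of local regression, applied to hypothetical age/disability data (not the census data used in the paper):

    ```python
    import numpy as np

    def local_mean(x_grid, x, y, bandwidth):
        """Nadaraya-Watson kernel smoother: at each grid point, a weighted
        average of y with Gaussian weights centred on that point."""
        w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bandwidth) ** 2)
        return (w * y).sum(axis=1) / w.sum(axis=1)

    # hypothetical non-monotone disability-by-age pattern
    rng = np.random.default_rng(7)
    age = rng.uniform(20, 90, 300)
    disab = 0.02 * age + 0.5 * np.sin(age / 10) + rng.normal(0, 0.1, 300)
    grid = np.linspace(25, 85, 13)
    fit = local_mean(grid, age, disab, bandwidth=5.0)
    print(fit)  # smoothed disability level over the age grid
    ```

    Because the smoother makes no monotonicity assumption, it can reveal a disability-age relationship that is not strictly increasing, which is the point the study makes for the Spanish population.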

    ABC and Hamiltonian Monte-Carlo methods in COGARCH models

    The analysis of financial series with calendar effects and unequally spaced observation times in continuous time can be carried out with COGARCH models based on Lévy processes. In order to estimate the COGARCH model parameters, we propose two different Bayesian approaches. First, we suggest a Hamiltonian Monte Carlo (HMC) algorithm that improves the performance of standard MCMC methods. Second, we introduce an Approximate Bayesian Computation (ABC) methodology that allows working with analytically infeasible or computationally expensive likelihoods. After a simulation and comparison study of both methods, HMC and ABC, we apply them to model the behaviour of some NASDAQ time series and discuss the results.
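    The ABC side can be sketched with plain rejection sampling (illustrative only; the toy model below is an iid normal, not a COGARCH process): draw parameters from the prior, simulate data, and keep draws whose simulated summaries fall close to the observed ones.

    ```python
    import numpy as np

    def abc_rejection(obs, simulate, prior_draw, distance, n_draws, eps, rng):
        """Plain rejection ABC: keep parameter draws whose simulated summary
        statistics lie within eps of the observed ones."""
        accepted = []
        for _ in range(n_draws):
            theta = prior_draw(rng)
            if distance(simulate(theta, rng), obs) < eps:
                accepted.append(theta)
        return np.array(accepted)

    # toy demo: infer the standard deviation of iid normal observations
    rng = np.random.default_rng(4)
    obs_sd = rng.normal(0.0, 1.5, size=200).std()   # observed summary statistic

    post = abc_rejection(
        obs=obs_sd,
        simulate=lambda th, r: r.normal(0.0, th, size=200).std(),
        prior_draw=lambda r: r.uniform(0.1, 5.0),
        distance=lambda a, b: abs(a - b),
        n_draws=5000, eps=0.05, rng=rng)
    print(len(post), post.mean())  # accepted draws concentrate near the observed sd
    ```

    For COGARCH models the simulator would generate a path of the process and the summaries would be chosen statistics of the returns; the acceptance mechanism is unchanged.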

    Why using a general model in Solvency II is not a good idea: an explanation from a Bayesian point of view

    The passing of Directive 2009/138/EC (Solvency II) has opened a new era in the European insurance market. Under this new regulatory environment, the volume of own resources is determined by the risks that an insurer holds, so the model used to estimate the amount of economic capital is now one of the most important elements. The Directive establishes that European entities may use a general model to perform these tasks. However, this situation is far from optimal, because the calibration of the general model is based on figures that reflect an average behaviour. This paper shows that not all the companies operating in a specific market have the same risk profile; for this reason, it is unsatisfactory to use a general model for all of them. We use the PAM clustering method and afterwards some Bayesian tools to check the results previously obtained. The data analysed (public information on the balance sheets and income statements of Spanish insurance companies from 1998 to 2007) come from the DGSFP (the Spanish insurance regulator).
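    A PAM-style clustering step can be sketched as follows. This is a minimal k-medoids iteration on synthetic two-cluster data, not the exact PAM swap algorithm and not the paper's insurance data, but it shows how companies with distinct risk profiles would separate:

    ```python
    import numpy as np

    def k_medoids(X, k, n_iter=20, seed=0):
        """Minimal k-medoids (PAM-style) clustering with Euclidean distances:
        alternate between assigning points to the nearest medoid and moving each
        medoid to the cluster point with minimal total within-cluster distance."""
        rng = np.random.default_rng(seed)
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
        medoids = rng.choice(len(X), size=k, replace=False)
        for _ in range(n_iter):
            labels = np.argmin(D[:, medoids], axis=1)
            new = medoids.copy()
            for j in range(k):
                members = np.where(labels == j)[0]
                if len(members):
                    new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
            if np.array_equal(new, medoids):
                break
            medoids = new
        labels = np.argmin(D[:, medoids], axis=1)
        return medoids, labels

    # two well-separated synthetic "risk profiles" (hypothetical data)
    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
    medoids, labels = k_medoids(X, k=2)
    print(np.bincount(labels))  # cluster sizes
    ```

    Medoids, unlike k-means centroids, are always actual data points, which makes the cluster representatives interpretable as concrete companies.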

    Data cloning estimation of GARCH and COGARCH models

    GARCH models capture most of the stylized facts of financial time series and have been widely used to analyze discrete-time financial series. In recent years, continuous-time models based on discrete GARCH models, such as the COGARCH model based on Lévy processes, have also been proposed to deal with unequally spaced observations. In this paper, we propose to use the data cloning methodology to obtain estimators of GARCH and COGARCH model parameters. Data cloning uses a Bayesian approach to obtain approximate maximum likelihood estimators, avoiding the numerical maximization of the pseudo-likelihood function. After a simulation study for both GARCH and COGARCH models using data cloning, we apply the technique to model the behavior of some NASDAQ time series.
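    As a minimal illustration of the discrete-time side, the sketch below (not the paper's code; parameter values are hypothetical) simulates a GARCH(1,1) path and evaluates the Gaussian pseudo-log-likelihood whose maximizer data cloning approximates:

    ```python
    import numpy as np

    def simulate_garch11(omega, alpha, beta, n, rng):
        """Simulate returns from a GARCH(1,1):
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
        r = np.empty(n)
        sigma2 = omega / (1.0 - alpha - beta)        # start at unconditional variance
        for t in range(n):
            r[t] = np.sqrt(sigma2) * rng.normal()
            sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
        return r

    def garch11_loglik(params, r):
        """Gaussian pseudo-log-likelihood of a GARCH(1,1) path."""
        omega, alpha, beta = params
        sigma2 = r.var()                             # simple initialization
        ll = 0.0
        for x in r:
            ll += -0.5 * (np.log(2 * np.pi * sigma2) + x * x / sigma2)
            sigma2 = omega + alpha * x * x + beta * sigma2
        return ll

    rng = np.random.default_rng(6)
    r = simulate_garch11(omega=0.1, alpha=0.1, beta=0.8, n=2000, rng=rng)
    # the true parameters should fit better than a clearly wrong alternative
    print(garch11_loglik((0.1, 0.1, 0.8), r) > garch11_loglik((0.5, 0.4, 0.1), r))
    ```

    Data cloning sidesteps direct maximization of this function: MCMC on K clones of the series yields a posterior whose mean approximates the pseudo-likelihood maximizer.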