
    Robustness of the shrinkage estimator for the relative potency in the combination of multivariate bioassays

    This article investigates the robustness of the shrinkage Bayesian estimator for the relative potency parameter in combinations of multivariate bioassays proposed in Chen et al. (1999), which incorporated prior information on the model parameters based on Jeffreys' rules. The investigation covers the families of t-distributions and the Cauchy distribution, motivated by bioassay theory: as the degrees of freedom increase, the t-distribution approaches the normal distribution, the distribution most commonly used in bioassay applications, and as the degrees of freedom approach 1 it becomes the Cauchy distribution, which is also important in bioassay. A real dataset illustrates the application of this investigation. The analysis further supports the use of the shrinkage Bayesian estimator in bioassay theory alongside the empirical Bayesian estimator.

    Comparing geographic area-based and classical population-based incidence and prevalence rates, and their confidence intervals

    To quantify the HIV epidemic, the classical population-based prevalence and incidence rates (P rates) are the two measures most commonly used for policy interventions. However, P rates ignore the heterogeneity in the size of the geographic region where the population resides. Intuitively, with the same P rates, HIV is much more likely to spread in a population residing in a crowded small urban area than in a population of the same size residing in a large rural area. To address this limitation, Chen and Wang (2017) proposed geographic area-based rates (G rates) to complement the classical P rates. They analyzed the 2000–2012 US data on new HIV infections and persons living with HIV and found that, compared with other methods, G rates enable researchers to detect increases in HIV rates more quickly. This capacity to reveal increasing rates in a more efficient and timely manner is a crucial methodological contribution to HIV research. To enhance the newly proposed concept of G rates, this article discusses three areas for further development: (1) analysis of global HIV epidemic data using the newly proposed G rates to capture changes globally; (2) development of the associated population density-based rates (D rates) to incorporate the heterogeneities from both geographic area and total population at risk; and (3) development of methods to calculate variances and confidence intervals for the P rates, G rates, and D rates to capture the variability of these indices.
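    The contrast between P rates and G rates, and the variance question raised in item (3), can be sketched as follows. All case counts, the population size, and the area are hypothetical, and the Poisson-based Wald interval shown is one standard choice for a rate's confidence interval, not necessarily the method developed in the article:

```python
import math

def rate_with_ci(cases, denominator, z=1.96):
    """Point estimate and Wald-type 95% CI for a rate, treating the
    case count as Poisson so that Var(cases) is approximated by cases."""
    rate = cases / denominator
    se = math.sqrt(cases) / denominator
    return rate, (rate - z * se, rate + z * se)

# Classical P rate: denominator is the population at risk.
p, p_ci = rate_with_ci(cases=1200, denominator=1_000_000)

# G rate (Chen and Wang, 2017): denominator is the area in km^2, so the
# same case count concentrated in a small urban area yields a larger rate.
g, g_ci = rate_with_ci(cases=1200, denominator=500)
```

    The same 1200 cases give a P rate of 0.0012 regardless of where the population lives, while the G rate changes directly with the area occupied.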

    Efficient and direct estimation of the variance–covariance matrix in EM algorithm with interpolation method

    The expectation–maximization (EM) algorithm is a seminal method for calculating the maximum likelihood estimators (MLEs) from incomplete data. One drawback of the algorithm, however, is that it does not automatically produce the asymptotic variance–covariance matrix of the MLE. Several methods have been proposed to resolve this drawback, but each has limitations. In this paper, we propose an innovative interpolation procedure to directly estimate the asymptotic variance–covariance matrix of the MLE obtained by the EM algorithm. Specifically, we use cubic spline interpolation to approximate the first-order and second-order derivative functions in the Jacobian and Hessian matrices from the EM algorithm. The procedure requires no iteration, unlike previously proposed numerical methods, so it is computationally efficient and direct. We derive the truncation error bounds of the functions theoretically and show that the truncation error diminishes to zero as the mesh size approaches zero. The optimal mesh size is derived as well by minimizing the global error. The accuracy and complexity of the novel method are compared with those of the well-known SEM method. Two numerical examples and a real dataset are used to illustrate the accuracy and stability of this novel method. Funding: The National Research Foundation of South Africa and the South African Medical Research Council (SAMRC).
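    The interpolation idea can be sketched on a one-parameter toy problem: tabulate the log-likelihood on a mesh, fit a cubic spline, and differentiate the spline to recover the observed information (the negative second derivative at the MLE), whose inverse is the asymptotic variance. The normal-mean model, mesh, and sample below are illustrative assumptions, not the paper's examples:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)   # simulated data, known scale 1

def loglik(theta):
    # Log-likelihood of the normal mean (up to an additive constant).
    return -0.5 * np.sum((x - theta) ** 2)

# Tabulate the log-likelihood on a mesh around the MLE (the sample mean),
# then interpolate with a cubic spline.
mesh = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 41)   # mesh size 0.05
spline = CubicSpline(mesh, [loglik(t) for t in mesh])

# Observed information = -(second derivative of the spline) at the MLE;
# its inverse estimates the asymptotic variance of the MLE.
info = -spline(x.mean(), 2)
var_hat = 1.0 / info
# Here the exact observed information is n = 200, so var_hat ≈ 1/200.
```

    Because the log-likelihood is quadratic in this toy model, the spline reproduces it exactly; for genuinely nonlinear EM problems the spline only approximates the derivatives, which is where the paper's truncation error bounds matter.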

    Bayesian inference for stochastic cusp catastrophe model with partially observed data

    The purpose of this paper is to develop a data augmentation technique for statistical inference in the stochastic cusp catastrophe model subject to missing data and partially observed observations. We propose a Bayesian inference solution that naturally treats missing observations as parameters, and we validate this novel approach through a series of Monte Carlo simulation studies assuming the cusp catastrophe model as the underlying model. We demonstrate that this Bayesian data augmentation technique can recover and estimate the underlying parameters of the stochastic cusp catastrophe model. Funding: South Africa DST-NRF-SAMRC SARChI Research Chair in Biostatistics.
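    The core of the data augmentation strategy, treating missing observations as extra unknowns that are sampled inside the MCMC loop, can be sketched on a far simpler model than the cusp catastrophe. The normal model, flat prior on the mean, and all numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma = 3.0, 1.0
y = rng.normal(true_mu, sigma, size=100)
missing = np.zeros(100, dtype=bool)
missing[:20] = True                     # first 20 observations are unobserved

mu = 0.0                                # initial value; flat prior on mu
draws = []
for _ in range(2000):
    # I-step: impute the missing observations given the current parameter.
    y[missing] = rng.normal(mu, sigma, size=missing.sum())
    # P-step: draw mu from its full conditional given the completed data.
    mu = rng.normal(y.mean(), sigma / np.sqrt(len(y)))
    draws.append(mu)

posterior_mean = np.mean(draws[500:])   # discard burn-in
```

    Alternating imputation and parameter draws targets the joint posterior of the parameter and the missing values, so the retained draws of `mu` concentrate near the truth even though a fifth of the data was never observed.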

    A Randomized Clinical Trial of an Identity Intervention Programme for Women with Eating Disorders

    Objective: Findings are reported from a randomized trial of an identity intervention programme (IIP) designed to build new positive self-schemas that are separate from other conceptions of the self in memory, as a means to promote improved health in women diagnosed with eating disorders. Method: After baseline data collection, women with anorexia nervosa or bulimia nervosa were randomly assigned to the IIP (n = 34) or supportive psychotherapy (SPI) (n = 35) and followed at 1, 6, and 12 months post-intervention. Results: The IIP and supportive psychotherapy were equally effective in reducing eating disorder symptoms at 1 month post-intervention, and changes were stable through the 12-month follow-up period. The IIP tended to be more effective in fostering development of positive self-schemas, and the increase was stable over time. Regardless of baseline level, an increase in the number of positive self-schemas between pre-intervention and 1 month post-intervention predicted a decrease in desire for thinness and an increase in psychological well-being and functional health over the same period. Discussion: A cognitive behavioural intervention that focuses on increasing the number of positive self-schemas may be central to improving emotional health in women with anorexia nervosa and bulimia nervosa. Copyright © 2012 John Wiley & Sons, Ltd and Eating Disorders Association.

    Robust Bayesian nonlinear mixed‐effects modeling of time to positivity in tuberculosis trials

    Early phase 2 tuberculosis (TB) trials are conducted to characterize the early bactericidal activity (EBA) of anti-TB drugs. The EBA of anti-TB drugs has conventionally been calculated as the rate of decline in colony forming unit (CFU) count during the first 14 days of treatment. The measurement of CFU count, however, is expensive and prone to contamination. As an alternative to CFU count, time to positivity (TTP), a potential biomarker for the long-term efficacy of anti-TB drugs, can be used to characterize EBA. The current Bayesian nonlinear mixed-effects (NLME) regression model for TTP data, however, lacks robustness to the gross outliers that are often present in such data. The conventional way of handling these outliers involves identifying them by visual inspection and excluding them from the analysis, a process that can be questioned because of its subjective nature. For this reason, we fitted robust versions of the Bayesian NLME regression model to a wide range of TTP datasets. The performance of the explored models was assessed through model comparison statistics and a simulation study. We conclude that fitting a robust model to TTP data obviates the need for explicit identification and subsequent "deletion" of outliers while ensuring that gross outliers exert no undue influence on model fits. We recommend that the current practice of fitting conventional normal-theory models be abandoned in favor of fitting robust models to TTP data.
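    The robustness argument can be illustrated with a toy location fit, swapping the normal likelihood for a heavy-tailed Student-t likelihood (a standard robustification, used here in place of the paper's full Bayesian NLME model). The data values and the choice of 3 degrees of freedom are illustrative assumptions:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

# Hypothetical TTP-like readings with one gross outlier.
data = np.array([5.1, 4.9, 5.0, 5.2, 4.8, 30.0])

def neg_loglik(mu, dist):
    # Negative log-likelihood of a location model with fixed scale 1.
    return -dist.logpdf(data - mu).sum()

# Normal-theory fit: the MLE is the sample mean, dragged toward the outlier.
mu_normal = minimize_scalar(neg_loglik, bounds=(0.0, 10.0),
                            method='bounded', args=(stats.norm,)).x
# Student-t fit (df = 3): the heavy tails downweight the outlier.
mu_t = minimize_scalar(neg_loglik, bounds=(0.0, 10.0),
                       method='bounded', args=(stats.t(df=3),)).x
```

    The normal fit lands near the contaminated mean (about 9.2), while the t-based fit stays near 5, mirroring how a robust likelihood keeps gross TTP outliers from exerting undue influence without deleting them.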

    Meta-analysis of two studies with random effects?

    A meta-analysis (MA) combines similar studies, yielding a larger number of subjects and thereby strengthening the degree of belief in any significance declared. Its major purpose is to increase the number of observations and the associated statistical power, improving the precision of effect-size estimates for an association or an intervention. As is commonly known, there are discrepancies between MAs and large randomized clinical trials. The conclusions drawn are subject to bias because they are affected by the small size of the included clinical studies. Large randomized clinical trials are the most reliable way of obtaining reproducible results; in other words, we expect the same results if we repeat the experiment. On the other hand, large trials do not guarantee that the protocol or the conclusions were appropriate. Although it is intuitive to believe that an MA of similar trials is more likely to yield valid conclusions, studies show this is not always the case. By the same argument, adding studies with diverse protocols makes an MA less reliable. Because an MA is a summation, its reliability depends on the combined trials: inclusion/exclusion criteria, conclusions, reliability of the results, and applicability of the conclusions all affect the bias. Hence, we cannot declare that an MA represents the final and accurate viewpoint on an area of research. Several statistical methods, similar to those used to analyze individual subject data, have been modified to improve the reliability of MA.
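    Why a two-study random-effects MA is fragile is easy to see in the standard DerSimonian–Laird calculation (shown here with hypothetical effect sizes and variances, not data from any particular trials): the between-study variance is estimated from a single degree of freedom.

```python
import math

effects = [0.50, 0.05]          # study effect sizes (hypothetical)
variances = [0.04, 0.02]        # within-study variances (hypothetical)

w = [1 / v for v in variances]                      # fixed-effect weights
pooled_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2;
# with k = 2 studies there is only one degree of freedom behind tau^2.
q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooling: tau^2 is added to each within-study variance.
w_re = [1 / (v + tau2) for v in variances]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
```

    The random-effects estimate sits between the two study effects with a wider standard error than the fixed-effect pooling, and the entire adjustment hinges on a `tau2` estimated from just two points.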

    Performance of diagnostic tests based on continuous bivariate markers

    In medical diagnostic research, it is customary to collect multiple continuous biomarker measures to improve the accuracy of diagnostic tests. A prevalent practice is to combine the measurements of these biomarkers into a single composite score. However, collapsing the biomarker measurements into one score depends on the combination method and may lose vital information needed to make an effective and accurate decision. Furthermore, such a combined score requires a diagnostic cut-off that is difficult to interpret in actual clinical practice. This paper extends the classical accuracy measures and predictive values of biomarkers from univariate to bivariate markers. We also develop a novel system of pseudo-measures to preserve the vital information from multiple biomarkers, specifying pseudo "and"/"or" classifiers for the true positive rate, true negative rate, false positive rate, and false negative rate. We use them to redefine classical measures such as the Youden index, the diagnostic odds ratio, likelihood ratios, and predictive values. We provide optimal cut-off point selection based on the modified Youden index, with numerical illustrations and a real data analysis for the newly developed pseudo-measures.
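    Optimal cut-off selection via the classical Youden index J = TPR − FPR can be sketched for a single continuous marker (the paper's contribution is the extension of such measures to bivariate markers through and/or pseudo-classifiers; the marker values below are simulated, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
diseased = rng.normal(2.0, 1.0, size=500)      # marker values in cases
healthy = rng.normal(0.0, 1.0, size=500)       # marker values in controls

# Sweep candidate cut-offs and pick the one maximizing J = TPR - FPR,
# i.e. sensitivity + specificity - 1.
cutoffs = np.linspace(-3.0, 5.0, 801)
tpr = np.array([(diseased >= c).mean() for c in cutoffs])
fpr = np.array([(healthy >= c).mean() for c in cutoffs])
youden = tpr - fpr
best_cut = cutoffs[youden.argmax()]
# With equal-variance normal markers the theoretical optimum is the
# midpoint of the two means, here 1.0.
```

    The same sweep applies to any scalar classifier output, which is what lets the paper's pseudo and/or classifiers reuse the Youden machinery for bivariate markers.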

    Linking cognition and frailty in middle and old age: metabolic syndrome matters

    Objectives: This study examined whether metabolic syndrome (MetS) moderates the association of cognition with frailty in middle and old age. Methods: A cross-sectional design was used. Six hundred and ninety participants (age ≥ 50 years) from an ongoing national survey were included in the study. Confirmatory factor analysis was applied to determine latent variables of executive function (EF), episodic memory (EM), and MetS based on the relevant measurements. Frailty was defined using a modified form of Fried's criteria.