
    Formulation development and characterization of cellulose acetate nitrate based propellants for improved insensitive munitions properties

    Cellulose acetate nitrate (CAN) was used as an insensitive energetic binder to improve the insensitive munitions (IM) properties of gun propellants, with the goal of replacing the M1 propellant used in 105 mm artillery charges. CAN contains the energetic nitro groups found in nitrocellulose (NC), but also acetyl functionalities, which lower the polymer's sensitivity to heat and shock and therefore improve its IM properties relative to NC. Several CAN-based propellants were formulated, developed, and subjected to small-scale characterization testing. The formulations used insensitive energetic solid fillers and high-nitrogen modifiers in place of nitramine. Small-scale characterization included closed bomb testing, small-scale sensitivity, thermal stability, and chemical compatibility testing. The mechanical response of the propellants under high-rate uniaxial compression at hot, cold, and ambient temperatures was also measured. Critical diameter and hot fragment conductive ignition (HFCI) tests were done to evaluate the propellants' responses to thermal and shock stimuli. Theoretical predictions of erosivity were completed using the propellant chemical compositions. All the small-scale test results were used to down-select the promising CAN-based formulations for large-scale demonstration testing, such as ballistic performance and fragment impact testing in the 105 mm M67 artillery charge configuration. The results of both the small- and large-scale testing are discussed.

    Bayesian L1 Lasso for High Dimensional Data

    The necessity of performing variable selection and estimation in high-dimensional settings has grown with the introduction of new technologies capable of generating enormous amounts of data on individual observations. In such settings, the sample size (n) is often smaller than the number of variables (p), which hinders the performance of traditional regression methods. It has been demonstrated that shrinkage methods such as the least absolute shrinkage and selection operator (LASSO) (Tibshirani, 1996; Hastie, Tibshirani & Friedman, 2009) outperform traditional least squares estimates in the high-dimensional setting; however, they require solving a nontrivial convex optimization problem. In our study, we develop a Gibbs sampler analogous to that of the Bayesian Lasso (Park & Casella, 2008), but we introduce the absolute deviation (L1) loss function, yielding a modified Bayesian Lasso with L1 loss. We demonstrate that the proposed method outperforms the LASSO and the Bayesian LASSO in terms of prediction accuracy and variable selection. Our method is also applied to a real high-dimensional data set.
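    The abstract does not spell out the modified L1-loss sampler, but the Park & Casella (2008) sampler it builds on has well-known full conditionals. The sketch below is a minimal Python implementation of that baseline sampler with the penalty parameter held fixed; the function name and defaults are illustrative, not taken from the paper.

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=5000, seed=0):
    """Minimal Gibbs sampler for the Bayesian Lasso (Park & Casella, 2008).

    Hierarchy: y ~ N(X beta, sigma^2 I), beta_j ~ N(0, sigma^2 tau_j^2),
    tau_j^2 ~ Exp(lam^2 / 2). lam is held fixed here for simplicity.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, tau2, sigma2 = np.zeros(p), np.ones(p), 1.0
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma^2 A^{-1}), with A = X'X + D^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(1.0 / tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # 1/tau_j^2 | rest ~ Inverse-Gaussian(sqrt(lam^2 sigma^2 / beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        tau2 = 1.0 / rng.wald(mu, lam**2)
        # sigma^2 | rest ~ Inverse-Gamma under the improper 1/sigma^2 prior
        resid = y - X @ beta
        shape = (n - 1) / 2 + p / 2
        rate = resid @ resid / 2 + np.sum(beta**2 / tau2) / 2
        sigma2 = 1.0 / rng.gamma(shape, 1.0 / rate)
        draws[t] = beta
    return draws
```

    The modification described in the abstract replaces the squared-error loss with the absolute deviation loss; the resulting conditional updates would differ from this baseline in ways the abstract does not detail.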

    Bayesian Multivariate Regression for High-dimensional Longitudinal Data with Heavy-tailed Errors

    High-dimensional data occur when the number of measurements on subjects or sampling units is far greater than the sample size of the study. Mirroring the popularity of longitudinal data in biomedical and public health research, high-dimensional longitudinal data are also on the rise in bioinformatics, genomics, and public health research. These data often exhibit heavy-tailed errors or contain outliers in areas such as genomics and finance. Applying traditional ordinary least squares to high-dimensional longitudinal data may fail to produce valid estimates due to identifiability issues, particularly in heavy-tailed situations, as it penalizes large deviations inappropriately. To address these issues, we present a method for variable selection and estimation based on continuous shrinkage priors for multivariate continuous outcomes with heavy-tailed errors. The proposed method is developed in a Bayesian setting, and a Gibbs sampler is derived to sample efficiently from the posterior distribution. We compare the method to standard estimation routines in a series of simulation examples as well as on a data set from a gene expression profiling experiment on T-cell activation.
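    The abstract does not name the heavy-tailed error family, but a common Gibbs-friendly device (an assumption here, not a statement of the paper's model) is to write multivariate-t errors as a scale mixture of normals, which keeps every full conditional in closed form:

```latex
\mathbf{y}_i = \mathbf{B}^{\top}\mathbf{x}_i + \boldsymbol{\varepsilon}_i,
\qquad
\boldsymbol{\varepsilon}_i \mid \omega_i \sim \mathcal{N}_q\!\left(\mathbf{0},\; \omega_i^{-1}\boldsymbol{\Sigma}\right),
\qquad
\omega_i \sim \mathrm{Gamma}\!\left(\tfrac{\nu}{2}, \tfrac{\nu}{2}\right),
```

    so that marginally \(\boldsymbol{\varepsilon}_i \sim t_q(\nu, \mathbf{0}, \boldsymbol{\Sigma})\): integrating out the latent precision \(\omega_i\) yields exactly the multivariate t with \(\nu\) degrees of freedom, which downweights large deviations rather than penalizing them quadratically.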

    The Bayesian Multivariate Regression for High Dimensional Longitudinal Data with Heavy-Tailed Errors

    High-dimensional longitudinal data, also called "large p, small n" data, arise when the number of measurements on subjects or sampling units is far greater than the sample size of the study. Mirroring the popularity of longitudinal data in biomedical and public health research, high-dimensional longitudinal data are also on the rise in bioinformatics, genomics, and public health research. These data frequently exhibit heavy-tailed errors or contain outliers in areas such as genomics and finance. Applying the traditional ordinary least squares method to high-dimensional longitudinal data will fail to produce valid estimates due to identifiability issues, particularly in heavy-tailed situations, as it penalizes large deviations inappropriately. To address these issues, we present a method for variable selection and estimation based on the horseshoe prior for multivariate continuous outcomes with heavy-tailed errors. The proposed method is developed in a Bayesian setting, and a Gibbs sampler is derived to sample efficiently from the posterior distribution. We compare the method to standard estimation routines in a series of simulation examples as well as on a data set from a gene expression profiling experiment on T-cell activation.
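    For reference, the horseshoe prior named in the abstract has the standard form (Carvalho, Polson & Scott, 2010), shown here for a single coefficient; the paper's multivariate extension is not detailed in the abstract:

```latex
\beta_j \mid \lambda_j, \tau \sim \mathcal{N}\!\left(0,\; \lambda_j^2 \tau^2\right),
\qquad
\lambda_j \sim \mathrm{C}^{+}(0,1),
\qquad
\tau \sim \mathrm{C}^{+}(0,1),
```

    where \(\mathrm{C}^{+}(0,1)\) is the standard half-Cauchy distribution. The global scale \(\tau\) shrinks all coefficients toward zero, while the heavy-tailed local scales \(\lambda_j\) let genuinely large signals escape shrinkage, which is what makes the prior attractive for sparse "large p, small n" problems.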

    Notes on the Overlap Measure as an Alternative to the Youden Index: How Are They Related?

    The receiver operating characteristic (ROC) curve is frequently used to evaluate and compare diagnostic tests. As one of the ROC summary indices, the Youden index measures the effectiveness of a diagnostic marker and enables the selection of an optimal threshold value (cut-off point) for the marker. Recently, the overlap coefficient, which captures the similarity between two distributions directly, has been considered as an alternative index for determining the diagnostic performance of markers. In this case, a larger overlap indicates worse diagnostic accuracy, and vice versa. This paper provides a graphical demonstration and mathematical derivation of the relationship between the Youden index and the overlap coefficient and states their advantages over the most popular diagnostic measure, the area under the ROC curve. Furthermore, we outline the differences between the Youden index and overlap coefficient and identify situations in which the overlap coefficient outperforms the Youden index. Numerical examples and real data analysis are provided.
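    The relationship the abstract refers to can be stated compactly. Writing \(f_0, f_1\) for the marker densities in the healthy and diseased groups and \(F_0, F_1\) for their distribution functions, the two indices are

```latex
J = \max_{c}\,\bigl|F_0(c) - F_1(c)\bigr|,
\qquad
\mathrm{OVL} = \int \min\bigl\{f_0(x),\, f_1(x)\bigr\}\,dx,
```

    and when the densities cross at exactly one point \(c^{*}\) (as for two normal distributions with equal variances, diseased shifted right), splitting the OVL integral at \(c^{*}\) gives \(\mathrm{OVL} = F_1(c^{*}) + 1 - F_0(c^{*})\), hence \(J = 1 - \mathrm{OVL}\), with the crossing point \(c^{*}\) serving as the optimal cut-off. This is a standard single-crossing result consistent with the abstract; the paper's full derivation covers the general case.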

    Notes on the Overlap Measure as an Alternative to the Youden Index: How Are They Related?

    The receiver operating characteristic (ROC) curve is frequently used for evaluating and comparing diagnostic tests. The Youden index, as one of the ROC summary indices, measures the effectiveness of a diagnostic marker and enables the selection of an optimal threshold value (cut-off point) for the marker. Recently, overlap coefficients (OVL), which capture the similarity between two distributions directly, have been considered as alternative indices for determining the diagnostic performance of markers; that is, larger (smaller) overlap indicates poorer (better) diagnostic accuracy. This paper compares the similarities and dissimilarities of the Youden index and OVL measures, as well as the advantages of OVL measures over the Youden index in some situations. Numerical examples as well as a real data analysis with differentially expressed gene biomarkers are provided.
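    As a quick numerical illustration of the single-crossing identity \(J = 1 - \mathrm{OVL}\) sketched above, the Python snippet below checks it for a hypothetical pair of equal-variance normal markers; all parameter values are invented for the example.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical marker: controls ~ N(0, 1), cases ~ N(1.5, 1). With equal
# variances the densities cross exactly once, so J = 1 - OVL should hold.
mu0, mu1, sd = 0.0, 1.5, 1.0

# Single crossing point of the two densities (midpoint for equal variances).
c_star = (mu0 + mu1) / 2

# Youden index J = max_c {F0(c) - F1(c)}, attained at the crossing point.
J = norm.cdf(c_star, mu0, sd) - norm.cdf(c_star, mu1, sd)

# Overlap coefficient OVL = integral of min(f0, f1), split at the kink c_star.
ovl, _ = quad(lambda t: min(norm.pdf(t, mu0, sd), norm.pdf(t, mu1, sd)),
              -10, 12, points=[c_star])

print(f"J = {J:.4f}, 1 - OVL = {1 - ovl:.4f}")  # both ~0.5467 here
```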

    Efficient Sampling Design for Making Inference on Mean Estimation in Longitudinal Data

    In many studies, a researcher attempts to describe a population whose units are measured for multiple outcomes, or responses. In this paper, we present an efficient procedure based on ranked set sampling to estimate and perform hypothesis testing on a multivariate mean. The method ranks on an auxiliary covariate, which is assumed to be correlated with the multivariate response, in order to improve the efficiency of the estimation. We show that the proposed estimator developed under this sampling scheme is unbiased, has smaller variance in the multivariate sense, and is asymptotically Gaussian. A bootstrap routine is developed in the statistical software R to perform inference when the sample size is small. We use a simulation study to investigate the performance of the method under known conditions and apply the method to biomarker data collected in the China Health and Nutrition Survey (CHNS 2009).
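    The abstract's paper uses R; to keep the examples in this listing in one language, here is a minimal Python sketch of the ranked set sampling scheme it describes, for a single response. The function and variable names are illustrative, and the toy data are invented; the paper's multivariate estimator and bootstrap are not reproduced here.

```python
import numpy as np

def ranked_set_sample(pop_x, pop_y, set_size, n_cycles, rng=None):
    """Ranked set sampling on an auxiliary covariate (simplified sketch).

    In each cycle, draw `set_size` sets of `set_size` units, rank each set
    by the cheap covariate X, and measure Y only on the i-th ranked unit
    of the i-th set, yielding `set_size` measured units per cycle.
    """
    rng = rng or np.random.default_rng()
    k, measured = set_size, []
    for _ in range(n_cycles):
        for i in range(k):
            idx = rng.choice(len(pop_x), size=k, replace=False)
            ranked = idx[np.argsort(pop_x[idx])]  # rank the set by X only
            measured.append(pop_y[ranked[i]])     # measure the i-th order unit
    return np.array(measured)

# Toy illustration: Y strongly correlated with the auxiliary X.
rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 2.0 * x + rng.normal(scale=0.5, size=10_000)  # corr(X, Y) ~ 0.97
rss = ranked_set_sample(x, y, set_size=4, n_cycles=25, rng=rng)
print(rss.mean())  # RSS estimate of E[Y]; unbiased, and less variable than
                   # a simple random sample of the same measured size
```

    The design choice reflected in the abstract is that ranking uses only the cheap covariate, so the expensive response is measured on a deliberately spread-out subset; the sample mean over the measured units remains unbiased even when the ranking is imperfect.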