
    Advances in estimation by the item sum technique using auxiliary information in complex surveys

    To collect sensitive data, survey statisticians have designed many strategies to reduce nonresponse rates and social desirability response bias. In recent years, the item count technique (ICT) has gained considerable popularity and credibility as an alternative indirect questioning survey mode, and several variants of this technique have been proposed as new needs and challenges arise. The item sum technique (IST), introduced by Chaudhuri and Christofides (2013) and Trappmann et al. (2014), is one such variant, used to estimate the mean of a sensitive quantitative variable. In this approach, sampled units are asked to report the total of their responses to one of two lists of items, one of which contains a sensitive question related to the study variable alongside various innocuous, nonsensitive questions. To the best of our knowledge, very few theoretical and applied papers have addressed the IST. In this article, therefore, we present certain methodological advances as a contribution to appraising the use of the IST in real-world surveys. In particular, we employ a generic sampling design to examine the problem of how to improve the estimates of the sensitive mean when auxiliary information on the population under study is available and is used at the design and estimation stages. A Horvitz-Thompson type estimator and a calibration type estimator are proposed, and their efficiency is evaluated by means of an extensive simulation study. Using simulation experiments, we show that estimates obtained by the IST are nearly equivalent to those obtained using “true data” and that in general they outperform the estimates provided by a competitive randomized response method. Moreover, the variance estimation may be considered satisfactory. These results open up new perspectives for academics, researchers and survey practitioners, and could justify the use of the IST as a valid alternative to traditional direct questioning survey modes. Funding: Ministerio de Economía y Competitividad of Spain; Ministerio de Educación, Cultura y Deporte; project PRIN-SURWE.
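
    As a rough illustration of the estimation idea, the sketch below simulates the item sum technique under an assumed Poisson sampling design: one subsample reports the total of a long list (the sensitive item plus innocuous items), the other reports the total of a short list (innocuous items only), and a Horvitz-Thompson type difference of the design-weighted means estimates the sensitive mean. The population, auxiliary variable, inclusion probabilities and all parameter values below are hypothetical and are not taken from the article; the calibration-type estimator is not shown.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical population (names, sizes and distributions are illustrative only)
        N = 10_000
        sensitive = rng.gamma(shape=2.0, scale=5.0, size=N)   # sensitive study variable y
        innocuous = rng.poisson(lam=3.0, size=(N, 4))         # scores on 4 innocuous items
        aux = sensitive + rng.normal(scale=2.0, size=N)       # auxiliary variable known for every unit

        def incl_probs(x, n_expected):
            """Size-proportional inclusion probabilities for a Poisson sampling design."""
            return np.clip(n_expected * x / x.sum(), 1e-6, 1.0)

        pi = incl_probs(np.abs(aux) + 1.0, n_expected=500)

        def poisson_sample(pi):
            """Draw a Poisson sample: unit k is selected independently with probability pi[k]."""
            return np.flatnonzero(rng.random(pi.size) < pi)

        # Long-list subsample reports y + sum of innocuous items; the short-list
        # subsample reports only the sum of the innocuous items.
        sL = poisson_sample(pi)
        sS = poisson_sample(pi)
        reported_long = sensitive[sL] + innocuous[sL].sum(axis=1)
        reported_short = innocuous[sS].sum(axis=1)

        def ht_mean(z, p):
            """Horvitz-Thompson type estimator of a population mean from reported totals."""
            return (z / p).sum() / N

        ist_estimate = ht_mean(reported_long, pi[sL]) - ht_mean(reported_short, pi[sS])
        print(f"true sensitive mean : {sensitive.mean():.3f}")
        print(f"IST HT-type estimate: {ist_estimate:.3f}")

    Because the inclusion probabilities are built from the auxiliary variable, the design weighting keeps the difference estimator approximately unbiased even under this unequal-probability design.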

    Dietary phytochemicals, HDAC inhibition, and DNA damage/repair defects in cancer cells

    Genomic instability is a common feature of cancer etiology. This provides an avenue for therapeutic intervention, since cancer cells are more susceptible than normal cells to DNA damaging agents. However, there is growing evidence that the epigenetic mechanisms that impact DNA methylation and histone status also contribute to genomic instability. The DNA damage response, for example, is modulated by the acetylation status of histone and non-histone proteins, and by the opposing activities of histone acetyltransferase and histone deacetylase (HDAC) enzymes. Many HDACs overexpressed in cancer cells have been implicated in protecting such cells from genotoxic insults. Thus, HDAC inhibitors, in addition to unsilencing tumor suppressor genes, can also silence DNA repair pathways, inactivate non-histone proteins that are required for DNA stability, and induce reactive oxygen species and DNA double-strand breaks. This review summarizes how dietary phytochemicals that affect the epigenome can also trigger DNA damage and repair mechanisms. Where such data are available, examples are cited from studies in vitro and in vivo of polyphenols, organosulfur/organoselenium compounds, indoles, sesquiterpene lactones, and miscellaneous agents such as anacardic acid. Finally, by virtue of their genetic and epigenetic mechanisms, cancer chemopreventive agents are being redefined as chemo- or radio-sensitizers. A sustained DNA damage response coupled with insufficient repair may be a pivotal mechanism for apoptosis induction in cancer cells exposed to dietary phytochemicals. Future research, including appropriate clinical investigation, should clarify these emerging concepts in the context of both genetic and epigenetic mechanisms dysregulated in cancer, and the pros and cons of specific dietary intervention strategies.

    Recycled incomplete identification procedures for blood screening

    The operation of blood banks aims at the cost-efficient supply of uncontaminated human blood. Each unit of donated blood goes through multiple tests for the presence of various pathogens that can cause transfusion-transmitted diseases. The blood screening process comprises two phases. In the first phase, blood units are screened together in pooled groups of a certain size by the ELISA (Enzyme-Linked Immuno-Sorbent Assay) test to detect various virus-specific antibodies. The second phase of the screening process is conducted by PCR (Polymerase Chain Reaction) testing of the individual blood units of the groups found clean in the initial ELISA phase. Thousands of units of donated blood arrive daily at the central blood bank for screening. Each screening scheme has associated testing costs and testing times. In addition, each blood unit arrives with an expiration date, so the shorter the testing time, the longer the residual lifetime left for the blood unit for future use. The controller thus faces a natural and well-motivated operations management problem: to shorten the testing period and reduce the testing costs without compromising too much on reliability. To achieve these goals, we propose a new testing procedure that we term the Recycled Incomplete Identification Procedure (RIIP). In RIIP, groups of pooled blood units that are found contaminated by the ELISA test are divided into smaller subgroups and group-tested again by ELISA, and so forth, until eventually the PCR test is conducted for those subgroups that are found clean. We analyze and optimize the performance of RIIP by deriving explicit formulas for the cost components of interest and maximizing the profit associated with the procedure. Our numerical results suggest that it can indeed be profitable to perform several ELISA cycles.
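
    The recycling idea can be sketched with a small simulation. The code below assumes idealised, error-free ELISA and PCR tests and uses made-up pool sizes, prevalence and cycle limits; it is not the cost model or the optimization analyzed in the paper, only a schematic of how contaminated pools are split and re-tested while clean pools proceed to individual PCR.

        import numpy as np

        rng = np.random.default_rng(7)

        def riip(units, group_size, split, max_cycles):
            """Schematic RIIP-style screening with idealised, error-free tests.

            units      : boolean array, True = contaminated blood unit
            group_size : initial ELISA pool size
            split      : number of subgroups a contaminated pool is divided into
            max_cycles : ELISA cycles before still-flagged units are discarded
            """
            elisa_tests = pcr_tests = released = discarded = 0
            pools = [units[i:i + group_size] for i in range(0, len(units), group_size)]

            for cycle in range(max_cycles):
                next_pools = []
                for pool in pools:
                    elisa_tests += 1
                    if not pool.any():                       # pool passes ELISA ...
                        pcr_tests += len(pool)               # ... so its units go to individual PCR
                        released += len(pool)
                    elif cycle + 1 < max_cycles and len(pool) > 1:
                        k = max(1, len(pool) // split)       # split and recycle through ELISA
                        next_pools += [pool[j:j + k] for j in range(0, len(pool), k)]
                    else:
                        discarded += len(pool)               # incomplete identification
                pools = next_pools
            return elisa_tests, pcr_tests, released, discarded

        # Hypothetical daily batch with 1% contamination prevalence
        units = rng.random(20_000) < 0.01
        e, p, rel, dis = riip(units, group_size=16, split=4, max_cycles=3)
        print(f"ELISA tests {e}, PCR tests {p}, units released {rel}, units discarded {dis}")

    The counts of ELISA and PCR tests, together with the numbers of units released and discarded, are the raw ingredients from which testing costs, testing times and the value of released blood would be assembled in a profit calculation.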

    Two-stage queueing network models for quality control and testing

    We study sojourn times in a two-node open queueing network with a processor sharing (PS) node and a delay node, with Poisson arrivals at the PS node. Motivated by quality control and blood testing applications, we consider a feedback mechanism in which customers may either leave the system after service at the PS node or move to the delay node; from the delay node, they always return to the PS node for new quality controls or blood tests. We propose three approximations for the distribution of the total sojourn time in the network; each of these approximations yields the exact mean sojourn time and very accurate results for the variance. The best of the three approximations is used to tackle an optimization problem that is mainly inspired by a blood testing application.
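
    A small event-driven simulation makes the model concrete. The sketch below assumes exponential service requirements at the PS node, exponential delays at the infinite-server delay node, and a fixed feedback probability q; the rates and routing probability are illustrative, and the proposed approximations themselves are not reproduced here. As a sanity check, the simulated mean is compared with the product-form (BCMP) mean sojourn time, which is exact for this Markovian network.

        import heapq
        import math
        import random

        random.seed(1)

        def simulate(lam, mu, nu, q, n_departures):
            """Event-driven simulation of the two-node network: a processor-sharing (PS)
            node with Poisson(lam) external arrivals and exp(mu) service requirements,
            and an infinite-server delay node with exp(nu) delays.  After each PS
            service a customer moves to the delay node with probability q and later
            returns to the PS node; otherwise it leaves.  Returns total sojourn times."""
            t, next_arrival, cid = 0.0, random.expovariate(lam), 0
            ps, entry, delay, sojourns = {}, {}, [], []

            while len(sojourns) < n_departures:
                n = len(ps)
                t_ps = t + min(ps.values()) * n if n else math.inf   # next PS completion
                t_delay = delay[0][0] if delay else math.inf         # next return from the delay node
                t_next = min(next_arrival, t_ps, t_delay)

                for c in ps:                                         # PS capacity shared equally
                    ps[c] -= (t_next - t) / n
                t = t_next

                if t == next_arrival:                                # external arrival
                    ps[cid], entry[cid] = random.expovariate(mu), t
                    cid += 1
                    next_arrival = t + random.expovariate(lam)
                elif t == t_ps:                                      # PS service completion
                    done = min(ps, key=ps.get)
                    del ps[done]
                    if random.random() < q:                          # feedback via the delay node
                        heapq.heappush(delay, (t + random.expovariate(nu), done))
                    else:
                        sojourns.append(t - entry.pop(done))         # customer leaves the system
                else:                                                # customer returns from the delay node
                    ps[heapq.heappop(delay)[1]] = random.expovariate(mu)
            return sojourns

        lam, mu, nu, q = 0.5, 2.0, 1.0, 0.5                          # illustrative rates and routing probability
        s = simulate(lam, mu, nu, q, n_departures=20_000)
        mean = sum(s) / len(s)
        var = sum((x - mean) ** 2 for x in s) / len(s)
        rho = lam / ((1 - q) * mu)                                   # PS-node utilisation
        exact_mean = rho / (1 - rho) / lam + q / ((1 - q) * nu)      # product-form mean sojourn time
        print(f"simulated mean {mean:.3f} vs product-form mean {exact_mean:.3f}; simulated variance {var:.3f}")

    The sojourn-time variance reported by the simulation is the quantity that the proposed approximations aim to capture accurately.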