24 research outputs found

    An optimized protocol for microarray validation by quantitative PCR using amplified amino allyl labeled RNA

    Background: Validation of microarray data by quantitative real-time PCR (qPCR) is often limited by the low amount of available RNA. This raises the possibility of performing validation experiments on the amplified amino allyl labeled RNA (AA-aRNA) left over from microarrays. To test this possibility, we used an ongoing study in our laboratory aimed at identifying new biomarkers of graft rejection through transcriptomic analysis of blood cells from brain-dead organ donors.

    Results: qPCR for ACTB performed on AA-aRNA from 15 donors yielded Cq values 8 cycles higher than when the original RNA was used (P < 0.001), suggesting strong inhibition of qPCR performed on AA-aRNA. When the expression levels of 5 other genes were measured in AA-aRNA generated from a universal reference RNA, qPCR sensitivity and efficiency were decreased. This prevented quantification of one low-abundance gene that was readily quantified in un-amplified and un-labeled RNA. To overcome this limitation, we modified the reverse transcription (RT) protocol that generates cDNA from AA-aRNA as follows: a denaturation step and a 2-min incubation at room temperature to improve random-primer annealing, a transcription initiation step to improve RT, and a final RNase H treatment to degrade the remaining RNA. Tested on universal reference AA-aRNA, these modifications provided a gain of 3.4 Cq (average over 5 genes, P < 0.001) and an increase in qPCR efficiency (from -1.96 to -2.88; P = 0.02). They also allowed detection of a low-abundance gene that was previously undetectable. Tested on AA-aRNA from 15 brain-dead organ donors, the optimized RT provided a gain of 2.7 cycles (average over 7 genes, P = 0.004). Finally, qPCR results correlated significantly with the microarray data.

    Conclusion: We present an optimized RT protocol for validating microarrays by qPCR from AA-aRNA. This is particularly valuable in experiments where only a limited amount of RNA is available.
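    As a reading aid (not part of the published protocol), the minimal Python sketch below shows the standard relationships behind these numbers: per-cycle amplification efficiency derived from a standard-curve slope, and the fold difference in detectable template implied by a given Cq shift. The function names and printed values are illustrative.

        def efficiency_from_slope(slope):
            # Standard-curve relationship: E = 10**(-1/slope) - 1.
            # A slope of about -3.32 corresponds to ~100% per-cycle efficiency
            # (perfect doubling of template each cycle).
            return 10 ** (-1.0 / slope) - 1.0

        def fold_change_from_delta_cq(delta_cq, efficiency=1.0):
            # Fold difference in detectable template implied by a Cq shift,
            # assuming the given per-cycle efficiency.
            return (1.0 + efficiency) ** delta_cq

        print(f"slope -3.32 -> efficiency ~ {efficiency_from_slope(-3.32):.0%}")
        # At 100% efficiency, the 8-cycle Cq increase observed on AA-aRNA
        # corresponds to roughly a 2**8 = 256-fold drop in apparent template.
        print(f"8-cycle shift  ~ {fold_change_from_delta_cq(8):.0f}-fold")
        # The 3.4-Cq gain from the optimized RT protocol recovers roughly 10-fold signal.
        print(f"3.4-cycle gain ~ {fold_change_from_delta_cq(3.4):.1f}-fold")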

    Expanding the Understanding of Biases in Development of Clinical-Grade Molecular Signatures: A Case Study in Acute Respiratory Viral Infections

    The promise of modern personalized medicine is to use molecular and clinical information to better diagnose, manage, and treat disease on an individual patient basis. These functions are predominantly enabled by molecular signatures, which are computational models that predict phenotypes and other responses of interest from high-throughput assay data. Data analysis is a central component of molecular signature development and can jeopardize the entire process if conducted incorrectly. While exploratory data analysis may tolerate suboptimal protocols, clinical-grade molecular signatures are subject to vastly stricter requirements. Closing the gap between standards for exploratory versus clinically successful molecular signatures entails a thorough understanding of possible biases in the data analysis phase and the development of strategies to avoid them.

    Using a recently introduced data-analytic protocol as a case study, we provide an in-depth examination of the poorly studied biases of data-analytic protocols related to signature multiplicity, biomarker redundancy, data preprocessing, and validation of signature reproducibility. The methodology and results presented in this work aim to expand the understanding of the data-analytic biases that affect development of clinically robust molecular signatures.

    Several recommendations follow from this study. First, all molecular signatures of a phenotype should be extracted to the extent possible, in order to provide comprehensive and accurate grounds for understanding disease pathogenesis. Second, redundant genes should generally be removed from final signatures to facilitate reproducibility and decrease manufacturing costs. Third, data preprocessing procedures should be designed so as not to bias biomarker selection (see the sketch below). Finally, molecular signatures developed on one phenotype or patient population and applied to another should be treated with great caution.
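    The preprocessing and biomarker-selection recommendations can be illustrated with a minimal Python sketch (not the protocol examined in the paper): scaling and feature selection are fit inside each cross-validation fold via a scikit-learn Pipeline, so held-out samples never influence biomarker selection and the resulting performance estimate is not optimistically biased. The data, gene counts, and parameter choices below are toy assumptions.

        # Minimal sketch of bias-aware signature evaluation (toy data):
        # every data-dependent step (scaling, biomarker selection, classifier)
        # is fit inside each cross-validation fold.
        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))    # toy expression matrix: 60 samples x 500 genes
        y = rng.integers(0, 2, size=60)   # toy phenotype labels

        pipeline = Pipeline([
            ("scale", StandardScaler()),                 # preprocessing fit per fold
            ("select", SelectKBest(f_classif, k=20)),    # biomarker selection per fold
            ("clf", LogisticRegression(max_iter=1000)),  # signature model
        ])

        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = cross_val_score(pipeline, X, y, cv=cv, scoring="roc_auc")
        print(f"Cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")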

    A transversal approach to predict gene product networks from ontology-based similarity

    Background: Interpretation of transcriptomic data is usually performed with a "standard" approach, which consists in clustering genes according to their expression patterns and exploiting Gene Ontology (GO) annotations within each expression cluster. This approach makes it difficult to highlight functional relationships between gene products that belong to different expression clusters. To address this issue, we propose a transversal analysis that aims to predict functional networks based on a combination of GO processes and expression data.

    Results: The transversal approach presented in this paper consists in computing the semantic similarity between gene products in a Vector Space Model. Through a weighting scheme over the annotations, we take into account the representativity of the terms that annotate a gene product. Comparing annotation vectors yields a matrix of gene product similarities. Combined with expression data, the matrix is displayed as a set of functional gene networks. The transversal approach was applied to 186 genes related to enterocyte differentiation stages and resulted in 18 functional networks that proved to be biologically relevant. These results were compared with those obtained through the standard approach and with an approach based on information content similarity.

    Conclusion: Complementary to the standard approach, the transversal approach offers new insight into cellular mechanisms and suggests new research hypotheses by combining gene product networks based on semantic similarity with expression data.
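    A minimal Python sketch of the general idea follows (toy GO term IDs and an IDF-like weighting; not the authors' exact weighting scheme): each gene product is represented as a weighted vector over its GO annotations, and pairwise cosine similarity yields the gene product similarity matrix that is then combined with expression data.

        import math

        # Toy annotations: each gene product is annotated with a set of GO terms.
        annotations = {
            "geneA": {"GO:0006355", "GO:0008283"},
            "geneB": {"GO:0006355", "GO:0030154"},
            "geneC": {"GO:0008283", "GO:0030154"},
        }

        # IDF-like weighting: terms annotating fewer gene products weigh more,
        # approximating the "representativity" of an annotation.
        n_genes = len(annotations)
        term_counts = {}
        for terms in annotations.values():
            for t in terms:
                term_counts[t] = term_counts.get(t, 0) + 1
        weights = {t: math.log(n_genes / c) + 1.0 for t, c in term_counts.items()}

        def vector(gene):
            # Weighted annotation vector for one gene product.
            return {t: weights[t] for t in annotations[gene]}

        def cosine(u, v):
            # Cosine similarity between two sparse annotation vectors.
            dot = sum(u[t] * v[t] for t in u if t in v)
            norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
            return dot / norm if norm else 0.0

        genes = sorted(annotations)
        for a in genes:
            row = [f"{cosine(vector(a), vector(b)):.2f}" for b in genes]
            print(a, row)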

    Practical guidelines for rigor and reproducibility in preclinical and clinical studies on cardioprotection

    The potential for ischemic preconditioning to reduce infarct size was first recognized more than 30 years ago. Despite extension of the concept to ischemic postconditioning and remote ischemic conditioning, and literally thousands of experimental studies in various species and models that identified a multitude of signaling steps, so far only a single, very recent study has unequivocally translated cardioprotection into improved clinical outcome as the primary endpoint in patients. Many potential reasons for this disappointing lack of clinical translation of cardioprotection have been proposed, including lack of rigor and reproducibility in preclinical studies, and poor design and conduct of clinical trials. There is, however, universal agreement that robust preclinical data are a mandatory prerequisite to initiate a meaningful clinical trial. In this context, it is disconcerting that the CAESAR consortium (Consortium for preclinicAl assESsment of cARdioprotective therapies), in a highly standardized multi-center approach to preclinical studies, identified only ischemic preconditioning, but not nitrite or sildenafil given as an adjunct to reperfusion, as reducing infarct size. However, ischemic preconditioning, due to its very nature, can only be used in elective interventions and not in acute myocardial infarction. Therefore, better approaches to identify robust and reproducible strategies of cardioprotection, which can subsequently be tested in clinical trials, must be developed. We refer to the recent guidelines for experimental models of myocardial ischemia and infarction, and now aim to provide practical guidelines to ensure rigor and reproducibility in preclinical and clinical studies on cardioprotection. In line with those guidelines, we define rigor as standardized, state-of-the-art design, conduct, and reporting of a study, which is then a prerequisite for reproducibility, i.e. replication of results by another laboratory performing exactly the same experiment.

    Integrative data mining for assessing international conflict events
