
    A Reliable Method for the Selection of Exploitable Melanoma Archival Paraffin Embedded Tissues for Transcript Biomarker Profiling

    The source tissue for mRNA expression profiling of tumor biomarkers has traditionally been fresh-frozen tissue. Adapting formalin-fixed, paraffin-embedded (FFPE) tissues for routine mRNA profiling would be invaluable given their abundance and the clinical information attached to them; their use in the clinic, however, remains a challenge because of the poor quality of RNA extracted from such tissues. Here, we developed a method for selecting melanoma archival paraffin-embedded tissues that can be reliably used for transcript biomarker profiling. To that end, we used qRT-PCR to conduct a comparative study, in matched pairs of frozen and FFPE melanoma tissues, of the expression of 25 genes involved in angiogenesis/tumor invasion and 15 housekeeping genes. We developed a classification method that selects the samples with a good frozen/FFPE correlation and identifies those that should be discarded, on the basis of paraffin data for only four reference genes. We therefore propose a simple and inexpensive assay that improves the reliability of mRNA profiling in FFPE samples by allowing the identification and analysis of "good" samples only. This assay, which can be extended to other genes, would however need validation at the clinical level and on independent tumor series.
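    As a rough illustration of the selection principle described above, the sketch below flags an FFPE sample as exploitable when its reference-gene Ct profile correlates well with that of the matched frozen sample. The gene names, the 0.8 cutoff, and the Ct values are illustrative assumptions, not the genes or thresholds used in the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical choice of four reference genes; the paper's actual panel may differ.
REFERENCE_GENES = ["GAPDH", "ACTB", "B2M", "RPLP0"]

def keep_sample(frozen_ct, ffpe_ct, cutoff=0.8):
    """Keep the FFPE block if its reference-gene Ct values track the frozen pair."""
    r, _ = pearsonr(frozen_ct, ffpe_ct)
    return r >= cutoff

# One matched pair of Ct measurements for the four reference genes (made-up data):
frozen = np.array([18.2, 17.5, 20.1, 19.0])
ffpe = np.array([21.0, 20.1, 23.2, 21.8])  # globally shifted but well correlated
print(keep_sample(frozen, ffpe))  # True -> profile this FFPE sample
```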

    Visual versus semi-quantitative analysis of 18F-FDG-PET in amnestic MCI. A European Alzheimer's Disease Consortium (EADC) project

    We aimed to investigate the accuracy of FDG-PET in detecting the Alzheimer's disease (AD) brain glucose hypometabolic pattern in 142 patients with amnestic mild cognitive impairment (aMCI) and 109 healthy controls. aMCI patients were followed for at least two years or until conversion to dementia. Images were evaluated by visual read, by either moderately skilled or expert readers, and by a summary metric of AD-like hypometabolism (PALZ score). Seventy-seven patients converted to AD dementia after 28.6 ± 19.3 months of follow-up. Expert reading was the most accurate tool for distinguishing these MCI converters from healthy controls (sensitivity 89.6%, specificity 89.0%, accuracy 89.2%), while two moderately skilled readers were less specific (p < 0.05; sensitivity 85.7%, specificity 79.8%, accuracy 82.3%) and the PALZ score was less sensitive (p < 0.001; sensitivity 62.3%, specificity 91.7%, accuracy 79.6%). Among the remaining 67 aMCI patients, 50 were confirmed as aMCI after an average of 42.3 months, 12 developed other dementias, and 3 reverted to normalcy. In 30/50 persistent MCI patients, the expert recognized the AD hypometabolic pattern. In 13/50 aMCI, both the expert and the PALZ score were negative, while in 7/50 only the PALZ score was positive, owing to sparse hypometabolic clusters mainly in the frontal lobes. Visual FDG-PET reading by an expert is the most accurate method, but an automated, validated system may be particularly helpful to moderately skilled readers because of its high specificity, and should be mandatory when even a moderately skilled reader is unavailable.
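    For readers who want to see how the headline figures fit together, the snippet below recomputes the expert reader's metrics from confusion-matrix counts. The counts (69 true positives among 77 converters, 97 true negatives among 109 controls) are back-calculated from the reported percentages and are therefore an assumption, not data taken from the paper.

```python
def metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Assumed counts: 77 converters, 109 controls (back-calculated, not reported directly).
sens, spec, acc = metrics(tp=69, fn=8, tn=97, fp=12)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
# -> sensitivity 89.6%, specificity 89.0%, accuracy 89.2%
```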

    “Excellence R Us”: university research and the fetishisation of excellence

    The rhetoric of "excellence" is pervasive across the academy. It is used to refer to research outputs as well as researchers, theory and education, individuals and organisations, from art history to zoology. But does "excellence" actually mean anything? Does this pervasive narrative of "excellence" do any good? Drawing on a range of sources, we interrogate "excellence" as a concept and find that it has no intrinsic meaning in academia. Rather, it functions as a linguistic interchange mechanism. To investigate whether this linguistic function is useful, we examine how the rhetoric of excellence combines with narratives of scarcity and competition, and show that the hypercompetition that arises from the performance of "excellence" is completely at odds with the qualities of good research. We trace the roots of issues in reproducibility, fraud, and homophily to this rhetoric. But we also show that this rhetoric is an internal, and not primarily an external, imposition. We conclude by proposing an alternative rhetoric based on soundness and capacity-building. In the final analysis, it turns out that "excellence" is not excellent. Used in its current unqualified form, it is a pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship.

    Clinically Relevant Characterization of Lung Adenocarcinoma Subtypes Based on Cellular Pathways: An International Validation Study

    Lung adenocarcinoma (AD) is a predominant type of lung cancer that demonstrates significant morphologic and molecular heterogeneity. We sought to understand this heterogeneity through gene expression analyses of 432 AD samples, examining associations between 27 known cancer-related pathways and AD subtype, clinical characteristics, and patient survival. Unsupervised clustering of AD and gene expression enrichment analysis reveal that cell proliferation is the most important pathway separating tumors into subgroups. Further, ADs with increased cell proliferation demonstrate significantly poorer outcomes and an increased solid AD subtype component. Additionally, we find that tumors with any solid component have decreased survival compared to tumors without a solid component. These results suggest the potential to use a relatively simple pathological examination of a tumor to determine its aggressiveness and the patient's prognosis. Additional results suggest that a similar approach could determine a patient's sensitivity to targeted treatment. We then demonstrated the consistency of these findings in two independent AD cohorts from Asia (N = 87) and Europe (N = 89) using identical analytic procedures.
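    The analysis style described here (unsupervised clustering of tumors on pathway-level expression, followed by outcome comparison between the resulting subgroups) can be sketched as follows. The random score matrix and the two-cluster cut are placeholders; the study's actual pathway scoring and cluster number may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Rows = 432 tumors, columns = 27 pathway activity scores (synthetic stand-in data).
scores = rng.normal(size=(432, 27))
scores[:200, 0] += 2.0  # pretend pathway 0 (cell proliferation) separates a subgroup

Z = linkage(scores, method="ward")                  # hierarchical clustering of tumors
clusters = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into two subgroups
print(np.bincount(clusters)[1:])  # subgroup sizes; survival would then be compared
                                  # between the subgroups (e.g., with a log-rank test)
```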

    Crises and collective socio-economic phenomena: simple models and challenges

    Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the Random Field Ising model (RFIM) indeed provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilising self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can badly fail at solving simple coordination problems. We also insist on the issue of time scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of decision rules that violate so-called "detailed balance" is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic.
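    To make the RFIM mechanism concrete, here is a minimal zero-temperature mean-field sketch: binary agents choose S_i = ±1 under an external incentive F, an idiosyncratic preference f_i, and imitation of the average choice m. All parameter values are illustrative; for imitation strength J above the critical value (√(π/2) ≈ 1.25 for a unit Gaussian field), m jumps discontinuously as F is swept, i.e. a small change in incentives triggers a macroscopic rupture.

```python
import numpy as np

rng = np.random.default_rng(1)
N, J = 10_000, 1.5             # agents and imitation strength (J = 1.5 > ~1.25)
f = rng.normal(0.0, 1.0, N)    # quenched idiosyncratic preferences
S = -np.ones(N)                # everyone initially opts out

Fs = np.linspace(-2.0, 2.0, 81)
ms = []
for F in Fs:                   # slowly increase the external incentive
    for _ in range(50):        # relax to a metastable state at this F
        S = np.sign(F + f + J * S.mean())  # best-response (zero-temperature) update
    ms.append(S.mean())

jump = int(np.argmax(np.diff(ms)))
print(f"largest jump in average choice near F = {Fs[jump]:.2f}")
```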

    Should We Abandon the t-Test in the Analysis of Gene Expression Microarray Data: A Comparison of Variance Modeling Strategies

    High-throughput post-genomic studies are now routinely and promisingly used in biological and biomedical research. The main statistical approach to selecting genes differentially expressed between two groups is the t-test, which has been criticized in the literature. Numerous alternatives have been developed based on different and innovative variance modeling strategies. However, a critical issue is that selecting a different test usually leads to a different gene list. In this context, and given the current tendency to apply the t-test, identifying the most efficient approach in practice remains crucial. To provide elements of an answer, we conducted a comparison of eight tests representative of variance modeling strategies in gene expression data: Welch's t-test, ANOVA [1], Wilcoxon's test, SAM [2], RVM [3], limma [4], VarMixt [5] and SMVar [6]. Our comparison process relies on four steps (gene list analysis, simulations, spike-in data, and re-sampling) to formulate comprehensive and robust conclusions about test performance in terms of statistical power, false-positive rate, execution time, and ease of use. Our results raise concerns about the ability of some methods to control the expected number of false positives at a desirable level. Moreover, two tests (limma and VarMixt) show significant improvement over the t-test, in particular for small sample sizes. Limma also presents several practical advantages, so we advocate its use for analyzing gene expression data.
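    The simulation step of such a comparison can be illustrated with a null experiment: genes with no true group difference are generated, Welch's t-test is applied gene by gene, and the observed false-positive rate is checked against the nominal level. Sample sizes and gene counts below are illustrative; limma's moderated statistic (an R method) is only described in the closing comment, not reimplemented.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n_genes, n_per_group, alpha = 10_000, 3, 0.05   # tiny groups, typical of microarrays

x = rng.normal(size=(n_genes, n_per_group))     # group 1: no true difference
y = rng.normal(size=(n_genes, n_per_group))     # group 2: no true difference
pvals = ttest_ind(x, y, axis=1, equal_var=False).pvalue  # Welch's t-test per gene

print(f"observed false-positive rate: {np.mean(pvals < alpha):.3f} (nominal {alpha})")
# Moderated statistics such as limma's shrink each gene's variance toward a pooled
# estimate, stabilizing the denominator when per-group sample sizes are this small.
```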