57 research outputs found

    Cubic exact solutions for the estimation of pairwise haplotype frequencies: implications for linkage disequilibrium analyses and a web tool 'CubeX'

    Abstract

    Background: The frequency of a haplotype comprising one allele at each of two loci can be expressed as a cubic equation (the 'Hill equation'), the solution of which gives that frequency. Most haplotype and linkage disequilibrium analysis programs use iteration-based algorithms that substitute an estimate of haplotype frequency into the equation, producing a new estimate that is repeatedly fed back until the values converge to a maximum likelihood estimate (expectation-maximisation).

    Results: We present a program, "CubeX", which calculates the biologically possible exact solution(s) and provides estimated haplotype frequencies, D', r² and χ² values for each. CubeX provides a "complete" analysis of haplotype frequencies and linkage disequilibrium for a pair of biallelic markers in situations where sampling variation and genotyping errors distort sample Hardy-Weinberg equilibrium, potentially yielding more than one biologically possible solution. We also present an analysis of simulations and real data using the algebraically exact solution, which indicates that under perfect sample Hardy-Weinberg equilibrium there is only one biologically possible solution, but that under other conditions there may be more.

    Conclusion: Our analyses demonstrate that sample Hardy-Weinberg equilibrium is particularly susceptible to distortion at lower allele frequencies, at lower sample numbers, under population stratification, and where |D'| may equal 1. This has significant implications for the calculation of linkage disequilibrium in small samples (e.g. HapMap) and for rarer alleles (e.g. paucimorphisms, q < 0.05), which may have particular disease relevance and require improved approaches for meaningful evaluation.
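    The contrast the abstract draws between iterative estimation and CubeX's exact cubic roots can be made concrete. Below is a minimal Python sketch, not the CubeX source, of the expectation-maximisation loop most haplotype programs use for two biallelic loci (alleles A/a and B/b), followed by the standard D', r² and χ² calculations; the function names, 3×3 count-table layout, starting frequencies, and convergence tolerance are illustrative assumptions.

```python
# Sketch of EM haplotype-frequency estimation for two biallelic SNPs.
# Not the CubeX algorithm (which solves the Hill cubic algebraically);
# this is the iterative approach the abstract contrasts it with.

def em_haplotype_frequencies(n, tol=1e-10, max_iter=1000):
    """Estimate haplotype frequencies from a 3x3 genotype count table.

    n[i][j] = number of individuals with i copies of allele A and
    j copies of allele B (i, j in 0..2).
    Returns (p_AB, p_Ab, p_aB, p_ab).
    """
    total = 2 * sum(sum(row) for row in n)  # number of chromosomes
    # Haplotypes are unambiguous for every genotype except the double
    # heterozygote n[1][1], which may be phased AB/ab or Ab/aB.
    base_AB = 2 * n[2][2] + n[2][1] + n[1][2]
    base_Ab = 2 * n[2][0] + n[2][1] + n[1][0]
    base_aB = 2 * n[0][2] + n[0][1] + n[1][2]
    base_ab = 2 * n[0][0] + n[0][1] + n[1][0]
    p_AB = p_Ab = p_aB = p_ab = 0.25  # arbitrary starting point
    for _ in range(max_iter):
        # E-step: expected fraction of double heterozygotes in AB/ab phase.
        denom = p_AB * p_ab + p_Ab * p_aB
        frac = (p_AB * p_ab) / denom if denom > 0 else 0.5
        # M-step: re-estimate frequencies from expected haplotype counts.
        new = (
            (base_AB + frac * n[1][1]) / total,
            (base_Ab + (1 - frac) * n[1][1]) / total,
            (base_aB + (1 - frac) * n[1][1]) / total,
            (base_ab + frac * n[1][1]) / total,
        )
        if max(abs(a - b) for a, b in zip(new, (p_AB, p_Ab, p_aB, p_ab))) < tol:
            p_AB, p_Ab, p_aB, p_ab = new
            break
        p_AB, p_Ab, p_aB, p_ab = new
    return p_AB, p_Ab, p_aB, p_ab


def ld_statistics(p_AB, p_Ab, p_aB, p_ab, n_chromosomes):
    """Standard LD statistics D, D', r² and χ² from haplotype frequencies.

    Assumes both markers are polymorphic (0 < p_A < 1 and 0 < p_B < 1).
    """
    p_A = p_AB + p_Ab
    p_B = p_AB + p_aB
    D = p_AB - p_A * p_B
    if D >= 0:
        d_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        d_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    d_prime = D / d_max if d_max > 0 else 0.0
    r2 = D * D / (p_A * (1 - p_A) * p_B * (1 - p_B))
    chi2 = r2 * n_chromosomes  # chi-squared = 2N * r²
    return D, d_prime, r2, chi2


# Example: rows = copies of allele A (0, 1, 2), columns = copies of allele B.
counts = [[10, 5, 1], [6, 20, 4], [2, 7, 12]]
freqs = em_haplotype_frequencies(counts)
print(ld_statistics(*freqs, n_chromosomes=2 * sum(map(sum, counts))))
```

    Note that this loop converges to a single root of the Hill cubic; per the abstract, when sample Hardy-Weinberg equilibrium is distorted more than one biologically possible root may exist, which is the gap CubeX's exact solutions are designed to expose.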

    Analytic philosophy for biomedical research: the imperative of applying yesterday's timeless messages to today's impasses

    The mantra that "the best way to predict the future is to invent it" (attributed to the computer scientist Alan Kay) exemplifies current expectations of the technical and innovative sides of biomedical research. However, for technical advancements to make a real impact both on patient health and on genuine scientific understanding, a number of lingering challenges, spanning the spectrum from protein biology to randomized controlled trials, must be overcome. The proposal in this chapter is that philosophy is essential to this process. Reviewing select examples from the history of science and philosophy, disciplines that were indistinguishable until the mid-nineteenth century, I argue that progress on the many impasses in biomedicine can be achieved by emphasizing theoretical work (in the true sense of the word 'theory') as a vital foundation for experimental biology. Finally, a philosophical biology program that could provide a framework for such theoretical investigations is outlined.

    Robust Biomarkers: Methodologically Tracking Causal Processes in Alzheimer’s Measurement

    In biomedical measurement, biomarkers are used to achieve reliable prediction of, and useful causal information about, patient outcomes while minimizing the complexity, resource demands, and invasiveness of measurement. A biomarker is an assayable metric that discloses the status of a biological process of interest, be it normative, pathophysiological, or responsive to intervention. The greatest utility of biomarkers comes from their ability to help clinicians (and researchers) make and evaluate clinical decisions. In this paper we discuss a specific methodological use of clinical biomarkers in pharmacological measurement: some biomarkers, called 'surrogate markers', are used to substitute for a clinically meaningful endpoint corresponding to events and their penultimate risk factors. We confront the reliability of clinical biomarkers that are used to gather information about clinically meaningful endpoints. Our aim is to present a systematic methodology for assessing the reliability of multiple surrogate markers (and biomarkers in general). To do this we draw upon the robustness analysis literature in the philosophy of science and the empirical use of clinical biomarkers. After introducing robustness analysis we present two problems with biomarkers in relation to reliability. Next, we propose an intervention-based robustness methodology for organizing the reliability of biomarkers in general, resting on three conditions:

    (R1) Intervention-based demonstration of partial independence of modes: in biomarkers, partial independence can be demonstrated through exogenous interventions that modify a process some number of "steps" removed from each of the markers.

    (R2) Comparison of diverging and converging results across biomarkers: by systematically comparing partially independent biomarkers we can track the conditions under which markers fail to converge in their results and those under which they successfully converge.

    (R3) Information within the context of theory: through a systematic cross-comparison of the markers we can draw causal conclusions as well as eliminate competing theories.

    We apply our robust methodology to currently developing Alzheimer's research to show its usefulness for drawing causal conclusions.

    Factive Scientific Understanding Without Accurate Representation

    This paper analyzes two ways in which idealized biological models produce factive scientific understanding. I then argue that models can provide factive scientific understanding of a phenomenon without providing an accurate representation of the (difference-making) features of their real-world target system(s). My analysis of these cases also suggests that the debate over scientific realism needs to investigate the factive scientific understanding produced by scientists' use of idealized models rather than the accuracy of the scientific models themselves.
