
    Conceptual Modeling Applied to Genomics: Challenges Faced in Data Loading

    Today's genomic domain revolves around uncertainty: too many imprecise concepts, too much information to be managed properly. Considering that conceptualization is the most distinctive human characteristic, it makes sense to try to conceptualize the principles that guide the essence of why humans are as we are. The question can of course be generalized to any species, but this work is especially interested in showing how conceptual modeling is strictly required to understand the "execution model" that human beings "implement". The main thesis is that the Human Genome can only be properly understood through in-depth knowledge of the Conceptual Model associated with it. This Model-Driven perspective on the Human Genome opens challenging possibilities by viewing individuals as implementations of that Conceptual Model, where different values associated with different modeling primitives explain the diversity among individuals as well as potential, unexpected variations and their unwanted effects in terms of illness. This work focuses on the challenges faced in loading data from conventional resources into Information Systems created according to the conceptual modeling approach outlined above. It reports on various loading efforts, the problems encountered, and the solutions to those problems. It also makes a strong argument for why conventional methods so often fail to meet the demands of the so-called 'data chaos' problems associated with the genomics domain. Van der Kroon, M. (2011). Conceptual Modeling Applied to Genomics: Challenges Faced in Data Loading. http://hdl.handle.net/10251/16993
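
    As a concrete illustration of the loading problem this abstract describes, the sketch below normalizes two heterogeneous source records into instances of a tiny conceptual schema. The entity names (Gene, Variation) and all field names are invented for this example and are not taken from the cited work.

```python
from dataclasses import dataclass

# Hypothetical conceptual-model entities; the cited work's actual schema
# for the human genome is far richer than this two-class sketch.
@dataclass(frozen=True)
class Gene:
    symbol: str
    chromosome: str

@dataclass(frozen=True)
class Variation:
    gene: Gene
    position: int
    reference: str
    observed: str

def load_record(raw: dict) -> Variation:
    """Normalize one messy source record into conceptual-model instances.

    Conventional resources disagree on field names and formats, which is
    one face of the 'data chaos' the abstract mentions.
    """
    symbol = (raw.get("gene") or raw.get("GENE_SYMBOL", "")).strip().upper()
    chrom = str(raw.get("chr") or raw.get("chromosome", "")).removeprefix("chr")
    return Variation(
        gene=Gene(symbol=symbol, chromosome=chrom),
        position=int(raw.get("pos") or raw["position"]),
        reference=raw.get("ref", raw.get("reference_allele", "")),
        observed=raw.get("alt", raw.get("observed_allele", "")),
    )

# Two records meaning the same thing, expressed differently by two sources.
print(load_record({"gene": "brca1", "chr": "chr17", "pos": "43044295",
                   "ref": "A", "alt": "G"}))
print(load_record({"GENE_SYMBOL": "BRCA1", "chromosome": 17,
                   "position": 43044295, "reference_allele": "A",
                   "observed_allele": "G"}))
```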

    Alleles versus mutations: Understanding the evolution of genetic architecture requires a molecular perspective on allelic origins

    Perspectives on the role of large-effect quantitative trait loci (QTL) in the evolution of complex traits have shifted back and forth over the past few decades. Different sets of studies have produced contradictory insights into the evolution of genetic architecture. I argue that much of the confusion results from a failure to distinguish mutational and allelic effects, a failure that stems from using the Fisherian model of adaptive evolution as the lens through which the evolution of adaptive variation is examined. A molecular-based perspective reveals that allelic differences can involve the cumulative effects of many mutations plus intragenic recombination, a model that is supported by extensive empirical evidence. I discuss how different selection regimes could produce very different architectures of allelic effects under a molecular-based model, which may explain conflicting insights into genetic architecture from studies of variation within populations versus between divergently selected populations. I address shortcomings of genome-wide association study (GWAS) practices in light of more suitable models of allelic evolution, and suggest alternate GWAS strategies to generate more valid inferences about genetic architecture. Finally, I discuss how adopting more suitable models of allelic evolution could help redirect research on complex trait evolution toward addressing more meaningful questions in evolutionary biology.
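
    To make the mutation-versus-allele distinction concrete, here is a minimal sketch (not from the paper): an allele is modeled as a bundle of mutations, its phenotypic effect is the cumulative sum of mutational effects, and intragenic recombination yields a new allele combining mutations from two parental alleles. All numbers are invented.

```python
import random

random.seed(1)

# An allele is modeled as {site: effect}: the mutations it carries.
def allelic_effect(allele: dict[int, float]) -> float:
    """Allelic effect = cumulative effect of all mutations on that haplotype."""
    return sum(allele.values())

def recombine(a: dict[int, float], b: dict[int, float],
              breakpoint: int) -> dict[int, float]:
    """Intragenic crossover: sites left of the breakpoint come from allele a,
    sites at or right of it from allele b, yielding a brand-new allele."""
    child = {s: e for s, e in a.items() if s < breakpoint}
    child.update({s: e for s, e in b.items() if s >= breakpoint})
    return child

# Two alleles that each accumulated several small-effect mutations.
allele_a = {site: random.gauss(0, 0.1) for site in random.sample(range(1000), 5)}
allele_b = {site: random.gauss(0, 0.1) for site in random.sample(range(1000), 5)}

print(f"effect of A: {allelic_effect(allele_a):+.3f}")
print(f"effect of B: {allelic_effect(allele_b):+.3f}")

# An allelic difference can be large without any single large-effect mutation.
recombinant = recombine(allele_a, allele_b, breakpoint=500)
print(f"effect of recombinant: {allelic_effect(recombinant):+.3f}")
```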

    Experiments, Simulations, and Lessons from Experimental Evolution

    Philosophers and scientists have sought to draw methodological distinctions among different kinds of experiments, and between experimentation and other scientific methodologies. This dissertation focuses on two such cases: hypothesis-testing versus exploratory experiments, and experiment versus simulation. I draw on examples from experimental evolution--evolving organisms in a controlled laboratory setting to study evolution via natural selection in real time--to challenge the way we think about these distinctions. In the case of hypothesis-testing versus exploratory experiments, philosophers have distinguished these categories in terms of the role of theory in experiment. I discuss examples from experimental evolution which occupy the poorly characterized middle ground between the two categories. I argue that we should take more seriously the point that multiple theoretical backgrounds can come into play at multiple points in an experiment, and propose some new contributions toward clarifying the conceptual space of experimental inquiry. In the case of experiment versus simulation, people have attempted to delineate cases of science cleanly into these two categories, and to base judgments about their epistemic value on these categorizations. I discuss and reject two arguments for the epistemic superiority of experiments over simulations: (1) experiments put scientists in a better position to make valid inferences about the natural world; (2) experiments are a superior source of surprises or novel insights. Both of these claims are false as generalizations across science. Treating the experiment/simulation distinction as a basis for in-principle judgments about epistemic value directs us to the wrong issues. This leaves us with a question: what should we focus on instead? I offer preliminary considerations for a framework for evaluating inferences from objects of study to targets of inquiry in the world, one that departs from the problematic custom of basing such evaluations on questions like "Was it an experiment or a simulation?" This framework is based on the idea of capturing relevant similarities while appropriately accounting for what researchers already know and what they are trying to learn by asking the scientific question at hand.

    A Third Way to the Selected Effect/Causal Role Distinction in the Great Encode Debate

    Since the ENCODE project published its final results in a series of articles in 2012, there has been no consensus on what its implications are. ENCODE's central and most controversial claim was that there is essentially no junk DNA: most sections of the human genome believed to be "junk" are functional. This claim was met with many reservations. If researchers disagree about whether there is junk DNA, they first have to agree on a concept of function and on how function, given a particular definition, can be discovered. The ENCODE debate centered on a notion of function that assumes a strong dichotomy between evolutionary and non-evolutionary function and causes, a dichotomy prevalent in the Modern Evolutionary Synthesis. In contrast to how the debate is typically portrayed, both sides share a commitment to this distinction. The distinction is, however, much debated in alternative approaches to evolutionary theory, such as the extended evolutionary synthesis (EES). We show that because the ENCODE debate is grounded in a particular notion of function, it is unclear how it connects to broader debates about which evolutionary framework is correct. Furthermore, we show how arguments brought forward in the controversy, particularly arguments from mathematical population genetics, are deeply embedded in their particular disciplinary contexts and reflect substantive assumptions about the evolution of genomes. With this article, we aim to provide an anatomy of the ENCODE debate that offers a new perspective on the notions of function both sides employed, and to situate the debate within wider discussions of the forces operating in evolution.
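
    One family of population-genetic arguments alluded to above is the mutational-load objection to ENCODE's functionality claim. The toy calculation below uses the classical approximation that equilibrium mean fitness under recurrent deleterious mutation is about exp(-U); the per-generation mutation count is a commonly cited order of magnitude, and the remaining parameter values are illustrative assumptions, not figures from the article.

```python
import math

def mean_fitness(mutations_per_generation: float,
                 functional_fraction: float,
                 deleterious_if_functional: float) -> float:
    """Classical mutation-load approximation: w_bar = exp(-U), where U is
    the deleterious mutation rate per genome per generation."""
    U = mutations_per_generation * functional_fraction * deleterious_if_functional
    return math.exp(-U)

# ~100 new mutations per human genome per generation is a common estimate;
# the functional fractions and deleterious fraction are assumptions chosen
# to show how the argument scales.
for functional in (0.05, 0.20, 0.80):
    w = mean_fitness(100, functional, deleterious_if_functional=0.4)
    print(f"functional fraction {functional:.0%}: mean fitness = {w:.3g}")
```

    Under these assumptions, a genome that is 80% functional implies an equilibrium mean fitness so low that the population could not persist, which is the form of the load argument critics of ENCODE advanced.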

    STABLE ADAPTIVE STRATEGY of HOMO SAPIENS and EVOLUTIONARY RISK of HIGH TECH. Transdisciplinary essay

    The co-evolutionary concept of a three-modal stable evolutionary strategy of Homo sapiens is developed. The concept is based on the principle of evolutionary complementarity of anthropogenesis: the value of evolutionary risk and the path of human evolution are defined simultaneously by a descriptive parameter (evolutionary efficiency) and a creative-teleological parameter (evolutionary correctness), neither of which can be instrumentally reduced to the other. The resulting values of both parameters define the trends of biological, social, cultural, and techno-rationalistic human evolution through a two-gear mechanism: gene-cultural co-evolution and techno-humanitarian balance. The resultant of each can be estimated by the ratio of socio-psychological predispositions toward humanization and dehumanization in mentality. An explanatory model and a methodology for evaluating the creative-teleological component of the evolutionary risk of the NBIC technological complex are proposed. An integral part of the model is evolutionary semantics (a time-varying semantic code governing the compliance of the biological, socio-cultural, and techno-rationalist adaptive modules of the human stable evolutionary strategy).

    Explaining additional genetic variation in complex traits

    Genome-wide association studies (GWAS) have provided valuable insights into the genetic basis of complex traits, discovering >6000 variants associated with >500 quantitative traits and common complex diseases in humans. The associations identified so far represent only a fraction of those that influence phenotype, because there are likely to be many variants across the entire frequency spectrum, each of which influences multiple traits, with only a small average contribution to the phenotypic variance. This presents a considerable challenge to further dissection of the remaining unexplained genetic variance within populations, which limits our ability to predict disease risk, identify new drug targets, improve and maintain food sources, and understand natural diversity. This challenge will be met within the current framework through larger sample size, better phenotyping, including recording of nongenetic risk factors, focused study designs, and an integration of multiple sources of phenotypic and genetic information. The current evidence supports the application of quantitative genetic approaches, and we argue that one should retain simpler theories until simplicity can be traded for greater explanatory power.
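
    To see why each associated variant contributes so little, the sketch below applies the standard additive-model identity: a biallelic variant with allele frequency p and additive effect beta contributes 2p(1-p)beta^2 to the trait variance (Hardy-Weinberg equilibrium assumed). The frequencies and effect sizes are made up for illustration and are not from the article.

```python
# Per-variant contribution to additive genetic variance under the standard
# biallelic additive model: var_i = 2 * p * (1 - p) * beta^2.
def variant_variance(p: float, beta: float) -> float:
    return 2 * p * (1 - p) * beta**2

# Illustrative mix: common small-effect variants and a rarer larger one.
variants = [(0.40, 0.05), (0.25, 0.04), (0.10, 0.08), (0.01, 0.20)]

phenotypic_variance = 1.0  # trait scaled to unit variance (assumption)
for p, beta in variants:
    share = variant_variance(p, beta) / phenotypic_variance
    print(f"p={p:.2f}, beta={beta:.2f}: {share:.2%} of phenotypic variance")

total = sum(variant_variance(p, b) for p, b in variants) / phenotypic_variance
print(f"combined: {total:.2%}; thousands of such variants are needed "
      f"to account for a strongly heritable trait")
```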

    The diagnosis of mental disorders: the problem of reification

    A pressing need for interrater reliability in the diagnosis of mental disorders emerged during the mid-twentieth century, prompted in part by the development of diverse new treatments. The Diagnostic and Statistical Manual of Mental Disorders (DSM), third edition, answered this need by introducing operationalized diagnostic criteria that were field-tested for interrater reliability. Unfortunately, the focus on reliability came at a time when the scientific understanding of mental disorders was embryonic and could not yield valid disease definitions. Based on accreting problems with the current DSM, fourth edition (DSM-IV), classification, it is apparent that validity will not be achieved simply by refining criteria for existing disorders or by the addition of new disorders. Yet DSM-IV diagnostic criteria dominate thinking about mental disorders in clinical practice, research, treatment development, and law. As a result, the modern DSM system, intended to create a shared language, also creates epistemic blinders that impede progress toward valid diagnoses. Insights that are beginning to emerge from psychology, neuroscience, and genetics suggest possible strategies for moving forward.

    Standards and legacies: Pragmatic constraints on a uniform gene nomenclature

    Over the past half-century, there have been concerted efforts to standardize how clinicians and medical researchers refer to genetic material. However, practical and historical impediments thwart this goal. In the current paper I argue that the ontological status of a genetic mutation cannot be cleanly separated from its pragmatic role in therapy. Attempts at standardization fail due to the non-standardized ends to which genetic information is employed, along with historical inertia and unregulated local innovation. These factors prevent rationalistic attempts to ‘modernize’ what is otherwise trumpeted as the most modern of the medical sciences.