58 research outputs found

    Exploring modularity in biological networks

    Network theoretical approaches have shaped our understanding of many different kinds of biological modularity. This essay makes the case that, to capture these contributions, it is useful to think about the role of network models in exploratory research. The overall point is that it is possible to provide a systematic analysis of the exploratory functions of network models in bioscientific research. Using two examples from molecular and developmental biology, I argue that the same modelling approach can often perform one or more exploratory functions, such as introducing new directions of research; offering a complementary set of concepts, methods, and algorithms for individuating important features of natural phenomena; generating proof-of-principle demonstrations and potential explanations for phenomena of interest; and enlarging the scope of certain research agendas. This article is part of the theme issue 'Unifying the essential concepts of biological networks: biological insights and philosophical foundations'.

    Long-Term Potentiation: One Kind or Many?

    Do neurobiologists aim to discover natural kinds? I address this question in this chapter via a critical analysis of the classification practices operative across the 43-year history of research on long-term potentiation (LTP). I argue that this history supports the idea that the structure of scientific practice surrounding LTP research has remained an obstacle to the discovery of natural kinds.

    Living instruments and theoretical terms: Xenografts as measurements in cancer research

    I discuss the relationship between theoretical terms and measuring devices using a very peculiar example from biomedical research: cancer transplantation models. I do so through two complementary comparisons. I first show how a historical case study can shed light on a similar case from contemporary biomedical research. But I also compare both to a paradigmatic case of measurement in the physical sciences, thermometry, which reveals some of the most relevant epistemological issues. The comparison offers arguments for the recent debate on the operational definition of cancer stem cells, and thereby suggests the relevance of a comparative approach in the history and philosophy of science.

    Philosophy of Science and the Replicability Crisis

    Replicability is widely taken to ground the epistemic authority of science. However, in recent years, important published findings in the social, behavioral, and biomedical sciences have failed to replicate, suggesting that these fields are facing a “replicability crisis.” For philosophers, the crisis should not be taken as bad news but as an opportunity to do work on several fronts, including conceptual analysis, history and philosophy of science, research ethics, and social epistemology. This article introduces philosophers to these discussions. First, I discuss precedents and evidence for the crisis. Second, I discuss methodological, statistical, and social-structural factors that have contributed to the crisis. Third, I focus on the philosophical issues raised by the crisis. Finally, I discuss proposed solutions and highlight the gaps that philosophers could focus on.

    Psychoneural Isomorphism: From Metaphysics to Robustness

    At the beginning of the 20th century, Gestalt psychologists put forward the concept of psychoneural isomorphism, which was meant to replace Fechner’s obscure notion of psychophysical parallelism and provide a heuristic that may facilitate the search for the neural correlates of the mind. However, the concept has generated much confusion in the debate, and today its role is still unclear. In this contribution, I will attempt a little conceptual spadework in clarifying the concept of psychoneural isomorphism, focusing exclusively on conscious visual perceptual experience and its neural correlates. First, I will outline the history of the concept and its alleged metaphysical and epistemic roles. Then, I will clarify the nature of isomorphism and rule out its metaphysical role. Finally, I will review some epistemic roles of the concept, zooming in on the work of Jean Petitot, and suggest that it does not play a relevant heuristic role. I conclude by suggesting that psychoneural isomorphism might be an indicator of robustness for certain mathematical descriptions of perceptual content.

    Effects of Anacetrapib in Patients with Atherosclerotic Vascular Disease

    BACKGROUND: Patients with atherosclerotic vascular disease remain at high risk for cardiovascular events despite effective statin-based treatment of low-density lipoprotein (LDL) cholesterol levels. The inhibition of cholesteryl ester transfer protein (CETP) by anacetrapib reduces LDL cholesterol levels and increases high-density lipoprotein (HDL) cholesterol levels. However, trials of other CETP inhibitors have shown neutral or adverse effects on cardiovascular outcomes. METHODS: We conducted a randomized, double-blind, placebo-controlled trial involving 30,449 adults with atherosclerotic vascular disease who were receiving intensive atorvastatin therapy and who had a mean LDL cholesterol level of 61 mg per deciliter (1.58 mmol per liter), a mean non-HDL cholesterol level of 92 mg per deciliter (2.38 mmol per liter), and a mean HDL cholesterol level of 40 mg per deciliter (1.03 mmol per liter). The patients were assigned to receive either 100 mg of anacetrapib once daily (15,225 patients) or matching placebo (15,224 patients). The primary outcome was the first major coronary event, a composite of coronary death, myocardial infarction, or coronary revascularization. RESULTS: During the median follow-up period of 4.1 years, the primary outcome occurred in significantly fewer patients in the anacetrapib group than in the placebo group (1640 of 15,225 patients [10.8%] vs. 1803 of 15,224 patients [11.8%]; rate ratio, 0.91; 95% confidence interval, 0.85 to 0.97; P=0.004). The relative difference in risk was similar across multiple prespecified subgroups. At the trial midpoint, the mean level of HDL cholesterol was higher by 43 mg per deciliter (1.12 mmol per liter) in the anacetrapib group than in the placebo group (a relative difference of 104%), and the mean level of non-HDL cholesterol was lower by 17 mg per deciliter (0.44 mmol per liter), a relative difference of -18%. There were no significant between-group differences in the risk of death, cancer, or other serious adverse events. CONCLUSIONS: Among patients with atherosclerotic vascular disease who were receiving intensive statin therapy, the use of anacetrapib resulted in a lower incidence of major coronary events than the use of placebo. (Funded by Merck and others; Current Controlled Trials number, ISRCTN48678192; ClinicalTrials.gov number, NCT01252953; and EudraCT number, 2010-023467-18.)
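
    As a quick sanity check on the figures reported above, the rate ratio and its confidence interval can be approximated from the stated event counts. The Python sketch below uses standard formulas for the ratio of two binomial proportions; this is an approximation of my own, not the trial's method, since the published estimates come from a time-to-event analysis.

    import math

    events_a, n_a = 1640, 15225   # anacetrapib group: events, patients
    events_p, n_p = 1803, 15224   # placebo group: events, patients

    risk_a = events_a / n_a       # ~0.108 (10.8%)
    risk_p = events_p / n_p       # ~0.118 (11.8%)
    ratio = risk_a / risk_p       # ~0.91, matching the reported rate ratio

    # Approximate 95% CI, computed on the log scale for a ratio of proportions
    se = math.sqrt(1/events_a - 1/n_a + 1/events_p - 1/n_p)
    lo = math.exp(math.log(ratio) - 1.96 * se)   # ~0.85
    hi = math.exp(math.log(ratio) + 1.96 * se)   # ~0.97

    print(f"ratio = {ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")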

    Operationism, Experimentation, and Concept Formation

    According to a common reading (and criticism), the doctrine of operationism purports to define the subject matter under investigation in terms of the methods of investigation. While such a procedure would be circular, and as such untenable, I argue that we have to distinguish between two questions that are frequently conflated in discussions of operationism: (a) the issue of how to operationalize a scientific question (i.e., how to pose it in a way that makes it amenable to scientific testing), and (b) the issue of what constitutes the conditions of application of a scientific term. I claim that it was the former question that was of foremost interest to both early and contemporary operationists in psychology. In my dissertation I argue that this aspect of operationism highlights an interesting problem: what assumptions about the subject matter have to be presupposed in order to be able to operationalize a question about it? This question is analyzed both by means of historical case studies (which give accounts of the operationist positions of the psychologists S. S. Stevens, E. C. Tolman, and C. Hull) and by means of a contemporary case study. In my contemporary case study, I discuss the emergence of the concept of “implicit memory” in cognitive psychology and cognitive neuropsychology within the last 20 years. I show that a large part of the research on implicit memory is concerned with the design of experiments that are aimed at giving an adequate empirical description of the phenomenon. These experiments, in turn, have to rely on what I call “guiding assumptions” about the phenomenon. Operationalizing questions about the phenomenon of implicit memory importantly involves providing interpretations for such guiding assumptions. I argue that my construal of operationism raises interesting questions about the relationship between theory and observations during the process of scientific concept formation. While it has become a commonplace within philosophy that all observations are “theory-laden”, very little has been written about what this actually means for specific scientific contexts. Using my contemporary case study as a point of departure, I develop a taxonomy of the ways in which observations may be said to be theory-laden. I use this taxonomy to re-evaluate problems that have traditionally been assumed to follow from theory-ladenness, such as the problem of underdetermination of theory by evidence. This philosophical concern, I claim, corresponds to the scientific concern with avoiding experimental artifacts. I provide an analysis of the notion of experimental artifacts in terms of my account of theory-ladenness.