    Network Representation and Complex Systems

    In this article, network science is discussed from a methodological perspective, and two central theses are defended. The first is that network science exploits the very properties that make a system complex. Rather than using idealization techniques to strip those properties away, as is standard practice in other areas of science, network science brings them to the fore and uses them to furnish new forms of explanation. The second thesis is that network representations are particularly helpful in explaining the properties of non-decomposable systems. Where part-whole decomposition is not possible, network science provides a much-needed alternative method of compressing information about the behavior of complex systems, and does so without succumbing to problems associated with combinatorial explosion. The article concludes with a comparison between the uses of network representation analyzed in the main discussion and an entirely distinct use of network representation that has recently been discussed in connection with mechanistic modeling.

    Mechanistic and topological explanations: an introduction

    In the last twenty years or so, since the publication of a seminal paper by Watts and Strogatz (1998), an interest in topological explanations has spread like wildfire over many areas of science, e.g. ecology, evolutionary biology, medicine, and cognitive neuroscience. The aim of this special issue is to discuss the relationship between mechanistic and topological approaches to explanation and their prospects.

    Minimal structure explanations, scientific understanding and explanatory depth

    In this paper, I outline a heuristic for thinking about the relation between explanation and understanding that can be used to capture various levels of “intimacy” between them. I argue that the level of complexity in the structure of explanation is inversely proportional to the level of intimacy between explanation and understanding, i.e. the more complexity, the less intimacy. I further argue that the level of complexity in the structure of explanation affects explanatory depth in a similar way to the intimacy between explanation and understanding, i.e. the less complexity, the greater the explanatory depth, and vice versa.

    Exploring modularity in biological networks

    Network theoretical approaches have shaped our understanding of many different kinds of biological modularity. This essay makes the case that, to capture these contributions, it is useful to think about the role of network models in exploratory research. The overall point is that it is possible to provide a systematic analysis of the exploratory functions of network models in bioscientific research. Using two examples from molecular and developmental biology, I argue that often the same modelling approach can perform one or more exploratory functions, such as introducing new directions of research; offering a complementary set of concepts, methods and algorithms for individuating important features of natural phenomena; generating proof-of-principle demonstrations and potential explanations for phenomena of interest; and enlarging the scope of certain research agendas. This article is part of the theme issue 'Unifying the essential concepts of biological networks: biological insights and philosophical foundations'.

    Integrating computation into the mechanistic hierarchy in the cognitive and neural sciences

    It is generally accepted that, in the cognitive sciences, there are both computational and mechanistic explanations. We ask how computational explanations can be integrated into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphic mapping relation. The mechanistic relation, however, is that of part/whole; the explanans in a mechanistic explanation are components of the explanandum phenomenon. Moreover, each component at one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent properties that implement them. How, then, do the computational and implementational properties integrate to create the mechanistic hierarchy? After explicating the general problem (section 2), we further demonstrate it through a concrete example from cognitive neuroscience, reinforcement learning (sections 3 and 4). We then examine two possible solutions (section 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanism sketches. On the other solution, there are two separate hierarchies, one computational and the other implementational, which are related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations. Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (section 6).

    Models and Mechanisms in Network Neuroscience

    This paper considers the way mathematical and computational models are used in network neuroscience to deliver mechanistic explanations. Two case studies are considered: recent work on klinotaxis in Caenorhabditis elegans, and a longstanding research effort on the network basis of schizophrenia in humans. These case studies illustrate the various ways in which network, simulation and dynamical models contribute to the aim of representing and understanding network mechanisms in the brain, and thus of delivering mechanistic explanations. After outlining this mechanistic construal of network neuroscience, two concerns are addressed. In response to the concern that functional network models are non-explanatory, it is argued that functional network models are in fact explanatory mechanism sketches. In response to the concern that models which emphasize a network’s organization over its composition do not explain mechanistically, it is argued that this emphasis is both appropriate and consistent with the principles of mechanistic explanation. What emerges is an improved understanding of the ways in which mathematical and computational models are deployed in network neuroscience, as well as an improved conception of mechanistic explanation in general.

    Discovering Brain Mechanisms Using Network Analysis and Causal Modeling

    Mechanist philosophers have examined several strategies scientists use for discovering causal mechanisms in neuroscience. Findings about the anatomical organization of the brain play a central role in several such strategies. Little attention has been paid, however, to the use of network analysis and causal modeling techniques for mechanism discovery. In particular, mechanist philosophers have not explored whether and how these strategies incorporate information about the anatomical organization of the brain. This paper clarifies these issues in the light of the distinction between structural, functional and effective connectivity. Specifically, we examine two quantitative strategies currently used for causal discovery from functional neuroimaging data: dynamic causal modeling and probabilistic graphical modeling. We show that dynamic causal modeling uses findings about the brain’s anatomical organization to improve the statistical estimation of parameters in an already specified causal model of the target brain mechanism. Probabilistic graphical modeling, in contrast, makes no appeal to the brain’s anatomical organization, but lays bare the conditions under which correlational data suffice to license reliable inferences about the causal organization of a target brain mechanism. The question of whether findings about the anatomical organization of the brain can and should constrain the inference of causal networks remains open, but we show how the tools supplied by graphical modeling methods help in addressing it.