    Integrating computation into the mechanistic hierarchy in the cognitive and neural sciences

    It is generally accepted that, in the cognitive sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphic mapping relation. The mechanistic relation, however, is one of part/whole: the explanans in a mechanistic explanation are components of the explanandum phenomenon. Moreover, each component at one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent properties that implement them. How, then, do the computational and implementational properties integrate to create the mechanistic hierarchy? After explicating the general problem (section 2), we further demonstrate it through a concrete example from cognitive neuroscience, reinforcement learning (sections 3 and 4). We then examine two possible solutions (section 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanism sketches. On the other solution, there are two separate hierarchies, one computational and one implementational, related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how either solution fits with the view that computational explanations are full-fledged mechanistic explanations. Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (section 6).
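
    To make the contrast concrete, here is a minimal sketch of the implementation relation described above: a many-to-one mapping from the states of a toy "physical" system onto the states of a simple abstract automaton, under which the physical dynamics mirror the automaton's transitions. The automaton, the physical states, and the mapping phi are all invented for illustration; they are not drawn from the paper.

    ```python
    # Toy abstract automaton: two computational states, one input symbol 'a'.
    # delta(state, input) -> next state
    delta = {("S0", "a"): "S1", ("S1", "a"): "S0"}

    # A toy "physical" system with four states and its own dynamics.
    phys_step = {("v_low", "a"): "v_high", ("v_high", "a"): "v_low",
                 ("w_low", "a"): "w_high", ("w_high", "a"): "w_low"}

    # The implementation mapping phi: physical states -> computational states.
    # Note that it is many-to-one (several physical states realize the same
    # computational state) and is not a part/whole relation.
    phi = {"v_low": "S0", "w_low": "S0", "v_high": "S1", "w_high": "S1"}

    # Homomorphism check: mapping then computing equals evolving then mapping.
    for p in phi:
        assert phi[phys_step[(p, "a")]] == delta[(phi[p], "a")]
    print("phi is a homomorphism from the physical dynamics to the automaton")
    ```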

    Manipulation is key – On why non-mechanistic explanations in the cognitive sciences also describe relations of manipulation and control

    A popular view presents explanations in the cognitive sciences as causal or mechanistic and argues that an important feature of such explanations is that they allow us to manipulate and control the explanandum phenomena. Nonetheless, whether there can be explanations in the cognitive sciences that are neither causal nor mechanistic is still under debate. Another prominent view suggests that both causal and non-causal relations of counterfactual dependence can be explanatory, but this view is open to the criticism that it is unclear how to distinguish explanatory from non-explanatory relations. In this paper, I draw from both views and suggest that, in the cognitive sciences, relations of counterfactual dependence that allow manipulation and control can be explanatory even when they are neither causal nor mechanistic. Furthermore, the ability to allow manipulation can determine whether non-causal counterfactual dependence relations are explanatory. I present a preliminary framework for manipulation relations that includes some non-causal relations, and I use two examples from the cognitive sciences to show how this framework distinguishes explanatory from non-explanatory non-causal relations. The proposed framework suggests that, in the cognitive sciences, causal and non-causal relations are subject to the same criterion of explanatory value, namely whether or not they allow manipulation and control.

    Can neuroscientists ask the wrong questions? On why etiological considerations are essential when modeling cognition

    It is common in machine-learning research today for scientists to design and train models to perform cognitive capacities, such as object classification, reinforcement learning, navigation, and more. Neuroscientists compare the processes of these models with neuronal activity, with the purpose of learning about computations in the brain. These machine-learning models are constrained only by the task they must perform. Therefore, it is a worthwhile scientific finding that the workings of these models are similar to neuronal activity, as several prominent papers have reported. This is a promising method for understanding cognition. However, I argue that, to the extent that this method’s aim is to explain how cognitive capacities are performed, it is likely to succeed only when the capacities modelled with machine-learning algorithms are the result of a distinct evolutionary or developmental process.
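
    As a rough illustration of the comparison method described here, the following sketch correlates a model's internal activations with (synthetic) neural recordings across a shared stimulus set, using representational similarity analysis, one common way such model-to-brain comparisons are made. All data and dimensions are fabricated; this is not the procedure of any specific paper discussed above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_stimuli, n_units, n_neurons = 50, 20, 30

    # Synthetic stand-ins: a model layer's activations per stimulus, and
    # "neural recordings" generated as a noisy mixture of those activations.
    model_acts = rng.normal(size=(n_stimuli, n_units))
    neural = model_acts @ rng.normal(size=(n_units, n_neurons))
    neural += 0.5 * rng.normal(size=neural.shape)

    def rdm(x):
        """Representational dissimilarity matrix: 1 - stimulus-by-stimulus correlation."""
        return 1.0 - np.corrcoef(x)

    # Compare the model's and the brain's stimulus geometry, off-diagonal only.
    iu = np.triu_indices(n_stimuli, k=1)
    similarity = np.corrcoef(rdm(model_acts)[iu], rdm(neural)[iu])[0, 1]
    print(f"model-brain RDM correlation: {similarity:.2f}")
    ```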

    Do retinal neurons also represent somatosensory inputs? On why neuronal responses are not sufficient to determine what neurons do

    How does neuronal activity give rise to cognitive capacities? To address this question, neuroscientists hypothesize about what neurons ‘represent’, ‘encode’, or ‘compute’, and test these hypotheses empirically. This process is similar to the assessment of hypotheses in other fields of science and, as such, is subject to the same limitations and difficulties that have been discussed at length by philosophers of science. In this paper, we highlight an additional difficulty in the empirical assessment of hypotheses that is unique to the cognitive sciences. We argue that, unlike in other scientific fields, comparing hypotheses according to the extent to which they explain or predict empirical data can lead to absurd results. Other considerations, which are perhaps more subjective, must be taken into account. We focus on one such consideration: the purposeful function of the neurons as part of a biological system. We believe that progress in neuroscience critically depends on properly addressing this difficulty.
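
    The worry raised in this abstract can be made concrete with a small simulation: below, a decoder recovers a "somatosensory" variable from purely light-driven, retina-like responses with high accuracy, simply because that variable happens to correlate with the light stimulus. The data and decoder are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    light = rng.normal(size=200)                        # stimulus the neurons track
    touch = 0.9 * light + 0.3 * rng.normal(size=200)    # correlated "somatosensory" input
    responses = np.outer(light, rng.normal(size=10))    # responses driven by light only
    responses += 0.2 * rng.normal(size=responses.shape)

    # Fit a linear decoder for the touch variable by least squares.
    w, *_ = np.linalg.lstsq(responses, touch, rcond=None)
    r = np.corrcoef(responses @ w, touch)[0, 1]
    print(f"decoding accuracy for 'touch' from light-driven neurons: r = {r:.2f}")
    # The high r reflects the light-touch correlation, not somatosensory coding,
    # which is why responses alone cannot settle what the neurons represent.
    ```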

    Explanatory Integration, Computational Phenotypes and Dimensional Psychiatry. The case of alcohol use disorder

    We compare three theoretical frameworks for pursuing explanatory integration in psychiatry: a new dimensional framework grounded in the notion of a computational phenotype, a mechanistic framework, and a network-of-symptoms framework. Considering the phenomenon of alcoholism, we argue that the dimensional framework is the best of the three for effectively integrating computational and mechanistic explanations with phenomenological analyses.

    Analysis of influenza and RSV dynamics in the community using a 'local transmission zone' approach

    Understanding the dynamics of pathogen spread within urban areas is critical for the effective prevention and containment of communicable diseases. At these relatively small geographic scales, short-distance interactions and tightly knit sub-networks dominate the dynamics of pathogen transmission; yet the effective boundaries of these micro-scale groups are generally not known and often ignored. Using clinical test results from hospital-admitted patients, we analyze the spatio-temporal distribution of influenza-like illness (ILI) in the city of Jerusalem over a period of three winter seasons. We demonstrate that this urban area is not a single, perfectly mixed ecology, but is in fact composed of a set of more basic, relatively independent pathogen transmission units, which we term Local Transmission Zones (LTZs). By identifying these LTZs, and by using the dynamic pathogen-content information contained within them, we are able to differentiate between disease causes at the individual-patient level, often with near-perfect predictive accuracy.
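
    As a schematic of the zoning idea, the sketch below clusters synthetic case locations into spatially coherent groups and summarizes each group's pathogen mix, the kind of local information the abstract describes. The clustering method (DBSCAN), the coordinates, and the pathogen shares are all invented for illustration; the paper's actual LTZ procedure may differ.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(2)

    # Synthetic cases around three neighborhood centers (coordinates in km),
    # each with a different hypothetical influenza-vs-RSV mix.
    centers = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
    flu_probs = [0.9, 0.5, 0.2]
    coords, pathogen = [], []
    for c, p in zip(centers, flu_probs):
        coords.append(np.array(c) + 0.5 * rng.normal(size=(40, 2)))
        pathogen.append(rng.choice(["influenza", "RSV"], size=40, p=[p, 1 - p]))
    coords, pathogen = np.vstack(coords), np.concatenate(pathogen)

    # Candidate transmission zones = dense spatial clusters of cases.
    zones = DBSCAN(eps=1.0, min_samples=5).fit_predict(coords)

    # Per-zone pathogen content: the local signal used to predict disease cause.
    for z in sorted(set(zones) - {-1}):
        mask = zones == z
        share = np.mean(pathogen[mask] == "influenza")
        print(f"zone {z}: {mask.sum()} cases, influenza share = {share:.2f}")
    ```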