    Manipulation is key – On why non-mechanistic explanations in the cognitive sciences also describe relations of manipulation and control

    A popular view presents explanations in the cognitive sciences as causal or mechanistic and argues that an important feature of such explanations is that they allow us to manipulate and control the explanandum phenomena. Nonetheless, whether there can be explanations in the cognitive sciences that are neither causal nor mechanistic is still under debate. Another prominent view suggests that both causal and non-causal relations of counterfactual dependence can be explanatory, but this view is open to the criticism that it is not clear how to distinguish explanatory from non-explanatory relations. In this paper, I draw on both views and suggest that, in the cognitive sciences, relations of counterfactual dependence that allow manipulation and control can be explanatory even when they are neither causal nor mechanistic. Furthermore, the ability to allow manipulation can determine whether non-causal counterfactual dependence relations are explanatory. I present a preliminary framework for manipulation relations that includes some non-causal relations, and I use two examples from the cognitive sciences to show how this framework distinguishes between explanatory and non-explanatory non-causal relations. The proposed framework suggests that, in the cognitive sciences, causal and non-causal relations are subject to the same criterion of explanatory value, namely, whether or not they allow manipulation and control.

    Can neuroscientists ask the wrong questions? On why etiological considerations are essential when modeling cognition

    It is common in machine-learning research today for scientists to design and train models to perform cognitive capacities such as object classification, reinforcement learning, navigation, and more. Neuroscientists compare the processes of these models with neuronal activity, with the purpose of learning about computations in the brain. These machine-learning models are constrained only by the task they must perform. It is therefore a worthwhile scientific finding that the workings of these models are similar to neuronal activity, as several prominent papers have reported. This is a promising method for understanding cognition. However, I argue that, to the extent that this method’s aim is to explain how cognitive capacities are performed, it is likely to succeed only when the capacities modelled with machine-learning algorithms are the result of a distinct evolutionary or developmental process.
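    To make the comparison practice described here concrete, below is a minimal, hypothetical sketch of one common way model activity is compared with neuronal activity, representational similarity analysis. The arrays, sizes, and variable names are stand-ins invented for illustration; in actual studies, model_activations would come from a trained network's hidden layer and neural_responses from recordings to the same stimuli.

```python
import numpy as np

# Synthetic stand-in data: rows are stimuli, columns are units/neurons.
rng = np.random.default_rng(0)
model_activations = rng.normal(size=(50, 128))  # 50 stimuli x 128 model units
neural_responses = rng.normal(size=(50, 64))    # 50 stimuli x 64 recorded neurons

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(rdm_a, rdm_b):
    """Compare two RDMs by correlating their upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

similarity = rsa_score(rdm(model_activations), rdm(neural_responses))
print(f"Model-brain representational similarity: {similarity:.3f}")
```

    With the random placeholder data the score hovers near zero; the kind of finding the abstract refers to is that representations from task-trained models match neural data far better than chance.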

    Integrating computation into the mechanistic hierarchy in the cognitive and neural sciences

    It is generally accepted that, in the cognitive sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphism, a structure-preserving mapping. The mechanistic relation, however, is that of part/whole; the explanans in a mechanistic explanation comprises components of the explanandum phenomenon. Moreover, each component at one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent properties that implement them. How, then, do the computational and implementational properties integrate to create the mechanistic hierarchy? After explicating the general problem (section 2), we demonstrate it further through a concrete example from cognitive neuroscience, reinforcement learning (sections 3 and 4). We then examine two possible solutions (section 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits the view that computational explanations are mechanism sketches. On the other solution, there are two separate hierarchies, one computational and the other implementational, related by the implementation relation. This picture fits the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit the view that computational explanations are full-fledged mechanistic explanations. Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (section 6).
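    The notion of implementation as a homomorphism can be illustrated with a small, hypothetical example. The two-state automaton, the voltage-level physical states, and the mapping below are all invented for illustration; accounts of implementation differ on the details, and for simplicity the mapping here runs from physical states onto computational states. The point is only the structure-preservation condition: mapping a physical state and then computing gives the same result as evolving the physical system and then mapping.

```python
# Abstract computational system: a two-state automaton with one input symbol.
comp_step = {("S0", "tick"): "S1", ("S1", "tick"): "S0"}

# Hypothetical implementing physical states (coarse-grained voltage levels)
# with their own transition structure.
phys_step = {("low", "pulse"): "high", ("high", "pulse"): "low"}

# The implementation mapping: physical states onto computational states,
# physical inputs onto computational inputs.
state_map = {"low": "S0", "high": "S1"}
input_map = {"pulse": "tick"}

def is_implementation():
    """Homomorphism condition: the mapping commutes with the transitions."""
    return all(
        state_map[phys_step[(p, i)]] == comp_step[(state_map[p], input_map[i])]
        for p in state_map
        for i in input_map
    )

print(is_implementation())  # True: the mapping preserves transition structure
```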

    Do retinal neurons also represent somatosensory inputs? On why neuronal responses are not sufficient to determine what neurons do

    How does neuronal activity give rise to cognitive capacities? To address this question, neuroscientists hypothesize about what neurons ‘represent’, ‘encode’, or ‘compute’, and test these hypotheses empirically. This process is similar to the assessment of hypotheses in other fields of science and, as such, is subject to the same limitations and difficulties that philosophers of science have discussed at length. In this paper, we highlight an additional difficulty in the empirical assessment of hypotheses that is unique to the cognitive sciences. We argue that, unlike in other scientific fields, comparing hypotheses according to the extent to which they explain or predict empirical data can lead to absurd results. Other considerations, which are perhaps more subjective, must be taken into account. We focus on one such consideration: the purposeful function of the neurons as part of a biological system. We believe that progress in neuroscience critically depends on properly addressing this difficulty.
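    Below is a minimal, hypothetical sketch of the assessment practice at issue: scoring competing hypotheses about what a neuron encodes by how well each predicts held-out responses. All data, feature sets, and names are synthetic placeholders; the paper's point is precisely that this kind of score, on its own, is not sufficient to determine what neurons do.

```python
import numpy as np

# Synthetic placeholder data: two candidate feature sets ("hypotheses")
# and a simulated response that, by construction, depends on the first.
rng = np.random.default_rng(1)
n_trials = 200
visual_features = rng.normal(size=(n_trials, 5))         # hypothesis A regressors
somatosensory_features = rng.normal(size=(n_trials, 5))  # hypothesis B regressors
responses = visual_features @ rng.normal(size=5) + 0.5 * rng.normal(size=n_trials)

def predictive_r2(features, y, n_train=150):
    """Fit a linear encoding model on a training split, score it on held-out trials."""
    w, *_ = np.linalg.lstsq(features[:n_train], y[:n_train], rcond=None)
    pred = features[n_train:] @ w
    return 1.0 - np.var(y[n_train:] - pred) / np.var(y[n_train:])

print("visual hypothesis R^2:       ", round(predictive_r2(visual_features, responses), 3))
print("somatosensory hypothesis R^2:", round(predictive_r2(somatosensory_features, responses), 3))
```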
