
    Development of prototype abstraction and exemplar memorization

    We present a connectionist model of concept learning that integrates prototype and exemplar effects and reconciles apparently conflicting findings on the development of these effects. Using sibling-descendant cascade-correlation networks, we found that prototype effects were more prominent at the beginning of training and decreased with further training. In contrast, exemplar effects steadily increased with learning. Both kinds of effects were also influenced by category structure: well-differentiated categories encouraged prototype abstraction, while poorly structured categories promoted exemplar memorization.
    Irina Baetu and Thomas R. Shultz
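    The prototype/exemplar contrast the abstract describes can be illustrated with a toy similarity computation. This is a minimal sketch under assumed conditions (binary feature vectors, squared-distance similarity), not the sibling-descendant cascade-correlation networks used in the paper; all names and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: five training items generated by distorting a
# category prototype (~20% of features flipped per item).
prototype = np.ones(6)                         # idealized category centre
flips = rng.random((5, 6)) < 0.2               # which features to distort
exemplars = np.where(flips, 0.0, prototype)    # five stored training items

probe = prototype                              # test the unseen prototype

# Prototype account: similarity to the abstracted central tendency.
proto_sim = -np.sum((probe - exemplars.mean(axis=0)) ** 2)

# Exemplar account: average similarity to each stored training item.
exem_sim = -np.mean(np.sum((probe - exemplars) ** 2, axis=1))

# For the prototype probe, similarity to the centre is at least as high
# as the average similarity to the stored items (Jensen's inequality),
# which is why prototype probes can be classified well without ever
# having been seen.
print(proto_sim >= exem_sim)
```

    The contrast between the two accounts then turns on which probes each similarity rule favours: the prototype rule privileges the central tendency, while the exemplar rule privileges old training items.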

    Has classical gene position been practically reduced?

    One of the defining features of the classical gene was its position (a band in the chromosome). In molecular genetics, positions are defined instead as nucleotide numbers, and there is no clear correspondence with their classical counterparts. However, the classical gene position did not simply disappear with the development of the molecular approach, but survived in the lab, associated with different genetic practices. The survival of classical gene position would illustrate Waters’ view about the practical persistence of the genetic approach beyond reductionist and anti-reductionist claims. We show instead that at the level of laboratory practices there are also reductive processes, operating through the rise and fall of different techniques. Molecular markers made the concept of classical gene position practically dispensable, leading us to rethink whether it had any causal role or was just a mere heuristic.

    Are preventive and generative causal reasoning symmetrical? Extinction and competition

    We tested whether preventive and generative reasoning processes are symmetrical by keeping the training and testing of preventive (inhibitory) and generative (excitatory) causal cues as similar as possible. In Experiment 1, we extinguished excitors and inhibitors in a blocking design, in which each extinguished cause was presented in compound with a novel cause, with the same outcome occurring following the compound and following the novel cause alone. With this novel extinction procedure, the inhibitory cues seemed more likely to lose their properties than the excitatory cues. In Experiment 2, we investigated blocking of excitatory and inhibitory causes and found similar blocking effects. Taken together, these results suggest that the acquisition of excitation and inhibition is similar, but that inhibition is more liable to extinguish with our extinction procedure. In addition, we used a variable outcome, which enabled us to test the predictions of an inferential reasoning account about what happens when the outcome level is at its minimum or maximum (De Houwer, Beckers, & Glautier, 2002). We discuss the predictions of this inferential account, Rescorla and Wagner’s (1972) model, and a connectionist model, the auto-associator.
    Irina Baetu & A. G. Baker
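    The Rescorla–Wagner (1972) model discussed in the abstract can be sketched in a few lines. The cue names, trial counts, and parameter values below are assumptions chosen for illustration; the sketch shows the model's core mechanism, error-driven updating, producing the blocking effect that both abstracts on this page revolve around.

```python
# Minimal Rescorla-Wagner sketch (illustrative parameters, not the
# experiments' actual design).
ALPHA_BETA = 0.3   # combined cue/outcome learning rate
LAMBDA = 1.0       # asymptote of learning when the outcome occurs

def rw_trial(V, cues, outcome):
    """One trial: update the associative strength of each presented cue
    in proportion to the shared prediction error."""
    error = outcome - sum(V[c] for c in cues)   # outcome minus prediction
    for c in cues:
        V[c] += ALPHA_BETA * error
    return V

V = {"A": 0.0, "X": 0.0}

# Phase 1: cue A alone is paired with the outcome, so A gains strength.
for _ in range(50):
    rw_trial(V, ["A"], LAMBDA)

# Phase 2: A and the novel cue X appear in compound. A already predicts
# the outcome, so the error term is near zero and X learns almost
# nothing: blocking.
for _ in range(50):
    rw_trial(V, ["A", "X"], LAMBDA)

print(round(V["A"], 2), round(V["X"], 2))
```

    Because the same error term drives both acquisition and extinction in this model, it predicts symmetrical loss of excitation and inhibition, which is one reason the asymmetry the abstract reports is theoretically interesting.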

    Blocking in human causal learning is affected by outcome assumptions manipulated through causal structure

    Additivity-related assumptions have been shown to modulate blocking in human causal learning. Typically, these assumptions are manipulated by means of pretraining phases (including exposure to different outcome magnitudes) or through explicit instructions. In two experiments, we used a different approach that involved neither pretraining nor instructional manipulations. Instead, we manipulated the causal structure in which the cues were embedded, thereby appealing directly to the participants’ prior knowledge about causal relations and how causes would add up to yield stronger outcomes. Specifically, in our "different-system" condition, the participants should assume that the outcomes would add up, whereas in our "same-system" condition, a ceiling effect would prevent such an assumption. Consistent with our predictions, Experiment 1 showed that, when two cues from separate causal systems were combined, the participants did expect a stronger outcome on compound trials, and blocking was found, whereas when the cues belonged to the same causal system, the participants did not expect a stronger outcome on compound trials, and blocking was not observed. The results were partially replicated in Experiment 2, in which this pattern was found when the cues were tested for the second time. This evidence supports the claim that prior knowledge about the nature of causal relations can affect human causal learning. In addition, the fact that we did not manipulate causal assumptions through pretraining renders the results hard to account for with associative theories of learning.

    Are Dynamic Mechanistic Explanations Still Mechanistic?

    A major type of explanation in biology consists of mechanistic explanations (e.g. Machamer et al. 2000, Kaplan and Craver 2011). The explanatory force of mechanisms is apparent in such typical cases as the functioning of an ion channel or the molecular activation of a receptor: it includes the specification of a model of the mechanism and the rehearsing of a causal story that tells how the explanandum phenomenon is produced by the mechanism. It is, however, much less clear how mechanisms explain in the case of complex and non-linear biomolecular networks, such as those that underlie the action of hormones and the regulation of genes. While dynamic mechanistic explanations have been proposed as an extension of mechanistic explanations (e.g. Bechtel and Abrahamsen 2010), we argue that the former depart from the latter in that they do not draw their explanatory force from a causal story but from the mathematical warrants they give that the explanandum phenomenon follows from a mathematical model. By analyzing the explanatory force of mechanistic explanation and of dynamic mechanistic explanation, we show that the two types of explanation can be construed as limit cases of a more general pattern of explanation, Causally Interpreted Model Explanations, which draws its explanatory force from a model, a causal interpretation that links the model to biological reality, and a mathematical derivation that links the model to the explanandum phenomenon.