
    Dielectric breakdown induced by picosecond laser pulses

    The damage thresholds of transparent optical materials were investigated. Single picosecond pulses at 1.06 microns, 0.53 microns and 0.35 microns were obtained from a mode-locked Nd:YAG oscillator-amplifier-frequency-multiplier system. The pulses were Gaussian in space and time and permitted the determination of breakdown thresholds with a reproducibility of 15%. It was shown that the breakdown thresholds are characteristic of the bulk material; the materials studied included nine alkali halides, five different laser host materials, KDP, quartz, sapphire and calcium fluoride. The extension of the damage data to the ultraviolet is significant, because some indication was obtained that two- and three-photon absorption processes begin to play a role in determining the threshold. Throughout the visible region of the spectrum the threshold is still an increasing function of frequency, indicating that avalanche ionization is the dominant factor in determining the breakdown threshold. This was confirmed by a detailed study of the damage morphology, examined with a high-resolution microscope just above the threshold. The influence of self-focusing is discussed, and evidence for beam distortion below the power threshold for complete self-focusing is presented, confirming the theory of Marburger.
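
    A minimal sketch for context: the coefficient and symbols below are the commonly quoted Gaussian-beam values from Marburger's treatment, not figures taken from this study. Whole-beam self-focusing sets in above a critical power determined by the vacuum wavelength and the linear and nonlinear refractive indices.

    % Critical power for whole-beam self-focusing of a Gaussian beam (Marburger):
    % \lambda_0 = vacuum wavelength, n_0 = linear index, n_2 = nonlinear index.
    P_{\mathrm{cr}} \approx \frac{3.77\,\lambda_0^{2}}{8\pi\, n_0 n_2}
    % Beam distortion can already appear at powers below P_cr, which is the
    % regime the damage measurements above probe.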

    Mechanistic unity of the predictive mind

    It is often recognized that cognitive science employs a diverse explanatory toolkit. It has also been argued that cognitive scientists should embrace this explanatory diversity rather than pursue the search for some grand unificatory framework or theory. This pluralist stance dovetails with the mechanistic view of cognitive-scientific explanation. However, one recently proposed theory – based on the idea that the brain is a predictive engine – opposes the spirit of pluralism by unapologetically wearing its unificatory ambitions on its sleeve. In this paper, my aim is to investigate those pretensions in order to elucidate what sort of unification is on offer. I challenge the idea that explanatory unification of cognitive science follows from the Free Energy Principle. I claim that if the predictive story is to provide an explanatory unification, it is rather by proposing that many distinct cognitive mechanisms fall under the same functional schema that pertains to prediction error minimization. Seen this way, the brain is not simply a predictive mechanism – it is a collection of predictive mechanisms. I also pursue a more general aim of investigating the value of unificatory power for mechanistic explanations. I argue that even though unification is not an absolute evaluative criterion for mechanistic explanations, it may play an epistemic role in evaluating the credibility of an explanation relative to its direct competitors.

    Long-Term Potentiation: One Kind or Many?

    Do neurobiologists aim to discover natural kinds? I address this question in this chapter via a critical analysis of classification practices operative across the 43-year history of research on long-term potentiation (LTP). I argue that this history supports the idea that the structure of scientific practice surrounding LTP research has remained an obstacle to the discovery of natural kinds.

    Potentiality in Biology

    We take the potentialities that are studied in the biological sciences (e.g., totipotency) to be an important subtype of biological dispositions. The goal of this paper is twofold: first, we want to provide a detailed understanding of what biological dispositions are. We claim that two features are essential for dispositions in biology: the importance of the manifestation process and the diversity of conditions that need to be satisfied for the disposition to be manifested. Second, we demonstrate that the concept of a disposition (or potentiality) is a very useful tool for the analysis of explanatory practice in the biological sciences. On the one hand, it allows an in-depth analysis of the nature and diversity of the conditions under which biological systems display specific behaviors. On the other hand, the concept of a disposition may serve a unificatory role in the philosophy of the natural sciences, since it captures not only the explanatory practice of biology but that of all the natural sciences. Towards the end we briefly come back to the notion of a potentiality in biology.

    What can polysemy tell us about theories of explanation?

    Philosophical accounts of scientific explanation are broadly divided into ontic and epistemic views. This paper explores the idea that the lexical ambiguity of the verb "to explain" and its nominalisation supports an ontic conception of explanation. I analyse one argument which challenges this strategy by criticising the claim that explanatory talk is lexically ambiguous (pp. 375–394, 2012). I propose that the linguistic mechanism of transfer of meaning (pp. 109–132, 1995) provides a better account of the lexical alternations that figure in the systematic polysemy of explanatory talk, and evaluate the implications of this proposal for the debate between ontic and epistemic conceptions of scientific explanation.

    Complexity

    This is a contribution on complexity to the Encyclopedia of Systems Biology.

    Functional kinds: a skeptical look

    The functionalist approach to kinds has suffered recently due to its association with law-based approaches to induction and explanation. Philosophers of science increasingly view nomological approaches as inappropriate for the special sciences, such as psychology and biology, which has led to a surge of interest in approaches to natural kinds that are more obviously compatible with mechanistic and model-based methods, especially homeostatic property cluster theory. But can the functionalist approach to kinds be weaned off its dependency on laws? Dan Weiskopf has recently offered a reboot of the functionalist program by replacing its nomological commitments with a model-based approach more closely derived from practice in psychology. Roughly, Weiskopf holds that the natural kinds of psychology will be the functional properties that feature in many empirically successful cognitive models, and that those properties need not be localized to parts of an underlying mechanism. I here skeptically examine the three modeling practices that Weiskopf thinks introduce such non-localizable properties: fictionalization, reification, and functional abstraction. In each case, I argue that recognizing functional properties introduced by these practices as autonomous kinds comes at a clear cost to those explanations’ counterfactual explanatory power. At each step, a tempting functionalist response is parochialism: to hold that the false or omitted counterfactuals fall outside the modeler’s explanatory aims, and so should not be counted against functional kinds. I conclude by noting the dangers this attitude poses to scientific disagreement, inviting functionalists to better articulate how the individuation conditions for functional kinds might outstrip the perspective of a single modeler.

    Teleology and Realism in Leibniz's Philosophy of Science

    This paper argues for an interpretation of Leibniz’s claim that physics requires both mechanical and teleological principles as a view regarding the interpretation of physical theories. Granting that Leibniz’s fundamental ontology remains non-physical, or mentalistic, it argues that teleological principles nevertheless ground a realist commitment about mechanical descriptions of phenomena. The empirical results of the new sciences, according to Leibniz, have genuine truth conditions: there is a fact of the matter about the regularities observed in experience. Taking this stance, however, requires bringing non-empirical reasons to bear upon mechanical causal claims. This paper first evaluates extant interpretations of Leibniz’s thesis that there are two realms in physics as describing parallel, self-sufficient sets of laws. It then examines Leibniz’s use of teleological principles to interpret scientific results in the context of his interventions in debates in seventeenth-century kinematic theory and in the teaching of Copernicanism. Leibniz’s use of the principle of continuity and the principle of simplicity, for instance, reveals an underlying commitment to the truth-aptness, or approximate truth-aptness, of the new natural sciences. The paper concludes with a brief remark on the relation between metaphysics, theology, and physics in Leibniz.