Measuring autonomy and emergence via Granger causality
Concepts of emergence and autonomy are central to artificial life and related cognitive and behavioral sciences. However, quantitative and easy-to-apply measures of these phenomena are mostly lacking. Here, I describe quantitative and practicable measures for both autonomy and emergence, based on the framework of multivariate autoregression and specifically Granger causality. G-autonomy measures the extent to which knowing the past of a variable helps predict its future, as compared to predictions based on past states of external (environmental) variables. G-emergence measures the extent to which a process is both dependent upon and autonomous from its underlying causal factors. These measures are validated by application to agent-based models of predation (for autonomy) and flocking (for emergence). In the former, evolutionary adaptation enhances autonomy; the latter model illustrates not only emergence but also downward causation. I end with a discussion of relations among autonomy, emergence, and consciousness.
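The G-autonomy idea above can be illustrated with a minimal sketch: compare the residual variance of an autoregression that predicts a variable from its environment's past alone against one that also includes the variable's own past. This is a simplified one-lag illustration of the general approach, not the paper's exact formulation; the function names and the log-ratio form shown here are assumptions for demonstration.

```python
import numpy as np

def residual_variance(y, X):
    """Variance of OLS residuals when regressing y on X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.var(y - A @ beta)

def g_autonomy(x, env, lag=1):
    """One-lag sketch of a G-autonomy-style measure: log ratio of the
    residual variance predicting x[t] from env's past alone versus from
    env's past plus x's own past. Larger values mean x's own history
    carries predictive information beyond the environment."""
    y = x[lag:]
    env_past = env[:-lag].reshape(len(y), -1)
    own_past = x[:-lag].reshape(-1, 1)
    var_restricted = residual_variance(y, env_past)
    var_full = residual_variance(y, np.hstack([env_past, own_past]))
    return np.log(var_restricted / var_full)
```

Applied to a strongly autocorrelated process in a noisy environment, this ratio is clearly positive; for a white-noise process it stays near zero, since the environment's past (or nothing at all) predicts it just as well as its own past.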
The Execution of the Innocent
Radelet and Bedau discuss the continuing and regular incidence of American trial courts sentencing innocent defendants to death, which was one of the problems that gave rise to the ABA's moratorium on capital punishment.
A Functional Naturalism
I provide two arguments against value-free naturalism. Both are based on considerations concerning biological teleology. Value-free naturalism is the thesis that both (1) everything is, at least in principle, under the purview of the sciences and (2) all scientific facts are purely non-evaluative. First, I advance a counterexample to any analysis on which natural selection is necessary to biological teleology. This should concern the value-free naturalist, since most value-free analyses of biological teleology appeal to natural selection. My counterexample is unique in that it is likely to actually occur. It concerns the creation of synthetic life. Recent developments in synthetic biology suggest scientists will eventually be able to develop synthetic life. Such life, however, would not have any of its traits naturally selected for. Second, I develop a simple argument that biological teleology is a scientific but value-laden notion. Consequently, value-free naturalism is false. I end with some concluding remarks on the implications for naturalism, the thesis that (1). Naturalism may be salvaged only if we reject (2). (2) is a dogma that unnecessarily constrains our conception of the sciences. Only a naturalism that recognizes value-laden notions as scientifically respectable can be true. Such a naturalism is a functional naturalism.
A Retributive Theory of the Pardoning Power?
During the past two decades, the retributive theory of punishment has made remarkable strides in recapturing the affections of penologists. The story has been told elsewhere and need not be reviewed here. For philosophers, if not for others interested in the theory and practice of punishment, a retributive approach holds a double attraction.
Beyond Biobricks: Synthesizing Synergistic Biochemical Systems from the Bottom-up
Engineers who attempt to discover and optimize the behavior of complex biochemical systems face a dauntingly difficult task. This is especially true if the systems are governed by multiple qualitative and quantitative variables that have non-linear response functions and that interact synergistically. The synthetic biology community has responded to this difficulty by promoting the use of standard biological parts called BioBricks, which are supposed to make biology into traditional engineering and enable engineers to program living organisms in the same way a computer scientist can program a computer. But the BioBricks research program faces daunting hurdles, because the nonlinearity and synergy found throughout biochemical systems generate many unpredictable emergent properties. This talk describes an alternative vision of how to engineer complex biochemical systems, according to which we would refashion engineering to fit biology (rather than the other way around). The resulting method (termed Predictive Design Technology or PDT) is a robot- and computer-driven automatic and autonomous implementation of traditional Edisonian science. The PDT method is described and illustrated in application to a number of practical biochemical design tasks, including (1) optimizing combination drug therapies, (2) optimizing cargo capacity of liposomes that self-assemble from complex amphiphile mixtures, (3) optimizing the liposomal formulation of insoluble drugs, and (4) optimizing in vitro protein expression.
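The automated Edisonian loop described above can be sketched in miniature: propose a batch of candidate mixtures, "assay" each one, keep any improvement, and iterate. Everything here is an illustrative assumption, not the talk's actual PDT system: the `assay` function stands in for a robotic measurement, and the synthetic objective below is just a toy response with a synergistic cross-term.

```python
import random

def edisonian_optimize(assay, dim, rounds=30, batch=10, step=0.1, seed=1):
    """Minimal batch trial-and-error loop: perturb the current best
    candidate, evaluate each perturbation, and keep any improvement.
    Components are kept in [0, 1], read as mixture fractions."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(dim)]
    best_score = assay(best)
    for _ in range(rounds):
        for _ in range(batch):
            cand = [min(1.0, max(0.0, v + rng.gauss(0, step))) for v in best]
            score = assay(cand)
            if score > best_score:
                best, best_score = cand, score
    return best, best_score

def synergy_assay(x):
    """Toy nonlinear response: the two components only pay off together
    (the a*b cross-term), mimicking a synergistic interaction."""
    a, b = x
    return a * b - 0.5 * (a - 0.6) ** 2 - 0.5 * (b - 0.6) ** 2
```

The point of the sketch is that such a loop needs no predictive model of the system: it only requires the ability to run many experiments quickly, which is exactly what robot-driven automation supplies.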