The Computability-Theoretic Content of Emergence
In dealing with emergent phenomena, a common task is to identify useful descriptions of them in terms of the underlying atomic processes, and to extract enough computational content from these descriptions to enable predictions to be made. Generally, the underlying atomic processes are quite well understood and (with important exceptions) captured by mathematics from which it is relatively easy to extract algorithmic content. A widespread view is that the difficulty in describing transitions from algorithmic activity to the emergence associated with chaotic situations is simply a case of complexity outstripping computational resources and human ingenuity; or, alternatively, that phenomena transcending the standard Turing model of computation, if they exist, must necessarily lie outside the domain of classical computability theory. In this article we suggest that much of the current confusion arises from conceptual gaps and the lack of a suitably fundamental model within which to situate emergence. We examine the potential for placing emergent relations in a familiar context based on Turing's 1939 model for interactive computation over structures described in terms of reals. The explanatory power of this model is explored, formalising informal descriptions in terms of mathematical definability and invariance, and relating a range of basic scientific puzzles to results and intractable problems in computability theory.
Computational Natural Philosophy: A Thread from Presocratics through Turing to ChatGPT
Modern computational natural philosophy conceptualizes the universe in terms
of information and computation, establishing a framework for the study of
cognition and intelligence. Despite some critiques, this computational
perspective has significantly influenced our understanding of the natural
world, leading to the development of AI systems like ChatGPT based on deep
neural networks. Advancements in this domain have been facilitated by
interdisciplinary research, integrating knowledge from multiple fields to
simulate complex systems. Large Language Models (LLMs), such as ChatGPT,
represent this approach's capabilities, utilizing reinforcement learning with
human feedback (RLHF). Current research initiatives aim to integrate neural
networks with symbolic computing, introducing a new generation of hybrid
computational models.
Information Processing, Computation and Cognition
Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both, although others disagree vehemently. Yet different cognitive scientists use "computation" and "information processing" to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism and connectionism/computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates' empirical aspects.
The External Tape Hypothesis: a Turing machine based approach to cognitive computation
The symbol processing or "classical cognitivist" approach to mental computation suggests that the cognitive architecture operates rather like a digital computer. The components of the architecture are input, output and central systems. The input and output systems communicate with both the internal and external environments of the cognizer and transmit codes to and from the rule governed, central processing system which operates on structured representational expressions in the internal environment. The connectionist approach, by contrast, suggests that the cognitive architecture should be thought of as a network of interconnected neuron-like processing elements (nodes) which operates rather like a brain. Connectionism distinguishes input, output and central or "hidden" layers of nodes. Connectionists claim that internal processing consists not of the rule governed manipulation of structured symbolic expressions, but of the excitation and inhibition of activity and the alteration of connection strengths via message passing within and between layers of nodes in the network. A central claim of the thesis is that neither symbol processing nor connectionism provides an adequate characterization of the role of the external environment in cognitive computation. An alternative approach, called the External Tape Hypothesis (ETH), is developed which claims, on the basis of Turing's analysis of routine computation, that the Turing machine model can be used as the basis for a theory which includes the environment as an essential part of the cognitive architecture. The environment is thought of as the tape, and the brain as the control of a Turing machine. Finite state automata, Turing machines,
and universal Turing machines are described, including details of Turing's original universal machine construction. A short account of relevant aspects of the history of digital computation is followed by a critique of the symbol processing approach as it is construed by influential proponents such as Allen Newell and Zenon Pylyshyn among others. The External Tape Hypothesis is then developed as an alternative theoretical basis. In the final chapter, the ETH is combined with the notion of a self-describing Turing machine to provide the basis for an account of thinking and the development of internal representations.
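The ETH reading sketched above, with the environment as tape and the brain as control, can be illustrated with a minimal simulator. The increment machine below is a generic textbook example chosen for illustration, not Turing's universal construction or any machine from the thesis itself.

```python
# Minimal Turing machine with an explicitly external tape: the transition
# table plays the role of the ETH's "control" (the brain), while the tape
# dictionary stands in for the environment that is read and rewritten.
# The increment machine is an illustrative assumption, not from the thesis.

def run_tm(tape, transitions, state, head=0, blank="_", max_steps=1000):
    """Run a Turing machine over `tape`, a dict mapping position -> symbol.

    `transitions` maps (state, symbol) -> (written_symbol, move, next_state),
    with move in {-1, +1}. The machine stops in state "halt".
    """
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        written, move, state = transitions[(state, symbol)]
        tape[head] = written
        head += move
    return tape, state

# Increment a binary number written least-significant bit first:
# flip trailing 1s to 0, then the first 0 (or blank) to 1.
INCREMENT = {
    ("scan", "1"): ("0", +1, "scan"),   # propagate the carry
    ("scan", "0"): ("1", +1, "halt"),   # absorb the carry
    ("scan", "_"): ("1", +1, "halt"),   # extend the number onto blank tape
}

tape = {0: "1", 1: "1", 2: "0"}         # 011 LSB-first, i.e. 3
tape, state = run_tm(tape, INCREMENT, "scan")
# tape is now {0: "0", 1: "0", 2: "1"}, i.e. 4
```

The point of the sketch is that all persistent state lives in `tape`, outside the control: the "cognizer" here is just a finite transition table, exactly the division of labour the ETH exploits.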
Neural Attractors and Phonological Grammar
This volume collects three articles which constitute the bulk of my PhD research. The overarching theme of the volume is the role of attractors, a concept from dynamical systems theory, in the neural realization of phonological grammar.
The motivation for this line of inquiry begins with the claim that the study of language should provide some insight into the workings of the human mind/brain. Indeed this is one of few mantras shared by linguists of the seemingly irreconcilable "Generative" and "Cognitive" schools (e.g. Chomsky 2002; Lakoff 1988). Given this apparent consensus then, it is perhaps surprising that no breakthrough in our understanding of the brain can yet be attributed to some insight from the study of language.
An analysis and critique of this state of affairs is given by Poeppel & Embick (2005), who identify (amongst other things) that we currently have no way of relating the ontologies of linguistics and neuroscience. This Ontological Incommensurability Problem (OIP) can be resolved, they argue, by the use of a Linking Hypothesis, which spells out linguistic computations at the relevant level of algorithmic abstraction, such that the neuroscientist need only find the exact implementations of those algorithms in the brain. If such a hypothesis were sufficiently complete then it could, in principle, predict the kinds of neural configurations required for natural language processing, using linguistic theories as its starting point. In this way, we could finally realize the long sought-after goal of cashing in theories of language for understanding of the human brain. Simultaneously, a Linking Hypothesis also has the potential to unearth lower-level explanations for linguistic phenomena, for example where those explanations might depend on purely neurobiological notions (e.g. neuronal morphology, synaptic density, metabolic efficiency, etc.).
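The attractor notion at the heart of the volume can be made concrete with a toy Hopfield network, a standard example of neural attractor dynamics. The network and pattern below are illustrative assumptions, not the thesis's own model of phonological grammar: stored patterns become fixed points of the dynamics, and corrupted inputs relax back onto them.

```python
# Toy Hopfield network: Hebbian learning stores a +/-1 pattern as a fixed
# point (an attractor) of the update dynamics, so a noisy version of the
# pattern is pulled back to the stored state. Illustrative sketch only.

def train(patterns):
    """Hebbian weight matrix for +/-1 patterns; diagonal kept at zero."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=10):
    """Synchronous sign updates; the state settles onto an attractor."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, 1, 1, -1, -1, -1]
w = train([stored])
noisy = [1, -1, 1, -1, -1, -1]   # one bit flipped
recalled = recall(w, noisy)      # relaxes back onto the stored pattern
```

The "grammar" analogy is only schematic here, but the mechanism is the relevant one: discrete, categorical outputs emerge as basins of attraction in a continuous dynamical system.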
Morphological Computing as Logic Underlying Cognition in Human, Animal, and Intelligent Machine
This work examines the interconnections between logic, epistemology, and
sciences within the Naturalist tradition. It presents a scheme that connects
logic, mathematics, physics, chemistry, biology, and cognition, emphasizing
scale-invariant, self-organizing dynamics across organizational tiers of
nature. The inherent logic of agency exists in natural processes at various
levels, under information exchanges. It applies to humans, animals, and
artifactual agents. The common human-centric, natural language-based logic is
an example of complex logic evolved by living organisms that already appears in
the simplest form at the level of basal cognition of unicellular organisms.
Thus, cognitive logic stems from the evolution of physical, chemical, and
biological logic. In a computing nature framework with a self-organizing
agency, innovative computational frameworks grounded in
morphological/physical/natural computation can be used to explain the genesis
of human-centered logic through the steps of naturalized logical processes at
lower levels of organization. The Extended Evolutionary Synthesis of living
agents is essential for understanding the emergence of human-level logic and
the relationship between logic and information processing/computational
epistemology. We conclude that more research is needed to elucidate the details
of the mechanisms linking natural phenomena with the logic of agency in nature.
Warren McCulloch and the British cyberneticians
Warren McCulloch was a significant influence on a number of British cyberneticians, just as several British pioneers in the field were influences on him. He interacted regularly with most of the main figures on the British cybernetics scene, forming close friendships and collaborations with several, as well as mentoring others. Many of these interactions stemmed from a 1949 visit to London during which he gave the opening talk at the inaugural meeting of the Ratio Club, a gathering of brilliant, mainly young, British scientists working in areas related to cybernetics. This paper traces some of these relationships and interactions.
Autopoietic-extended architecture: can buildings think?
To incorporate bioremedial functions into the performance of buildings and to balance
generative architecture's dominant focus on computational programming and digital
fabrication, this thesis first hybridizes theories of autopoiesis into extended cognition in order to
research biological domains that include synthetic biology and biocomputation. Under the
rubric of living technology I survey multidisciplinary fields to gather perspective for student
design of bioremedial and/or metabolic components in generative architecture where
generative not only denotes the use of computation but also includes biochemical,
biomechanical, and metabolic functions.
I trace computation and digital simulations back to Alan Turing's early 1950s
Morphogenetic drawings, reaction-diffusion algorithms, and pioneering artificial intelligence
(AI) in order to establish generative architecture's point of origin. I ask provocatively: Can
buildings think? as a question echoing Turing's own "Can machines think?" Thereafter, I
anticipate not only future bioperformative materials but also theories capable of underpinning
strains of metabolic intelligences made possible via AI, synthetic biology, and living technology.
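Turing's reaction-diffusion idea invoked above can be sketched in a few lines. The code below is a minimal 1-D Gray-Scott-style system with illustrative parameter values (an assumption for demonstration; the thesis is not tied to this particular variant): two fields diffuse at different rates and react, so a small seeded perturbation of the uniform state develops spatial structure.

```python
# Minimal 1-D reaction-diffusion sketch in the spirit of Turing's 1952
# morphogenesis work. Gray-Scott-style system with illustrative parameters
# (an assumption for demonstration, not drawn from the thesis): chemical
# fields U and V diffuse at different rates and react, so a seeded
# perturbation of the uniform state spreads into spatial structure.

def laplacian(field):
    """Discrete 1-D Laplacian with periodic boundary conditions."""
    n = len(field)
    return [field[(i - 1) % n] - 2 * field[i] + field[(i + 1) % n]
            for i in range(n)]

def step(u, v, du=0.16, dv=0.08, feed=0.035, kill=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott equations."""
    lu, lv = laplacian(u), laplacian(v)
    new_u, new_v = [], []
    for i in range(len(u)):
        uvv = u[i] * v[i] * v[i]           # the U + 2V -> 3V reaction term
        new_u.append(u[i] + dt * (du * lu[i] - uvv + feed * (1.0 - u[i])))
        new_v.append(v[i] + dt * (dv * lv[i] + uvv - (feed + kill) * v[i]))
    return new_u, new_v

n = 100
u, v = [1.0] * n, [0.0] * n
for i in range(45, 55):                    # seed a small patch of V
    u[i], v[i] = 0.5, 0.25

for _ in range(200):
    u, v = step(u, v)
# V is no longer uniform: the perturbation has spread and reshaped both fields
```

It is this kind of local rule producing global form, rather than any particular parameter regime, that marks the "point of origin" of generative design the paragraph above refers to.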
I do not imply that metabolic architectural intelligence will be like human cognition. I
suggest, rather, that new research and pedagogies involving the intelligence of bacteria, plants,
synthetic biology, and algorithms define approaches that generative architecture should take in
order to source new forms of autonomous life that will be deployable as corrective
environmental interfaces. I call the research protocol autopoietic-extended design, theorizing it
as an operating system (OS), a research methodology, and an app schematic for design studios
and distance learning that makes use of in-field, e-, and m-learning technologies.
A quest of this complexity requires scaffolding for coordinating theory-driven teaching
with practice-oriented learning. Accordingly, I fuse Maturana and Varela's biological autopoiesis
and its definitions of minimal biological life with Andy Clark's hypothesis of extended cognition
and its cognition-to-environment linkages. I articulate a generative design strategy and student
research method explained via architectural history interpreted from Louis Sullivan's 1924
pedagogical drawing system, Le Corbusier's Modernist pronouncements, and Greg Lynn's
Animate Form. Thus, autopoietic-extended design organizes thinking about the generation of
ideas for design prior to computational production and fabrication, necessitating a fresh
relationship between nature/science/technology and design cognition. To systematize such a
program requires the avoidance of simple binaries (mind/body, mind/nature) as well as the
stationing of tool making, technology, and architecture within the realm of nature. Hence, I argue,
in relation to extended phenotypes, plant neurobiology, and recent genetic research, that
autopoietic-extended design advances design protocols grounded in morphology,
anatomy, cognition, biology, and technology in order to appropriate metabolic and intelligent
properties for sensory/response duty in buildings.
At m-learning levels smartphones, social media, and design apps source data from
nature for students to mediate on-site research by extending 3D pedagogical reach into new
university design programs. I intend the creation of a dialectical investigation of animal/human
architecture and computational history augmented by theory relevant to current algorithmic
design and fablab production. The autopoietic-extended design dialectic sets out ways to
articulate opposition/differences outside the Cartesian either/or philosophy in order to
prototype metabolic architecture, while dialectically maintaining: Buildings can think.