A Biologically Informed Hylomorphism
Although contemporary metaphysics has recently undergone a neo-Aristotelian revival wherein dispositions, or capacities, are now commonplace in empirically grounded ontologies, being routinely utilised in theories of causality and modality, a central Aristotelian concept has yet to be given serious attention: the doctrine of hylomorphism. The reason for this is clear: while the Aristotelian ontological distinction between actuality and potentiality has proven to be a fruitful conceptual framework with which to model the operation of the natural world, the distinction between form and matter has yet to similarly earn its keep. In this chapter, I offer a first step toward showing that the hylomorphic framework is up to that task. To do so, I return to the birthplace of that doctrine, the biological realm. Utilising recent advances in developmental biology, I argue that the hylomorphic framework is an empirically adequate and conceptually rich explanatory schema with which to model the nature of organisms
On Many-Minds Interpretations of Quantum Theory
This paper is a response to some recent discussions of many-minds
interpretations in the philosophical literature. After an introduction to the
many-minds idea, the complexity of quantum states for macroscopic objects is
stressed. Then it is proposed that a characterization of the physical structure
of observers is a proper goal for physical theory. It is argued that an
observer cannot be defined merely by the instantaneous structure of a brain,
but that the history of the brain's functioning must also be taken into
account. Next the nature of probability in many-minds interpretations is
discussed and it is suggested that only discrete probability models are needed.
The paper concludes with brief comments on issues of actuality and identity
over time.
Comment: 16 pages, plain TeX, no macros required. Revised following comments, November 199
Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding
In "Psychopower and Ordinary Madness" my ambition, as it relates to Bernard Stiegler's recent literature, was twofold: 1) critiquing Stiegler's work on exosomatization and artefactual posthumanism (or, more specifically, nonhumanism) to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly's conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom's conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani's deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog)
Naturalizing institutions: Evolutionary principles and application on the case of money
In recent extensions of the Darwinian paradigm into economics, the replicator-interactor duality looms large. I propose a strictly naturalistic approach to this duality in the context of the theory of institutions, which means that its use is seen as being always and necessarily dependent on identifying a physical realization. I introduce a general framework for the analysis of institutions, which synthesizes Searle's and Aoki's theories, especially with regard to the role of public representations (signs) in the coordination of actions, and the function of cognitive processes that underlie rule-following as a behavioral disposition. This allows us to conceive of institutions as causal circuits that connect the population-level dynamics of interactions with cognitive phenomena on the individual level. Those cognitive phenomena are ultimately rooted in neuronal structures. So, I draw on a critical restatement of the concept of the meme by Aunger to propose a new conceptualization of the replicator in the context of institutions: the replicator is a causal conjunction between signs and neuronal structures which undergirds the dispositions that generate rule-following actions. Signs, in turn, are outcomes of population-level interactions. I apply this framework to the case of money, analyzing the emotions that go along with the use of money, and presenting a stylized account of the emergence of money in terms of the naturalized Searle-Aoki model. In this view, money is a neuronally anchored metaphor for emotions relating to social exchange and reciprocity. Money as a meme is physically realized in a replicator which is a causal conjunction of money artefacts and money emotions.
Keywords: Generalized Darwinism, institutions, replicator/interactor, Searle, Aoki, naturalism, memes, emotions, money
Functional Modeling in Safety by Means of Foundational Ontologies
Abstract: The modern theory of safety takes a systemic approach, formalized in the form of several systemic prediction models or methods such as FRAM (Functional Resonance Analysis Method) or STAMP (System-Theoretic Accident Model and Processes). The theory of each approach emphasizes different viewpoints to be considered in approaching various industrial safety issues. This paper focuses on FRAM and its functional viewpoint for modern complex sociotechnical systems. The methodology in this paper is based on the utilization of foundational ontologies to conceptualize the core ideas of FRAM, with a focus on the concept of functions as used in the theory. The outcomes of the case study in the aviation domain show what needs to be determined to properly model functions in FRAM, and they allow for better utilization of the method in real-case applications. The results also confirm some previous research, suggesting that the modern systemic approach to safety is theoretically grounded on common, or at least complementary, tenets, to be prospectively integrated by means of ontology engineering
Physical requirements for models of consciousness
Consciousness presents a series of characteristics that have been observed throughout the years: unity, continuity, richness and robustness are some of them. It manifests itself in regions of the brain capable of processing a huge quantity of integrated information with a level of neural activity close to criticality. We argue that the physics of consciousness cannot be exclusively based on classical physics. The unity of consciousness cannot be explained classically, as classical properties are always Humean, like a mosaic. One needs an entangled quantum system that can at least satisfy part of the functions of a quantum computer, in order to generate an inner aspect with the unity of consciousness and to couple with a classical system that gives it simultaneous access to preprocessed information at the neural level and to produce events that generate neural firings
Understanding, normativity, and scientific practice
Harry Lewendon-Evans
PhD Thesis
Department of Philosophy
Durham University
2018
Recent work in epistemology and philosophy of science has argued that understanding is an important cognitive achievement that philosophers should seek to address for its own sake. This thesis outlines and defends a new account of scientific understanding that analyses the concept of understanding in terms of the concept of normativity. The central claim is that to understand means to grasp something in the light of norms. The thesis is divided into two parts: Part I (chapters one to three) addresses the question of the agency of understanding and Part II (chapters four to five) focuses on the vehicles of scientific understanding. Chapter One begins with an account of understanding drawn from the work of Martin Heidegger, which presents understanding as a practical, normative capacity for making sense of entities. Chapter Two builds on Robert Brandom's normative inferentialism to argue that conceptual understanding is grounded in inferential rules embedded within norm-governed, social practices. Chapter Three argues that normativity should be located in the intersubjective nature of social practices. The chapters in Part II draw on and extend the account of understanding developed in Part I by focusing on how models and explanations function within scientific practice to facilitate scientific understanding. Chapter Four investigates the nature of model-based understanding. It defends the claim that constructing and using models enables a form of conceptual articulation which facilitates scientific understanding by rendering scientific phenomena intelligible. Chapter Five considers the connection between understanding and explanation through the role of explanatory discourse in scientific practice. I argue that the function of explanations is to sculpt and make explicit the norms of intelligibility required for scientific understanding.
This thesis concludes that scientific understanding is an inherently norm-governed phenomenon that is unintelligible without reference to the normative dimension of our social and scientific practices
Aspects of Qualitative Consciousness: A Computer Science Perspective
The domain of artificial intelligence (AI) has been characterised by John Searle [Sear84] by distinguishing between weak AI, according to which computers are useful tools for studying mind, and strong AI, according to which an equivalence is made between mind and programs such that computers executing programs actually possess minds. This dissertation explores a third alternative, namely the prospects and promise of mild AI, according to which a suitable computer is capable of possessing species of mentality that may differ from or be weaker than ordinary human mentality, but qualify as "mentality" nonetheless.
The approach adopted explores whether mind can be replicated, as opposed to merely simulated, in digital machines. This requires a definition of mind in order to judge success. James Fetzer [Fetz90] has suggested minds can be defined as sign-using systems in the sense of Charles Peirce's semiotic (theory of signs) and, on this basis, argues convincingly against strong AI. Determining if his negative conclusion applies to mild AI requires rejoining Fetzer's analysis of the analogical argument for strong AI and redressing his laws of human beings and digital machines. This is tackled by focusing on the nature and form of the operational relationship between the physical machine and mind, and suggesting some operational requirements for a minimal semiotic system independently of any underlying physical implementation. This involves four steps.
Firstly, as a formal foundation, a characterisation of systems is developed in terms of the causal structure and ontological levels in the system, where an ontological level is individuated by the laws that are in effect. This is in contrast to levels of organisation, such as levels of software abstraction. This exploration suggests that, as a matter of natural law, a mediating level between the physical machine and mind is, or at least appears to be, necessary for producing forms of mentality. The lawful structure that appears to be required within this level and between levels is examined with respect to the prospects for implementing a semiotic system.
Secondly, how a system can operate in terms of semiotic processes based on a network of instantiated dispositions is explored. These are modelled as the temporal counterparts of state-transitions and stationary-representations, which are termed causal-flows and temporal-representations, respectively. They highlight the varying interactive structure of temporal patterns of causal activity in time. For the purposes of replicating mind, preserving the causal-flow structure of mental processes arises as an important requirement.
Thirdly, the system structure sufficient for generating consciousness is explored, a necessary condition for a cognitive semiotic system. This suggests a requirement relating to the causal accessibility of the contents of consciousness. This structuring is driven by the system's need to signify reality by categorising its aspects as operational entities upon which decisions can be made. Consciousness arises through the manner in which the signified reality is generated. This makes mind and consciousness the result of a co-ordinated, occurrent, system-wide activity.
Fourthly, in a mathematical sense, brains and computers can be classified as types of numeric and symbolic systems, respectively. These systems are compared and conditions formulated under which they may give rise to equivalent ontological levels. Peirceâs triadic sign relation is analysed in terms of ontological levels and the results used to clarify the nature of the ground relation in machine forms of mentality.
According to the theorems developed, the introduction of a dispositional mediating level might effectively enable a suitable computer to replicate species of mentality. An important factor in determining whether a computer is suitable for this purpose is its performance capacity and thus some estimates are calculated in this respect. It is shown how these requirements, along with a number of others, can help in the development of semiotic systems and variants, such as the iconic state machine of Igor Aleksander [Alek96]