59 research outputs found
Grounded metacognitive architectures for machine consciousness
Multiple approaches to machine consciousness emphasise the importance of metacognitive states and processes. A considerable number of cognitive systems researchers prefer architectures that are not classically symbolic, and in which learning, rather than a priori structure, is central. But it is unclear how these grounded architectures can support metacognition of the required sort. To investigate this possibility, a basic design sketch of such an architecture is presented
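Illustrative aside, not the design sketch presented in the paper: the hypothetical Python fragment below shows one minimal way a learned, non-symbolic base process might be paired with a metacognitive monitor that tracks that process's own reliability. The class names, the toy classifier, and the thresholds are all assumptions.

```python
# A minimal sketch of a grounded base process plus a metacognitive layer.
# All names, values and the toy "activation" are assumptions for illustration.
from collections import deque

class GroundedBase:
    """Stand-in for a learned, non-symbolic component: maps an input to a
    (label, confidence) pair. A real system would use a trained model."""
    def classify(self, x):
        score = max(0.0, min(1.0, x))       # toy activation in [0, 1]
        label = "A" if score >= 0.5 else "B"
        confidence = abs(score - 0.5) * 2    # 0 = maximally uncertain, 1 = certain
        return label, confidence

class MetacognitiveMonitor:
    """Tracks statistics *about* the base process, supporting simple
    metacognitive states such as 'I am currently unreliable'."""
    def __init__(self, window=20, threshold=0.4):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, confidence):
        self.history.append(confidence)

    def feeling_of_knowing(self):
        return sum(self.history) / len(self.history) if self.history else 0.0

    def should_defer(self):
        # A crude metacognitive judgement about the base process itself
        return self.feeling_of_knowing() < self.threshold

base, monitor = GroundedBase(), MetacognitiveMonitor()
for x in [0.9, 0.52, 0.48, 0.51]:
    label, conf = base.classify(x)
    monitor.observe(conf)
    print(label, round(conf, 2), monitor.should_defer())
```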
Natural intensions
There is an attractive way to explain representation in terms of adaptivity: roughly, an item R represents a state of affairs S if it has the proper function of co-occurring with S (that is, if the ancestors of R co-occurred with S and this co-occurrence explains why R was selected for, and thus why R exists now). Although this may be an adequate account of the extension or reference of R, what such explanations often neglect is an account of the intension or sense of R: how S is represented by R. No doubt such an account, if correct, would be complex, involving such things as the proper functions of the mechanisms that use R, the mechanisms by which R fulfills its function, and more. But it seems likely that an important step toward such an account would be the identification of the norms that govern this process. The norms of validity and Bayes' Theorem can guide investigations into the actual inferences and probabilistic reasoning that organisms perform. Is there a norm that can do the same for intension-fixing? I argue that before this can be resolved, some problems with the biosemantic account of extension must be resolved. I attempt to do so by offering a complexity-based account of the natural extension of a representation R: for a given set of ancestral co-occurrences Z, the natural extension is the extension of the least complex intension that best covers Z. Minimal description length is considered as a means for measuring complexity. Some advantages of and problems with the account are identified
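Illustrative aside, not drawn from the paper: the toy Python sketch below renders the selection principle as an MDL-style score, description length plus penalised exceptions over the co-occurrence set Z, with the least costly candidate intension winning. The candidate predicates, the string-length proxy for description length, and the exception weight are all assumptions.

```python
# Toy rendering of "the natural extension is the extension of the least complex
# intension that best covers Z", with a crude MDL-style cost. Everything below
# (candidates, domain, weights) is hypothetical and for illustration only.

Z = {2, 4, 6, 8, 10}          # hypothetical ancestral co-occurrences
domain = range(1, 11)         # finite domain over which coverage is checked

candidates = {
    # description string          predicate
    "n mod 2 = 0":                lambda n: n % 2 == 0,
    "n mod 2 = 0 and n <= 10":    lambda n: n % 2 == 0 and n <= 10,
    "n in {2, 4, 6, 8, 10}":      lambda n: n in {2, 4, 6, 8, 10},
}

def mdl_score(description, predicate, weight=3):
    covered = {n for n in domain if predicate(n)}
    exceptions = len(Z - covered) + len(covered - Z)   # misses + false positives
    return len(description) + weight * exceptions       # description length + misfit

best = min(candidates, key=lambda d: mdl_score(d, candidates[d]))
print("natural intension (by this toy measure):", best)
# -> "n mod 2 = 0": the shortest description that exactly covers Z on this domain
```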
The physical mandate for belief-goal psychology
This article describes a heuristic argument for understanding certain physical systems in terms of properties that resemble the beliefs and goals of folk psychology. The argument rests on very simple assumptions. The core of the argument is that predictions about certain events can legitimately be based on assumptions about later events, resembling Aristotelian "final causation"; however, more nuanced causal entities (resembling fallible beliefs) must be introduced into these types of explanation in order for them to remain consistent with a causally local Universe
Functionalism, revisionism, and qualia
From the editor's introduction: "Ron Chrisley and Aaron Sloman open Part I of this issue with their article 'Functionalism, Revisionism, and Qualia.' Chrisley and Sloman discuss revisionism about qualia, the view that tries to navigate between naïve qualia realism and reductive eliminativism. The authors discuss the relevance of their approach to AI. They also relate to the works they view as following the main tenets of revisionism about qualia. This includes Gilbert Harman's version of functionalism, discussed in much detail (including Harman's article 'Explaining the Explanatory Gap,' published in the spring 2007 issue of this newsletter), and also the psychomotoric approach to qualia by Kevin O'Regan."
A human-centered approach to AI ethics: a perspective from cognitive science
This chapter explores a human-centered approach to AI and robot ethics. It demonstrates how a human-centered approach can resolve some problems in AI and robot ethics that arise from the fact that AI systems and robots have cognitive states, and yet have no welfare, and are not responsible. In particular, the approach allows that violence toward robots can be wrong even if robots cannot be harmed. More importantly, the approach encourages people to shift away from designing robots as if they were human ethical deliberators. Ultimately, the cognitive states of AI systems and robots may have a role to play in the proper ethical analysis of situations involving them, even if it is not by virtue of conferring welfare or responsibilities on those systems or robots
Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli
While theories of consciousness differ substantially, the "conscious access hypothesis", which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1-4 the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not consciously perceived and thus challenge the global access hypothesis and those theories embracing it
Some foundational issues concerning anticipatory systems
Some foundational conceptual issues concerning anticipatory systems are identified and discussed: 1) The doubly temporal nature of anticipation is noted: anticipations are directed toward one time, and exist at another; 2) Anticipatory systems can be open: they can perturb and be perturbed by states external to the system; 3) Anticipation may be facilitated by a system modeling the relation between its own output, its environment, and its future input; 4) Anticipations must be a part of the system whose anticipations they are. Each of these points is made more precise by considering what changes they require to be made to the basic equation characterising anticipatory systems. In addition, some philosophical questions concerning the content of anticipatory representations are considered
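Illustrative aside: the paper's own formulation is not reproduced here, but one generic way of writing such a basic equation, assumed purely for orientation, is sketched below. It makes the double temporal index explicit: the anticipation is held at time t yet directed toward the later time t + Δ.

```latex
% Assumed, generic anticipatory-system law (illustration only, not the paper's equation):
% the change of state at time t depends on the current state x(t) and on an internal
% model M's prediction \hat{x}(t+\Delta) about the later time t + \Delta.
\[
  \left.\frac{dx}{dt}\right|_{t} = f\bigl(x(t),\, \hat{x}(t+\Delta)\bigr),
  \qquad
  \hat{x}(t+\Delta) = M\bigl(x(t)\bigr)
\]
% Openness (point 2) would add an external input u(t) as a further argument of f;
% point 3 corresponds to M also modelling how the system's own output feeds back into u.
```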
Philosophical foundations of artificial consciousness
Objective: Consciousness is often thought to be that aspect of mind that is least amenable to being understood or replicated by artificial intelligence (AI). The first-personal, subjective, what-it-is-like-to-be-something nature of consciousness is thought to be untouchable by the computations, algorithms, processing and functions of AI methods. Since AI is the most promising avenue toward artificial consciousness (AC), the conclusion many draw is that AC is even more doomed than AI supposedly is. The objective of this paper is to evaluate the soundness of this inference. Methods: The results are achieved by means of conceptual analysis and argumentation. Results and conclusions: It is shown that pessimism concerning the theoretical possibility of artificial consciousness is unfounded, based as it is on misunderstandings of AI, and a lack of awareness of the possible roles AI might play in accounting for or reproducing consciousness. This is done by making some foundational distinctions relevant to AC, and using them to show that some common reasons given for AC scepticism do not touch some of the (usually neglected) possibilities for AC, such as prosthetic, discriminative, practically necessary, and lagom (necessary-but-not-sufficient) AC. Along the way three strands of the author's work in AC - interactive empiricism, synthetic phenomenology, and ontologically conservative heterophenomenology - are used to illustrate and motivate the distinctions and the defences of AC they make possible
Why Everything Doesn't Realize Every Computation
Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not "real", and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam's argument is scrutinized, and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one's view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable. Key words: Computation, philosophy of computation, embeddedness, foundations of cognitive science, formality, multiple realization
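Illustrative aside, not the paper's own example: the hypothetical Python snippet below shows the kind of trivial, after-the-fact state-pairing that drives Putnam-style universal realizability claims, and why requirements of embeddedness and full causality are meant to rule it out. All state names are made up.

```python
# Toy version of the Putnam-style construction under criticism: given any
# sequence of distinct physical states and any desired run of a finite
# automaton, a "realization" map can always be cooked up by brute-force
# pairing, with no appeal to inputs, outputs, or counterfactual structure.

physical_trajectory = ["p0", "p1", "p2", "p3"]        # distinct physical states over time
desired_fsa_run     = ["start", "q1", "q2", "accept"]  # any run of any automaton we like

# The trivial pairing: map the i-th physical state to the i-th automaton state.
realization_map = dict(zip(physical_trajectory, desired_fsa_run))

simulated_run = [realization_map[p] for p in physical_trajectory]
assert simulated_run == desired_fsa_run
print(realization_map)

# Once realization is required to respect inputs and outputs (embeddedness) and
# full, counterfactual-supporting causation, this after-the-fact pairing no
# longer qualifies, so realization claims are not vacuous.
```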
- …