A Computational Theory of Subjective Probability
In this article we demonstrate how algorithmic probability theory is applied
to situations that involve uncertainty. When people are unsure of their model
of reality, the outcomes they observe cause them to update their
beliefs. We argue that classical probability cannot be applied in such cases,
and that subjective probability must instead be used. In Experiment 1 we show
that, when judging the probability of lottery number sequences, people apply
subjective rather than classical probability. In Experiment 2 we examine the
conjunction fallacy and demonstrate that the materials used by Tversky and
Kahneman (1983) involve model uncertainty. We then provide a formal
mathematical proof that, for every uncertain model, there exists a conjunction
of outcomes which is more subjectively probable than either of its constituents
in isolation.
Comment: Maguire, P., Moser, P., Maguire, R., & Keane, M.T. (2013) "A
computational theory of subjective probability." In M. Knauff, M. Pauen, N.
Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of
the Cognitive Science Society (pp. 960-965). Austin, TX: Cognitive Science
Society.
Quantum Entanglement in Concept Combinations
Research on the application of quantum structures to cognitive science
confirms that these structures appear quite systematically in the dynamics of
concepts and their combinations, and that quantum-based models faithfully
represent experimental data in situations where classical approaches are
problematic.
In this paper, we analyze the data we collected in an experiment on a specific
conceptual combination, showing that Bell's inequalities are violated in the
experiment. We present a new refined entanglement scheme to model these data
within standard quantum theory rules, where 'entangled measurements and
entangled evolutions' occur, in addition to the expected 'entangled states',
and present a full quantum representation in complex Hilbert space of the data.
This stronger form of entanglement in measurements and evolutions might have
relevant applications in the foundations of quantum theory, as well as in the
interpretation of nonlocality tests. It could indeed explain some
non-negligible 'anomalies' identified in EPR-Bell experiments.
Comment: 16 pages, no figures.
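The Bell-inequality test mentioned in this abstract is usually carried out via the CHSH quantity. The paper's own conceptual data are not reproduced here; as a minimal sketch, the check below uses the textbook singlet-state correlations at the standard measurement angles (hypothetical stand-ins for the experiment's expectation values) to show how a violation of the classical bound |S| <= 2 is detected.

```python
import math

def chsh(E_ab, E_abp, E_apb, E_apbp):
    """CHSH quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
    Classical (local hidden-variable) models give |S| <= 2;
    quantum mechanics allows up to 2*sqrt(2) (Tsirelson's bound)."""
    return E_ab - E_abp + E_apb + E_apbp

# Singlet-state prediction E = -cos(angle difference) at the standard angles;
# these stand in for measured expectation values, not the paper's data.
angles = {"a": 0.0, "ap": math.pi / 2, "b": math.pi / 4, "bp": 3 * math.pi / 4}
E = lambda x, y: -math.cos(angles[x] - angles[y])

S = chsh(E("a", "b"), E("a", "bp"), E("ap", "b"), E("ap", "bp"))
print(f"S = {S:.3f}")  # |S| = 2*sqrt(2) ≈ 2.828, violating the classical bound
```

In an experiment, each E(x, y) is estimated from coincidence counts for the four outcome pairs at that setting; any |S| above 2 rules out the classical account.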
The Sleeping Beauty Problem -- A Real-World Solution
The Sleeping Beauty Problem remains a paradox that cuts across
multiple disciplines, including probability theory, self-locating belief,
decision theory, cognitive science, and the philosophy of mathematics and
science. It asks what credence Sleeping Beauty should assign to a coin toss
having landed Heads, in an experiment that has given rise to two main stances,
those of the Halfers and the Thirders. Here a real-world empirical approach
numerically highlights the breakdown between these groups and considers how a
real-world implementation of such an experiment, with sleep induced by
anesthesia and amnesia induced pharmacologically, would affect Sleeping
Beauty's credence in the coin toss.
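The Halfer/Thirder split in this abstract can be made concrete with a short Monte Carlo sketch (my illustration, not the paper's method): under the standard protocol, Heads produces one awakening and Tails two, so the frequency of Heads per experiment is 1/2 while the frequency of Heads per awakening is 1/3.

```python
import random

def simulate(n_trials=100_000, seed=0):
    """Simulate the Sleeping Beauty protocol: Heads -> one awakening,
    Tails -> two awakenings (Monday and Tuesday, amnesia in between)."""
    rng = random.Random(seed)
    heads_trials = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        if heads:
            heads_trials += 1
            heads_awakenings += 1
            total_awakenings += 1
        else:
            total_awakenings += 2  # both awakenings occur with Tails
    halfer = heads_trials / n_trials                # per-experiment frequency
    thirder = heads_awakenings / total_awakenings   # per-awakening frequency
    return halfer, thirder

halfer, thirder = simulate()
print(f"P(Heads) per experiment: {halfer:.3f}")  # ~0.5
print(f"P(Heads) per awakening:  {thirder:.3f}") # ~0.33
```

Both frequencies are correct answers to different questions; the dispute is over which question Beauty's credence should track.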
Modeling Memes: A Memetic View of Affordance Learning
This research employed systems social science inquiry to build a synthesis model that would be useful for modeling meme evolution. First, a formal definition of memes was proposed that balanced both ontological adequacy and empirical observability. Based on this definition, a systems model for meme evolution was synthesized from Shannon Information Theory and elements of Bandura's Social Cognitive Learning Theory. Research in perception, social psychology, learning, and communication was incorporated to explain the cognitive and environmental processes guiding meme evolution. By extending the PMFServ cognitive architecture, socio-cognitive agents were created who could simulate social learning of Gibson affordances. The PMFServ agent-based model was used to examine two scenarios: a simulation to test for potential memes inside the Stanford Prison Experiment and a simulation of pro-US and anti-US meme competition within the fictional Hamariyah Iraqi village. The Stanford Prison Experiment simulation was designed, calibrated, and tested using the original Stanford Prison Experiment archival data. This scenario was used to study potential memes within a real-life context. The Stanford Prison Experiment simulation was complemented by internal and external validity testing. The Hamariyah Iraqi village was used to analyze meme competition in a fictional village based upon US Marine Corps human terrain data. This simulation demonstrated how the implemented system can infer the personality traits and contextual factors that cause certain agents to adopt pro-US or anti-US memes, using Gaussian mixture clustering analysis and cross-cluster analysis. Finally, this research identified significant gaps in empirical science with respect to studying memes. These roadblocks and their potential solutions are explored in the conclusions of this work.
IT for Creativity in Problem Formulation
Creativity is fostered by distributed cognitive activity as individuals interact with artifacts and with each other, often with the aid of artifacts. In this design science research-in-progress, we describe a software artifact known as Theory Garden that was developed to facilitate distributed cognitive activity. We detail an experimental process through which we are assessing the potential for this artifact to facilitate creative outcomes in the context of problem formulation. The experiment involves two different phases: one that addresses managerial problem definition, and one that deals with the requirements processes of software development teams. We conclude with potential implications of this research, should we find evidence that Theory Garden software does support more creative outcomes.
Chinese whispers Chinese rooms: the poetry of John Ashbery and cognitive studies
This thesis examines the relationship of John Ashbery’s poetry to developments in cognitive studies over the course of the last sixty years, particularly the science of linguistics as viewed from a Chomskyan perspective. The thesis is divided into four chapters which position particular topics in cognitive studies as organising principles for examining Ashbery’s poetry. The first chapter concentrates on developments in syntactic theory in relation to Ashbery’s experiments with poetic syntax. The second chapter examines the notion of “intention” and “intentionality” in Ashbery’s writing from the perspective of cognitive “theory of context” writing, particularly the work of Deirdre Wilson and Daniel Sperber. The final two chapters consider cognitive questions using Ashbery’s poetry as a means of entry into controversial areas in formal cognitive studies. The third chapter examines his poetry in relation to temporality, suggesting that Ashbery’s experiments with time form “theories of consciousness” as they consciously manipulate readerly consciousness and attention. The final chapter explores perception in relation to Ashbery’s writing. The thesis argues that poetry can be conceived of as a less formalised method of cognitive study, and that poetic experiment can lead to significant reconceptualisations of cognitive notions which may play a role in framing critical questions for more formal experiments in cognitive science-philosophy going forward. The thesis concludes with reflections on the wider implications for literary cognitive studies in general
Memory for the meaningless: How chunks help
It is a classic result in cognitive science that chess masters can recall briefly presented positions better than weaker players when these positions are meaningful, but that their superiority disappears with random positions. However, Gobet and Simon (1996a) have recently shown that there is a skill effect with random chess positions as well. The impact of this result for theories of expert memory is discussed. CHREST, a computational chunking model of chess expertise based on EPAM (Feigenbaum & Simon, 1984), accounts for this skill difference. The model is also compared with human data from an experiment where presentation time for random positions was systematically varied from 1 second to 60 seconds. Simulations show that the model captures the main features of the human data, thus adding support to the EPAM theory. They also corroborate earlier estimates that visual short-term memory may contain three or four chunks.
Identifying Quantum Structures in the Ellsberg Paradox
Empirical evidence has confirmed that quantum effects also occur frequently
outside the microscopic domain, while quantum structures satisfactorily model
various situations in several areas of science, including biological, cognitive
and social processes. In this paper, we elaborate a quantum mechanical model
which faithfully describes the 'Ellsberg paradox' in economics, showing that
the mathematical formalism of quantum mechanics is capable of representing the
'ambiguity' present in this kind of situation, because of the presence of
'contextuality'. Then, we analyze the data collected in a concrete experiment
we performed on the Ellsberg paradox and work out a complete representation of
them in complex Hilbert space. We prove that the presence of quantum structure
is genuine, that is, 'interference' and 'superposition' in a complex Hilbert
space are really necessary to describe the conceptual situation presented by
Ellsberg. Moreover, our approach sheds light on 'ambiguity laden' decision
processes in economics and decision theory, and allows one to deal with
different Ellsberg-type generalizations, e.g., the 'Machina paradox'.
Comment: 16 pages, no figures. arXiv admin note: substantial text overlap with
arXiv:1208.235
The Effects of Using Multimedia Presentations and Modular Worked-out Examples as Instructional Methodologies to Manage the Cognitive Processing Associated with Information Literacy Instruction at the Graduate and Undergraduate Levels of Nursing Education
Information literacy is a complex knowledge domain. Cognitive processing theory describes the effects an instructional subject and the learning environment have on working memory. Essential processing is one component of cognitive processing theory that explains the inherent complexity of knowledge domains such as information literacy. Prior research involving cognitive processing relied heavily on instructional subjects from the areas of math, science and technology. For this study, the instructional subject of information literacy was situated within the literature describing ill-defined problems using modular worked-out examples instructional design techniques. The purpose of this study was to build on the limited research into cognitive processing, ill-defined problems and modular worked-out examples by examining the use of a multimedia audiobook as an instructional technique to manage the cognitive processing occurring during information literacy instruction. Two experiments were conducted using convenience samples of doctoral nursing students (Experiment 1, n = 38) and undergraduate nursing students (Experiment 2, n = 80). Students in Experiment 1 completed a pretest, were exposed to a brief eight-minute and sixteen-second (8:16) multimedia audiobook instructional session, and then completed a posttest. The pretest and posttest consisted of one ill-defined problem presented as an essay-style question, and eleven multiple-choice questions. Experiment 2 built upon Experiment 1 through the addition of three questions measuring extraneous processing, generative processing and essential processing. Experiment 1 results indicated a large Cohen's effect size for the multiple-choice set of questions (d = 1.08) and a medium effect size for the essay-style, ill-defined problem (d = 0.73).
Experiment 2 results indicated a medium effect size for the multiple-choice set of questions (d = 0.55) and a medium effect size for the essay-style, ill-defined problem (d = 0.67). With respect to Experiment 2, there were statistically significant differences between generative processing and extraneous processing, t(79) = 6.84, p < .001, and between essential processing and extraneous processing, t(79) = 4.37, p < .001. There was no statistically significant difference between essential processing and generative processing, t(79) = 1.69, p = .09.
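The Cohen's d values this abstract reports for its pretest-posttest design can be illustrated with a short sketch. The study's raw scores are not given, so the pre/post lists below are hypothetical, and dividing the mean gain by the pooled sample standard deviation is one common convention for this design (not necessarily the study's exact formula).

```python
import math
import statistics

def cohens_d(pre, post):
    """Cohen's d for a pretest-posttest design: mean gain divided by the
    pooled standard deviation of the two score sets (sample variances)."""
    mean_diff = statistics.mean(post) - statistics.mean(pre)
    sd_pooled = math.sqrt(
        (statistics.variance(pre) + statistics.variance(post)) / 2
    )
    return mean_diff / sd_pooled

# Hypothetical multiple-choice scores (0-11 scale), not the study's data
pre = [4, 5, 3, 6, 5, 4, 6, 5]
post = [7, 8, 6, 9, 8, 7, 9, 8]

d = cohens_d(pre, post)
print(f"d = {d:.2f}")  # large effect by Cohen's conventional benchmarks
```

By the conventional benchmarks used in the abstract, d around 0.5 is a medium effect and d at or above 0.8 is large.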