Vector Symbolic Architectures answer Jackendoff's challenges for cognitive neuroscience
Jackendoff (2002) posed four challenges that linguistic combinatoriality and
rules of language present to theories of brain function. The essence of these
problems is the question of how to neurally instantiate the rapid construction
and transformation of the compositional structures that are typically taken to
be the domain of symbolic processing. He contended that typical connectionist
approaches fail to meet these challenges and that the dialogue between
linguistic theory and cognitive neuroscience will be relatively unproductive
until the importance of these problems is widely recognised and the challenges
answered by some technical innovation in connectionist modelling. This paper
claims that a little-known family of connectionist models (Vector Symbolic
claims that a little-known family of connectionist models (Vector Symbolic
Architectures) is able to meet Jackendoff's challenges.

Comment: This is a slightly updated version of the paper presented at the
Joint International Conference on Cognitive Science, 13-17 July 2003,
University of New South Wales, Sydney, Australia. 6 pages.
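The VSA family the paper refers to can be illustrated with a small sketch. The MAP-style encoding below (random bipolar vectors, element-wise multiplication for binding, majority vote for superposition) is one member of that family; the role and filler names are invented for illustration, not taken from the paper.

```python
# Minimal sketch of a MAP-style Vector Symbolic Architecture.
import random

random.seed(0)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def rand_vec():
    """Random bipolar hypervector."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Element-wise multiplication binds a role to a filler (self-inverse)."""
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):
    """Element-wise majority superposes several bound pairs into one vector."""
    return [1 if sum(t) > 0 else -1 for t in zip(*vs)]

def sim(a, b):
    """Normalised dot product; near 0 for unrelated random vectors."""
    return sum(x * y for x, y in zip(a, b)) / D

# Encode the filler structure "AGENT=mary, PATIENT=john" as one vector.
AGENT, PATIENT = rand_vec(), rand_vec()
mary, john = rand_vec(), rand_vec()
sentence = bundle(bind(AGENT, mary), bind(PATIENT, john))

# Binding is self-inverse, so multiplying by a role vector
# recovers a noisy copy of that role's filler.
probe = bind(sentence, AGENT)
print(sim(probe, mary))  # clearly positive: mary fills the AGENT role
print(sim(probe, john))  # near zero: john does not
```

Because binding and unbinding are fixed element-wise operations, a composite structure is built or decomposed in a single pass over the vectors, which is the kind of rapid construction and transformation of compositional structure that Jackendoff's challenges ask for.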
Integrative (Synchronisation) Mechanisms of (Neuro-)Cognition against the Background of (Neo-)Connectionism, the Theory of Nonlinear Dynamical Systems, Information Theory, and the Self-Organisation Paradigm
Building on its main theme, namely the presentation and examination of a solution to the binding problem via temporal integrative synchronisation mechanisms within the cognitive (neuro-)architectures of (neo-)connectionism, with reference to perceptual and language cognition and above all to the problems of compositionality and systematicity that arise there, this work aims to sketch the construction of a yet-to-be-developed integrative theory of (neuro-)cognition, based on the representational format of a so-called "vectorial form", against the background of (neo-)connectionism, the theory of nonlinear dynamical systems, information theory, and the self-organisation paradigm.
Implications of Computational Cognitive Models for Information Retrieval
This dissertation explores the implications of computational cognitive modeling for information retrieval. The parallel between information retrieval and human memory is that the goal of an information retrieval system is to find the set of documents most relevant to a query, whereas the goal of the human memory system is to assess the relevance of items stored in memory given a memory probe (Steyvers & Griffiths, 2010).
The two major topics of this dissertation are desirability and information scent. Desirability is the context independent probability of an item receiving attention (Recker & Pitkow, 1996). Desirability has been widely utilized in numerous experiments to model the probability that a given memory item would be retrieved (Anderson, 2007). Information scent is a context dependent measure defined as the utility of an information item (Pirolli & Card, 1996b). Information scent has been widely utilized to predict the memory item that would be retrieved given a probe (Anderson, 2007) and to predict the browsing behavior of humans (Pirolli & Card, 1996b).
In this dissertation, I proposed the theory that the desirability observed in human memory is caused by preferential attachment in networks. Additionally, I showed that documents accessed in large repositories mirror the statistical properties observed in human memory and that these properties can be used to improve document ranking. Finally, I showed that combining information scent and desirability improves document ranking over existing well-established approaches.
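The preferential-attachment account of desirability can be sketched as a simple simulation: each access goes either to a brand-new item or to an existing item chosen with probability proportional to its past access count, so a small set of items ends up receiving most of the attention. The parameters below (5% new-item rate, 20,000 accesses) are arbitrary illustrative choices, not values from the dissertation.

```python
# Sketch of desirability as preferential attachment.
import random
from collections import Counter

random.seed(1)
NEW_ITEM_PROB = 0.05   # arbitrary illustrative rate of new items

accesses = [0]         # one entry per past access, holding an item id
n_items = 1
for _ in range(20_000):
    if random.random() < NEW_ITEM_PROB:
        item, n_items = n_items, n_items + 1   # introduce a new item
    else:
        # picking uniformly from the access log selects an existing
        # item with probability proportional to its past access count
        item = random.choice(accesses)
    accesses.append(item)

counts = sorted(Counter(accesses).values(), reverse=True)
top10_share = sum(counts[:10]) / len(accesses)
print(f"{len(counts)} items; top 10 receive {top10_share:.0%} of accesses")
```

The resulting heavy-tailed access distribution is what makes desirability useful as a context-independent prior: a ranker can favour the few items that attract most accesses before any query-specific evidence (information scent) is considered.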
The cognitive basis for encoding and navigating linguistic structure
This dissertation is concerned with the cognitive mechanisms that are used to encode and navigate linguistic structure. Successful language understanding requires mechanisms for efficiently encoding and navigating linguistic structure in memory. The timing and accuracy of linguistic dependency formation provide valuable insights into the cognitive basis of these mechanisms. Recent research on linguistic dependency formation has revealed a profile of selective fallibility: some linguistic dependencies are rapidly and accurately implemented, but others are not, giving rise to "linguistic illusions". This profile is not expected under current models of grammar or language processing. The broad consensus, however, is that the profile of selective fallibility reflects dependency-based differences in memory access strategies, including the use of different retrieval mechanisms and the selective use of cues for different dependencies. In this dissertation, I argue that (i) the grain size of variability is not the dependency type, and (ii) there is no homogeneous cause for linguistic illusions. Rather, I argue that the variability is a consequence of how the grammar interacts with general-purpose encoding and access mechanisms. To support this argument, I provide three types of evidence. First, I show how to "turn on" illusions for anaphor resolution, a phenomenon that has resisted illusions in the past, reflecting a cue-combinatorics scheme that prioritizes structural information in memory retrieval. Second, I show how to "turn off" a robust illusion for negative polarity item (NPI) licensing, reflecting access to the internal computations during the encoding and interpretation of emerging semantic/pragmatic representations. Third, I provide computational simulations that derive both the presence and absence of the illusions from within the same memory architecture.
These findings lead to a new conception of how we mentally encode and navigate structured linguistic representations.
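The cue-based retrieval account behind this line of work can be illustrated with a toy scorer: candidate memory items are scored by how many retrieval cues their features match, so a partially matching distractor can compete with the grammatical target. The sentence, items, features, and cue sets below are invented for illustration, and real models add activation noise, decay, and cue weighting.

```python
# Toy cue-based retrieval with partial matching.

items = {
    # e.g. "The bills that no senators voted for will ever become law"
    "no senators": {"negative", "embedded-subject"},
    "the bills":   {"main-subject", "c-commander"},
}

def activation(features, cues, weight=1.0):
    """Score an item by the number of retrieval cues it matches."""
    return weight * len(cues & features)

struct_cues = {"c-commander"}              # structural cue only
full_cues = {"c-commander", "negative"}    # plus a semantic licensing cue

scores_struct = {n: activation(f, struct_cues) for n, f in items.items()}
scores_full = {n: activation(f, full_cues) for n, f in items.items()}

print(scores_struct)  # only the structurally accessible item matches
print(scores_full)    # the negative distractor now ties with it
```

Under a cue-combinatorics scheme like the one argued for here, whether such a partial-match distractor can win depends on how the cues are combined and weighted, which is how one memory architecture can produce an illusion in one configuration and suppress it in another.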