3,261 research outputs found
Mechanisms for the generation and regulation of sequential behaviour
A critical aspect of much human behaviour is the generation and regulation of sequential activities. Such behaviour is seen both in naturalistic settings, such as routine action and language production, and in laboratory tasks, such as serial recall and many reaction time experiments. There are a variety of computational mechanisms that may support the generation and regulation of sequential behaviours, ranging from those underlying Turing machines to those employed by recurrent connectionist networks. This paper surveys a range of such mechanisms, together with a range of empirical phenomena related to human sequential behaviour. It is argued that the empirical phenomena pose difficulties for most sequencing mechanisms, but that converging evidence from behavioural flexibility, error data arising when the system is stressed or damaged following brain injury, and between-trial effects in reaction time tasks points to a hybrid symbolic activation-based mechanism for the generation and regulation of sequential behaviour. Some implications of this view for the nature of mental computation are highlighted.
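One family of activation-based sequencing mechanisms of the kind surveyed is competitive queuing: items carry an activation gradient, the most active item wins each selection round and is then suppressed. The sketch below is a toy illustration of that general idea, not the paper's own model; the function name and the Gaussian noise model are invented.

```python
import numpy as np

def competitive_queuing(activations, noise_sd=0.0, seed=0):
    """Select the most active item at each step, then suppress it.
    With no noise the activation gradient is reproduced exactly;
    noise yields the kinds of ordering errors seen under stress."""
    rng = np.random.default_rng(seed)
    act = np.array(activations, dtype=float)
    order = []
    for _ in range(len(act)):
        noisy = act + rng.normal(0.0, noise_sd, size=act.shape)
        winner = int(np.argmax(noisy))
        order.append(winner)
        act[winner] = -np.inf  # self-inhibition of the selected item
    return order

print(competitive_queuing([0.9, 0.7, 0.5, 0.3]))  # → [0, 1, 2, 3]
# Under noise, order errors (transpositions) can occur:
print(competitive_queuing([0.9, 0.7, 0.5, 0.3], noise_sd=0.5))
```

The suppression step is what makes the mechanism activation-based rather than symbolic: serial order lives in the gradient, not in explicit position markers.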
SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks
In this paper, we describe a so-called screening approach for learning robust processing of spontaneously spoken language. A screening approach is a flat analysis which uses shallow sequences of category representations for analyzing an utterance at various syntactic, semantic and dialog levels. Rather than using a deeply structured symbolic analysis, we use a flat connectionist analysis. This screening approach aims at supporting speech and language processing by using (1) data-driven learning and (2) robustness of connectionist networks. In order to test this approach, we have developed the SCREEN system, which is based on this new robust, learned and flat analysis. In this paper, we focus on a detailed description of SCREEN's architecture, the flat syntactic and semantic analysis, the interaction with a speech recognizer, and a detailed evaluation analysis of the robustness under the influence of noisy or incomplete input. The main result of this paper is that flat representations allow more robust processing of spontaneous spoken language than deeply structured representations. In particular, we show how the fault-tolerance and learning capability of connectionist networks can support a flat analysis for providing more robust spoken-language processing within an overall hybrid symbolic/connectionist framework.
Comment: 51 pages, Postscript. To be published in Journal of Artificial Intelligence Research 6(1), 199
The Many Functions of Discourse Particles: A Computational Model of Pragmatic Interpretation
We present a connectionist model for the interpretation of discourse particles in real dialogues that is based on neuronal principles of categorization (categorical perception, prototype formation, contextual interpretation). It can be shown that discourse particles operate just like other morphological and lexical items with respect to interpretation processes. The description proposed locates discourse particles in an elaborate model of communication which incorporates many different aspects of the communicative situation. We therefore also attempt to explore the content of the category discourse particle. We present a detailed analysis of the meaning assignment problem and show that 80%–90% correctness for unseen discourse particles can be reached with the feature analysis provided. Furthermore, we show that 'analogical transfer' from one discourse particle to another is facilitated if prototypes are computed and used as the basis for generalization. We conclude that the interpretation processes which are a part of the human cognitive system are very similar with respect to different linguistic items. However, the analysis of discourse particles shows clearly that any explanatory theory of language needs to incorporate a theory of communication processes.
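The prototype-formation step described above can be sketched as centroid computation over feature vectors, with interpretation as a nearest-prototype choice. The feature vectors and the two particle readings below are invented for illustration and are not taken from the model.

```python
import numpy as np

def build_prototypes(examples):
    """Average each category's feature vectors into one prototype
    vector (prototype formation)."""
    return {cat: np.mean(vecs, axis=0) for cat, vecs in examples.items()}

def interpret(features, prototypes):
    """Assign an unseen token to the category whose prototype is
    nearest in feature space (nearest-prototype interpretation)."""
    return min(prototypes,
               key=lambda c: np.linalg.norm(features - prototypes[c]))

# Hypothetical graded features for two invented particle readings.
examples = {
    "check-back": [np.array([1.0, 0.9, 0.1]), np.array([0.9, 1.0, 0.0])],
    "take-up":    [np.array([0.1, 0.0, 1.0]), np.array([0.0, 0.2, 0.9])],
}
protos = build_prototypes(examples)
print(interpret(np.array([0.8, 0.8, 0.2]), protos))  # → check-back
```

Generalization to an unseen particle then amounts to measuring its distance to the stored prototypes rather than to every stored exemplar, which is the 'analogical transfer' effect the abstract reports.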
Are developmental disorders like cases of adult brain damage? Implications from connectionist modelling
It is often assumed that similar domain-specific behavioural impairments found in cases of adult brain damage and developmental disorders correspond to similar underlying causes, and can serve as convergent evidence for the modular structure of the normal adult cognitive system. We argue that this correspondence is contingent on an unsupported assumption that atypical development can produce selective deficits while the rest of the system develops normally (Residual Normality), and that this assumption tends to bias data collection in the field. Based on a review of connectionist models of acquired and developmental disorders in the domains of reading and past tense, as well as on new simulations, we explore the computational viability of Residual Normality and the potential role of development in producing behavioural deficits. Simulations demonstrate that damage to a developmental model can produce very different effects depending on whether it occurs prior to or following the training process. Because developmental disorders typically involve damage prior to learning, we conclude that the developmental process is a key component of the explanation of end-state impairments in such disorders. Further simulations demonstrate that in simple connectionist learning systems, the assumption of Residual Normality is undermined by processes of compensation or alteration elsewhere in the system. We outline the precise computational conditions required for Residual Normality to hold in development, and suggest that in many cases it is an unlikely hypothesis. We conclude that in developmental disorders, inferences from behavioural deficits to underlying structure crucially depend on developmental conditions, and that the process of ontogenetic development cannot be ignored in constructing models of developmental disorders.
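The before-versus-after-training contrast can be illustrated with a toy linear network. This is only a sketch of the general point (when damage precedes learning, the intact connections compensate during training, so the end-state deficit differs), not a reimplementation of the reading or past-tense simulations; all sizes and the lesion rate are invented.

```python
import numpy as np

def train(W, mask, X, Y, lr=0.05, epochs=3000):
    """Gradient descent on a linear map Y ≈ X @ W, with lesioned
    connections (mask == 0) frozen at zero throughout training."""
    for _ in range(epochs):
        err = X @ W - Y
        W -= lr * (X.T @ err) / len(X)
        W *= mask  # keep lesioned connections at zero
    return W

rng = np.random.default_rng(0)
# Correlated (rank-3) inputs, so intact connections CAN compensate.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8)) * 0.5
Y = X @ rng.normal(size=(8, 4))           # target mapping to learn
mask = (rng.random((8, 4)) > 0.3) * 1.0   # lesion ~30% of connections

# Damage BEFORE learning: the network develops around the lesion.
mse_dev = np.mean((X @ train(np.zeros((8, 4)), mask, X, Y) - Y) ** 2)

# The SAME damage applied AFTER learning: the deficit persists.
W_adult = train(np.zeros((8, 4)), np.ones((8, 4)), X, Y) * mask
mse_adult = np.mean((X @ W_adult - Y) ** 2)

print(f"lesion before training, MSE: {mse_dev:.4f}")
print(f"lesion after training,  MSE: {mse_adult:.4f}")
```

The developmental lesion ends with much lower error than the acquired one precisely because the surviving weights were retrained around the damage, which is the compensation process the abstract argues undermines Residual Normality.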
NASA JSC neural network survey results
A survey of Artificial Neural Systems in support of NASA's (Johnson Space Center) Automatic Perception for Mission Planning and Flight Control Research Program was conducted. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were broken into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, software tools, etc.
Connectionist models of language learning: implications for writing pedagogy
Connectionism, an interdisciplinary approach that draws heavily from hard science, promises to be the new paradigm shift for linguistics and psychology, and has important implications for both composition studies and the teaching of writing. The models are innovative primarily because, in a manner extendable to neurobiological reality, they process in a parallel rather than a serial manner and address subsymbolic rather than symbolic representations. As neuroscientific knowledge expands, such models may be amended and developed to mirror learning of all types. Even at their current level of development, they provide several important insights into the nature of cognition. This investigation uses connectionist assumptions as analytical tools to explain much about past theoretical frameworks in written composition, and, more significantly, to suggest some important considerations for writing pedagogy.
The unexplained nature of reading.
The effects of properties of words on their reading aloud response times (RTs) are one major source of evidence about the reading process. The precision with which such RTs could potentially be predicted by word properties is critical to evaluate our understanding of reading but is often underestimated due to contamination from individual differences. We estimated this precision without such contamination individually for 4 people who each read 2,820 words 50 times each. These estimates were compared to the precision achieved by a 31-variable regression model that outperforms current cognitive models on variance-explained criteria. Most (around 2/3) of the meaningful (non-first-phoneme, non-noise) word-level variance remained unexplained by this model. Considerable empirical and theoretical-computational effort has been expended on this area of psychology, but the high level of systematic variance remaining unexplained suggests doubts regarding contemporary accounts of the details of the mechanisms of reading at the level of the word. Future assessment of models can take advantage of the availability of our precise participant-level database.
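The variance-explained criterion used to compare such regression models against this precision ceiling amounts to computing R² for a linear fit of RTs on word properties. A sketch on simulated data follows; the three predictors, their weights, and the noise level are all invented (the actual model uses 31 variables).

```python
import numpy as np

rng = np.random.default_rng(1)
n_words = 1000
# Three invented word properties (e.g. length, log frequency, ...).
props = rng.normal(size=(n_words, 3))
signal = 600 + props @ np.array([20.0, -15.0, 5.0])  # systematic part
rts = signal + rng.normal(0, 25, size=n_words)       # + unexplained part

X = np.column_stack([np.ones(n_words), props])       # add intercept
beta, *_ = np.linalg.lstsq(X, rts, rcond=None)       # least-squares fit
r2 = 1 - np.var(rts - X @ beta) / np.var(rts)        # variance explained
print(f"variance explained (R^2): {r2:.2f}")
```

The paper's point is that even when R² is computed against a reliability ceiling estimated from massive repeated testing, a large systematic residual remains, so the shortfall cannot be blamed on measurement noise alone.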
Biologically Plausible Connectionist Prediction of Natural Language Thematic Relations
In Natural Language Processing (NLP) symbolic systems, several linguistic phenomena, for instance the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by the employment of a rule-based grammar. Another approach to NLP concerns the use of the connectionist model, which has the benefits of learning, generalization and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, a symbolic-connectionist hybrid system called BIO theta PRED (BIOlogically plausible thematic (theta) symbolic-connectionist PREDictor) is proposed, designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture comprises, as input, a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation), and, as output, the thematic grid assigned to the sentence. BIO theta PRED is designed to "predict" thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture, and adopting a psycholinguistic view of thematic theory.
Fapesp - Fundacao de Amparo a Pesquisa do Estado de Sao Paulo, Brazil [2008/08245-4]
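The featural-input-to-thematic-grid mapping can be caricatured with a tiny softmax classifier over invented semantic microfeatures. This ignores sentence context and the biologically inspired training algorithm entirely; every feature, example, and label below is made up for illustration.

```python
import numpy as np

ROLES = ["AGENT", "PATIENT", "LOCATION"]
# Invented microfeatures per word: [animate, concrete-object, is-place]
X = np.array([[1., 0., 0.], [1., 1., 0.],   # AGENT examples
              [0., 1., 0.], [0., 1., 1.],   # PATIENT examples
              [0., 0., 1.]])                # LOCATION example
y = np.array([0, 0, 1, 1, 2])

W = np.zeros((3, 3))
for _ in range(500):  # softmax regression by gradient descent
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * X.T @ (p - np.eye(3)[y]) / len(X)

probe = np.array([1., 1., 0.])  # an animate, concrete noun
print(ROLES[int(np.argmax(probe @ W))])
```

In the actual system the initial knowledge comes from a symbolic thematic theory rather than from zero-initialized weights, which is what makes it a hybrid rather than a purely connectionist predictor.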
Machine learning and its applications in reliability analysis systems
In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches have been discussed in creating the RAs. According to the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural network based computer system (as hardware). However, there are still some improvements that can be made through the application of Machine Learning. By implanting a 'learning element', MORA will become the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.