
    Self-Organizing Grammar Induction Using a Neural Network Model

    Full text link
    This paper presents a self-organizing, real-time, hierarchical neural network model of sequential processing, and shows how it can be used to induce recognition codes corresponding to word categories and elementary grammatical structures. The model, first introduced in Mannes (1992), learns to recognize, store, and recall sequences of unitized patterns in a stable manner, either using short-term memory alone or using long-term memory weights. Memory capacity is limited only by the number of nodes provided. Sequences are mapped to unitized patterns, making the model suitable for hierarchical operation. By using multiple modules arranged in a hierarchy and a simple mapping between the output of lower levels and the input of higher levels, the induction of codes representing word categories and simple phrase structures is an emergent property of the model. Simulation results are reported to illustrate this behavior. National Science Foundation (IRI-9024877)
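The hierarchical scheme the abstract describes, in which a recognized sequence at one level becomes a single unitized code consumed as input by the next level, can be sketched in toy form. The dictionary-based coding below is our own illustration of the idea, not the Mannes model:

```python
# Toy sketch of hierarchical unitization (our illustration, not the paper's
# model): each level maps a recognized sequence of lower-level codes to a
# single "unitized" code, so higher levels operate on chunks, not raw items.

def make_level():
    codes = {}
    def unitize(seq):
        key = tuple(seq)
        if key not in codes:        # assign a new recognition code on first sight
            codes[key] = len(codes)
        return codes[key]
    return unitize

word_level = make_level()       # letter sequences -> word codes
phrase_level = make_level()     # word-code sequences -> phrase codes

sentence = [["t", "h", "e"], ["c", "a", "t"]]
word_codes = [word_level(w) for w in sentence]   # [0, 1]
phrase_code = phrase_level(word_codes)           # 0
```

Because a repeated sequence maps back to the same code, the same mechanism stacks to arbitrary depth.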

    Person memory: associative networks, categories, and schemas

    No full text
    A model of impression memory was utilized which partitioned impression information into category-consistent (schema) and category-inconsistent (peripheral) information. The memory for consistent and inconsistent information was investigated in a series of experiments. Two information-processing models, the Schema-Pointer-Plus-Tag (SP+T) and the Associative-Network-Plus-Elaborative-Processing (AN+EP) models, were tested for their ability to account for impression memory data. Experiment 1 tested subjects' recognition memory for an impression. Subjects' hit rates for schema and peripheral impression traits were equal both at immediate and delayed test intervals. False alarm rates were greater for schema nonimpression distractors than for peripheral distractors at each retention interval. Reaction times paralleled recognition performance. Times to make schema and peripheral hits were equal, while correct rejections of schema nonimpression distractors took longer than peripheral distractors. Experiment 2 tested subjects' recall memory for an impression. Subjects recalled equal proportions of schema and peripheral impression traits at immediate test, but recalled a greater proportion of schema traits after a delay. Recall intrusions at immediate test were equal for schema and peripheral nonimpression traits, but there were more schema than peripheral intrusions at the delayed test. Experiment 3 also tested subjects' recall memory for an impression, though in this experiment subjects were required to furnish impression traits themselves, rather than selecting them from a checklist as in Experiments 1 and 2. The pattern of results was similar to that of Experiment 2, except that a greater proportion of schema impression traits was recalled at both retention intervals, rather than only at the delayed test. Experiment 4 found that recalled impression traits were no more or less associated or linked with other impression traits than nonrecalled impression traits. Additionally, recalled traits were no more likely to be linked with other recalled traits than nonrecalled traits, and no more likely to be the source of links to other impression traits. Experiment 5 found a pattern of results similar to Experiment 4 with respect not only to links within the impression, but also to links between impression traits and other traits in the subjects' vocabularies. Recalled impression traits did not differ from nonrecalled traits on any of the measures of association or interlinking. Experiment 6 found a significant relationship between traits categorized together, as measured in Experiments 1-3, and traits judged as associated on a pairwise basis, as measured in Experiments 4-5. The better recall of schema traits found in Experiments 2 and 3, which contrasts with the failure of Experiments 4 and 5 to show that recalled traits were simply more interassociated than nonrecalled traits, therefore cannot be attributed to measurement factors. Both the SP+T and the AN+EP models were found to be inappropriate for modelling impression memory data, as was a simple associative model. Memory for impression information was shown to be schematically driven, though not to the extent suggested by the SP+T model.

    Analogical Retrieval via Intermediate Features: The Goldilocks Hypothesis

    Get PDF
    Analogical reasoning has been implicated in many important cognitive processes, such as learning, categorization, planning, and understanding natural language. Therefore, to obtain a full understanding of these processes, we must come to a better understanding of how people reason by analogy. Analogical reasoning is thought to occur in at least three stages: retrieval of a source description from memory upon presentation of a target description, mapping of the source description to the target description, and transfer of relationships from source description to target description. Here we examine the first stage, the retrieval of relevant sources from long-term memory for their use in analogical reasoning. Specifically we ask: what can people retrieve from long-term memory, and how do they do it? Psychological experiments show that subjects display two sorts of retrieval patterns when reasoning by analogy: a novice pattern and an expert pattern. Novice-like subjects are more likely to recall superficially similar descriptions that are not helpful for reasoning by analogy. Conversely, expert-like subjects are more likely to recall structurally related descriptions that are useful for further analogical reasoning. Previous computational models of the retrieval stage have only attempted to model novice-like retrieval. We introduce a computational model that can demonstrate both novice-like and expert-like retrieval with the same mechanism. The parameter of the model that is varied to produce these two types of retrieval is the average size of the features used to identify matches in memory. We find, in agreement with an intuition from the work of Ullman and co-workers regarding the use of features in visual classification (Ullman, Vidal-Naquet, & Sali, 2002), that features of an intermediate size are most useful for analogical retrieval. We conducted two computational experiments on our own dataset of fourteen formally described stories, which showed that our model gives the strongest analogical retrieval, and is most expert-like, when it uses features that are on average of intermediate size. We conducted a third computational experiment on the Karla the Hawk dataset, which showed a modest effect consistent with our predictions. Because our model and Ullman's work both rely on intermediate-sized features to perform recognition-like tasks, we take both as supporting what we call the Goldilocks hypothesis: that on average those features that are maximally useful for recognition are neither too small nor too large, neither too simple nor too complex, but rather are in the middle, of intermediate size and complexity.
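The single-parameter mechanism the abstract describes can be sketched as retrieval scored by feature overlap restricted to a size band; the data layout and scoring rule below are our own illustration (treating a feature as a tuple of symbols whose length is its "size"), not the thesis's model:

```python
# Hedged sketch of size-banded analogical retrieval (our illustration):
# each memory item is a set of features; retrieval scores candidates by
# how many of the target's features in the chosen size band they share.
def retrieve(target_feats, memory, min_size, max_size):
    band = {f for f in target_feats if min_size <= len(f) <= max_size}
    scores = {name: len(band & feats) for name, feats in memory.items()}
    return max(scores, key=scores.get)

target = {("bird",), ("chase", "bird", "hunter"), ("give", "feather")}
memory = {
    "surface-match":   {("bird",), ("tree",)},            # shares only small features
    "structure-match": {("chase", "bird", "hunter"),      # shares larger, relational features
                        ("give", "feather")},
}

retrieve(target, memory, 2, 3)   # intermediate band -> expert-like: "structure-match"
retrieve(target, memory, 1, 1)   # small band -> novice-like: "surface-match"
```

Sliding the band from small to intermediate features flips retrieval from the superficially similar item to the structurally related one, mirroring the novice/expert contrast.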

    Enhancing Dynamic Hand Gesture Recognition using Feature Concatenation via Multi-Input Hybrid Model

    Get PDF
    Radar-based hand gesture recognition is an important research area that provides suitable support for various applications, such as human-computer interaction and healthcare monitoring. Several deep learning algorithms for gesture recognition using Impulse Radio Ultra-Wide Band (IR-UWB) have been proposed. Most of them focus on achieving high performance, which requires a huge amount of data. The procedure of acquiring and annotating data remains a complex, costly, and time-consuming task. Moreover, processing a large volume of data usually requires a complex model with very large training parameters, high computation, and memory consumption. To overcome these shortcomings, we propose a simple data processing approach along with a lightweight multi-input hybrid model structure to enhance performance. We aim to improve the existing state-of-the-art results obtained using an available IR-UWB gesture dataset consisting of range-time images of dynamic hand gestures. First, these images are extended using the Sobel filter, which generates low-level feature representations for each sample. These represent the gradient images in the x-direction, the y-direction, and both the x- and y-directions. Next, we apply these representations as inputs to a three-input Convolutional Neural Network-Long Short-Term Memory-Support Vector Machine (CNN-LSTM-SVM) model. Each one is provided to a separate CNN branch and then concatenated for further processing by the LSTM. This combination allows for the automatic extraction of richer spatiotemporal features of the target with no manual engineering approach or prior domain knowledge. To select the optimal classifier for our model and achieve a high recognition rate, the SVM hyperparameters are tuned using the Optuna framework. Our proposed multi-input hybrid model achieved high performance on several parameters, including 98.27% accuracy, 98.30% precision, 98.29% recall, and 98.27% F1-score, while ensuring low complexity. Experimental results indicate that the proposed approach improves accuracy and prevents the model from overfitting.
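The preprocessing step described above (Sobel-filtered x-, y-, and combined-gradient images feeding three CNN branches) can be sketched as follows; this is a minimal dependency-free illustration of the input preparation only, not the authors' pipeline:

```python
# Hedged sketch (ours, not the paper's code): derive the three Sobel
# representations of a range-time image for the three CNN branch inputs.
import math

KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to horizontal gradients
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to vertical gradients

def conv3x3(img, k):
    # 'valid' 2-D correlation: output shrinks by 2 in each dimension
    h, w = len(img), len(img[0])
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(3) for b in range(3))
             for j in range(w - 2)] for i in range(h - 2)]

def sobel_inputs(img):
    gx = conv3x3(img, KX)                       # x-direction gradient image
    gy = conv3x3(img, KY)                       # y-direction gradient image
    gxy = [[math.hypot(x, y) for x, y in zip(rx, ry)]
           for rx, ry in zip(gx, gy)]           # combined gradient magnitude
    return gx, gy, gxy

img = [[0, 0, 1, 1]] * 4          # toy image with a vertical edge
gx, gy, gxy = sobel_inputs(img)   # gx marks the edge; gy is all zeros
```

Each of the three returned images would then be fed to its own CNN branch before concatenation.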

    Memory function in multiple sclerosis

    Get PDF
    Multiple Sclerosis (MS) is a disease of the central nervous system. Its diffuse pathology results in a variety of physical and psychological symptoms. Memory dysfunction is one of the most prevalent cognitive deficits associated with MS. However, the accurate assessment of memory in MS is often compromised by the coincident physical and/or cognitive difficulties of the patients. Also, there are no conventional memory tests suitable for MS patients which grade varying types of verbal and spatial memory ability. The aim of this thesis was to develop a new test of memory which reduced the handicap imposed by sensori-motor dysfunction on cognitive test performance, and assessed recall memory, paired association, and recognition memory using matched verbal and spatial tasks. The New Test Of Memory was standardised using a sample of 85 healthy controls, stratified for age, sex, and IQ. The measure demonstrated the effects of ageing on normal memory performance, and showed good internal reliability (Cronbach's alpha verbal sections: 0.76; spatial sections: 0.75), consistency, and construct and factorial validity. The validation sample comprised 100 MS patients. The applicability of the tasks for patients with MS was demonstrated by the absence of a relationship between memory performance and measures of visual integrity and manual dexterity. The patient assessments also showed good internal reliability (Cronbach's alpha verbal sections: 0.85; spatial sections: 0.74), consistency, and construct, factorial, convergent, and discriminant validity. Patient performance was significantly impaired relative to controls, with 23% of patients scoring more than 2 standard deviations below the age group control mean on the verbal sections, and 15% on the spatial sections. The patterns of impairment demonstrated by the patients did not provide support for either the acquisition or retrieval deficit hypotheses, suggesting that memory deficiencies in MS may not fit a simple, single-deficit model.

    Models of verbal working memory capacity: What does it take to make them work?

    Get PDF
    Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multiword units or chunks. Toward a more comprehensive theory of capacity limits, we examined models of forced-choice recognition of words within printed lists, using materials designed to produce multiword chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the interword associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The addition to the model of an activated long-term memory component unlimited in capacity was needed. A fixed-capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model.
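The core modelling idea, a fixed number of chunk slots plus guessing on forced-choice trials, can be sketched in a toy form; the function and parameter values below are our own illustration of that idea, not the fitted model from the paper:

```python
# Toy fixed-capacity sketch (our illustration, not the fitted model):
# n_chunks compete for k WM slots; a two-alternative forced-choice probe
# is answered from WM when the chunk is retained, and by guessing otherwise.
def p_correct(n_chunks, k=3, guess=0.5):
    p_in_wm = min(1.0, k / n_chunks)          # chance the probed chunk holds a slot
    return p_in_wm + (1 - p_in_wm) * guess    # WM hit, else coin-flip guess

p_correct(3)   # list fits in capacity -> 1.0
p_correct(6)   # half the chunks retained -> 0.75
```

Under this sketch, accuracy falls with list length in chunks rather than in raw words, which is why chunk-inducing materials like "leather brief case" matter for estimating capacity.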

    Notes About Spiking Neural P Systems

    Get PDF
    Spiking neural P systems (SN P systems, for short) have been much investigated in recent years in membrane computing, but many open problems and research topics remain in this area. Here, we first recall two such problems (both related to neural biology). One of them asks to build an SN P system able to store a number and to provide it to a reader without losing it, so that the number is available for a further reading. We build here such a memory module and discuss its extension to model/implement more general operations, specific to (simple) databases. Then, we formulate another research issue, concerning pattern recognition in terms of SN P systems. In this context, we recall a recent version of SN P systems, enlarged with rules able to request spikes from the environment; based on this version, so-called SN dP systems were recently introduced, extending to neural P systems the idea of a distributed dP automaton. Some details about such devices are also given, as a further invitation to the reader to this area of research. Junta de Andalucía P08-TIC 0420
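The non-destructive read requirement the abstract poses (emit a stored spike count to a reader while preserving it for a further reading) can be illustrated outside the SN P formalism with a simple mirror-and-restore loop; the class below is our own toy sketch, not an actual SN P system construction:

```python
# Toy sketch of a non-destructive numeric read (ours, not an SN P system):
# reading consumes spikes one per step but duplicates each into a mirror
# store, which then restores the original count so a second read succeeds.
class MemoryNeuron:
    def __init__(self, spikes=0):
        self.spikes = spikes

    def store(self, n):
        self.spikes = n

    def read(self):
        emitted = 0
        mirror = 0
        while self.spikes:        # consume one spike per step...
            self.spikes -= 1
            emitted += 1          # ...emit it to the reader...
            mirror += 1           # ...and copy it into the mirror store
        self.spikes = mirror      # restore, so the number survives the read
        return emitted

m = MemoryNeuron()
m.store(5)
m.read()   # 5, and the stored value is still 5 afterwards
```

In the membrane-computing setting the duplication would be done by spiking rules routing copies to both the reader and a companion neuron, but the register-with-restore behaviour is the same.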