Reading as Active Sensing: A Computational Model of Gaze Planning in Word Recognition
We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, which acquires lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, which recognizes written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of increasing lexical competence and the correspondingly increasing ability to predict lexical items. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting
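For illustration, a minimal sketch of the active-sensing idea described in this abstract follows, with a tiny hard-coded lexicon standing in for the trained lexical representation network and an entropy-based scoring of character positions standing in for the network's dynamic predictions; the lexicon, the scoring rule and all function names are assumptions made for the sketch, not the authors' implementation.

```python
from collections import Counter
from math import log2

# Toy stand-in for the trained lexical representation network: a tiny,
# hard-coded lexicon (illustrative assumption only).
LEXICON = ["cane", "casa", "cosa", "gatto", "gatta", "pane"]

def position_entropy(candidates, pos):
    """Uncertainty (Shannon entropy) of the character distribution at a position."""
    counts = Counter(w[pos] for w in candidates)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def recognize(word):
    """Fixate the most informative unvisited position until one candidate remains."""
    candidates = [w for w in LEXICON if len(w) == len(word)]
    unvisited = set(range(len(word)))
    fixations = []
    while len(candidates) > 1 and unvisited:
        # Active sensing step: fixate the position whose character distribution
        # over the remaining candidates is most uncertain (highest entropy).
        pos = max(unvisited, key=lambda p: position_entropy(candidates, p))
        unvisited.remove(pos)
        fixations.append(pos)
        observed = word[pos]                       # "read" the fixated character
        candidates = [w for w in candidates if w[pos] == observed]
    return candidates, fixations

print(recognize("gatto"))   # -> (['gatto'], [4]): one fixation suffices here
```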
Perception of typicality in the lexicon: wordlikeness, lexical density and morphonotactic constraints
The extent to which a symbolic time-series (a sequence of sounds or letters) is a typical word of a language, referred to as WORDLIKENESS, has been shown to have effects in speech perception and production, reading proficiency, lexical development and lexical access, and short-term and long-term verbal memory. Two quantitative models have been suggested to account for these effects: serial phonotactic probabilities (the likelihood for a given symbolic sequence to appear in the lexicon) and lexical density (the extent to which other words can be obtained from a target word by changing, deleting or inserting one or more symbols in the target). The two measures are highly correlated and thus easily confounded when measuring their effects in lexical tasks. In this paper, we propose a computational model of lexical organisation, based on Self-Organising Maps with Hebbian connections defined over a temporal layer (TSOMs), which provides a principled algorithmic account of effects in lexical acquisition, processing and access, to further investigate these issues. In particular, we show that (morpho-)phonotactic probabilities and lexical density, though correlated in lexical organisation, can be taken to focus on different aspects of speakers' word processing behaviour and thus provide independent cognitive contributions to our understanding of the principles of perception of typicality that govern lexical organisation
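For concreteness, the two measures contrasted in this abstract can be operationalised in a few lines. The bigram-based phonotactic probability and the edit-distance-1 neighbourhood count below are common simplifications chosen for illustration (the abstract also allows multi-symbol edits), not the TSOM-based measures of the paper, and the toy lexicon is invented.

```python
from collections import Counter
import string

# Invented toy lexicon, for illustration only.
LEXICON = {"cat", "cap", "can", "cot", "coat", "at", "rat", "scat"}

def bigram_probability(word, lexicon):
    """Serial phonotactic probability: product of symbol-to-symbol transition
    probabilities estimated from the lexicon ('#' marks word edges)."""
    bigrams, unigrams = Counter(), Counter()
    for w in lexicon:
        padded = "#" + w + "#"
        for a, b in zip(padded, padded[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    prob = 1.0
    padded = "#" + word + "#"
    for a, b in zip(padded, padded[1:]):
        prob *= bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0
    return prob

def lexical_density(word, lexicon, alphabet=string.ascii_lowercase):
    """Lexical (neighbourhood) density: lexicon words reachable from the target
    by a single substitution, deletion or insertion."""
    neighbours = set()
    for i in range(len(word)):
        neighbours.add(word[:i] + word[i + 1:])              # deletions
        for c in alphabet:
            neighbours.add(word[:i] + c + word[i + 1:])      # substitutions
    for i in range(len(word) + 1):
        for c in alphabet:
            neighbours.add(word[:i] + c + word[i:])          # insertions
    neighbours.discard(word)
    return len(neighbours & lexicon)

print(bigram_probability("cat", LEXICON))   # typicality of the string in this lexicon
print(lexical_density("cat", LEXICON))      # -> 7 (cap, can, cot, rat, at, scat, coat)
```

Even on this toy lexicon the two scores can diverge: a string can be built from frequent transitions yet have few neighbours, and vice versa, which is why the two measures need to be teased apart experimentally.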
High Efficiency Real-Time Sensor and Actuator Control and Data Processing
Advances in sensor and actuator technology foster the use of large multi-transducer networks in many different fields. The increasing complexity of such networks poses problems in data processing, especially when high efficiency is required for real-time applications. In fact, multi-transducer data processing usually consists of the interconnection and co-operation of several modules devoted to different tasks. Multi-transducer network modules often include tasks such as control, data acquisition, data filtering interfaces, feature selection and pattern analysis. The heterogeneous techniques used to implement such tasks, derived from chemometrics, neural networks and fuzzy rules, may introduce module interconnection and co-operation issues. To help deal with these problems, the author presents a software library architecture for the dynamic and efficient management of multi-transducer data processing and control techniques. The framework's base architecture and the implementation details of several extensions are described. Starting from the base models available in the framework core, dedicated models for control processes and neural network tools have been derived. The Facial Automaton for Conveying Emotion (FACE) has been used as a test field for the control architecture
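A compressed sketch of the kind of module interconnection such a library has to manage follows, assuming a minimal common processing interface so that heterogeneous blocks (filtering, control, and so on) can be chained into a real-time pipeline; the class and method names are placeholders, not the framework's actual API.

```python
from abc import ABC, abstractmethod
from typing import Any, List

class Module(ABC):
    """Minimal common interface for heterogeneous multi-transducer modules
    (placeholder for the framework's base model; names are illustrative)."""
    @abstractmethod
    def process(self, data: Any) -> Any: ...

class MovingAverageFilter(Module):
    """Toy data-filtering stage: sliding-window average of sensor samples."""
    def __init__(self, window: int = 3):
        self.window = window
        self.buffer: List[float] = []

    def process(self, sample: float) -> float:
        self.buffer.append(sample)
        self.buffer = self.buffer[-self.window:]
        return sum(self.buffer) / len(self.buffer)

class ThresholdController(Module):
    """Toy control stage: emits an actuator command from a filtered reading."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def process(self, reading: float) -> str:
        return "actuate" if reading > self.setpoint else "idle"

class Pipeline(Module):
    """Interconnects modules so they co-operate on the same data stream."""
    def __init__(self, stages: List[Module]):
        self.stages = stages

    def process(self, data: Any) -> Any:
        for stage in self.stages:
            data = stage.process(data)
        return data

pipeline = Pipeline([MovingAverageFilter(window=3), ThresholdController(setpoint=0.5)])
for sample in [0.2, 0.4, 0.9, 1.1, 0.3]:
    print(pipeline.process(sample))
```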
Lexical emergentism and the "frequency-by-regularity" interaction
In spite of considerable converging evidence of the role of inflectional paradigms in word acquisition and processing, little effort has so far been put into providing detailed, algorithmic models of the interaction between lexical token frequency, paradigm frequency and paradigm regularity. We propose a neurocomputational account of this interaction, and discuss some theoretical implications of preliminary experimental results
T2HSOM: Understanding the Lexicon by Simulating Memory Processes for Serial Order
Over the last several years, both theoretical and empirical approaches to lexical knowledge and encoding have prompted a radical reappraisal of the traditional dichotomy between lexicon and grammar. The lexicon is not simply a large wastebasket of exceptions and sub-regularities, but a dynamic, possibly redundant repository of linguistic knowledge whose principles of relational organization are the driving force behind productive generalizations. In this paper, we overview a few models of dynamic lexical organization based on neural network architectures that are designed to meet this challenging view. In particular, we illustrate a novel family of Kohonen self-organizing maps (T2HSOMs) that have the potential to simulate competitive storage of symbolic time series while exhibiting interesting properties of morphological organization and generalization. The model, tested on training samples of languages as morphologically diverse as Italian, German and Arabic, shows sensitivity to manifold types of morphological structure and can be used to bootstrap morphological knowledge in an unsupervised way
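A minimal sketch of a temporal self-organizing map with Hebbian connections, in the spirit of the T2HSOMs described above, follows; the map size, learning rates, the Gaussian neighbourhood and the way temporal expectations bias best-matching-unit selection are simplifying assumptions, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
WORDS = ["cantare", "cantavo", "dormire", "dormivo"]   # invented toy sample
ALPHABET = sorted(set("".join(WORDS)))
SIZE = 8                                    # 8x8 map, illustrative size
N, DIM = SIZE * SIZE, len(ALPHABET)

weights = rng.random((N, DIM))              # symbol (spatial) weights per node
temporal = np.zeros((N, N))                 # Hebbian connections from previous BMU
coords = np.array([(i // SIZE, i % SIZE) for i in range(N)])

def one_hot(ch):
    v = np.zeros(DIM)
    v[ALPHABET.index(ch)] = 1.0
    return v

def train(words, epochs=20, lr=0.2, sigma=1.5, alpha=0.5):
    for _ in range(epochs):
        for word in words:
            prev_bmu = None
            for ch in word:
                x = one_hot(ch)
                # Best-matching unit: closest symbol weights, biased towards
                # nodes expected by the temporal connections of the previous BMU.
                dist = np.linalg.norm(weights - x, axis=1)
                if prev_bmu is not None:
                    dist -= alpha * temporal[prev_bmu]
                bmu = int(np.argmin(dist))
                # Kohonen update within a Gaussian neighbourhood of the BMU.
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))
                weights += lr * h[:, None] * (x - weights)
                # Hebbian update of the temporal connection prev_bmu -> bmu.
                if prev_bmu is not None:
                    temporal[prev_bmu, bmu] += lr
                    temporal[prev_bmu] /= temporal[prev_bmu].sum()
                prev_bmu = bmu

train(WORDS)
# Each node's strongest outgoing connection now encodes its most expected
# successor, i.e. a learned chain over letter sequences.
print(np.count_nonzero(temporal), "temporal connections learned")
```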
Evaluating Hebbian Self-Organizing Memories for Lexical Representation and Access
The lexicon is the store of words in long-term memory. Any attempt at modelling lexical competence must take issues of string storage seriously. In the present contribution, we discuss a few desiderata that any biologically-inspired computational model of the mental lexicon has to meet, and detail a multi-task evaluation protocol for their assessment. The proposed protocol is applied to a novel computational architecture for lexical storage and acquisition, the "Topological Temporal Hebbian SOMs" (T2HSOMs), which are grids of topologically organised memory nodes with dedicated sensitivity to time-bound sequences of letters. These maps can provide a rigorous and testable conceptual framework within which to define a comprehensive, multi-task protocol for testing the performance of Hebbian self-organising memories, and to gain a comprehensive picture of the complex dynamics between lexical processing and the acquisition of morphological structure
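The abstract does not spell out the individual tasks, so the harness below only illustrates the general shape of a multi-task evaluation protocol: several lexical tasks scored against a model exposing a common interface. The toy memory, the two tasks and all names are hypothetical, not the protocol of the paper.

```python
from typing import Callable, Dict, List, Tuple

class ToyLexicalMemory:
    """Hypothetical stand-in for a lexical memory under evaluation: it can
    recognise a stored word and recall a word from its initial letters."""
    def __init__(self, words: List[str]):
        self.words = set(words)

    def recognise(self, word: str) -> bool:
        return word in self.words

    def recall(self, prefix: str) -> str:
        matches = sorted(w for w in self.words if w.startswith(prefix))
        return matches[0] if matches else ""

def recognition_task(model, items: List[Tuple[str, bool]]) -> float:
    hits = sum(model.recognise(w) == expected for w, expected in items)
    return hits / len(items)

def recall_task(model, items: List[Tuple[str, str]]) -> float:
    hits = sum(model.recall(prefix) == target for prefix, target in items)
    return hits / len(items)

def evaluate(tasks: Dict[str, Callable[[], float]]) -> Dict[str, float]:
    """Run every task in the protocol and report per-task accuracy."""
    return {name: task() for name, task in tasks.items()}

model = ToyLexicalMemory(["walk", "walked", "walking", "talk", "talked"])
report = evaluate({
    "recognition": lambda: recognition_task(model, [("walk", True), ("walks", False)]),
    "recall": lambda: recall_task(model, [("walke", "walked"), ("ta", "talk")]),
})
print(report)   # -> {'recognition': 1.0, 'recall': 1.0}
```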
Deep Learning of Inflection and the Cell-Filling Problem
Machine learning offers two basic strategies for morphology induction: lexical segmentation and surface word relation. The first approach assumes that words can be segmented into morphemes. Inferring a novel inflected form requires identification of morphemic constituents and a strategy for their recombination. The second approach dispenses with segmentation: lexical representations form part of a network of associatively related inflected forms. Production of a novel form consists in filling in one empty node in the network. Here, we present the results of a word inflection task in which a recurrent LSTM network learns to fill in the cells of incomplete verb paradigms. Although the task does not require morpheme segmentation, we show that accuracy in carrying out the inflection task is a function of the model's sensitivity to paradigm distribution and morphological structure
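A minimal character-level encoder-decoder sketch of the cell-filling task follows: the network receives a lemma plus a target paradigm-cell tag and generates the inflected form. The toy data, hyperparameters and architectural details are assumptions made for illustration and do not reproduce the paper's model.

```python
import torch
import torch.nn as nn

# Invented toy data: (lemma, paradigm-cell tag, inflected form).  The paper's
# training data are incomplete verb paradigms, not this sample.
DATA = [("walk", "PST", "walked"), ("talk", "PST", "talked"),
        ("walk", "GER", "walking"), ("talk", "GER", "talking")]

chars = sorted({c for lemma, _, form in DATA for c in lemma + form})
tags = sorted({t for _, t, _ in DATA})
PAD, SOS, EOS = "<p>", "<s>", "</s>"
itos = [PAD, SOS, EOS] + chars + tags
stoi = {s: i for i, s in enumerate(itos)}

def encode(lemma, tag):
    """Input sequence: lemma characters followed by the target cell tag."""
    return torch.tensor([[stoi[c] for c in lemma] + [stoi[tag]]])

def target(form):
    return torch.tensor([[stoi[SOS]] + [stoi[c] for c in form] + [stoi[EOS]]])

class CellFiller(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.emb(src))          # encode lemma + cell tag
        dec, _ = self.decoder(self.emb(tgt_in), state)  # teacher-forced decoding
        return self.out(dec)

model = CellFiller(len(itos))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(300):
    for lemma, tag, form in DATA:
        src, tgt = encode(lemma, tag), target(form)
        logits = model(src, tgt[:, :-1])
        loss = loss_fn(logits.squeeze(0), tgt[:, 1:].squeeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()

def fill(lemma, tag, max_len=12):
    """Greedy generation of the form filling the requested paradigm cell."""
    with torch.no_grad():
        _, state = model.encoder(model.emb(encode(lemma, tag)))
        tok, out = torch.tensor([[stoi[SOS]]]), []
        for _ in range(max_len):
            dec, state = model.decoder(model.emb(tok), state)
            tok = model.out(dec)[:, -1].argmax(-1, keepdim=True)
            if itos[tok.item()] == EOS:
                break
            out.append(itos[tok.item()])
    return "".join(out)

print(fill("walk", "PST"))   # should typically converge to "walked" on this toy data
```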
- …