A Computational Model of Children's Semantic Memory
A computational model of children's semantic memory is built from the Latent Semantic Analysis (LSA) of a multisource child corpus. Three tests of the model are described, simulating a vocabulary test, an association test and a recall task. For each one, results from experiments with children are presented and compared with the model data. The fit between model and data is adequate, which means that this simulation of children's semantic memory can be used to simulate a variety of children's cognitive processes.
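As a concrete reminder of the machinery such a model rests on, here is a minimal LSA sketch in Python, assuming scikit-learn is available; the toy corpus, the 2-dimensional space, and the test words are illustrative placeholders, not the multisource child corpus or parameters used in the paper:

```python
# Minimal LSA sketch: term-document counts -> truncated SVD -> cosine similarity.
# Corpus and dimensionality are toy placeholders, not the paper's settings.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = ["the cat chased the mouse",
          "the dog chased the cat",
          "children read a story about a mouse"]

counts = CountVectorizer().fit(corpus)
X = counts.transform(corpus)            # documents x terms count matrix
svd = TruncatedSVD(n_components=2)      # the truncated singular space
svd.fit(X)
word_vecs = svd.components_.T           # one low-dimensional vector per term

def similarity(w1, w2):
    """Cosine similarity between two words in the LSA space."""
    a = word_vecs[counts.vocabulary_[w1]]
    b = word_vecs[counts.vocabulary_[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity("cat", "mouse"))       # a crude stand-in for an association test
```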
From Frequency to Meaning: Vector Space Models of Semantics
Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
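To make the first two matrix classes concrete, here is a hedged sketch built from one toy corpus; the window size, corpus, and weighting (raw counts) are arbitrary choices for illustration, not the survey's recommendations:

```python
# Two of the survey's three matrix classes, from one toy corpus:
# a term-document matrix (document similarity) and a word-context
# matrix (word similarity from co-occurrence in a small window).
from collections import Counter

docs = [["apples", "are", "sweet", "fruit"],
        ["oranges", "are", "sour", "fruit"],
        ["cars", "need", "fuel"]]

vocab = sorted({w for d in docs for w in d})

# Term-document matrix: rows = terms, columns = documents.
td = {w: [d.count(w) for d in docs] for w in vocab}

# Word-context matrix: co-occurrence counts within a +/-1 word window.
wc = {w: Counter() for w in vocab}
for d in docs:
    for i, w in enumerate(d):
        for j in (i - 1, i + 1):
            if 0 <= j < len(d):
                wc[w][d[j]] += 1

print(td["fruit"])   # [1, 1, 0]: "fruit" links the first two documents
print(wc["sweet"])   # shares its contexts with "sour" -> similar words
```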
Modelling word meaning using efficient tensor representations
Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often, word-order information is not captured by these models, and this lack of structural information has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor-based approach to modelling word meaning that builds on recent attempts to encode word-order information, while providing flexible methods for extracting task-specific semantic information.
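To illustrate the general idea of order-sensitive tensor encodings (this is one common binding scheme, not necessarily the paper's own construction), ordered word pairs can be stored as outer products of random word vectors, so that reversing the order changes the representation:

```python
# Illustrative tensor (outer-product) binding of word order; the vocabulary,
# dimensionality, and bigram scheme are assumptions for this sketch, not
# the paper's method. Pair (a, b) is stored as outer(a, b), so the memory
# for "dog bites man" differs from the one for "man bites dog".
import numpy as np

rng = np.random.default_rng(0)
dim = 256
vec = {w: rng.standard_normal(dim) / np.sqrt(dim)
       for w in ["dog", "man", "bites"]}

def encode(sentence):
    """Sum of order-sensitive bigram bindings: sum_i outer(w_i, w_{i+1})."""
    words = sentence.split()
    return sum(np.outer(vec[a], vec[b]) for a, b in zip(words, words[1:]))

m1 = encode("dog bites man")
m2 = encode("man bites dog")

def cos(x, y):
    return float((x * y).sum() / (np.linalg.norm(x) * np.linalg.norm(y)))

print(cos(m1, m1))   # 1.0: identical word order
print(cos(m1, m2))   # near 0: same words, different order
```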
Semantic spaces revisited: investigating the performance of auto-annotation and semantic retrieval using semantic spaces
Semantic spaces encode similarity relationships between objects as a function of position in a mathematical space. This paper discusses three different formulations for building semantic spaces that allow the auto-annotation and semantic retrieval of images. The models discussed in this paper require that the image content be described as a series of visual terms, rather than as a continuous feature vector. The paper also discusses how these term-based models compare to the latest state-of-the-art continuous feature models for auto-annotation and retrieval.
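A toy sketch of the visual-term idea follows; the data, the `annotate` helper, and the LSA-style space are hypothetical illustrations, not any of the paper's three formulations. Training images are treated as documents whose "words" are quantised visual terms plus keyword annotations, and a new image is annotated with the keywords nearest to it in the shared space:

```python
# Hypothetical visual-term sketch: images as documents of visual terms (v*)
# mixed with keyword annotations, embedded together by truncated SVD.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

train = ["v12 v40 v40 v7 sky beach",    # visual terms + keywords per image
         "v12 v7 v9 sky sunset",
         "v3 v55 v55 grass cow"]
vect = CountVectorizer().fit(train)
svd = TruncatedSVD(n_components=2).fit(vect.transform(train))
term_vecs = svd.components_.T            # one vector per visual term or keyword

def annotate(visual_terms, keywords, k=2):
    """Rank candidate keywords by cosine similarity to the image's terms."""
    img = sum(term_vecs[vect.vocabulary_[t]] for t in visual_terms.split())
    def score(w):
        kv = term_vecs[vect.vocabulary_[w]]
        return float(img @ kv / (np.linalg.norm(img) * np.linalg.norm(kv) + 1e-9))
    return sorted(keywords, key=score, reverse=True)[:k]

print(annotate("v12 v7", ["sky", "beach", "cow"]))  # expect 'sky' ahead of 'cow'
```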
Effect of Tuned Parameters on an LSA MCQ Answering Model
This paper presents the current state of a work in progress whose objective is to better understand the effects of factors that significantly influence the performance of Latent Semantic Analysis (LSA). A difficult task, answering (French) biology Multiple Choice Questions, is used to test the semantic properties of the truncated singular space and to study the relative influence of the main parameters. Dedicated software has been designed to fine-tune the LSA semantic space for the Multiple Choice Questions task. With optimal parameters, the performance of our simple model is, quite surprisingly, equal or superior to that of 7th and 8th grade students. This indicates that the semantic spaces were quite good despite their low dimensions and the small sizes of the training data sets. In addition, we present an original entropy-based global weighting of the answer terms of each Multiple Choice Question, which was necessary to achieve the model's success.
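For context, the entropy weighting mentioned here is presumably in the family of the standard log-entropy scheme widely used with LSA; the sketch below shows that standard scheme, not necessarily the authors' original variant:

```python
# Standard log-entropy weighting often paired with LSA (shown as background
# for the "entropy global weighting" above; the paper's variant may differ).
# Cell (i, j) becomes local(i, j) * global(i), with
#   local(i, j) = log(1 + tf_ij)
#   global(i)   = 1 + sum_j p_ij * log(p_ij) / log(n),  p_ij = tf_ij / gf_i
import numpy as np

def log_entropy(tf):
    """tf: terms x documents count matrix -> log-entropy weighted matrix."""
    tf = np.asarray(tf, dtype=float)
    n = tf.shape[1]                       # number of documents
    gf = tf.sum(axis=1, keepdims=True)    # global frequency of each term
    p = np.where(gf > 0, tf / gf, 0.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    g = 1.0 + plogp.sum(axis=1) / np.log(n)   # 0 if spread evenly, 1 if rare
    return np.log1p(tf) * g[:, np.newaxis]

counts = [[2, 2, 2],    # evenly spread term -> weighted to zero
          [6, 0, 0]]    # concentrated term -> keeps full local weight
print(log_entropy(counts).round(2))
```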
Semantic spaces
Any natural language can be considered as a tool for producing large databases (consisting of texts, written or discursive). This tool in turn requires other large databases for its description (dictionaries, grammars, etc.). Nowadays, the notion of database is associated with computer processing and computer memory. However, a natural language also resides in human brains and functions in human communication, from interpersonal to intergenerational. In this survey/research paper we discuss mathematical, in particular geometric, constructions which help to bridge these two worlds. In particular, we consider the Vector Space Model of semantics based on frequency matrices, as used in Natural Language Processing. We investigate the underlying geometries, formulated in terms of Grassmannians, projective spaces, and flag varieties. We formulate the relation between vector space models and semantic spaces based on semic axes in terms of projectability of subvarieties in Grassmannians and projective spaces. We interpret Latent Semantics as a geometric flow on Grassmannians. We also discuss how to formulate Gärdenfors' notion of "meeting of minds" in our geometric setting.