Discovery of Linguistic Relations Using Lexical Attraction
This work has been motivated by two long-term goals: to understand how humans
learn language and to build programs that can understand language. Using a
representation that makes the relevant features explicit is a prerequisite for
successful learning and understanding. Therefore, I chose to represent
relations between individual words explicitly in my model. Lexical attraction
is defined as the likelihood of such relations. I introduce a new class of
probabilistic language models, named lexical attraction models, which can
represent long-distance relations between words, and I formalize this new class
of models using information theory.
Within the framework of lexical attraction, I developed an unsupervised
language acquisition program that learns to identify linguistic relations in a
given sentence. The only explicitly represented linguistic knowledge in the
program is lexical attraction. There is no initial grammar or lexicon built in
and the only input is raw text. Learning and processing are interdigitated. The
processor uses the regularities detected by the learner to impose structure on
the input. This structure enables the learner to detect higher level
regularities. Using this bootstrapping procedure, the program was trained on
100 million words of Associated Press material and was able to achieve 60%
precision and 50% recall in finding relations between content words. Using
knowledge of lexical attraction, the program can identify the correct relations
in syntactically ambiguous sentences such as ``I saw the Statue of Liberty
flying over New York.''
Comment: dissertation, 56 pages
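As a rough illustration of the idea, lexical attraction between two words can be approximated by their pointwise mutual information estimated from raw text. The sketch below makes that assumption explicit; the window-based counting and function names are illustrative stand-ins, not the thesis's actual information-theoretic formulation.

```python
import math
from collections import Counter

def lexical_attraction_scores(sentences, window=5):
    """Approximate lexical attraction as pointwise mutual information (PMI)
    between words co-occurring within a small window. Illustrative only:
    the thesis derives attraction from an information-theoretic model of
    syntactic relations rather than from fixed-window counts."""
    word_counts, pair_counts, total = Counter(), Counter(), 0
    for sentence in sentences:
        tokens = sentence.lower().split()
        word_counts.update(tokens)
        total += len(tokens)
        for i, w in enumerate(tokens):
            for v in tokens[i + 1:i + 1 + window]:
                pair_counts[(w, v)] += 1
    return {
        (w, v): math.log2((n / total) /
                          ((word_counts[w] / total) * (word_counts[v] / total)))
        for (w, v), n in pair_counts.items()
    }

scores = lexical_attraction_scores([
    "i saw the statue of liberty",
    "the statue of liberty stood over the harbor",
])
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])
```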
Towards Visual Syntactical Understanding
Syntax is usually studied in the realm of linguistics and refers to the
arrangement of words in a sentence. Similarly, an image can be considered a
visual 'sentence', with the semantic parts of the image acting as 'words'.
While visual syntactic understanding occurs naturally to humans, it is
interesting to explore whether deep neural networks (DNNs) are equipped with
such reasoning. To that end, we alter the syntax of natural images (e.g.,
swapping the eye and nose of a face), producing what we refer to as 'incorrect'
images, to investigate the sensitivity of DNNs to such syntactic anomalies.
Through our
experiments, we discover an intriguing property of DNNs where we observe that
state-of-the-art convolutional neural networks, as well as vision transformers,
fail to discriminate between syntactically correct and incorrect images when
trained on only correct ones. To counter this issue and enable visual syntactic
understanding with DNNs, we propose a three-stage framework: (i) the 'words'
(or the sub-features) in the image are detected, (ii) the detected words are
sequentially masked and reconstructed using an autoencoder, (iii) the original
and reconstructed parts are compared at each location to determine syntactic
correctness. The reconstruction module is trained with BERT-like masked
autoencoding for images, with the motivation to leverage language model
inspired training to better capture the syntax. Note that our proposed approach
is unsupervised in the sense that incorrect images are used only during testing
and correct-versus-incorrect labels are never used for training. We
perform experiments on the CelebA and AFHQ datasets and obtain classification
accuracies of 92.10% and 90.89%, respectively. Notably, the approach generalizes
well to ImageNet samples that share common classes with CelebA and AFHQ,
without explicit training on them.
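To make the three-stage pipeline concrete, here is a minimal sketch of stages (ii) and (iii): each detected part is masked, reconstructed from the surrounding context, and compared with the original. The `reconstruct` callable stands in for the paper's BERT-style masked autoencoder, and the cosine comparison and threshold are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def syntactic_correctness(parts, reconstruct, threshold=0.8):
    """Sketch of stages (ii)-(iii): mask each detected part ('word'),
    reconstruct it from the remaining context, and compare the original
    with the reconstruction at that location."""
    for i, original in enumerate(parts):
        # Stage (ii): mask out part i and predict it from the rest.
        masked = [p if j != i else np.zeros_like(p) for j, p in enumerate(parts)]
        predicted = reconstruct(masked, i)
        # Stage (iii): compare original and reconstruction (cosine similarity).
        sim = np.dot(original.ravel(), predicted.ravel()) / (
            np.linalg.norm(original) * np.linalg.norm(predicted) + 1e-8)
        if sim < threshold:   # reconstruction disagrees with the input:
            return False      # likely a syntactically 'incorrect' image
    return True

# Toy usage with a stand-in "autoencoder" that predicts the mean part.
parts = [np.random.rand(8, 8) for _ in range(4)]
mean_part = np.mean(parts, axis=0)
print(syntactic_correctness(parts, lambda masked, i: mean_part))
```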
Learning Language from a Large (Unannotated) Corpus
A novel approach to the fully automated, unsupervised extraction of
dependency grammars and associated syntax-to-semantic-relationship mappings
from large text corpora is described. The suggested approach builds on the
authors' prior work with the Link Grammar, RelEx and OpenCog systems, as well
as on a number of prior papers and approaches from the statistical language
learning literature. If successful, this approach would enable the mining of
all the information needed to power a natural language comprehension and
generation system directly from a large, unannotated corpus.
Comment: 29 pages, 5 figures, research proposal
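One ingredient such a pipeline typically needs early on is a distributional representation from which word categories and, later, dependency links can be induced. The sketch below builds simple context-count vectors from raw text; the names and counting scheme are assumptions for illustration, not the Link Grammar / RelEx / OpenCog machinery the proposal actually builds on.

```python
import numpy as np
from collections import defaultdict

def context_vectors(sentences, window=2):
    """Build distributional context vectors so that words with similar
    syntactic behavior can later be clustered into grammatical categories.
    Illustrative sketch; real pipelines use far larger corpora and windows."""
    vocab = sorted({w for s in sentences for w in s.split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = defaultdict(lambda: np.zeros(len(vocab)))
    for s in sentences:
        tokens = s.split()
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vecs[w][index[tokens[j]]] += 1
    return vocab, vecs

vocab, vecs = context_vectors(["the cat sat", "the dog sat", "a cat ran"])
# Words with similar contexts ('cat'/'dog') get similar vectors.
a, b = vecs["cat"], vecs["dog"]
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```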
A Defense of Pure Connectionism
Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the richest and most distinctly human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production.
Consonant with much previous philosophical work on connectionism, I argue that a core principle, namely that proximal representations in a vector space have similar semantic values, is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena. My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again mostly on the basis of recent developments, for a continuity in representation and processing across natural language, image processing, and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science that are similar in spirit, such as hierarchical Bayesian and free-energy-minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e., serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
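To make strand (a) concrete: in a vector symbolic architecture, role-filler structures are composed with a binding operator plus superposition, so that structured content lives in a single fixed-width vector. The sketch below uses Plate-style circular convolution as the binding operation; it illustrates the general technique, not the dissertation's own formalism.

```python
import numpy as np

def bind(a, b):
    # Circular convolution: holographic binding of two vectors (Plate-style).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Approximate inverse: convolve with the involution of the cue vector.
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

rng = np.random.default_rng(0)
d = 1024
role_agent = rng.normal(0, 1 / np.sqrt(d), d)
filler_mary = rng.normal(0, 1 / np.sqrt(d), d)
role_verb = rng.normal(0, 1 / np.sqrt(d), d)
filler_runs = rng.normal(0, 1 / np.sqrt(d), d)

# Superpose two role-filler bindings into one fixed-width vector.
sentence = bind(role_agent, filler_mary) + bind(role_verb, filler_runs)

# Unbinding with a role recovers a noisy version of its filler.
recovered = unbind(sentence, role_agent)
print(np.dot(recovered, filler_mary))  # high similarity
print(np.dot(recovered, filler_runs))  # near zero
```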
Learnability and falsifiability of Construction Grammars
The strength of Construction Grammar (CxG) is its descriptive power; its weakness is the learnability and falsifiability of its unconstrained representations. Learnability is the degree to which the optimum set of constructions can be consistently selected from the large set of potential constructions; falsifiability is the ability to make testable predictions about the constructions present in a dataset. This paper uses grammar induction to evaluate learnability and falsifiability: given a discovery-device CxG and a set of observed utterances, its learnability is its stability over subsets of the data and its falsifiability is its ability to predict a CxG.
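Stability over subsets can be operationalized directly: induce a grammar on repeated random subsets of the data and measure how consistently the same constructions are selected. The sketch below assumes a hypothetical `induce` function and uses Jaccard overlap as the stability metric; both are illustrative choices, not the paper's exact procedure.

```python
import random

def stability(corpus, induce, n_folds=10, frac=0.7, seed=0):
    """Learnability as stability: run a (hypothetical) grammar inducer on
    random subsets of the corpus and average the pairwise Jaccard overlap
    of the induced construction sets."""
    rng = random.Random(seed)
    grammars = [induce(rng.sample(corpus, int(frac * len(corpus))))
                for _ in range(n_folds)]
    overlaps = []
    for i in range(len(grammars)):
        for j in range(i + 1, len(grammars)):
            union = grammars[i] | grammars[j]
            if union:
                overlaps.append(len(grammars[i] & grammars[j]) / len(union))
    return sum(overlaps) / len(overlaps)

# Toy inducer: 'constructions' are just the bigrams seen in the subset.
corpus = ["the cat sat on the mat", "the dog sat on the rug"] * 5
induce = lambda utts: {bg for u in utts for bg in zip(u.split(), u.split()[1:])}
print(stability(corpus, induce))
```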
Designing Statistical Language Learners: Experiments on Noun Compounds
The goal of this thesis is to advance the exploration of the statistical
language learning design space. In pursuit of that goal, the thesis makes two
main theoretical contributions: (i) it identifies a new class of designs by
specifying an architecture for natural language analysis in which probabilities
are given to semantic forms rather than to more superficial linguistic
elements; and (ii) it explores the development of a mathematical theory to
predict the expected accuracy of statistical language learning systems in terms
of the volume of data used to train them.
The theoretical work is illustrated by applying statistical language learning
designs to the analysis of noun compounds. Both syntactic and semantic analysis
of noun compounds are attempted using the proposed architecture. Empirical
comparisons demonstrate that the proposed syntactic model is significantly
better than those previously suggested, approaching the performance of human
judges on the same task, and that the proposed semantic model, the first
statistical approach to this problem, exhibits significantly better accuracy
than the baseline strategy. These results suggest that the new class of designs
identified is a promising one. The experiments also serve to highlight the need
for a widely applicable theory of data requirements.
Comment: PhD thesis (Macquarie University, Sydney; December 1995), LaTeX
source, xii+214 pages
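For the syntactic task, a three-word compound such as ``computer science department'' must be bracketed left or right. A dependency-style decision rule, in the spirit of the statistical models the thesis compares, picks the bracketing whose modifier attachment has the stronger corpus association; the `assoc` score and counts below are hypothetical stand-ins for such statistics.

```python
def bracket(n1, n2, n3, assoc):
    """Dependency-style bracketing of a three-noun compound: choose left
    bracketing [[n1 n2] n3] when n1 associates more strongly with n2 than
    with n3. `assoc` is a hypothetical association score (e.g., smoothed
    co-occurrence counts); the thesis's actual architecture assigns
    probabilities to semantic forms rather than to surface pairs."""
    if assoc(n1, n2) >= assoc(n1, n3):
        return f"[[{n1} {n2}] {n3}]"   # left-branching
    return f"[{n1} [{n2} {n3}]]"       # right-branching

# Toy association table standing in for corpus statistics.
counts = {("computer", "science"): 120, ("computer", "department"): 15,
          ("science", "department"): 90}
assoc = lambda a, b: counts.get((a, b), 1)
print(bracket("computer", "science", "department"))
# -> [[computer science] department]
```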