Transfer and Multi-Task Learning for Noun-Noun Compound Interpretation
In this paper, we empirically evaluate the utility of transfer and multi-task
learning on a challenging semantic classification task: semantic interpretation
of noun-noun compounds. Through a comprehensive series of experiments and
in-depth error analysis, we show that transfer learning via parameter
initialization and multi-task learning via parameter sharing can help a neural
classification model generalize over a highly skewed distribution of relations.
Further, we demonstrate how dual annotation with two distinct sets of relations
over the same set of compounds can be exploited to improve the overall accuracy
of a neural classifier and its F1 scores on the less frequent, but more
difficult relations.
Comment: EMNLP 2018: Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)
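The multi-task setup described above can be pictured as a shared compound encoder with one classification head per relation inventory. The following is a minimal numpy sketch of that hard-parameter-sharing idea; all dimensions, weight initializations, and the two inventory sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of multi-task learning via hard parameter sharing: two
# relation inventories (tasks A and B) over the same compounds share one
# encoder, and each task gets its own softmax head. Sizes are assumed.
rng = np.random.default_rng(0)

D_EMB, D_HID = 50, 32          # noun-embedding and hidden sizes (assumed)
N_REL_A, N_REL_B = 12, 25      # sizes of the two relation inventories (assumed)

W_shared = rng.normal(size=(2 * D_EMB, D_HID))   # shared encoder weights
W_task_a = rng.normal(size=(D_HID, N_REL_A))     # head for relation set A
W_task_b = rng.normal(size=(D_HID, N_REL_B))     # head for relation set B

def encode(modifier_vec, head_vec):
    """Shared encoder: concatenate the two noun vectors, project, tanh."""
    x = np.concatenate([modifier_vec, head_vec])
    return np.tanh(x @ W_shared)

def predict(modifier_vec, head_vec, task):
    """Task-specific softmax over that task's relation inventory."""
    h = encode(modifier_vec, head_vec)
    logits = h @ (W_task_a if task == "A" else W_task_b)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# A compound such as "olive oil" would supply two pretrained noun
# vectors; random stand-ins are used here.
p_a = predict(rng.normal(size=D_EMB), rng.normal(size=D_EMB), task="A")
p_b = predict(rng.normal(size=D_EMB), rng.normal(size=D_EMB), task="B")
```

Because gradients from both tasks would flow through `W_shared`, the dually annotated compounds regularize the encoder, which is the intuition behind the reported gains on rare relations.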
Concept Blending and Dissimilarity: Factors for Creative Design Process: A Comparison between the Linguistic Interpretation Process and Design Process
This study investigated the design process in order to clarify the essential characteristics of the creative design process vis-à-vis the interpretation process, by carrying out design experiments. The authors analyzed the characteristics of the creative design process by comparing it with the linguistic interpretation process, from the viewpoints of thought types (analogy, blending, and thematic relation) and recognition types (commonalities and alignable and nonalignable differences). A new concept can be created by using the noun-noun phrase as the process of synthesizing two concepts—the simplest and most essential process in formulating a new concept from existing ones. Furthermore, the noun-noun phrase can be interpreted in a natural way. In our experiment, the subjects were required to interpret a novel noun-noun phrase, create a design concept from the same noun-noun phrase, and list the similarities and dissimilarities between the two nouns. The authors compare the results of the thought types and recognition types, focusing on the manner in which things were viewed, i.e., in terms of similarities and dissimilarities. A comparison of the results reveals that blending and nonalignable differences characterize the creative design process. The findings of this research will contribute to a framework of design practice that enhances both students' and designers' creativity for concept formation in design, which relates to the development of innovative design.
Keywords:
Noun-Noun phrase; Design; Creativity; Blending; Nonalignable difference
Designing Statistical Language Learners: Experiments on Noun Compounds
The goal of this thesis is to advance the exploration of the statistical
language learning design space. In pursuit of that goal, the thesis makes two
main theoretical contributions: (i) it identifies a new class of designs by
specifying an architecture for natural language analysis in which probabilities
are given to semantic forms rather than to more superficial linguistic
elements; and (ii) it explores the development of a mathematical theory to
predict the expected accuracy of statistical language learning systems in terms
of the volume of data used to train them.
The theoretical work is illustrated by applying statistical language learning
designs to the analysis of noun compounds. Both syntactic and semantic analysis
of noun compounds are attempted using the proposed architecture. Empirical
comparisons demonstrate that the proposed syntactic model is significantly
better than those previously suggested, approaching the performance of human
judges on the same task, and that the proposed semantic model, the first
statistical approach to this problem, exhibits significantly better accuracy
than the baseline strategy. These results suggest that the new class of designs
identified is a promising one. The experiments also serve to highlight the need
for a widely applicable theory of data requirements.
Comment: PhD thesis (Macquarie University, Sydney; December 1995), LaTeX source, xii+214 pages
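The syntactic analysis mentioned above is the bracketing problem for noun compounds: given three nouns, decide between left and right branching. A minimal sketch of one classic statistical strategy, choosing the branching with the stronger pairwise association, is shown below; the association table is a hand-set toy stand-in, not trained values from the thesis.

```python
# Hedged sketch of statistical noun-compound bracketing, loosely in the
# spirit of the thesis's syntactic model: compare association scores
# between adjacent word pairs and pick the stronger attachment. The
# scores below are illustrative assumptions.
assoc = {
    ("computer", "science"): 0.9,
    ("science", "department"): 0.4,
    ("plastic", "water"): 0.05,
    ("water", "bottle"): 0.8,
}

def bracket(n1, n2, n3):
    """Return '[[n1 n2] n3]' (left-branching) or '[n1 [n2 n3]]' (right)."""
    left = assoc.get((n1, n2), 0.0)    # evidence that n1 modifies n2
    right = assoc.get((n2, n3), 0.0)   # evidence that n2 modifies n3
    if left >= right:
        return f"[[{n1} {n2}] {n3}]"
    return f"[{n1} [{n2} {n3}]]"

print(bracket("computer", "science", "department"))
print(bracket("plastic", "water", "bottle"))
```

In a real system the association scores would be probabilities estimated from corpus counts, which is exactly where the thesis's question about data volume and expected accuracy arises.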
Linguistically-Informed Neural Architectures for Lexical, Syntactic and Semantic Tasks in Sanskrit
The primary focus of this thesis is to make Sanskrit manuscripts more
accessible to the end-users through natural language technologies. The
morphological richness, compounding, free word order, and low-resource
nature of Sanskrit pose significant challenges for developing deep learning
solutions. We identify four fundamental tasks, which are crucial for developing
a robust NLP technology for Sanskrit: word segmentation, dependency parsing,
compound type identification, and poetry analysis. The first task, Sanskrit
Word Segmentation (SWS), is a fundamental text processing step for
downstream applications. However, it is challenging due to the sandhi
phenomenon that modifies characters at word boundaries. Similarly, the existing
dependency parsing approaches struggle with morphologically rich and
low-resource languages like Sanskrit. Compound type identification is also
challenging for Sanskrit due to the context-sensitive semantic relation between
components. All these challenges result in sub-optimal performance in NLP
applications like question answering and machine translation. Finally, Sanskrit
poetry has not been extensively studied in computational linguistics.
While addressing these challenges, this thesis makes various contributions:
(1) The thesis proposes linguistically-informed neural architectures for these
tasks. (2) We showcase the interpretability and multilingual extension of the
proposed systems. (3) Our proposed systems report state-of-the-art performance.
(4) Finally, we present a neural toolkit named SanskritShala, a web-based
application that provides real-time analysis of input for various NLP tasks.
Overall, this thesis contributes to making Sanskrit manuscripts more accessible
by developing robust NLP technology and releasing various resources, datasets,
and a web-based toolkit.
Comment: Ph.D. dissertation
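The sandhi phenomenon mentioned above is what makes Sanskrit word segmentation ambiguous: euphonic rules fuse characters at word boundaries, so a surface string admits several candidate splits. The toy sketch below reverses one real vowel-sandhi rule (a + i -> e) to enumerate candidates; a real segmenter would score them with a lexicon and a neural model, and the single-rule table is a deliberate simplification.

```python
# Hedged toy sketch of sandhi-aware split generation for Sanskrit word
# segmentation. Only one real rule (a + i -> e) is encoded; actual
# systems handle many rules and rank candidates with learned models.
SANDHI_RULES = {"e": [("a", "i")]}  # surface char -> possible (final, initial) pairs

def candidate_splits(surface):
    """Enumerate (left_word, right_word) splits licensed by the rules."""
    splits = []
    for i, ch in enumerate(surface):
        for final, initial in SANDHI_RULES.get(ch, []):
            left = surface[:i] + final          # undo sandhi on the left word
            right = initial + surface[i + 1:]   # restore the right word's onset
            splits.append((left, right))
    return splits

# e.g. "gajendra" = gaja + indra ("lord of elephants"), fused by a + i -> e
print(candidate_splits("gajendra"))
```

Even this single rule already shows why segmentation must precede parsing and compound-type identification: the downstream tasks need the underlying words, not the fused surface form.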
Morphological awareness in readers of IsiXhosa
This study focuses particularly on the development of four Morphological Awareness reading tests in isiXhosa and on the relationship of Morphological Awareness to reading success among 74 Grade 3 isiXhosa-speaking foundation-phase learners from three peri-urban schools. It explores in depth why not all previously established Morphological Awareness tests for other languages suit the morphology of isiXhosa and how these tests have been revised in order to do so. Conventionally, the focus of Morphological Awareness literature has been on derivational morphology and reading comprehension. This study did not find significant correlations with comprehension, but rather with the children's ability to decode. Fluency and Morphological Awareness have not been given as much attention in the literature, but Morphological Awareness could be important for processing the agglutinating structure of the language in reading. This study also argues that it is not a specific awareness of derivational morphology over inflectional morphology, but rather a general awareness of one's language structure that is more important at this stage of the learners' literacy development; specifically a general awareness of prefixes and suffixes. In addition, it was found that an explicit awareness of the morphological structure of the language related more to fluency, while tests that accessed an innate and implicit Morphological Awareness had the strongest correlations overall with comprehension. The findings from this report have implications regarding how future curriculum developments for morphologically rich languages like isiXhosa should be approached. The positive and practical implications of including different types of Morphological Awareness tutoring in curricula are argued for, especially when teaching younger readers how to approach morphologically complex words in texts.
LLMs Perform Poorly at Concept Extraction in Cyber-security Research Literature
The cybersecurity landscape evolves rapidly and poses threats to
organizations. To enhance resilience, one needs to track the latest
developments and trends in the domain. It has been demonstrated that standard
bibliometrics approaches show their limits in such a fast-evolving domain. For
this purpose, we use large language models (LLMs) to extract relevant knowledge
entities from cybersecurity-related texts. We use a subset of arXiv preprints
on cybersecurity as our data and compare different LLMs in terms of entity
recognition (ER) and relevance. The results suggest that LLMs do not produce
knowledge entities that accurately reflect the cybersecurity context, but they
show some potential for noun extractors. For this reason, we developed a noun
extractor boosted with some statistical analysis to extract specific and
relevant compound nouns from the domain. We then tested our model by
identifying trends in the LLM domain. We observe some limitations, but the
model offers promising results for monitoring the evolution of emergent trends.
Comment: 24 pages, 9 figures
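A frequency-boosted compound-noun extractor of the kind the abstract describes can be sketched in a few lines: collect adjacent content-word pairs and keep those whose corpus frequency clears a threshold. The stopword list, the crude regex tokenization, the toy corpus, and the threshold below are all illustrative assumptions; a real pipeline would use a POS tagger and proper statistical association measures.

```python
import re
from collections import Counter

# Hedged sketch of compound-noun candidate extraction with a simple
# frequency filter. Tokenization and stopwords are crude stand-ins for
# a real tagger; the corpus and threshold are illustrative.
docs = [
    "large language model trains on large language corpus",
    "the language model predicts the next token",
    "a threat actor targets the language model supply chain",
]

STOPWORDS = {"the", "a", "an", "on", "of", "and"}

def candidate_bigrams(text):
    """Adjacent non-stopword token pairs as compound-noun candidates."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in STOPWORDS]
    return zip(tokens, tokens[1:])

counts = Counter()
for doc in docs:
    counts.update(candidate_bigrams(doc))

MIN_FREQ = 2  # keep only bigrams seen at least twice (assumed threshold)
compounds = sorted(b for b, c in counts.items() if c >= MIN_FREQ)
print(compounds)
```

Tracking how such compound counts shift across time slices of a preprint corpus is one simple way to surface the emergent trends the abstract refers to.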