On the Impact of Temporal Representations on Metaphor Detection
State-of-the-art approaches for metaphor detection compare a word's literal - or
core - meaning with its contextual meaning using metaphor classifiers based on
neural networks. However, metaphorical expressions evolve over time for
various reasons, such as cultural and societal influences. Metaphorical expressions
are known to co-evolve with language and literal word meanings, and even drive,
to some extent, this evolution. This poses the question of whether different,
possibly time-specific, representations of literal meanings may impact the
metaphor detection task. To the best of our knowledge, this is the first study
that examines the metaphor detection task with a detailed exploratory analysis
where different temporal and static word embeddings are used to account for
different representations of literal meanings. Our experimental analysis is
based on three popular benchmarks used for metaphor detection and word
embeddings extracted from different corpora and temporally aligned using
different state-of-the-art approaches. The results suggest that the choice of
static word embedding method does affect the metaphor detection task, and that
some temporal word embeddings slightly outperform static methods. However,
the results also suggest that temporal word embeddings may yield
representations of a metaphor's core meaning that are too close to its
contextual meaning, thus confusing the classifier. Overall, the interaction
between temporal language evolution and metaphor detection appears weak in the
benchmark datasets used in our experiments. This suggests that future work on
the computational analysis of this important linguistic phenomenon should
begin by creating a new dataset in which this interaction is better represented.
Comment: 12 pages, 4 figures
UNIMIB @ DIACR-Ita: Aligning Distributional Embeddings with a Compass for Semantic Change Detection in the Italian Language
In this paper, we present our results for the EVALITA 2020 challenge, DIACR-Ita, on semantic change detection in Italian. Our approach is based on measuring the semantic distance across time-specific word vectors generated with Compass-aligned Distributional Embeddings (CADE). We first generate temporal embeddings with CADE, a strategy for aligning word embeddings that are specific to each time period; the quality of this alignment is the main asset of our proposal. We then measure the semantic shift of each word by combining two different semantic shift measures. Finally, we classify a word's meaning as changed or unchanged by defining a threshold on the semantic distance across time.
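The pipeline above, comparing a word's aligned vectors across two periods, combining two shift measures, and thresholding, can be sketched as follows. This is a minimal illustration, not the submitted system: the toy 2-d vectors, the particular pair of measures (cosine distance and nearest-neighbour overlap), the equal-weight combination, and the 0.5 threshold are all assumptions.

```python
import numpy as np

def cos_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_shift(word, emb_t1, emb_t2):
    # Measure 1: cosine distance between the word's aligned vectors.
    return 1.0 - cos_sim(emb_t1[word], emb_t2[word])

def neighbor_shift(word, emb_t1, emb_t2, k=2):
    # Measure 2: 1 - Jaccard overlap of the word's k nearest neighbours
    # computed within each time period.
    def knn(w, emb):
        others = sorted((v for v in emb if v != w),
                        key=lambda v: cos_sim(emb[w], emb[v]), reverse=True)
        return set(others[:k])
    n1, n2 = knn(word, emb_t1), knn(word, emb_t2)
    return 1.0 - len(n1 & n2) / len(n1 | n2)

def changed(word, emb_t1, emb_t2, threshold=0.5):
    # Combine the two measures (simple average) and apply a threshold.
    score = 0.5 * cosine_shift(word, emb_t1, emb_t2) \
          + 0.5 * neighbor_shift(word, emb_t1, emb_t2)
    return score > threshold

# Toy aligned embeddings: "mouse" drifts from animals toward computers.
emb_t1 = {"mouse": np.array([1.0, 0.0]), "cat": np.array([0.9, 0.1]),
          "cheese": np.array([0.8, 0.2]), "keyboard": np.array([0.0, 1.0])}
emb_t2 = {"mouse": np.array([0.0, 1.0]), "cat": np.array([0.9, 0.1]),
          "cheese": np.array([0.8, 0.2]), "keyboard": np.array([0.1, 0.9])}

print(changed("mouse", emb_t1, emb_t2))  # True: meaning shifted
print(changed("cat", emb_t1, emb_t2))    # False: meaning stable
```

The threshold would in practice be tuned on development data rather than fixed a priori.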
ABSTAT-HD: a scalable tool for profiling very large knowledge graphs
Processing large-scale and highly interconnected Knowledge Graphs (KG) is becoming crucial for many applications, such as recommender systems and question answering. Profiling approaches have been proposed to summarize large KGs with the aim of producing concise and meaningful representations so that they can be easily managed. However, constructing profiles and calculating statistics such as cardinality descriptors or inferences is resource-intensive. In this paper, we present ABSTAT-HD, a highly distributed profiling tool that supports users in profiling and understanding big and complex knowledge graphs. We demonstrate the impact of the new architecture of ABSTAT-HD by presenting a set of experiments that show its scalability with respect to three dimensions of the data to be processed: size, complexity, and workload. The experiments show that our profiling framework provides informative and concise profiles, and can process and manage very large KGs.
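The kind of profile described above, summarizing a KG as abstract patterns with counts plus cardinality statistics, can be sketched in a few lines. This is a simplified illustration of the general profiling idea, not ABSTAT-HD's actual distributed implementation; the toy triples and type assertions are hypothetical.

```python
from collections import Counter, defaultdict

# Toy RDF-like triples: (subject, predicate, object).
triples = [
    ("alice", "worksFor", "acme"),
    ("bob", "worksFor", "acme"),
    ("alice", "knows", "bob"),
    ("acme", "locatedIn", "milan"),
]
# Hypothetical type assertions mapping each entity to its class.
types = {"alice": "Person", "bob": "Person",
         "acme": "Company", "milan": "City"}

def profile(triples, types):
    # Summarize the KG as abstract patterns
    # (subject type, predicate, object type) with occurrence counts.
    return Counter((types[s], p, types[o]) for s, p, o in triples)

def max_out_cardinality(triples, predicate):
    # Cardinality descriptor: the maximum number of distinct objects
    # a single subject has for the given predicate.
    objs = defaultdict(set)
    for s, p, o in triples:
        if p == predicate:
            objs[s].add(o)
    return max((len(v) for v in objs.values()), default=0)

for pattern, count in profile(triples, types).items():
    print(pattern, count)
print(max_out_cardinality(triples, "worksFor"))  # 1
```

On a real KG these aggregations are the expensive step, which motivates the distributed architecture the paper evaluates.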
Special Issue on: Personalisation in E-Government and Smart Cities
The abstract is included in the text