9 research outputs found
Human similarity judgments of emojis support alignment of conceptual systems across modalities
Supplementary Data: Historical evolution of concrete and abstract language revisited
Supplementary materials from Snefjella, B., Généreux, M., & Kuperman, V. (2019). Historical evolution of concrete and abstract language revisited. Behavior Research Methods, 51(4), 1693-1705.
Predicting Human Judgments of Relational Similarity: A Comparison of Computational Models Based on Vector Representations of Meaning
Computational models of verbal analogy and relational similarity judgments can employ different types of vector representations of word meanings (embeddings) generated by machine-learning algorithms. An important question is whether human-like relational processing depends on explicit representations of relations (i.e., representations separable from those of the concepts being related), or whether implicit relation representations suffice. Earlier machine-learning models produced static embeddings for individual words, identical across all contexts. However, more recent Large Language Models (LLMs), which use transformer architectures applied to much larger training corpora, are able to produce contextualized embeddings that have the potential to capture implicit knowledge of semantic relations. Here we compare multiple models based on different types of embeddings to human data concerning judgments of relational similarity and solutions of verbal analogy problems. For two datasets, a model that learns explicit representations of relations, Bayesian Analogy with Relational Transformations (BART), captured human performance more successfully than either a model using static embeddings (Word2vec) or models using contextualized embeddings created by LLMs (BERT, RoBERTa, and GPT-2). These findings support the proposal that human thinking depends on representations that separate relations from the concepts they relate.
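The implicit-relation baseline the abstract describes can be illustrated with a minimal sketch: represent the relation between two words as the difference of their static embeddings, score relational similarity between word pairs as the cosine of their offset vectors, and correlate the model scores with human ratings. The embeddings and human ratings below are toy, illustrative values, not data from the study, and this stands in for the Word2vec baseline rather than the BART model itself.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy static embeddings (stand-ins for Word2vec vectors; values illustrative).
emb = {
    "king":   np.array([0.9, 0.1, 0.3]),
    "queen":  np.array([0.8, 0.9, 0.3]),
    "man":    np.array([0.7, 0.1, 0.1]),
    "woman":  np.array([0.6, 0.9, 0.1]),
    "paris":  np.array([0.1, 0.2, 0.9]),
    "france": np.array([0.2, 0.3, 0.8]),
}

def relation_vector(a, b):
    """Implicit relation representation: the embedding offset b - a."""
    return emb[b] - emb[a]

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Relational similarity between word pairs = cosine of their offset vectors.
pair_comparisons = [
    (("man", "woman"), ("king", "queen")),
    (("man", "woman"), ("paris", "france")),
    (("king", "queen"), ("paris", "france")),
]
model_scores = [cosine(relation_vector(*p), relation_vector(*q))
                for p, q in pair_comparisons]

# Hypothetical human relational-similarity ratings for the same comparisons.
human_ratings = [6.5, 2.0, 2.5]

# Rank correlation between model scores and human judgments.
rho, _ = spearmanr(model_scores, human_ratings)
```

An explicit-relation model such as BART would instead learn a separate representation for each relation (e.g., gender contrast) and compare pairs through those learned relations rather than raw embedding offsets.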
Human similarity judgments of emojis support alignment of conceptual systems across modalities
Humans can readily generalize their learning to new visual concepts, and infer their associated meanings. How do people align the different conceptual systems learned from different modalities? In the present paper, we examine emojis, pictographs uniquely situated between visual and linguistic modalities, to explore the role of alignment and multimodality in visual and linguistic semantics. Simulation experiments show that relational structures of emojis captured in visual and linguistic conceptual systems can be aligned, and that the ease of alignment increases as the number of emojis increases. We also found that emojis with subjective impressions of high popularity are easier to align between their visual and linguistic representations. A behavioral experiment was conducted to measure similarity patterns between 48 emojis, and to compare human similarity judgments with three models based on visual, semantic and multimodal-joint representations of emojis. We found that the model trained with multimodal data by aligning visual and semantic spaces best accounts for human judgments.
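The alignment idea in the abstract can be sketched as a second-order comparison: embed the same items in a visual space and a semantic space, compute each space's pairwise-distance structure, and correlate the two structures. The embeddings below are synthetic (the semantic space is a noisy linear transform of the visual one), so this is an illustration of the alignment measure under assumed data, not the models or emoji representations used in the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy "visual" and "semantic" embeddings for 6 hypothetical emojis.
# The semantic space is a noisy linear transform of the visual one,
# so their relational structures should be partially alignable.
visual = rng.normal(size=(6, 4))
transform = rng.normal(size=(4, 4))
semantic = visual @ transform + 0.1 * rng.normal(size=(6, 4))

# Second-order similarity: compare the two pairwise-distance structures
# rather than the raw coordinates, which live in different spaces.
visual_dists = pdist(visual, metric="cosine")
semantic_dists = pdist(semantic, metric="cosine")

# Rank correlation of the distance structures quantifies how well the
# two conceptual systems can be aligned.
rho, _ = spearmanr(visual_dists, semantic_dists)
```

Comparing distance structures rather than coordinates is what makes cross-modal alignment possible at all: the visual and semantic spaces need not share dimensions, only relational geometry.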