Concepts play a central role in many applications, including settings where concepts have to be modelled in the absence of sentence context. Previous work has therefore focused on distilling decontextualised concept embeddings from language models. However, while concepts can be modelled from many different perspectives, concept embeddings typically capture primarily taxonomic structure. To address this issue, we propose a strategy for identifying what different concepts, from a potentially large concept vocabulary, have in common with each other. We then represent each concept in terms of the properties it shares with other concepts. To demonstrate the practical usefulness of this way of
modelling concepts, we consider the task of ultra-fine entity typing, which is
a challenging multi-label classification problem. We show that by augmenting
the label set with shared properties, we can improve the performance of the
state-of-the-art models for this task.
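As a rough illustration of the core idea (a minimal sketch, not the paper's actual method), the snippet below shows one way a concept could be represented by the properties it shares with other concepts in a vocabulary. The toy concepts, the property sets, and the helper functions (`shared_properties`, `encode`) are all hypothetical; in the paper, such properties would be distilled from a language model.

```python
from collections import defaultdict

# Hypothetical toy vocabulary: each concept mapped to a set of properties.
# Hand-written here purely for illustration.
CONCEPT_PROPERTIES = {
    "violin":  {"is_an_instrument", "has_strings", "made_of_wood"},
    "guitar":  {"is_an_instrument", "has_strings", "made_of_wood"},
    "trumpet": {"is_an_instrument", "made_of_metal"},
    "oak":     {"is_a_tree", "made_of_wood"},
}

def shared_properties(vocabulary, min_concepts=2):
    """Keep only properties that at least `min_concepts` concepts share."""
    counts = defaultdict(int)
    for props in vocabulary.values():
        for p in props:
            counts[p] += 1
    return {p for p, c in counts.items() if c >= min_concepts}

def encode(concept, vocabulary, shared):
    """Represent a concept as the subset of shared properties it has."""
    return vocabulary[concept] & shared

shared = shared_properties(CONCEPT_PROPERTIES)
for concept in CONCEPT_PROPERTIES:
    print(concept, "->", sorted(encode(concept, CONCEPT_PROPERTIES, shared)))
```

In the entity-typing setting described above, shared properties of this kind could then be appended to the label set as auxiliary labels during multi-label training.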