Zero-shot learning relies on semantic class representations such as
hand-engineered attributes or learned embeddings to predict classes without any
labeled examples. We propose to learn class representations from common sense
knowledge graphs. Common sense knowledge graphs are an untapped source of
explicit high-level knowledge that requires little human effort to apply to a
range of tasks. To capture the knowledge in the graph, we introduce ZSL-KG, a
general-purpose framework with a novel transformer graph convolutional network
(TrGCN) for generating class representations. Our proposed TrGCN architecture
computes non-linear combinations of the node neighborhood and shows
improvements on zero-shot learning tasks in language and vision. Our results
show that ZSL-KG outperforms the best-performing graph-based zero-shot learning
framework by an average of 2.1 accuracy points, with improvements as high as 3.4
accuracy points. Our ablation study on ZSL-KG with alternative graph neural
networks shows that TrGCN adds up to 1.2 accuracy points of improvement on
these tasks.