Historical manuscript processing poses challenges such as limited annotated
training data and the emergence of novel classes. To address these issues, we
propose a One-shot learning-based Text Spotting (OTS) approach that accurately
and reliably spots novel characters from just one annotated support sample. Drawing
inspiration from cognitive research, we introduce a spatial alignment module
that finds, focuses on, and learns the most discriminative spatial regions in
the query image based on one support image. In particular, since low-resource
spotting tasks often suffer from sample imbalance, we propose a novel
loss function, termed torus loss, that makes the embedding space of the
distance metric more discriminative. Our approach is highly efficient, requiring only
a few training samples while exhibiting a remarkable ability to handle novel
characters and symbols. To enhance dataset diversity, we create a new
manuscript dataset of ancient Dongba hieroglyphs (DBH). We conduct
experiments on the publicly available VML-HD, TKH, and NC datasets, as well as
the newly proposed DBH dataset. The experimental results demonstrate that OTS outperforms
the state-of-the-art methods in one-shot text spotting. Overall, our proposed
method offers promising applications for text spotting in historical
manuscripts.