Open Set Chinese Character Recognition using Multi-typed Attributes
Recognition of off-line Chinese characters is still a challenging problem, especially in historical documents: not only is the number of classes extremely large in comparison to contemporary image retrieval methods, but new, unseen classes can also be expected under open learning conditions (even for CNNs). Chinese character recognition with zero or a few training samples is a difficult problem and has not been studied yet. In this paper, we propose a new Chinese character recognition method based on multi-typed attributes, derived from the pronunciation, structure and radicals of Chinese characters, applied to character recognition in historical books. This intermediate attribute code has
a strong advantage over the common `one-hot' class representation because it
allows for understanding complex and unseen patterns symbolically using
attributes. First, each character is represented by four groups of attribute types that cover a wide range of character properties: its Pinyin label, its layout structure and number of strokes, the codes of three input methods (namely Cangjie, Zhengma and Wubi), and a four-corner encoding. A convolutional
neural network (CNN) is trained to learn these attributes. Subsequently,
characters can be easily recognized by these attributes using a distance metric
and a complete lexicon that is encoded in attribute space. We evaluate the proposed method on two open data sets, printed Chinese characters for zero-shot learning and historical characters for few-shot learning, as well as on a closed set of handwritten Chinese characters. Experimental results show good general classification of seen classes and a very promising generalization ability to unseen characters.
Comment: 29 pages, submitted to Pattern Recognition
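The attribute-space decoding idea above can be sketched in a few lines: a CNN is assumed to predict an attribute vector for an input character image, and recognition picks the lexicon entry whose attribute code is nearest. All attribute codes and the distance choice below are illustrative toy values, not the paper's actual encodings.

```python
import math

# Toy lexicon: each character mapped to a fixed-length binary attribute code
# (e.g. concatenated one-hot codes for Pinyin, layout and stroke count).
# These codes are invented for illustration.
LEXICON = {
    "一": [1, 0, 0, 0, 1, 0, 0, 1],
    "二": [0, 1, 0, 0, 1, 0, 1, 0],
    "三": [0, 0, 1, 0, 0, 1, 1, 0],
}

def recognize(predicted_attrs):
    """Return the lexicon character whose attribute code has the smallest
    Euclidean distance to the predicted attribute vector."""
    def dist(code):
        return math.sqrt(sum((c - p) ** 2 for c, p in zip(code, predicted_attrs)))
    return min(LEXICON, key=lambda ch: dist(LEXICON[ch]))

# A noisy prediction close to the code of "二" still decodes correctly;
# an unseen character only needs an attribute code in the lexicon,
# not any training images.
noisy = [0.1, 0.9, 0.0, 0.1, 0.8, 0.1, 0.9, 0.2]
print(recognize(noisy))  # 二
```

This is what gives the method its zero-shot ability: adding a new character to the lexicon only requires writing down its attribute code.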
An overview of ensemble and feature learning in few-shot image classification using siamese networks
Siamese Neural Networks (SNNs) constitute one of the most representative approaches for addressing Few-Shot Image Classification. These schemes comprise a set of Convolutional Neural Network (CNN) models whose weights are shared across the network, which results in fewer parameters to train and less tendency to overfit. This eventually leads to better convergence than standard neural models when only scarce amounts of data are available. Based on a contrastive principle, the SNN scheme jointly trains these inner CNN models to map the input image data to an embedded representation that may later be exploited for the recognition process. However, in spite of their extensive use in the related literature, the representation capabilities of SNN schemes have neither been thoroughly assessed nor combined with other strategies for boosting their classification performance. Within this context, this work experimentally studies the capabilities of SNN architectures for obtaining a suitable embedded representation in scenarios with severe data scarcity, assesses the use of training data augmentation for improving the feature learning process, introduces transfer learning techniques for further exploiting the embedded representations obtained by the model, and uses test data augmentation to boost the performance of the SNN scheme by mimicking an ensemble learning process. The results obtained with different image corpora show that the combination of these techniques achieves classification rates ranging from 69% to 78% with just 5 to 20 prototypes per class, whereas the CNN baseline considered is unable to converge. Furthermore, once the baseline model converges with a sufficient amount of data, the adequate use of the studied techniques still improves accuracy by 4% to 9%.
First author is supported by the “Programa I+D+i de la Generalitat Valenciana” through grant APOSTD/2020/256.
This research work was partially funded by the Spanish “Ministerio de Ciencia e Innovación” and the European Union “NextGenerationEU/PRTR” programmes through project DOREMI (TED2021-132103A-I00). Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature
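The contrastive principle behind SNN training can be illustrated with the standard pairwise contrastive loss. This is a plain-Python toy sketch: the embeddings would normally come from the shared-weight CNN branches, and the margin value here is illustrative.

```python
import math

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """Contrastive loss for one pair of embeddings: same-class pairs are
    pulled together (squared distance term), different-class pairs are
    pushed apart until their distance exceeds the margin (hinge term)."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))
    if same:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Same-class pair that is already close -> small loss.
print(contrastive_loss([0.1, 0.2], [0.1, 0.3], same=True))
# Different-class pair still inside the margin -> penalized.
print(contrastive_loss([0.1, 0.2], [0.2, 0.2], same=False))
# Different-class pair beyond the margin -> zero loss, nothing to push.
print(contrastive_loss([0.0, 0.0], [3.0, 4.0], same=False))  # 0.0
```

Because the loss depends only on pairwise distances in the embedding space, the same trained embedding can later be reused for the transfer learning and test-time augmentation strategies studied in the paper.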
Exploring the Encoding Layer and Loss Function in End-to-End Speaker and Language Recognition System
In this paper, we explore the encoding/pooling layer and loss function in the
end-to-end speaker and language recognition system. First, a unified and
interpretable end-to-end system for both speaker and language recognition is
developed. It accepts variable-length input and produces an utterance level
result. In the end-to-end system, the encoding layer plays a role in
aggregating the variable-length input sequence into an utterance level
representation. Besides the basic temporal average pooling, we introduce a
self-attentive pooling layer and a learnable dictionary encoding layer to get
the utterance level representation. In terms of the loss function for open-set speaker verification, to obtain more discriminative speaker embeddings, center loss and angular softmax loss are introduced in the end-to-end system. Experimental results on the Voxceleb and NIST LRE 07 datasets show that the performance of the end-to-end learning system can be significantly improved by the proposed encoding layer and loss function.
Comment: Accepted for Speaker Odyssey 201
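The self-attentive pooling described above can be sketched as follows. This is a plain-Python toy in which a single score vector stands in for the learned attention network; a real system would score frame-level CNN/TDNN features and learn the parameters jointly with the rest of the model.

```python
import math

def self_attentive_pooling(frames, w):
    """Collapse a variable-length sequence of frame-level feature vectors
    into one utterance-level vector: each frame gets a scalar score from
    the (normally learned) weight vector w, a softmax over the scores
    gives attention weights, and the pooled vector is the
    attention-weighted average of the frames."""
    scores = [sum(wi * fi for wi, fi in zip(w, f)) for f in frames]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]       # attention weights, sum to 1
    dim = len(frames[0])
    return [sum(a * f[i] for a, f in zip(alphas, frames)) for i in range(dim)]

# Three frames of a 2-dim feature sequence; w is an illustrative parameter.
frames = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
pooled = self_attentive_pooling(frames, w=[1.0, 1.0])
print(pooled)  # weighted toward the high-scoring third frame
```

With a zero score vector every frame receives equal weight and the layer degenerates to the basic temporal average pooling mentioned above, which is why the attentive layer can only match or enrich that baseline representation.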