Combining Visual and Textual Features for Semantic Segmentation of Historical Newspapers
The massive amounts of digitized historical documents acquired over the last
decades naturally lend themselves to automatic processing and exploration.
Research efforts seeking to automatically process facsimiles and extract
information from them are multiplying, with document layout analysis as an
essential first step. While the identification and categorization of segments
of interest in document images have seen significant progress in recent years
thanks to deep learning techniques, many challenges remain, among them the use
of finer-grained segmentation typologies and the handling of complex,
heterogeneous documents such as historical newspapers. Moreover, most
approaches consider visual features only, ignoring the textual signal. In this
context, we introduce a multimodal approach for the semantic segmentation of
historical newspapers that combines visual and textual features. Based on a
series of experiments on diachronic Swiss and Luxembourgish newspapers, we
investigate, among other questions, the predictive power of visual and textual
features and their capacity to generalize across time and sources. Results show
consistent improvements of the multimodal models over a strong visual baseline,
as well as better robustness to high material variance.
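To make the fusion idea concrete, here is a minimal sketch of one plausible scheme, in which per-pixel text embeddings (e.g., rasterized onto the image grid from OCR tokens) are concatenated channel-wise with visual feature maps before a segmentation head; the module names, sizes, and fusion rule are illustrative assumptions, not the paper's implementation.

```python
# Sketch of late channel-wise fusion of visual and textual feature maps.
# All shapes and layer choices are illustrative, not the paper's code.
import torch
import torch.nn as nn

class MultimodalSegHead(nn.Module):
    def __init__(self, vis_channels=64, txt_channels=16, n_classes=5):
        super().__init__()
        # Fuse by concatenation, then a small conv stack as classifier.
        self.fuse = nn.Conv2d(vis_channels + txt_channels, 64,
                              kernel_size=3, padding=1)
        self.classify = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, vis_feats, txt_embed_map):
        x = torch.cat([vis_feats, txt_embed_map], dim=1)  # (B, C_v+C_t, H, W)
        return self.classify(torch.relu(self.fuse(x)))    # per-pixel logits

# Dummy run: visual features from a CNN backbone, text-embedding map
# rasterized from OCR output onto the same spatial grid.
vis = torch.randn(1, 64, 128, 128)
txt = torch.randn(1, 16, 128, 128)
logits = MultimodalSegHead()(vis, txt)
print(logits.shape)  # torch.Size([1, 5, 128, 128])
```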
Flexible Keyword Spotting based on Homogeneous Audio-Text Embedding
Spotting user-defined (flexible) keywords represented as text typically relies
on an expensive text encoder analyzed jointly with an audio encoder in a shared
embedding space, an approach that can suffer from heterogeneous modality
representations (i.e., a large mismatch) and increased complexity. In this
work, we propose a novel architecture that efficiently detects arbitrary
keywords using an audio-compliant text encoder, which inherently produces
representations homogeneous with the audio embeddings and is also much smaller
than a comparable text encoder. Our text encoder converts the text to phonemes using a
grapheme-to-phoneme (G2P) model, and then to an embedding using representative
phoneme vectors, extracted from the paired audio encoder on rich speech
datasets. We further augment our method with confusable keyword generation to
develop an audio-text embedding verifier with strong discriminative power.
Experimental results show that our scheme outperforms the state of the art on
the LibriPhrase Hard dataset, increasing the Area Under the ROC Curve (AUC)
from 84.21% to 92.7% and reducing the Equal Error Rate (EER) from 23.36% to
14.4%.
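As a rough illustration of this pipeline, the sketch below maps a keyword to phonemes, looks up representative phoneme vectors, pools them into a text embedding, and scores it against an audio embedding by cosine similarity; the toy lexicon, vector table, pooling, and threshold are stand-in assumptions rather than the paper's implementation.

```python
# Sketch of an audio-compliant text encoder: keyword -> phonemes ->
# representative phoneme vectors (in the paper, extracted from the paired
# audio encoder on speech data) -> pooled text embedding -> cosine score.
import numpy as np

rng = np.random.default_rng(0)
PHONEME_VECS = {p: rng.standard_normal(32) for p in ["HH", "EH", "L", "OW"]}
LEXICON = {"hello": ["HH", "EH", "L", "OW"]}  # stand-in for a real G2P model

def text_embedding(keyword: str) -> np.ndarray:
    phonemes = LEXICON[keyword]                        # G2P step (stubbed)
    vecs = np.stack([PHONEME_VECS[p] for p in phonemes])
    return vecs.mean(axis=0)                           # simple mean pooling

def detect(audio_embedding: np.ndarray, keyword: str, threshold=0.5) -> bool:
    t = text_embedding(keyword)
    cos = audio_embedding @ t / (np.linalg.norm(audio_embedding)
                                 * np.linalg.norm(t))
    return cos >= threshold

# Fake "matching audio" embedding: same space as the text embedding.
audio_emb = text_embedding("hello") + 0.1 * rng.standard_normal(32)
print(detect(audio_emb, "hello"))  # True: both modalities share one space
```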
Few-shot Object Detection with Refined Contrastive Learning
Due to the scarcity of training samples in real-world settings, few-shot object
detection (FSOD) has drawn increasing attention for its ability to quickly
learn new detection concepts from little data. However, misidentifications
still occur owing to the difficulty of distinguishing confusable classes. We
also observe that a high standard deviation of the average precisions reveals
inconsistent detection performance. To this end, we propose a novel FSOD method
with Refined Contrastive Learning (FSRC). A pre-determination component is
introduced to identify the Resemblance Group (GR), the subset of novel classes
that contains confusable classes. Refined contrastive learning (RCL) is then
applied specifically to this group in order to increase the inter-class
distances among its members. At the same time, the detection results become
more uniformly distributed, which further improves performance. Experimental
results on the PASCAL VOC and COCO datasets demonstrate that our method
outperforms the current state of the art. FSRC not only decouples the relevance
of confusable classes to achieve better performance, but also makes predictions
more consistent by reducing the standard deviation of the APs of the detected
classes.
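One way to picture the RCL idea is a supervised contrastive term computed only over RoI features whose labels fall in the resemblance group; the loss form, temperature, and group membership below are illustrative assumptions, not the authors' code.

```python
# Sketch of a contrastive term restricted to a "resemblance group" of
# confusable classes, in the spirit of RCL. Loss form is an assumption.
import torch
import torch.nn.functional as F

def group_contrastive_loss(feats, labels, group, tau=0.1):
    """Supervised contrastive loss over features whose label is in `group`."""
    mask = torch.tensor([l.item() in group for l in labels])
    f, y = F.normalize(feats[mask], dim=1), labels[mask]
    sim = f @ f.t() / tau                               # pairwise similarities
    logits_mask = ~torch.eye(len(f), dtype=torch.bool)  # exclude self-pairs
    pos = (y[:, None] == y[None, :]) & logits_mask      # same-class positives
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(~logits_mask, -1e9), dim=1, keepdim=True)
    # Pull same-class features together, push confusable classes apart.
    return -(log_prob[pos]).mean()

feats = torch.randn(8, 128, requires_grad=True)
labels = torch.tensor([1, 1, 2, 2, 3, 3, 1, 2])
print(group_contrastive_loss(feats, labels, group={1, 2}))  # class 3 excluded
```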
Large-Margin Determinantal Point Processes
Determinantal point processes (DPPs) offer a powerful approach to modeling
diversity in many applications where the goal is to select a diverse subset. We
study the problem of learning the parameters (the kernel matrix) of a DPP from
labeled training data. We make two contributions. First, we show how to
reparameterize a DPP's kernel matrix with multiple kernel functions, thus
enhancing modeling flexibility. Second, we propose a novel parameter estimation
technique based on the principle of large margin separation. In contrast to the
state-of-the-art method of maximum likelihood estimation, our large-margin loss
function explicitly models errors in selecting the target subsets, and it can
be customized to trade off different types of errors (precision vs. recall).
Extensive empirical studies validate our contributions, including applications
on challenging document and video summarization, where flexibility in modeling
the kernel matrix and balancing different errors is indispensable.
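The two contributions can be sketched in a few lines: a DPP kernel reparameterized as a weighted combination of base kernels, scored with the standard DPP log-probability log P(Y) = log det(L_Y) - log det(L + I), and a hinge-style large-margin objective requiring the target subset to outscore a competing subset; the mixture rule, margin, and subsets below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: multiple-kernel DPP parameterization plus a large-margin loss
# that penalizes a wrong subset scored within `margin` of the target.
import numpy as np

def dpp_log_prob(L, subset):
    """log P(Y) = log det(L_Y) - log det(L + I) for a DPP with kernel L."""
    _, logdet_Y = np.linalg.slogdet(L[np.ix_(subset, subset)])
    _, logdet_Z = np.linalg.slogdet(L + np.eye(len(L)))
    return logdet_Y - logdet_Z

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))  # 6 items, 3 features each
# Reparameterize the DPP kernel as a weighted sum of base kernels.
K_lin = X @ X.T
K_rbf = np.exp(-0.5 * np.square(X[:, None] - X[None]).sum(-1))
alpha = np.array([0.3, 0.7])                 # learnable mixture weights
L = alpha[0] * K_lin + alpha[1] * K_rbf

target, competitor = [0, 2, 4], [0, 1, 2]    # ground-truth vs. wrong subset
margin = 1.0
# Hinge loss: zero once the target beats the competitor by the margin.
loss = max(0.0, margin + dpp_log_prob(L, competitor)
                       - dpp_log_prob(L, target))
print(loss)
```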