Grounding semantics in robots for Visual Question Answering
In this thesis I describe an operational implementation of an object detection and description system that is incorporated into an end-to-end Visual Question Answering system, and evaluate it on two visual question answering datasets for compositional language and elementary visual reasoning.
Graph-based Multi-View Fusion and Local Adaptation: Mitigating Within-Household Confusability for Speaker Identification
Speaker identification (SID) in the household scenario (e.g., for smart
speakers) is an important but challenging problem due to the limited number of
labeled (enrollment) utterances, confusable voices, and demographic imbalances.
Conventional speaker recognition systems generalize from a large random sample
of speakers, causing the recognition to underperform for households drawn from
specific cohorts or otherwise exhibiting high confusability. In this work, we
propose a graph-based semi-supervised learning approach to improve
household-level SID accuracy and robustness with locally adapted graph
normalization and multi-signal fusion with multi-view graphs. Unlike other work
on household SID, fairness, and signal fusion, this work focuses on speaker
label inference (scoring) and provides a simple solution to realize
household-specific adaptation and multi-signal fusion without tuning the
embeddings or training a fusion network. Experiments on the VoxCeleb dataset
demonstrate that our approach consistently improves the performance across
households with different customer cohorts and degrees of confusability.Comment: To appear in Interspeech 2022. arXiv admin note: text overlap with
arXiv:2106.0820
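The graph-based label inference described above can be illustrated with a standard semi-supervised label-propagation sketch: build a similarity graph over utterance embeddings, apply symmetric normalization, and iteratively diffuse the enrollment labels. This is a minimal single-view sketch of the general technique, not the paper's exact multi-view formulation; all function and variable names here are illustrative.

```python
import numpy as np

def propagate_labels(embeddings, labels, alpha=0.9, iters=50):
    """Semi-supervised label propagation on a cosine-similarity graph.

    embeddings: (n, d) utterance embeddings.
    labels: (n, k) one-hot rows for enrolled utterances, zero rows for
    unlabeled test utterances.
    Illustrative sketch only -- not the paper's exact method.
    """
    # Build a non-negative cosine-similarity affinity graph.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    W = np.clip(X @ X.T, 0.0, None)
    np.fill_diagonal(W, 0.0)

    # Symmetric graph normalization: S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Diffuse labels: F <- alpha * S @ F + (1 - alpha) * Y.
    Y = labels.astype(float)
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)
```

No embedding fine-tuning or fusion-network training is involved: scoring happens entirely at inference time on the household-local graph, which matches the paper's stated emphasis on label inference rather than representation learning.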
Cross-Paced Representation Learning with Partial Curricula for Sketch-based Image Retrieval
In this paper we address the problem of learning robust cross-domain
representations for sketch-based image retrieval (SBIR). While most SBIR
approaches focus on extracting low- and mid-level descriptors for direct
feature matching, recent works have shown the benefit of learning coupled
feature representations to describe data from two related sources. However,
cross-domain representation learning methods are typically cast into non-convex
minimization problems that are difficult to optimize, leading to unsatisfactory
performance. Inspired by self-paced learning, a learning methodology designed
to overcome convergence issues related to local optima by exploiting the
samples in a meaningful order (i.e. easy to hard), we introduce the cross-paced
partial curriculum learning (CPPCL) framework. Compared with existing
self-paced learning methods which only consider a single modality and cannot
deal with prior knowledge, CPPCL is specifically designed to assess the
learning pace by jointly handling data from dual sources and modality-specific
prior information provided in the form of partial curricula. Additionally,
thanks to the learned dictionaries, we demonstrate that the proposed CPPCL
embeds robust coupled representations for SBIR. Our approach is extensively
evaluated on four publicly available datasets (i.e. CUFS, Flickr15K, QueenMary
SBIR and TU-Berlin Extension datasets), showing superior performance over
competing SBIR methods.
- …