A Neural Multi-sequence Alignment TeCHnique (NeuMATCH)
The alignment of heterogeneous sequential data (video to text) is an
important and challenging problem. Standard techniques for this task, including
Dynamic Time Warping (DTW) and Conditional Random Fields (CRFs), suffer from
inherent drawbacks. Mainly, the Markov assumption implies that, given the
immediate past, future alignment decisions are independent of further history.
The separation between similarity computation and alignment decision also
prevents end-to-end training. In this paper, we propose an end-to-end neural
architecture where alignment actions are implemented as moving data between
stacks of Long Short-term Memory (LSTM) blocks. This flexible architecture
supports a large variety of alignment tasks, including one-to-one, one-to-many,
skipping unmatched elements, and (with extensions) non-monotonic alignment.
Extensive experiments on semi-synthetic and real datasets show that our
algorithm outperforms state-of-the-art baselines.
Comment: Accepted at CVPR 2018 (Spotlight). The arXiv file includes the paper and the supplemental material.
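To make the stack-based alignment idea more concrete, the following is a minimal, illustrative sketch rather than the published NeuMATCH architecture. It only assumes the general mechanism described in the abstract: both sequences are encoded with LSTMs, and a small policy network over the current "stack tops" chooses among MATCH, POP-A, and POP-B actions, which covers one-to-one matching and skipping of unmatched elements. All dimensions, the greedy decoding loop, and the batch-size-1 assumption are hypothetical choices for the example.

# Toy stack-based aligner (illustrative only, not the paper's model).
import torch
import torch.nn as nn

class ToyStackAligner(nn.Module):
    def __init__(self, dim_a, dim_b, hidden=128, n_actions=3):
        super().__init__()
        self.enc_a = nn.LSTM(dim_a, hidden, batch_first=True)
        self.enc_b = nn.LSTM(dim_b, hidden, batch_first=True)
        # Action scores computed from the concatenated stack-top states.
        self.policy = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, seq_a, seq_b):
        # Encode both sequences; elements are consumed left to right,
        # so position i / j plays the role of the current stack top.
        h_a, _ = self.enc_a(seq_a)   # (B, Ta, H)
        h_b, _ = self.enc_b(seq_b)   # (B, Tb, H)
        i = j = 0
        actions = []
        # Greedy decoding: match the two tops, or skip an unmatched element.
        while i < h_a.size(1) and j < h_b.size(1):
            state = torch.cat([h_a[:, i], h_b[:, j]], dim=-1)
            act = self.policy(state).argmax(dim=-1).item()  # assumes batch size 1
            actions.append(act)
            if act == 0:      # MATCH: align a_i with b_j, advance both
                i += 1
                j += 1
            elif act == 1:    # POP-A: skip unmatched element of sequence A
                i += 1
            else:             # POP-B: skip unmatched element of sequence B
                j += 1
        return actions

# Usage with random features standing in for video-clip and sentence embeddings.
model = ToyStackAligner(dim_a=512, dim_b=300)
video = torch.randn(1, 8, 512)   # 8 video clip features
text = torch.randn(1, 5, 300)    # 5 sentence features
print(model(video, text))

In the actual paper the action sequence is supervised end-to-end from ground-truth alignments; the greedy loop here is only meant to show how alignment decisions can be expressed as data movement between encoded stacks.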
Meta-Personalizing Vision-Language Models to Find Named Instances in Video
Large-scale vision-language models (VLMs) have shown impressive results for
language-guided search applications. While these models allow category-level
queries, they currently struggle with personalized searches for moments in a
video where a specific object instance such as "My dog Biscuit" appears. We
present the following three contributions to address this problem. First, we
describe a method to meta-personalize a pre-trained VLM, i.e., learning how to
learn to personalize a VLM at test time to search in video. Our method extends
the VLM's token vocabulary by learning novel word embeddings specific to each
instance. To capture only instance-specific features, we represent each
instance embedding as a combination of shared and learned global category
features. Second, we propose to learn such personalization without explicit
human supervision. Our approach automatically identifies moments of named
visual instances in video using transcripts and vision-language similarity in
the VLM's embedding space. Finally, we introduce This-Is-My, a personal video
instance retrieval benchmark. We evaluate our approach on This-Is-My and
DeepFashion2 and show that we obtain a 15% relative improvement over the state
of the art on the latter dataset.
Comment: Accepted to CVPR 2023. Project webpage: https://danielchyeh.github.io/metaper
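The abstract's core idea of representing a named instance as a combination of shared category features plus instance-specific information can be illustrated with a small sketch. This is not the paper's implementation: the category embeddings, dimensions, and the softmax-mixing-plus-residual parameterization below are assumptions made for the example, standing in for whatever frozen VLM text tower and personalization scheme the authors actually use.

# Illustrative sketch: a new vocabulary token for a named instance, built from
# frozen category embeddings (e.g. "dog", "pet") plus a learned residual.
import torch
import torch.nn as nn

class InstanceToken(nn.Module):
    def __init__(self, category_embeddings, embed_dim=512):
        super().__init__()
        # Frozen embeddings of shared category words (K, D); not trained.
        self.register_buffer("category_embeddings", category_embeddings)
        # Learned mixing weights over the category features, plus a residual
        # vector capturing what is specific to this instance ("Biscuit").
        self.mix = nn.Parameter(torch.zeros(category_embeddings.size(0)))
        self.residual = nn.Parameter(torch.zeros(embed_dim))

    def forward(self):
        weights = self.mix.softmax(dim=0)                               # (K,)
        shared = (weights[:, None] * self.category_embeddings).sum(0)   # (D,)
        return shared + self.residual    # embedding for the new instance token

# Toy usage: three hypothetical category embeddings of dimension 512.
cats = torch.randn(3, 512)
token = InstanceToken(cats)
new_embedding = token()           # would be appended to the VLM's token table
print(new_embedding.shape)        # torch.Size([512])

Training such a token against automatically mined moments (via transcripts and vision-language similarity, as the abstract describes) would then let the frozen VLM answer queries that mention the new instance word.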