Audio-Driven Dubbing for User Generated Contents via Style-Aware Semi-Parametric Synthesis
Existing automated dubbing methods are usually designed for Professionally Generated Content (PGC) production, which requires massive amounts of training data and training time to learn a person-specific audio-video mapping. In this paper, we
investigate an audio-driven dubbing method that is more feasible for User
Generated Content (UGC) production. There are two unique challenges in designing a method for UGC: 1) the appearances of speakers are diverse and arbitrary, since the method needs to generalize across users; 2) the available video data for each speaker are very limited. To tackle these challenges, we first
introduce a new Style Translation Network to integrate the speaking style of
the target and the speaking content of the source via a cross-modal AdaIN
module. It enables our model to quickly adapt to a new speaker. Then, we
further develop a semi-parametric video renderer, which takes full advantage of
the limited training data of the unseen speaker via a video-level
retrieve-warp-refine pipeline. Finally, we propose a temporal regularization for the semi-parametric renderer that generates more temporally continuous videos. Extensive
experiments show that our method generates videos that accurately preserve
various speaking styles while requiring considerably less training data and training time than existing methods. In addition, our method achieves faster inference than most recent methods.
Comment: TCSVT 202
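
The cross-modal AdaIN module named in this abstract follows the standard adaptive instance normalization idea: content features are normalized and then re-scaled and re-shifted with statistics predicted from a style code. A minimal PyTorch sketch under that reading is given below; the module and variable names are illustrative, not the authors' implementation.

import torch
import torch.nn as nn

class CrossModalAdaIN(nn.Module):
    """Minimal sketch of a cross-modal AdaIN block (illustrative, not the paper's code).

    Content features (e.g., from the source audio encoder) are re-normalized with
    scale/shift parameters predicted from a target-speaker style code, so the output
    carries the source speaking content in the target speaking style.
    """

    def __init__(self, content_dim: int, style_dim: int):
        super().__init__()
        # Predict per-channel scale (gamma) and shift (beta) from the style code.
        self.to_gamma_beta = nn.Linear(style_dim, 2 * content_dim)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # content: (B, T, C) content features; style: (B, style_dim) speaker style code
        gamma, beta = self.to_gamma_beta(style).chunk(2, dim=-1)    # each (B, C)
        mu = content.mean(dim=1, keepdim=True)
        sigma = content.std(dim=1, keepdim=True) + 1e-5
        normalized = (content - mu) / sigma                         # instance-normalize content
        return gamma.unsqueeze(1) * normalized + beta.unsqueeze(1)  # re-style with target stats

Because the style code is a small vector predicted from a few reference frames, a block of this shape can adapt to a new speaker without retraining the whole generator, which is consistent with the fast-adaptation claim above.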
A Survey on Multimodal Large Language Models
The Multimodal Large Language Model (MLLM) has recently emerged as a new research hotspot; it uses powerful Large Language Models (LLMs) as a brain to perform multimodal tasks. The surprising emergent capabilities of MLLMs, such
as writing stories based on images and OCR-free math reasoning, are rare in
traditional methods, suggesting a potential path to artificial general
intelligence. In this paper, we aim to trace and summarize the recent progress
of MLLM. First of all, we present the formulation of MLLM and delineate its
related concepts. Then, we discuss the key techniques and applications,
including Multimodal Instruction Tuning (M-IT), Multimodal In-Context Learning
(M-ICL), Multimodal Chain of Thought (M-CoT), and LLM-Aided Visual Reasoning
(LAVR). Finally, we discuss existing challenges and point out promising
research directions. Since the era of MLLMs has only just begun, we will keep updating this survey and hope it can inspire more research.
An associated GitHub repository collecting the latest papers is available at https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models.
Comment: Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
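
As an illustration of the first technique listed, Multimodal Instruction Tuning pairs an image with an instruction and a target response and fine-tunes the MLLM on such triples. The Python sketch below shows a hypothetical sample format and prompt builder; the field names and the <image> placeholder token are assumptions for illustration, not any specific dataset's schema.

from dataclasses import dataclass

@dataclass
class MultimodalInstructionSample:
    """Illustrative shape of a Multimodal Instruction Tuning (M-IT) training sample.

    Field names are hypothetical; the datasets surveyed in the paper differ in detail,
    but generally pair an image with an instruction and a target response.
    """
    image_path: str       # visual input fed to the vision encoder
    instruction: str      # natural-language task description
    response: str         # target answer the MLLM is tuned to produce

def build_prompt(sample: MultimodalInstructionSample, image_token: str = "<image>") -> str:
    # A common pattern: a placeholder token marks where visual features are injected
    # into the LLM's input sequence; the instruction follows as ordinary text.
    return f"{image_token}\n{sample.instruction}"

if __name__ == "__main__":
    sample = MultimodalInstructionSample(
        image_path="example.jpg",
        instruction="Describe the chart and state the largest value.",
        response="The bar chart shows quarterly revenue; Q4 is the largest at 4.2M.",
    )
    print(build_prompt(sample))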
Multi-modal Queried Object Detection in the Wild
We introduce MQ-Det, an efficient architecture and pre-training strategy designed to utilize both textual descriptions, with their open-set generalization, and visual exemplars, with their rich description granularity, as category queries (namely, Multi-modal Queried object Detection) for real-world detection with both open-vocabulary categories and various granularities. MQ-Det incorporates vision
queries into existing well-established language-queried-only detectors. A plug-and-play gated class-scalable perceiver module is proposed on top of the frozen detector to augment the category text with class-wise visual information. To address the learning-inertia problem introduced by the frozen detector, we propose a vision-conditioned masked language prediction strategy. MQ-Det's simple yet effective architecture and training strategy are compatible with most language-queried object detectors, yielding versatile applications.
Experimental results demonstrate that multi-modal queries largely boost
open-world detection. For instance, MQ-Det significantly improves the
state-of-the-art open-set detector GLIP by +7.8% zero-shot AP on the LVIS benchmark and by an average of +6.3% AP on 13 few-shot downstream tasks, while requiring merely 3% of GLIP's pre-training time. Code is available at https://github.com/YifanXu74/MQ-Det.
Comment: Under review
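
One way to picture the gated class-wise fusion described above is as a cross-attention layer whose output is added to the frozen detector's category-text embeddings through a learnable gate. The PyTorch sketch below is a schematic under that reading, not MQ-Det's released code; class and parameter names are illustrative.

import torch
import torch.nn as nn

class GatedVisualQueryFusion(nn.Module):
    """Schematic gated fusion of visual exemplar queries into category-text features.

    Illustrative reading of a gated, class-wise perceiver: category-text embeddings
    cross-attend to encoded visual exemplars, and the result is added back through a
    gate so the frozen language-queried detector is not disturbed early in training.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # starts closed; opens during training

    def forward(self, text_queries: torch.Tensor, vision_exemplars: torch.Tensor) -> torch.Tensor:
        # text_queries:     (B, num_classes, dim) category-text embeddings from the frozen detector
        # vision_exemplars: (B, num_exemplars, dim) encoded visual exemplars for the categories
        attended, _ = self.cross_attn(text_queries, vision_exemplars, vision_exemplars)
        return text_queries + torch.tanh(self.gate) * attended  # gated residual augmentation

Initializing the gate at zero keeps the frozen detector's original behavior at the start of training, which matches the plug-and-play intent described in the abstract.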
Woodpecker: Hallucination Correction for Multimodal Large Language Models
Hallucination is a big shadow hanging over the rapidly evolving Multimodal Large Language Models (MLLMs), referring to the phenomenon in which the generated text is inconsistent with the image content. To mitigate hallucinations, existing studies mainly resort to instruction tuning, which requires retraining the models with specific data. In this paper, we pave a different way by introducing a training-free method named Woodpecker. Like a woodpecker healing trees, it picks out and corrects hallucinations in the
generated text. Concretely, Woodpecker consists of five stages: key concept
extraction, question formulation, visual knowledge validation, visual claim
generation, and hallucination correction. Implemented in a post-remedy manner,
Woodpecker can easily serve different MLLMs, while being interpretable by
accessing intermediate outputs of the five stages. We evaluate Woodpecker both
quantitatively and qualitatively and show the huge potential of this new
paradigm. On the POPE benchmark, our method obtains a 30.66%/24.33% improvement
in accuracy over the baseline MiniGPT-4/mPLUG-Owl. The source code is released
at https://github.com/BradyFU/Woodpecker.
Comment: 16 pages, 7 figures. Code website: https://github.com/BradyFU/Woodpecker
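
The five stages read naturally as a sequential, training-free pipeline that keeps its intermediate outputs for interpretability. The Python skeleton below only mirrors the stage ordering given in the abstract; the `tools` bundle and its method names are hypothetical stand-ins for the LLM and visual experts the actual system uses.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CorrectionTrace:
    """Intermediate outputs of each stage, kept for interpretability."""
    key_concepts: List[str] = field(default_factory=list)
    questions: List[str] = field(default_factory=list)
    visual_knowledge: Dict[str, str] = field(default_factory=dict)
    visual_claims: List[str] = field(default_factory=list)
    corrected_text: str = ""

def correct_hallucinations(image, generated_text: str, tools) -> CorrectionTrace:
    """Sketch of the five-stage, training-free correction flow described in the abstract.

    `tools` is a hypothetical bundle of callables (an LLM plus visual experts);
    the real system's prompts and models are not reproduced here.
    """
    trace = CorrectionTrace()
    # 1) Key concept extraction: pull the objects/attributes mentioned in the text.
    trace.key_concepts = tools.extract_key_concepts(generated_text)
    # 2) Question formulation: turn each concept into verifiable visual questions.
    trace.questions = tools.formulate_questions(trace.key_concepts)
    # 3) Visual knowledge validation: answer the questions against the image itself.
    trace.visual_knowledge = {q: tools.answer_on_image(image, q) for q in trace.questions}
    # 4) Visual claim generation: convert validated answers into grounded claims.
    trace.visual_claims = tools.claims_from_answers(trace.visual_knowledge)
    # 5) Hallucination correction: rewrite the text so it is consistent with the claims.
    trace.corrected_text = tools.rewrite(generated_text, trace.visual_claims)
    return trace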