Foley Music: Learning to Generate Music from Videos
In this paper, we introduce Foley Music, a system that can synthesize
plausible music for a silent video clip of people playing musical
instruments. We first identify two key intermediate representations for a
successful video-to-music generator: body keypoints from videos and MIDI events
from audio recordings. We then formulate music generation from videos as a
motion-to-MIDI translation problem. We present a Graph-Transformer framework
that can accurately predict MIDI event sequences in accordance with the body
movements. The MIDI events can then be converted to realistic music using an
off-the-shelf music synthesizer tool. We demonstrate the effectiveness of our
models on videos containing a variety of music performances. Experimental
results show that our model outperforms several existing systems in generating
music that is pleasant to listen to. More importantly, the MIDI representations
are fully interpretable and transparent, thus enabling us to perform music
editing flexibly. We encourage readers to watch the demo video with audio
turned on to experience the results.
Comment: ECCV 2020. Project page: http://foley-music.csail.mit.edu
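As a concrete illustration, here is a minimal sketch (in PyTorch) of the motion-to-MIDI formulation: per-frame body keypoints are encoded into a feature sequence, and a Transformer decoder autoregressively predicts MIDI event tokens conditioned on it. The layer sizes, token vocabulary, and the simple MLP pose encoder (standing in for the paper's graph-based encoder) are assumptions for illustration, not the authors' code.

import torch
import torch.nn as nn

class MotionToMIDI(nn.Module):
    def __init__(self, n_keypoints=25, d_model=256, vocab_size=512):
        super().__init__()
        # Per-frame pose encoder: flattened (x, y) keypoints -> d_model.
        # (A stand-in for the paper's graph-based body-motion encoder.)
        self.pose_encoder = nn.Sequential(
            nn.Linear(n_keypoints * 2, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model),
        )
        self.event_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, keypoints, midi_tokens):
        # keypoints: (B, T_video, n_keypoints, 2); midi_tokens: (B, T_midi)
        memory = self.pose_encoder(keypoints.flatten(2))  # (B, T_video, d_model)
        tgt = self.event_embed(midi_tokens)               # (B, T_midi, d_model)
        causal = nn.Transformer.generate_square_subsequent_mask(midi_tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.head(out)                             # next-event logits

model = MotionToMIDI()
logits = model(torch.randn(2, 100, 25, 2), torch.randint(0, 512, (2, 64)))
print(logits.shape)  # torch.Size([2, 64, 512])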
Visually Guided Sound Source Separation using Cascaded Opponent Filter Network
The objective of this paper is to recover the original component signals from
an audio mixture with the aid of visual cues of the sound sources. This task is
usually referred to as visually guided sound source separation. The proposed
Cascaded Opponent Filter (COF) framework consists of multiple stages, which
recursively refine the source separation. A key element in COF is a novel
opponent filter module that identifies and relocates residual components
between sources. The system is guided by the appearance and motion of the
source, and, for this purpose, we study different representations based on
video frames, optical flows, dynamic images, and their combinations. Finally,
we propose a Sound Source Location Masking (SSLM) technique, which, together
with COF, produces a pixel-level mask of the source location. The entire system
is trained end-to-end using a large set of unlabelled videos. We compare COF
with recent baselines and achieve state-of-the-art performance on three
challenging datasets (MUSIC, A-MUSIC, and A-NATURAL). Project page:
https://ly-zhu.github.io/cof-net
Comment: main paper 14 pages, ref 3 pages, and supp 7 pages. Revised argument in section 3 and
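A minimal, hypothetical sketch of the opponent-filter idea: given two source spectrogram estimates, a small network predicts a soft mask that relocates residual energy from one estimate to the other, so the mixture is conserved. The shapes, layer choices, and single-direction transfer are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class OpponentFilter(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        # Looks at both source estimates jointly and predicts, per
        # time-frequency bin, how much of estimate A actually belongs to B.
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, spec_a, spec_b):
        # spec_a, spec_b: (B, 1, F, T) magnitude-spectrogram estimates.
        transfer = self.net(torch.cat([spec_a, spec_b], dim=1)) * spec_a
        # Relocate the identified residual from source A to source B; a full
        # stage would also run the symmetric B -> A direction.
        return spec_a - transfer, spec_b + transfer

f = OpponentFilter()
a, b = torch.rand(2, 1, 256, 64), torch.rand(2, 1, 256, 64)
a2, b2 = f(a, b)
print((a + b - (a2 + b2)).abs().max())  # total energy is preserved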
Self-supervised object detection from audio-visual correspondence
We tackle the problem of learning object detectors without supervision.
Unlike weakly-supervised object detection, we do not assume
image-level class labels. Instead, we extract a supervisory signal from
audio-visual data, using the audio component to "teach" the object detector.
While this problem is related to sound source localisation, it is considerably
harder because the detector must classify the objects by type, enumerate each
instance of the object, and do so even when the object is silent. We tackle
this problem by first designing a self-supervised framework with a contrastive
objective that jointly learns to classify and localise objects. Then, without
using any supervision, we simply use these self-supervised labels and boxes to
train an image-based object detector. With this, we outperform previous
unsupervised and weakly-supervised detectors for the task of object detection
and sound source localisation. We also show that we can align this detector to
ground-truth classes with as little as one label per pseudo-class, and show how
our method can learn to detect generic objects that go beyond instruments, such
as airplanes and cats.
Comment: Under review
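A hedged sketch of the kind of audio-visual contrastive objective such a framework builds on: embed an image and its accompanying audio, then pull matched pairs together and push mismatched pairs apart with a symmetric InfoNCE loss. The encoders, embedding size, and temperature are placeholders; the paper's actual objective also covers localisation and pseudo-labelling, which this sketch omits.

import torch
import torch.nn.functional as F

def audio_visual_nce(img_emb, aud_emb, temperature=0.07):
    # img_emb, aud_emb: (B, D) embeddings of paired image/audio clips.
    img_emb = F.normalize(img_emb, dim=1)
    aud_emb = F.normalize(aud_emb, dim=1)
    logits = img_emb @ aud_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))        # diagonal = true pairs
    # Symmetric cross-entropy: match image -> audio and audio -> image.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = audio_visual_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())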