Foley Music: Learning to Generate Music from Videos
In this paper, we introduce Foley Music, a system that can synthesize
plausible music for a silent video clip about people playing musical
instruments. We first identify two key intermediate representations for a
successful video to music generator: body keypoints from videos and MIDI events
from audio recordings. We then formulate music generation from videos as a
motion-to-MIDI translation problem. We present a Graph-Transformer framework
that can accurately predict MIDI event sequences in accordance with the body
movements. The MIDI events can then be converted to realistic music using an
off-the-shelf music synthesizer tool. We demonstrate the effectiveness of our
models on videos containing a variety of music performances. Experimental
results show that our model outperforms several existing systems in generating
music that is pleasant to listen to. More importantly, the MIDI representations
are fully interpretable and transparent, thus enabling us to perform music
editing flexibly. We encourage the readers to watch the demo video with audio
turned on to experience the results.
Comment: ECCV 2020. Project page: http://foley-music.csail.mit.edu
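As a rough illustration of the motion-to-MIDI formulation described above, the sketch below pairs a plain encoder-decoder Transformer with flattened body-keypoint features and a discrete MIDI event vocabulary. The module names, dimensions, and the use of a vanilla Transformer (rather than the paper's Graph-Transformer) are assumptions made for illustration only.

```python
# Hypothetical sketch: motion-to-MIDI translation with a standard
# encoder-decoder Transformer (not the paper's Graph-Transformer).
import torch
import torch.nn as nn

NUM_JOINTS, COORD_DIM = 25, 2    # illustrative: 2-D body keypoints per frame
MIDI_VOCAB = 388                 # illustrative MIDI event vocabulary size
D_MODEL = 256

class Motion2MIDI(nn.Module):
    def __init__(self):
        super().__init__()
        self.pose_proj = nn.Linear(NUM_JOINTS * COORD_DIM, D_MODEL)
        self.event_emb = nn.Embedding(MIDI_VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3, batch_first=True)
        self.head = nn.Linear(D_MODEL, MIDI_VOCAB)

    def forward(self, keypoints, midi_events):
        # keypoints: (B, T_frames, NUM_JOINTS * COORD_DIM), flattened pose per frame
        # midi_events: (B, T_events) integer MIDI event tokens (teacher forcing)
        src = self.pose_proj(keypoints)
        tgt = self.event_emb(midi_events)
        t = midi_events.size(1)
        causal = torch.triu(torch.full((t, t), float('-inf')), diagonal=1)
        out = self.transformer(src, tgt, tgt_mask=causal)
        return self.head(out)    # (B, T_events, MIDI_VOCAB) next-event logits

model = Motion2MIDI()
logits = model(torch.randn(2, 120, NUM_JOINTS * COORD_DIM),
               torch.randint(0, MIDI_VOCAB, (2, 64)))
print(logits.shape)              # torch.Size([2, 64, 388])
```

At inference time such a decoder would be run autoregressively, emitting one MIDI event token per step, before synthesizing audio from the resulting MIDI.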
Attack Type Agnostic Perceptual Enhancement of Adversarial Images
Adversarial images are samples that are intentionally modified to deceive
machine learning systems. They are widely used in applications such as CAPTCHAs
to help distinguish legitimate human users from bots. However, the noise
introduced during the adversarial image generation process degrades the
perceptual quality and introduces artificial colours, which also makes it
difficult for humans to classify images and recognise objects. In this letter, we propose
a method to enhance the perceptual quality of these adversarial images. The
proposed method is attack type agnostic and could be used in association with
the existing attacks in the literature. Our experiments show that the generated
adversarial images have lower Euclidean distances to the original images while
maintaining the same adversarial attack performance. Distances are reduced by
5.88% to 41.27%, with an average reduction of 22%, across the different attack
and network types.
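The reported reductions can be read as the Euclidean distance between a clean image and its adversarial version shrinking after enhancement. The toy sketch below shows one way such a reduction percentage might be computed; the random images, the scaling used as a stand-in "enhancement", and the exact distance definition are illustrative assumptions, not the paper's procedure.

```python
# Illustrative only: how a reduction in Euclidean distance between clean and
# adversarial images could be measured before and after enhancement.
import numpy as np

def l2_distance(clean: np.ndarray, adversarial: np.ndarray) -> float:
    """Euclidean distance between two images of identical shape."""
    return float(np.linalg.norm(clean.astype(np.float64) - adversarial.astype(np.float64)))

def reduction_percent(d_before: float, d_after: float) -> float:
    """Percentage by which the distance shrank after perceptual enhancement."""
    return 100.0 * (d_before - d_after) / d_before

# Toy numbers (not from the paper): a hypothetical attack and a stand-in
# "enhanced" variant that merely shrinks the perturbation.
rng = np.random.default_rng(0)
clean = rng.random((32, 32, 3))
adv_raw = clean + rng.normal(0, 0.05, clean.shape)
adv_enhanced = clean + 0.7 * (adv_raw - clean)

d_raw = l2_distance(clean, adv_raw)
d_enh = l2_distance(clean, adv_enhanced)
print(f"reduction: {reduction_percent(d_raw, d_enh):.2f}%")  # 30.00% by construction here
```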
FoleyGen: Visually-Guided Audio Generation
Recent advancements in audio generation have been spurred by the evolution of
large-scale deep learning models and expansive datasets. However, the task of
video-to-audio (V2A) generation continues to be a challenge, principally
because of the intricate relationship between the high-dimensional visual and
auditory data, and the challenges associated with temporal synchronization. In
this study, we introduce FoleyGen, an open-domain V2A generation system built
on a language modeling paradigm. FoleyGen leverages an off-the-shelf neural
audio codec for bidirectional conversion between waveforms and discrete tokens.
The generation of audio tokens is facilitated by a single Transformer model,
which is conditioned on visual features extracted from a visual encoder. A
prevalent problem in V2A generation is the misalignment of generated audio with
the visible actions in the video. To address this, we explore three novel
visual attention mechanisms. We further undertake an exhaustive evaluation of
multiple visual encoders, each pretrained on either single-modal or multi-modal
tasks. The experimental results on the VGGSound dataset show that our proposed
FoleyGen outperforms previous systems across all objective metrics and human
evaluations.
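To make the alignment problem concrete, the sketch below shows one plausible form a visual attention constraint could take: a temporally causal cross-attention mask that stops each audio token from attending to future video frames. This is an assumed example, not necessarily one of the three mechanisms explored in FoleyGen; the function name and the tokens-per-frame ratio are hypothetical.

```python
# Hedged sketch of one possible visual-attention constraint: each audio token
# may only cross-attend to video frames at or before its own time step.
import torch

def temporally_causal_mask(num_audio_tokens: int, num_frames: int,
                           tokens_per_frame: float) -> torch.Tensor:
    """Boolean mask (True = blocked) for cross-attention from audio tokens to frames."""
    audio_t = torch.arange(num_audio_tokens).float() / tokens_per_frame  # token -> frame time
    frame_t = torch.arange(num_frames).float()
    # Block attention to any frame that lies in the future of the audio token.
    return frame_t[None, :] > audio_t[:, None]

mask = temporally_causal_mask(num_audio_tokens=8, num_frames=4, tokens_per_frame=2.0)
print(mask.int())
# Row i (an audio token) may attend to frames up to floor(i / 2); later frames
# are masked. A boolean mask like this can be passed as attn_mask (True = not
# allowed) to torch.nn.MultiheadAttention for the cross-attention layers.
```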
I Hear Your True Colors: Image Guided Audio Generation
We propose Im2Wav, an image guided open-domain audio generation system. Given
an input image or a sequence of images, Im2Wav generates a semantically
relevant sound. Im2Wav is based on two Transformer language models that
operate over a hierarchical discrete audio representation obtained from a
VQ-VAE based model. We first produce a low-level audio representation using a
language model. Then, we upsample the audio tokens using an additional language
model to generate a high-fidelity audio sample. We use the rich semantics of a
pre-trained CLIP embedding as a visual representation to condition the language
model. In addition, to steer the generation process towards the conditioning
image, we apply the classifier-free guidance method. Results suggest that
Im2Wav significantly outperforms the evaluated baselines in both fidelity and
relevance evaluation metrics. Additionally, we provide an ablation study to
better assess the impact of each of the method components on overall
performance. Lastly, to better evaluate image-to-audio models, we propose an
out-of-domain image dataset, denoted as ImageHear. ImageHear can be used as a
benchmark for evaluating future image-to-audio models. Samples and code can be
found inside the manuscript.
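Classifier-free guidance is commonly implemented by extrapolating from unconditional toward conditional predictions at sampling time. The sketch below shows the usual logit-space form; the vocabulary size, tensors, and guidance scale are placeholders, and Im2Wav's exact formulation may differ.

```python
# Minimal sketch of classifier-free guidance on next-token logits, as commonly
# used to steer generation toward a conditioning image. Values are placeholders.
import torch

def cfg_logits(cond_logits: torch.Tensor,
               uncond_logits: torch.Tensor,
               guidance_scale: float) -> torch.Tensor:
    """Extrapolate from unconditional toward image-conditional logits."""
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)

vocab = 1024                                  # illustrative audio-token vocabulary
cond = torch.randn(1, vocab)                  # logits given the CLIP image embedding
uncond = torch.randn(1, vocab)                # logits with the conditioning dropped
next_token = torch.distributions.Categorical(
    logits=cfg_logits(cond, uncond, guidance_scale=3.0)).sample()
print(next_token)
```

A scale of 1.0 recovers ordinary conditional sampling; larger scales push the samples harder toward the conditioning image at some cost in diversity.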
Taming Visually Guided Sound Generation
Recent advances in visually-induced audio generation are based on sampling
short, low-fidelity, and one-class sounds. Moreover, sampling 1 second of audio
from the state-of-the-art model takes minutes on a high-end GPU. In this work,
we propose a single model capable of generating visually relevant,
high-fidelity sounds prompted with a set of frames from open-domain videos in
less time than it takes to play it on a single GPU. We train a transformer to
sample a new spectrogram from the pre-trained spectrogram codebook given the
set of video features. The codebook is obtained using a variant of VQGAN
trained to produce a compact sampling space with a novel spectrogram-based
perceptual loss. The generated spectrogram is transformed into a waveform using
a window-based GAN that significantly speeds up generation. Considering the
lack of metrics for automatic evaluation of generated spectrograms, we also
build a family of metrics called FID and MKL. These metrics are based on a
novel sound classifier, called Melception, and designed to evaluate the
fidelity and relevance of open-domain samples. Both qualitative and
quantitative studies are conducted on small- and large-scale datasets to
evaluate the fidelity and relevance of generated samples. We also compare our
model to the state-of-the-art and observe a substantial improvement in quality,
size, and computation time. Code, demo, and samples: v-iashin.github.io/SpecVQGAN
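The FID-style metric mentioned above follows the usual Frechet-distance recipe: fit Gaussians to classifier embeddings of real and generated audio and compare them. The sketch below uses random feature vectors as stand-ins for Melception activations; the feature dimensionality and sample counts are arbitrary assumptions.

```python
# Hedged sketch of a Frechet-distance ("FID"-style) score over classifier
# embeddings; random features stand in for Melception activations here.
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Frechet distance between Gaussians fit to two sets of feature vectors."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):          # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 64))   # stand-in classifier features
fake = rng.normal(0.3, 1.1, size=(500, 64))
print(f"FID-style score: {frechet_distance(real, fake):.3f}")
```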
Cross-modal Generative Model for Visual-Guided Binaural Stereo Generation
Binaural stereo audio is recorded by imitating the way the human ear receives
sound, which provides people with an immersive listening experience. Existing
approaches leverage autoencoders and directly exploit visual spatial
information to synthesize binaural stereo, resulting in a limited
representation of visual guidance. For the first time, we propose a visually
guided generative adversarial approach for generating binaural stereo audio
from mono audio. Specifically, we develop a Stereo Audio Generation Model
(SAGM), which uses shared spatio-temporal visual information to guide the
generator and the discriminator separately. The shared visual information is
updated alternately during adversarial training, allowing the generator and
the discriminator to each deliver its own guidance through the shared visual
representation. The proposed method learns bidirectional
complementary visual information, which facilitates the expression of visual
guidance in generation. In addition, spatial perception is a crucial attribute
of binaural stereo audio, and thus the evaluation of stereo spatial perception
is essential. However, existing metrics fail to measure the spatial perception
of audio. To this end, we propose, for the first time, a metric that measures
the spatial perception of audio. The proposed metric captures both the
magnitude and the direction of spatial perception along the temporal dimension.
Further, given what it measures, it can to some extent substitute for demanding
user studies. The proposed method achieves
state-of-the-art performance on 2 datasets and 5 evaluation metrics.
Qualitative experiments and user studies demonstrate that the method generates
spatially realistic stereo audio.
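As a concrete, deliberately simple example of measuring the magnitude and direction of spatial perception over time, the sketch below computes a framewise inter-channel level difference on a stereo signal. This is an assumed illustration of the general idea, not the metric proposed in the paper.

```python
# Hedged illustration (not the paper's metric): a framewise inter-channel level
# difference whose sign gives the left/right direction and whose size gives the
# strength of the spatial impression over time.
import numpy as np

def framewise_level_difference(left: np.ndarray, right: np.ndarray,
                               frame_len: int = 1024) -> np.ndarray:
    """Per-frame dB difference; positive values lean left, negative lean right."""
    n_frames = len(left) // frame_len
    diffs = np.empty(n_frames)
    for i in range(n_frames):
        seg = slice(i * frame_len, (i + 1) * frame_len)
        rms_l = np.sqrt(np.mean(left[seg] ** 2) + 1e-12)
        rms_r = np.sqrt(np.mean(right[seg] ** 2) + 1e-12)
        diffs[i] = 20.0 * np.log10(rms_l / rms_r)
    return diffs

# Toy stereo clip: a tone that is louder in the left channel.
t = np.linspace(0, 1, 16000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
ild = framewise_level_difference(1.0 * tone, 0.5 * tone)
print(ild[:3])   # roughly +6 dB per frame -> source perceived to the left
```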
Generating Visually Aligned Sound from Videos
We focus on the task of generating sound from natural videos, and the sound
should be both temporally and content-wise aligned with visual signals. This
task is extremely challenging because some sounds generated outside the
camera's view cannot be inferred from the video content. The model may be forced to learn
an incorrect mapping between visual content and these irrelevant sounds. To
address this challenge, we propose a framework named REGNET. In this framework,
we first extract appearance and motion features from video frames to better
distinguish the object that emits sound from complex background information. We
then introduce an innovative audio forwarding regularizer that directly
considers the real sound as input and outputs bottlenecked sound features.
Using both visual and bottlenecked sound features for sound prediction during
training provides stronger supervision. The audio
forwarding regularizer can control the irrelevant sound component and thus
prevent the model from learning an incorrect mapping between video frames and
sounds emitted by off-screen objects. During testing, the
audio forwarding regularizer is removed to ensure that REGNET can produce
purely aligned sound only from visual features. Extensive evaluations based on
Amazon Mechanical Turk demonstrate that our method significantly improves both
temporal and content-wise alignment. Remarkably, our generated sound can fool
human listeners with a 68.12% success rate. Code and pre-trained models are publicly
available at https://github.com/PeihaoChen/regnet
Comment: Published in IEEE Transactions on Image Processing, 2020. Code,
pre-trained models and demo video: https://github.com/PeihaoChen/regnet
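The audio forwarding regularizer can be pictured as a heavily bottlenecked side channel that carries the ground-truth sound during training and is switched off at test time. The sketch below captures that idea; the modules, dimensions, and names are illustrative assumptions, not REGNET's actual architecture.

```python
# Hedged sketch of the audio-forwarding-regularizer idea: a bottlenecked
# encoding of the real sound is available during training only, so that test-time
# prediction relies on vision alone.
import torch
import torch.nn as nn

VIS_DIM, AUD_DIM, BOTTLENECK, OUT_DIM = 512, 128, 16, 80   # illustrative sizes

class SoundPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.audio_bottleneck = nn.Linear(AUD_DIM, BOTTLENECK)   # forwarding regularizer
        self.predictor = nn.GRU(VIS_DIM + BOTTLENECK, OUT_DIM, batch_first=True)

    def forward(self, visual_feats, real_audio_feats=None):
        # visual_feats: (B, T, VIS_DIM); real_audio_feats: (B, T, AUD_DIM) or None
        if real_audio_feats is not None:            # training: use bottlenecked audio
            side = self.audio_bottleneck(real_audio_feats)
        else:                                       # testing: regularizer removed
            side = visual_feats.new_zeros(*visual_feats.shape[:2], BOTTLENECK)
        out, _ = self.predictor(torch.cat([visual_feats, side], dim=-1))
        return out                                  # e.g. predicted spectrogram frames

model = SoundPredictor()
train_out = model(torch.randn(2, 50, VIS_DIM), torch.randn(2, 50, AUD_DIM))
test_out = model(torch.randn(2, 50, VIS_DIM))
print(train_out.shape, test_out.shape)              # both (2, 50, 80)
```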