Reprogramming Audio-driven Talking Face Synthesis into Text-driven
In this paper, we propose a method to reprogram pre-trained audio-driven
talking face synthesis models so that they can operate with text inputs.
Since an audio-driven talking face synthesis model takes speech audio as
input, generating a talking avatar with the desired speech content requires
the speech to be recorded in advance. However, recording audio for every
video to be generated is burdensome. To alleviate this problem, we propose a
novel method that embeds input text into the learned audio latent space of
the pre-trained audio-driven model. To this end, we design a Text-to-Audio
Embedding Module (TAEM) which learns to map a given text input to audio
latent features. Moreover, to model the speaker characteristics present in
the audio features, we inject a visual speaker embedding, obtained from a
single face image, into the TAEM. After training, we can synthesize talking
face videos from either text or speech audio.
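
A minimal sketch of how such a text-to-audio embedding module could be wired up, assuming a frozen pre-trained audio encoder whose latent features drive the face generator; all module names, layer choices, and dimensions below are illustrative assumptions, not the paper's implementation:

    import torch
    import torch.nn as nn

    class TAEMSketch(nn.Module):
        """Illustrative text-to-audio embedding module (TAEM).

        Maps token embeddings of the input text into the latent space of a
        pre-trained audio encoder, conditioned on a visual speaker embedding
        (e.g. from a face-image encoder). Dimensions are assumptions.
        """

        def __init__(self, vocab_size=256, d_model=256, audio_latent_dim=512):
            super().__init__()
            self.text_embed = nn.Embedding(vocab_size, d_model)
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
                num_layers=3,
            )
            # The visual speaker embedding is injected by concatenation
            # before the final projection into the audio latent space.
            self.proj = nn.Linear(d_model * 2, audio_latent_dim)

        def forward(self, text_tokens, speaker_emb):
            # text_tokens: (B, T) int64; speaker_emb: (B, d_model)
            h = self.encoder(self.text_embed(text_tokens))       # (B, T, d)
            spk = speaker_emb.unsqueeze(1).expand(-1, h.size(1), -1)
            return self.proj(torch.cat([h, spk], dim=-1))        # (B, T, D)

    # Training would regress these outputs onto the frozen audio encoder's
    # latents, e.g. F.mse_loss(taem(text, spk), audio_latents.detach()).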
Developing a Text to Speech System for Dzongkha
Text to Speech plays a vital role in imparting information to people who have difficulty reading text but can understand spoken language. In Bhutan, many people fall into this category with respect to the national language, Dzongkha, and a system of this kind will benefit the community. It will also advance the language's digital evolution and help narrow the digital gap, and it is especially important for people with visual impairment. Text to Speech systems are widely used, from talking bots to news readers and announcement systems. This paper presents an attempt to develop a working model of a Text to Speech system for the Dzongkha language. It also presents the development of a transcription (grapheme) table for phonetic transcription from Dzongkha text to its equivalent phone set. The transcription tables for both consonants and vowels have been prepared to facilitate compatibility in computing. A total of 3000 sentences have been manually transcribed and recorded with a single male voice. The speech synthesis is based on a statistical method with concatenative speech generation on the FESTIVAL platform. The model is generated using CLUSTERGEN and CLUNITS, two variants from the FESTVOX voice-building tools for FESTIVAL. This system prototype is the first of its kind for the Dzongkha language.
Keywords: Natural Language Processing (NLP), Dzongkha, Text to Speech (TTS) system, statistical speech synthesis, phoneme, corpus, transcription
DOI: 10.7176/CEIS/12-1-04
Publication date: January 31st 2021
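
At its core, the grapheme table described above is a lookup from Dzongkha graphemes to phone symbols for the Festival front end. A minimal sketch of that lookup step, with placeholder entries rather than the paper's actual table:

    # Hypothetical grapheme-to-phone lookup for a Festival-style front end.
    # The entries below are placeholders; the paper's table maps actual
    # Dzongkha (Tibetan-script) consonant and vowel graphemes to phones.
    G2P_TABLE = {
        "KA": "k",    # placeholder entry
        "KHA": "kh",  # placeholder entry
        "I": "i",     # placeholder entry
    }

    def transcribe(graphemes):
        """Look up each grapheme, keeping unknowns visible for debugging."""
        return [G2P_TABLE.get(g, "<unk:%s>" % g) for g in graphemes]

    print(transcribe(["KA", "I"]))  # -> ['k', 'i']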
Bilingual Multimodal System for Text-to-Audiovisual Speech and Sign Language Synthesis
We present the conceptual model, architecture, and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a 3D model of a human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a 3D model of human hands and upper body; and a multimodal user interface integrating all the components for the generation of audio, visual, and signed speech. The proposed system automatically translates input textual information into speech (audio information) and gestures (video information), fuses this information, and outputs it as multimedia. A user can input any grammatically correct text in Russian or Czech; the text processor analyzes it to detect sentences, words, and characters. This textual information is then converted into symbols of a sign language notation. We apply the international Hamburg Notation System (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, and place and type of movement. Based on these, the 3D signing avatar displays the elements of the sign language. The virtual 3D model of the human head and upper body was created using the VRML virtual reality modeling language and is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is universal in that it serves both regular users and people with disabilities (in particular, the hard-of-hearing and visually impaired), providing multimedia output (via audio and visual modalities) of input textual information.
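
The system reads as a pipeline: text analysis, conversion to HamNoSys notation, and parallel audio, visual, and sign rendering. A schematic sketch of that flow, in which every function is a hypothetical stand-in for a real component:

    # Schematic of the described text-to-audiovisual-and-sign pipeline;
    # every function below is a hypothetical stand-in for a real component.

    def analyze_text(text):
        """Text processor: detect sentences, words and characters."""
        return text.split()

    def to_hamnosys(words):
        """Convert words to HamNoSys symbols describing hand shape,
        orientation, and place/type of movement. Placeholder mapping."""
        return ["<hamnosys:%s>" % w for w in words]

    def synthesize(text):
        words = analyze_text(text)
        audio = "<tts audio for: %s>" % text          # text-to-speech
        talking_head = "<3D head animation>"          # audio-visual speech
        signs = to_hamnosys(words)                    # 3D signing avatar
        return audio, talking_head, signs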
Leveraging audio-visual speech effectively via deep learning
The rising popularity of neural networks, combined with the recent proliferation of online audio-visual media, has led to a revolution in the way machines encode, recognize, and generate acoustic and visual speech. Despite the ubiquity of naturally paired audio-visual data, only a limited number of works have applied recent advances in deep learning to leverage the duality between audio and video within this domain. This thesis considers the use of neural networks to learn from large unlabelled datasets of audio-visual speech to enable new practical applications. We begin by training a visual speech encoder that predicts latent features extracted from the corresponding audio on a large unlabelled audio-visual corpus. We apply the trained visual encoder to improve performance on lip reading in real-world scenarios. Following this, we extend the idea of video learning from audio by training a model to synthesize raw speech directly from raw video, without the need for text transcriptions. Remarkably, we find that this framework is capable of reconstructing intelligible audio from videos of new, previously unseen speakers. We also experiment with a separate speech reconstruction framework, which leverages recent advances in sequence modeling and spectrogram inversion to improve the realism of the generated speech. We then apply our research in video-to-speech synthesis to advance the state-of-the-art in audio-visual speech enhancement, by proposing a new vocoder-based model that performs particularly well under extremely noisy scenarios. Lastly, we aim to fully realize the potential of paired audio-visual data by proposing two novel frameworks that leverage acoustic and visual speech to train two encoders that learn from each other simultaneously. We leverage these pre-trained encoders for deepfake detection, speech recognition, and lip reading, and find that they consistently yield improvements over training from scratch.
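
The first step described, training a visual speech encoder to predict audio-derived latent features on unlabelled paired data, amounts to a cross-modal regression. A minimal sketch under assumed tensor shapes (all names illustrative):

    import torch
    import torch.nn.functional as F

    def distillation_step(video_encoder, audio_encoder, video, audio, optimizer):
        """One cross-modal training step: the visual encoder regresses the
        frozen audio encoder's latent features for the same utterance.
        All models and tensor shapes here are illustrative assumptions."""
        with torch.no_grad():
            target = audio_encoder(audio)    # (B, T, D) audio latents
        pred = video_encoder(video)          # (B, T, D) predicted from lips
        loss = F.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()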
Vocoder-Based Speech Synthesis from Silent Videos
Both acoustic and visual information influence human perception of speech.
For this reason, the absence of audio in a video sequence results in extremely
low speech intelligibility for untrained lip readers. In this paper, we present
a way to synthesise speech from the silent video of a talker using deep
learning. The system learns a mapping function from raw video frames to
acoustic features and reconstructs the speech with a vocoder synthesis
algorithm. To improve speech reconstruction performance, our model is also
trained to predict text information in a multi-task learning fashion and it is
able to simultaneously reconstruct and recognise speech in real time. The
results in terms of estimated speech quality and intelligibility show the
effectiveness of our method, which exhibits an improvement over existing
video-to-speech approaches.
Comment: Accepted to Interspeech 2020
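
The multi-task objective described, regressing vocoder acoustic features while also recognising text, can be sketched as a weighted two-term loss. The CTC head and the weighting are assumptions consistent with the abstract, not the paper's exact design:

    import torch.nn.functional as F

    def multitask_loss(model, frames, acoustic_targets, text_targets,
                       input_lengths, target_lengths, alpha=0.5):
        """Joint loss: acoustic-feature regression plus CTC text prediction.

        `model` is assumed to return (acoustic_pred, char_logits); `alpha`
        is an illustrative task weight, not the paper's value."""
        acoustic_pred, char_logits = model(frames)
        # Main task: predict vocoder acoustic features from video frames.
        rec_loss = F.l1_loss(acoustic_pred, acoustic_targets)
        # Auxiliary task: recognise the spoken text with a CTC criterion.
        log_probs = char_logits.log_softmax(-1).transpose(0, 1)  # (T, B, C)
        ctc_loss = F.ctc_loss(log_probs, text_targets,
                              input_lengths, target_lengths)
        return rec_loss + alpha * ctc_loss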
Parts of Speech-Grounded Subspaces in Vision-Language Models
Latent image representations arising from vision-language models have proved
immensely useful for a variety of downstream tasks. However, their utility is
limited by their entanglement with respect to different visual attributes. For
instance, recent work has shown that CLIP image representations are often
biased toward specific visual properties (such as objects or actions) in an
unpredictable manner. In this paper, we propose to separate representations of
the different visual modalities in CLIP's joint vision-language space by
leveraging the association between parts of speech and specific visual modes of
variation (e.g. nouns relate to objects, adjectives describe appearance). This
is achieved by formulating an appropriate component analysis model that learns
subspaces capturing variability corresponding to a specific part of speech,
while jointly minimising variability to the rest. Such a subspace yields
disentangled representations of the different visual properties of an image or
text in closed form while respecting the underlying geometry of the manifold on
which the representations lie. Moreover, we show that the proposed model
facilitates learning subspaces corresponding to specific visual
appearances (e.g. artists' painting styles), which enables the selective
removal of entire visual themes from CLIP-based text-to-image synthesis. We
validate the model both qualitatively, by visualising the subspace projections
with a text-to-image model and by preventing the imitation of artists' styles,
and quantitatively, through class invariance metrics and improvements to
baseline zero-shot classification.
Comment: Accepted at NeurIPS 2023
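
One natural reading of the component analysis described, maximising variability for one part of speech while minimising it for the rest, is a generalized eigenvalue problem on two covariance matrices. A minimal sketch of that formulation (the paper's actual model may differ, e.g. in how it respects the manifold geometry):

    import numpy as np
    from scipy.linalg import eigh

    def pos_subspace(X_pos, X_rest, k=8, eps=1e-6):
        """Directions with high variance on one part-of-speech's CLIP
        embeddings and low variance on the rest: solve A w = lambda B w.

        X_pos, X_rest: (n, d) embedding matrices. Illustrative formulation
        only; the paper's model may impose additional structure."""
        A = np.cov(X_pos, rowvar=False)
        B = np.cov(X_rest, rowvar=False) + eps * np.eye(X_pos.shape[1])
        vals, vecs = eigh(A, B)          # generalized symmetric eigenproblem
        return vecs[:, -k:]              # top-k subspace basis, shape (d, k)

    # Projecting an embedding z onto this subspace isolates (approximately)
    # the variation tied to that part of speech: z_pos = z @ W @ W.T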
Large-scale unsupervised audio pre-training for video-to-speech synthesis
Video-to-speech synthesis is the task of reconstructing the speech signal
from a silent video of a speaker. Most established approaches to date involve a
two-step process, whereby an intermediate representation from the video, such
as a spectrogram, is extracted first and then passed to a vocoder to produce
the raw audio. Some recent work has focused on end-to-end synthesis, whereby
the generation of raw audio and any intermediate representations is performed
jointly. All such approaches involve training almost exclusively on
audio-visual datasets, i.e. datasets in which every audio sample has a
corresponding video sample. This precludes the use of abundant audio-only datasets which may not
have a corresponding visual modality (e.g. audiobooks, radio podcasts, speech
recognition datasets etc.), as well as audio-only architectures that have been
developed by the audio machine learning community over the years. In this paper
we propose to train encoder-decoder models on more than 3,500 hours of audio
data at 24 kHz, and then use the pre-trained decoders to initialize the audio
decoders for the video-to-speech synthesis task. The pre-training step uses
audio samples only and does not require labels or corresponding samples from
other modalities (visual, text). We demonstrate that this pre-training step
improves the reconstructed speech and that it is a previously unexplored way
to improve the quality of the generator in a cross-modal task while requiring
samples from only one of the modalities. We conduct experiments using both raw audio and mel
spectrograms as target outputs and benchmark our models against existing work.
Comment: Submitted to IEEE
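
A minimal sketch of the two-stage recipe described: pre-train an audio autoencoder on audio-only data, then warm-start the video-to-speech audio decoder from it. Architectures here are illustrative placeholders, not the paper's models:

    import torch.nn as nn

    def make_audio_decoder():
        # Illustrative decoder from latent features back to raw waveform.
        return nn.Sequential(nn.ConvTranspose1d(64, 1, kernel_size=16, stride=8))

    # Stage 1: pre-train an encoder-decoder on audio-only data (no labels,
    # no paired video), e.g. waveform reconstruction at 24 kHz.
    pretrained_decoder = make_audio_decoder()
    # ... audio-only training loop (3,500+ hours) would go here ...

    # Stage 2: warm-start the video-to-speech audio decoder from stage 1,
    # then fine-tune it jointly with the video encoder on paired data.
    v2s_decoder = make_audio_decoder()
    v2s_decoder.load_state_dict(pretrained_decoder.state_dict())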
Visual speech synthesis using dynamic visemes, contextual features and DNNs
This paper examines methods to improve visual speech synthesis from a text input using a deep neural network (DNN). Two representations of the input text are considered: phoneme sequences and dynamic viseme sequences. From these sequences, contextual features are extracted that include information at varying linguistic levels, from the frame level up to the utterance level. These are extracted using a broad sliding window that captures context, and the resulting features are input into the DNN to estimate visual features. Experiments first compare the accuracy of these visual features against an HMM baseline, establishing that both the phoneme and dynamic viseme systems perform better, with the best performance obtained by a combined phoneme-dynamic viseme system. An investigation into the features then reveals the importance of the frame-level information, which avoids discontinuities in the visual feature sequence and produces a smooth and realistic output.
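
The sliding-window feature construction described, stacking frame-level context around each position before it enters the DNN, can be sketched as follows; the window radius is an assumed hyperparameter, not the paper's setting:

    import numpy as np

    def sliding_window_features(frame_feats, radius=5):
        """Stack each frame with +/- `radius` neighbouring frames so the
        DNN sees local context; edges repeat the boundary frames.

        frame_feats: (T, d) per-frame linguistic features. `radius` is an
        illustrative choice."""
        T, d = frame_feats.shape
        padded = np.pad(frame_feats, ((radius, radius), (0, 0)), mode="edge")
        return np.stack([padded[t:t + 2 * radius + 1].reshape(-1)
                         for t in range(T)])   # (T, (2*radius+1)*d)

    x = sliding_window_features(np.random.randn(100, 40))
    print(x.shape)  # (100, 440)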
Listening while Speaking and Visualizing: Improving ASR through Multimodal Chain
Previously, a machine speech chain, based on sequence-to-sequence deep
learning, was proposed to mimic speech perception and production behavior.
Such a chain processes listening and speaking separately, through automatic
speech recognition (ASR) and text-to-speech synthesis (TTS), while enabling
the two components to teach each other in semi-supervised learning when they
receive unpaired data. Unfortunately, that speech chain study was limited to
the speech and textual modalities, whereas natural communication is
multimodal and involves both the auditory and visual sensory systems.
Moreover, although the speech chain reduces the need for fully paired data,
it still requires a large amount of unpaired data. In this research, we take
a further step and construct a multimodal chain: a closely knit architecture
that combines ASR, TTS, image captioning, and image production models into a
single framework. The framework allows each component to be trained without
requiring a large amount of parallel multimodal data. Our experimental
results also show that an ASR model can be further trained without speech
and text data, since cross-modal data augmentation remains possible
through our proposed chain, which improves ASR performance.
Comment: Accepted in IEEE ASRU 2019
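
The closed-loop training idea, in which components generate pseudo-paired data for one another from unpaired inputs, can be sketched for the ASR leg of the chain; the image captioning and production legs follow the same pattern. Every callable below is a hypothetical stand-in:

    # Schematic of one multimodal-chain update with unpaired data; the
    # models (asr, tts, captioner) are hypothetical stand-ins.

    def chain_step_from_text(text, tts, asr, asr_optimizer, loss_fn):
        """Text-only data: TTS synthesises speech, then ASR trains to
        recover the original text from it (pseudo-paired supervision)."""
        pseudo_speech = tts(text)            # TTS is held fixed here
        loss = loss_fn(asr(pseudo_speech), text)
        asr_optimizer.zero_grad()
        loss.backward()
        asr_optimizer.step()
        return loss

    def chain_step_from_image(image, captioner, tts, asr, asr_optimizer,
                              loss_fn):
        """Image-only data: caption the image, synthesise its speech, and
        train ASR on it -- no speech or text data required."""
        pseudo_text = captioner(image)
        return chain_step_from_text(pseudo_text, tts, asr, asr_optimizer,
                                    loss_fn)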