E3 TTS

Easy End-to-End Diffusion-based Text to Speech

[pdf]

Yuan Gao, Nobuyuki Morioka, Yu Zhang, Nanxin Chen, ASRU 2023

Google

Abstract. We propose Easy End-to-End Diffusion-based Text to Speech, a simple and efficient end-to-end text-to-speech model based on diffusion. E3 TTS directly takes plain text as input and generates an audio waveform through an iterative refinement process. Unlike many prior works, E3 TTS does not rely on any intermediate representations such as spectrogram features or alignment information. Instead, E3 TTS models the temporal structure of the waveform through the diffusion process. Without relying on additional conditioning information, E3 TTS can support a flexible latent structure within the given audio. This enables E3 TTS to be easily adapted to zero-shot tasks such as editing without any additional training. Experiments show that E3 TTS can generate high-fidelity audio, approaching the performance of a state-of-the-art neural TTS system.
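For readers unfamiliar with diffusion-based waveform generation, the sketch below illustrates the iterative refinement idea in generic DDPM-style code: starting from Gaussian noise of the target length, a denoising network conditioned only on the text repeatedly refines the waveform. The noise schedule, the `denoise_fn` interface, and the update rule are illustrative assumptions, not the exact formulation used in the paper.

```python
# Generic DDPM-style ancestral sampling loop (illustrative; not the exact
# E3 TTS formulation). denoise_fn(x_t, t, text) is assumed to return the
# noise predicted by the network at step t.
import numpy as np

def sample_waveform(denoise_fn, text, num_samples, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)        # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(num_samples)          # start from pure noise of the target length
    for t in reversed(range(steps)):
        eps = denoise_fn(x, t, text)              # network predicts the added noise
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                                 # re-noise, except at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(num_samples)
    return x
```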

Model Overview

To take advantage of recent advances in large language models, we build our system on top of a pre-trained BERT model. The BERT model takes subword input and does not rely on any other representation of speech, such as phonemes or graphemes. Its output is fed to a 1D U-Net, which consists of a series of downsampling and upsampling blocks connected by residual connections. The entire model is non-autoregressive and directly outputs a waveform. The speaker identity and the alignment are determined dynamically during the diffusion process. Our model achieves results comparable to the two-stage framework on a proprietary dataset.
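As a concrete but purely illustrative picture of this layout, here is a minimal PyTorch-style sketch: subword features from a pre-trained BERT encoder are attended to by a small 1D U-Net with downsampling/upsampling blocks and residual skips while it refines a noisy waveform. All module sizes, the cross-attention wiring, and the class names are assumptions for illustration, not the authors' configuration.

```python
# Hypothetical sketch of the E3 TTS layout described above (sizes and wiring assumed).
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel_size=4, stride=2, padding=1)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.conv(x))

class UpBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.ConvTranspose1d(c_in, c_out, kernel_size=4, stride=2, padding=1)
        self.act = nn.SiLU()
    def forward(self, x, skip):
        return self.act(self.conv(x)) + skip      # residual connection to the matching down block

class WaveformUNet(nn.Module):
    """1D U-Net mapping a noisy waveform plus text features to a denoised waveform."""
    def __init__(self, text_dim=768, channels=(32, 64, 128)):
        super().__init__()
        self.in_conv = nn.Conv1d(1, channels[0], kernel_size=3, padding=1)
        self.downs = nn.ModuleList(
            DownBlock(channels[i], channels[i + 1]) for i in range(len(channels) - 1))
        # Cross-attention: waveform features query the BERT subword features.
        self.cross_attn = nn.MultiheadAttention(channels[-1], num_heads=4,
                                                kdim=text_dim, vdim=text_dim,
                                                batch_first=True)
        self.ups = nn.ModuleList(
            UpBlock(channels[i + 1], channels[i]) for i in reversed(range(len(channels) - 1)))
        self.out_conv = nn.Conv1d(channels[0], 1, kernel_size=3, padding=1)

    def forward(self, noisy_wave, text_feats):
        # noisy_wave: (batch, 1, samples); text_feats: (batch, tokens, text_dim) from BERT.
        h = self.in_conv(noisy_wave)
        skips = []
        for down in self.downs:
            skips.append(h)
            h = down(h)
        q = h.transpose(1, 2)                     # (batch, frames, channels)
        h = self.cross_attn(q, text_feats, text_feats)[0].transpose(1, 2)
        for up, skip in zip(self.ups, reversed(skips)):
            h = up(h, skip)
        return self.out_conv(h)                   # predicted clean (or noise) waveform

# Example shapes (text_feats would come from a pre-trained BERT such as
# transformers.BertModel.from_pretrained("bert-base-uncased")):
# unet = WaveformUNet(); out = unet(torch.randn(2, 1, 4096), torch.randn(2, 12, 768))
```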

Samples

Here are samples generated by our model when the speaker is given.

Speaker Ground Truth E3 TTS

Single Speaker Alignment Diversity

When the speaker is given, our model can generate high-quality speech with diverse alignments.

Ground Truth Sample 1 Sample 2 Sample 3 Sample 4

Speaker Diversity

In this part, we use a base model trained without any speaker information; the speaker is determined dynamically during inference. To increase speaker diversity, we trained the model on a combination of the LibriTTS-clean and LibriTTS-other splits, so it generates speech whose quality is similar to LibriTTS-other.

Text Ground Truth Sample 1 Sample 2 Sample 3 Sample 4

Prompted Generation

In this task, we select two sentences from the same speaker in the LibriTTS-clean test split. We then use the waveform of the first sentence as a prompt to generate the second sentence, given both texts.
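One common way to perform this kind of prompted generation with a waveform diffusion model, without any retraining, is the replacement trick: at every refinement step the region of the waveform corresponding to the prompt is overwritten with a noised copy of the known prompt audio, so only the continuation is actually synthesized. The sketch below illustrates that idea; the function names and schedule are illustrative, and this is not necessarily the exact procedure used by E3 TTS.

```python
# Hypothetical replacement-based prompted sampling (assumed, for illustration).
import numpy as np

def prompted_sample(denoise_fn, texts, prompt_wave, total_samples, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)
    alphas, alpha_bars = 1.0 - betas, np.cumprod(1.0 - betas)
    n = len(prompt_wave)                          # the prompt occupies the first n samples

    x = rng.standard_normal(total_samples)
    for t in reversed(range(steps)):
        # Clamp the prompt region to the ground-truth prompt, noised to level t.
        noised_prompt = (np.sqrt(alpha_bars[t]) * prompt_wave
                         + np.sqrt(1.0 - alpha_bars[t]) * rng.standard_normal(n))
        x[:n] = noised_prompt
        eps = denoise_fn(x, t, texts)             # conditioned on prompt text + target text
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(total_samples)
    x[:n] = prompt_wave                           # return the clean prompt plus the generated rest
    return x
```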

Prompt Text Prompt Text Ground Truth E3 TTS

Speech Inpainting

In this task, we select a sentence from the LibriTTS-clean test split, mask a small fragment of the waveform (0.5 to 2.5 seconds, marked bold in the text), and ask the model to generate the full waveform. To demonstrate the diffusion model's capabilities, we feed the model three examples of the same sentence with different masked-part lengths (0.8x, 1.0x, and 1.2x).
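The snippet below sketches how the three inpainting inputs could be prepared: the masked fragment is cut out and replaced by a gap whose length is scaled by 0.8x / 1.0x / 1.2x, so the model has to fill in audio of a different duration each time. The sample rate, timing arguments, and helper name are assumptions for illustration; sampling itself would reuse a replacement-style loop like the one sketched in the previous section, with the mask marking the region to be generated.

```python
# Hypothetical preparation of inpainting inputs with a rescaled gap (assumed details).
import numpy as np

SAMPLE_RATE = 24000  # assumed sample rate

def build_inpainting_input(wave, mask_start_s, mask_end_s, length_scale):
    """Return (context_waveform, generate_mask) with the gap rescaled by length_scale."""
    start = int(mask_start_s * SAMPLE_RATE)
    end = int(mask_end_s * SAMPLE_RATE)
    gap = int((end - start) * length_scale)       # new length of the missing fragment

    context = np.concatenate([wave[:start], np.zeros(gap), wave[end:]])
    mask = np.zeros(len(context), dtype=bool)
    mask[start:start + gap] = True                # True where the model must generate audio
    return context, mask

# Three variants of the same sentence with a faster / matched / slower gap:
# for scale in (0.8, 1.0, 1.2):
#     context, mask = build_inpainting_input(wave, 3.0, 4.5, scale)
```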

Text Ground Truth Fast Medium Slow