AVATAR: Unconstrained Audiovisual Speech Recognition


Valentin Gabeur*
Paul Hongsuck Seo*
Arsha Nagrani*
Chen Sun
Karteek Alahari
Cordelia Schmid

Abstract

Audio-visual automatic speech recognition (AV-ASR) is an extension of ASR that incorporates visual cues, often from the movements of a speaker's mouth. Unlike works that focus solely on lip motion, we investigate the contribution of entire visual frames (visual actions, objects, background, etc.). This is particularly useful for unconstrained videos, where the speaker is not necessarily visible. To solve this task, we propose a new sequence-to-sequence AudioVisual ASR TrAnsformeR (AVATAR) which is trained end-to-end from spectrograms and full-frame RGB. To prevent the audio stream from dominating training, we propose different word-masking strategies, thereby encouraging our model to pay attention to the visual stream. We demonstrate the contribution of the visual modality on the How2 AV-ASR benchmark, especially in the presence of simulated noise, and show that our model outperforms all other prior work by a large margin. Finally, we also create a new, real-world test bed for AV-ASR called VisSpeech, which demonstrates the contribution of the visual modality under challenging audio conditions.
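As an illustration of the word-masking idea, the minimal sketch below (not the paper's implementation) assumes word-level timestamps are available and zeroes out the spectrogram frames of randomly selected words during training, so that the model can only recover those words from the visual stream. The function name, the zero-fill strategy, and the masking probability are illustrative assumptions.

```python
import torch

def mask_words(spectrogram, word_spans, mask_prob=0.15):
    """Zero out the spectrogram frames of randomly chosen words.

    spectrogram: (T, F) log-mel features
    word_spans:  list of (start_frame, end_frame) tuples, one per word
    mask_prob:   fraction of words to mask (illustrative value)
    """
    masked = spectrogram.clone()
    for start, end in word_spans:
        if torch.rand(1).item() < mask_prob:
            masked[start:end] = 0.0  # the model must rely on visual cues for this word
    return masked

# Example usage with hypothetical word boundaries
spec = torch.randn(300, 80)                       # 300 frames of 80-dim log-mel features
spans = [(0, 40), (40, 95), (95, 160), (160, 300)]
masked_spec = mask_words(spec, spans, mask_prob=0.5)
```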

Architecture

We propose a Seq2Seq architecture for audio-visual speech recognition. Our model is trained end-to-end from RGB pixels and spectrograms.
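For illustration only, the sketch below shows one way such a Seq2Seq model can be assembled in PyTorch; it is not the AVATAR implementation. Audio spectrogram frames and per-frame visual features are projected into a shared transformer encoder, and a transformer decoder predicts the transcript autoregressively. The class name, dimensions, and the use of precomputed per-frame visual features (rather than raw pixels) are simplifying assumptions.

```python
import torch
import torch.nn as nn

class AVSeq2Seq(nn.Module):
    """Hypothetical audio-visual Seq2Seq model (not the released AVATAR code)."""

    def __init__(self, n_mels=80, rgb_dim=768, d_model=512, vocab_size=10000):
        super().__init__()
        self.audio_proj = nn.Linear(n_mels, d_model)   # spectrogram frames -> encoder tokens
        self.video_proj = nn.Linear(rgb_dim, d_model)  # per-frame visual features -> encoder tokens
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, spec, rgb, text_tokens):
        # spec: (B, Ta, n_mels), rgb: (B, Tv, rgb_dim), text_tokens: (B, Tt)
        enc_in = torch.cat([self.audio_proj(spec), self.video_proj(rgb)], dim=1)
        dec_in = self.text_embed(text_tokens)
        causal = self.transformer.generate_square_subsequent_mask(
            text_tokens.size(1)).to(text_tokens.device)
        out = self.transformer(enc_in, dec_in, tgt_mask=causal)
        return self.lm_head(out)  # (B, Tt, vocab_size) logits

# Example usage with random inputs
model = AVSeq2Seq()
spec = torch.randn(2, 300, 80)     # 2 clips, 300 spectrogram frames each
rgb = torch.randn(2, 32, 768)      # 32 visual frame features per clip (assumed precomputed)
tokens = torch.randint(0, 10000, (2, 20))
logits = model(spec, rgb, tokens)  # (2, 20, 10000)
```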



Qualitative results

Qualitative results on the VisSpeech dataset. We show the ground truth (GT) and predictions from our audio-only (A) and audio-visual (A+V) models. Note how the visual context helps with objects ("chain", "eggplant", "coin", "dough") as well as actions ("knit", "fold"), which may be ambiguous from the audio stream alone. Errors in the predictions compared to the GT are highlighted in red.

Resources

  • The VisSpeech dataset can be downloaded at this link.

Publication

V. Gabeur, P. Hongsuck Seo, A. Nagrani, C. Sun, K. Alahari, C. Schmid,
AVATAR: Unconstrained Audiovisual Speech Recognition,
INTERSPEECH, 2022
arXiv / bibtex