Automatic transcription of conversation situations
Overview
Multi-talker conversational speech recognition is concerned with transcribing audio recordings of formal meetings or informal get-togethers into machine-readable form using distant microphones. Current solutions fall far short of human performance. The difficulty of the task can be attributed to three factors. First, the recording conditions are challenging: the speech signal captured by microphones from a distance is noisy and reverberant and often contains nonstationary acoustic distortions, which makes it hard to decode. Second, a significant percentage of the time contains overlapped speech, where multiple speakers talk simultaneously. Finally, the interaction dynamics of the scenario are challenging, because speakers talk intermittently, with alternating segments of speech inactivity, single-talker speech, and multi-talker speech.
We aim to develop a transcription system that can operate on input of arbitrary length, correctly handles both overlapped and non-overlapped speech segments, and consistently transcribes the speech of different speakers into separate output streams. Existing approaches that use separately trained subsystems for diarization, separation, and recognition fall far short of human performance; we believe the missing piece is a formulation that encapsulates all aspects of meeting transcription and allows a joint approach to be designed under a single optimization criterion. This project aims at such a coherent formulation.
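As a toy illustration of the scenario described above (not the project's own code), the following sketch simulates a distant-microphone observation y(t) = Σ_s (h_s * x_s)(t) + n(t): each speaker s is only intermittently active, the room is modeled by a reverberant impulse response h_s, and sensor noise n is added. White noise stands in for speech, the exponentially decaying random filter stands in for a measured room impulse response, and the helper intermittent_source is hypothetical; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                      # sample rate in Hz (assumed)
T = 10 * fs                     # 10 seconds of audio

def intermittent_source(T, fs, rng):
    """White-noise stand-in for speech, active only on random segments."""
    x = rng.standard_normal(T)
    mask = np.zeros(T)
    t = 0
    while t < T:
        seg = rng.integers(fs, 3 * fs)        # segments of 1-3 s
        if rng.random() < 0.5:                # speaker talks roughly half the time
            mask[t:t + seg] = 1.0
        t += seg
    return x * mask

num_speakers = 3
y = np.zeros(T)
for s in range(num_speakers):
    x_s = intermittent_source(T, fs, rng)
    # decaying random filter as a crude stand-in for a room impulse response
    h_s = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 400)
    y += np.convolve(x_s, h_s)[:T]            # reverberant source image at the mic
y += 0.1 * rng.standard_normal(T)             # additive sensor noise

# The transcription task: from y alone, produce one consistent word stream
# per speaker across silence, single-talker, and overlapped regions.
```

The mixture alternates between silence, single-talker, and multi-talker stretches, which is exactly the intermittent structure a meeting transcription system must handle end to end.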
Key Facts
- Grant number: 448568305
- Project duration: 05/2021 - 12/2024
- Funded by: DFG
- Website: DFG database GEPRIS