Abstract
Text normalization methods have been commonly applied to historical language or user-generated content, but less often to dialectal transcriptions. In this paper, we introduce dialect-to-standard normalization -- i.e., mapping phonetic transcriptions from different dialects to the orthographic norm of the standard variety -- as a distinct sentence-level character transduction task and provide a large-scale analysis of dialect-to-standard normalization methods. To this end, we compile a multilingual dataset covering four languages: Finnish, Norwegian, Swiss German and Slovene. For the two largest corpora, we provide three different data splits corresponding to different use cases for automatic normalization. We evaluate the most successful sequence-to-sequence model architectures proposed for text normalization tasks using different tokenization approaches and context sizes. We find that a character-level Transformer trained on sliding windows of three words works best for Finnish, Swiss German and Slovene, whereas the pre-trained ByT5 model using full sentences obtains the best results for Norwegian. Finally, we perform an error analysis to evaluate the effect of different data splits on model performance.
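The best-performing setup for Finnish, Swiss German and Slovene feeds the model overlapping three-word windows rather than full sentences, with input represented at the character level. A minimal sketch of that windowing step is shown below; the function names and the boundary handling for short sentences are illustrative assumptions, not the authors' implementation.

```python
def sliding_windows(words, size=3):
    """Return all overlapping windows of `size` consecutive words.

    Sentences shorter than `size` yield a single window (an assumption
    about edge-case handling, not taken from the paper).
    """
    if len(words) <= size:
        return [words]
    return [words[i:i + size] for i in range(len(words) - size + 1)]


def to_char_tokens(window):
    """Character-level tokenization: each character is one token,
    with a space character marking word boundaries."""
    return list(" ".join(window))


# Illustrative dialectal input split into words:
words = "mie oon siun kaveri".split()
windows = sliding_windows(words, size=3)
# Each window would then be character-tokenized before being fed
# to the sequence-to-sequence model:
char_inputs = [to_char_tokens(w) for w in windows]
```

At prediction time, the per-window outputs would need to be merged back into a full normalized sentence (e.g., by keeping only the normalization of each window's center word), a detail this sketch leaves out.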
Original language | English |
---|---|
Title of host publication | Findings of the Association for Computational Linguistics: EMNLP 2023 |
Editors | Houda Bouamor, Juan Pino, Kalika Bali |
Place of Publication | Singapore |
Pages | 13814-13828 |
Number of pages | 15 |
Publication status | Published - 1 Dec 2023 |
Publication type | A4 Article in conference proceedings |
Event | Conference on Empirical Methods in Natural Language Processing, Singapore. Duration: 6 Dec 2023 → 10 Dec 2023 |
Conference
Conference | Conference on Empirical Methods in Natural Language Processing |
---|---|
Abbreviated title | EMNLP |
Country/Territory | Singapore |
Period | 6/12/23 → 10/12/23 |
Publication forum classification
- Publication forum level 1