Can Character-based Language Models Improve Downstream Task Performance in Low-Resource and Noisy Language Scenarios?

Authors: Arij Riabi, Benoît Sagot, Djamé Seddah
Published in: Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
DOI: 10.18653/v1/2021.wnut-1.47
Repository link: https://aclanthology.org/2021.wnut-1.47/
Peer reviewed: Yes
Open access: Yes

Abstract

Recent impressive improvements in NLP, largely based on the success of contextual neural language models, have been mostly demonstrated on at most a couple dozen high-resource languages. Building language models and, more generally, NLP systems for non-standardized and low-resource languages remains a challenging task. In this work, we focus on North-African colloquial dialectal Arabic written using an extension of the Latin script, called NArabizi, found mostly on social media and messaging communication. In this low-resource scenario with data displaying a high level of variability, we compare the downstream performance of a character-based language model on part-of-speech tagging and dependency parsing to that of monolingual and multilingual models. We show that a character-based model trained on only 99k sentences of NArabizi and fine-tuned on a small treebank of this language leads to performance close to that obtained with the same architecture pre-trained on large multilingual and monolingual models. Confirming these results on a much larger data set of noisy French user-generated content, we argue that such character-based language models can be an asset for NLP in low-resource and high language variability settings.
