RobBERT: a Dutch RoBERTa-based Language Model

  • Authors: Pieter Delobelle, Thomas Winters, Bettina Berendt
  • Publication Date: 2020-11
  • Publication Venue: Findings of EMNLP 2020
  • Abstract: Pre-trained language models have been dominating the field of natural language processing in recent years, and have led to significant performance gains for various complex natural language tasks. One of the most prominent pre-trained language models is BERT, which was released as an English as well as a multilingual version. Although multilingual BERT performs well on many tasks, recent studies show that BERT models trained on a single language significantly outperform the multilingual version. Training a Dutch BERT model thus has a lot of potential for a wide range of Dutch NLP tasks. While previous approaches have used earlier implementations of BERT to train a Dutch version of BERT, we used RoBERTa, a robustly optimized BERT approach, to train a Dutch language model called RobBERT. We measured its performance on various tasks as well as the importance of the fine-tuning dataset size. We also evaluated the importance of language-specific tokenizers and the model's fairness. We found that RobBERT improves state-of-the-art results for various tasks, and especially significantly outperforms other models when dealing with smaller datasets. These results indicate that it is a powerful pre-trained model for a large variety of Dutch language tasks. The pre-trained and fine-tuned models are publicly available to support further downstream Dutch NLP applications.
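The abstract notes that the pre-trained and fine-tuned models are publicly available. As a minimal sketch of how one might load them, assuming the Hugging Face transformers library and the model identifier pdelobelle/robbert-v2-dutch-base (an assumption based on the public release), the example below fills in a masked Dutch token, mirroring RobBERT's masked-language-model pre-training:

```python
# Minimal sketch of loading the publicly released RobBERT model.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_ID = "pdelobelle/robbert-v2-dutch-base"  # assumed Hugging Face identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)

# Fill in a masked Dutch token ("There is a <mask> in my garden.").
inputs = tokenizer("Er staat een <mask> in mijn tuin.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Take the highest-scoring token at the mask position.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
predicted_id = logits[0, mask_pos].argmax().item()
print(tokenizer.decode([predicted_id]).strip())
```

Fine-tuning for the downstream tasks evaluated in the paper follows the standard transformers sequence-classification workflow on top of the same checkpoint.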

Citation

APA

Delobelle, P., Winters, T., & Berendt, B. (2020). RobBERT: a Dutch RoBERTa-based Language Model. Findings of the Association for Computational Linguistics: EMNLP 2020, 3255–3265. https://doi.org/10.18653/v1/2020.findings-emnlp.292

Harvard

Delobelle, P., Winters, T. and Berendt, B. (2020) “RobBERT: a Dutch RoBERTa-based Language Model,” Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 3255–3265. doi:10.18653/v1/2020.findings-emnlp.292.

Vancouver

1. Delobelle P, Winters T, Berendt B. RobBERT: a Dutch RoBERTa-based Language Model. Findings of the Association for Computational Linguistics: EMNLP 2020 [Internet]. 2020 Nov;3255–65. Available from: https://www.aclweb.org/anthology/2020.findings-emnlp.292

BibTeX
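The entry below is reconstructed from the citation details above; the citation key is assumed to follow the ACL Anthology convention.

```bibtex
@inproceedings{delobelle-etal-2020-robbert,
  title     = {RobBERT: a Dutch RoBERTa-based Language Model},
  author    = {Delobelle, Pieter and Winters, Thomas and Berendt, Bettina},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
  year      = {2020},
  pages     = {3255--3265},
  doi       = {10.18653/v1/2020.findings-emnlp.292},
  url       = {https://www.aclweb.org/anthology/2020.findings-emnlp.292},
}
```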

Related project

RobBERT (2020): the state-of-the-art Dutch language model. Role: project collaborator.
