Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets

We audit five web-crawled multilingual corpora and find that their lower-resource portions suffer from systematic quality issues.

SinNer@CLEF-HIPE2020: Sinful Adaptation of SotA models for Named Entity Recognition in Historical French and German Newspapers

In this article we present the approaches developed by the Sorbonne-INRIA for NER (SinNer) team for the CLEF-HIPE 2020 challenge on Named Entity Processing in historical newspapers.

A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages

We explore the impact of the training corpus on contextualized word embeddings in five mid-resource languages.

CamemBERT: a Tasty French Language Model

We explore the impact of the training data size on a French version of RoBERTa. (Equal contribution by the first three authors).

Building a User-Generated Content North-African Arabizi Treebank: Tackling Hell

We introduce the first treebank for a romanized user-generated content variety of Algerian, a North-African Arabic dialect.

CamemBERT Contextual Language Models for French: Impact of Training Data Size and Heterogeneity

We explore the impact of the training data size and heterogeneity on French language modeling. (Equal contribution by the first three authors).

Establishing a New State-of-the-Art for French Named Entity Recognition

We convert the NER annotations of the French TreeBank to a more user-friendly format and establish a new state of the art for French NER.

French Contextualized Word-Embeddings with a sip of CaBeRnet: a New French Balanced Reference Corpus

We investigate the impact of different types and sizes of training corpora on language models.

How OCR Performance can Impact on the Automatic Extraction of Dictionary Content Structures

We explore the impact of the OCR quality on grobid-dictionaries models.

Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures

We propose a new pipeline to filter, clean, and classify Common Crawl by language, and publish the resulting corpus under the name OSCAR.