Publications

Federated Named Entity Recognition

Abstract

We present an analysis of the performance of Federated Learning on a paradigmatic natural-language processing task: Named-Entity Recognition (NER). For our evaluation, we use the language-independent CoNLL-2003 dataset as our benchmark dataset and a Bi-LSTM-CRF model as our benchmark NER model. We show that federated training reaches almost the same performance as the centralized model, though performance degrades as the learning environments become more heterogeneous. We also analyze the convergence rate of federated models for NER. Finally, we discuss open challenges of Federated Learning for NLP applications that can guide future research directions.
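Background note: the aggregation step underlying federated training is, in the standard formulation, weighted model averaging (FedAvg-style). The sketch below is a minimal, hypothetical illustration of that step, assuming NumPy and a server that weights each client's locally trained parameters by its local dataset size; the function name, model shapes, and client counts are invented for exposition and are not taken from the paper.

    import numpy as np

    # Hypothetical sketch of FedAvg-style aggregation (not the paper's code).
    # Each client trains locally and sends back its weights together with the
    # number of examples it trained on; the server averages the weights,
    # weighting each client by its share of the total data.
    def federated_average(client_weights, client_sizes):
        total = sum(client_sizes)
        num_layers = len(client_weights[0])
        global_weights = []
        for layer in range(num_layers):
            # Data-weighted average of this layer across all clients.
            avg = sum((n / total) * w[layer]
                      for w, n in zip(client_weights, client_sizes))
            global_weights.append(avg)
        return global_weights

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Three simulated clients, each holding a tiny two-layer model.
        clients = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))]
                   for _ in range(3)]
        sizes = [1000, 500, 250]  # unequal (heterogeneous) local datasets
        global_model = federated_average(clients, sizes)
        print([w.shape for w in global_model])  # -> [(4, 4), (4,)]

Weighting by local dataset size matters precisely in the heterogeneous settings the abstract mentions: clients with more data pull the global model proportionally harder, whereas unweighted averaging would let small, skewed clients distort it.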

Metadata

Publication: arXiv preprint arXiv:2203.15101, 2022
Year: 2022
Publication date: 2022/3/28
Authors: Joel Mathew, Dimitris Stripelis, José Luis Ambite
Link: https://arxiv.org/abs/2203.15101
PDF: https://arxiv.org/pdf/2203.15101