Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition

Published in The Association for Computational Linguistics (ACL) Code-Switching Workshop, 2018

[PDF]


@inproceedings{CS1,
    title = {Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition},
    author = {Genta Indra Winata and Chien-Sheng Wu and Andrea Madotto and Pascale Fung},
    booktitle = {Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching},
    year = {2018}
}

Abstract

We propose an LSTM-based model with a hierarchical architecture for named entity recognition on code-switching Twitter data. Our model uses bilingual character representation and transfer learning to address out-of-vocabulary words. To mitigate data noise, we propose token replacement and normalization. In the 3rd Workshop on Computational Approaches to Linguistic Code-Switching Shared Task, we achieved second place with a 62.76% harmonic-mean F1-score on the English-Spanish language pair, without using any gazetteers or knowledge-based information.
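The token replacement and normalization step mentioned above can be sketched in a few lines. This is a hypothetical illustration only: the specific rules, placeholder names (`<USR>`, `<URL>`), and elongation handling are assumptions for the sake of the example, not the authors' actual preprocessing pipeline.

```python
import re

def normalize_token(token):
    """Illustrative token replacement/normalization for noisy Twitter text.

    The rules and placeholder symbols below are assumptions, not the
    paper's exact implementation.
    """
    if token.startswith("@"):              # user mentions carry little lexical info
        return "<USR>"
    if re.match(r"https?://\S+", token):   # URLs are replaced wholesale
        return "<URL>"
    if token.startswith("#"):              # keep hashtag content, drop the symbol
        return token[1:]
    # collapse character elongations, e.g. "sooooo" -> "soo"
    return re.sub(r"(.)\1{2,}", r"\1\1", token)

tweet = "@user check https://t.co/xyz #NER sooooo cool".split()
print([normalize_token(t) for t in tweet])
# → ['<USR>', 'check', '<URL>', 'NER', 'soo', 'cool']
```

Mapping mentions and URLs to shared placeholders shrinks the effective vocabulary, which is one simple way such replacement can reduce the out-of-vocabulary problem on social media text.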