Leveraging High-Resource English Corpora for Cross-lingual Domain Adaptation in Low-Resource Japanese Medicine via Continued Pre-training
Kazuma Kobayashi | Zhen Wan | Fei Cheng | Yuma Tsuta | Xin Zhao | Junfeng Jiang | Jiahao Huang | Zhiyi Huang | Yusuke Oda | Rio Yokota | Yuki Arase | Daisuke Kawahara | Akiko Aizawa | Sadao Kurohashi
Findings of the Association for Computational Linguistics: EMNLP 2025
Limited low-resource language corpora in professional domains like medicine hinder cross-lingual domain adaptation of pre-trained large language models (PLMs). While abundant English medical corpora could complement this scarcity, the effective mixture of English and target-language texts, including machine-translated content, remains underexplored. We examined how linguistic features (e.g., token sizes and language proportions) affect performance on a Japanese–English medical knowledge benchmark. Through continued pre-training of a bilingual PLM on multilingual corpora with varying proportions of English and Japanese texts (both original and machine-translated), we analyzed correlations between linguistic features and fine-grained task performance. Our findings suggest a practical approach to optimizing multilingual corpora for cross-lingual domain adaptation, which requires leveraging specialized knowledge from English corpora while ensuring sufficient coverage of language-specific expressions in the target language (Japanese). Such insights will contribute to the development of multilingual models that effectively leverage English-language resources in various professional domains with low-resource languages.