Jingfei Du
Meta AI Research
Verified email at fb.com
Title · Cited by · Year
RoBERTa: A robustly optimized BERT pretraining approach
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, ...
arXiv preprint arXiv:1907.11692, 2019
Cited by 24956* · 2019
Supervised contrastive learning for pre-trained language model fine-tuning
B Gunel, J Du, A Conneau, V Stoyanov
arXiv preprint arXiv:2011.01403, 2020
Cited by 439 · 2020
Pretrained language models for biomedical and clinical tasks: understanding and extending the state-of-the-art
P Lewis, M Ott, J Du, V Stoyanov
Proceedings of the 3rd Clinical Natural Language Processing Workshop, 146-157, 2020
Cited by 198 · 2020
Self-training improves pre-training for natural language understanding
J Du, E Grave, B Gunel, V Chaudhary, O Celebi, M Auli, V Stoyanov, ...
arXiv preprint arXiv:2010.02194, 2020
Cited by 155 · 2020
Box office prediction based on microblog
J Du, H Xu, X Huang
Expert Systems with Applications 41 (4), 1680-1689, 2014
Cited by 114 · 2014
Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model
W Xiong, J Du, WY Wang, V Stoyanov
arXiv preprint arXiv:1912.09637, 2019
Cited by 104 · 2019
Larger-scale transformers for multilingual masked language modeling
N Goyal, J Du, M Ott, G Anantharaman, A Conneau
arXiv preprint arXiv:2105.00572, 2021
Cited by 86 · 2021
Efficient large scale language modeling with mixtures of experts
M Artetxe, S Bhosale, N Goyal, T Mihaylov, M Ott, S Shleifer, XV Lin, J Du, ...
arXiv preprint arXiv:2112.10684, 2021
Cited by 81 · 2021
Answering complex open-domain questions with multi-hop dense retrieval
W Xiong, XL Li, S Iyer, J Du, P Lewis, WY Wang, Y Mehdad, W Yih, ...
arXiv preprint arXiv:2009.12756, 2020
Cited by 52 · 2020
Few-shot learning with multilingual language models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
arXiv preprint arXiv:2112.10668, 2021
Cited by 41 · 2021
Few-shot learning with multilingual generative language models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 37 · 2022
Knowledge-augmented language model and its application to unsupervised named-entity recognition
A Liu, J Du, V Stoyanov
arXiv preprint arXiv:1904.04458, 2019
Cited by 27 · 2019
Speechmatrix: A large-scale mined corpus of multilingual speech-to-speech translations
PA Duquenne, H Gong, N Dong, J Du, A Lee, V Goswami, C Wang, J Pino, ...
arXiv preprint arXiv:2211.04508, 2022
Cited by 18 · 2022
Improving in-context few-shot learning via self-supervised training
M Chen, J Du, R Pasunuru, T Mihaylov, S Iyer, V Stoyanov, Z Kozareva
arXiv preprint arXiv:2205.01703, 2022
Cited by 16 · 2022
Prompting ELECTRA: few-shot learning with discriminative pre-trained models
M Xia, M Artetxe, J Du, D Chen, V Stoyanov
arXiv preprint arXiv:2205.15223, 2022
Cited by 15 · 2022
Speech-to-speech translation for a real-world unwritten language
PJ Chen, K Tran, Y Yang, J Du, J Kao, YA Chung, P Tomasello, ...
arXiv preprint arXiv:2211.06474, 2022
Cited by 11 · 2022
General purpose text embeddings from pre-trained language models for scalable inference
J Du, M Ott, H Li, X Zhou, V Stoyanov
arXiv preprint arXiv:2004.14287, 2020
Cited by 11 · 2020
On the Role of Bidirectionality in Language Model Pre-Training
M Artetxe, J Du, N Goyal, L Zettlemoyer, V Stoyanov
arXiv preprint arXiv:2205.11726, 2022
Cited by 6 · 2022