Parameter-efficient fine-tuning of large-scale pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, et al. Nature Machine Intelligence 5 (3), 220-235, 2023. [Cited by 439]
ToolLLM: Facilitating large language models to master 16000+ real-world APIs. Y Qin, S Liang, Y Ye, K Zhu, L Yan, Y Lu, Y Lin, X Cong, X Tang, B Qian, et al. arXiv preprint arXiv:2307.16789, 2023. [Cited by 324]
Enhancing chat language models by scaling high-quality instructional conversations. N Ding, Y Chen, B Xu, Y Qin, Z Zheng, S Hu, Z Liu, M Sun, B Zhou. arXiv preprint arXiv:2305.14233, 2023. [Cited by 233]
Tool Learning with Foundation Models. arXiv preprint, 2023. [Cited by 207*]
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, et al. Nature Machine Intelligence, 2022. [Cited by 201]
On Transferability of Prompt Tuning for Natural Language Understanding. Y Su, X Wang, Y Qin, CM Chan, Y Lin, Z Liu, P Li, J Li, L Hou, M Sun, et al. NAACL 2022, 2021. [Cited by 131*]
ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning. Y Qin, Y Lin, R Takanobu, Z Liu, P Li, H Ji, M Huang, M Sun, J Zhou. ACL 2021, 2020. [Cited by 125]
AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. W Chen, Y Su, J Zuo, C Yang, C Yuan, C Qian, CM Chan, Y Qin, Y Lu, et al. arXiv preprint arXiv:2308.10848, 2023. [Cited by 122]
CPM: A large-scale generative Chinese pre-trained language model. Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, et al. AI Open 2, 93-99, 2021. [Cited by 112]
bert2BERT: Towards Reusable Pretrained Language Models. C Chen, Y Yin, L Shang, X Jiang, Y Qin, F Wang, Z Wang, X Chen, Z Liu, et al. ACL 2022, 2021. [Cited by 64]
ELLE: Efficient Lifelong Pre-training for Emerging Data. Y Qin, J Zhang, Y Lin, Z Liu, P Li, M Sun, J Zhou. Findings of ACL 2022, 2022. [Cited by 52]
Knowledge inheritance for pre-trained language models. Y Qin, Y Lin, J Yi, J Zhang, X Han, Z Zhang, Y Su, Z Liu, P Li, M Sun, et al. NAACL 2022, 2021. [Cited by 52]
AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors. W Chen, Y Su, J Zuo, C Yang, C Yuan, CM Chan, H Yu, Y Lu, YH Hung, et al. The Twelfth International Conference on Learning Representations, 2023. [Cited by 49]
WebCPM: Interactive web search for Chinese long-form question answering. Y Qin, Z Cai, D Jin, L Yan, S Liang, K Zhu, Y Lin, X Han, N Ding, H Wang, et al. arXiv preprint arXiv:2305.06849, 2023. [Cited by 48]
Learning from Explanations with Neural Execution Tree. Z Wang, Y Qin, W Zhou, J Yan, Q Ye, L Neves, Z Liu, X Ren. ICLR 2020, 2019. [Cited by 40]
Exploring low-dimensional intrinsic task subspace via prompt tuning. Y Qin, X Wang, Y Su, Y Lin, N Ding, Z Liu, J Li, L Hou, P Li, M Sun, J Zhou. Previously accepted by Findings of ACL 2022 and EMNLP 2022, 2021. [Cited by 37]
ProQA: Structural Prompt-based Pre-training for Unified Question Answering. W Zhong, Y Gao, N Ding, Y Qin, Z Liu, M Zhou, J Wang, J Yin, N Duan. NAACL 2022, 2022. [Cited by 34]
CREATOR: Tool creation for disentangling abstract and concrete reasoning of large language models. C Qian, C Han, YR Fung, Y Qin, Z Liu, H Ji. arXiv preprint arXiv:2305.14318, 2023. [Cited by 26]
DebugBench: Evaluating debugging capability of large language models. R Tian, Y Ye, Y Qin, X Cong, Y Lin, Z Liu, M Sun. arXiv preprint arXiv:2401.04621, 2024. [Cited by 25]