| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| When and why are pre-trained word embeddings useful for neural machine translation? | Y Qi, DS Sachan, M Felix, SJ Padmanabhan, G Neubig | arXiv preprint arXiv:1804.06323, 2018 | 425 | 2018 |
| Embedding-based retrieval in Facebook search | JT Huang, A Sharma, S Sun, L Xia, D Zhang, P Pronin, J Padmanabhan, ... | Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020 | 317 | 2020 |
| XNMT: The extensible neural machine translation toolkit | G Neubig, M Sperber, X Wang, M Felix, A Matthews, S Padmanabhan, ... | arXiv preprint arXiv:1803.00188, 2018 | 77 | 2018 |
| Resolving Implicit Coordination in Multi-Agent Deep Reinforcement Learning with Deep Q-Networks & Game Theory | G Adams, SJ Padmanabhan, S Shekhar | arXiv preprint arXiv:2012.09136, 2020 | 4 | 2020 |
| MIA 2022 Shared Task Submission: Leveraging Entity Representations, Dense-Sparse Hybrids, and Fusion-in-Decoder for Cross-Lingual Question Answering | Z Tu, SJ Padmanabhan | arXiv preprint arXiv:2207.01940, 2022 | 3 | 2022 |