Yao Zhao
Google Brain
PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization
J Zhang, Y Zhao, M Saleh, P Liu
International conference on machine learning, 11328-11339, 2020
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Adversarial attacks and defences competition
A Kurakin, I Goodfellow, S Bengio, Y Dong, F Liao, M Liang, T Pang, ...
The NIPS'17 Competition: Building Intelligent Systems, 195-231, 2018
Paragraph-level neural question generation with maxout pointer and gated self-attention networks
Y Zhao, X Ni, Y Ding, Q Ke
Proceedings of the 2018 conference on empirical methods in natural language …, 2018
The tethering of chromatin to the nuclear envelope supports nuclear mechanics
SM Schreiner, PK Koo, Y Zhao, SGJ Mochrie, MC King
Nature communications 6 (1), 7159, 2015
TALM: Tool augmented language models
A Parisi, Y Zhao, N Fiedel
arXiv preprint arXiv:2205.12255, 2022
Planning with learned entity prompts for abstractive summarization
S Narayan, Y Zhao, J Maynez, G Simões, V Nikolaev, R McDonald
Transactions of the Association for Computational Linguistics 9, 1475-1492, 2021
SLiC-HF: Sequence likelihood calibration with human feedback
Y Zhao, R Joshi, T Liu, M Khalman, M Saleh, PJ Liu
arXiv preprint arXiv:2305.10425, 2023
Calibrating sequence likelihood improves conditional language generation
Y Zhao, M Khalman, R Joshi, S Narayan, M Saleh, PJ Liu
The Eleventh International Conference on Learning Representations, 2023
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
Statistical rejection sampling improves preference optimization
T Liu, Y Zhao, R Joshi, M Khalman, M Saleh, PJ Liu, J Liu
arXiv preprint arXiv:2309.06657, 2023
Investigating efficiently extending transformers for long input summarization
J Phang, Y Zhao, PJ Liu
arXiv preprint arXiv:2208.04347, 2022
Out-of-distribution detection and selective generation for conditional language models
J Ren, J Luo, Y Zhao, K Krishna, M Saleh, B Lakshminarayanan, PJ Liu
The Eleventh International Conference on Learning Representations, 2023
A well-composed text is half done! composition sampling for diverse conditional generation
S Narayan, G Simões, Y Zhao, J Maynez, D Das, M Collins, M Lapata
arXiv preprint arXiv:2203.15108, 2022
SEAL: Segment-wise extractive-abstractive long-form text summarization
Y Zhao, M Saleh, PJ Liu
arXiv preprint arXiv:2006.10213, 2020
DeepSeek LLM: Scaling open-source language models with longtermism
X Bi, D Chen, G Chen, S Chen, D Dai, C Deng, H Ding, K Dong, Q Du, ...
arXiv preprint arXiv:2401.02954, 2024
Direct language model alignment from online ai feedback
S Guo, B Zhang, T Liu, T Liu, M Khalman, F Llinares, A Rame, T Mesnard, ...
arXiv preprint arXiv:2402.04792, 2024
ForumSum: A multi-speaker conversation summarization dataset
M Khalman, Y Zhao, M Saleh
Findings of the Association for Computational Linguistics: EMNLP 2021, 4592-4599, 2021
SMART: Sentences as basic units for text evaluation
RK Amplayo, PJ Liu, Y Zhao, S Narayan
arXiv preprint arXiv:2208.01030, 2022
Self-evaluation improves selective generation in large language models
J Ren, Y Zhao, T Vu, PJ Liu, B Lakshminarayanan
arXiv preprint arXiv:2312.09300, 2023