Pang Wei Koh
Cited by
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Understanding black-box predictions via influence functions
PW Koh, P Liang
International Conference on Machine Learning, 1885-1894, 2017
Mobility network models of COVID-19 explain inequities and inform reopening
S Chang*, E Pierson*, PW Koh*, J Gerardin, B Redbird, D Grusky, ...
Nature, 1-6, 2020
Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization
S Sagawa*, PW Koh*, TB Hashimoto, P Liang
International Conference on Learning Representations, 2019
WILDS: A Benchmark of in-the-Wild Distribution Shifts
PW Koh*, S Sagawa*, H Marklund, SM Xie, M Zhang, A Balsubramani, ...
International Conference on Machine Learning, 5637-5664, 2021
Certified defenses for data poisoning attacks
J Steinhardt*, PW Koh*, PS Liang
Advances in Neural Information Processing Systems, 3517-3529, 2017
Concept Bottleneck Models
PW Koh*, T Nguyen*, YS Tang*, S Mussmann, E Pierson, B Kim, P Liang
International Conference on Machine Learning, 5338-5348, 2020
Peer and self assessment in massive online classes
C Kulkarni, PW Koh, H Le, D Chia, K Papadopoulos, J Cheng, D Koller, ...
Design Thinking Research, 131-168, 2015
On random weights and unsupervised feature learning
A Saxe, PW Koh, Z Chen, M Bhand, B Suresh, AY Ng
Proceedings of the 28th International Conference on Machine Learning (ICML …, 2011
Tiled convolutional neural networks
QV Le, J Ngiam, Z Chen, D Chia, PW Koh, AY Ng
Advances in Neural Information Processing Systems, 1279-1287, 2010
Mapping the pairwise choices leading from pluripotency to human bone, heart, and other mesoderm cell types
KM Loh*, A Chen*, PW Koh, TZ Deng, R Sinha, JM Tsai, AA Barkal, ...
Cell 166 (2), 451-467, 2016
Just train twice: Improving group robustness without training group information
EZ Liu, B Haghgoo, AS Chen, A Raghunathan, PW Koh, S Sagawa, ...
International Conference on Machine Learning, 6781-6792, 2021
Sparse filtering
J Ngiam, PW Koh, Z Chen, SA Bhaskar, AY Ng
Advances in Neural Information Processing Systems, 1125-1133, 2011
Toward trustworthy AI development: mechanisms for supporting verifiable claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
An investigation of why overparameterization exacerbates spurious correlations
S Sagawa*, A Raghunathan*, PW Koh*, P Liang
International Conference on Machine Learning, 8346-8356, 2020
OpenFlamingo: An open-source framework for training large autoregressive vision-language models
A Awadalla, I Gao, J Gardner, J Hessel, Y Hanafy, W Zhu, K Marathe, ...
arXiv preprint arXiv:2308.01390, 2023
Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization
JP Miller, R Taori, A Raghunathan, S Sagawa, PW Koh, V Shankar, ...
International Conference on Machine Learning, 7721-7735, 2021
Stronger data poisoning attacks break data sanitization defenses
PW Koh, J Steinhardt, P Liang
Machine Learning, 1-47, 2022
FActScore: Fine-grained atomic evaluation of factual precision in long form text generation
S Min, K Krishna, X Lyu, M Lewis, W Yih, PW Koh, M Iyyer, L Zettlemoyer, ...
arXiv preprint arXiv:2305.14251, 2023
Learning deep energy models
J Ngiam, Z Chen, PW Koh, AY Ng
ICML, 2011