Noam Brown
Research Scientist, OpenAI
Verified email at cs.cmu.edu
Title · Cited by · Year
Superhuman AI for multiplayer poker
N Brown, T Sandholm
Science 365 (6456), 885-890, 2019
Cited by 986 · 2019
Superhuman AI for heads-up no-limit poker: Libratus beats top professionals
N Brown, T Sandholm
Science 359 (6374), 418-424, 2018
Cited by 938 · 2018
Deep counterfactual regret minimization
N Brown, A Lerer, S Gross, T Sandholm
International Conference on Machine Learning, 2019
Cited by 277 · 2019
Safe and nested subgame solving for imperfect-information games
N Brown, T Sandholm
Neural Information Processing Systems, 2017
Cited by 233* · 2017
Human-level play in the game of Diplomacy by combining language models with strategic reasoning
Meta Fundamental AI Research Diplomacy Team (FAIR)†, A Bakhtin, ...
Science 378 (6624), 1067-1074, 2022
Cited by 225 · 2022
Solving imperfect-information games via discounted regret minimization
N Brown, T Sandholm
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 1829-1836, 2019
Cited by 185 · 2019
Combining deep reinforcement learning and search for imperfect-information games
N Brown, A Bakhtin, A Lerer, Q Gong
Advances in Neural Information Processing Systems 33, 17057-17069, 2020
Cited by 161 · 2020
Libratus: The Superhuman AI for No-Limit Poker.
N Brown, T Sandholm
IJCAI, 5226-5228, 2017
Cited by 152 · 2017
Depth-limited solving for imperfect-information games
N Brown, T Sandholm, B Amos
Advances in Neural Information Processing Systems 31, 2018
Cited by 97 · 2018
Hierarchical abstraction, distributed equilibrium computation, and post-processing, with application to a champion no-limit Texas Hold'em agent
N Brown, S Ganzfried, T Sandholm
Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015
Cited by 86 · 2015
Improving Policies via Search in Cooperative Partially Observable Games
A Lerer, H Hu, J Foerster, N Brown
AAAI Conference on Artificial Intelligence, 2020
Cited by 85 · 2020
Off-belief learning
H Hu, A Lerer, B Cui, L Pineda, N Brown, J Foerster
International Conference on Machine Learning, 4369-4379, 2021
Cited by 69 · 2021
Modeling strong and human-like gameplay with KL-regularized search
AP Jacob, DJ Wu, G Farina, A Lerer, H Hu, A Bakhtin, J Andreas, N Brown
International Conference on Machine Learning, 9695-9728, 2022
Cited by 59 · 2022
A unified approach to reinforcement learning, quantal response equilibria, and two-player zero-sum games
S Sokota, R D'Orazio, JZ Kolter, N Loizou, M Lanctot, I Mitliagkas, ...
arXiv preprint arXiv:2206.05825, 2022
Cited by 57 · 2022
Dream: Deep regret minimization with advantage baselines and model-free learning
E Steinberger, A Lerer, N Brown
arXiv preprint arXiv:2006.10410, 2020
Cited by 57 · 2020
Human-level performance in no-press diplomacy via equilibrium search
J Gray, A Lerer, A Bakhtin, N Brown
arXiv preprint arXiv:2010.02923, 2020
Cited by 54 · 2020
Dynamic thresholding and pruning for regret minimization
N Brown, C Kroer, T Sandholm
Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017
Cited by 54 · 2017
No-press diplomacy from scratch
A Bakhtin, D Wu, A Lerer, N Brown
Advances in Neural Information Processing Systems 34, 18063-18074, 2021
Cited by 45 · 2021
Mastering the game of no-press Diplomacy via human-regularized reinforcement learning and planning
A Bakhtin, DJ Wu, A Lerer, J Gray, AP Jacob, G Farina, AH Miller, ...
arXiv preprint arXiv:2210.05492, 2022
Cited by 43 · 2022
Regret-based pruning in extensive-form games
N Brown, T Sandholm
Advances in Neural Information Processing Systems 28, 2015
Cited by 43 · 2015
Articles 1–20