Anas Awadalla
Verified email at cs.washington.edu - Homepage
Title · Cited by · Year
Openflamingo: An open-source framework for training large autoregressive vision-language models
A Awadalla, I Gao, J Gardner, J Hessel, Y Hanafy, W Zhu, K Marathe, ...
arXiv preprint arXiv:2308.01390, 2023
315* · 2023
Are aligned neural networks adversarially aligned?
N Carlini, M Nasr, CA Choquette-Choo, M Jagielski, I Gao, PWW Koh, ...
Advances in Neural Information Processing Systems 36, 2024
159 · 2024
Multimodal c4: An open, billion-scale corpus of images interleaved with text
W Zhu, J Hessel, A Awadalla, SY Gadre, J Dodge, A Fang, Y Yu, ...
Advances in Neural Information Processing Systems 36, 2024
105 · 2024
Visit-bench: A benchmark for vision-language instruction following inspired by real-world use
Y Bitton, H Bansal, J Hessel, R Shao, W Zhu, A Awadalla, J Gardner, ...
arXiv preprint arXiv:2308.06595, 2023
35 · 2023
Reliable and trustworthy machine learning for health using dataset shift detection
C Park, A Awadalla, T Kohno, S Patel
Advances in Neural Information Processing Systems 34, 3043-3056, 2021
29 · 2021
Exploring the landscape of distributional robustness for question answering models
A Awadalla, M Wortsman, G Ilharco, S Min, I Magnusson, H Hajishirzi, ...
arXiv preprint arXiv:2210.12517, 2022
20* · 2022
Catwalk: A unified language model evaluation framework for many datasets
D Groeneveld, A Awadalla, I Beltagy, A Bhagia, I Magnusson, H Peng, ...
arXiv preprint arXiv:2312.10253, 2023
3 · 2023
Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness
KR Chandu, L Li, A Awadalla, X Lu, JS Park, J Hessel, L Wang, Y Choi
arXiv preprint arXiv:2407.01942, 2024
— · 2024
MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
A Awadalla, L Xue, O Lo, M Shu, H Lee, EK Guha, M Jordan, S Shen, ...
arXiv preprint arXiv:2406.11271, 2024
— · 2024