Ishita Dasgupta
Senior Research Scientist, DeepMind
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Can language models learn from explanations in context?
AK Lampinen, I Dasgupta, SCY Chan, K Matthewson, MH Tessler, ...
Findings of the Association for Computational Linguistics: EMNLP 2022, 537-563, 2022
Are Convolutional Neural Networks or Transformers more like human vision?
S Tuli, I Dasgupta, E Grant, TL Griffiths
Proceedings of the Annual Meeting of the Cognitive Science Society 43 (43), 2021
Language models show human-like content effects on reasoning
I Dasgupta, AK Lampinen, SCY Chan, A Creswell, D Kumaran, ...
arXiv preprint arXiv:2207.07051, 2022
Where do hypotheses come from?
I Dasgupta, E Schulz, SJ Gershman
Cognitive psychology 96, 1-25, 2017
Evaluating compositionality in sentence embeddings
I Dasgupta, D Guo, A Stuhlmüller, SJ Gershman, ND Goodman
Proceedings of the Annual Meeting of the Cognitive Science Society 40 (40), 2018
Causal reasoning from meta-reinforcement learning
I Dasgupta, J Wang, S Chiappa, J Mitrovic, P Ortega, D Raposo, ...
arXiv preprint arXiv:1901.08162, 2019
A theory of learning to infer.
I Dasgupta, E Schulz, JB Tenenbaum, SJ Gershman
Psychological review 127 (3), 412, 2020
Remembrance of inferences past: Amortization in human hypothesis generation
I Dasgupta, E Schulz, ND Goodman, SJ Gershman
Cognition 178, 67-81, 2018
Memory as a computational resource
I Dasgupta, SJ Gershman
Trends in cognitive sciences 25 (3), 240-251, 2021
Collaborating with language models for embodied reasoning
I Dasgupta, C Kaeser-Chen, K Marino, A Ahuja, S Babayan, F Hill, ...
NeurIPS 2022 Language and Reinforcement Learning Workshop, 2022
Tell me why! Explanations support learning relational and causal structure
AK Lampinen, N Roy, I Dasgupta, SCY Chan, A Tam, J McClelland, C Yan, ...
International Conference on Machine Learning, 11868-11890, 2022
Using natural language and program abstractions to instill human inductive biases in machines
S Kumar, CG Correa, I Dasgupta, R Marjieh, MY Hu, R Hawkins, ...
Advances in Neural Information Processing Systems 35, 167-180, 2022
Transformers generalize differently from information stored in context vs in weights
SCY Chan, I Dasgupta, J Kim, D Kumaran, AK Lampinen, F Hill
Memory in Artificial and Real Intelligence (MemARI) workshop NeurIPS 2022, 2022
A buried ionizable residue destabilizes the native state and the transition state in the folding of monellin
N Aghera, I Dasgupta, JB Udgaonkar
Biochemistry 51 (45), 9058-9066, 2012
Meta-Learning of Structured Task Distributions in Humans and Machines
S Kumar, I Dasgupta, JD Cohen, ND Daw, TL Griffiths
International Conference on Learning Representations, 2021
Meta-learned models of cognition
M Binz, I Dasgupta, AK Jagadish, M Botvinick, JX Wang, E Schulz
Behavioral and Brain Sciences, 1-38, 2023
PIVOT: Iterative visual prompting elicits actionable knowledge for VLMs
S Nasiriany, F Xia, W Yu, T Xiao, J Liang, I Dasgupta, A Xie, D Driess, ...
arXiv preprint arXiv:2402.07872, 2024
Distilling internet-scale vision-language models into embodied agents
T Sumers, K Marino, A Ahuja, R Fergus, I Dasgupta
arXiv preprint arXiv:2301.12507, 2023
Passive attention in artificial neural networks predicts human visual selectivity
TA Langlois, HC Zhao, E Grant, I Dasgupta, TL Griffiths, N Jacoby
Advances in Neural Information Processing Systems 34, 2021