Shuhei Kurita
RIKEN AIP
Verified email at riken.jp - Homepage
Title
Cited by
Year
Reconstructing neuronal circuitry from parallel spike trains
R Kobayashi, S Kurita, A Kurth, K Kitano, K Mizuseki, M Diesmann, ...
Nature communications 10 (1), 4468, 2019
84 · 2019
ScanQA: 3D Question Answering for Spatial Scene Understanding
D Azuma, T Miyanishi, S Kurita, M Kawanabe
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
68 · 2022
Neural joint model for transition-based Chinese syntactic analysis
S Kurita, D Kawahara, S Kurohashi
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
40 · 2017
Multi-Task Semantic Dependency Parsing with Policy Gradient for Learning Easy-First Strategies
S Kurita, A Søgaard
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
32 · 2019
Generative Language-Grounded Policy in Vision-and-Language Navigation with Bayes' Rule
S Kurita, K Cho
Ninth International Conference on Learning Representations (ICLR2021), 2021
19 · 2021
Neural Adversarial Training for Semi-supervised Japanese Predicate-argument Structure Analysis
S Kurita, D Kawahara, S Kurohashi
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
14 · 2018
Visual Recipe Flow: A Dataset for Learning Visual State Changes of Objects with Recipe Flows
K Shirai, A Hashimoto, T Nishimura, H Kameko, S Kurita, Y Ushiku, S Mori
Proceedings of the 29th International Conference on Computational Linguistics, 2022
7 · 2022
RefEgo: Referring Expression Comprehension Dataset from First-Person Perception of Ego4D
S Kurita, N Katsura, E Onami
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
6 · 2023
ScanQA: 3D Question Answering for Spatial Scene Understanding
D Azuma, T Miyanishi, S Kurita, M Kawanabe
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19107-19117, 2022
5 · 2022
Cross3DVG: Baseline and Dataset for Cross-Dataset 3D Visual Grounding on Different RGB-D Scans
T Miyanishi, D Azuma, S Kurita, M Kawanabe
arXiv preprint arXiv:2305.13876, 2023
2 · 2023
ARKitSceneRefer: Text-based Localization of Small Objects in Diverse Real-World 3D Indoor Scenes
S Kato, S Kurita, C Chu, S Kurohashi
Findings of the Association for Computational Linguistics: EMNLP 2023, 784-799, 2023
1 · 2023
Iterative Span Selection: Self-Emergence of Resolving Orders in Semantic Role Labeling
S Kurita, H Ouchi, K Inui, S Sekine
Proceedings of the 29th International Conference on Computational …, 2022
1 · 2022
Text360Nav: 360-Degree Image Captioning Dataset for Urban Pedestrians Navigation
C Nishimura, S Kurita, Y Seki
Proceedings of the 2024 Joint International Conference on Computational …, 2024
2024
Text-driven Affordance Learning from Egocentric Vision
T Yoshida, S Kurita, T Nishimura, S Mori
arXiv preprint arXiv:2404.02523, 2024
2024
JDocQA: Japanese Document Question Answering Dataset for Generative Language Models
E Onami, S Kurita, T Miyanishi, T Watanabe
arXiv preprint arXiv:2403.19454, 2024
2024
Vision Language Model-based Caption Evaluation Method Leveraging Visual Context Extraction
K Maeda, S Kurita, T Miyanishi, N Okazaki
arXiv preprint arXiv:2402.17969, 2024
2024
SlideAVSR: A Dataset of Paper Explanation Videos for Audio-Visual Speech Recognition
H Wang, S Kurita, S Shimizu, D Kawahara
arXiv preprint arXiv:2401.09759, 2024
2024
Query-based Image Captioning from Multi-context 360° Images
K Maeda, S Kurita, T Miyanishi, N Okazaki
Findings of the Association for Computational Linguistics: EMNLP 2023, 6940-6954, 2023
2023
Language and Robotics: Toward Building Robots Coexisting with Human Society Using Language Interface
Y Nakamura, S Kurita, K Yoshino
Proceedings of the 13th International Joint Conference on Natural Language …, 2023
2023
Articles 1–20