Yoav Levine
Stanford University
Verified email at mail.huji.ac.il - Homepage
Title
Cited by
Year
Quantum entanglement in deep learning architectures
Y Levine, O Sharir, N Cohen, A Shashua
Physical Review Letters 122 (6), 065301, 2019
Cited by 244, 2019
Deep autoregressive models for the efficient variational simulation of many-body quantum systems
O Sharir, Y Levine, N Wies, G Carleo, A Shashua
Physical Review Letters 124 (2), 020503, 2020
Cited by 222, 2020
SenseBERT: Driving some sense into BERT
Y Levine, B Lenz, O Dagan, D Padnos, O Sharir, S Shalev-Shwartz, ...
Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020
Cited by 210, 2020
In-context retrieval-augmented language models
O Ram, Y Levine, I Dalmedigos, D Muhlgay, A Shashua, K Leyton-Brown, ...
Transactions of the Association for Computational Linguistics 11, 1316-1331, 2023
Cited by 164, 2023
Deep learning and quantum entanglement: Fundamental connections with implications to network design
Y Levine, D Yakira, N Cohen, A Shashua
6th International Conference on Learning Representations (ICLR), 2018
Cited by 135*, 2018
Fundamental limitations of alignment in large language models
Y Wolf, N Wies, Y Levine, A Shashua
arXiv preprint arXiv:2304.11082, 2023
Cited by 74, 2023
PMI-Masking: Principled masking of correlated spans
Y Levine, B Lenz, O Lieber, O Abend, K Leyton-Brown, M Tennenholtz, ...
9th International Conference on Learning Representations (ICLR), 2021
Cited by 53, 2021
MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning
E Karpas, O Abend, Y Belinkov, B Lenz, O Lieber, N Ratner, Y Shoham, ...
arXiv preprint arXiv:2205.00445, 2022
Cited by 45, 2022
Limits to Depth Efficiencies of Self-Attention
Y Levine, N Wies, O Sharir, H Bata, A Shashua
Advances in Neural Information Processing Systems 34 (NeurIPS), 2020
Cited by 44*, 2020
Analysis and design of convolutional networks via hierarchical tensor decompositions
N Cohen, O Sharir, Y Levine, R Tamari, D Yakira, A Shashua
arXiv preprint arXiv:1705.02302, 2017
Cited by 40, 2017
Standing on the shoulders of giant frozen language models
Y Levine, I Dalmedigos, O Ram, Y Zeldes, D Jannai, D Muhlgay, Y Osin, ...
arXiv preprint arXiv:2204.10019, 2022
Cited by 37, 2022
The learnability of in-context learning
N Wies, Y Levine, A Shashua
Advances in Neural Information Processing Systems 36, 2024
Cited by 31, 2024
Parallel context windows for large language models
N Ratner, Y Levine, Y Belinkov, O Ram, I Magar, O Abend, E Karpas, ...
arXiv preprint arXiv:2212.10947, 2022
Cited by 28, 2022
Benefits of depth for long-term memory of recurrent networks
Y Levine, O Sharir, A Shashua
6th International Conference on Learning Representations (ICLR) workshop, 2018
Cited by 28*, 2018
The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design
Y Levine, N Wies, D Jannai, D Navon, Y Hoshen, A Shashua
10th International Conference on Learning Representations (ICLR), 2022
Cited by 25, 2022
Generating benchmarks for factuality evaluation of language models
D Muhlgay, O Ram, I Magar, Y Levine, N Ratner, Y Belinkov, O Abend, ...
arXiv preprint arXiv:2307.06908, 2023
Cited by 23, 2023
Tensors for deep learning theory: Analyzing deep learning architectures via tensorization
Y Levine, N Wies, O Sharir, N Cohen, A Shashua
Tensors for Data Processing, 215-248, 2022
Cited by 20*, 2022
Which transformer architecture fits my data? A vocabulary bottleneck in self-attention
N Wies, Y Levine, D Jannai, A Shashua
International Conference on Machine Learning, 11170-11181, 2021
Cited by 17, 2021
Realizing topological superconductivity with superlattices
Y Levine, A Haim, Y Oreg
Physical Review B 96 (16), 165147, 2017
Cited by 17, 2017
Sub-task decomposition enables learning in sequence to sequence tasks
N Wies, Y Levine, A Shashua
11th International Conference on Learning Representations (ICLR), 2023
Cited by 15, 2023