Liran Tam
Introductory / Core Works:
- Finn et al. (2017), MAML: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv:1703.03400
- Rajeswaran et al. (2019), iMAML: Meta-Learning with Implicit Gradients. arXiv:1909.04630
- Hospedales et al. (2021), Meta-Learning in Neural Networks: A Survey. arXiv:2004.05439
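The core idea shared by the works above — MAML's inner (per-task) gradient step and outer (meta) update through it — can be sketched on a toy scalar problem. This is a minimal illustration under made-up assumptions (each "task" just wants the parameter to equal a target `t`, so all gradients are closed-form), not code from any of the listed papers:

```python
# Toy MAML sketch: task i has loss L_i(w) = (w - t_i)^2.
# Inner loop: one gradient step per task. Outer loop: differentiate the
# post-adaptation loss through that step (the (1 - 2*alpha) factor is the
# second-order term that first-order variants like FOMAML drop).

def maml_step(w, tasks, alpha=0.1, beta=0.05):
    """One meta-update over a batch of scalar tasks."""
    meta_grad = 0.0
    for t in tasks:
        w_adapted = w - alpha * 2.0 * (w - t)                     # inner step
        meta_grad += 2.0 * (w_adapted - t) * (1.0 - 2.0 * alpha)  # dL_i(w')/dw
    return w - beta * meta_grad / len(tasks)

tasks = [-1.0, 0.0, 2.0]  # hypothetical task "targets"
w = 5.0                   # meta-parameter (initialization being learned)
for _ in range(200):
    w = maml_step(w, tasks)
# w converges to the point from which one inner step helps every task most
```

With quadratic losses that optimum is simply the mean of the targets; real MAML replaces the scalar with network weights and autodiff handles the chain rule.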
Advanced / Recent Developments:
- Choe et al. (2023), SAMA: Making Scalable Meta Learning Practical. arXiv:2310.05674
- Kemaev et al. (2025), Scalable Meta-Learning via Mixed-Mode Differentiation. arXiv:2505.00793
- Chayti et al. (2024), A New First-Order Meta-Learning Algorithm with Convergence Guarantees. arXiv:2409.03682
- Perera et al. (2024), DIPA: Discriminative Sample-Guided and Parameter-Efficient Feature Space Adaptation for Few-Shot Cross-Domain Learning. arXiv:2403.04492
- Sinha et al. (2024), MAML-en-LLM: Model Agnostic Meta-Training of LLMs for Improved In-Context Learning. arXiv:2405.11446
(Very) Cool Applications:
- Sun et al. (2024), Learning to (Learn at Test Time): RNNs with Expressive Hidden States. arXiv:2407.04620
// They made RNN hidden states "learn on the fly" at test time, letting the model adapt to long sequences without(!) expensive attention… basically smarter memory that updates itself :)
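The "hidden state that learns" idea can be sketched in a few lines. This is my drastic simplification of the TTT layer, not the paper's implementation: the recurrent state is a weight matrix `W`, and each step is one gradient step on a self-supervised reconstruction loss for the current token (the real model uses learned projections and a trained self-supervised task):

```python
import numpy as np

def ttt_step(W, x, lr=0.1):
    """One recurrent step: the 'hidden state' W learns to reconstruct x."""
    pred = W @ x                    # self-supervised prediction for this token
    grad = np.outer(pred - x, x)    # gradient of 0.5 * ||W x - x||^2 w.r.t. W
    return W - lr * grad            # updated hidden state = updated learner

rng = np.random.default_rng(0)
x = rng.normal(size=4)
x = x / np.linalg.norm(x)           # unit-norm toy token, for a stable demo
W = np.zeros((4, 4))                # initial hidden state
for _ in range(300):                # feed the same token repeatedly
    W = ttt_step(W, x)
# after enough steps W @ x ≈ x: the state has "memorized" this input
```

The point of the construction: memory capacity scales with the expressiveness of the inner learner rather than with a fixed-size vector, and each step costs a constant amount of compute instead of attention over the whole history.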
- Bartler et al. (2021), MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaptation. arXiv:2103.16201
// Adapt models at test time (deployment) using self-supervised signals, so they can handle new data distributions without extra labels.
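A generic sketch of that deployment-time loop, heavily simplified: MT3 itself meta-trains with a BYOL-style self-supervised loss, but the same "adapt on unlabeled test data" pattern can be shown with a simpler surrogate — minimizing prediction entropy on the test batch (the TENT-style signal). Everything below (the tiny linear classifier, shapes, learning rate) is a made-up toy setup:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

def adapt(W, X, lr=0.1, steps=50):
    """A few unsupervised gradient steps on the unlabeled test batch X:
    minimize mean prediction entropy -- no labels involved."""
    n = X.shape[0]
    for _ in range(steps):
        p = softmax(X @ W)
        logp = np.log(p + 1e-12)
        # d(mean entropy)/d(logits), via the softmax Jacobian
        g_logits = p * ((p * logp).sum(axis=1, keepdims=True) - logp)
        W = W - lr * (X.T @ g_logits) / n
    return W

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 5))          # unlabeled "shifted" test batch
W0 = 0.1 * rng.normal(size=(5, 3))    # deployed classifier weights
W1 = adapt(W0, X)
# predictions become more confident on the new distribution, labels unseen
```

MT3's contribution on top of this pattern is the meta-training: the model is explicitly trained (MAML-style) so that a few such self-supervised steps at test time actually improve accuracy, rather than hoping they do.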
Infrastructures // Everything you need to start playing with Meta-Learning:
Online Courses / Tutorials:
Assaf Elovic