The foundational technical building blocks that determine what AI systems can learn, reason about, remember, and execute.
5 Mins - Shorts
https://www.youtube.com/playlist?list=PLMgr0xgBW16wfPGaejlOXbYjXPcVlF4iz
30 Mins - Relevant Almost Human Episodes
Embeddings as a representation layer and “meaning → math”:
- embeddings as vectors in high-dimensional space
- cosine similarity / vector arithmetic ("king - man + woman ≈ queen")
- semantic similarity beyond keywords / TF-IDF / tags
- clustering in embedding space + pitfalls of naive averaging
- vector DB / vector store cost structure, update cadence, static vs dynamic embeddings
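The similarity and vector-arithmetic ideas above can be sketched in a few lines. The 4-d vectors below are invented for illustration; real embeddings come from a trained model (e.g. word2vec or a sentence-transformer) and have hundreds of dimensions.

```python
import numpy as np

# Toy embedding table: hand-made 4-d vectors, NOT real model outputs.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: compares direction, ignoring vector magnitude."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vector arithmetic: "king - man + woman" should land nearest to "queen".
target = emb["king"] - emb["man"] + emb["woman"]
nearest = max(emb, key=lambda w: cosine(emb[w], target))
```

With these toy vectors, `nearest` comes out as `"queen"`, which is the point of the classic analogy example: semantic relationships show up as directions in the vector space, not as keyword overlap.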
Meta-learning as a training paradigm:
- “teach AI how to learn” / rapid adaptation
- few-shot / low-data regimes; adjacent distributions
- inner-loop / outer-loop optimization, meta-parameters
- transfer learning in RL / robotics, cross-domain adaptation
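The inner-loop / outer-loop structure can be sketched with a first-order, Reptile-style meta-update on a made-up family of 1-D linear regression tasks; this is a minimal illustration of the paradigm, not a faithful MAML implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task is a tiny regression problem y = slope * x with its own slope."""
    slope = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, slope * x

def inner_loop(w, x, y, lr=0.1, steps=5):
    """Inner loop: adapt the single parameter w to ONE task by gradient descent."""
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)  # d/dw of mean squared error
        w = w - lr * grad
    return w

# Outer loop: nudge the meta-parameter toward each task's adapted weight,
# so that a few inner steps suffice on new tasks from the same family.
w_meta = 0.0
meta_lr = 0.5
for _ in range(200):
    x, y = sample_task()
    w_task = inner_loop(w_meta, x, y)
    w_meta += meta_lr * (w_task - w_meta)

# After meta-training, adaptation to a new task starts from w_meta
# and uses only a handful of inner steps (the few-shot regime).
x_new, y_new = sample_task()
w_fast = inner_loop(w_meta, x_new, y_new, steps=3)
```

The design point is the two nested optimizations: the inner loop fits one task's few examples, while the outer loop updates the meta-parameter so that future inner loops adapt rapidly, which is the "teach AI how to learn" framing above.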