| Week | Topic(s) | Additional Resources |
|---|---|---|
| Week 01 | Introduction [pdf] | |
| Week 02 | Tokenization, Word Embeddings and Representation Learning - Tokenization [pdf] - Word Embeddings [pdf] - Contextual Embeddings [pdf] | |
| Week 03 | - LLMs Basics [pdf] | - Simple Neural Networks and Neural Language Models [pdf] - Large Language Models explained briefly by 3Blue1Brown [video] - SLP3 Book Chapter 7 [pdf] |
| Week 04 | Attention in Transformers - Attention [pdf] - Numerical Example [pdf] | - Attention Is All You Need [pdf] - The Illustrated Transformer [html] |
| Week 05 | - Transformer Architecture [pdf] | - Transformers, the tech behind LLMs [video] - Attention in transformers, step-by-step [video] - How might LLMs store facts [video] - LLM Visualization [html] - Efficient Attention Mechanisms for Large Language Models: A Survey [pdf] |