| Time | Day 1 (6th Sept.) | Day 2 (7th Sept.) |
|---|---|---|
| 9am to 10.30am | Deep-dive into Transformers (and Attention) | State-of-the-art LLM inference |
| 11am to 1pm | Build your own GPT model | KV cache setup for LLM inference |
| 2pm to 3.45pm | GPUs and the LLM lifecycle | Distributed training algorithms with multiple GPUs |
| 4pm to 5.30pm | Resource usage and profiling of LLMs | Distributed training algorithms with multiple GPUs (contd.) |