Argmax
A show where three machine learning enthusiasts talk about recent papers and developments in machine learning. Watch our videos on YouTube: https://www.youtube.com/@argmaxfm
Podcasting since 2022 • 17 episodes
Latest Episodes
Mixture of Experts
In this episode we talk about the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean.
54:46
LoRA
We talk about LoRA (Low-Rank Adaptation) for fine-tuning Transformers. We are also on YouTube now! Check out the video here: https://youtu.be/lLzHr0VFi3Y
Season 2 • Episode 1 • 1:02:56
15: InstructGPT
In this episode we discuss the paper "Training language models to follow instructions with human feedback" by Ouyang et al. (2022). We discuss the RLHF paradigm and how important RL is to tuning GPT.
Season 1 • Episode 15 • 57:27