State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
- RLVR (Reinforcement Learning with Verifiable Rewards): we still haven't seen the limits of scaling laws here.
- If you learn more, you forget more (no free lunch): LLMs tend to forget previously learned information as they learn new things, a phenomenon known as catastrophic forgetting.
- Is the dream of AGI dying? The dream of achieving AGI with a single model is fading; instead, the field is moving toward realizing AGI through the collaboration of multiple specialized agents.
- rasbt/LLMs-from-scratch: Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
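The RLVR bullet above can be sketched minimally: the defining feature is that the reward comes from a programmatic verifier rather than a learned reward model. Below is a hypothetical toy verifier for integer-answer math problems; the function name and the answer-extraction rule are illustrative assumptions, not from any specific library or paper.

```python
import re

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Return 1.0 if the completion's final number matches the ground truth, else 0.0.

    Toy stand-in for an RLVR verifier: the reward is computed by exact
    checking, not by a learned model, so it cannot be "gamed" the way a
    reward model can.
    """
    # Extract all numbers and treat the last one as the model's answer.
    nums = re.findall(r"-?\d+(?:\.\d+)?", completion)
    if not nums:
        return 0.0
    return 1.0 if nums[-1] == ground_truth else 0.0
```

In a full RLVR pipeline, these 0/1 rewards would score sampled completions and drive a policy-gradient update; the verifier itself stays fixed and exact, which is what makes the reward "verifiable".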
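The forgetting bullet can be demonstrated with a deterministic toy: a one-parameter linear model trained by gradient descent on task A, then on a conflicting task B, loses task A entirely. This is a minimal sketch in plain Python; the tasks and hyperparameters are made up for illustration and are far simpler than anything in a real LLM.

```python
def train(w, xs, ys, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error for the model y = w * x."""
    n = len(xs)
    for _ in range(steps):
        grad = 2.0 * sum((w * x - y) * x for x, y in zip(xs, ys)) / n  # d/dw MSE
        w -= lr * grad
    return w

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0]
ys_a = [2.0 * x for x in xs]    # task A: y = 2x
ys_b = [-2.0 * x for x in xs]   # task B: y = -2x (directly conflicts with A)

w = 0.0
w = train(w, xs, ys_a)
loss_a_before = mse(w, xs, ys_a)  # near zero: task A is learned

w = train(w, xs, ys_b)            # sequential training on task B only
loss_a_after = mse(w, xs, ys_a)   # large: task A has been "forgotten"
```

The single weight can only encode one mapping, so fitting task B necessarily overwrites task A; the same tension, at vastly larger scale, is the "learn more, forget more" trade-off the note describes.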