2026 Week 6 - Weekly Reading

State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490 - YouTube

- RLVR (Reinforcement Learning with Verifiable Rewards): we still haven't seen the limits of scaling laws here.
- "If you learn more, you forget more" (No Free Lunch): LLMs tend to forget previously learned information as they learn new things.
- Is the dream of AGI dying? The dream of achieving AGI with a single model is fading; instead, we are moving toward realizing AGI through the collaboration of multiple specialized agents.

rasbt/LLMs-from-scratch

Implement a ChatGPT-like LLM in PyTorch from scratch, step by step.

February 8, 2026 · 1 min · Kaoru Babasaki

2026 Week 5 - Weekly Reading

Highlight of the week: I chipped my front tooth (3rd time in 5 years).

Best Ways to Build Better Habits & Break Bad Ones | James Clear - Huberman Lab

How to form good habits and break bad ones. Standard advice that sounds familiar but is solid:

- Good habits: make them obvious, attractive, easy, satisfying
- Bad habits: make them invisible, unattractive, difficult, unsatisfying

Personally, I wanted to break the habit of mindlessly checking LINE News, but I couldn't delete the LINE app itself. Inspired by this podcast, I looked it up and found a method to hide just the News tab. (Game changer!)

Tweet from Andrej Karpathy

"A few random notes from claude coding quite a bit last few weeks. Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in…"

February 1, 2026 · 2 min · Kaoru Babasaki

2026 Week 4 - Weekly Reading

The Best Way to Read a Book (That Nobody's Doing)

A video where Jeremy Howard explains his close-reading workflow with Solveit. The core idea: "load a lot of relevant context first, then read chapter by chapter while chatting with an AI, carrying context forward each time." It feels close to what I've been doing with gptel for papers and books, but the explicit chapter-level context handoff was new to me.

(Related) Past post: A refined Emacs LLM setup

January 25, 2026 · 1 min · Kaoru Babasaki

2026 Week 3 - Weekly Reading

I haven't been updating my blog for a while, so I'm restarting with a lightweight weekly memo: things I read or watched this week that I enjoyed.

Onoguchi Snow Dome

A collector's site showcasing their snow globe collection with photos. The early-Heisei-web vibe is oddly charming.

The security paradox of local LLMs - Quesma Blog (HN)

Local LLMs feel "more secure" than cloud LLMs at first glance, but it's not that simple. The example attack prompts are fun to read (especially Attack #2).

Love Generation filming locations (map)

A fan-made location guide for a classic '90s drama I got hooked on again via Netflix. Kimura Takuya is way too cool.

January 18, 2026 · 1 min · Kaoru Babasaki