2026 Week 6 - Weekly Reading

State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490 - YouTube
- RLVR (Reinforcement Learning with Verifiable Rewards): we still haven't seen the limits of scaling laws here.
- "If you learn more, you forget more" (No Free Lunch): LLMs also tend to forget previously learned information as they learn new things.
- Is the dream of AGI dying?: The dream of achieving AGI with a single model is dying; instead, we are moving toward realizing AGI through the collaboration of multiple specialized agents.

rasbt/LLMs-from-scratch: Implement a ChatGPT-like LLM in PyTorch from scratch, step by step

February 8, 2026 · 1 min · Kaoru Babasaki

2026 Week 5 - Weekly Reading

Highlight of the week: I chipped my front tooth (3rd time in 5 years).

Best Ways to Build Better Habits & Break Bad Ones | James Clear - Huberman Lab
How to form good habits and break bad ones. Standard advice that sounds familiar but is solid:
- Good habits: make them obvious, attractive, easy, and satisfying.
- Bad habits: make them invisible, unattractive, difficult, and unsatisfying.
Personally, I wanted to break the habit of mindlessly checking LINE News, but I couldn't delete the LINE app itself. Inspired by this podcast, I looked it up and found a method to hide just the News tab. (Game changer!)

Tweet from Andrej Karpathy
"A few random notes from claude coding quite a bit last few weeks. Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in…" ...

February 1, 2026 · 2 min · Kaoru Babasaki

2026 Week 4 - Weekly Reading

The Best Way to Read a Book (That Nobody's Doing)
A video where Jeremy Howard explains his close reading workflow with Solveit. The core idea: "load a lot of relevant context first, then read chapter by chapter while chatting with an AI, carrying context forward each time". It feels close to what I've been doing with gptel for papers and books, but the explicit chapter-level context handoff was new to me.
(Related) Past post: A refined Emacs LLM setup

January 25, 2026 · 1 min · Kaoru Babasaki

2026 Week 3 - Weekly Reading

I haven't been updating my blog for a while, so I'm restarting with a lightweight weekly memo: things I read or watched this week that I enjoyed.

Onoguchi Snow Dome
A collector's site showcasing their snow globe collection with photos. The early-Heisei-web vibe is oddly charming.

The security paradox of local LLMs - Quesma Blog (HN)
Local LLMs feel "more secure" than cloud LLMs at first glance, but it's not that simple. The example attack prompts are fun to read (especially Attack #2).

Love Generation filming locations (map)
A fan-made location guide for a classic 90s drama I got hooked on again via Netflix. Kimura Takuya is way too cool.

January 18, 2026 · 1 min · Kaoru Babasaki

To Those Who Think Data Scientists are Becoming Obsolete

Figure 1: At 27, I have worries I can't tell anyone.

Introduction: The Problem with the "AI Replaces Experts" Narrative
In recent years, we hear everywhere that specialized white-collar jobs will be taken by AI. I've been deeply immersed in Data Science (DS) since my undergrad, through work, research, and hobbies. Lately, tech-illiterate family members and friends with no programming experience have started asking me (without any malice), "Are you still doing programming?" or "Can't AI do everything now?" ...

August 31, 2025 · 12 min · Kaoru Babasaki

My Master's Thesis Hit arXiv!

Whassup, peeps! It's been a minute since my last post (shoutout to the one person probably reading this, you the real MVP!). My Master's thesis, with some fresh updates, just dropped on arXiv. Check it:

Paper: Babasaki, K., Sugasawa, S., McAlinn, K. and Takanashi, K. (2024). Ensemble doubly robust Bayesian inference via regression synthesis. (arXiv:2409.06288)

So, this paper, it's all about takin' this ensemble method called Bayesian Predictive Synthesis (BPS) that Professor McAlinn cooked up and flexin' it into the world of causal inference, specifically for estimatin' Average Treatment Effects (ATE). We're callin' our new method "doubly robust Bayesian regression synthesis". If you wanna get into the nitty-gritty, peep the paper, ya dig? ...
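For readers new to the "doubly robust" part: the classic frequentist baseline the paper builds on is the AIPW (augmented inverse-probability-weighted) ATE estimator, which stays consistent if either the outcome models or the propensity model is correct. This is a minimal NumPy sketch of that textbook estimator as background only, not the paper's BPS-based method; all names here are my own.

```python
import numpy as np

def aipw_ate(y, t, mu1, mu0, e):
    """Doubly robust (AIPW) estimate of the Average Treatment Effect.

    y   : observed outcomes
    t   : binary treatment indicators (0/1)
    mu1 : outcome-model predictions under treatment
    mu0 : outcome-model predictions under control
    e   : estimated propensity scores, strictly in (0, 1)
    """
    # Outcome-model contrast plus IPW corrections for each arm's residuals.
    return np.mean(
        mu1 - mu0
        + t * (y - mu1) / e
        - (1 - t) * (y - mu0) / (1 - e)
    )
```

A nice sanity check: if `mu1` and `mu0` equal the true potential outcomes, both correction terms vanish and the estimate is exactly the sample mean of the individual treatment effects, whatever the propensities are.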

October 5, 2024 · 2 min · B.Kaoru