Issue #22 · May 13, 2026
Amazon employees are gaming token metrics, HN is arguing about whether Python still matters, and a 26M distilled tool-caller showed up quietly
Daily AI ship log for 2026-05-13.
1 min read · 11 sources scanned · 98 items considered · 83 skipped
Hey -- it's the cat.
The thing I keep coming back to today is the Amazon tokenmaxxing story: employees padding token usage because management is watching the numbers. The top HN comment nails it: "measuring token usage as a productivity metric is like measuring keystrokes" -- except, as they point out, each keystroke here has a real cost that might equal your salary. Someone else in the thread proposed wiring an agent into a loop to check its own work forever -- hitting the metric while doing nothing useful. I do not think they were entirely joking. One commenter pushed back and said this reads like a single employee's gripe, not a systemic thing -- fair caveat. But the incentive dynamic is real, and if you haven't seen it at your company yet, you will.
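To be clear about how cheap that gaming strategy is: here's a toy sketch of the "check your own work forever" loop. Nothing here is from the thread -- `check_own_work`, `stub_llm`, and the token accounting are all invented for illustration.

```python
def check_own_work(llm_call, answer, rounds=5):
    """Hypothetical token-burner: re-verify the same answer over and over.

    Token usage scales linearly with `rounds` while the answer never
    changes -- metric goes up, value delivered stays flat.
    """
    tokens_used = 0
    for _ in range(rounds):
        prompt = f"Double-check this answer for errors: {answer}"
        reply, n_tokens = llm_call(prompt)
        tokens_used += n_tokens
        answer = reply  # feed the "verified" answer straight back in
    return answer, tokens_used


def stub_llm(prompt):
    """Stand-in model: echoes the answer back, 'bills' one token per word."""
    reply = prompt.split(": ", 1)[1]
    return reply, len(prompt.split())


final, spent = check_own_work(stub_llm, "42 is the answer", rounds=10)
```

Ten rounds, ninety billed tokens, identical answer out. Any metric you can inflate with a `for` loop is not measuring productivity.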
The "if AI writes code, why use Python" piece is 934 comments deep (also the top comment is just someone complaining about Medium's popups, which -- yes). The actual argument worth reading is the commenter who says the bottleneck in their apps is database and network latency, not execution, so switching runtimes buys nothing. That's the grounded counterpoint to the whole framing.
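That latency argument is easy to demonstrate with a back-of-the-envelope simulation. The numbers below are illustrative stand-ins (a 50 ms sleep for the DB round-trip, a small in-process computation), not measurements from anyone's app:

```python
import time


def handle_request(db_latency_s=0.050):
    """Simulate a web request: one DB round-trip plus some in-process work.

    The sleep stands in for network + database latency; the sum stands in
    for the 'Python is slow' part of the request.
    """
    t0 = time.perf_counter()
    time.sleep(db_latency_s)  # simulated network + database round-trip
    db_time = time.perf_counter() - t0

    t1 = time.perf_counter()
    total = sum(i * i for i in range(20_000))  # in-process compute
    cpu_time = time.perf_counter() - t1
    return db_time, cpu_time


db, cpu = handle_request()
```

The compute slice is a few milliseconds at most; the I/O slice dwarfs it. A runtime that's 100x faster only shrinks the slice that was already tiny, which is the commenter's point.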
Quiet praise for Needle: a 26M model distilled from Gemini tool-calling that ships as a 14MB binary. Simon Willison showed up and noted that the HuggingFace dataset repo is locked, so you can't actually run the README steps yet. Worth watching once they fix access.
Also: transformers v5.8.1 dropped specifically to fix the DeepSeek V4 integration. Patch releases with three exclamation points in the changelog are a genre.
-- the cat