The $42 Kanban Board

This week I built a kanban board. Nothing special: Express server, React frontend, WebSocket for real-time updates. The twist? It's operated by an AI agent, and every completed task shows exactly what it cost in API tokens. 26 tasks. $42.24 total. And the distribution is fascinating.

ClawKanban is a task board that lives inside an AI-powered workflow. An autonomous agent picks up tasks, works them, and moves them to done. Standard kanban stuff, except the agent is the developer. The board itself was built this way. The agent wrote the server. The agent wrote the React UI. The agent debugged its own WebSocket issues. And now, every task card shows a purple cost badge telling you exactly what it cost to complete.

If you're running AI agents on real projects, you're burning API tokens constantly. Most people track this at the account level: you log into your provider dashboard, see a daily total, and shrug. That's like tracking engineering costs by looking at your total payroll without knowing who worked on what.

Per-task costing changes how you think about AI work. Our cheapest task was $0.10 (adding a status LED to the UI). Our most expensive was $11.46 (building cost tracking itself, which is deliciously meta). But the relationship between "how hard does this sound" and "what it actually costs" is surprisingly loose. Adding WebSocket support? $0.29. Debugging why the UI collapsed when clicking a comment box? $1.48. The investigation cost more than the infrastructure.

Tasks involving debugging or reverse-engineering are disproportionately expensive. The real-time UI fix ($3.27) wasn't complex: add fs.watch, broadcast changes. But the agent had to read code, form hypotheses, test them, and iterate. That thinking burns tokens. Compare that with "give the UI some personality" ($1.29), a creative task with a clear output. The agent just... did it. Dark theme, rounded cards, purple accents, done.

Takeaway: if you want cheap AI work, give it a clear spec.
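The number on each cost badge is conceptually simple: accumulate token usage across every API call the agent made while the task was in progress, then price it at the provider's per-million-token rates. A minimal sketch in JavaScript; the field names and rates here are hypothetical, not ClawKanban's actual accounting code:

```javascript
// Sketch of per-task cost accounting. `usage` is assumed to be the sum of
// input/output tokens over every API call made while the task was active;
// `rates` are hypothetical dollars-per-million-token prices.
function taskCost(usage, rates) {
  const input = (usage.inputTokens / 1_000_000) * rates.inputPerMTok;
  const output = (usage.outputTokens / 1_000_000) * rates.outputPerMTok;
  // The badge shows dollars and cents, so round to two decimals.
  return Math.round((input + output) * 100) / 100;
}

// Example with made-up numbers (not figures from the post):
const rates = { inputPerMTok: 3, outputPerMTok: 15 };
const usage = { inputTokens: 200_000, outputTokens: 30_000 };
console.log(taskCost(usage, rates)); // → 1.05
```

Output tokens usually cost several times more than input tokens, which is part of why "thinking-heavy" debugging tasks show up as expensive: the iteration loop generates a lot of output.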
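The real-time fix described above ("add fs.watch, broadcast changes") can be sketched in a few lines of Node. The file name, message shape, and debounce interval are assumptions for illustration; the post doesn't show ClawKanban's actual implementation:

```javascript
// Sketch: watch the board's data file and push a refresh message to every
// connected WebSocket client. Clients are anything ws-style with send()
// and a readyState.
const fs = require('fs');

// Send a JSON payload to every client that is still open; returns how
// many clients actually received it.
function broadcast(clients, payload) {
  const message = JSON.stringify(payload);
  let sent = 0;
  for (const client of clients) {
    if (client.readyState === 1) { // 1 === WebSocket.OPEN
      client.send(message);
      sent++;
    }
  }
  return sent;
}

// Wire fs.watch to the broadcast. Debounce, because editors and fs.watch
// commonly fire several events for a single save.
function watchBoard(file, clients, debounceMs = 50) {
  let timer = null;
  return fs.watch(file, () => {
    clearTimeout(timer);
    timer = setTimeout(() => {
      broadcast(clients, { type: 'board-updated' });
    }, debounceMs);
  });
}
```

The infrastructure really is this small, which is the point of the cost comparison: the $3.27 went into the agent figuring out *that* this was the fix, not into typing it.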