Dennis asked a simple question: “This guy wrote this open source thing that figures out how many tokens I’m using. Can you install this on my computer so I can run it myself?”
He pasted a link to a tweet. That was it. No repo name, no instructions, no technical context. Just a URL and a request.
Here is exactly what happened next — every step, every decision, every tool — documented by the agent that did the work.
Start With a Tweet, Not a Repo
The link pointed to a post on X by Paweł Huryn. The agent could not scrape X directly due to platform restrictions. So it pivoted to web search immediately.
Two searches ran in parallel. One searched the web for “PawelHuryn token usage tracking open source tool.” The other searched GitHub directly for “PawelHuryn token counter.” Neither returned a direct match on the first pass.
The agent ran two more searches. One for “Pawel Huryn github claude code token dashboard.” The other for his Twitter handle paired with “github.” This time, results came back with context from a related tweet thread and a GitHub username: phuryn.
From there, the agent fetched his GitHub profile page at github.com/phuryn and identified three repos. The one that matched was claude-usage — described as “A local dashboard for tracking your Claude Code token usage, costs, and session history.”
Total searches to get from a tweet URL to the correct repo: four.
Review the Tool Before Installing It
The agent cloned the repo and read the full README before doing anything else. Here is what it found.
claude-usage is a Python tool that reads the JSONL log files Claude Code writes to ~/.claude/projects/ after every session. It parses token counts, model names, session metadata, and cache behavior. It stores everything in a local SQLite database at ~/.claude/usage.db. Then it serves a Chart.js dashboard on localhost:8080.
No pip install. No virtual environment. No build step. Zero third-party dependencies — it uses only Python’s standard library (sqlite3, http.server, json, pathlib).
It tracks usage from Claude Code CLI, the VS Code extension, and dispatched sessions. It does not track Cowork sessions, which run server-side.
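The pipeline the README describes — read JSONL logs, store per-turn token counts in SQLite, query totals — can be sketched with nothing but the standard library. This is an illustration, not the tool's actual code: the record field names (model, input_tokens, output_tokens) and the single-table schema are assumptions, not the real Claude Code log format.

```python
import json
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical session records -- the field names are assumptions,
# not the actual Claude Code JSONL schema.
sample = [
    {"model": "claude-sonnet", "input_tokens": 1200, "output_tokens": 300},
    {"model": "claude-opus", "input_tokens": 800, "output_tokens": 150},
]

# Claude Code writes one JSONL file per session under ~/.claude/projects/;
# here we stand in a temp file for it.
log = Path(tempfile.mkdtemp()) / "session.jsonl"
log.write_text("\n".join(json.dumps(r) for r in sample))

# The tool persists to ~/.claude/usage.db; an in-memory DB works the same way.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE usage (model TEXT, input_tokens INT, output_tokens INT)")
for line in log.read_text().splitlines():
    rec = json.loads(line)
    db.execute("INSERT INTO usage VALUES (?, ?, ?)",
               (rec["model"], rec["input_tokens"], rec["output_tokens"]))

total_in = db.execute("SELECT SUM(input_tokens) FROM usage").fetchone()[0]
print(total_in)  # 2000
```

Because every piece is stdlib, the same pattern runs anywhere Python 3 is installed — which is exactly why the install step below is so short.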
Cost estimates use Anthropic API pricing as of April 2026. Opus at $6.15 per million input tokens. Sonnet at $3.69. Haiku at $1.23. These are API-equivalent costs — useful for understanding the value of a Max subscription, even though subscription pricing works differently.
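At those rates, the cost arithmetic is a one-liner. A minimal sketch, using only the per-million-input-token figures quoted above (the article does not give output-token rates, so this covers input tokens only):

```python
# API-equivalent pricing per million input tokens, as quoted in the article.
RATES = {"opus": 6.15, "sonnet": 3.69, "haiku": 1.23}

def api_equivalent_cost(model: str, input_tokens: int) -> float:
    """Estimated USD cost of input tokens at the quoted API rates."""
    return RATES[model] * input_tokens / 1_000_000

# Two million Sonnet input tokens at $3.69/M:
print(f"${api_equivalent_cost('sonnet', 2_000_000):.2f}")  # $7.38
```

This is the number the dashboard surfaces per session and per model — an estimate of what the same usage would cost on the API, not what a Max subscriber actually pays.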
Install It in One Step
Because the tool has no dependencies, installation was just a git clone. The agent cloned the repo and copied it to Dennis’s working folder. Done.
To run it, one command: python3 cli.py dashboard. That scans the logs, builds the database, and opens the dashboard in a browser. Other commands — today, stats, scan — give terminal-only summaries for quick checks.
Dennis never opened a terminal. Never ran a command. Never read a README.
Schedule a Weekly Email Report
Dennis then asked: “Are you able to run this every week and send me an email with what you find? Choose, like, Monday, I guess, morning.”
The agent used the scheduling skill built into Cowork to create a recurring task. Here are the parameters it set:
Task name: weekly-claude-usage-report
Schedule: Every Monday at 8:00 AM local time (cron: 0 8 * * 1)
What it does each run: Locates the claude-usage repo on the machine. Runs python3 cli.py scan to pick up new session data. Runs python3 cli.py stats and python3 cli.py today to gather the numbers. Composes a plain-language summary with token breakdowns by model, estimated API-equivalent costs, session counts, and any notable spikes. Emails the report to Dennis at his Gmail address.
If the tool is missing or the scan fails, the scheduled task emails Dennis to let him know something went wrong — rather than silently failing.
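The “composes a plain-language summary” step can be sketched as a pure function over the gathered numbers. Everything here is hypothetical — the dictionary keys and the report layout are illustrative assumptions, not the scheduled task's actual output:

```python
def compose_report(stats: dict) -> str:
    """Build a plain-language weekly summary. All keys are hypothetical."""
    lines = [f"Claude usage report, week of {stats['week_of']}:"]
    for model, tokens in stats["tokens_by_model"].items():
        lines.append(f"- {model}: {tokens:,} tokens")
    lines.append(f"Sessions: {stats['sessions']}")
    lines.append(f"Estimated API-equivalent cost: ${stats['cost_usd']:.2f}")
    return "\n".join(lines)

# Sample numbers, invented for illustration.
report = compose_report({
    "week_of": "2026-04-06",
    "tokens_by_model": {"sonnet": 1_200_000, "opus": 300_000},
    "sessions": 42,
    "cost_usd": 6.27,
})
print(report)
```

Keeping composition separate from the scan/stats commands and from the email send is what makes the failure path simple: if any upstream step fails, the task skips composition and emails the error instead.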
The first automatic run is the following Monday. Dennis was advised to trigger one manual “Run now” first to pre-approve the tool permissions (file access, Gmail) so that future runs execute hands-free.
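For readers unfamiliar with cron syntax, the expression 0 8 * * 1 reads right to left as: weekday 1 (Monday), any month, any day of month, hour 8, minute 0. A tiny sketch of what that schedule matches:

```python
from datetime import datetime

def matches_weekly_schedule(dt: datetime) -> bool:
    """True when dt falls on the cron slot 0 8 * * 1 (Monday 08:00)."""
    return dt.weekday() == 0 and dt.hour == 8 and dt.minute == 0

# April 6, 2026 is a Monday; April 5 is a Sunday.
print(matches_weekly_schedule(datetime(2026, 4, 6, 8, 0)))  # True
print(matches_weekly_schedule(datetime(2026, 4, 5, 8, 0)))  # False
```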
Understand Why This Matters
Claude Code on a Max plan gives you a progress bar. That is the extent of your usage visibility. No breakdown by project. No model split. No cost-per-session data. Just a monthly bill and a bar that fills up.
Paweł Huryn ran 440 sessions and 18,000 turns over 30 days. His estimated API-equivalent cost was $1,588. One day spiked to 700 million cached tokens — which turned out to be a bug on Anthropic’s side. Without this dashboard, none of that would have been visible.
For anyone running Claude Code seriously — across multiple projects, multiple models, hundreds of sessions — this kind of visibility is not optional. It is the difference between knowing what you are spending and guessing.
Count the Steps Dennis Took
Dennis pasted a link and said “install this.” Then he said “schedule it on Monday mornings.” Two messages. Zero terminal commands. Zero logins. Zero configuration.
The agent handled tweet resolution, repo discovery, README review, installation, and recurring task scheduling — including the email integration — across a single conversation.
This is what meta articles are for. Not to show that AI can do things. To show exactly how it did them, so the process is documented, repeatable, and improvable.
The tool is open source under MIT license at github.com/phuryn/claude-usage. The definitive article on meta articles and how they feed back into process improvement is at BlitzMetrics.
