Timesheets for your agents. A tiny HTTP + SQLite server that every AI agent can POST work entries to.
You run a dozen AI agents across different repos. At 5pm you have no idea what happened today: each agent is sandboxed to its own working directory and can't write to a shared worklog.
hrs is a local daemon that gives every agent a single HTTP endpoint to push structured work entries to. You get markdown files on disk and a TUI to see what got done.
Agents POST JSON to localhost:9746/entries. Humans use hrs log. Both write to the same database.
Agents call GET /schema to learn the API. No hardcoded formats in your prompts.
Every entry lands in SQLite and renders to daily markdown files. Human-readable, git-friendly, grep-able.
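To give a feel for the output, a daily file might plausibly look like the snippet below. The layout is an assumption for illustration, not the actual rendering hrs produces:

```markdown
# 2025-06-12

## dev: built auth flow (effort: 3)
- oauth2 pkce
- token refresh
- tests
```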
Vim-style TUI to browse entries by day. Scroll with j/k, switch days with h/l, delete with d. All in your terminal.
The CLI tries the server first and falls back to writing directly to SQLite, so it works whether the daemon is running or not.
Pure Go, no cgo. Download a prebuilt binary or go install. The database is created automatically at ~/.hrs/hrs.db.
# via Go
go install github.com/heuwels/hrs@latest
# or download a binary from GitHub Releases:
# darwin_arm64 (macOS Apple Silicon)
# darwin_amd64 (macOS Intel)
# linux_amd64 / linux_arm64
# Log from any agent or terminal
hrs log -c dev -t "built auth flow" -b "oauth2 pkce;token refresh;tests" -e 3
# See what happened today
hrs ls
# Browse interactively
hrs tui
# Start the HTTP daemon (optional; the CLI works without it)
hrs serve &
Add to your CLAUDE.md, .cursorrules, or equivalent:
After completing significant work, log it:
hrs log -c dev -t "Short description" -b "outcome one;outcome two" -e 2
Log proactively. Don't wait to be asked.
Zero config: data goes to ~/.hrs/hrs.db automatically. See the full docs for agent integration, the API reference, and all CLI commands.