
Pi as Dotfiles for Agents

May 2, 2026 · AI, Agents, pi, Dotfiles, Tmux

Coding agents are starting to feel less like apps and more like shells.

That is what I like about pi. It is not just a coding agent I drop into. It is something I can shape. The same way my shell, tmux, and editor have accumulated years of small personal choices, my pi setup is becoming a set of agent dotfiles.

That matters because agents are easier to trust when you understand their tools. Claude Code has a lot built in, but most people use it as a black box. With pi, the extension layer is simple enough that I can build the pieces I actually rely on.

My setup lives here:

~/.dotfiles/pi/extensions/

This is the custom sauce.

Tmux Status

I already had a tmux status script for Claude Code and OpenCode. It colors the tmux tab based on agent state:

purple  active
yellow  waiting
green   finished
blue    compacting
red     error

The pi extension just maps pi lifecycle events to the same script.

agent_start      -> set-active
tool bash start  -> set-waiting
agent_end        -> set-finished
compact          -> set-compacting
error/retry      -> set-error

This is small, but it matters. I usually have multiple agent sessions open. I do not want to switch tabs just to see which one is done. The tmux bar tells me.
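The mapping above is small enough to sketch. This is not pi's real extension API; the event names, the hook shape, and the script path are assumptions standing in for whatever the extension actually hooks into:

```python
# Hypothetical sketch of the tmux-status extension: forward pi lifecycle
# events to the same status script Claude Code and OpenCode already use.
# Event names and the script path are assumptions, not pi's real API.
import subprocess

STATUS_SCRIPT = "~/.dotfiles/bin/agent-status"  # hypothetical script

EVENT_TO_COMMAND = {
    "agent_start": "set-active",       # purple: the agent is working
    "tool_bash_start": "set-waiting",  # yellow: a shell command is running
    "agent_end": "set-finished",       # green: ready for review
    "compact": "set-compacting",       # blue: context is being compacted
    "error": "set-error",              # red: something went wrong
    "retry": "set-error",
}

def on_event(event: str, run=subprocess.run):
    """Forward a lifecycle event to the status script; ignore unknown events."""
    command = EVENT_TO_COMMAND.get(event)
    if command is not None:
        run([STATUS_SCRIPT, command], check=False)
    return command
```

The extension is just this table plus plumbing; all the coloring logic stays in the script the other agents already share.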

Background Jobs

This is the big one.

The way I work with agents is parallel. I want to run tests, send a subagent to review code, ask another one to scout a repo, and keep the main thread focused. I could open more terminals, but then the main agent has no idea what happened.

The background jobs extension gives pi that context.

Keybindings:

ctrl+b  background the current job
ctrl+j  background bashes
ctrl+s  background subagents

Tools:

list_bashes              read_bash_output
list_subagents           read_subagent_output
wait_job                 defer_job
kill_bashes              kill_subagents

The important rule is simple:

Every background job completion is surfaced exactly once.

If the agent waits on the job, the returned output is the notification. No duplicate event later.

If nobody waits on it, pi emits a runtime event when it finishes:

Runtime event: subagent job 1970 completed.

That tells me the job is done, but it does not pretend the output has been read.
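The exactly-once rule is easy to state as bookkeeping. A minimal sketch, with invented names, and with the real blocking wait elided (this version assumes the job has already finished):

```python
# Sketch of "every background job completion is surfaced exactly once":
# completion is delivered either as the return value of a wait, or as a
# single runtime event, never both.
class JobTracker:
    def __init__(self):
        self.output = {}       # job_id -> output of finished jobs
        self.surfaced = set()  # job_ids whose completion was already delivered

    def finish(self, job_id, output):
        """Record that a background job completed."""
        self.output[job_id] = output

    def wait(self, job_id):
        """The returned output *is* the notification; no event fires later."""
        self.surfaced.add(job_id)
        return self.output[job_id]

    def runtime_events(self):
        """Emit one event per finished job that nobody waited on."""
        events = []
        for job_id in self.output:
            if job_id not in self.surfaced:
                self.surfaced.add(job_id)
                events.append(f"Runtime event: subagent job {job_id} completed.")
        return events
```

Once a completion has been surfaced, by either path, it never fires again.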

There are two kinds of background work:

Non-gated work

This is fire-and-forget background work. It notifies me when it completes, but it does not wake the model or block the answer.

Example: run a long cleanup, build something, or start optional research.

Answer-gated work

This is work the assistant must read before answering.

When it finishes, pi wakes the model with a minimal message:

Answer-gated background jobs reached terminal state:
- subagent job 6452 completed

Inspect output before final answer.

Then the assistant reads the output with read_subagent_output or read_bash_output before answering.

That is the behavior I wanted. If the result matters, the agent knows. If it does not matter, I still get notified without derailing the session.
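The gating decision itself is one branch. A sketch with invented names, assuming each completed job carries a flag saying whether it gates the answer:

```python
# Sketch of the gating decision: non-gated completions notify without
# waking the model; answer-gated completions produce the minimal wake
# message and hold the final answer until their output is read.
def completion_message(jobs):
    """jobs: list of (job_id, kind, gated) tuples for jobs that just finished."""
    gated = [(job_id, kind) for job_id, kind, is_gated in jobs if is_gated]
    if not gated:
        return None  # nothing blocks the answer; passive notification only
    lines = ["Answer-gated background jobs reached terminal state:"]
    lines += [f"- {kind} job {job_id} completed" for job_id, kind in gated]
    lines.append("")
    lines.append("Inspect output before final answer.")
    return "\n".join(lines)
```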

There is also a real UI. The background jobs menu shows running and recent jobs, lets me scroll output, and can open logs in my editor. It is not just hidden model state.

/btw

/btw is for side questions.

Sometimes I want to ask something while the main task is still going. I do not want to derail the session, and I do not want to lose the thought.

Commands:

/btw
/btw:clarify
/btw:steer
/btw:followup
/btw:last

The side answer can stay separate, or I can inject it back into the main session as context. I used this while debugging the background job behavior. It is basically a scratch conversation attached to the real one.
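A scratch conversation attached to the real one can be sketched as a fork of the main transcript. `ask_model` and the message shapes are placeholders, not pi's real API:

```python
# Sketch of /btw: fork the main transcript, ask the side question there,
# and optionally inject the answer back into the main session as context.
def btw(main_messages, question, ask_model, inject=False):
    side = list(main_messages)  # snapshot; the side thread cannot derail main
    side.append({"role": "user", "content": question})
    answer = ask_model(side)
    if inject:
        main_messages.append({
            "role": "user",
            "content": f"[/btw] Q: {question}\nA: {answer}",
        })
    return answer
```

The side thread sees the main context but never mutates it unless asked to.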

Project Memory

My memory setup is intentionally plain markdown.

.pi/memory/
  MEMORY.md
  *.md

MEMORY.md is the compact index. Topic files hold details. The directory is private and ignored through .git/info/exclude.

Rules:

read when relevant
write only when explicitly asked
no silent memory extraction

Commands:

/memory
/memory:search <query>
/memory:remember <text>
/memory:tidy

Tools:

memory_list
memory_read
memory_search
memory_write

The important part is control. If I ask pi to remember something, it can save it. If I ask it to tidy memory, it does the work visibly in the main session instead of running hidden cleanup somewhere else.
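Because memory is plain markdown, the tools stay trivial. A sketch of what `memory_search` could amount to (the function name matches the tool list above, but this body is an assumption):

```python
# Sketch of memory_search over the plain-markdown layout: scan topic
# files under .pi/memory/ and return matching lines.
from pathlib import Path

def memory_search(query, root=".pi/memory"):
    hits = []
    for path in sorted(Path(root).glob("*.md")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if query.lower() in line.lower():
                hits.append((path.name, lineno, line.strip()))
    return hits
```

Nothing here requires a database or an embedding index; grep-over-markdown is the whole design.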

Cosmetic Fixes

Not every extension is deep infrastructure. Some are just things I did not like staring at.

I changed the working indicator and messages. Those live in:

pi/working-messages.json

I changed assistant code block rendering because code blocks, tool calls, and transcript text were too visually similar.

I added ctrl+r prompt history search for the current session.

These are small, but they are exactly why I like owning the environment. A custom shell is mostly small things that stop annoying you.

Custom Providers

I also have provider extensions for my model setup.

The main one is LiteLLM. Pi can dynamically register models from my LiteLLM instances, so the agent does not care whether a model is local, remote, experimental, or routed through something else.

That is the right abstraction for me. Model plumbing belongs in the provider layer, not scattered through every workflow.
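A LiteLLM proxy exposes the OpenAI-compatible GET /v1/models endpoint, so discovery can be a single request. A sketch; how the ids get registered with pi is an assumption left to the caller:

```python
# Sketch of dynamic model discovery against a LiteLLM proxy via its
# OpenAI-compatible /v1/models endpoint.
import json
from urllib.request import Request, urlopen

def discover_models(base_url, api_key=None, fetch=None):
    """Return the model ids served by a LiteLLM proxy at base_url."""
    if fetch is None:
        def fetch(url):
            req = Request(url)
            if api_key:
                req.add_header("Authorization", f"Bearer {api_key}")
            with urlopen(req) as resp:
                return json.load(resp)
    payload = fetch(f"{base_url}/v1/models")
    return [model["id"] for model in payload.get("data", [])]
```

The agent side only ever sees model ids; whether an id is local, remote, or routed is the proxy's problem.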

Why This Matters

The value is not that my extensions are perfect. They are not.

The value is that they encode how I work:

  • tmux tells me what needs attention
  • background jobs preserve parallel work
  • gated jobs keep the model honest
  • /btw keeps side questions out of the main task
  • memory is explicit and private
  • UI annoyances are fixable
  • providers match my infrastructure

Pi is moving toward power users. That is the exciting part.

People who live with coding agents all day should have their own pi-like thing. Not my exact setup, but their own cockpit: their keybindings, their notifications, their memory rules, their providers, their UI polish.

That is what dotfiles are. They are not just config. They are accumulated knowledge about how you work.