
Worktree CLI: Parallel Feature Shipping with AI Agents

Nicolas Tinte
January 12, 2026
TL;DR

This post covers how we built Worktree CLI, a tool that lets us run multiple AI coding agents simultaneously by spinning up isolated local development environments on demand.

Motivation

We believe in collapsing the talent stack: the frontend engineer who also designs, the product leader who also writes code, the designer who also writes copy. Tighter conduits for decision-making and synthesizing information create speed. When one person holds context across multiple domains, you skip the handoff meetings, the spec documents, the alignment sessions.

Our team is small, multi-disciplinary, and heavily AI-enabled. AI agents multiply what each person can do. A single engineer with Claude Code can run multiple development streams in parallel. But only if the infrastructure supports it. When your local environment becomes the bottleneck, you've capped what AI can do for you.

We built Worktree CLI so our small team could operate like a much larger one.

The Problem

Running multiple development streams on a single local environment creates conflicts. Branch switches require server restarts. Database state from one feature pollutes another. Port collisions break things.

This gets worse with AI coding agents. When you have Claude Code implementing a feature, another agent reproducing a bug, and a third reviewing a PR, they all need running environments. A single localhost can't support that.

The Solution

Worktree CLI spins up isolated AirOps development environments on demand. Each environment gets its own ports, databases, and URL.

bin/worktree create feature-name
# → Creates isolated environment at http://feature-name.localhost


We built this on top of Git worktrees, which allow multiple branches to be checked out simultaneously into separate directories. We added AirOps-specific tooling to handle environment setup, database isolation, and local URL routing.
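For reference, the raw Git primitive looks like this (directory and branch names are illustrative):

# Plain Git: check out a new branch into its own directory alongside
# the main checkout, with no second clone required.
git worktree add ../airops-feature-name -b feature-name

# List every active checkout of this repository
git worktree list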

Technical Approach

We tested two approaches:

  1. Mostly local: Running services directly on the host machine
  2. All Docker: Running everything in containers

We went with Docker. The main advantage is portability: we can share environments with people outside engineering, and we have the option to run these in the cloud in the future.

The tradeoff is resource consumption and initial spawn time. But if you keep worktrees long-lived instead of creating new ones each time, the spawn time becomes a one-time cost.
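As a sketch of how that isolation can work with Docker Compose (our actual setup differs in its details), project names namespace containers, networks, and volumes, so two worktrees can run the same stack side by side:

# Illustrative: Compose namespaces containers, networks, and volumes by
# project name, so the same stack runs twice without clashing. Host port
# bindings still need to differ per project, e.g. via env vars.
COMPOSE_PROJECT_NAME=feature-a docker compose up -d
COMPOSE_PROJECT_NAME=feature-b docker compose up -d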

How It Works

When you run bin/worktree create feature-name:

  1. Creates a new folder with a fresh Git checkout
  2. Creates a new branch with the given name
  3. Copies the necessary env files to boot AirOps locally
  4. Outputs the URL where the environment will be available
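
A simplified sketch of those steps (the real script handles more, and file names here are illustrative):

#!/usr/bin/env bash
# Simplified sketch of bin/worktree create, not the actual script.
set -euo pipefail
name="$1"
dir="../worktrees/$name"

git worktree add "$dir" -b "$name"   # steps 1-2: fresh checkout on a new branch
cp .env.development "$dir/"          # step 3: env files needed to boot AirOps
echo "http://$name.localhost"        # step 4: where it will be reachable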

Each worktree is fully isolated. The environments don't share any data, so you can have different users, workspaces, and database state in each one.

Usage

Setup (one-time):

bin/worktree proxy setup


This configures local DNS and a reverse proxy so *.localhost domains resolve and route to the correct environment.
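
The general pattern here (our setup may differ in specifics) is a wildcard DNS rule pointing *.localhost at the loopback address, plus a proxy that routes each hostname to the matching worktree's ports. With dnsmasq, the wildcard rule looks like this:

# Illustrative only: wildcard *.localhost to loopback via dnsmasq.
# Config paths vary by platform, and this is not necessarily what
# bin/worktree proxy setup does under the hood.
echo 'address=/.localhost/127.0.0.1' | sudo tee /etc/dnsmasq.d/localhost-wildcard.conf
# On macOS, also point the system resolver for .localhost at dnsmasq:
# echo 'nameserver 127.0.0.1' | sudo tee /etc/resolver/localhost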

Create and start an environment:

bin/worktree create err-2002
bin/worktree start err-2002
bin/worktree setup err-2002
# → http://err-2002.localhost is now live


Each environment gets its own Docker containers and database.

Manage environments:

bin/worktree                   # See all environments and their status
bin/worktree stop err-2002     # Stop (preserves data)
bin/worktree delete err-2002   # Delete folder and branch

Use Cases

Parallel agent orchestration. Run Claude Code on a feature in one worktree while debugging in another. Each agent gets a stable environment without contention.
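
A minimal sketch of that session layout (paths are illustrative; each command runs in its own terminal):

# One Claude Code session per worktree, each in its own terminal, so
# neither agent competes for ports or database state.
cd worktrees/feature-a && claude    # agent 1: implementing the feature
cd worktrees/err-2002 && claude     # agent 2: reproducing the bug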

PR review without context switching. Spin up a dedicated worktree, test the PR in isolation, delete when done. Main environment stays untouched.

Bug reproduction. Create clean environments that match production conditions without polluting the development database.

Demo environments. Spin up a worktree, seed with demo data, share the URL.

Constraints

Memory: Each AirOps instance can consume up to 3GB of Docker memory, so RAM sets the ceiling: at roughly 3GB apiece, a 16GB machine realistically supports three or four environments alongside everything else you're running.

First boot time: Initial environment creation takes a while because it has to build Docker images, set up the database, and download files. Subsequent boots are much faster since those artifacts are cached. The workaround: keep worktrees long-lived instead of spawning new ones each time, and switch branches within them as needed.
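
In practice that means reusing an existing worktree and switching branches inside it:

# Reuse a long-lived worktree instead of spawning a fresh one; Docker
# images and downloaded files stay cached, so boots are fast.
cd worktrees/long-lived   # hypothetical path
git fetch origin
git switch err-2003       # a branch can only be checked out in one worktree at a time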

Rough edges: We're still exploring this workflow and iterating on the tooling. We're working on improving the initial boot time.

What's Next

  • Ephemeral cloud environments that spin up on PR creation
  • Shared worktree configs for team-wide reproducibility
  • Automatic cleanup policies for stale environments

Join Us

AirOps helps brands get found in the AI era. We're building the first end-to-end content engineering platform, giving marketing teams the systems to win visibility across traditional and AI search.

Our engineering team is small, multi-disciplinary, and ships fast. We build internal tools like Worktree CLI because velocity matters when you're defining a new category. If you want to work on AI search infrastructure with a team that takes craft seriously, we're hiring.
