
I stumbled upon how Boris Cherny—yes, the guy who made Claude Code—actually uses his own creation. And honestly? It's kind of wild. Here's the breakdown:

1. Parallel Execution Setup

He runs 5 Claude instances in parallel from the terminal, numbering tabs 1-5 and using system notifications to catch when input is needed. Because apparently, one Claude just isn't enough.
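His exact terminal setup isn't shown, but the idea is easy to sketch with tmux. Everything here is an assumption for illustration (session name, window names, tmux as the multiplexer); the commands are collected and printed for review rather than executed directly:

```shell
# Sketch: launch 5 numbered Claude Code instances in tmux windows.
# Commands are printed for review -- pipe the output to `sh` to run them.
session="claude-farm"   # hypothetical session name
cmds="tmux new-session -d -s $session"
for i in 1 2 3 4 5; do
  cmds="$cmds
tmux new-window -t $session -n claude-$i claude"
done
printf '%s\n' "$cmds"
```

The numbered window names make it easy to tell which instance is pinging you when a notification fires.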

2. Web + Local in Parallel

On top of the local setup, he runs another 5-10 Claude instances on claude.ai/code. He hands off local sessions to the web (using --teleport for two-way switching), starts sessions directly in Chrome, and even kicks off sessions from the iOS app in the morning and checks on them later. This man is running a Claude farm.

3. Model Choice: Opus 4.5 with Thinking

Uses Opus 4.5 with thinking for everything. Claims it's the best coding model he's ever used. Yes, it's bigger and slower than Sonnet, but it needs less steering and is better with tools—ultimately delivering faster end results than smaller models. The "go big or go home" approach, basically.

4. Team Knowledge in CLAUDE.md

Maintains a single CLAUDE.md file that the whole team shares in the Claude Code repo. It's checked into git, with the team contributing multiple times per week. Every time Claude does something wrong? Add it to CLAUDE.md so it doesn't repeat the mistake. Different teams maintain their own versions, each responsible for keeping theirs up to date.
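The actual file isn't public, but a hypothetical CLAUDE.md in this "record every mistake" style might look like (all entries below are invented examples):

```markdown
# CLAUDE.md

## Conventions
- Use pnpm, not npm, for all package scripts.

## Known mistakes (add a line each time Claude gets something wrong)
- Do not edit generated files under src/gen/; change the schema instead.
- Run the typechecker before claiming a fix is complete.
```

Because it lives in git, a bad rule gets reverted like any other bad change, and the file compounds in value as the team adds to it.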

5. Updating CLAUDE.md During Code Reviews

During code reviews, they tag @claude on colleagues' PRs to add content to CLAUDE.md as part of the PR process, using the Claude Code GitHub Action (installed via /install-github-action). Very much in the spirit of Dan Shipper's "Compounding Engineering" concept.

6. Plan Mode + Auto-Accept Workflow

Starts most sessions in Plan mode (shift+tab twice). If the goal is writing a PR, he iterates with Claude in Plan mode until the plan feels right. Once locked in, switches to auto-accept edits mode—and Claude usually nails it in one shot. A good plan really matters.

7. Slash Commands for Repetitive Tasks

Creates slash commands for "inner loop" workflows that happen multiple times a day. Saves on repetitive prompting, and Claude can leverage these too. Commands are checked into git under .claude/commands/. Example: /commit-push-pr gets used dozens of times daily. Inline bash pre-computes info like git status to avoid unnecessary back-and-forth with the model.
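The real /commit-push-pr isn't shared, but Claude Code custom commands are markdown files, and inline bash (the !`...` lines) runs before the prompt reaches the model. A sketch of what such a command file could look like (the frontmatter values and git/gh steps are assumptions):

```markdown
---
description: Commit staged work, push, and open a PR
allowed-tools: Bash(git add:*), Bash(git commit:*), Bash(git push:*), Bash(gh pr create:*)
---

## Context (pre-computed so the model doesn't have to ask)
- Status: !`git status --short`
- Branch: !`git branch --show-current`
- Recent commits: !`git log --oneline -5`

## Task
Commit the changes above with a sensible message, push the branch,
and open a pull request with `gh pr create`.
```

Saved as .claude/commands/commit-push-pr.md, this becomes the /commit-push-pr slash command for everyone who checks out the repo.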

8. Sub-Agents

Regularly uses multiple sub-agents:

  • code-simplifier: Simplifies code after Claude finishes

  • verify-app: Detailed instructions for end-to-end (E2E) testing of Claude Code

Same concept as slash commands—automate the most common workflows you do on most PRs.
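The actual agent definitions aren't shared, but Claude Code sub-agents are markdown files with YAML frontmatter under .claude/agents/. A hypothetical code-simplifier in that format (the tool list and instructions are invented):

```markdown
---
name: code-simplifier
description: Simplify and clean up code after the main agent finishes a change
tools: Read, Edit, Grep, Glob
---

You are a code simplifier. After a change lands, look for dead code,
needless abstraction, and duplicated logic in the touched files, and
propose the smallest diff that makes the code clearer without changing
behavior.
```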

9. PostToolUse Hooks for Code Formatting

Uses PostToolUse hooks to handle Claude's code formatting. Claude generates well-formatted code most of the time by default; the hook cleans up the remainder so formatting errors never surface later in CI.
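His hook config isn't shown, but the general shape in .claude/settings.json is a matcher on the editing tools plus a command that receives the tool event as JSON on stdin. A sketch, assuming Prettier as the formatter and jq to pull the edited file path out of the event:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -r npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

Because the hook runs after every edit, formatting drift is corrected immediately instead of piling up into a CI failure.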

10. Permission Management

Doesn't use --dangerously-skip-permissions. Instead, uses /permissions to pre-allow common bash commands known to be safe in the environment. Avoids unnecessary permission prompts. Most of this is checked into .claude/settings.json and shared with the team.
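Pre-allowed permissions live under the "permissions" key in .claude/settings.json. The specific commands below are assumptions (what's safe varies by environment), but the rule format is Claude Code's Tool(pattern) syntax:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status:*)",
      "Bash(git diff:*)",
      "Bash(pnpm test:*)",
      "Bash(pnpm lint:*)"
    ]
  }
}
```

Checked into the repo, these allowances apply to every teammate's sessions, so nobody re-answers the same prompts.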

11. Tool Integration via Claude Code

Lets Claude Code handle all the tooling:

  • Search and post to Slack (via MCP server)

  • Run BigQuery queries (via bq CLI) for analytics questions

  • Fetch error logs from Sentry

Slack MCP config is checked into .mcp.json and shared with the team.
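The team's actual .mcp.json isn't shown; a sketch of the file's shape, using a commonly referenced Slack MCP server package as a stand-in (the package name and env var are assumptions):

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}"
      }
    }
  }
}
```

Keeping the token in an environment variable rather than the file means the config can be committed safely.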

12. Handling Long-Running Tasks

For very long tasks, picks from three approaches:

  • (a) Prompt to verify work via background agent when complete

  • (b) Use Agent Stop hooks for more deterministic verification

  • (c) Use the ralph-wiggum plugin (created by Geoffrey Huntley)

In sandboxed environments, uses --permission-mode=dontAsk or --dangerously-skip-permissions to let Claude focus without permission prompts.
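For approach (b), a Stop hook in settings.json runs a command whenever Claude tries to finish; if the check fails, its feedback is surfaced so the work continues. A sketch, assuming a hypothetical ./scripts/verify.sh that exits non-zero on failure:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "./scripts/verify.sh" }
        ]
      }
    ]
  }
}
```

This makes "done" a property the checks define, not something the model asserts.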

13. The Most Important Tip: Provide Verification Feedback Loops

The single most important factor for great results with Claude Code: give Claude a way to verify its work. With this feedback loop in place, final output quality improves 2-3x.

He tests every change landing on claude.ai/code using the Claude Chrome Extension—opens a browser, tests the UI, and iterates until the code works and the UX feels right.

Verification methods vary by domain:

  • Could be as simple as running a bash command

  • Running test suites

  • Testing the app in a browser or phone simulator

Invest in building robust verification processes. It pays off.
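Whatever the domain, the harness has the same shape: run each check, count failures, and exit non-zero if anything broke, so the agent gets an unambiguous signal. A minimal sketch in shell, with placeholder commands standing in for real lint/test/build steps:

```shell
# Hypothetical check list; in a real repo these would be lint/test/build
# commands the agent can run after each change.
steps="true true false"   # placeholders: two passing checks, one failing
fails=0
for step in $steps; do
  if ! $step; then fails=$((fails + 1)); fi
done
echo "checks failed: $fails"   # prints "checks failed: 1" with these placeholders
```

A non-zero exit from a script like this is exactly the feedback loop the tip describes: Claude runs it, sees what failed, and iterates.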

My Takeaway

After reading all this, my conclusion is pretty simple:


Token power is king. 

Just throw Opus 4.5 at it and let the compute do the heavy lifting. Sometimes the answer really is "more tokens = better results." Who knew brute force could be so elegant?

https://x.com/bcherny/status/2007179832300581177