# Claude Code Custom Commands: Build Your Own CLI Workflow


Most developers use Claude Code for one-off tasks — “fix this bug,” “explain this function.” I did too, for months. Then I discovered custom commands, and my workflow changed completely.

Custom commands are reusable prompts saved as markdown files. Type /deploy and Claude runs your deployment checklist. Type /pr-review 42 and it pulls the diff, reads the context, and writes a review. I ended up building an entire multi-blog content editor this way — but let’s start smaller.

## TL;DR

| Feature | What It Does |
| --- | --- |
| Project commands | `<project>/.claude/commands/*.md` — scoped to a repo |
| Global commands | `~/.claude/commands/*.md` — available everywhere |
| Arguments | `$ARGUMENTS` placeholder in the markdown |
| Auto-detect | Commands can read config files to adapt to context |
| No code needed | Just markdown files with instructions |

## How Custom Commands Work

A custom command is a markdown file that becomes a slash command. Put a file at `.claude/commands/deploy.md`, and you get /deploy in your project.

No plugin API, no build step. The markdown content becomes Claude’s instructions when you invoke the command. (See the official docs for the full spec.)
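The whole lifecycle fits in a few shell commands. A minimal sketch, where the `hello` command name and its contents are invented for illustration:

```shell
# Work in a throwaway directory so nothing real is touched
cd "$(mktemp -d)"

# Project-scoped commands live under .claude/commands/
mkdir -p .claude/commands

# The filename (minus .md) becomes the slash command -- this file creates /hello
cat > .claude/commands/hello.md <<'EOF'
# Hello

## Instructions
1. Greet the user and list the top-level files in this project.
EOF

# No build step, no registration: the file on disk is the command
ls .claude/commands
```

Open Claude Code in that directory and /hello shows up alongside the built-in commands.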

Two scopes:

- **Project commands**: `<your-repo>/.claude/commands/` — shared with your team via git
- **Global commands**: `~/.claude/commands/` — your personal toolkit, available in any project

## Your First Command: A Changelog Generator

Create `.claude/commands/changelog.md`:

```markdown
# Changelog Generator

## Instructions

1. Run `git log --oneline -20` to see recent commits
2. Group commits by type (feat, fix, refactor, docs)
3. Write a changelog entry for the current date
4. Append to CHANGELOG.md (create if missing)
```

Type /changelog and it reads the git history, groups by category, and writes a formatted changelog. I used to do this by hand before every release — now it’s one command.

## Adding Arguments with `$ARGUMENTS`

Commands get interesting when they accept input. Use `$ARGUMENTS` as a placeholder:

Create `.claude/commands/pr-review.md`:

```markdown
# PR Review

Review pull request $ARGUMENTS for this project.

## Instructions

1. Run `gh pr diff $ARGUMENTS` to get the diff
2. Read changed files for full context
3. Check for: bugs, security issues, performance,
   missing tests, unclear naming
4. Output a review with specific line references
5. Suggest improvements as code snippets
```

Usage: `/pr-review 42`

Claude pulls the diff, reads the surrounding code, and outputs a structured review with line references. Not a replacement for human review, but it catches things I’d miss on a Friday afternoon.

## 5 Commands You Can Build Today

### `/test-gen <file>` — Generate Tests

```markdown
# Test Generator

Generate tests for $ARGUMENTS.

## Instructions
1. Read the source file $ARGUMENTS
2. Identify public functions/methods and edge cases
3. Generate tests using the project's existing test framework
4. Run the tests and fix any failures
```

### `/migrate <description>` — Database Migration

```markdown
# Migration Generator

Create a database migration for: $ARGUMENTS

## Instructions
1. Read the existing migrations directory for naming conventions
2. Read the relevant model/schema files
3. Generate a migration file with up and down operations
4. Verify the SQL syntax is correct for the project's database
```

I initially tried writing a more detailed version with specific ORM instructions, but it actually performed worse — Claude picks up the conventions from existing migration files on its own.

### `/dep-check` — Dependency Audit

```markdown
# Dependency Check

## Instructions
1. Read package.json (or requirements.txt, go.mod, etc.)
2. Run the appropriate audit command (npm audit, pip-audit)
3. For each vulnerability, explain the risk and suggest a fix
4. Check for outdated major versions worth upgrading
```

### `/doc <function>` — API Documentation

```markdown
# Document Function

Generate documentation for $ARGUMENTS.

## Instructions
1. Find $ARGUMENTS in the codebase
2. Read the implementation and any existing comments
3. Write JSDoc/docstring with: description, params,
   return value, example usage, edge cases
4. Add the documentation directly to the source file
```

### `/onboard` — Project Onboarding

```markdown
# Onboard

## Instructions
1. Read README.md, package.json, and directory structure
2. Identify the framework, language, and key dependencies
3. Explain: how to run the project, how to run tests,
   where the main entry points are, and the architecture
4. Keep it under 300 words
```

## Design Patterns (and Mistakes I Made)

### Auto-detect context

My first version of the blog editor required `--blog ai-tech-blog` every time. Obviously nobody wants to type that. Now the command reads a config file and matches the current working directory:

```markdown
1. Read config.json and match the current working directory
2. If no match, ask the user which project
```

Same command, different behavior per project.
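In practice Claude does this matching itself when it reads the file; the shell sketch below just makes the logic concrete, assuming a flat name-to-path layout in config.json (the names and paths are made up):

```shell
cd "$(mktemp -d)"

# Hypothetical config.json: project name -> project path
cat > config.json <<'EOF'
{
  "ai-tech-blog": "/home/me/blogs/ai-tech-blog",
  "dev-notes": "/home/me/blogs/dev-notes"
}
EOF

cwd="/home/me/blogs/ai-tech-blog/drafts"   # stand-in for the real $PWD

# Step 1: find the entry whose path is a prefix of the working directory
grep -o '"[^"]*": "[^"]*"' config.json | sed 's/"//g' |
while IFS=': ' read -r name path; do
  case "$cwd" in
    "$path"*) echo "$name" > detected.txt ;;
  esac
done

# Step 2: fall back to asking the user when nothing matches
if [ -s detected.txt ]; then cat detected.txt; else echo "no match: ask the user"; fi
```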

### Always report what happened

Early on I had a command that silently modified files. I’d run it, wait, see “done”, and then wonder what actually changed. Now every command ends with:

```markdown
## Output
- Files created/modified
- Warnings or issues found
- Suggested next steps
```

### Chain commands into workflows

Individual commands are fine. A pipeline is better. I built a blog content flow that looks like this:

```text
/draft my-article       → generates draft with screenshot placeholders
/screenshots my-article → converts images to WebP, embeds in article
/seo-check my-article   → validates title, headings, word count, links
/publish my-article     → flips draft:false, runs build, verifies
```

The key is that each command leaves breadcrumbs for the next. /draft outputs `<!-- SCREENSHOT: ... -->` placeholders, and /screenshots knows how to find and replace them.
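The handoff is easy to simulate. A sketch using that placeholder format, with an invented article and image path:

```shell
cd "$(mktemp -d)"

# What /draft leaves behind: one placeholder comment per missing image
cat > my-article.md <<'EOF'
# My Article

Intro paragraph.

<!-- SCREENSHOT: terminal showing the build output -->
EOF

# What /screenshots does conceptually: swap each placeholder for a real
# image reference (the WebP path here is invented)
sed -i 's|<!-- SCREENSHOT: \(.*\) -->|![\1](images/build-output.webp)|' my-article.md

cat my-article.md
```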

### Use external scripts for heavy lifting

Commands are instructions — not runtime code. Image processing in a markdown prompt doesn’t work well (I tried). Write a real script and call it:

```markdown
Run: `npx tsx ~/scripts/process-images.ts $ARGUMENTS`
```

Claude handles orchestration, the script handles computation.

### Fail gracefully

This one I learned the hard way. A /publish command that flipped draft: false and then failed the build left an article half-published. Now:

```markdown
If the build fails, revert changes and report the error.
```
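A sketch of that revert logic in shell, with a stub build that always fails (`article.md` and its front-matter flag are stand-ins):

```shell
cd "$(mktemp -d)"

# Minimal front matter with the flag /publish flips
printf 'draft: true\n' > article.md

# Flip the flag, then run the build -- stubbed here to always fail
sed -i 's/draft: true/draft: false/' article.md
if ! false; then   # `false` stands in for the real build command
  # Build failed: put the flag back so nothing is left half-published
  sed -i 's/draft: false/draft: true/' article.md
  echo "build failed: reverted draft flag"
fi

cat article.md
```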

## Project Commands vs Global Commands

**Project commands** (`.claude/commands/`) — team-shared workflows, project-specific tasks, onboarding. These go in git.

**Global commands** (`~/.claude/commands/`) — personal shortcuts, cross-project tools. These stay on your machine.

What worked for me: start global, promote to project when the team asks for it.
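Promotion is nothing more than copying the file into the repo so git picks it up. A sketch with stand-in paths:

```shell
cd "$(mktemp -d)"

# Stand-ins for $HOME and a project checkout
mkdir -p home/.claude/commands repo/.claude/commands
printf '# Changelog Generator\n' > home/.claude/commands/changelog.md

# The file format is identical in both scopes, so promotion is a plain copy
cp home/.claude/commands/changelog.md repo/.claude/commands/

ls repo/.claude/commands
```

After that, `git add .claude/commands/changelog.md` in the real repo shares it with the team.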

## Limitations

- **Vague instructions → vague results.** Claude interprets the markdown literally. “Handle errors” does nothing useful. “If the build fails, revert draft to true and print the error” does.
- **No state between commands.** Each invocation starts fresh. If commands need to share data, use files.
- **Token cost scales with reads.** A command that reads 20 files costs more. Keep the scope focused. Prompting Claude efficiently also matters — see our prompt engineering guide for techniques that reduce token waste.
- **CLI only.** No GUI, no dashboard. If you need that, you’re building a different kind of tool.
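The file-based workaround for statelessness can be this simple (the state filename and slug are invented):

```shell
cd "$(mktemp -d)"

# One command's instructions can end with:
# "write the article slug to .last-draft"
printf 'my-article\n' > .last-draft

# The next command (a fresh invocation, no shared memory) can start with:
# "read .last-draft to find which article to operate on"
cat .last-draft
```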

## Getting Started

1. Create `~/.claude/commands/` or `<project>/.claude/commands/`
2. Add a markdown file — the filename becomes the command name
3. Write instructions with `$ARGUMENTS` for user input
4. Type `/<command-name>` in Claude Code

Start with whatever task you do most often that takes more than 2 minutes. That’s your first command.

For more on Claude Code itself, see the AI code editor comparison.

## FAQ

### How many custom commands can I create?

No hard limit. Each command is a markdown file. I have 10+ across multiple projects.

### Can I share commands with my team?

Project commands live in `.claude/commands/` and are tracked by git. Push and they’re available — no installation.

### Do commands work in VS Code and JetBrains?

Yes. The same `.claude/commands/` directory is read by the CLI, the VS Code extension, and the JetBrains extension.

### Custom commands vs MCP servers — when to use which?

Custom commands are prompt templates — natural language instructions, no code, instant to create. Good for workflows.

MCP servers are programmatic integrations — they give Claude access to external APIs, databases, or services via the Model Context Protocol. More powerful, but you need to write and run a server.

Rule of thumb: if Claude can already do it (read files, run shell commands, use git), make a custom command. If Claude needs access to something new (Slack, a database, a third-party API), build an MCP server.

### How do I debug a command that isn’t working?

1. **Read the output** — Claude shows its reasoning. If it misunderstood, reword.
2. **Be more specific** — “Read the config file” fails when there are three config files. Name the exact path.
3. **Test in pieces** — If a 5-step command breaks at step 3, test step 3 alone first.

### Can commands call external APIs?

Yes, through `curl`, `gh`, or any CLI tool available in the shell. A command can run `gh api repos/owner/repo/issues` to fetch GitHub issues. For heavier API integration, an MCP server is a better fit.

### What’s the difference between custom commands and CLAUDE.md?

`CLAUDE.md` is always-on context — rules that apply to every interaction. Custom commands are on-demand — they run only when invoked. Use `CLAUDE.md` for “always do X”, commands for “do X when I ask.” For the best ways to use Claude in day-to-day productivity workflows, see our AI productivity tools roundup.