Block Prompts with Hook Guardrails

John Lindquist

Blocking prompts with hooks creates essential guardrails. Validate inputs, enforce conventions, and stop inappropriate requests by exiting with the right code and clear feedback.

The blocking rules

A UserPromptSubmit hook must do two things to reject a prompt:

  1. Exit with code 2 - exactly 2, not 0 or 1
  2. Write the reason to stderr - that text is the message the user sees
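The whole contract fits in two lines. A minimal sketch (the exit-code semantics here follow Claude Code's documented hook behavior):

// exit 0 (or just letting the script finish) allows the prompt;
// any other non-2 exit code reports an error without blocking
console.error("Reason shown to the user")  // requirement 2: stderr
process.exit(2)                            // requirement 1: exit code 2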

Block forbidden content

Stop prompts with banned keywords:

import { type UserPromptSubmitHookInput } from "@anthropic-ai/claude-code"

// Claude Code pipes the hook payload in as JSON on stdin
const input = await Bun.stdin.json() as UserPromptSubmitHookInput

if (input.prompt.includes("Bruno")) {
  console.error("We don't talk about Bruno!")
  process.exit(2)
}
// No exit = prompt continues
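The script only runs once it's registered as a UserPromptSubmit hook. Assuming it lives at .claude/hooks/block-prompts.ts (an illustrative path), the entry in .claude/settings.json looks like this:

{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "bun run .claude/hooks/block-prompts.ts" }
        ]
      }
    ]
  }
}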

Test it:

claude
How's Bruno doing?
# Error: "We don't talk about Bruno!"

Context-aware rules

Use working directory for location-based blocking:

import { type UserPromptSubmitHookInput } from "@anthropic-ai/claude-code"

const input = await Bun.stdin.json() as UserPromptSubmitHookInput

// Block prompts in production directories
if (input.cwd.includes("/production")) {
  console.error("⚠️ Claude Code disabled in production directories")
  process.exit(2)
}

// Enforce tool preferences
if (input.prompt.match(/\bnpm install\b/)) {
  console.error("Use 'pnpm install' instead (team convention)")
  process.exit(2)
}

Real-world patterns

  • Secret protection: Block prompts containing API keys (see the sketch after this list)
  • Environment safety: Disable in prod/staging folders
  • Team standards: Enforce pnpm/yarn over npm
  • Cost control: Block overly broad requests
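Here's a hedged sketch of the secret-protection pattern. The regexes are illustrative, not exhaustive - real secret scanners match far more token formats:

import { type UserPromptSubmitHookInput } from "@anthropic-ai/claude-code"

const input = await Bun.stdin.json() as UserPromptSubmitHookInput

// Illustrative patterns for common key formats (extend for your own stack)
const secretPatterns = [
  /sk-[A-Za-z0-9]{20,}/,   // OpenAI-style keys
  /AKIA[0-9A-Z]{16}/,      // AWS access key IDs
  /ghp_[A-Za-z0-9]{36}/,   // GitHub personal access tokens
]

if (secretPatterns.some((pattern) => pattern.test(input.prompt))) {
  console.error("Prompt appears to contain a secret - remove it and try again")
  process.exit(2)
}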

Try it

Prompts that should now be blocked:

How's Bruno doing?
npm install react
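You can also exercise the hook script directly, without launching claude, by piping a fake payload to stdin (the JSON fields mirror the real hook input; the path is the illustrative one from above):

echo '{"prompt": "How is Bruno doing?", "cwd": "/tmp"}' | bun .claude/hooks/block-prompts.ts
# We don't talk about Bruno!

echo $?
# 2 - the blocking exit code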