Injecting live data through hooks eliminates copy-paste workflows. Create custom functions that fetch real-time info and rewrite prompts with fresh context before Claude sees them.
Without hooks, getting live data into a prompt means manual copy-paste steps every time. With a UserPromptSubmit hook, you can instead build a load() function that fetches on demand:
```ts
import { type UserPromptSubmitHookInput } from "@anthropic-ai/claude-code"

const input = await Bun.stdin.json() as UserPromptSubmitHookInput

// Match the load(username) pattern ([^)]+ stops at the closing paren)
const match = input.prompt.match(/load\(([^)]+)\)/)

if (match) {
  const username = match[1]
  const response = await fetch(`https://api.github.com/users/${username}`)
  const data = await response.json()

  // Replace load(...) with the fetched data before Claude sees the prompt
  console.log(
    input.prompt.replace(
      `load(${username})`,
      JSON.stringify(data, null, 2)
    )
  )
}
```
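For the script above to run, it has to be registered as a UserPromptSubmit hook in your settings. A minimal sketch, assuming the script lives at `.claude/hooks/load-data.ts` (the path and filename are assumptions; adjust to your project layout):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bun run .claude/hooks/load-data.ts"
          }
        ]
      }
    ]
  }
}
```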
Type prompts with your custom function:
How many repositories does John have?
load(johnlindquist)
Claude receives:
How many repositories does John have?
{
"login": "johnlindquist",
"public_repos": 247,
...
}
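The rewrite step itself is plain string surgery. Here is a self-contained sketch of that step as a pure function; the `rewritePrompt` helper and the inline fake fetcher are hypothetical, standing in for the real GitHub API call so the logic is easy to test in isolation:

```typescript
// Hypothetical helper: swap every load(...) call for whatever a fetcher returns
function rewritePrompt(
  prompt: string,
  fetchUser: (username: string) => Record<string, unknown>
): string {
  return prompt.replace(/load\(([^)]+)\)/g, (_call, username) =>
    JSON.stringify(fetchUser(username), null, 2)
  )
}

// Fake fetcher standing in for the GitHub API request
const fakeFetch = (login: string) => ({ login, public_repos: 8 })

const rewritten = rewritePrompt(
  "How many repositories does John have?\nload(johnlindquist)",
  fakeFetch
)
```

After the call, `rewritten` contains the pretty-printed JSON where `load(johnlindquist)` used to be, which is exactly what the hook writes to stdout.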
- weather(city) - Inject current conditions
- stock(AAPL) - Latest price data
- db(query) - Internal database results
- jira(PROJ-123) - Issue details

Prompts:
How many repos does Octocat have?
load(octocat)
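All of those pseudo-functions can share one dispatch table keyed by function name. A sketch under the assumption that handlers are synchronous for brevity (real `weather` or `stock` handlers would fetch over the network, and the handler bodies here are made up for illustration):

```typescript
// Hypothetical handlers keyed by pseudo-function name (real ones would fetch)
const handlers: Record<string, (arg: string) => string> = {
  weather: (city) => JSON.stringify({ city, conditions: "clear" }),
  stock: (symbol) => JSON.stringify({ symbol, price: 123.45 }),
}

// Rewrite every name(arg) call in the prompt that has a registered handler,
// leaving unrecognized calls untouched
function expand(prompt: string): string {
  return prompt.replace(/(\w+)\(([^)]+)\)/g, (call, name, arg) =>
    name in handlers ? handlers[name](arg) : call
  )
}

const out = expand("Forecast? weather(Paris) and stock(AAPL), but keep load(x)")
```

Unmatched calls like `load(x)` pass through unchanged, so multiple hooks can each own their own pseudo-functions.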
[00:00] We can get into more dynamic prompt rewriting by adding logic that matches input.prompt against a load() pseudo-function. If we look at this code that Cursor is suggesting (mostly because I just wrote this code, deleted it, and started recording), we check whether the regex matches, which is load with parentheses surrounding any text, treat the matched text as a username, and fetch the result as JSON. Then we take the prompt itself and rewrite the load() section of the prompt as the JSON data result. So if I fire up Claude now, I'll say "How many repositories does John have?", then type load(johnlindquist) for my GitHub username and hit enter. Behind the scenes this hook invoked the fetch, got the data back, and dynamically populated that information into the prompt.
[00:59] If I pick someone else, I'll just replace John with Octocat in both places. Hit enter. You'll see Octocat has eight public repos. So you get complete flexibility to build these functions with custom logic that invokes whatever you want. And this doesn't take up any context.
[01:19] It doesn't take up any MCP definitions or any command definitions. It simply acts after a user submits some text: you parse the text behind the scenes and return a new, rewritten prompt.