Getting predictable, structured data from an AI can feel like a game of chance. When you need a specific format like JSON to drive an automation pipeline, you can't afford guesswork. This lesson demonstrates a powerful workflow to enforce structured output from AI models using JSON schemas, and then process that data with full type-safety in a local script.
You'll learn how to take complete control over both the AI's input and its output, enabling complex and reliable automation.
This lesson walks through a complete end-to-end process:
1. Define the structure: create a tasks.schema.json file that defines the exact structure for a list of tasks.
2. Inline the schema in your prompt inside <schema> tags. This instructs the AI to generate its response according to that specific structure.
3. Pipe the AI's JSON output into a local script over stdin.
4. Use json-schema-to-zod and zod to automatically generate TypeScript types from your JSON schema. This allows you to validate and process the data with full type-safety, catching errors and leveraging editor autocompletion.

This approach transforms the AI from a simple text generator into a reliable component in your development toolchain.
Generating the initial schema for tasks:
Create a schemas directory and generate a schema for tasks, which is simply an array where each task has a name and a description.
Installing dependencies and generating TypeScript types from the schema:
Please install a library which can convert a JSON schema into Zod types. We'll probably have to install Zod as well. Add that conversion logic to my package.json so that I can update it whenever the schema changes. And then in my index.ts, import the types and type my tasks so that my tasks are typed to the schema properly.
Running a Gomplate template and piping the result to Claude Code to get a JSON array of tasks:
gomplate -f prompt.txt | claude -p
The full pipeline: prompt generation, AI processing, and piping the final JSON into a Bun script.
gomplate -f prompt.txt | claude -p | bun run index.ts
Installing the necessary dependencies using Bun:
bun install json-schema-to-zod zod
Running the script in package.json to generate Zod types from the JSON schema:
bun run generate-types
The core prompt file (prompt.txt) that uses Gomplate to include project context (repomix) and a specific JSON schema.
{{ repomix }}
<schema>
{{ file.Read "schemas/tasks.schema.json" }}
</schema>
Create a list of tasks to improve the code following the <schema>.
Output _only_ the json, nothing else. Don't even output the json markdown codefence. JUST THE JSON!!!
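Even with that all-caps instruction, a model will occasionally wrap its answer in a markdown fence anyway. If you want the downstream script to survive that, one option is a small defensive strip before parsing. A minimal sketch (the fence-stripping regex is my own addition, not part of the lesson):

// Read raw stdin, remove an optional ```json ... ``` fence, then parse.
const raw = (await Bun.stdin.text()).trim();
const unfenced = raw.replace(/^```(?:json)?\s*/, "").replace(/\s*```$/, "");
const tasksInput = JSON.parse(unfenced);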
A simple Gomplate configuration (.gomplate.yaml) to ignore the schemas directory when gathering repository context, preventing the schema from being included twice.
plugins:
  repomix:
    cmd: /Users/johnlindquist/.npm-global/bin/repomix
    args: ["--stdout", "--include", "**/*.json,**/*.ts", "--no-file-summary", "--ignore", "schemas"]
The final index.ts script, which reads from stdin, validates the input against an auto-generated Zod schema, and processes the now type-safe data.
import { tasksSchema, type Tasks } from "./schemas/tasks.zod";

// Bun.stdin.json() reads all of stdin and parses it as JSON.
const tasksInput = await Bun.stdin.json();

// parse() throws if the incoming data doesn't match the schema.
const tasks: Tasks = tasksSchema.parse(tasksInput);

console.log(tasks.map((task) => task.name));
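If the model's output drifts from the schema, parse() throws with a Zod error. A variant using safeParse to fail with a readable message instead (a sketch, using the same generated schema as above):

import { tasksSchema, type Tasks } from "./schemas/tasks.zod";

const result = tasksSchema.safeParse(await Bun.stdin.json());
if (!result.success) {
  // Log the validation issues and exit non-zero so the shell pipeline fails loudly.
  console.error("AI output did not match the schema:", result.error.issues);
  process.exit(1);
}
const tasks: Tasks = result.data;
console.log(tasks.map((task) => task.name));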
The generated tasks.schema.json file.
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Tasks",
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "The name of the task"
      },
      "description": {
        "type": "string",
        "description": "A detailed description of the task"
      }
    },
    "required": ["name", "description"],
    "additionalProperties": false
  }
}
The generated schemas/tasks.zod.ts file, creating a Zod schema and inferring a TypeScript type from it.
import { z } from "zod";
export const tasksSchema = z.array(z.object({ "name": z.string(), "description": z.string() }));
export type Tasks = z.infer<typeof tasksSchema>;
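Because Tasks is inferred from the schema, any helper that consumes the data gets autocompletion and compile-time checks. A small illustration (formatTask is a hypothetical helper, not part of the lesson):

import type { Tasks } from "./schemas/tasks.zod";

// task.name and task.description are known strings; a typo like task.title fails to compile.
function formatTask(task: Tasks[number]): string {
  return `${task.name}: ${task.description}`;
}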
The scripts section of package.json with a command to regenerate the Zod types.
{
  "scripts": {
    "generate-types": "json-schema-to-zod -s schemas/tasks.schema.json -t schemas/tasks.zod.ts"
  }
}
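If you'd rather regenerate the types from a script than from the CLI, json-schema-to-zod also exposes a programmatic entry point. A rough sketch, assuming a jsonSchemaToZod(schema, options) export; verify the option names against the version you install:

import { jsonSchemaToZod } from "json-schema-to-zod";

const schema = await Bun.file("schemas/tasks.schema.json").json();
// Emit an ESM module exporting tasksSchema (option names are an assumption; see the library's README).
const code = jsonSchemaToZod(schema, { name: "tasksSchema", module: "esm" });
await Bun.write("schemas/tasks.zod.ts", code);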
[00:00] Now let's ask Claude to create a schemas directory and generate a schema for tasks, which is simply an array where each task has a name and a description. Then we'll let this run. I'll hit shift-tab right away to turn on accept-edits mode. So now we have a nice schema which we can use to define objects to be output from Claude. So now I'll inline this schema; I'm just going to wrap it with a schema tag.
[00:25] So that's file.Read on the schemas file, and the text after this (I'll turn on word wrap) is just: Create a list of tasks to improve the code following the schema, and output only the JSON. Nothing else. Don't even output the markdown code fence. Because if you only say output JSON, it still tries to respond with markdown. You have to be explicit about that.
[00:45] So now I'll close out of this Claude session and clear this out. I'll just show that this is working: this is now our entire prompt, with the initial prompt files, then the files that we included through our plugin, and then the schema. Well, it looks like it's actually including the schema twice, because we have the schema down here and it's also included up above. So I'll go into my .gomplate.yaml, grab this again, and say ignore; then we'll ignore our schemas directory. This way, anything inside of our schemas directory should be ignored. If I run this again, we still have the schema, but it shouldn't be in this list of files here.
[01:30] Okay, perfect. So now if we take this, run it, and pass it over to Claude with our -p, we should see an array of objects that are tasks with names and descriptions following the schema. So here we go: it's an array of objects, all with names and descriptions. So what we can do now, with an actual object this time, is use our index.ts; Bun has a way of reading from standard in, parsing the JSON, and turning it into an actual JavaScript object.
[02:03] So if I just call this tasks, for right now I'll just console.log over the tasks; we'll map and just grab their names. We'll leave this as any for now and type it in a bit. So now if we take our previous run and pipe that into bun run index.ts, this array of tasks will be passed into our script, and we'll have an actual object to work with to build up sequences of tasks, orchestrations, communication with APIs, and lots of other scenarios. So for right now, it just console.logs
[02:35] an array of all the names of the tasks. And if we want, let's just go back into Claude. We should have the index in context. I'm going to bring in the schema, bring in the index, bring in the package.json, and say something like: please install a library which can convert a JSON schema into Zod types.
[02:52] We'll probably have to install Zod as well. Add that conversion logic to my package.json so that I can update it whenever the schema changes. And then in my index.ts, import the types and type my tasks so that my tasks are typed to the schema properly. Then we'll just let Claude Code do its thing. It'll prompt to install a library.
[03:15] We're not using pnpm, we're using bun. Please always install using bun. So we'll allow this. We'll allow this one to generate the types. Looks like it errored out, so we can go ahead and fix it.
[03:32] So it wrapped everything up and got everything sorted out. Now our tasks are properly typed, with names and descriptions as strings, and this schema is auto-generated, so this won't change any behavior. But now, if we're hand-coding against this or asking Claude Code to do anything, it's going to understand much better, just from this context, what code it needs to write and what code is appropriate, and it will keep our code strictly and safely typed. All right, and again, this array is coming out of this script being run, where it's just console logging the task names, and that's all built up from this initial prompt where we have tone and steps. We can probably get rid of steps now, since we have a schema in place, we have the files we want, and we have a nice list of tasks that we can review and build on top of. And we have complete control over the context and complete control over the output of the AI: what APIs we want to hit, what files we want to write. So we can take steps from there, and even go back to Claude Code and have it continue after we have the tasks laid out for it.
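As one example of taking those next steps, the typed task list could drive further Claude runs directly. A rough sketch (the per-task prompt wording is my own; Bun.spawn invokes the same claude -p CLI used throughout the lesson):

import { tasksSchema } from "./schemas/tasks.zod";

const tasks = tasksSchema.parse(await Bun.stdin.json());

// Run tasks one at a time so each Claude session stays focused on a single change.
for (const task of tasks) {
  const proc = Bun.spawn(
    ["claude", "-p", `Complete this task: ${task.name}. ${task.description}`],
    { stdout: "pipe" },
  );
  console.log(await new Response(proc.stdout).text());
}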