AI models won't always give you the answer you expect; sometimes they'll return invalid data or "hallucinate" a response that doesn't fit your schema. This lesson teaches you how to make your local AI script more robust by implementing essential error handling for user commands.
You'll learn how to validate the AI's output against a predefined list of valid commands and gracefully catch errors when the model generates an unexpected response, ensuring your script fails predictably and provides helpful feedback to the user.
Workflow demonstrated in this lesson:
- Define a list of validCommands (e.g., summarize, translate, review) to constrain the AI's choices.
- Wrap the generateObject call in a try...catch block to handle cases where the AI's output doesn't match the Zod schema.
- Add a type guard (isValidCommand) to check if the command returned by the AI is in your list of valid commands.

By adding these checks, you make your script significantly more reliable and user-friendly, preventing it from crashing on bad input and guiding the user toward correct usage.
[00:00] Now we're definitely going to want some examples that we can provide in here, so we'll toss in an example, and that means we're going to have to limit the types of commands that can be passed in. So we write a list of valid commands (for now, we'll have summarize, translate, and review) and toss these into an enum. To fix the type error this time, we'll just say as const, meaning that validCommands will always have this length, so it meets this criterion, and we can remove this. Now you might think that this would prevent generateObject from accepting bad commands, but if you look at the result, if we log out the object and exit and I just type bun index taco, you'll see that it still aggressively tries to pick something, essentially hallucinating an answer, because it's trying its best based on the word taco.
[00:52] So when using LLMs, if you have a limited set of options like valid commands, sometimes you have to take this out from here. We'll make this just a string again, and then I'm going to paste in another type guard, isValidCommand, and if this isn't a valid command, we'll go ahead and exit. So now if we run this, you might expect that passing in taco would hit that conditional, but we're actually going to get an invalid response error here, meaning that the response from our model did not match the schema of our command and our file path. So now we can come up here, catch this error, and customize it to something like "model generated an invalid response for object prompt". If we run this again, bun index taco, you'll see "model generated an invalid response".
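A minimal sketch of the type guard and try/catch pattern just described. The real call in the lesson is `generateObject` from the AI SDK; here it's stubbed behind a `runModel` parameter so the sketch is self-contained, and the `classify` helper name is illustrative.

```typescript
const validCommands = ["summarize", "translate", "review"] as const;
type Command = (typeof validCommands)[number];

// Type guard: narrows an arbitrary string to the Command union.
function isValidCommand(command: string): command is Command {
  return (validCommands as readonly string[]).includes(command);
}

async function classify(
  runModel: (input: string) => Promise<{ command: string; filePath: string }>,
  input: string,
): Promise<Command> {
  let result: { command: string; filePath: string };
  try {
    // In the lesson this is: await generateObject({ model, schema, prompt })
    result = await runModel(input);
  } catch {
    // generateObject throws when the model's output doesn't match the schema.
    throw new Error("Model generated an invalid response for object prompt");
  }
  if (!isValidCommand(result.command)) {
    console.error("Invalid command.");
    process.exit(1);
  }
  return result.command;
}
```

The two failure modes are deliberately separate: the catch handles the model producing output that doesn't fit the schema at all, while the guard handles a well-formed response whose command isn't one you support.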
[01:41] We'll run this again with "delete the package", hit enter, and now we run into our invalid command. To make that message a bit stronger, let's say "valid commands are" and then join those together. Run this again: invalid command, valid commands are summarize, translate, and review.
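The strengthened message above can be built by joining the same array that drives the schema, so the error text never drifts out of sync with the actual command list (exact wording here is illustrative):

```typescript
const validCommands = ["summarize", "translate", "review"] as const;

// Derive the message from the single source of truth for commands.
const message = `Invalid command. Valid commands are: ${validCommands.join(", ")}`;
// → "Invalid command. Valid commands are: summarize, translate, review"
```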