Building Claude skills in plain English is intuitive and lets the AI agent use sub-agents to figure out complex tasks. However, this approach can be slow, less reliable, and costly, since the agent has to reason through the steps on every run.
This lesson demonstrates a powerful "self-improving" workflow. Start by defining a skill with natural language instructions. Run the skill once, allowing the agent to discover the necessary CLI commands. Then, instruct the agent to analyze its own execution history, generate optimized scripts based on those commands, and refactor the original skill to use these new, purpose-built scripts.
Workflow demonstrated in this lesson:
- Initial Skill Creation: Create a `good-morning` skill with plain-English steps that instruct the AI to use sub-agents for parallel tasks.
- Test Run & Observation: Run the skill to see how the agent executes the tasks and which specific commands it uses.
- Script Generation: Prompt the agent to analyze its own actions and create dedicated TypeScript scripts to replicate the discovered commands.
- Skill Refactoring: Watch as the agent refactors the original `SKILL.md` file, replacing the abstract sub-agent instructions with a direct call to a single, efficient script.
- Final Execution: Run the new, improved skill to see a much faster and more consistent result.
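To make the script-generation step concrete, here is a minimal sketch of what a generated script could look like. The task names and shell commands below are invented for illustration, not taken from the lesson; the point is that each sub-agent's open-ended reasoning collapses into one fixed command, and the commands run concurrently:

```typescript
// Hypothetical generated script: replaces the skill's sub-agents
// with fixed commands run in parallel. Task names and commands
// are illustrative assumptions, not the lesson's actual output.
import { exec } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(exec);

// Each entry stands in for one sub-agent's discovered command.
const tasks: Record<string, string> = {
  greeting: "echo Good morning",
  date: "date",
};

export async function goodMorning(): Promise<string> {
  // Run every command concurrently, then join the labeled results.
  const entries = await Promise.all(
    Object.entries(tasks).map(async ([name, cmd]) => {
      const { stdout } = await run(cmd);
      return `${name}: ${stdout.trim()}`;
    }),
  );
  return entries.join("\n");
}
```

Because the command list is static, every run produces the same shape of output, which is exactly the consistency gain the refactored skill relies on.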
Key benefits:
- Faster Execution: Running a single, pre-made script is much faster than spawning and coordinating sub-agents.
- Increased Reliability: Scripts provide consistent, predictable results every time, removing the variability of AI reasoning.
- Lower Cost: Reduces the number of tokens required for the agent to reason about the task on each run.
- Reusable Components: The generated scripts can be used outside of the skill or in other automations.
Summary
This lesson showcases a powerful workflow for evolving Claude skills from flexible but slow natural language instructions into highly efficient, scripted automations. The process begins by creating a skill using plain English to define a multi-step task, allowing the AI to use sub-agents to figure out the necessary commands. After a test run, the AI is instructed to analyze its own execution history, create optimized TypeScript scripts based on the commands it ran, and then refactor the original skill. The final skill replaces the dynamic sub-agent logic with a direct call to a single, purpose-built script, resulting in faster execution, greater reliability, and lower operational costs.
Code Snippets
The initial version of the SKILL.md file uses plain English and sub-agents to perform its task.
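As a sketch of that initial version (the lesson's actual file contents aren't reproduced here, so the instructions below are hypothetical), a plain-English `SKILL.md` might read:

```markdown
---
name: good-morning
description: Compile a morning briefing from several sources
---

# Good Morning

1. Spawn a sub-agent to fetch today's date and weather.
2. Spawn a sub-agent to list today's calendar events.
3. Run the sub-agents in parallel, then combine their findings
   into a single briefing.
```

The frontmatter fields follow the standard skill format; everything after them is free-form guidance the agent must reason through on each run.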
After the self-improvement pass, the skill is refactored to use a dedicated script, making it much more efficient and reliable.
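A hedged sketch of the refactored version (the script path and wording are assumptions, not the lesson's literal file): the free-form sub-agent steps are replaced by one direct script invocation.

```markdown
---
name: good-morning
description: Compile a morning briefing from several sources
---

# Good Morning

Run the pre-built script and report its output:

    npx tsx scripts/good-morning.ts
```

With the reasoning baked into the script, the agent's only job is to execute one command and relay the result.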
