AI Dev Essentials #17: Claude Sub-Agents, Weekly Limits, and Copilot Multi-Tab RAG

John Lindquist
Instructor

AI Dev Essentials - Issue #17

Hey Everyone 👋,

John Lindquist here with the 17th issue of AI Dev Essentials!

My first Claude Code Power User Workshop sold out in less than seven days—thanks to everyone for their support! I've already posted the next workshop for Friday, August 8th at 12pm UTC (1pm UK / 2pm Europe), and it's selling out just as quickly, so make sure to grab your spot now.

All my time has been spent preparing for these Claude Code workshops, especially with the release of new features like custom subagents. These have fundamentally changed how I work with Claude Code. I'll be making egghead videos about these topics as well, diving deeper into specialized scenarios and use cases to share with everyone.

Now, onto the news!


🚀 Major Announcements

Claude Code Custom Sub-agents Launch

Anthropic released custom sub-agents for Claude Code on July 24, 2025, enabling developers to create specialized AI teams that handle specific tasks autonomously.

Key capabilities:

  • Custom agent creation: Design agents for code review, architecture, testing, and more
  • Automatic invocation: Claude Code intelligently calls relevant agents based on context
  • Template ecosystem: Community-shared agents via GitHub repositories
  • Project-level customization: Store agents in .claude/agents/ or globally in ~/.claude/agents/
  • Refresh requirement: Must refresh Claude Code after creating new agents

Popular agent types emerging:

  • Code refactorer: Automated refactoring with best practices
  • Security auditor: Vulnerability scanning and security recommendations
  • Frontend designer: UI/UX improvements and component design
  • PRD writer: Product requirement documentation from ideas

(Anthropic Documentation, Anthropic Blog)
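For context, sub-agents are defined as Markdown files with YAML frontmatter in those directories. Here's a rough sketch of what a project-level reviewer agent could look like (the file name, tool list, and prompt are my own illustration, not Anthropic's example):

```markdown
<!-- .claude/agents/code-reviewer.md (hypothetical example) -->
---
name: code-reviewer
description: Reviews recent code changes for bugs, style issues, and
  security problems. Use proactively after significant edits.
tools: Read, Grep, Glob
---

You are a senior code reviewer. When invoked, inspect the recent
changes, flag bugs and security issues, and suggest concrete fixes.
Keep feedback specific and actionable.
```

The frontmatter tells Claude Code when to route work to the agent, and the body becomes the agent's system prompt.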

Before sub-agents, I was building aliases with Claude Code and chaining them together through AI pipelines. While that's still a valuable technique, sub-agents have vastly simplified setting up these pipelines and handing tasks off to specialized child agents. It's definitely one of my new favorite features. I especially love how easy it is to create an agent using their custom slash commands—just dictate a few sentences about what you want it to do, and it generates the agent itself. Really looking forward to diving deeper into this.

Anthropic Announces Weekly Rate Limits for Claude Pro/Max

Anthropic announced on July 28, 2025, that new weekly rate limits will be implemented starting August 28, 2025, affecting less than 5% of Claude Pro and Max subscribers based on current usage patterns.

Details:

  • Context: Claude Code has seen "unprecedented demand," especially on Max plans
  • Edge cases: One user consumed "tens of thousands" of dollars' worth of model usage on a $200 plan
  • Account sharing: Violations of usage policies through account reselling detected
  • Solution: Max plan users can purchase additional usage at standard API rates
  • Impact: Estimated to affect heavy users running Claude Code 24/7

(TechCrunch, Engadget)

I fully understand Anthropic's position here. I spent a few days running Claude Code 24/7 using background tasks with multiple subagents, and based on some calculations, I was racking up what would have been costs in the many thousands of dollars. We've seen Cursor do this in the past where they had an unlimited plan but had to back off. It seems to be a pattern: lure in customers with generous offerings, then change the policies. The main lesson is don't subscribe to yearly plans, because these companies will change the pricing underneath you. That said, I still think Anthropic's $200 a month is a screaming deal. If I ever run out of credits for the week, I'll just buy a second account. It achieves way more for $200 than I could ever achieve on my own. I'm constantly using it, prompting it, tweaking things with it. I can't imagine development without it. In fact, I'm using it right now by dictating and asking it to polish this commentary (although I still go back and hand-edit for that hand-crafted, home-grown feel).

ChatGPT Study Mode Rolls Out to All Users

OpenAI's Study Mode is now available to all free, Plus, Pro, and Teams users as of July 29, 2025, expanding educational AI capabilities across the platform.

Features:

  • Universal access: Available across all ChatGPT tiers
  • Educational focus: Designed to make learning more accessible and intentional
  • Interactive learning: Adaptive questioning and explanations
  • Subject coverage: Works across all academic disciplines

(OpenAI Official, TechCrunch)

I tried ChatGPT Study Mode briefly last night and didn't see much value compared to the prompts I use to generate quizzes on topics. But I definitely need to spend more time giving it a fair shake before drawing any real conclusions. It's fairly trivial to ask any AI agent to create quizzes and materials. Maybe this is just a streamlined UI for less advanced users. I know many of us could do this ourselves.

Google Introduces NotebookLM Video Overviews

Google launched Video Overviews in NotebookLM on July 29, 2025, creating visual summaries with slides, diagrams, and AI narration from uploaded sources.

Technical details:

  • Format: Short, engaging slide presentations
  • Content: Automatically extracts images, diagrams, quotes, and data
  • Narration: AI host provides voice-over explanations
  • Alternative to: Audio Overviews for visual learners
  • Use cases: Research synthesis, presentation creation, study materials

(Google Blog, TechCrunch)

I've generated a few of these so far and absolutely love them. The narrator's voice is extremely pleasant. It's probably the best AI voiceover I've ever heard. I love the slides, the way it highlights text, pretty much everything about it. Definitely looking forward to generating more of these.


🛠️ Developer Tooling Updates

Cursor 1.3: Enhanced Agent Capabilities

Cursor 1.3 is rolling out with significant improvements to agent interaction and development workflow transparency.

Major updates:

  • Shared terminal: Agents can now interact with your terminal session
  • Context usage visibility: See exactly what context is being used in Chat
  • Faster edits: Performance improvements across the board
  • Git integration: Checkpoints now use git under the hood
  • Notebook support: Checkpoints work with Jupyter notebooks
  • Security: Removed denylist for auto-run, improved security model
  • UI improvements: Right-click directories to send to chat

(Cursor Changelog, AlternativeTo)

Backgrounding tasks in the terminal has been one of the biggest pain points in Cursor agents, so I'm definitely excited to see this feature. Honestly, I haven't had a chance to fully test it out yet, but I'm looking forward to putting it through the wringer.

Microsoft Edge Copilot Mode: Multi-Tab RAG

Satya Nadella unveiled Copilot Mode in Edge on July 28, 2025, introducing multi-tab RAG capabilities that analyze all open browser tabs simultaneously.

Key features:

  • Multi-tab analysis: RAG across all open tabs in real-time
  • Research synthesis: Combine information from multiple sources
  • Browser-native AI: First step in "reinventing the browser for the AI age"
  • Academic use case: Demonstrated analyzing Nature journal papers
  • Contextual understanding: Maintains context across different websites

(Microsoft Edge Blog, TechCrunch)

It's fascinating watching every browser add AI capabilities. I've used Chrome forever, so I don't see any AI feature pulling me away after I've customized all my extensions. I've tried the Dia browser, but I spend so much more time using AI in ChatGPT, Gemini, or Claude's actual home pages than opening browser tabs and asking questions. Maybe I just need to adjust my mindset and optimize for chatting with multiple tabs? But my multiple tabs are usually multiple AI sessions that have done searches inline, so 🤷‍♂️

OpenTUI: Terminal UI Library for TypeScript

SST is sponsoring the development of OpenTUI, a TypeScript library for building terminal UIs, with impressive real-time rendering capabilities.

Technical specifications:

  • TypeScript-first: Native TypeScript support for terminal UIs
  • Performance: Demos show real-time rendering with no speed-up
  • Based on: kmdrfx's previous work
  • Repository: github.com/sst/opentui
  • Use cases: CLI tools, development dashboards, interactive terminals

(GitHub Repository, SST Blog)

The OpenCode terminal team is definitely pushing terminals way beyond what I thought they were capable of. It does seem a bit ironic to build UIs inside terminals when that's not their purpose at all. I don't see the value when there are so many IDEs with built-in terminals where you could build way more advanced UIs that interact with the terminals and let them do what they're best at. But who knows? I could be surprised. I know this is definitely going to appeal to a certain subset of developers.


💼 AI Business & Ecosystem

Industry Shift: From Documentation to Rapid Prototyping

Tech companies are experiencing a fundamental cultural shift from documentation-heavy processes to rapid prototyping, enabled by AI coding assistants.

Cultural transformation across the industry:

  • Old model: Extensive PRDs and documentation before development
  • New model: "Vibe-code" prototypes created in hours instead of days
  • Role evolution: Product and engineering roles increasingly overlapping
  • Time equation: AI-assisted prototyping now faster than writing specifications
  • Impact: Ideas tested in code before extensive planning phases

(Industry Analysis)

I like seeing these efforts where project managers are being taught to think in AI workflows. We all need to adjust to what our new tools are capable of. The longer we fight against it and stick to traditional ways, the faster we'll fall behind when it comes time to really leverage these tools' full power. Even if we fail sometimes using these tools, it's better to be prepared for when they become more reliable. If there's one thing we've noticed, it's that AI tools have grown increasingly reliable over the past year. Think about how you use AI today compared to July 2024.

Cline's Critique of AI Subscription Economics

Cline published a detailed analysis explaining why subscription models for AI inference are economically unsustainable.

Key arguments:

  • Cost mismatch: $200/month power users can burn $500/day in AI costs
  • Provider losses: Up to $14,800/month per heavy user
  • Commodity nature: AI inference is like gas or electricity
  • Hidden limits: Providers forced to add undisclosed usage caps
  • Solution: Bring-your-own-key model with transparent usage
  • Business model: Cline monetizes through enterprise features, not inference markup

(Cline Blog, VentureBeat Analysis)
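The provider-loss figure above is simple arithmetic. A back-of-envelope sketch in Python, assuming the post's $500/day burn rate and a 30-day billing month, reproduces it:

```python
# Back-of-envelope check of the subscription-vs-usage gap described above.
subscription = 200   # $/month for a heavy-user plan
daily_burn = 500     # $/day in inference costs for a power user
days = 30            # billing month

monthly_cost = daily_burn * days            # what the provider pays out
provider_loss = monthly_cost - subscription # shortfall per heavy user

print(monthly_cost)   # 15000
print(provider_loss)  # 14800
```

Which is exactly where the "$14,800/month per heavy user" number comes from.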

I shared this because I thought it was interesting commentary on pricing models, tokens, and tools. I don't think anyone really has insight into what the future holds regarding price versus model capabilities. How much work does $10 get you? Right now, we're all caught up in pricing model changes being pulled out from under us. But it's quite possible that by this time next year, everything will be so cheap and fast that it won't matter.


🤖 Model & Research Updates

Z.ai Launches GLM-4.5 and GLM-4.5 Air

Z.ai released GLM-4.5 (355B total/32B active) and GLM-4.5-Air (106B total/12B active) on July 28, 2025, featuring frontier reasoning, coding, and agentic capabilities.

Technical specifications:

  • GLM-4.5: 355B parameters total, 32B active, $0.6/$2.2 per 1M tokens
  • GLM-4.5-Air: 106B parameters total, 12B active, $0.2/$1.1 per 1M tokens
  • Training: Hybrid post-training with domain-specific experts
  • Features: Native coding agent inspired by Claude Code
  • Benchmarks: Pareto frontier performance at respective scales
  • Availability: Weights on Hugging Face, APIs via Z.ai and OpenRouter

Notable capabilities:

  • Website generation: Create full-stack sites from prompts
  • Slides/Poster agent: Model-native presentation creation
  • Interactive artifacts: Games, physics simulations in HTML/SVG/Python

(Z.ai Blog, Hugging Face, PR Newswire, VentureBeat)
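The listed prices make per-job cost estimates straightforward. A minimal Python sketch, using the announcement's per-1M-token rates (the token counts below are made up for illustration):

```python
# Per-1M-token prices (input, output) from the GLM-4.5 announcement.
PRICES = {
    "glm-4.5":     (0.6, 2.2),
    "glm-4.5-air": (0.2, 1.1),
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one job at the listed rates."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical workload: 2M input tokens, 500K output tokens.
print(round(job_cost("glm-4.5", 2_000_000, 500_000), 2))      # 2.3
print(round(job_cost("glm-4.5-air", 2_000_000, 500_000), 2))  # 0.95
```

At these rates, Air comes in at well under half the cost of the full model for the same workload.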

I've seen many benchmarks, but not many people actually using it. I'll dig in next week, but I've been fooled by benchmarks before, so I'm tempering my excitement.

Gemini 2.5 MCP Integration: 2,700+ APIs via Pipedream

Google showcased Gemini 2.5's MCP integration connecting to 2,700+ APIs and 10K tools through Pipedream in under 100 lines of code.

Implementation details:

  • Code simplicity: Complete agent in <100 lines
  • API access: 2,700+ APIs through Pipedream
  • Tool count: 10,000+ available tools
  • Demo: Google Calendar and Gmail integration
  • Repository: GitHub example

(Google DeepMind Blog, Pipedream Platform, GitHub Example)

It's fascinating to see Gemini excel in some areas, especially with larger context windows, where other agents fail. They're tackling problems many people don't have yet—and might never have. I'm definitely never going to build a tool that connects to 2,700 APIs, but I'm glad someone's pushing the boundaries that way.


🎨 Creative AI Breakthroughs

Photoshop Beta: Harmonize Feature Revolution

Adobe's Photoshop beta introduced Harmonize on July 29, 2025, an AI feature that automatically matches the lighting, color, and tone of a composited element to its background, fundamentally changing how compositing works in professional image editing.

(Adobe Blog, TechCrunch)

I feel like this is the holy grail for photo editing. I'm decent with the lasso tool and yanking a picture of somebody and plopping it onto a background. But I've never been great at matching lighting or getting the colors just right. This is exciting work. For design work, especially marketing campaigns, this is going to save people an unfathomable amount of time.

Veo 3's Emergent Visual Instruction Capability

Google discovered an emergent capability in Veo 3 where the model can read and execute visual instructions drawn on the first frame of a video.

How it works:

  • Input method: Draw or annotate instructions on the starting frame
  • Execution: Model interprets and executes instructions sequentially
  • Prompt: "Delete instructions and execute in order"
  • Applications: Complex spatial relationships, multi-step animations
  • Discovery: Found by Google Labs team during testing

(Google Flow Blog, DeepNewz AI)

I love this story so much. The fact that they didn't design for this, but people discovered a way to use it, is something we'll see more and more of as these models improve. And it's not just a minor discovery—it's an awesome way of interacting with video. Classic storyboarding with arrows and text boxes is just chef's kiss perfection. It makes me wish I was doing more AI video work.


⚡ Quick Updates

  • Ian Nuttall's 7 custom Claude Code agents: code-refactorer, prd-writer, project-task-planner, vibe-coding-coach, security-auditor, frontend-designer, content-writer (GitHub, Thread)
  • Gemini CLI comparison vs Claude Code: 50+ task comparison shows tool choice depends on specific use case - Claude Code excels at multi-file edits, Gemini at single-file refactoring (Analysis)
  • Alex Albert collecting non-coding Claude Code uses: Community sharing creative applications beyond development (Thread)
  • Cursor tips thread: Eric Zakariasson gathering lesser-known Cursor features from community (Discussion)
  • Imagen 4 Ultra: Logan Kilpatrick confirms "best text to image model in the world" available in Gemini API (Announcement)
  • swyx on ambient agents: Predicts 1-2hr autonomy barrier will be broken by EOY 2025, changing usage patterns (Thread)

✨ Live Workshop: Transform into a Claude Code Power User ✨

Full details: https://egghead.io/workshop/claude-code

Learn the latest power user workflows to maximize your productivity. Join me live as I walk through building CLI chains, video prompting, triggering workflows on hooks, and much more:

  • When: Friday, August 8th at 12pm UTC (1pm UK / 2pm Europe)
  • Where: Zoom
  • Investment: $350

➡️ Register Now

Selling out quickly. Secure your spot today!


Thanks for reading! What custom agents are you building? I'd love to hear about your experiments.

Read this far? Share "AI Dev Essentials" with a friend! - https://egghead.io/newsletters/ai-dev-essentials

John Lindquist

egghead.io

Share with a coworker