
AI-Assisted Coding Workflows: Delegating vs Leveraging

12 min read · AI & Productivity

How AI Changed My Coding Workflow

Special thanks to my teammate Minh Le at Lorikeet, whose insights on AI-assisted development workflows have been invaluable as I learned this topic.

When AI coding tools first came out, I thought they were just fancy autocomplete. Type a comment, get some boilerplate, move on. But modern AI coding assistants? They've completely changed the game. We're not talking about autocomplete anymore. We're talking about agents that can actually implement entire features while you grab coffee.

Here's the thing though: just because these tools are powerful doesn't mean you should throw tasks at them randomly and hope for the best. I learned this the hard way after watching several AI-generated PRs turn into complete messes. The breakthrough came when I realized there are really just two ways to work with AI: either you delegate something and walk away, or you leverage AI as your pair programming partner. Knowing which approach to use? That's what separates getting 10x productivity from ending up in frustrating rabbit holes.

The Coding Task Spectrum

So here's the mental model that helped me figure this out: think of any coding task on a spectrum. On one end, you have tasks where you know exactly what needs to happen. Like "remove this feature flag" or "add unit tests for this service." On the other end? Tasks where you're still figuring things out, like "why is this page so slow?" or "how should we architect this new feature?"

Where your task falls on this spectrum tells you everything about how to work with AI.

Task Knowledge Spectrum

Known Tasks (Delegate)

  • Removing feature flags
  • Adding unit tests
  • Fixing diagnosed bugs
  • Implementing specs

Unknown Tasks (Leverage)

  • Diagnosing race conditions
  • Architectural design
  • New feature discovery
  • Performance optimization

Workflow #1: Delegating (Assign and Forget)

Let's talk about delegation. This is where AI really shines and where most of your productivity gains will come from. The idea is simple: you write up what needs to happen, hand it to an AI agent, and go work on something else. Come back later, review what it did, and ship it. That's it.

The key is that you need to know exactly what you want. If you're writing a spec and find yourself saying "figure out the best way to do this," that's a red flag. You're not delegating anymore, you're just hoping the AI makes good architectural decisions for you. (Spoiler: it won't.)

When to Delegate

Perfect Candidates for Delegation:

  • Feature Flag Cleanup: "Remove the ENABLE_NEW_CHECKOUT feature flag and all conditional logic, keeping the new code path."
  • Unit Test Generation: "Add unit tests for the UserService class, mocking the database layer."
  • Diagnosed Bug Fixes: "Fix the off-by-one error in pagination.ts:142 that causes the last page to be skipped."
  • Reference-Based Changes: "Update the Reply to Customer button to match the style in CustomerActions.tsx:45-67."
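
The flag-cleanup bullet is easy to make concrete. Here's a minimal before/after sketch using the ENABLE_NEW_CHECKOUT flag from the example spec; the checkout functions are hypothetical stand-ins, not code from any real app:

```typescript
// Hypothetical before/after for the ENABLE_NEW_CHECKOUT cleanup task.

const flags = { ENABLE_NEW_CHECKOUT: true }; // flag is fully rolled out

function legacyCheckout(total: number): string {
  return `legacy:${total}`;
}

function newCheckout(total: number): string {
  return `new:${total.toFixed(2)}`;
}

// Before cleanup: both code paths exist, guarded by the flag.
function checkoutBefore(total: number): string {
  return flags.ENABLE_NEW_CHECKOUT ? newCheckout(total) : legacyCheckout(total);
}

// After cleanup: the flag check and legacy path are gone.
function checkoutAfter(total: number): string {
  return newCheckout(total);
}
```

In the real task the agent would also delete the flag definition and the legacy function; the success criterion is that the cleaned-up code behaves exactly like the flag-on path did.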

The Delegation Protocol

Successful delegation follows a systematic process:

1. Write a Clear Specification
   ├─ State the exact outcome
   ├─ Provide code references
   ├─ Include examples
   └─ Define success criteria

2. Create the Task/Issue
   ├─ Use your team's tracking system
   ├─ Include all relevant context
   └─ Link related files/discussions

3. Assign to AI Agent
   ├─ Match task complexity to agent capability
   ├─ Provide access to relevant codebase
   └─ Set clear boundaries

4. Batch Review Later
   ├─ Limit to 1-2 review loops max
   ├─ Accept or reject (no endless iterations)
   └─ If unclear, handle manually

Writing Effective Delegation Specs

The quality of your specification directly determines success rate. Here's a comparison:

Poor Spec (30% success)

Task: Add dark mode support

Please add dark mode to the app.

This is too vague: no context, no examples, no constraints.

Good Spec (85% success)

Task: Add dark mode toggle to Settings page

Reference: See ThemeContext.tsx for theme state
Location: Add toggle to SettingsPage.tsx:67
  (below notification preferences)
Behavior:
- Toggle should use Switch component from ui/Switch
- Persist preference to localStorage
- Apply theme immediately on change
Tests: Add test verifying localStorage update

Specific, actionable, with clear constraints and examples.
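
To see why that spec delegates well, here's a framework-free sketch of just the persistence logic it pins down. The Switch component and React wiring are omitted, the key name is an assumption, and storage is injected so the logic runs outside a browser (in practice you'd pass window.localStorage):

```typescript
// Sketch of the persistence behavior the spec above describes.
// Storage is injected so the logic is testable without a browser.

type Theme = "light" | "dark";

interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const THEME_KEY = "theme-preference"; // assumed key name, not from the spec

function loadTheme(store: KeyValueStore): Theme {
  return store.getItem(THEME_KEY) === "dark" ? "dark" : "light";
}

function toggleTheme(store: KeyValueStore): Theme {
  const next: Theme = loadTheme(store) === "dark" ? "light" : "dark";
  store.setItem(THEME_KEY, next); // persist immediately, per the spec
  return next;
}

// Demo with an in-memory store standing in for localStorage.
const memory = new Map<string, string>();
const store: KeyValueStore = {
  getItem: (k) => memory.get(k) ?? null,
  setItem: (k, v) => void memory.set(k, v),
};
const first = toggleTheme(store);  // light -> dark
const second = toggleTheme(store); // dark -> light
```

The toggle in the spec would just call toggleTheme with real localStorage and apply the returned theme; every behavior the AI needs to implement is already named in the spec.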

The One-to-Two Review Loop Rule

Here's a rule that's saved me so much time: if the AI doesn't get it right after one round of feedback, just do it yourself. Seriously. I used to go back and forth with agents trying to get them to understand what I wanted. You know what I learned? After round two, you're just wasting time.

If it's not right after one correction, either your spec was unclear (fix it for next time) or the task is too complex for delegation (should have been a leveraging session). Don't fall into the trap of endless iterations. Reject the PR and handle it manually.

Workflow #2: Leveraging (Active Collaboration)

Now let's talk about leveraging, which is completely different. This is where you sit down with your AI assistant and work through a problem together. You're not assigning and walking away. You're having a conversation. You're steering, correcting, exploring options. It's like pair programming, except your pair is an AI that can instantly read through your entire codebase.

This requires your full attention. You can't leverage AI while answering Slack messages or sitting in a meeting. But when you need to figure something out, solve something complex, or explore architectural options? This is where the magic happens.

When to Leverage

Ideal Scenarios for Leveraging:

  • Architectural Decisions: "Help me design a caching layer for our API. Consider Redis vs in-memory vs file-based approaches."
  • Bug Diagnosis: "We have a race condition in the checkout flow. Help me trace the issue through the payment processing pipeline."
  • Feature Discovery: "I want to add real-time collaboration to our document editor. Let's explore options and trade-offs."
  • Performance Optimization: "Our dashboard is loading slowly. Help me profile and identify bottlenecks."

The Leveraging Protocol

Active Collaboration Workflow

┌─────────────────────────────────────────┐
│ 1. Discovery Phase                      │
│    → Ask AI to explore codebase         │
│    → Request multiple approaches        │
│    → Discuss trade-offs                 │
└────────────┬────────────────────────────┘
             │
             ↓
┌─────────────────────────────────────────┐
│ 2. Design Phase                         │
│    → Propose initial direction          │
│    → Get AI's feedback and concerns     │
│    → Refine approach collaboratively    │
└────────────┬────────────────────────────┘
             │
             ↓
┌─────────────────────────────────────────┐
│ 3. Implementation Phase                 │
│    ⚠️  INTERRUPT EARLY AND OFTEN        │
│    → Review each file as it's written   │
│    → Correct mistakes immediately       │
│    → Note issues for later fixes        │
└────────────┬────────────────────────────┘
             │
             ↓
┌─────────────────────────────────────────┐
│ 4. Refinement Phase                     │
│    → Queue up review notes              │
│    → Address after main work complete   │
│    → Verify tests pass                  │
└─────────────────────────────────────────┘

The Critical Skill: Interrupting Effectively

Okay, this is probably the most important thing I'll tell you about leveraging AI: interrupt early and often. I cannot stress this enough. When your AI assistant starts heading in the wrong direction, don't wait. Don't let it finish. Don't think "maybe it'll figure it out." It won't. Mistakes compound.

If the AI is applying the wrong pattern, using the wrong data structure, or making security mistakes, stop it immediately. Every line of code built on a flawed assumption just makes the problem worse. I've seen people let agents write 500 lines of code in the wrong direction because they didn't want to interrupt. Don't be that person.

Interrupt when you see:

  • Wrong architectural pattern being applied
  • Incorrect assumptions about your data structures
  • Missing edge cases or error handling
  • Performance anti-patterns creeping in
  • Security vulnerabilities (especially injection risks)
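
The last bullet is worth making concrete. A minimal sketch of the SQL injection risk and the parameterized fix; no real database is involved, and the { text, params } object shape is illustrative rather than any specific driver's API:

```typescript
// The injection risk above, made concrete with a hostile input.

const userInput = "'; DROP TABLE users; --";

// Unsafe: user input is spliced into the SQL text itself, so the
// database would parse the payload as SQL.
const unsafeQuery = `SELECT * FROM users WHERE name = '${userInput}'`;

// Safe: the SQL text stays fixed; input travels separately as a
// parameter and is never parsed as SQL.
const safeQuery = {
  text: "SELECT * FROM users WHERE name = $1",
  params: [userInput],
};
```

The fix isn't escaping harder; it's keeping query text and user data on separate channels. If you see an agent building queries like the unsafe version, that's an immediate interrupt.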

Providing Context: The Make-or-Break Factor

Here's another thing that separates good AI collaboration from frustrating ones: being specific with your context. Don't say "update the API service." There are probably five different API services in your codebase. Which one? Be surgical. Give file paths, line numbers, function names. The more precise you are, the better the output.

Vague Context ❌

"Update the API call service to handle retries better."

Agent searches entire codebase, possibly finds wrong service.

Precise Context ✓

"In src/services/api/subscriber.service.ts:87-120, add exponential backoff to the fetchSubscriberData method."

Agent knows exactly where to look and what to modify.
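
As a sketch of what the agent might produce for that backoff request: the delay schedule is computed separately so it's easy to test, and the base, factor, and cap values are assumptions, not anything the spec fixed. Real code would also add jitter.

```typescript
// Exponential backoff schedule: 200ms, 400ms, 800ms, ... capped at 5s.
function backoffDelays(attempts: number, baseMs = 200, factor = 2, capMs = 5000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * factor ** i, capMs));
  }
  return delays;
}

// Retry wrapper using that schedule; sleeps between attempts only.
async function withBackoff<T>(fn: () => Promise<T>, attempts = 4): Promise<T> {
  const delays = backoffDelays(attempts - 1);
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === attempts - 1) throw err; // out of retries
      await new Promise((resolve) => setTimeout(resolve, delays[attempt]));
    }
  }
  throw new Error("unreachable");
}
```

The spec's fetchSubscriberData method would wrap its network call in withBackoff; because the precise context names the file and method, the agent doesn't have to guess where this belongs.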

Common Leveraging Patterns

Pattern 1: Read and Propose

Start by having the AI explore the codebase and propose multiple approaches before committing to implementation.

You: "Read src/auth/* and propose three approaches for adding
     OAuth2 support. Consider our existing session management."

AI: [Explores codebase, identifies patterns]
    "I've analyzed the auth layer. Here are three approaches:

    1. Extend SessionManager with OAuth adapter
       Pros: Minimal changes, reuses existing logic
       Cons: Couples OAuth to session lifetime

    2. Separate OAuthProvider service
       Pros: Clean separation, easier testing
       Cons: Duplicate token refresh logic

    3. Unified TokenManager abstraction
       Pros: Handles both session and OAuth cleanly
       Cons: Requires refactoring existing code"

You: "Let's go with option 3. Start with the TokenManager interface."

Pattern 2: Rapid Prototyping

Use AI to quickly prototype solutions with minimal upfront direction, helping you discover missed requirements early. Then throw away the prototype and implement properly.

You: "Prototype a real-time notification system. Just get something
     working quickly, don't worry about production quality."

AI: [Implements basic WebSocket solution]

You: "Good, this reveals we need:
     - Message queuing for offline users
     - Notification persistence
     - Priority levels

     Let's discard this and design properly now."

Pattern 3: Test Pattern Replication

Provide one or two example tests showing your mocking and assertion patterns, then have the AI generate comprehensive test coverage following those patterns.

You: "Look at these two tests in auth.test.ts:
     - Lines 15-32: Shows how we mock the database
     - Lines 45-58: Shows our assertion style

     Now generate comprehensive tests for UserService following
     these exact patterns. Cover all public methods."
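
Here's a framework-free sketch of the kind of pattern being replicated. UserService, its database interface, and the stub are all hypothetical names, and a plain object stands in for whatever mocking library the real auth.test.ts uses:

```typescript
// Hypothetical UserService with a hand-rolled database stub.

interface Db {
  findUser(id: string): { id: string; name: string } | undefined;
}

class UserService {
  private db: Db;

  constructor(db: Db) {
    this.db = db;
  }

  displayName(id: string): string {
    return this.db.findUser(id)?.name ?? "unknown";
  }
}

// The "mock the database layer" pattern: a plain stub object
// returning canned data, no real database involved.
const stubDb: Db = {
  findUser: (id) => (id === "u1" ? { id: "u1", name: "Ada" } : undefined),
};

const service = new UserService(stubDb);
const known = service.displayName("u1");     // hits the stubbed row
const missing = service.displayName("nope"); // falls back to "unknown"
```

Once the AI has seen two tests in this style, it can replicate the stub setup and assertion shape across every public method far faster than you could type them.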

Pattern 4: Incremental Review Notes

As you actively collaborate, keep a running list of minor issues to address after the main work is complete. This prevents context-switching and maintains momentum.

You: "The main logic looks good. I'm noting these for cleanup:
     - Add JSDoc to public methods
     - Extract magic number to constant
     - Handle edge case for empty arrays

     Continue with the error handling implementation.
     We'll fix these notes after."

The Anti-Pattern: Unfocused Multitasking

One of the most common mistakes is trying to manage multiple AI agents simultaneously without proper delegation. This creates several problems:

Why Multitasking Multiple Agents Fails:

  • Context Loss: Switching between agents causes you to lose the mental model of each task, leading to poor decisions and missed issues.
  • Compound Errors: Without focused attention, each agent makes mistakes that build on each other, requiring extensive rework.
  • Poor Output Quality: Agents produce code that works in isolation but doesn't integrate well because you weren't paying attention to architectural consistency.

The Solution: If a task is suitable for autonomous work, fully delegate it and review later. If it requires active collaboration, give it 100% of your attention. Don't try to split focus across multiple leveraging sessions.

Decision Framework: Delegate or Leverage?

Ask yourself these questions:

✓ Delegate if ALL these are true:

  □ I know exactly what needs to be done
  □ I can provide clear examples or references
  □ The solution approach is obvious
  □ Success criteria are well-defined
  □ The task is self-contained

→ Leverage if ANY of these are true:

  □ I need to explore different approaches
  □ The problem requires diagnosis first
  □ Architectural decisions are involved
  □ Requirements are unclear or evolving
  □ Multiple subsystems need coordination

You Own the Output, Always

Let's get one thing crystal clear: every line of AI-generated code is your responsibility. Not the AI's. Yours. I don't care if Claude wrote it, Devin wrote it, or GPT-47 wrote it. When it breaks in production, when it has a security vulnerability, when it's unmaintainable, that's on you.

This means you need to review everything. Understand the architectural decisions. Check for security issues. Test the edge cases. Make sure it's maintainable. AI tools are incredible productivity multipliers, but they're tools. The quality of your product still depends on your judgment, not the AI's.

I've seen teams ship AI-generated code because "the tests passed" without actually reviewing what it does. Don't do this. Tests validate behavior, not correctness or maintainability or security. You still need to read the code.

Practical Examples

Example 1: Feature Flag Removal (Delegate)

Task Specification:

Title: Remove ENABLE_TOPICS_V2 feature flag

Context:
The ENABLE_TOPICS_V2 feature flag has been enabled in production
for 2 weeks with no issues. Time to clean up the old code path.

Requirements:
1. Remove feature flag definition from config/features.ts
2. Remove all conditional checks for ENABLE_TOPICS_V2
3. Keep the new code path (v2 implementation)
4. Delete old v1 components:
   - TopicsListV1.tsx
   - TopicsFilterV1.tsx
5. Rename v2 components (remove V2 suffix):
   - TopicsListV2.tsx → TopicsList.tsx
   - TopicsFilterV2.tsx → TopicsFilter.tsx
6. Update all imports throughout the codebase
7. Update tests to remove feature flag mocking

Success Criteria:
- No references to ENABLE_TOPICS_V2 remain
- All tests pass
- Application builds without warnings

Outcome: Assigned to Devin, completed in 12 minutes with zero review loops needed.

Example 2: Performance Optimization (Leverage)

Collaboration Session:

You: "Our dashboard page is loading slowly. Help me profile and
     identify bottlenecks. Start by examining the loader in
     app/routes/dashboard.tsx"

AI: [Reads code, identifies multiple sequential queries]
    "I see several performance issues:
    1. Sequential database queries (lines 45-67)
    2. Unneeded data fetching for filters (line 52)
    3. Missing pagination (line 78)

    Should we address all three or prioritize?"

You: "Prioritize query parallelization first. Show me the
     current query waterfall."

AI: [Displays timing analysis]
    "Current: 2.3s total
    - Auth: 100ms
    - User data: 300ms (waits for auth)
    - Stats: 800ms (waits for user data)
    - Recent activity: 1100ms (waits for stats)"

You: "Perfect. The stats and activity queries don't depend on
     each other. Start by parallelizing those. Also, I notice
     we're fetching full user objects but only using IDs, let's
     optimize that too."

AI: [Implements parallel queries + selective fetching]

You: "Good progress. I see a potential issue in the error handling -
     what happens if stats fails but activity succeeds?"

AI: [Adds proper error handling]

You: "Excellent. Now add performance instrumentation so we can
     monitor this in production. Follow the pattern in
     lib/monitoring/performance.ts:23-45"

Outcome: Active session over 45 minutes, reduced page load from 2.3s to 650ms, discovered and fixed three additional edge cases during collaboration.
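
The two ideas at the heart of that session, parallelizing independent queries and surviving a partial failure, can be sketched as follows. The loader names are hypothetical, and the merge step is split out so the failure logic is easy to test:

```typescript
// Parallel loading with partial-failure handling.

type Settled<T> =
  | { status: "fulfilled"; value: T }
  | { status: "rejected"; reason: unknown };

interface Dashboard {
  stats: number[];
  activity: string[];
  degraded: boolean; // true when a panel failed but the rest still renders
}

// Pure merge step: a failed panel falls back to empty data instead of
// taking the whole page down with it.
function mergeDashboard(stats: Settled<number[]>, activity: Settled<string[]>): Dashboard {
  return {
    stats: stats.status === "fulfilled" ? stats.value : [],
    activity: activity.status === "fulfilled" ? activity.value : [],
    degraded: stats.status === "rejected" || activity.status === "rejected",
  };
}

// The parallelization itself: independent queries start together instead
// of waterfalling, and allSettled keeps one failure from discarding the
// other result (the concern raised in the session).
async function loadDashboard(
  fetchStats: () => Promise<number[]>,
  fetchActivity: () => Promise<string[]>,
): Promise<Dashboard> {
  const [stats, activity] = await Promise.allSettled([fetchStats(), fetchActivity()]);
  return mergeDashboard(stats, activity);
}
```

In the session above, stats and recent activity were the independent pair; the user-data query genuinely depends on auth, so it stays sequential. Knowing which dependencies are real is exactly the judgment you bring to a leveraging session.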

Maximizing Success: Best Practices

For Delegation

Do

  • Write detailed specifications with examples
  • Provide exact file paths and line numbers
  • Define clear success criteria
  • Batch review multiple tasks together
  • Use voice-to-text for faster specs

Don't

  • Delegate without clear requirements
  • Continue past 2 review loops
  • Delegate architectural decisions
  • Skip thorough code review
  • Assume tests validate correctness

For Leveraging

Do

  • Maintain 100% focus during sessions
  • Interrupt early when issues arise
  • Provide precise code references
  • Ask for multiple approaches first
  • Queue review notes for later

Don't

  • Multitask during active collaboration
  • Let mistakes compound before correcting
  • Use vague references like "the API service"
  • Accept first solution without exploration
  • Context switch between multiple agents

Common Pitfalls and How to Avoid Them

Pitfall 1: Over-Delegation

Symptom: Delegating complex architectural decisions because "the AI should figure it out."

Solution: Use the decision framework. If you can't write a clear spec with success criteria, it's a leverage task, not a delegation task.

Pitfall 2: Under-Leveraging

Symptom: Manually implementing everything because "I could do it faster than explaining it to the AI."

Solution: Practice leveraging workflows on lower-stakes tasks. The upfront time investment in collaboration pays dividends through learning and exploration.

Pitfall 3: Insufficient Review

Symptom: Shipping AI-generated code with minimal review because tests pass.

Solution: Remember that you own the output. Review with the same rigor you'd apply to human-written code; tests validate behavior, not correctness or maintainability.

Pitfall 4: Context Overload

Symptom: Providing massive context dumps hoping the AI will "figure out" what's relevant.

Solution: Be surgical with context. Point to specific files, functions, and patterns. More context isn't always better; precision matters more than quantity.

Key Takeaways

  1. Choose the Right Workflow: Delegate known tasks with clear specs, leverage AI for discovery and complex problem-solving.
  2. Specification Quality Matters: For delegation, invest time in clear, detailed specs with examples and references.
  3. Interrupt Early and Often: When leveraging, correct mistakes immediately before they compound.
  4. Provide Precise Context: Use exact file paths and line numbers instead of vague references.
  5. Limit Review Loops: Maximum 1-2 rounds for delegated tasks; reject and handle manually if unsuccessful.
  6. Own the Output: You're responsible for all AI-generated code; review thoroughly and maintain quality standards.
  7. Stay Focused: Give leveraging sessions 100% attention; batch review delegated tasks later.

Wrapping Up

Look, AI is changing how we write code. That's just reality. The engineers who figure this out early are going to have a massive advantage over those who don't. But it's not about blindly trusting AI or rejecting it entirely. It's about developing a systematic approach to human-AI collaboration.

The delegate vs leverage framework is that system. Is your task well-defined? Delegate it. Is it exploratory or complex? Leverage AI as your pair programmer. It's really that simple.

Start with something easy. Pick a feature flag removal or a simple refactor. Write a good spec, delegate it, see what happens. Then try a leveraging session on something harder, like performance optimization or debugging a race condition. You'll quickly develop intuition for which approach fits which situation.

The engineers who master these workflows now will be the ones leading teams in five years. Don't wait. Start practicing today.

© 2026 Cris Ryan Tan. All rights reserved.
