
Claude Skills: Turning Personal Expertise into Team Superpowers

6 min read · AI & Team Productivity

The Knowledge Transfer Problem

You spend weeks mastering a complex workflow. You document it, share it with your team, and hope everyone references it when needed. But people forget the docs exist, can't find them, or end up reinventing solutions you already figured out.

Claude Skills solve this differently. Instead of hoping people look up your documentation, you give Claude the knowledge directly. When someone needs that expertise, Claude already has it. It still takes effort to write good skills, and they need maintenance as your systems change, but the payoff is that expertise becomes ambient rather than something people have to go looking for.

How Skills Work

Think of Skills like giving Claude a reference guide it checks before starting work. When you begin a conversation, Claude scans your uploaded Skills and activates any that are relevant. You don't manually trigger them. They activate based on context.

Without Skills

Copy-paste the same prompt every time
Context gets lost between conversations
Everyone needs to remember the workflow

With Skills

Upload once, works everywhere
Automatically applies when relevant
Team shares the same knowledge base

Skills We Use at Lorikeet

We've built around 60 skills at Lorikeet. Some are essential, some were experiments that didn't stick. Here are the ones that actually became part of how we work daily.

On-Call Incident Response

Our diagnose-ticket skill gets the most use during on-call rotations. When you get paged about a production issue, you say "diagnose ticket abc-123" and Claude fetches production logs, identifies the ticket type, builds a timeline of events, correlates errors with code paths, and generates a diagnosis report.

What makes this skill work well is its structure. The main file is a thin dispatcher, around 160 lines, that routes to specialized sub-guides based on log patterns it detects. Voice ticket issues route to one guide, integration failures to another, workflow execution problems to a third. Each sub-guide has the specific queries, patterns, and decision trees for that category. We call this the "hub-and-spoke" pattern, and it's become our go-to for any skill that needs to handle multiple scenarios.

We've also built diagnose-and-track, which extends this further. After diagnosing an issue, it searches for duplicate tracking tickets, creates or enriches tickets with the diagnosis, and can reply to Slack threads with findings. It turns incident response into a systematic workflow rather than ad-hoc debugging.

Development Workflow Skills

  • ship-it: Complete PR workflow from code to production. Sets up worktrees, runs local checks, creates PRs, performs self-review, runs security scans, monitors CI, handles review feedback, merges, and monitors deployment. This is our best example of skill composition. It orchestrates eight other skills, each of which works independently too.
  • pr-feedback: Fetches PR review comments, filters out bot noise, categorizes each comment (actionable, unclear, discussion, skip), and works through them systematically. Commits fixes with clear descriptions of what was addressed. Includes fallback paths for when tooling isn't available.
  • ci-monitor: Polls CI status and monitors for new review comments in parallel. Fetches failure logs when checks fail, attempts fixes for common issues, and adapts its polling interval based on how long jobs typically take. Includes judgment criteria for which comments to act on immediately vs. discuss first.
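Skills like ci-monitor are mostly a described loop. Here is a simplified, hypothetical sketch of what such a monitoring-loop skill might look like; the intervals and fix list are illustrative, not our actual configuration:

```markdown
# CI Monitor (simplified sketch)

## Loop
1. Poll check status every 2 minutes
2. If jobs typically run long (10+ min), back off to 5-minute polls after the first two checks
3. In parallel, check for new review comments on each poll
4. Repeat until all checks pass or a check fails

## On Failure
- Fetch the failing job's log
- Attempt fixes for common issues (lint, formatting, a flaky-test rerun)
- Push the fix and resume polling
- If the same job fails twice, stop and report instead of retrying
```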

Code Quality Skills

  • security-pass: A filter pipeline with early exits. First, if any changed file is in a path that always requires human review (auth, migrations, infrastructure), it stops and flags it. Then it auto-passes anything that's clearly safe (test files, styling). Everything else gets checked against our threat model: auth middleware, secrets, tenant isolation. It only produces output when there's something to report.
  • find-similar-bugs: After fixing a bug, it analyzes the root cause, identifies a "standard pattern" already in the codebase, then searches for places using the anti-pattern. Each candidate gets triaged: BUG (fix it), SAFE (document why), or REFACTOR (update for consistency). Prevents the same bug from appearing in five different places.
  • remix-page-load-optimization: Performance optimization for our Remix frontend. Progresses through three patterns of increasing aggressiveness: defer non-critical data, defer main content with skeleton UI, strategic query ordering. Includes honest limitations about when defer doesn't actually help.
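To make the filter-pipeline shape concrete, here is a hedged sketch of a security-pass-style skill. The path names and check questions are illustrative placeholders, not our actual rules:

```markdown
# Security Pass (simplified sketch)

## Stage 1 — Always Human Review (stop and flag)
Changed files under paths like: auth/, migrations/, infra/

## Stage 2 — Auto-Pass (stop, no output)
Test files, styling-only changes, documentation

## Stage 3 — Threat-Model Checks (everything else)
- Auth middleware: does any new route skip the auth check?
- Secrets: hardcoded keys, tokens, or connection strings?
- Tenant isolation: queries missing a tenant filter?

## Output
Report only findings. If everything passes, say nothing.
```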

DevOps Skills

  • local-setup: Sets up a complete local development environment from scratch. Handles environment files, dependency installation, Docker builds, and service startup in the right order. Includes verification steps with expected outputs and a fallback path when prerequisites aren't available.
  • debug-local-setup: Another hub-and-spoke skill. The main file is under 90 lines, mostly a routing table that maps error patterns to five specialized sub-guides (Docker, database, auth, dependencies, services). Each sub-guide has diagnostic commands and common fixes for that category.
  • deploy-monitor: Watches deployments after merge, polling build status and reporting completion or failure. Uses different output templates for success (timestamps, what deployed) vs. failure (log links, which jobs failed).

Structural Patterns We've Found

After building 60+ skills, we've noticed most fall into a few structural patterns. Knowing these upfront saves time when writing new ones:

| Pattern | How it works | Good for |
|---|---|---|
| Hub-and-spoke | Thin main file routes to specialized sub-guides based on what it detects | Incident diagnosis, troubleshooting, anything with multiple scenarios |
| Filter pipeline | Early exits at each stage: stop if human review is needed, auto-pass if clearly safe, check everything else | Security checks, code review triage, anywhere most items need no action |
| Linear orchestration | Fixed sequence of steps with confirmation checkpoints and calls to other skills | End-to-end workflows like ship-it where order matters |
| Monitoring loop | Poll for status changes, react when something happens, repeat until done | CI monitoring, deployment watching, async processes you'd otherwise babysit |
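The linear orchestration pattern is the least obvious of the four, so here's a hedged sketch of a ship-it-style top level, based on the steps described above. The exact ordering and checkpoint placement are illustrative:

```markdown
# Ship It (simplified sketch)

1. Set up worktree → run local checks (lint, types, tests)
2. Create PR → invoke security-pass → invoke self-review
3. Checkpoint: confirm with the user before requesting review
4. Invoke ci-monitor until checks pass
5. Invoke pr-feedback to work through review comments
6. Checkpoint: confirm before merge
7. Merge → invoke deploy-monitor until deployment completes
```

Each numbered step that names another skill is just a call into that skill, which is what makes the pieces independently testable and reusable.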

What We Learned the Hard Way

Not everything worked on the first try. Some lessons from building and maintaining our skills library:

  • Vague skills produce vague results. Our first version of the incident diagnosis skill said "check the logs for errors." Claude would dutifully check logs and report that yes, there were errors. The skill became useful when we added the specific queries to run, the exact column names to look for, and a routing table that mapped log patterns to specialized guides. Being precise is the difference between a skill that saves time and one that wastes it.
  • Skills go stale. We changed our database column naming conventions and didn't update the diagnosis skill. For weeks, it was generating queries with the old column names and hitting errors. Now we treat skills like code: they live in the repo, get reviewed in PRs, and someone notices when they drift.
  • Big monolithic skills break down. We initially wrote ship-it as one massive file. It was fragile and hard to update. Changing the CI monitoring behavior meant editing a 500-line skill and hoping you didn't break the deployment section. Splitting it into composable pieces (separate skills for PR creation, CI monitoring, security checks, deployment watching) made each one more reliable and independently useful.
  • Some things shouldn't be skills. We tried making a skill for architectural design decisions. It didn't work. The problem space was too open-ended and context-dependent. Skills work best when there's a repeatable process with clear inputs and outputs. If you find yourself writing a skill that's mostly "use your judgment," it's probably not a good candidate.

Building Your First Skill

Here's a realistic example of what a well-structured skill looks like. This is a simplified version of a troubleshooting skill using the hub-and-spoke pattern:

# Debug Local Dev

## When to Use This Skill
- User says "local dev is broken", "can't start the app", etc.
- After a failed local-setup attempt

## Quick Diagnostic
Run these in order, stop at the first failure:
1. Check prerequisites: node -v, docker ps
2. Check containers: docker compose ps
3. Check logs: docker compose logs --tail=50
4. Check health: curl http://localhost:3000/healthz

## Route to Guide

| Error pattern              | Guide              |
|----------------------------|--------------------|
| "connection refused"       | → database.md      |
| "docker" in error message  | → docker.md        |
| "401" or "403"             | → auth.md          |
| "module not found"         | → dependencies.md  |
| None of the above          | → general.md       |

## Each sub-guide contains:
- Specific diagnostic commands for that category
- Common root causes with fixes
- "If all else fails" nuclear option (full reset)

Notice how the main file stays small and acts as a router. The specialized knowledge lives in sub-guides that can be updated independently. The routing table handles the branching logic that makes troubleshooting hard to do from memory.

What Makes Skills Effective

  • Decision frameworks over procedures: Instead of listing steps, include "If you see X, use guide A. If you see Y, use guide B." This handles the branching logic that makes debugging hard.
  • Concrete commands: Include the actual shell commands, API calls, or file paths. "Fetch logs" is vague. docker compose logs api --tail=50 is actionable.
  • Output templates: Define what the final output should look like. This ensures consistent reports that can be shared or tracked. We use different templates for success vs. failure cases.
  • Judgment criteria: For skills that make decisions (like which PR comments to act on), include explicit criteria. "Action immediately: bug reports, security concerns. Discuss first: architectural changes. Skip: stylistic preferences."
  • Graceful degradation: What should Claude do when a tool isn't available or a command fails? Our PR feedback skill includes a fallback path for when the GitHub CLI isn't installed. Skills that handle failure gracefully get used more.
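The last three points can live together in a short block at the bottom of a skill. A hedged sketch, with placeholder fields in angle brackets standing in for whatever the skill actually fills in:

```markdown
## Output Template — Success
Deployed <sha> at <time>. Checks passed: <list>. Nothing needs attention.

## Output Template — Failure
Failed job: <name>. Log: <link>.
Suspected cause: <one line>. Suggested next step: <one line>.

## If the GitHub CLI Isn't Available
Ask the user to paste the PR comments, then apply the same
categorization (act immediately / discuss first / skip).
```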

From Individual to Institutional Knowledge

The real value shows up over time. Each skill you add compounds:

  • A junior developer gets senior-level guidance during on-call
  • Backend engineers can confidently work on frontend tasks
  • New hires stop getting blocked on the same local dev issues
  • Knowledge survives team turnover

When someone improves a skill, everyone benefits in their next conversation. It's documentation that gets used because Claude uses it for you.

That said, skills aren't a silver bullet. They need maintenance, they can go stale, and they work best for repeatable processes with clear structure. But for the workflows your team runs every day (shipping code, debugging incidents, onboarding new hires), the effort of writing a good skill pays for itself quickly.

Get Started

Think about the last time you helped a teammate debug something. What commands did you run? What patterns did you look for? What decision did you make based on what you saw? That's your first skill.

Start small. A skill that fetches logs and identifies error patterns is more useful than an elaborate workflow that tries to do everything. Once you have a few working skills, compose them into larger workflows. Our ship-it skill started as three separate skills that we later wired together.
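A first skill in that spirit can be a dozen lines. This sketch reuses the log command from earlier; the error patterns listed are illustrative examples, not a complete set:

```markdown
# Fetch and Triage Logs (starter sketch)

## When to Use
User reports an error in local dev.

## Steps
1. Run: docker compose logs api --tail=50
2. Scan for: stack traces, "connection refused", 4xx/5xx status codes
3. Report: the first error, the 5 lines before it, and a one-line
   guess at the cause
```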

Official Claude Code Skills Documentation

How to create and manage Skills in Claude Code.

Anthropic Prompt Engineering Course

Learn the fundamentals of writing effective prompts. The same principles apply to Skills.

© 2026 Cris Ryan Tan. All rights reserved.
