Claude Code Has Skills. PAI Has a Skill System. Here’s the Difference.

There’s a word that shows up in both Claude Code’s documentation and in PAI’s architecture: skills. And because they share the same word — and even the same file conventions — it’s easy to assume they’re roughly equivalent. One is just a slightly fancier version of the other.

They’re not. The relationship is closer to the one between HTTP and a web framework. Claude Code’s skill mechanism is the protocol. PAI is the framework built on top of it.

Understanding that distinction changed how I think about what I’ve actually built on my machine.


Start Here: What Claude Code’s Skill Mechanism Actually Is

Before explaining what PAI adds, it’s worth being precise about what Claude Code provides natively — because it’s both more minimal and more elegant than most people realize.

Claude Code’s skill system works like this:

  1. At startup, Claude reads every SKILL.md file it finds under ~/.claude/skills/
  2. The description field in each skill’s YAML frontmatter determines when that skill activates — it’s pure intent matching. Anthropic caps the description at 1024 characters.
  3. When a skill matches your request, the Skill tool injects the full SKILL.md content into Claude’s context window
  4. Claude follows the instructions in that file

That’s the entire mechanism. It’s a context injection system with a routing layer.
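To make the shape of that mechanism concrete, here is a minimal sketch in TypeScript. Everything in it is illustrative: the `Skill` interface, the sample catalog, and especially the matcher, which uses crude keyword overlap where the real system does model-driven intent matching.

```typescript
// A skill as the mechanism sees it: frontmatter description plus full body.
interface Skill {
  name: string;
  description: string; // YAML frontmatter `description`, capped at 1024 chars
  body: string;        // full SKILL.md content, injected on match
}

const skills: Skill[] = [
  {
    name: "OracleHCM",
    description:
      "Expert Oracle HCM Cloud troubleshooting. USE WHEN user mentions Oracle HCM, HDL, fast formulas",
    body: "## OracleHCM instructions...",
  },
];

// Crude stand-in for intent matching: keyword overlap between the request
// and the description. The real matcher is the model itself, not string math.
function matchSkill(request: string, catalog: Skill[]): Skill | undefined {
  const words = request.toLowerCase().split(/\W+/);
  return catalog.find((s) =>
    words.some((w) => w.length > 2 && s.description.toLowerCase().includes(w))
  );
}

// "Context injection": prepend the matched skill's body to the request.
function injectContext(request: string): string {
  const skill = matchSkill(request, skills);
  return skill ? `${skill.body}\n\n${request}` : request;
}
```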

The USE WHEN clause in a skill description is the key piece. Here’s a simplified example from my OracleHCM skill:

```yaml
---
name: OracleHCM
description: Expert Oracle HCM Cloud troubleshooting. USE WHEN user mentions
  Oracle HCM, HCM Cloud, HDL, HCM Data Loader, Journey, Checklist, workflow
  approvals, autocomplete rules, fast formulas, security profiles...
---
```

When I describe an Oracle HCM problem in natural language, Claude Code matches my intent against that description and loads the skill. I never have to say “use the Oracle HCM skill.” The intent matching handles it.

Elegant. Minimal. And — on its own — surprisingly limited.


The Gap Between “Context Injection” and “Operational Capability”

Imagine a skill that’s just a long markdown file. When it loads, Claude reads the instructions and tries to follow them. If the instructions are clear and the task is simple, that works fine. But for anything complex — something that involves multiple steps, personalized behavior, CLI tooling, external APIs, or parallel agents — a single markdown file loaded into context starts to break down.

The instructions get long. They can’t be personalized without making the skill personal (and therefore un-shareable). There’s no way to dispatch to a sub-procedure. There’s no tooling layer. There’s no way to say “if the user wants to create a blog post, follow this procedure; if they want to publish, follow that one.”

This is the gap PAI fills.


What PAI Builds on Top: Nine Layers

PAI’s SKILLSYSTEM.md defines a canonical structure that every skill must follow. It’s not a suggestion — it’s enforced by convention and by the CreateSkill skill that scaffolds new ones. Here’s what each layer adds.

Layer 1 — Canonical Structure

Claude Code just needs a SKILL.md. PAI requires a specific directory layout:

```
SkillName/
├── SKILL.md          ← minimal routing only (40-50 lines)
├── Workflows/        ← execution procedures, one per task
│   ├── Create.md
│   └── Update.md
├── Tools/            ← TypeScript CLI tools (always present)
│   └── Generate.ts
└── ApiReference.md   ← context files loaded on demand
```

SKILL.md stays minimal. Complexity lives in workflows and context files that load when actually needed.
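As a sketch of what “enforced by convention” can look like, here is a hypothetical layout validator written against the tree above. The rule set (a SKILL.md, at least one workflow, at least one TypeScript tool) follows the diagram; PAI’s actual CreateSkill scaffolding may check more.

```typescript
// Validate a skill's file listing against the canonical layout sketched above.
// Returns a list of violations; an empty list means the layout conforms.
function validateSkillLayout(paths: string[]): string[] {
  const errors: string[] = [];
  if (!paths.includes("SKILL.md")) errors.push("missing SKILL.md");
  if (!paths.some((p) => p.startsWith("Workflows/") && p.endsWith(".md")))
    errors.push("no workflow files under Workflows/");
  if (!paths.some((p) => p.startsWith("Tools/") && p.endsWith(".ts")))
    errors.push("no TypeScript tools under Tools/");
  return errors;
}
```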

Layer 2 — Workflow Routing

This is the most immediately useful layer. Claude Code routes to a skill. PAI routes within a skill.

The routing table in every SKILL.md dispatches sub-tasks to specific workflow files:

| Workflow    | Trigger                    | File                      |
|-------------|----------------------------|---------------------------|
| **Create**  | "write a post"             | `Workflows/Create.md`     |
| **Publish** | "publish", "deploy"        | `Workflows/Publish.md`    |
| **Header**  | "create header image"      | `Workflows/Header.md`     |

“Write a post” and “publish the site” both activate the same skill, but they route to completely different procedures. Without this, a skill that handles multiple operations becomes one giant file Claude has to navigate by itself.
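The routing table above can be sketched as data plus a dispatcher. Matching here is substring-based purely for illustration; in PAI the model itself reads the table in SKILL.md and decides where to route.

```typescript
// The in-skill routing table: trigger phrases map to workflow files.
const routes: Array<{ triggers: string[]; file: string }> = [
  { triggers: ["write a post"], file: "Workflows/Create.md" },
  { triggers: ["publish", "deploy"], file: "Workflows/Publish.md" },
  { triggers: ["create header image"], file: "Workflows/Header.md" },
];

// Dispatch a request to the first workflow whose trigger it contains.
function routeWorkflow(request: string): string | undefined {
  const r = request.toLowerCase();
  return routes.find((route) => route.triggers.some((t) => r.includes(t)))?.file;
}
```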

Layer 3 — The Personalization Layer

Every system skill in PAI checks for user overrides before executing:

```
~/.claude/skills/PAI/USER/SKILLCUSTOMIZATIONS/{SkillName}/
├── EXTEND.yaml        ← merge strategy (append | override | deep_merge)
└── PREFERENCES.md     ← user-specific behavior
```

The system skill stays generic and shareable. My preferences — color palettes for the Art skill, voice configurations for the Agents skill, output format defaults for Research — live separately and merge in at runtime. The skill author never needs to know about my preferences. I never need to fork the skill to add my own.
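The three merge strategies named in EXTEND.yaml can be sketched over plain objects. The key names and shapes below are invented for illustration, not PAI’s actual schema.

```typescript
type Strategy = "append" | "override" | "deep_merge";

// Merge user preferences into a skill's base configuration.
function mergePrefs(
  base: Record<string, unknown>,
  user: Record<string, unknown>,
  strategy: Strategy
): Record<string, unknown> {
  if (strategy === "override") return { ...base, ...user };
  if (strategy === "append") {
    // Concatenate arrays; replace everything else.
    const out: Record<string, unknown> = { ...base };
    for (const [k, v] of Object.entries(user)) {
      out[k] =
        Array.isArray(out[k]) && Array.isArray(v)
          ? [...(out[k] as unknown[]), ...v]
          : v;
    }
    return out;
  }
  // deep_merge: recurse into nested objects instead of replacing them wholesale.
  const out: Record<string, unknown> = { ...base };
  for (const [k, v] of Object.entries(user)) {
    const cur = out[k];
    out[k] =
      cur && v && typeof cur === "object" && typeof v === "object" &&
      !Array.isArray(cur) && !Array.isArray(v)
        ? mergePrefs(cur as Record<string, unknown>, v as Record<string, unknown>, "deep_merge")
        : v;
  }
  return out;
}
```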

Layer 4 — System vs Personal Skill Separation

PAI enforces a hard naming convention that determines portability:

  • TitleCase (Research, Browser, OracleHCM) = system skills, no personal data, shareable via PAI Packs
  • _ALLCAPS (_BLOGGING, _MAQINA) = personal skills, private by convention, never exported

Claude Code has no concept of skill visibility. PAI makes it structural.
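Because the convention is purely a naming rule, it can be checked mechanically. Here is a small sketch of such a check; the exact patterns PAI accepts may differ.

```typescript
// Classify a skill directory name by the visibility convention described above:
// TitleCase = shareable system skill, _ALLCAPS = private personal skill.
function skillVisibility(name: string): "system" | "personal" | "invalid" {
  if (/^_[A-Z][A-Z0-9]*$/.test(name)) return "personal";
  if (/^[A-Z][A-Za-z0-9]*$/.test(name)) return "system";
  return "invalid";
}
```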

Layer 5 — CLI Tooling Convention

Every PAI skill has a Tools/ directory. When a workflow needs to do something repeatable — generate an image, manage a server, sync a repository — it calls a TypeScript CLI tool instead of embedding logic in the workflow markdown itself.

Tools use #!/usr/bin/env bun, expose configuration via flags, and have .help.md documentation files. This keeps workflows simple (intent routing) and tools encapsulated (execution). You can test a tool independently of its workflow.
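A tool following that convention might look like the skeleton below. The `#!/usr/bin/env bun` shebang is omitted here so the sketch stays portable, and the flag names (`--format`, `--size`) are invented examples, not a real PAI tool’s interface.

```typescript
// Minimal flag parser: "--key value" pairs, with "true" for bare flags.
// Deliberately simplistic; a real tool would use a proper argument library.
function parseFlags(argv: string[]): Record<string, string> {
  const flags: Record<string, string> = {};
  for (let i = 0; i < argv.length; i++) {
    if (argv[i].startsWith("--")) {
      flags[argv[i].slice(2)] = argv[i + 1] ?? "true";
    }
  }
  return flags;
}

// The tool's logic lives here, not in the workflow markdown that calls it,
// so it can be tested independently of the workflow.
function run(argv: string[]): string {
  const flags = parseFlags(argv);
  const format = flags["format"] ?? "png";
  const size = flags["size"] ?? "1024x1024";
  return `generating ${size} ${format} image`;
}
```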

Layer 6 — AI-Powered Hooks

PAI runs 17 event hooks that fire at specific moments: session start, prompt submission, pre-tool, post-tool, and others. The most important one for response quality is FormatReminder — it runs AI inference on your raw prompt before Claude even starts responding, classifies the depth required (FULL / ITERATION / MINIMAL), and injects that classification as authoritative context.

This is hooks doing real work, not just shell scripts appending text to prompts.
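To show the shape of what FormatReminder produces, here is an illustrative stand-in. The real hook runs AI inference on the prompt; this version uses a keyword heuristic, and the keywords and output format are invented.

```typescript
type Depth = "FULL" | "ITERATION" | "MINIMAL";

// Heuristic stand-in for the AI classification step: guess the response
// depth a prompt calls for from surface cues.
function classifyDepth(prompt: string): Depth {
  const p = prompt.toLowerCase();
  if (/\b(quick|just|one-liner|yes or no)\b/.test(p)) return "MINIMAL";
  if (/\b(tweak|adjust|again|instead)\b/.test(p)) return "ITERATION";
  return "FULL";
}

// The hook's output: a classification injected as authoritative context
// before the model starts responding.
function formatReminder(prompt: string): string {
  return `[FormatReminder] depth=${classifyDepth(prompt)}`;
}
```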

Layer 7 — The Algorithm

Every response PAI generates runs through a 7-phase problem-solving framework: OBSERVE → THINK → PLAN → BUILD → EXECUTE → VERIFY → LEARN.

This isn’t decorative structure. The OBSERVE phase reverse-engineers your actual intent. The THINK phase selects capabilities and validates skill choices against the problem. The VERIFY phase uses TaskCreate/TaskUpdate to track measurable success criteria. The LEARN phase captures what to improve next time.

Skills feed into this framework — they’re not parallel to it. When a skill activates, it executes inside the Algorithm, with its results held accountable to the ISC criteria created in OBSERVE.
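The phase sequence can be sketched as a typed pipeline. The phase names come from the post; the state object and the no-op handlers are invented to show the structure, not PAI’s implementation.

```typescript
// The seven phases, in order, as a readonly tuple.
const PHASES = ["OBSERVE", "THINK", "PLAN", "BUILD", "EXECUTE", "VERIFY", "LEARN"] as const;
type Phase = (typeof PHASES)[number];

interface RunState {
  request: string;
  trace: Phase[]; // which phases have executed, in order
}

// Run every phase in sequence. In a real system each phase would transform
// the state (e.g. OBSERVE extracts intent, VERIFY checks success criteria);
// here each phase just records itself.
function runAlgorithm(request: string): RunState {
  const state: RunState = { request, trace: [] };
  for (const phase of PHASES) {
    state.trace.push(phase);
  }
  return state;
}
```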

Layer 8 — Agent Composition Patterns

PAI skills can spawn specialized subagents and compose them using named patterns:

| Pattern  | Shape                  | When                            |
|----------|------------------------|---------------------------------|
| Pipeline | A → B → C              | Sequential domain handoff       |
| TDD Loop | Engineer ↔ QATester    | Build-verify cycle              |
| Fan-out  | → [A, B, C]            | Multiple perspectives needed    |
| Gate     | A → check → B or retry | Quality gate before progression |

A skill that just loads into context can’t orchestrate parallel agents. A PAI skill that routes to a workflow that invokes a Fan-out pattern can research, build, and verify in parallel — with a spotcheck agent at the end synthesizing results.
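The Fan-out pattern is essentially parallel dispatch plus synthesis. Here is a sketch where plain async functions stand in for subagent invocations, and the synthesis step is reduced to a join.

```typescript
// A subagent, abstracted as an async function from task to result.
type Agent = (task: string) => Promise<string>;

// Fan-out: send the same task to every agent in parallel, then synthesize.
// A real spotcheck agent would evaluate and reconcile the results; here the
// "synthesis" is just concatenation.
async function fanOut(task: string, agents: Agent[]): Promise<string> {
  const results = await Promise.all(agents.map((agent) => agent(task)));
  return results.join(" | ");
}
```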

Layer 9 — Dynamic Loading

Large skills use deferred loading. Only the SKILL.md loads on invocation. Reference documents, API guides, and style specs load only when the specific workflow that needs them runs. This actively manages token budget rather than blowing it on context that might not be needed.


The Feature Gap in One Table

| Feature            | Claude Code (Native)        | PAI System                        |
|--------------------|-----------------------------|-----------------------------------|
| Skill discovery    | YAML description at startup | Same + USE WHEN intent parsing    |
| Sub-routing        | None                        | Workflow routing table            |
| Personalization    | None                        | SKILLCUSTOMIZATIONS layer         |
| Skill visibility   | All equal                   | System vs Personal convention     |
| Tooling            | None                        | TypeScript CLI tools              |
| Hooks              | Basic                       | 17 AI-powered hooks               |
| Response structure | Free-form                   | Algorithm (7 phases, ISC, verify) |
| Agents             | None                        | 15+ specialized subagent types    |
| Memory             | None                        | File-based cross-session memory   |
| Dynamic loading    | Full file loaded            | Context files on demand           |
| Portability        | No convention               | PAI Packs                         |

Why This Matters Practically

The single most useful shift in mental model: Claude Code skills are context. PAI skills are operational units.

When I ask my system to publish a blog post, the publishing skill doesn’t just remind Claude how publishing works. It dispatches to the Publish workflow, which runs image conversion, calls hugo, commits, pushes to GitHub, and triggers the Actions pipeline that deploys to Namecheap FTP — all as a structured procedure with steps that can fail, be verified, and be corrected independently.

That’s not context injection. That’s execution.

The 34 skills on my system aren’t 34 long markdown files. They’re 34 capabilities, each with their own routing logic, personalization layer, tooling, and agent integration. Claude Code’s mechanism made them possible. PAI’s framework made them reliable.


Where to Go from Here

If you’re new to PAI and want to understand the broader architecture this sits inside — the memory system, the agent tiers, how RAG ties everything together — the prior post RAG, Agents, and Skills: The Three Pillars Inside My Personal AI covers the full picture.

If you want to go deeper on the skill system itself, the canonical reference is ~/.claude/skills/PAI/SYSTEM/SKILLSYSTEM.md — it’s the document all skills are built against, and it explains every convention described here in precise detail.

PAI is open source at github.com/danielmiessler/PAI.