OpenClaw Core Concepts #6: What Are Skills? Teaching Your AI Agent How to Think, Not Just What to Do

If you’ve been following this series, you already know that Tools give your agent hands, eyes, and a voice. Tools are the organs — the raw capability gates that determine what an agent is physically allowed to do. But here’s the thing: giving someone a hammer doesn’t make them a carpenter. Knowing when to swing it, where to aim, and how hard to hit — that’s skill.

That’s exactly what Skills are in OpenClaw.

In the last article, we drew a sharp line between Tools and Skills:

  • Tools = organs / gates. They answer: can the agent do this?
  • Skills = textbooks / maps. They answer: does the agent know how to do this well?

This article is the deep dive into Skills — what they actually are, how they work under the hood, how to find and install them, how to write your own, and the non-negotiable security rules you should follow before you ever clawhub install something from a stranger.


Quick recap: where we are in the series

Before we go further, here’s the full roadmap of what we’ve covered:

| # | Topic | What it answers |
| --- | --- | --- |
| #1 | Gateway | What’s the central nervous system? |
| #2 | Agentic Loop | How does the agent actually run? |
| #3 | Memory | How does the agent remember things? |
| #4 | Agent | What is the agent, at its core? |
| #5 | Tools | What can the agent physically do? |
| #6 | Skills | How does the agent know how to do it well? |

Skills are where OpenClaw stops feeling like a framework you configured and starts feeling like a colleague you trained.


1. What a Skill actually is (and isn’t)

Let’s kill the misconception first.

A Skill is not code. It’s not a plugin. It’s not an API endpoint. It’s not a function your agent calls.

A Skill is a Markdown document — specifically a file called SKILL.md — that lives in your workspace’s skills/ directory and gets loaded into the agent’s system prompt context at the start of each session. It tells the model: “When you encounter task X, here’s the step-by-step approach you should follow, the tools you should reach for, the edge cases to watch out for, and the output format the user expects.”

Think of it like this:

  • Without a skill: your agent is a brilliant generalist who figures things out on the fly. Smart, but inconsistent.
  • With a skill: your agent is a trained specialist who follows a proven playbook every single time. Consistent, reliable, predictable.

The difference isn’t intelligence. It’s institutional knowledge — and Skills are how you package that knowledge into something reusable, shareable, and version-controlled.


2. The anatomy of a SKILL.md file

Every skill is a versioned bundle. At its heart is the SKILL.md file, which follows a straightforward structure:

---
name: daily-briefing
version: 1.2.0
description: Compiles a personalized morning briefing from calendar, weather, and news sources
requires:
  tools:
    - web_search
    - web_fetch
    - message
  env:
    - OPENWEATHER_API_KEY
tags: [productivity, morning, automation]
---

# Daily Briefing Skill

## When to use this skill
Trigger this skill when the user asks for a "morning briefing", "daily summary", or when
the heartbeat cron fires at the configured morning time.

## Step-by-step workflow
1. Fetch today's weather using `web_fetch` with the OpenWeather API
2. Search for top news headlines using `web_search` with the user's configured topics
3. Pull today's calendar events from memory (key: `calendar_today`)
4. Compose a structured briefing in this exact format: [...]
5. Send via `message` to the user's primary channel

## Edge cases
- If weather API fails: note the outage, skip weather section, continue with news
- If no calendar events: omit that section entirely rather than saying "no events"
- Never summarize news items longer than 2 sentences

## Output format
Use this template exactly: [...]

Notice a few key things:

The requires block is a contract, not a suggestion. It tells OpenClaw which tools the skill needs to function, and which environment variables must be set. If requirements aren’t met, the skill shows up in the registry but is marked as not eligible — the agent simply won’t attempt to invoke it until the prerequisites are satisfied.

The body is pure natural language — no code, no schemas, no DSL. The LLM reads it and follows it. That’s the whole trick. Skills work because modern LLMs are genuinely good at following structured instructions when those instructions are precise and unambiguous.

The description field is critical for matching. When a message comes in, OpenClaw matches it against active skill descriptions to decide which skill (if any) to invoke. A vague description produces inconsistent activation. A precise description produces reliable behavior. Think of the description as the skill’s “trigger signature.”
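To build intuition for why description precision matters, here is a toy matcher that scores an incoming message against skill descriptions by word overlap. This is a deliberately simplified sketch, not OpenClaw's actual matching logic; the skill names and descriptions are hypothetical.

```python
def match_skill(message, skills):
    """Pick the skill whose description shares the most words with the message.

    `skills` maps skill name -> description. Token overlap is a crude stand-in
    for real matching, but it shows why a vague description activates
    unreliably: it shares few distinctive words with the messages it should
    catch, and too many generic words with messages it shouldn't.
    """
    msg_tokens = set(message.lower().split())
    best_name, best_score = None, 0
    for name, description in skills.items():
        score = len(msg_tokens & set(description.lower().split()))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical roster entries (name -> description)
skills = {
    "daily-briefing": "compiles a morning briefing from calendar weather and news",
    "pr-reviewer": "reviews open github pull requests and writes a report",
}
print(match_skill("give me my morning briefing", skills))  # daily-briefing
```

A description made of generic words ("handles productivity tasks") would score weakly and inconsistently against almost everything, which is exactly the failure mode described above.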


3. How skills get loaded into the agentic loop

Here’s something many people miss: Skills don’t run separately from the Agentic Loop. They inform it.

When the Gateway starts a new session, it does a skills snapshot: it scans the skills/ directories in priority order, checks each skill’s requires declarations against the current environment, and builds a skills roster — a compact list of eligible skill names and descriptions.

That roster gets injected into the system prompt alongside the agent’s bootstrap context files (SOUL.md, IDENTITY.md, HEARTBEAT.md — all covered in #4). The model now knows which skills are available and what each one does.

Then, when model inference begins and the model decides a skill is relevant to the current task, OpenClaw loads the full SKILL.md content into context on demand. The model reads it, follows the instructions, and executes using the tools it already has access to.

This is the elegant design insight: Skills never grant new tool permissions. They only tell the model how to use the tools it already has. The permission boundary is always enforced at the Tools layer. Skills work within that boundary — never outside it.

The full flow:

Gateway starts session
→ Scans skills/ directories (workspace > managed > bundled)
→ Checks each skill's requires block
→ Builds eligible skills roster
→ Injects roster into system prompt

User message arrives
→ Model reads message + roster
→ Model decides: "daily-briefing skill applies here"
→ Full SKILL.md content loaded into context
→ Model follows skill instructions using available tools
→ Response streamed back
→ Memory persistence (if skill specifies)

One important operational note: skills are snapshotted at Gateway startup. If you install a new skill mid-session, it won’t be active until you restart the Gateway. This is by design — it keeps behavior deterministic within a session and prevents mid-task context shifts.
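The eligibility check in the snapshot step can be sketched in a few lines. This is an illustrative model of the requires gate, not OpenClaw's source code; the metadata dict mirrors a skill's parsed frontmatter.

```python
import os

def is_eligible(skill_meta, available_tools, env=os.environ):
    """Return True when every declared requirement of a skill is satisfied.

    `skill_meta` mirrors the parsed YAML frontmatter of a SKILL.md:
    the requires block lists tool names and environment variable names.
    """
    requires = skill_meta.get("requires", {})
    tools_ok = all(t in available_tools for t in requires.get("tools", []))
    env_ok = all(v in env for v in requires.get("env", []))
    return tools_ok and env_ok

briefing = {
    "name": "daily-briefing",
    "requires": {"tools": ["web_search", "web_fetch", "message"],
                 "env": ["OPENWEATHER_API_KEY"]},
}

# Build the roster: only skills whose requirements are met get listed
roster = [s["name"] for s in [briefing]
          if is_eligible(s, {"web_search", "web_fetch", "message"},
                         env={"OPENWEATHER_API_KEY": "set"})]
print(roster)  # ['daily-briefing']
```

Remove `OPENWEATHER_API_KEY` from the environment (or `web_fetch` from the tool set) and the skill silently drops out of the roster, which matches the behavior described above.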


4. Three places skills live (and why the order matters)

OpenClaw loads skills from three locations, in this priority order:

| Priority | Location | Type | Notes |
| --- | --- | --- | --- |
| 1 (highest) | `<workspace>/skills/` | Workspace skills | Per-workspace; override everything |
| 2 | `~/.openclaw/skills/` | Managed skills | Shared across all agents on the machine |
| 3 (lowest) | Built-in | Bundled skills | Ship with OpenClaw; always available |

If two skills share the same name, the higher-priority one wins. This is how you override a bundled skill’s behavior without touching OpenClaw’s core files — just drop a skill with the same name into your workspace’s skills/ folder. Your version takes precedence.

Workspace-level skills also make per-agent scoping possible. Because different agents can point at different workspace directories, you can give your research agent a different skill set than your DevOps agent, without any global configuration changes.
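The override behavior is essentially a priority merge: walk the tiers from highest to lowest and keep the first skill seen under each name. A minimal sketch, with each tier represented as a plain dict of skill name to source label:

```python
def resolve_skills(*tiers):
    """Merge skill tiers so earlier (higher-priority) tiers win on name clashes.

    Call order mirrors OpenClaw's precedence: workspace > managed > bundled.
    Illustrative only; real resolution operates on directories, not dicts.
    """
    resolved = {}
    for tier in tiers:                         # highest priority first
        for name, source in tier.items():
            resolved.setdefault(name, source)  # first (highest-priority) wins
    return resolved

workspace = {"daily-briefing": "workspace"}
managed   = {"daily-briefing": "managed", "summarize": "managed"}
bundled   = {"summarize": "bundled", "weather": "bundled"}

print(resolve_skills(workspace, managed, bundled))
# {'daily-briefing': 'workspace', 'summarize': 'managed', 'weather': 'bundled'}
```

Note how the workspace copy of daily-briefing shadows the managed one, while names with no higher-tier counterpart pass through untouched.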


5. ClawHub: the npm for AI agent skills

You don’t have to write every skill yourself. That’s what ClawHub (clawhub.ai) is for.

ClawHub is the public skill registry for OpenClaw — think of it the way you think of npm for Node.js, or VS Code’s extension marketplace for your editor. Developers publish reusable skill packages; anyone can search, install, and update them from the command line with a single command.

The numbers are significant: ClawHub crossed 13,000+ published skills by early 2026, with new ones published daily. The registry uses vector search powered by embeddings — not brittle keyword matching — so you can find skills by describing what you want in plain English rather than guessing the exact package name.

The two ways to interact with ClawHub

Via native OpenClaw commands (recommended for day-to-day use):

# Search for skills by natural language
openclaw skills search "morning briefing from calendar and news"

# Install a skill into your active workspace
openclaw skills install daily-briefing

# Update all installed skills
openclaw skills update --all

# List what's currently installed
openclaw skills list

Via the standalone clawhub CLI (recommended for publishing, auth workflows, and scripting):

# Install globally
npm i -g clawhub

# Authenticate (required to publish)
clawhub login

# Search and inspect before installing
clawhub search "github PR review"
clawhub info gitgoodordietrying/pr-reviewer

# Install a specific version
clawhub install gitgoodordietrying/pr-reviewer --version 2.1.0

# Publish your own skill
clawhub publish ./my-skills/daily-briefing

# Update everything
clawhub update --all

The native OpenClaw commands install into your active workspace and persist source metadata so later update calls stay anchored to ClawHub. The standalone CLI is what you reach for when you need registry authentication, publishing, or CI pipeline automation.

A note on skill discovery beyond ClawHub

If you find a skill on GitHub that hasn’t been published to ClawHub yet, you can paste the repository URL directly into your conversation with the agent:

Install this skill: https://github.com/some-user/some-openclaw-skill

OpenClaw recognizes the pattern, downloads the skill, and confirms installation. This works for any public GitHub repository containing a valid SKILL.md at its root. Convenient — but read the security section before you do this with anything from an unknown author.
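For illustration, URL recognition of this kind might look like the following. The regex is a hypothetical approximation of the described behavior; OpenClaw's actual detection logic is not documented here and may accept more URL forms.

```python
import re

# Hypothetical pattern for a plain public GitHub repository URL,
# optionally ending in ".git" or a trailing slash.
GITHUB_REPO = re.compile(r"^https://github\.com/([\w.-]+)/([\w.-]+?)(?:\.git)?/?$")

def parse_skill_repo(url):
    """Return (owner, repo) for a GitHub repository URL, else None."""
    m = GITHUB_REPO.match(url.strip())
    return m.groups() if m else None

print(parse_skill_repo("https://github.com/some-user/some-openclaw-skill"))
# ('some-user', 'some-openclaw-skill')
```

The point of the sketch: the conversational install path is triggered by nothing more than a URL shape, which is exactly why the security caveat below applies.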


6. Writing your own skills

Writing a skill is one of the highest-leverage things you can do with OpenClaw. A well-written skill for a recurring task you do daily can save hours per week — and once it’s written, it’s portable, shareable, and version-controlled.

Here are the rules for writing skills that actually work:

Rule 1: Treat SKILL.md like a recipe for a very literal cook

The LLM executes what it reads. Ambiguous instructions produce inconsistent behavior. Be specific about:

  • When to trigger the skill (what phrases, what contexts)
  • Step-by-step workflow in explicit numbered sequence
  • Which tools to use at each step
  • Edge cases and how to handle them
  • Exact output format — don’t leave this to the model’s judgment

Rule 2: Write the description like it’s a matching rule

The description field in your frontmatter is how OpenClaw decides whether to invoke your skill. It should be specific enough to match your intended use cases and narrow enough not to fire on unrelated requests. Bad: "Handles productivity tasks". Good: "Compiles a morning briefing from calendar events, weather, and configured news sources on user request or scheduled cron trigger".

Rule 3: Declare your requirements honestly

Every tool and env var your skill needs should be in the requires block. This isn’t just documentation — OpenClaw uses it to gate skill eligibility. If you declare a dependency on web_search but the agent’s tool profile doesn’t include that group, the skill won’t activate. This is a feature, not a bug. It prevents your skill from running in a degraded state and failing mysteriously mid-task.

Rule 4: Keep skills focused

A skill should do one thing exceptionally well, not ten things passably. Skills that try to cover too many scenarios produce bloated prompts and inconsistent behavior. Split broad workflows into multiple focused skills and let the model compose them.

A minimal skill template to start from:

---
name: your-skill-name
version: 1.0.0
description: One precise sentence describing when this skill activates and what it does
requires:
  tools:
    - fs          # if you need file read/write
    - web_search  # if you need search
  env: []
tags: [your-tag, another-tag]
---

# Your Skill Name

## Purpose
One paragraph explaining what this skill does and why it exists.

## Trigger conditions
- User asks for [specific thing]
- Heartbeat fires with [specific condition]
- [Other explicit triggers]

## Workflow
1. [First step — be specific about which tool and how]
2. [Second step]
3. [Third step]
4. [...]

## Edge cases
- If [X happens]: [do Y, not Z]
- If [API fails]: [fallback behavior]

## Output format
[Exact template or format spec]

## Rules
- [Hard constraints the agent must never violate]
- [Example: never delete files without explicit confirmation]

7. Skills vs. SOUL.md: not the same thing

This confuses a lot of new users. Let’s settle it definitively.

|  | SOUL.md | Skills |
| --- | --- | --- |
| What it defines | The agent’s core identity — personality, values, communication style, ethical rules | Task-specific workflows and domain knowledge |
| When it’s active | Always — every session, every task | On demand — only when relevant to the current task |
| How it loads | Always injected into system prompt | Roster injected at startup; full content loaded on match |
| What happens if you delete it | The agent loses its personality and baseline behavior | Specific capabilities disappear; core behavior unchanged |
| Analogy | Who you are | What you’ve learned to do |

SOUL.md is covered in depth in #4. The short version: think of SOUL.md as the agent’s character, and Skills as the agent’s résumé. Character is always on; résumé items get cited when relevant.


8. Security: the part most people skip until something goes wrong

Skills are powerful precisely because they inject arbitrary text into your agent’s context and influence how it uses real tools with real side effects. That makes them a meaningful attack surface — and one that’s easy to underestimate because “it’s just a Markdown file.”

It’s not just a Markdown file. It’s instructions that run.

The threat model

Prompt injection via skill content. A malicious SKILL.md can include hidden instructions that override or subvert your agent’s normal behavior — redirecting outputs, exfiltrating data via tool calls, or suppressing the agent’s safety rules. Because the skill content is loaded directly into context, a carefully crafted attack can be difficult for the model to distinguish from legitimate instructions.

Supply chain poisoning. ClawHub is an open registry — anyone with a GitHub account older than one week can publish. The awesome-openclaw-skills community list alone tracks 5,400+ curated skills, filtered down from 13,000+ in the registry. The long tail contains plenty of unreviewed packages.

Dependency confusion. A skill that calls external APIs, shell scripts, or other binaries can be a vector for executing arbitrary code — especially if the agent is running with group:runtime permissions enabled.

What ClawHub does to help

ClawHub runs automated security analysis on every published skill. Each skill page shows the results, including flagged behaviors like network requests, file system writes, and credential handling. Skills flagged as “suspicious” are auto-hidden for review after multiple community reports.

OpenClaw also has a VirusTotal partnership that provides security scanning for skills — you can visit a skill’s page on ClawHub and check the VirusTotal report before installing. This was introduced after a series of supply chain incidents in the early days of the ecosystem, and it meaningfully raises the floor on what gets installed without scrutiny.

Your non-negotiable pre-install checklist

Before you clawhub install anything that isn’t a bundled skill or a package you wrote yourself:

  1. Read the SKILL.md in full. Yes, the whole thing. It’s usually under 200 lines. If it’s not, that’s a flag. Look for anything that instructs the agent to exfiltrate data, override rules, or make unexplained external calls.
  2. Check the security analysis tab on ClawHub. Look for a “benign” rating. Any flagged behaviors should have an obvious, legitimate explanation given the skill’s stated purpose.
  3. Check the VirusTotal report if the skill is newer or from an unknown author.
  4. Review the author’s history. Does the publisher have other well-regarded skills with community engagement (stars, comments, install counts)? A first-ever published skill from a brand-new account is higher risk.
  5. Check the requires block for overly broad tool requests. A skill that claims to need group:runtime (shell execution) for something that logically only needs group:fs (file read/write) is suspicious. Question why.
  6. Test in a restricted workspace first. Before enabling a new skill on your main agent with full tool permissions, run it in a sandboxed workspace with a minimal tool profile. Watch what it actually does.
  7. Never install skills via direct GitHub URL from unknown sources without reading the repository first. The conversational install path is convenient; it’s not a security shortcut.

9. Managing skills across agents: scoping and per-agent control

One of the cleanest patterns in OpenClaw is using the three-tier skill loading hierarchy to give different agents different capabilities without duplicating configuration.

Example: a two-agent household

~/.openclaw/skills/                    ← shared base skills
  weather.SKILL.md
  summarize.SKILL.md

~/workspaces/research-agent/skills/    ← research-specific overrides
  web-research-workflow.SKILL.md
  citation-formatter.SKILL.md

~/workspaces/devops-agent/skills/      ← devops-specific overrides
  deployment-checklist.SKILL.md
  incident-response.SKILL.md

The research agent and the DevOps agent share the managed-level skills (weather, summarize) but each has its own workspace-level specializations. Neither can access the other’s workspace skills. The separation is clean and requires no complex permission configuration — just directory structure.

Pair this with per-agent tool profiles (covered in #5) and you have a system where:

  • The research agent has `web_search`, `web_fetch`, and `fs` tools — enough to research and write
  • The DevOps agent additionally has group:runtime — enough to run deployment scripts
  • Each agent’s skills are tailored to its actual role
  • Skills that require tools the agent doesn’t have are automatically ineligible — they simply don’t appear in that agent’s roster

This is minimum viable access for skills, applied properly. The agent can only do what its tools allow, and its skills only activate when they’re appropriate for the task at hand.


10. The context budget problem (and how to stay out of trouble)

Skills load into your agent’s context window. Context is finite — and every byte of skill content loaded is a byte not available for conversation history, memory, and tool results.

Here’s the thing: OpenClaw loads skills on demand, not all at once. The roster (names + descriptions) is always in context; the full SKILL.md body only loads when the model decides a skill is relevant. This keeps baseline context usage low.

But this design only works if you keep your skills focused. A 5,000-word SKILL.md that tries to cover every conceivable scenario will consume significant context budget every time it activates. Best practice:

  • Keep individual skills under ~300 lines. Enough for genuine complexity; not so much that you’re bloating context.
  • Split complex workflows into multiple skills. A three-step research workflow is better as `research-search`, `research-synthesize`, and `research-format` than one monolithic `research-everything` skill.
  • Audit your installed skills periodically. Run clawhub list --verbose and uninstall skills you haven’t used in weeks. Roster bloat slows matching and increases the chance of false-positive activations.
  • Use the tags field. Tags help ClawHub search and community discovery — but they also signal to you, six months later, why you installed a skill and whether it’s still relevant.
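A quick way to apply the ~300-line heuristic yourself is a small audit script. This sketch assumes a `skills/<name>/SKILL.md` directory layout; adjust the glob pattern if your layout differs.

```python
from pathlib import Path

def audit_skills(skills_dir, max_lines=300):
    """List skills whose SKILL.md exceeds the suggested line budget.

    Returns (skill_name, line_count) pairs, largest first, for every
    skill over `max_lines`. A purely local heuristic check, not an
    OpenClaw feature.
    """
    oversized = []
    for path in Path(skills_dir).glob("*/SKILL.md"):
        lines = path.read_text(encoding="utf-8").count("\n") + 1
        if lines > max_lines:
            oversized.append((path.parent.name, lines))
    return sorted(oversized, key=lambda item: -item[1])
```

Run it against your workspace `skills/` directory periodically; anything it flags is a candidate for splitting into smaller, more focused skills.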

11. End-to-end example: a skill doing real work

Let’s make this concrete. Suppose you’ve installed a skill called pr-review-workflow that your team uses to standardize how the agent reviews pull requests.

What happens when you type: "Review the open PRs in the backend repo"

  1. Gateway session starts → skills roster built → pr-review-workflow is eligible (requires web_fetch and fs, both available in this agent’s tool profile)
  2. Model sees the roster → matches your message to pr-review-workflow based on its description: “Fetches open PRs from a configured GitHub repository, reviews diff content, and produces a structured review report”
  3. Full SKILL.md loads into context → model reads the workflow: fetch PR list via web_fetch to GitHub API → for each PR, fetch the diff → analyze against the team’s review checklist (loaded from IDENTITY.md) → write structured report to pr-reviews/ directory via fs:write
  4. Three tool calls, one iteration:
    • web_fetch → GitHub API → PR list
    • web_fetch → diff content for each open PR
    • fs:write → pr-reviews/2026-04-10.md
  5. Response streams back with a summary of findings, link to the written report, and flags for anything requiring human decision
  6. Memory update → session notes that PR review was run today → prevents duplicate runs in the heartbeat loop

Total wall time: under a minute. Consistent output format every time. No ad-hoc improvisation. That’s what a well-written skill buys you.


The three-sentence summary

A Skill is a Markdown playbook that teaches your agent how to handle a specific task consistently, using the tools it already has permission to access. Skills are loaded on demand into the agent’s context, never grant new permissions, and are managed through ClawHub — OpenClaw’s versioned, searchable public registry. Write skills that are focused, precise, and honest about their requirements; treat every third-party skill as untrusted until you’ve read it — because it will run.


What’s next

We’ve now covered the full core stack: Gateway → Agentic Loop → Memory → Agent → Tools → Skills. In #7, we’ll go deeper into Plugins — the layer that goes beyond Markdown and lets the community extend OpenClaw with actual compiled code: new channels, new model providers, new tools that don’t exist in the built-in set. If Skills are the textbooks, Plugins are the new organs — and the security considerations are a full order of magnitude more serious.



References

  1. OpenClaw Official Docs — Skills https://docs.openclaw.ai/tools/skills
  2. OpenClaw Official Docs — Creating Skills https://docs.openclaw.ai/tools/creating-skills
  3. OpenClaw Official Docs — Skills Config https://docs.openclaw.ai/tools/skills-config
  4. OpenClaw Official Docs — ClawHub https://docs.openclaw.ai/tools/clawhub
