You keep hearing three words: MCP, skills, and plugins. Sometimes used interchangeably. Sometimes explained in ways that make them sound identical. They are not identical. They solve fundamentally different problems, and understanding the boundaries will save you hours of confusion.

Here is the short version. MCP is plumbing. It connects your AI to external tools and data through a structured interface. Skills are expertise. They teach your AI how to think about a task and use tools on its own. Plugins are platform extensions. They add entirely new capabilities to the software your AI runs on.

MCP came first and solved a real problem. But skills were designed for a different era of AI, one where AI can reason, use tools on its own, and keep working in loops until a task is done. Understanding that shift is key to knowing which one to reach for.

How we got here

In late 2024, Anthropic (the company behind Claude) released the Model Context Protocol (MCP). The problem it solved was real: every AI tool had its own custom way of connecting to services like GitHub (where developers store code), Slack (team messaging), databases, and everything else. MCP created one standard way to connect them all. Build one connector, and any AI tool that supports MCP can use it.

At the time, AI was a chatbot. It could answer questions and generate text, but it could not actually do things on your computer. It needed every tool spelled out in advance: here is the tool name, here is what it does, here are the exact parameters you can pass. MCP was the right solution for that moment.

Then everything changed. In early 2025, Anthropic released Claude Code, an AI that does not just chat. It can read files on your computer, write code, run commands, and keep working on its own until a task is done. Other companies followed: OpenAI (the company behind ChatGPT) released Codex CLI, Google released Gemini CLI, and tools like Cursor and GitHub Copilot gained similar capabilities. AI went from something you talk to, to something that works for you. This is what people now call agentic AI, or simply agents: AI that can take actions, use tools, and complete tasks on its own. And an AI that can act on its own does not need every tool spelled out in advance. It can figure out how to use tools if you just teach it the workflow.

That is exactly what skills do. Anthropic shipped Skills in Claude Code in October 2025. Instead of giving the AI a formal tool description, a skill teaches it how to use tools on its own. A skill is just a text file with instructions. No running processes, no protocol, no infrastructure.

Simon Willison, a well-known developer and writer in the AI space, immediately called them “awesome, maybe a bigger deal than MCP.” His reasoning: skills are elegantly minimal, they use almost no resources until actually needed (versus MCP, which loads everything upfront), and they work across every major AI tool. He predicted “a Cambrian explosion in Skills which will make this year’s MCP rush look pedestrian by comparison.”

Two months later, Anthropic released skills as an open standard. The same skill file now works across Claude Code, Codex CLI, Gemini CLI, Cursor, GitHub Copilot, Windsurf, Cline, Amp, and ChatGPT. Write once, use everywhere.

Around the same time, Anthropic donated MCP to a neutral industry body, the Agentic AI Foundation under the Linux Foundation, co-founded with Block (the payments company behind Square and Cash App) and OpenAI. MCP is now an industry standard backed by Amazon, Google, Microsoft, and Cloudflare. It is not going away. But the question of whether you actually need it for most tasks has a different answer than it did a year ago.

MCP: the connector layer

The Model Context Protocol (MCP) is a standard way to connect AI applications to external tools and data. Think of it like USB-C for AI. Before USB-C, every device had its own charger. Before MCP, every AI tool needed its own custom connector for GitHub, Slack, your database, and everything else. MCP created one universal plug.

How it works

The setup has two sides. Your AI tool (Claude Desktop, Cursor, VS Code) is one side. On the other side is a small program, called an MCP connector, that knows how to talk to a specific service. Want your AI to manage GitHub? Run a GitHub MCP connector. Want it to work with a database? Run a database MCP connector.

Each connector tells the AI what it can do: “I can create pull requests,” “I can search issues,” “I can run database queries.” The AI sees these capabilities and can use them when relevant.
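In practice, you register a connector in your AI tool's configuration file. Here is a minimal sketch in the format Claude Desktop uses; the file location and connector package vary by client, and the token placeholder is yours to fill in:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

Once registered, the AI tool launches the connector as a background process and asks it what capabilities it offers.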

What connectors expose   Who controls it                   Example
--------------------------------------------------------------------------------------
Tools                    The AI decides when to use them   "Create a pull request," "Search issues"
Resources                The application provides them     Files, database records, documents
Prompts                  The user triggers them            Pre-written templates for common tasks
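Each tool a connector exposes is described with a name, a description, and a JSON Schema for its inputs. A hypothetical descriptor for a pull-request tool might look like this (the field values are illustrative):

```json
{
  "name": "create_pull_request",
  "description": "Create a pull request in a GitHub repository",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": { "type": "string", "description": "owner/name" },
      "title": { "type": "string" },
      "base": { "type": "string" }
    },
    "required": ["repo", "title"]
  }
}
```

These descriptors are what the AI reads to decide when and how to call a tool, and every connected tool contributes one to the AI's memory.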

The ecosystem

MCP has massive adoption: 97 million monthly downloads, 10,000+ connectors, and governance by the Linux Foundation. The most popular ones give you a sense of the range: Playwright MCP by Microsoft for browser automation, Firecrawl for turning websites into clean text, connectors for GitHub, Notion, Brave Search, PostgreSQL, and even Blender for 3D modeling. Services like Composio bundle 250+ integrations into a single connector.

The problem: MCP was built for chatbots

MCP was designed when AI needed every tool spelled out in advance. It solved the right problem at the right time. But the world has changed.

Every MCP connector you add loads its full list of capabilities into the AI’s memory. Connect 10 connectors with 5 tools each and you have burned through 50 tool descriptions before your conversation even starts. That is memory the AI could be using for your actual task.

Armin Ronacher (creator of Flask, a popular web framework) described the problem well: tool descriptions end up “too long to eagerly load, too short to really tell the AI how to use it.” He now prefers having AI manage its own tools through skills because he retains control over them.

MCP has improved. A January 2026 update made the initial memory cost much smaller. But the fundamental design remains: you need a running program for each service you connect, a layer of translation between the AI and the tool, and descriptions that compete for space with your actual conversation.

More importantly, most MCP connectors are just wrapping tools that already exist. A GitHub MCP connector wraps gh (GitHub’s own tool). A Postgres MCP connector wraps psql (the database’s own tool). That is an extra layer between the AI and the tool, when the AI could just use the tool directly.

Where MCP still makes sense

MCP is not useless. It has real value when:

  • The AI can only chat, like in Claude Desktop or other chat-only apps where it cannot run commands
  • There is no existing tool for the service you need to connect to
  • Compliance requires it, in corporate environments that need formal audit trails
  • You do not want the AI running commands, for security reasons

But for most hands-on work, local automation, and anything where the AI can act on its own, something better has arrived.

Skills: built for the agentic era

Skills start from a different question. MCP asks “how do we connect the AI to tools?” Skills ask “how do we teach the AI to use the tools it already has?”

A skill is a set of instructions that turns a general-purpose AI into a specialist. It does not connect the AI to anything new. It teaches the AI how to use what it already has access to, including every tool and program already installed on the computer it runs on.

---
name: weekly-summary
description: Summarize team notes into highlights, decisions, and risks
allowed-tools: Read, Grep, Glob
---

When asked to summarize weekly notes:
1. Read all files matching `notes/*.md`
2. Extract highlights, decisions, and open risks
3. Format as a structured summary with sections for each
4. Keep it under 500 words

That is a complete skill. When the AI reads it, it gains expertise it did not have before. It knows the workflow, the constraints, the expected output format. No running programs. No connections to manage. Just a text file.

How skills work

When you install a skill, it sits as a file in your project. The AI reads the skill’s name and description at the start of a session (this costs almost nothing). When the skill is needed, either because you ask for it or the AI recognizes the situation, the full instructions are loaded. That is it.

Skills have zero infrastructure cost. They are text files. They load instantly. They work offline. They work across every major AI tool.

What makes skills different from MCP

Modern AI can run commands on its own. It can use curl (to fetch data from the web), grep (to search through files), git (to manage code), docker (to run software), and every other tool available on the computer. A skill teaches the AI when and how to use those tools for a specific task. And crucially, skills enable something MCP cannot: autonomous feedback loops.

Consider SquirrelScan’s SEO skill. It instructs the AI to audit a website for search engine optimization, read the results, fix the issues it finds, then re-audit in a loop until the score improves. The AI reasons about the output, decides what to fix, makes the changes, and validates the result. MCP gives AI a button to press. A skill gives AI a workflow to execute, iterate on, and complete on its own.

This pattern works for almost anything. Need to interact with GitHub? There is a tool for that. Databases? There is a tool for that. Web APIs? There is a tool for that. The tools already exist. Skills just teach the AI how to use them.
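To make the loop pattern concrete, here is a hedged sketch of a skill in the same format as the example above. The skill name is hypothetical, and `npm test` stands in for whatever test command your project actually uses:

```markdown
---
name: fix-failing-tests
description: Run the test suite, fix failures, and repeat until green
allowed-tools: Bash, Read, Edit
---

When asked to fix failing tests:
1. Run `npm test` and read the output
2. For each failure, open the relevant file and fix the cause
3. Run `npm test` again
4. Repeat until all tests pass, or stop after 5 rounds and report what remains
```

No connector wraps the test runner. The skill simply tells the AI which existing tool to run, how to interpret the output, and when to stop.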

The most striking example so far: in February 2026, Hugging Face (a major AI platform and community) released a skill that teaches AI to write high-performance GPU code. A short text file with structured guidance, plus reference scripts and optimization guides. The result: AI generates working code that makes other AI models run nearly twice as fast on specialized hardware. Deep technical expertise encoded as a text file, producing real performance gains. No MCP connector involved.

As @chrysb put it on X: “on-demand skills is all you need. It can run any command invented since the beginning of computers. A resurgence of glory for ancient, but powerful tools. Now available to us all, piloted by the smartest models on earth.”

The common defenses of MCP, and their counters:

  • “MCP gives you structured, validated inputs”: So does a well-documented tool.
  • “MCP gives you explicit permissions”: So does a controlled environment with an allowlist.
  • “MCP is a standard”: A standard that scales poorly is still a standard that scales poorly.

Works everywhere

The same skill file works across Claude Code, OpenAI Codex CLI, Google Gemini CLI, Cursor, GitHub Copilot, Windsurf, Cline, Amp, and ChatGPT. Write once, use everywhere. MCP promised the same, but skills achieve it with nothing to install or run.

The ecosystem

The ecosystem is growing fast. Vercel (a major web platform) launched skills.sh in January 2026 as an open directory for discovering and installing skills, reaching tens of thousands of installs shortly after launch. Over 52,000 skills are now available across multiple marketplaces: SkillsMP, AgentSkills.to, skill0, Skills Directory, and others. Hugging Face hosts skills too.

The most installed skills show what people actually use them for: React best practices by Vercel (166k installs), web design guidelines (127k), Remotion video rendering (111k), frontend design by Anthropic (99k), and Azure cloud skills by Microsoft (53k each). Domain-specific skills like Hugging Face CUDA kernels, Stripe payment integration, Trail of Bits security auditing, and SquirrelScan SEO round out the picture.

A word of caution: growth has outpaced quality. A Hugging Face analysis of the skills marketplace found that 46% of listed skills are duplicates or near-duplicates, many are bloated far beyond what an AI can reasonably use, and 9% pose critical security risks. Academic research confirms this: SkillsBench, the first large-scale benchmark for agent skills, found that well-designed skills improve AI performance by 16 percentage points on average, with some domains seeing gains over 50 points. But skills that were auto-generated by the AI itself provided no benefit at all. The takeaway is clear: the best skills are written by people who understand the domain and the workflow, not generated in bulk by AI. Quality over quantity. One command to install a good one:

npx skills add openclaw-rocks/skills --skill jobs-ive

When to use skills

  • You want your AI to behave consistently (follow your coding style, your review process, your writing voice)
  • You want to give your AI domain expertise (product strategy, security auditing, SEO optimization)
  • You want your AI to work in loops (check something, fix it, check again until it meets a standard)
  • You want the same skill to work across different AI tools
  • You want zero setup and zero running costs

If your problem is “my AI does not know how to do X properly,” a skill is almost certainly the right answer.

OpenClaw plugins: the platform layer

OpenClaw (the open-source AI assistant platform) has three ways to extend what it can do: tools, skills, and plugins. Skills in OpenClaw work exactly like the standard described above. But plugins are something different entirely.

An OpenClaw plugin is actual code that runs inside the OpenClaw software itself. Where skills teach the AI what to do, and MCP connects it to external services, plugins change what the platform can do at a fundamental level. They can add new messaging channels (so your AI can talk to you on Telegram, WhatsApp, or Teams), add new features to the platform, run background tasks, and bundle their own skills and tools.

OpenClaw’s creator Peter Steinberger deliberately chose not to add native MCP support. As he put it: “there’s a reason I didn’t add MCP support to OpenClaw (except via mcporter mcp-to-cli converter).” The philosophy is clear: skills are the primary way to extend what your AI can do. MCP is available as a fallback through mcporter (which converts MCP connectors into regular tools) when you genuinely need it.

The official plugins cover the most common needs: voice calls, Microsoft Teams, Matrix messaging, and Nostr (a decentralized social protocol). On ClawHub (the community marketplace), over 5,700 community-built skills and plugins span everything from AI model switching to Kubernetes management to SEO auditing. The most active categories are AI/LLM integration, search and research, DevOps, and web development.

What plugins can do

The scope of what a plugin can register is broad:

  • Messaging channels: Connect your AI to Telegram, Matrix, Zalo, MS Teams, and more
  • Custom tools: Give your AI new abilities beyond what skills can teach
  • Commands: Add new commands to the OpenClaw command-line interface
  • API endpoints: Extend what the OpenClaw server can do
  • Background services: Run long-running tasks on a schedule
  • Login flows: Add new ways for users to authenticate (Google, GitHub, API keys)
  • Hooks: Trigger actions automatically when events happen
  • Skills: Bundle skill packages inside the plugin

Plugin structure

Each plugin has a configuration file that declares what it adds to the platform:

{
  "openclaw": {
    "extensions": ["./src/channels.ts", "./src/tools.ts"]
  }
}

The plugin runs inside the OpenClaw software itself, which means it has full access to the system. That makes plugins powerful, but it also means you need to trust the code before installing it.

Installation

openclaw plugins install @openclaw/voice      # from npm
openclaw plugins install ./my-plugin          # local directory
openclaw plugins install plugin.tgz           # tarball

Management through the CLI:

openclaw plugins list
openclaw plugins enable <id>
openclaw plugins disable <id>
openclaw plugins doctor                       # diagnose issues

Security

Because plugins run inside the OpenClaw software, they have the same level of access as the software itself. OpenClaw has built-in protections (it blocks suspicious file paths, validates ownership, and supports allowlists for approved plugins), but the basic rule is: review a plugin before you install it, especially if it comes from an unknown source.

When to use plugins

  • You need to connect OpenClaw to a new messaging platform (Telegram, WhatsApp, Teams)
  • You want to add new capabilities to the OpenClaw software itself
  • You need scheduled tasks or automated reactions to events
  • You are building something that goes beyond what instructions in a text file can achieve

Plugins are the most powerful way to extend OpenClaw, and the most complex. They are also OpenClaw-specific. A plugin does not work in Claude Code or Cursor. It works in OpenClaw.

How they fit together

The three represent an evolution, not a hierarchy:

              What it is                Built for          Works across tools?
-----------------------------------------------------------------------------
MCP           Connectors (protocol)     Chatbot era        Yes
Skills        Instructions (text)       Agentic era        Yes
Plugins       Code (software)           Platform needs     OpenClaw only

MCP solved the integration problem when AI could not use tools on its own. Skills solve it better now that it can. Plugins extend the platform when skills are not enough.

Example: your OpenClaw assistant summarizes your team’s weekly Slack activity.

  1. A skill defines the entire workflow. It tells the AI what output to produce, which channels to prioritize, what counts as a highlight vs. a decision vs. a risk, and the quality bar. It instructs the AI to fetch messages from Slack using the Slack tool.
  2. A plugin provides the delivery channel. A Telegram channel plugin lets OpenClaw send you the finished summary through Telegram every Friday afternoon.

No MCP connector required. The skill teaches the AI how to do the work. The plugin handles the platform integration.
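Sketched in the same format as the earlier skill example (the skill name, channel choices, and word limit are illustrative):

```markdown
---
name: slack-weekly-summary
description: Summarize this week's team Slack activity for Friday delivery
---

When asked for the weekly summary:
1. Fetch this week's messages from the team channels using the Slack tool
2. Sort notable items into highlights, decisions, and risks
3. Write a summary under 500 words with a section for each
4. Hand the finished summary to the Telegram channel for delivery
```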

The decision tree

If you are wondering which one to reach for:

“My AI needs to talk to another service.” First ask: does that service have an existing tool? If yes, write a skill that teaches the AI to use it. If no, then consider MCP.

“My AI does not know how to do X properly.” Use a skill. Write the instructions, and the AI learns. If the task requires iteration (check, fix, check again), skills handle that naturally.

“I need to change how OpenClaw itself works.” Use a plugin. Add a new messaging channel, a new feature, or a background task.

“I want all of the above in one installable package.” That is what Claude Code Plugins (January 2026) and OpenClaw plugins both offer: bundles that can ship skills, tools, and platform extensions together.

What happened, and what is next

A year ago, MCP launched to solve a real problem: AI needed every tool formally described because it could not yet use tools on its own. That problem has largely been solved. Modern AI can use existing tools with minimal guidance.

Skills arrived and offered something built for this new reality: teach the AI the workflow in plain text, let it use the tools it already has, and let it keep working until the job is done. No protocol. No extra software to run. No wasted memory. Simon Willison saw it immediately. Armin Ronacher saw it. Peter Steinberger built an entire platform around it.

MCP is not dead. It has 97 million monthly downloads and backing from every major tech company. It will continue to serve chat-only interfaces, corporate compliance, and services that have no existing tools. But for hands-on work, for automation, for anything where the AI can act on its own: skills are the way forward.

The practical advice is simple. Start with skills. Reach for plugins when the platform itself needs extending. And if you want it all managed and always running, that is what we built OpenClaw.rocks to do.


Every OpenClaw.rocks instance supports skills and plugins out of the box. Deploy in seconds on EU infrastructure and start extending your assistant immediately. Get started free or learn more about what OpenClaw is and what it can do.