A Cortex Skills Product
Master AI Agents. Any Platform. Any Skill Level.


SkillForge Pro

Autonomous Skill Creation System for AI Agents

HTML Edition · Print Ready · Premium Dark Mode

Table of Contents

  1. What This Is
  2. What Problem It Solves
  3. How It Works
  4. Why It Matters
  5. Best Use Cases
  6. Operator Notes
  7. Prompt
  8. FAQ

Section 1

What This Is

SkillForge Pro gives your AI a discipline most systems never develop on their own: when something useful happens, capture it, package it, and make it reusable.

Section 2

What Problem It Solves

Without a system like this, valuable fixes, prompts, workflows, and lessons vanish inside chat history. SkillForge Pro stops that leak and turns repeated work into compounding infrastructure.

Section 3

How It Works

Install the directive once, then let the system run. After every task, error, or research session, the AI checks whether a reusable skill should be created and files it into an indexed library.
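The filing step can be sketched in code. Below is a minimal Python illustration of appending a new skill to SKILL-INDEX.md; the index columns follow the ones described in the full prompt, but the `Skill` class and `record_skill` helper are hypothetical examples, not part of the product itself:

```python
from dataclasses import dataclass
from datetime import date
from pathlib import Path

INDEX = Path("SKILL-INDEX.md")
HEADER = (
    "| Skill ID | Skill Name | Category | Created By "
    "| Date Created | Last Updated | Status |\n"
    "|---|---|---|---|---|---|---|\n"
)

@dataclass
class Skill:
    skill_id: str           # short kebab-case ID, e.g. "api-auth-fix-openai"
    name: str
    category: str           # one of the categories listed in the prompt
    created_by: str = "Primary Agent"
    status: str = "active"  # active / under-review / retired

def record_skill(skill: Skill) -> None:
    """Append a skill row to the master index, creating headers if missing."""
    if not INDEX.exists():
        INDEX.write_text(HEADER)
    today = date.today().isoformat()
    row = (f"| {skill.skill_id} | {skill.name} | {skill.category} "
           f"| {skill.created_by} | {today} | {today} | {skill.status} |\n")
    with INDEX.open("a") as f:
        f.write(row)
```

In practice the AI agent performs this bookkeeping itself from the directive; the sketch only shows how little structure the index actually needs, which is why it transfers between platforms as plain text.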

Section 4

Why It Matters

This is how an AI setup becomes more than reactive. It begins to accumulate operating intelligence and reduce rework across the board.

Section 5

Best Use Cases

Use it for technical troubleshooting, prompt engineering, operational SOPs, research methods, workflow capture, and failure post-mortems.

Section 6

Operator Notes

The retroactive sweep and weekly audit are where the real compounding starts. Skip those and you are leaving money on the table.
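As one illustration of what the weekly audit checks, here is a Python sketch that flags active entries in SKILL-INDEX.md whose Last Updated column looks stale. The 90-day threshold and the `audit_index` helper are assumptions made for this example, not part of the toolkit:

```python
from datetime import date, timedelta
from pathlib import Path

STALE_AFTER = timedelta(days=90)  # illustrative threshold, not from the prompt

def audit_index(index_path: str = "SKILL-INDEX.md") -> list[str]:
    """Return IDs of active skills whose Last Updated date exceeds STALE_AFTER.

    Assumes the pipe-delimited column order used in this guide:
    Skill ID | Skill Name | Category | Created By | Date Created
    | Last Updated | Status
    """
    flagged = []
    lines = Path(index_path).read_text().splitlines()
    for line in lines[2:]:  # skip the header row and the |---| separator row
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) < 7 or cells[6] == "retired":
            continue  # malformed row or already retired
        last_updated = date.fromisoformat(cells[5])
        if date.today() - last_updated > STALE_AFTER:
            flagged.append(cells[0])
    return flagged
```

The agent runs the equivalent of this check conversationally, but the logic is the same: retired skills are ignored, and anything past the review threshold goes into the Skill Health Report.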

Full Prompt

The Exact Prompt

This section preserves the original prompt content exactly, word for word.

================================================================================
SKILLFORGE TOOLKIT
Autonomous Skill Creation System for AI Agents
================================================================================
PRODUCT GUIDE & PROMPT
================================================================================

TABLE OF CONTENTS
================================================================================
1. What This Is
2. What Problem It Solves
3. How It Works
4. Compatible Platforms
5. The Prompt
6. Tips & Tricks
7. Customization Guide
8. Frequently Asked Questions

================================================================================
1. WHAT THIS IS
================================================================================

SkillForge is a standing directive that turns your AI agent into a self-improving system. Every time your AI solves a problem, writes a prompt, builds a workflow, finds a workaround, configures an integration, or learns from a failure — it automatically packages that knowledge into a reusable "skill" that can be retrieved and executed again later.

Over time, your AI builds its own library of institutional knowledge. It gets smarter every day — not because you tell it to, but because the system compounds on itself.

Think of it as giving your AI the instinct to take notes, build SOPs, and organize its own playbook — permanently and automatically.

================================================================================
2. WHAT PROBLEM IT SOLVES
================================================================================

If you work with AI agents regularly, you know these problems:

- You solved something complex last week but your AI has no memory of the solution. You have to re-explain everything from scratch.
- You built a great workflow or prompt during a session, but it was never saved anywhere. It is gone.
- Your AI makes the same mistake twice because the lesson from the first failure was never captured.
- You have multiple agents or team members using AI, but knowledge is siloed. One agent's breakthrough never reaches the others.
- You keep doing repetitive setup work because nobody documented the process the first time.

SkillForge eliminates all of this. Every useful piece of knowledge is automatically captured, structured, indexed, and made available for future use — by any agent in your system.

================================================================================
3. HOW IT WORKS
================================================================================

Step 1: Paste the SkillForge directive into your AI agent's system prompt or send it as a standing instruction.

Step 2: Your AI now runs a background check after every conversation, every task, and every error: "Did anything useful just happen that should be captured as a reusable skill?"

Step 3: If yes, the AI creates a skill file using a standardized template — complete with trigger conditions, step-by-step content, dependencies, edge cases, and expiry dates.

Step 4: Every skill is logged in a master index (SKILL-INDEX.md) that serves as the single source of truth for all captured knowledge.

Step 5: Weekly, the AI audits the full skill library — updating stale skills, retiring obsolete ones, and flagging gaps where repetitive work has no skill yet.

Step 6: Over time, your system builds a growing, self-maintaining knowledge base that makes every future interaction faster and more accurate.

================================================================================
4. COMPATIBLE PLATFORMS
================================================================================

SkillForge works with any AI platform that supports:
- Persistent instructions or system prompts
- File creation and management
- Conversation memory or context awareness

Tested and designed for:
- OpenClaw
- AutoGPT
- CrewAI
- Custom agent frameworks
- Claude Projects (with project knowledge)
- ChatGPT with persistent memory
- Any multi-agent orchestration system

Single-agent setups work perfectly. Multi-agent setups get even more value because skills are shared across the entire team.

================================================================================
5. THE PROMPT
================================================================================

Copy everything between the START and END markers below. Paste it into your AI agent's system prompt, configuration file, or send it as a standing directive message.

--- PROMPT START ---

SKILLFORGE — STANDING DIRECTIVE
Autonomous Skill Creation System
Priority: ALWAYS ON — Permanent standing order

THE DIRECTIVE

Effective immediately, this is a permanent standing order for you and every agent in this system. Every time you or any agent solves a problem, builds a workflow, creates a process, writes a prompt, finds a workaround, develops a strategy, figures out a fix, or learns something useful — turn it into a skill.

This is not optional. This is automatic. If it was useful once, it will be useful again. Capture it. Package it. Make it a reusable skill.

The goal: this system gets smarter every single day — not because it is told to, but because it builds on itself.

WHAT COUNTS AS A SKILL

If any of the following happen during any conversation or task, a skill should be created:

1. A problem was solved — The fix, the root cause, the exact steps to resolve it. Especially if it took more than 2 messages to figure out.
2. A prompt was written — Any prompt crafted for a specific purpose. The full text, what it does, when to use it, and any variations discussed. Prompts are always reproduced word for word — never summarized.
3. A workflow was built — Any multi-step process that produces a result. Step 1 through Step X, in order, with context on why each step matters.
4. A workaround was found — Something was broken or limited and a way around it was found. Document the limitation, the workaround, risks, and expiry conditions (example: "this workaround is only needed until [platform] ships [feature]").
5. A template was created — Any reusable document, message format, report structure, email template, or framework that can be used again.
6. A decision framework was used — Any time a decision was made using specific logic or criteria. Capture the framework so it can be reused for similar decisions.
7. An API or integration was configured — Any API setup, authentication flow, endpoint configuration, or tool connection. Document every step, every key location, every gotcha.
8. A research method worked — Any time information was found using a specific search strategy, source combination, or analytical approach that produced high-quality results.
9. An agent protocol was established — Any new rule, escalation path, chain of command update, or behavioral instruction. Capture it so it never gets lost.
10. Code or a script was written — Any piece of code, automation script, command sequence, or technical solution. Full code, what it does, how to run it, and dependencies.
11. A communication pattern worked — Any outreach approach, negotiation tactic, client messaging strategy, or relationship management method that produced good results.
12. Something failed and we learned why — Failures are skills too. What went wrong, why it went wrong, and how to avoid it next time. These are some of the most valuable skills ever created.

HOW TO BUILD A SKILL

Every skill follows this structure. No exceptions.

SKILL NAME: [Clear, descriptive name]
SKILL ID: [short-kebab-case-id] (example: api-auth-fix-openai)
CREATED BY: [Agent name or "Primary Agent"]
DATE CREATED: [Date]
LAST UPDATED: [Date]
VERSION: v1.0
CATEGORY: [See categories below]
TRIGGER: [When should this skill activate? What keywords, situations, or conditions should cause an agent to reach for this skill?]

---

DESCRIPTION:
[2-3 sentences. What does this skill do? Why does it exist?]

---

CONTEXT / BACKSTORY:
[How did this skill come to exist? What problem was being solved? What conversation or task led to this? Keep it short but give enough context that any agent picking this up cold understands why it matters.]

---

THE SKILL:
[The actual content. This is the core. Could be:]
- Step-by-step instructions
- A full prompt (reproduced word for word — never summarized)
- A code block with comments
- A decision framework with criteria
- A template with placeholders
- A troubleshooting guide
- Whatever format best fits the skill

---

WHEN TO USE:
[Specific scenarios where this skill applies. Be concrete. Give examples.]

---

WHEN NOT TO USE:
[Edge cases or situations where this skill would be wrong to apply. This prevents misuse and saves time.]

---

DEPENDENCIES:
[What does this skill need to work? APIs, tools, access, other skills, specific agent involvement, software versions?]

---

RELATED SKILLS:
[List any other skills that connect to this one. Cross-reference by Skill ID.]

---

EXPIRY / REVIEW CONDITION:
[Does this skill have a shelf life? Is it a workaround that becomes unnecessary when a platform ships a fix? Is it tied to a specific version? Put the condition here so it is known when to review or retire it. If no expiry, write "Evergreen — review annually."]

---

CHANGELOG:
v1.0 — [Date] — Initial creation by [Agent name]

SKILL CATEGORIES

Tag every skill with one of these categories. If a skill does not fit, create a new category and add it to this list. The system grows.

- troubleshooting — Fixes, workarounds, error resolutions
- prompt-engineering — Prompts, templates, strategies
- workflow — Multi-step processes, SOPs, procedures
- api-integration — API setups, configs, auth flows
- code — Scripts, snippets, automation
- template — Reusable docs, formats, message structures
- research — Methods, source strategies, analytical approaches
- decision-framework — Logic models, criteria, evaluation methods
- agent-protocol — Agent rules, escalation paths, behavior instructions
- communication — Email templates, outreach, messaging patterns
- failure-log — Post-mortems, lessons learned
- platform-knowledge — Platform-specific how-tos and configurations
- automation — Recurring tasks, scheduled jobs, trigger flows
- data — Data processing, formatting, extraction techniques
- security — API key management, access control, auth best practices

THE SKILL INDEX

Maintain a master document called SKILL-INDEX.md in your configuration directory. This index lists every skill by:
- Skill ID
- Skill Name
- Category
- Created By
- Date Created
- Last Updated
- Status (active / under-review / retired)

This is the single source of truth. If it is not in the index, it does not exist. Update this index every time a skill is created, updated, or retired.

AUTOMATIC SKILL DETECTION

Do not wait to be told. This happens automatically.

After every conversation or completed task, run this check: "Did anything in this interaction produce knowledge, a solution, a process, a prompt, or a lesson that could be reused?" If yes — create the skill immediately or flag it for creation. If no — move on.

After every error or failed task, run this check: "What went wrong? Is there a failure-log skill to create so this never happens again?"

After every research task, run this check: "Did the method used to find this information work well enough to be formalized as a repeatable research skill?"

Weekly review (pick a day): Review the full skill index. Check for:
- Skills that need updating (new info, version changes)
- Skills that should be retired (workaround no longer needed, platform updated)
- Gaps — areas where repetitive work keeps happening but no skill exists yet

Deliver a brief Skill Health Report summarizing what was added, updated, or retired.

SKILL QUALITY STANDARDS

A skill is not a note. It is institutional knowledge that any agent — including one that has never seen this system before — should be able to pick up and execute without asking follow-up questions. That is the bar.

Before finalizing any skill, confirm:
1. Zero-context test — Could a brand-new agent use this skill with no prior knowledge? If not, add detail.
2. Trigger clarity — Would an agent know WHEN to reach for this skill? If not, sharpen the trigger.
3. Step specificity — Can the steps be followed without guessing? If not, break them down.
4. Completeness — Is anything missing that would cause failure mid-execution? Fill the gap.
5. Duplicate check — Does a similar skill exist? Update the existing one instead.
6. Prompt integrity — If it contains a prompt, is it word for word? Never summarized.

MULTI-AGENT SETUP (if applicable)

If running multiple agents, assign skill ownership by domain:
- Lead / primary agent — Reviews all skills, maintains SKILL-INDEX.md, assigns creation
- Technical agents — Own code, API, and platform skills
- Research agents — Own research method skills
- Specialist agents — Own skills in their specific domain
- Every agent — Has authority and responsibility to flag skill opportunities

If running a single agent, that agent handles everything — creation, review, indexing, and the weekly audit.

RETROACTIVE SWEEP — DO THIS FIRST

Before running forward, go back through recent conversations (minimum 7 days, further if possible). Capture every problem solved, prompt written, workaround found, configuration set up, process figured out, and mistake fixed. Create those skills now.

QUICK-START CHECKLIST

1. Paste this directive into your primary agent's system prompt or send as a message
2. Create SKILL-INDEX.md in your configuration directory (start empty with table headers)
3. Run the retroactive sweep — last 7+ days of conversations
4. Pick a day for the weekly skill audit
5. If multi-agent: assign ownership domains to each agent
6. Confirm your agent acknowledges and can explain back what SkillForge does
7. Start working normally — skills generate automatically

THE RULE

Every day, every task, every conversation — the system either gets smarter or it is wasting knowledge. SkillForge is always on.

--- PROMPT END ---

================================================================================
6. TIPS & TRICKS
================================================================================

GETTING STARTED:
- After pasting the directive, test it immediately. Solve a simple problem with your AI and see if it automatically creates a skill file. If it does not, remind it: "SkillForge is active. Did that interaction produce a skill?"
- The retroactive sweep is the most important first step. Your last 7 days of conversations probably contain 10-20 skills that were never captured. Get those first.
- Start with SKILL-INDEX.md. Even if it is empty, creating the file tells your AI that the system is real and it should be maintaining it.

MAXIMIZING SKILL QUALITY:
- The "zero-context test" is the most important quality gate. Before accepting a skill, ask yourself: could a brand-new AI agent with no knowledge of my setup use this skill successfully? If not, it needs more detail.
- The "WHEN NOT TO USE" section prevents your AI from misapplying skills. A skill for fixing Python errors should not be triggered when someone asks about JavaScript. Edge cases matter.
- Expiry dates are critical for workarounds. If a skill exists because a platform had a bug, set the expiry to "Review when [platform] ships version [X]." This prevents stale workarounds from polluting your skill library.

BUILDING MOMENTUM:
- The first 2 weeks are the most important. Your AI is learning the habit. You may need to remind it: "Was that a skill?" After 2 weeks, it becomes automatic.
- Celebrate when your AI proactively creates a skill without being asked. That means the system is working.
- When you hit 25 skills, do a full review. You will probably find 3-5 that overlap. Merge them. Clean indexing early prevents sprawl later.

MULTI-AGENT POWER MOVES:
- If you run multiple agents, the real power is cross-pollination. A research skill created by one agent should be usable by all agents. The SKILL-INDEX.md makes this possible.
- Assign a "skill champion" — one agent (usually your lead agent) that reviews every skill before it goes into the index. This prevents low-quality skills from cluttering the system.
- Route failure-log skills to your most senior agent for review. Failures contain the highest-value lessons but require the most judgment to document correctly.

ORGANIZING YOUR SKILL LIBRARY:
- Use the category system consistently. If you find yourself creating a lot of skills in one category, consider splitting it. Example: "troubleshooting" might split into "troubleshooting-api" and "troubleshooting-ui."
- Name skills descriptively. "fix-auth-error" is bad. "openai-api-401-expired-key-fix" is good. You should be able to understand what a skill does from the name alone.
- Keep the SKILL-INDEX.md sorted by category, then by date. This makes it easy to find skills and spot gaps.

ADVANCED: COMBINING WITH OTHER SYSTEMS:
- Pair SkillForge with the Conversation Summary Toolkit (if you have it). The summary captures what happened. SkillForge captures the reusable knowledge from what happened. Together, nothing is ever lost.
- Export your skill library periodically. If you switch AI platforms, your skill files transfer with you — they are just text.
- Use skills as onboarding material. When you add a new agent to your system, point it at the skill index first. It gets up to speed immediately instead of learning from scratch.

THE WEEKLY AUDIT:
- Pick a consistent day (Sunday works well) and make it non-negotiable. The weekly audit is what keeps the system healthy.
- During the audit, look for three things:
  1. Skills that need updating (new info, better methods discovered)
  2. Skills that should be retired (the problem no longer exists)
  3. Gaps — areas where the same work keeps happening but no skill exists yet. These gaps are your biggest efficiency leaks.
- The Skill Health Report should be short. 5 lines max:
  - Skills added this week: [count]
  - Skills updated: [count]
  - Skills retired: [count]
  - Gaps identified: [list]
  - Total active skills: [count]

================================================================================
7. CUSTOMIZATION GUIDE
================================================================================

ADD NEW CATEGORIES:
Add any category that fits your work. Examples:
- "sales" — Sales scripts, objection handling, closing frameworks
- "design" — Design patterns, UI conventions, brand rules
- "legal" — Contract clauses, compliance checklists, policy templates
- "onboarding" — Setup procedures for new tools, team members, clients

CHANGE THE SKILL TEMPLATE:
Add or remove sections from the skill file structure. The template is a starting point. If you never use "RELATED SKILLS," remove it. If you always need "ESTIMATED TIME TO EXECUTE," add it.

ADJUST THE DETECTION TRIGGERS:
The automatic detection checks run after every conversation, every error, and every research task. You can add more triggers:
- "After every client interaction..."
- "After every deployment..."
- "After every meeting..."
Tailor the detection to your workflow.

CHANGE THE AUDIT FREQUENCY:
Weekly works for most setups. If you are in a high-volume environment (50+ AI interactions per day), consider twice weekly. If low volume, biweekly is fine.

SCALE FOR TEAMS:
If you have multiple people using AI agents, each person's agent should run SkillForge independently. Then have a lead agent that merges the best skills into a shared master index monthly.

================================================================================
8. FREQUENTLY ASKED QUESTIONS
================================================================================

Q: Does this work with a single AI agent or only multi-agent setups?
A: Both. The prompt includes instructions for single-agent and multi-agent configurations. Single agents handle everything themselves. Multi-agent setups distribute ownership by domain.

Q: How many skills should I expect after the first month?
A: Depends on usage. Active users typically generate 30-60 skills in the first month. The retroactive sweep alone usually produces 10-20.

Q: What if my AI creates a low-quality skill?
A: The quality gates (zero-context test, trigger clarity, step specificity, completeness check, duplicate check) are designed to catch this. If quality is still low, add a line: "Before creating any skill, show me a draft and wait for approval before finalizing."

Q: Can I use this with ChatGPT or only with agent platforms?
A: It works with ChatGPT using persistent memory or Projects. The skills get stored in the conversation context. For best results with ChatGPT, periodically export your skills to a separate document so they survive across sessions.

Q: What happens when the skill library gets very large?
A: The category system and SKILL-INDEX.md handle scale. At 100+ skills, consider splitting your index by category into separate files (troubleshooting-skills.md, code-skills.md, etc.) while keeping one master index that references all sub-indexes.

Q: Will this slow down my AI's responses?
A: No. The skill detection is a lightweight check that runs after each interaction. Skill creation only happens when something genuinely useful was produced. It does not affect normal response speed.

Q: Can I share skills between different AI platforms?
A: Yes. Skills are stored as plain text files. They transfer to any platform. The format is universal by design.

================================================================================
END OF PRODUCT GUIDE
================================================================================

Frequently Asked Questions

What buyers usually want to know

What is SkillForge Pro in one sentence?

It is a permanent directive that makes your AI save reusable knowledge instead of wasting it.

Does it work for a single AI agent?

Yes. Single-agent and multi-agent setups are both covered.

Will it capture prompts exactly?

Yes. Prompt integrity is a hard rule in the system.

How is this different from memory?

Memory stores facts. SkillForge stores executable know-how, with triggers, steps, dependencies, and review conditions.
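The difference is concrete: each skill carries a TRIGGER field that tells an agent when to reach for it. A toy Python sketch of keyword-based trigger matching makes the idea tangible — the `match_skills` helper and the sample triggers are hypothetical, since real agents match triggers conversationally rather than with code:

```python
def match_skills(task: str, triggers: dict[str, list[str]]) -> list[str]:
    """Return skill IDs whose trigger keywords appear in a task description.

    `triggers` maps skill ID -> keywords. In a real setup the keywords
    would come from each skill file's TRIGGER field.
    """
    text = task.lower()
    return [skill_id for skill_id, keywords in triggers.items()
            if any(keyword.lower() in text for keyword in keywords)]
```

A fact in memory just sits there; a skill with a well-written trigger surfaces itself exactly when the same situation recurs.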

What happens when the library gets large?

The index, category system, and audit process are built specifically to manage growth.

Can I use this with ChatGPT or Claude Projects?

Yes. The toolkit is platform-agnostic as long as your AI supports persistent instructions or durable context.