
Personal Skills Foundry Β· Skill Generator Agent

A Skills Foundry for individuals and teams: an intelligent agent that transforms natural-language requirements into distributable Skill packages through a complete Generate β†’ Validate β†’ Test β†’ Package β†’ Publish workflow, ultimately building your personal Skills repository.

🎯 Goal: Turn "writing requirements" into "deliverable Skills", turn "scripts" into "publishable assets".


Why This Project

  • From 0 to 1: Write a single requirement, automatically generate structured Skill packages (code, docs, dependencies, config, assets)
  • From 1 to N: Manage Skills like a codebase (versioning, testing, CI/CD, publishing)
  • Controllable & Reviewable: Generated results include requirement analysis artifacts, traceable logs, and auto-validation for easy review and iteration
  • Multi-Model Strategy: Support cloud/local LLMs with "latest-preferred strategy" by default, avoiding lock-in to outdated models

Core Capabilities

  • πŸ€– Requirement Understanding & Structured Analysis - Parse natural language requirements into executable Skill specs (.requirement.json) using LLM-powered analysis
  • 🧩 Auto-Generate Runnable Code - Produce compliant Python scripts with complete implementations (no TODO placeholders), including argparse, logging, and error handling
  • πŸ” TODO Detection & Auto-Completion - Automatically detect and complete TODO placeholders in generated code using LLM
  • πŸ“„ Auto-Generate Standard Documentation - Generate structured SKILL.md with YAML frontmatter, usage guides, examples, and API documentation
  • πŸ“¦ Dependencies & Reproducibility - Generate requirements.txt, support uv/pip installation
  • βœ… Post-Generation Validation - Static checks + compliance verification using quick_validate.py to ensure Skills are distributable
  • πŸ“¦ One-Click Packaging - Output standard .skill distribution packages (ZIP format) using package_skill.py
  • πŸ”„ Fallback Strategy - Fall back to rule/template-based generation when no LLM is available, so generation always completes
  • 🧾 Full-Chain Logging - Record generation process, prompts, model info (for traceability and auditing)
  • πŸ“Š Generation Record Tracking - Automatically record detailed generation info to CSV (requirements, model, duration, file structure, etc.)
  • πŸ§ͺ Test & Publish Friendly - Auto-generate test scripts, CI-ready (GitHub Actions), extensible for auto-publishing to Releases/Registry
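To make the requirement-analysis artifact concrete, here is a sketch of what a `.requirement.json` spec might contain. The field names below are purely illustrative assumptions, not the agent's actual schema:

```python
import json

# Hypothetical shape of a .requirement.json artifact; the real field
# names and structure produced by the agent may differ.
spec = {
    "name": "csv-processor",
    "requirement": "Create a CSV processing tool with read, filter, "
                   "JSON conversion, and statistics",
    "features": ["read", "filter", "csv-to-json", "statistics"],
    "dependencies": ["pandas"],
    "entry_point": "scripts/main.py",
}
print(json.dumps(spec, indent=2))
```

Keeping this artifact alongside the generated code is what makes results reviewable: you can diff the spec against the output.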

Use Cases

  • Personal: Build your own Skills Repository (data processing, automation, scaffolding, data cleaning, report generation, etc.)
  • Team: Accumulate organization-level skill assets (tooling, standardization, reusability)
  • Teaching/Research: Rapidly produce runnable engineering samples for validating and iterating ideas

πŸš€ Quick Start

Prerequisites

  • Python 3.10+ (3.12 recommended)
  • uv (recommended) or pip

Installation

# Clone the repository
git clone https://github.com/your-org/skill-generator-agent.git
cd skill-generator-agent

# Run setup script (recommended - automatically installs uv if needed)
bash scripts/setup.sh

# Activate virtual environment
source .venv/bin/activate  # Linux/Mac
# .venv\Scripts\activate   # Windows

Configuration (Multi-Model Β· Latest-Preferred)

Copy the environment template and fill in your API keys:

cp .env.example .env

Edit .env file with your configuration:

# ========== OpenAI ==========
OPENAI_API_KEY=sk-...
OPENAI_API_BASE=https://api.openai.com/v1  # Optional: custom endpoint
OPENAI_MODEL=gpt-5-mini                     # Recommended: Latest cost-effective model

# ========== Anthropic ==========
ANTHROPIC_API_KEY=sk-...
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022  # Recommended: Latest high-quality model

# ========== Ollama (Local, Free) ==========
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3.2                      # Recommended: llama3.2, qwen2.5

# ========== Proxy Configuration (Optional) ==========
HTTP_PROXY=http://proxy.example.com:8118
HTTPS_PROXY=http://proxy.example.com:8118

# ========== Generation Parameters (Optional) ==========
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=4000

πŸ’‘ Model Selection Strategy:

  • The tool automatically detects available API keys and selects the best provider
  • Priority order: OpenAI β†’ Anthropic β†’ Ollama
  • You can override with --llm and --model command-line arguments
  • Supports proxy configuration for network-restricted environments
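The documented priority order can be sketched as a small key-detection function. This mirrors the described OpenAI β†’ Anthropic β†’ Ollama order; the tool's actual detection logic may differ:

```python
import os

def pick_provider(env: dict) -> str:
    """Choose an LLM provider by which API keys are present.

    Illustrative sketch of the documented priority order
    (OpenAI -> Anthropic -> Ollama); not the tool's real code.
    """
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    return "ollama"  # local fallback, no API key required

print(pick_provider(os.environ))
```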

πŸ€– Supported LLMs


OpenAI Models

| Model | Context | Features | Best For |
|-------|---------|----------|----------|
| gpt-5-mini | 128K | ⭐ Recommended | Cost-effective, fast |
| gpt-5 | 256K | Latest flagship | Complex tasks |

Anthropic Models

| Model | Context | Features | Best For |
|-------|---------|----------|----------|
| claude-4.5-sonnet | 200K | ⭐ Recommended | High quality output |
| claude-4-opus | 200K | Most capable | Complex tasks |

Ollama (Local Models)

| Model | Context | Features | Best For |
|-------|---------|----------|----------|
| qwen2.5 | 128K | ⭐ Recommended | Best local model |
| deepseek-coder | 64K | Code-focused | Code generation |

Other Compatible Providers

  • DeepSeek: deepseek-chat, deepseek-coder, deepseek-reasoner, deepseek-v3
  • Qwen Cloud: qwen-turbo, qwen-plus, qwen-max

πŸ“– Usage

Simplest Generation

python skill_generator_agent.py -r "Create a data processing tool"

Specify Skill Name and Output Directory

python skill_generator_agent.py \
  -n "csv-processor" \
  -r "Create a CSV processing tool with read, filter, JSON conversion, and statistics" \
  -o ./skills \
  --yes

Run Demo Script

# Run pre-configured demo examples
bash scripts/demo.sh

Run Complete Workflow Test

# Test the complete generation β†’ validation β†’ packaging workflow
python tests/test_skill_workflow.py \
  -r "Create a file counter tool" \
  -n "test-counter" \
  --keep-output

Interactive Mode

python skill_generator_agent.py --interactive

Use Specific LLM Provider/Model

# Use OpenAI with specific model
python skill_generator_agent.py --llm openai --model gpt-5-mini -r "Create a file converter"

# Use Anthropic Claude
python skill_generator_agent.py --llm anthropic --model claude-3-5-sonnet-20241022 -r "..."

# Use local Ollama
python skill_generator_agent.py --llm ollama --model llama3.2 -r "..."

πŸ“– Command Line Options

| Option | Short | Description |
|--------|-------|-------------|
| --requirement | -r | Requirement description (natural language) |
| --name | -n | Skill name (optional) |
| --output | -o | Output directory (default: ./skills) |
| --interactive | -i | Interactive mode |
| --yes | -y | Auto-confirm, no prompts |
| --llm | - | LLM provider: openai, anthropic, ollama |
| --model | - | Model name (e.g., gpt-5-mini, claude-3-5-sonnet-20241022) |
| --api-key | - | API key (overrides env var) |
| --api-base | - | API base URL (for custom endpoints) |
| --skip-validate | - | Skip validation |
| --skip-package | - | Skip packaging |
| --debug | - | Debug mode |

Generated Structure (Skill Package)

my-skill/
β”œβ”€β”€ SKILL.md              # Standardized documentation with YAML frontmatter
β”œβ”€β”€ README.md              # User-friendly README (auto-generated)
β”œβ”€β”€ requirements.txt      # Python dependencies
β”œβ”€β”€ .requirement.json     # Requirement analysis artifact (traceable)
β”œβ”€β”€ .gitignore            # Git ignore file (auto-generated)
β”œβ”€β”€ scripts/
β”‚   β”œβ”€β”€ main.py           # Main script (runnable, complete implementation)
β”‚   β”œβ”€β”€ utils.py          # Utility functions (if needed)
β”‚   β”œβ”€β”€ test_main.py      # Unit tests (auto-generated)
β”‚   └── ...
β”œβ”€β”€ references/           # Reference materials (optional, auto-generated)
β”‚   └── api_reference.md  # API documentation (if needed)
└── assets/               # Asset files (optional)
    └── templates/        # Template files (if needed)
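The generated `scripts/main.py` follows a conventional CLI shape: argparse, logging, and explicit error handling. The skeleton below is an illustrative example (here, a file counter), not literal output of the agent:

```python
import argparse
import logging
import sys
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("my-skill")

def count_files(directory: Path) -> int:
    """Example core function: count regular files under a directory."""
    return sum(1 for p in directory.rglob("*") if p.is_file())

def main(argv=None) -> int:
    parser = argparse.ArgumentParser(description="Example generated skill CLI")
    parser.add_argument("directory", type=Path, help="Directory to scan")
    args = parser.parse_args(argv)
    try:
        log.info("Found %d files", count_files(args.directory))
        return 0
    except OSError as exc:
        log.error("Failed: %s", exc)
        return 1

if __name__ == "__main__":
    sys.exit(main())
```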

End-to-End Workflow (Generate β†’ Validate β†’ Package)

Requirement Input
  β”‚
  β–Ό
[1] Requirement Analysis (LLM)
  β”‚   └─ Output: Structured requirement spec
  β–Ό
[2] User Confirmation (optional)
  β–Ό
[3] Initialize Skill Directory
  β”‚   └─ Uses init_skill.py or manual creation
  β–Ό
[4] Code Generation (LLM)
  β”‚   β”œβ”€ Generate scripts with argparse, logging, error handling
  β”‚   β”œβ”€ Complete implementations (no TODO placeholders)
  β”‚   β”œβ”€ Auto-detect and complete any remaining TODOs
  β”‚   └─ Output: scripts/*.py
  β–Ό
[5] Generate Tests
  β”‚   └─ Output: scripts/test_*.py (auto-generated unit tests)
  β–Ό
[6] Documentation Generation
  β”‚   β”œβ”€ SKILL.md (with YAML frontmatter)
  β”‚   β”œβ”€ README.md (user-friendly guide)
  β”‚   β”œβ”€ requirements.txt
  β”‚   └─ .gitignore
  β–Ό
[7] Generate References & Assets
  β”‚   β”œβ”€ API reference documentation (if needed)
  β”‚   └─ Template files (if needed)
  β–Ό
[8] Validation (quick_validate.py)
  β”‚   β”œβ”€ Check SKILL.md format
  β”‚   β”œβ”€ Validate naming conventions
  β”‚   └─ Verify required fields
  β–Ό
[9] Packaging (package_skill.py)
  β”‚   └─ Output: .skill file (ZIP format)
  β–Ό
[10] Save Generation Record
  β”‚   └─ CSV with timestamp, model, duration, structure
  β–Ό
Done βœ… (Ready for CI/CD publishing)
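The workflow above can be sketched as an ordered list of step functions sharing a context object. This is a toy illustration of the shape, not the agent's real internals:

```python
# Illustrative Generate -> Validate -> Package orchestration:
# each step reads and extends a shared context dict.
def analyze(ctx):  ctx["spec"] = {"name": "file-counter"}
def generate(ctx): ctx["files"] = ["scripts/main.py", "SKILL.md"]
def validate(ctx): ctx["valid"] = "SKILL.md" in ctx["files"]
def package(ctx):  ctx["artifact"] = ctx["spec"]["name"] + ".skill"

PIPELINE = [analyze, generate, validate, package]

def run(requirement: str) -> dict:
    ctx = {"requirement": requirement}
    for step in PIPELINE:
        step(ctx)
    return ctx

result = run("Create a file counter tool")
print(result["artifact"])
```

Structuring the pipeline as data makes it easy to skip steps (e.g., `--skip-validate`, `--skip-package`) by filtering the list.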

Testing Workflow

The complete test workflow includes:

  1. Generate Skill - Create skill from requirement
  2. Validate Structure - Check required files exist
  3. Test Script Execution - Verify scripts can run
  4. Validate Skill - Use quick_validate.py for format validation
  5. Package Skill - Create .skill distribution file

Run tests with:

python tests/test_skill_workflow.py \
  -r "Your requirement" \
  -n "skill-name" \
  --keep-output

πŸ’‘ Examples

Example 1: Simple File Counter

python skill_generator_agent.py \
    -n "file-counter" \
    -r "Create a file counter tool that counts files and total size in a directory" \
    -o ./examples \
    --yes

Example 2: CSV Processing Tool

python skill_generator_agent.py \
    -n "csv-processor" \
    -r "Create a CSV processing tool that supports:
    1. Read CSV files
    2. Filter and select data
    3. Format conversion (CSV to JSON)
    4. Data statistics analysis" \
    -o ./examples \
    --yes

Example 3: Complete Workflow Test

# Test the complete generation β†’ validation β†’ packaging workflow
python tests/test_skill_workflow.py \
    -r "Create a file counter tool" \
    -n "test-counter" \
    --keep-output

Example 4: Using Local Ollama

# Make sure Ollama is running: ollama serve
python skill_generator_agent.py \
    --llm ollama --model llama3.2 \
    -r "Create a file format conversion tool" \
    -o ./examples \
    --yes

Example 5: Using Claude for High-Quality Output

python skill_generator_agent.py \
    --llm anthropic --model claude-3-5-sonnet-20241022 \
    -r "Create a web scraper with error handling and retry logic" \
    -o ./examples \
    --yes

Example 6: Complex Multi-Module Skill

python skill_generator_agent.py \
    -n "project-env-installer" \
    -r "Create a project environment installer that:
    1. Clones from GitHub or reads local project
    2. Analyzes README and dependency files
    3. Auto-installs uv if needed, creates Python 3.12 venv
    4. Installs dependencies and tests startup
    5. Auto-fixes common issues" \
    -o ./examples \
    --yes

See EXAMPLES.md for more examples.


πŸ“‚ Recommended GitHub Project Layout

If you're creating a new GitHub project to build your personal Skills repository, we recommend an engineering-oriented layout:

your-skills-repo/
β”œβ”€β”€ skill_generator_agent.py      # Generator script
β”œβ”€β”€ skills/                        # Generated skills (version control as needed)
β”‚   β”œβ”€β”€ csv-processor/
β”‚   β”œβ”€β”€ api-client/
β”‚   └── ...
β”œβ”€β”€ templates/                     # Specification templates (optional)
β”œβ”€β”€ tests/                         # Regression tests / golden examples
β”œβ”€β”€ .github/
β”‚   └── workflows/
β”‚       β”œβ”€β”€ validate.yml          # Validate skills on PR
β”‚       β”œβ”€β”€ package.yml           # Package and release
β”‚       └── test.yml              # Run tests
β”œβ”€β”€ scripts/
β”‚   β”œβ”€β”€ setup.sh                  # Environment setup
β”‚   └── run.sh                    # Quick run script
β”œβ”€β”€ .env.example                  # Environment template
β”œβ”€β”€ pyproject.toml                # Python project config
β”œβ”€β”€ README.md
└── CHANGELOG.md

πŸ”„ Recommended CI/CD Workflow

  1. PR Validation: Auto-run validate/test on PRs to prevent "bad skills" from entering main branch
  2. Auto-Release: Auto-package .skill files and upload to GitHub Releases after merging to main
  3. Registry (Optional): Maintain version numbers, signatures, and manifests for each Skill to form a "private registry"

🌟 Best Practices

  • Write requirements as "acceptance criteria": The more testable your input, the more stable your output
  • Generate first, review later: Treat generated results as PRs and do Code Review before merging
  • Accumulate golden examples: Turn frequently-used skills into regression test cases for long-term quality improvement
  • Layered model strategy: Use cost-effective models for exploration, stronger models for final review before publishing
  • Version control: Keep generated skills in Git to track changes and iterations
  • CI/CD integration: Automate validation, testing, and packaging in your workflow

πŸ§ͺ Testing

Run Complete Workflow Test

# Test generation β†’ validation β†’ packaging workflow
python tests/test_skill_workflow.py \
    -r "Create a file counter tool" \
    -n "test-counter" \
    --keep-output

Run Basic Tests

# Basic generation test
python tests/test_generator.py

# Full workflow test (detailed)
python tests/test_full_workflow.py

Test Generated Skills

# After generating a skill
cd examples/your-skill/scripts
python main.py --help
pytest test_*.py -v

See TEST_RUN_GUIDE.md and HOW_TO_RUN_TESTS.md for detailed testing documentation.


πŸ”§ Development

# Install dev dependencies
bash scripts/setup.sh

# Run tests
pytest

# Run tests with coverage
pytest --cov=skill_generator_agent --cov-report=html

# Run workflow test
python tests/test_skill_workflow.py -r "Test requirement" -n "test" --keep-output

# Format code
black .
ruff check .

# Type checking (if mypy is installed)
mypy skill_generator_agent.py

Development Scripts

  • scripts/setup.sh - Environment setup (auto-installs uv if needed)
  • scripts/demo.sh - Run demo examples
  • scripts/run.sh - Quick run script
  • tests/test_skill_workflow.py - Complete workflow test
  • tests/test_generator.py - Basic generation test

❓ FAQ

Q: Can generated code be used directly in production?
A: We recommend treating generated results as "high-quality drafts" and using them only after review, testing, and security auditing.

Q: Can I use this without an LLM?
A: Yes. It falls back to template/rule-based generation, which preserves structure and compliance but produces less sophisticated output.

Q: Which model should I use?
A: For best results: gpt-5-mini (cost-effective) or claude-3-5-sonnet-20241022 (high quality). For local/free: llama3.2 via Ollama.

Q: How do I add a custom LLM provider?
A: The code uses OpenAI-compatible API format. You can set OPENAI_API_BASE to point to your custom endpoint.
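For illustration, here is how an OpenAI-compatible chat request is assembled; only the base URL changes per provider, which is why `OPENAI_API_BASE` is enough. This stdlib-only sketch is an assumption about the wire format, not the tool's own HTTP client:

```python
import json

def build_chat_request(api_base: str, api_key: str, model: str, prompt: str):
    """Assemble an OpenAI-compatible /chat/completions request.

    Illustrative only: demonstrates that pointing OPENAI_API_BASE at a
    compatible provider changes nothing but the URL.
    """
    url = api_base.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://api.deepseek.com/v1", "sk-...", "deepseek-chat", "hello")
print(url)
```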

Q: Can I customize the generation templates?
A: Yes. Check the code for prompt templates and modify them according to your needs.

Q: How does validation work?
A: Uses quick_validate.py from skill-creator to validate SKILL.md format, naming conventions, and required fields.

Q: How does packaging work?
A: Uses package_skill.py to create .skill files (ZIP format) with all required files, excluding temporary files like __pycache__.
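A simplified sketch of that packaging step, using only `zipfile` and `pathlib`; the real `package_skill.py` may use different exclusion rules and archive layout:

```python
import zipfile
from pathlib import Path

EXCLUDE = {"__pycache__", ".venv", ".git"}  # assumed exclusion set

def package_skill(skill_dir: str, out_file: str) -> list[str]:
    """Zip a skill directory into a .skill archive, skipping temp dirs.

    Illustrative sketch, not the project's actual packaging script.
    """
    root = Path(skill_dir)
    written = []
    with zipfile.ZipFile(out_file, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            # skip any file whose path contains an excluded directory
            if path.is_file() and EXCLUDE.isdisjoint(path.parts):
                arcname = path.relative_to(root.parent)
                zf.write(path, arcname)
                written.append(str(arcname))
    return written
```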

Q: Where are generation records saved?
A: In generation_records.csv in the output directory, containing timestamp, requirement, model, duration, file structure, etc.


License

MIT


πŸ™ Acknowledgments

This project is inspired by the following excellent projects, and we extend our gratitude:

  • OpenAI Skills - Official Skills Catalog for Codex by OpenAI
  • LangGraph - Framework for building multi-agent applications
  • openskills - Universal skills loader for AI coding agents

Thank you to these projects for their contributions to the AI Agent and Skills ecosystem!
