Claude Code Settings/Skills for Vibe Coding
A curated collection of Claude Code settings, skills and sub-agents designed for enhanced development workflows. This setup includes specialized skills and subagents for feature development (spec-driven workflow), code analysis, GitHub integration, and knowledge management.
For OpenAI Codex settings, configurations, and custom prompts, please refer to feiskyer/codex-settings.
Setup
Using Claude Code Plugin
**Note:**
`~/.claude/settings.json` is not configured by the Claude Code plugin; you will need to configure it manually.
Using npx skills
`npx skills` can be used to install individual skills for your AI coding tools.
Manual Setup
Once started, you'll see a link in the output. Open the link, log in, and authenticate your GitHub Copilot account.
**Note:**
The default configuration uses LiteLLM Proxy Server as the LLM gateway to GitHub Copilot. You can also use copilot-api as the proxy (remember to change the port to 4141).
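For reference, either proxy can be started roughly as follows (a sketch based on the two projects' own CLIs; the config file name is a placeholder):

```shell
# LiteLLM Proxy Server on the default port 4000; config path is illustrative.
litellm --config litellm-config.yaml --port 4000

# Or copilot-api, which listens on port 4141:
npx copilot-api@latest start
```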
Make sure the following models are available in your account; if not, replace them with your own model names:
```
ANTHROPIC_DEFAULT_SONNET_MODEL: claude-sonnet-4.5
ANTHROPIC_DEFAULT_OPUS_MODEL: claude-opus-4
ANTHROPIC_DEFAULT_HAIKU_MODEL: gpt-5-mini
```
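As a minimal sketch, the model mapping and gateway can be wired up via the `env` section of `~/.claude/settings.json` (the base URL and token values below are placeholders for a local LiteLLM proxy, not this repo's actual config; use port 4141 for copilot-api):

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4000",
    "ANTHROPIC_AUTH_TOKEN": "dummy-key",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4.5",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "claude-opus-4",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-5-mini"
  }
}
```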
Skills
Skills are reusable capabilities that teach Claude how to complete specific tasks. They can be invoked via /skill-name [arguments] or triggered automatically based on context. Install only what you need:
<details>
<summary>codex-skill - hand off tasks to Codex CLI</summary>
codex-skill
Non-interactive automation mode for hands-off task execution using OpenAI Codex. Use it when you want to leverage Codex models (gpt-5, gpt-5.1, gpt-5.1-codex) to implement features or plans designed by Claude.
**Installation:**
**Key Features:**
Multiple execution modes (read-only, workspace-write, danger-full-access)
Model selection support (gpt-5, gpt-5.1, gpt-5.1-codex, etc.)
Autonomous execution without approval prompts
JSON output support for structured results
Resumable sessions
**Requirements:** Codex CLI installed (npm i -g @openai/codex or brew install codex)
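The execution modes and flags above can be combined roughly like this (a sketch of current Codex CLI usage; verify flag names against `codex exec --help` for your installed version):

```shell
# Hands-off execution in the current repo, sandboxed to workspace writes.
codex exec --sandbox workspace-write "Implement the plan in PLAN.md"

# Select a model and emit structured JSON events for scripting.
codex exec --model gpt-5.1-codex --json "Summarize the failing tests"
```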
</details>
<details>
<summary>autonomous-skill - Long-running task automation</summary>
autonomous-skill
Execute complex, long-running tasks across multiple sessions using a dual-agent pattern (Initializer + Executor) with automatic session continuation.
**Installation:**
**Key Features:**
Dual-agent pattern (Initializer creates a task list, Executor completes tasks)
Auto-continuation across sessions with progress tracking
Task isolation with per-task directories (.autonomous/<task-name>/)
Progress persistence via task_list.md and progress.md
Headless mode execution using Claude CLI
**Usage:**
**Requirements:** Claude CLI installed
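To make the dual-agent pattern concrete, here is a hypothetical sketch of the Executor's continuation check, assuming `task_list.md` uses markdown checkboxes (the file contents and task names below are illustrative, not the skill's actual format):

```shell
TASK_DIR=".autonomous/demo-task"
mkdir -p "$TASK_DIR"

# Simulate a task list the Initializer might have produced.
cat > "$TASK_DIR/task_list.md" <<'EOF'
- [x] Scaffold the project
- [ ] Implement the parser
- [ ] Add unit tests
EOF

# The Executor counts unchecked tasks to decide whether to continue.
remaining=$(grep -c '^- \[ \]' "$TASK_DIR/task_list.md")
echo "Remaining tasks: $remaining"

# If tasks remain, a new headless session would be launched, e.g.:
# claude -p "Continue the task in $TASK_DIR; update progress.md when done"
```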
</details>
<details>
<summary>nanobanana-skill - draw images with Gemini nanobanana</summary>
nanobanana-skill
Generate or edit images using Google Gemini API via nanobanana. Use when creating, generating, or editing images.
**Installation:**
**Key Features:**
Image generation with various aspect ratios
Image editing capabilities
Multiple model options (gemini-3-pro-image-preview, gemini-2.5-flash-image)
Resolution options (1K, 2K, 4K)
Support for various aspect ratios (square, portrait, landscape, ultra-wide)
**Requirements:**
GEMINI_API_KEY configured in ~/.nanobanana.env
Python3 with google-genai, Pillow, python-dotenv (install via pip install -r requirements.txt in the plugin directory)
</details>
<details>
<summary>youtube-transcribe-skill - Extract YouTube subtitles</summary>
youtube-transcribe-skill
Extract subtitles/transcripts from a YouTube video link.
**Installation:**
**Key Features:**
Dual extraction methods: CLI (fast) and Browser Automation (fallback)
Automatic subtitle language selection (zh-Hans, zh-Hant, en)
Efficient DOM-based extraction for browser method
Saves transcripts to local text files
**Requirements:**
yt-dlp (for CLI method)
or chrome-devtools-mcp (for browser automation method)
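As a sketch of the CLI method, the skill's subtitle extraction can be approximated with yt-dlp directly (flags per current yt-dlp documentation; the video URL is a placeholder):

```shell
# Fetch uploaded or auto-generated subtitles only, skipping the video itself.
yt-dlp --skip-download \
  --write-subs --write-auto-subs \
  --sub-langs "zh-Hans,zh-Hant,en" \
  --convert-subs srt \
  "https://www.youtube.com/watch?v=VIDEO_ID"
```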
</details>
<details>
<summary>deep-research - Multi-Agent Research Orchestration</summary>
deep-research
Multi-agent orchestration workflow for deep research. Decomposes research goals into parallel sub-objectives, spawns claude -p sub-processes, aggregates results, and delivers polished reports.
**Triggered by**: "深度调研" ("deep research" in Chinese), "deep research", "wide research", "multi-agent research", or systematic research needs
**Key Features:**
**Multi-agent orchestration**: Splits research goals into parallel sub-tasks executed via claude -p
**Skills-first approach**: Prioritizes installed skills, then MCP tools (firecrawl → exa), then WebFetch/WebSearch
**Structured delivery**: Produces file-based reports with executive summaries, not chat messages
**Chapter-by-chapter refinement**: Iterative polishing with source verification
**Comprehensive logging**: Dispatcher logs, per-task logs, raw data caching
**Scale-aware execution**: Micro (1-2 tasks) to large (15+) with appropriate parallelization
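The fan-out step could look roughly like this hypothetical sketch (sub-topic names and output paths are illustrative; `claude -p` runs a single non-interactive prompt per sub-objective):

```shell
mkdir -p research/logs

# Dispatch sub-objectives as parallel headless sessions, 4 at a time.
printf '%s\n' market-sizing vendor-landscape open-source-stacks |
  xargs -P 4 -I{} sh -c 'claude -p "Research sub-topic: {}" > "research/logs/{}.md"'
```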
**Use Cases:**
Systematic web/document research
Competitive/industry analysis
Batch URL/dataset processing
Long-form writing with evidence integration
**Directory Structure:**
**Usage:**
</details>
<details>
<summary>kiro-skill - Interactive Feature Development</summary>
kiro-skill
Interactive feature development workflow from idea to implementation.
**Triggered by**: "kiro", or references to .kiro/specs/ directory
**Installation:**
**Workflow**:
**Requirements** → Define what needs to be built (EARS format with user stories)
**Design** → Determine how to build it (architecture, components, data models)
**Tasks** → Create actionable implementation steps (test-driven, incremental)
**Execute** → Implement tasks one at a time
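Each phase typically produces a spec file under `.kiro/specs/`; the layout below is an assumed illustration of that convention, not a guaranteed structure:

```
.kiro/specs/<feature-name>/
├── requirements.md   # EARS-format requirements with user stories
├── design.md         # architecture, components, data models
└── tasks.md          # incremental, test-driven implementation steps
```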
**Usage**:
</details>
<details>
<summary>spec-kit-skill - Constitution-Based Development</summary>
spec-kit-skill
GitHub Spec-Kit integration for constitution-based spec-driven development.
**Triggered by**: "spec-kit", "speckit", "constitution", "specify", or references to .specify/ directory
**Installation:**
…