feiskyer/codex-settings

Daily Info Dashboard · 2026-03-09

Category: Open-source project
Source: github_search
Score: 0
Published: 2026-03-09T01:37:32Z

AI Summary

A collection of configurations and custom prompts for the OpenAI Codex CLI has been published on GitHub. It integrates multi-model access, reusable prompts, and experimental skills, significantly improving the configurability and automation of AI coding workflows.
#GitHub #repo #open-source #Codex CLI #LiteLLM

Content Excerpt

OpenAI Codex CLI Settings and Custom Prompts

A curated collection of configurations, skills and custom prompts for OpenAI Codex CLI, designed to enhance your development workflow with various model providers and reusable prompt templates.
For Claude Code settings, skills, agents and custom commands, please refer to feiskyer/claude-code-settings.
Overview

This repository provides:
- **Flexible Configuration**: Support for multiple model providers (LiteLLM/Copilot proxy, ChatGPT subscription, Azure OpenAI, OpenRouter)
- **Custom Prompts**: Reusable prompt templates for common development tasks
- **Skills (Experimental)**: Discoverable instruction bundles for specialized tasks (image generation, YouTube transcription, spec-driven workflows)
- **Best Practices**: Pre-configured settings optimized for development workflows
- **Easy Setup**: Simple installation and configuration process
Quick Start
Installation
Using npx skills

`npx skills` can be used to install just the skills for your AI coding tools.
Basic Configuration

The default config.toml uses LiteLLM as a gateway. To use it:
1. Install LiteLLM and Codex CLI
2. Create a LiteLLM config file (full example: litellm_config.yaml)
3. Start the LiteLLM proxy
4. Run Codex
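As a sketch, a minimal LiteLLM config for step 2 might look like the following (the model names and the `github_copilot` provider id are assumptions; the repo's litellm_config.yaml is the full example):

```yaml
# Minimal LiteLLM proxy config (illustrative, not the repo's full litellm_config.yaml)
model_list:
  - model_name: gpt-5                # the name Codex will request
    litellm_params:
      model: github_copilot/gpt-5    # assumed provider/model id
```

Starting the proxy would then be `litellm --config litellm_config.yaml` (it listens on port 4000 by default), followed by running `codex`.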

 
Configuration Files
Main Configuration
- config.toml: Default configuration using the LiteLLM gateway
  - Model: gpt-5 via model_provider = "github" (Copilot proxy on http://localhost:4000)
  - Approval policy: on-request; reasoning summary: detailed; reasoning effort: high; raw agent reasoning visible
  - MCP servers: claude (local), exa (hosted), chrome (DevTools over npx)
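The bullets above roughly correspond to a config.toml like this sketch (key names follow Codex CLI's config format; the repo's actual file may differ):

```toml
model = "gpt-5"
model_provider = "github"
approval_policy = "on-request"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
show_raw_agent_reasoning = true

[model_providers.github]
name = "GitHub Copilot (LiteLLM proxy)"
base_url = "http://localhost:4000"
```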
Alternative Configurations

Located in configs/ directory:
- OpenAI ChatGPT: Use the ChatGPT subscription provider
- Azure OpenAI: Use the Azure OpenAI service provider
- GitHub Copilot: Use GitHub Copilot via the LiteLLM proxy
- OpenRouter: Use the OpenRouter provider

To use an alternative config:
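For example (the filename here is hypothetical; check the configs/ directory for the actual names):

```shell
cp configs/openrouter.toml ~/.codex/config.toml
```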
Custom Prompts

Custom prompts are stored in the prompts/ directory. Access them via the /prompts: slash menu in Codex.
- /prompts:deep-reflector - Analyze development sessions to extract learnings, patterns, and improvements for future interactions.
- /prompts:insight-documenter [breakthrough] - Capture and document significant technical breakthroughs into reusable knowledge assets.
- /prompts:instruction-reflector - Analyze and improve Codex instructions in AGENTS.md based on conversation history.
- /prompts:github-issue-fixer [issue-number] - Systematically analyze, plan, and implement fixes for GitHub issues with PR creation.
- /prompts:github-pr-reviewer [pr-number] - Perform thorough GitHub pull request code analysis and review.
- /prompts:ui-engineer [requirements] - Create production-ready frontend solutions with modern UI/UX standards.
- /prompts:prompt-creator [requirements] - Create Codex custom prompts with proper structure and best practices.
Creating Custom Prompts
1. Create a new .md file in ~/.codex/prompts/
2. Use argument placeholders:
   - $1 to $9: Positional arguments
   - $ARGUMENTS: All arguments joined by spaces
   - $$: Literal dollar sign
3. Restart Codex to load new prompts
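The placeholder rules can be illustrated with a small simulation (this is a sketch of the substitution semantics described above, not Codex's actual implementation):

```python
import re

def expand(template: str, args: list[str]) -> str:
    """Expand Codex-style prompt placeholders: $1..$9, $ARGUMENTS, $$."""
    def sub(m: re.Match) -> str:
        tok = m.group(0)
        if tok == "$$":
            return "$"                      # literal dollar sign
        if tok == "$ARGUMENTS":
            return " ".join(args)           # all arguments joined by spaces
        i = int(tok[1:]) - 1                # $1 maps to args[0]
        return args[i] if i < len(args) else ""
    return re.sub(r"\$\$|\$ARGUMENTS|\$[1-9]", sub, template)

print(expand("Fix issue $1 for $ARGUMENTS; cost is $$5", ["42", "repo"]))
# -> Fix issue 42 for 42 repo; cost is $5
```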
Skills (Experimental)

Skills are reusable instruction bundles that Codex automatically discovers at startup. Each skill has a name, description, and detailed instructions stored on disk. Codex injects only the metadata (name, description, path) into context; the body stays on disk until needed.
How to Use Skills

Skills are automatically loaded when Codex starts. To use a skill:
- **List all skills**: Use the /skills command to see all available skills
- **Invoke a skill**: Use $<skill-name> [prompt] to invoke a skill with an optional prompt
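For example (the skill name below comes from this repo; the prompt is made up):

```
/skills
$deep-research Survey open-source vector databases
```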

 

Skills are stored in ~/.codex/skills/**/SKILL.md. Only files named exactly SKILL.md are recognized.
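A minimal SKILL.md might look like the following sketch (the YAML-frontmatter layout is an assumption based on the metadata fields described above):

```markdown
---
name: my-skill
description: One-line summary that Codex injects into context at startup.
---

Detailed instructions live in the body and are only read from disk when the skill is invoked.
```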
Available Skills

<details>
<summary>claude-skill - Handoff task to Claude Code CLI</summary>
claude-skill

Non-interactive automation mode for hands-off task execution using Claude Code. Use when you want to leverage Claude Code to implement features or review code.

**Key Features:**
- Multiple permission modes (default, acceptEdits, plan, bypassPermissions)
- Autonomous execution without approval prompts
- Streaming progress updates
- Structured final summaries

**Requirements:** Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
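A hands-off invocation might look like this (flags are from the Claude Code CLI; the prompt and mode choice are illustrative):

```shell
claude -p "Implement the TODO in src/app.ts" \
  --permission-mode acceptEdits \
  --output-format stream-json
```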

</details>

<details>
<summary>autonomous-skill - Long-running task automation</summary>
autonomous-skill

Execute complex, long-running tasks across multiple sessions using a dual-agent pattern (Initializer + Executor) with automatic session continuation.

The runner inherits model selection from your active Codex config/profile and pins unattended execution through config overrides, rather than hardcoding a model or relying on --full-auto.

**Key Features:**
- Dual-agent pattern (Initializer creates task list, Executor completes tasks)
- Auto-continuation across sessions with progress tracking
- Task isolation with per-task directories (.autonomous/<task-name>/)
- Progress persistence via task_list.md and progress.md
- Non-interactive mode execution

**Usage:**

</details>

<details>
<summary>deep-research - Multi-agent deep research orchestration</summary>
deep-research

Multi-instance (multi-agent) orchestration workflow for deep research tasks. Breaks down research objectives into parallelizable sub-goals, runs child processes via codex exec, and aggregates results into polished reports.
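The dispatch-and-aggregate pattern can be sketched generically (here `echo` stands in for the `codex exec` child command; the function name and prompts are illustrative):

```python
import subprocess

def run_parallel(prompts, cmd=("echo",)):
    """Launch one child process per sub-goal, then collect their outputs."""
    procs = [subprocess.Popen([*cmd, p], stdout=subprocess.PIPE, text=True)
             for p in prompts]
    return [p.communicate()[0].strip() for p in procs]

# Aggregate the child outputs into a single report body.
report = "\n\n".join(run_parallel(["sub-goal A", "sub-goal B"]))
```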

**Key Features:**
- Parallel sub-task execution with codex exec in sandboxed environments
- Automatic aggregation and chapter-by-chapter refinement
- Structured output with deliverable reports (not chat messages)
- Progress tracking with detailed logs per sub-task
- Tool priority: skills → MCP (firecrawl → tavily) → direct fetch

**Use Cases:**
- Systematic web/document research
- Competitor/industry analysis
- Batch link/dataset retrieval
- Long-form writing with evidence integration

**Workflow:**
1. Pre-execution planning & reconnaissance
2. Sub-goal identification and task decomposition
3. Generate dispatch scripts with parallel execution
4. Design child process prompts
5. Parallel execution w…