
mcp-prompt-optimizer

prompt-optimizer · 116 · SEE LICENSE IN LICENSE · 1.5.0

Professional cloud-based MCP server for AI-powered prompt optimization with intelligent context detection, Bayesian optimization, AG-UI real-time optimization, template auto-save, optimization insights, personal model configuration via WebUI, and team collaboration.

mcp, mcp-server, prompt, optimization, ai, claude, anthropic, openrouter, model-configuration, personal-models, cursor, windsurf, cline, professional, subscription, cloud-based, context-detection, templates, template-auto-save, optimization-insights, analytics, team-collaboration, enterprise-grade, intelligent-routing, ai-context, api-key-management, startup-validation, caching, robust-error-handling, cli-commands, professional-tooling, network-resilience, production-ready, development-mode, fallback-support, cross-platform, windows, macos, linux, arm64, bayesian-optimization, ag-ui-real-time, streaming-optimization, websocket-support, performance-optimization, advanced-analytics, intelligent-context, ai-aware-rules, parameter-tuning, optimization-strategies

readme

MCP Prompt Optimizer

🚀 Professional cloud-based MCP server for AI-powered prompt optimization with intelligent context detection, template management, team collaboration, enterprise-grade features, and optional personal model configuration. Starting at $2.99/month.

✨ Key Features

🧠 AI Context Detection - Automatically detects and optimizes for image generation, LLM interaction, and technical automation contexts
📁 Template Management - Auto-save high-confidence optimizations, search & reuse patterns
👥 Team Collaboration - Shared quotas, team templates, role-based access
📊 Real-time Analytics - Confidence scoring, usage tracking, optimization insights
☁️ Cloud Processing - Always up-to-date AI models, no local setup required
🎛️ Personal Model Choice - Use your own OpenRouter models via WebUI configuration
🔧 Universal MCP - Works with Claude Desktop, Cursor, Windsurf, Cline, VS Code, Zed, Replit

🚀 Quick Start

1. Install the MCP server:

npm install -g mcp-prompt-optimizer

2. Get your API key: Visit https://promptoptimizer-blog.vercel.app/pricing for your free trial (5 optimizations)

3. Configure Claude Desktop: Add to your ~/.claude/claude_desktop_config.json:

{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["mcp-prompt-optimizer"],
      "env": {
        "OPTIMIZER_API_KEY": "sk-opt-your-key-here"
      }
    }
  }
}

4. Restart Claude Desktop and start optimizing with AI context awareness!

5. (Optional) Configure custom models - See Advanced Model Configuration below

🎛️ Advanced Model Configuration (Optional)

WebUI Model Selection & Personal OpenRouter Keys

Want to use your own AI models? Configure them in the WebUI first, then the NPM package automatically uses your settings!

Step 1: Configure in WebUI

  1. Visit Dashboard: https://promptoptimizer-blog.vercel.app/dashboard
  2. Go to Settings → User Settings
  3. Add OpenRouter API Key: Get one from OpenRouter.ai
  4. Select Your Models:
    • Optimization Model: e.g., anthropic/claude-3-5-sonnet (for prompt optimization)
    • Evaluation Model: e.g., google/gemini-pro-1.5 (for quality assessment)

Step 2: Use NPM Package

Your configured models are automatically used by the MCP server - no additional setup needed!

{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["mcp-prompt-optimizer"],
      "env": {
        "OPTIMIZER_API_KEY": "sk-opt-your-key-here"  // Your service API key
      }
    }
  }
}

Model Selection Priority

1. 🎯 Your WebUI-configured models (highest priority)
2. 🔧 Request-specific model (if specified)
3. ⚙️ System defaults (fallback)
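
As a rough sketch, the priority order above amounts to picking the first configured source. The helper below is illustrative only (hypothetical names, not part of the published package):

```javascript
// Hypothetical sketch of the model-selection priority described above:
// a WebUI-configured model wins, then a request-specific model, then the
// system default.
function resolveModel({ webuiModel, requestModel, systemDefault }) {
  // ?? falls through only on null/undefined, so an explicitly
  // configured model is always preferred.
  return webuiModel ?? requestModel ?? systemDefault;
}

// Example: a WebUI-configured model wins over a request-specific one.
const model = resolveModel({
  webuiModel: "anthropic/claude-3-5-sonnet",
  requestModel: "openai/gpt-4o-mini",
  systemDefault: "system-default",
});
```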

Benefits of Personal Model Configuration

Cost Control - Pay for your own OpenRouter usage
Model Choice - Access 100+ models (Claude, GPT-4, Gemini, Llama, etc.)
Performance - Choose faster or more capable models
Consistency - Same models across WebUI and MCP tools
Privacy - Your data goes through your OpenRouter account

Example Model Recommendations

For Creative/Complex Prompts:

  • Optimization: anthropic/claude-3-5-sonnet
  • Evaluation: google/gemini-pro-1.5

For Fast/Simple Optimizations:

  • Optimization: openai/gpt-4o-mini
  • Evaluation: openai/gpt-3.5-turbo

For Technical/Code Prompts:

  • Optimization: anthropic/claude-3-5-sonnet
  • Evaluation: anthropic/claude-3-haiku

Important Notes

🔑 Two Different API Keys:

  • Service API Key (sk-opt-*): For the MCP service subscription
  • OpenRouter API Key: For your personal model usage (configured in WebUI)

💰 Cost Structure:

  • Service subscription: Monthly fee for optimization features
  • OpenRouter usage: Pay-per-token for your chosen models

🔄 No NPM Package Changes Needed: When you update models in WebUI, the NPM package automatically uses the new settings!


💰 Cloud Subscription Plans

All plans include the same sophisticated AI optimization quality

🎯 Explorer - $2.99/month

  • 5,000 optimizations per month
  • Individual use (1 user, 1 API key)
  • Full AI features - context detection, template management, insights
  • Personal model configuration via WebUI
  • Community support

✨ Creator
  • 18,000 optimizations per month
  • Team features (2 members, 3 API keys)
  • Full AI features - context detection, template management, insights
  • Personal model configuration via WebUI
  • Priority processing + email support

🚀 Innovator - $69.99/month

  • 75,000 optimizations per month
  • Large teams (5 members, 10 API keys)
  • Full AI features - context detection, template management, insights
  • Personal model configuration via WebUI
  • Advanced analytics + priority support + dedicated support channel

🆓 Free Trial: 5 optimizations with full feature access

🧠 AI Context Detection & Enhancement

The server automatically detects your prompt type and enhances optimization goals:

🎨 Image Generation Context

Detected patterns: --ar, --v, midjourney, dall-e, photorealistic, 4k

Input: "A beautiful landscape --ar 16:9 --v 6"
✅ Enhanced goals: parameter_preservation, keyword_density, technical_precision
✅ Preserves technical parameters (--ar, --v, etc.)
✅ Optimizes quality keywords and visual descriptors

🤖 LLM Interaction Context

Detected patterns: act as, you are a, role:, persona:, behave like

Input: "Act as a professional writer and help me with..."
✅ Enhanced goals: context_specificity, token_efficiency, actionability
✅ Improves role clarity and instruction precision
✅ Optimizes for better AI understanding

⚙️ Technical Automation Context

Detected patterns: def, function, execute, script, automation

Input: "Create a script to automate deployment process"
✅ Enhanced goals: technical_accuracy, parameter_preservation, precision
✅ Protects code elements and technical syntax
✅ Enhances technical precision and clarity

💬 Human Communication Context (Default)

All other prompts get standard optimization for human readability and clarity.
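
A minimal sketch of how the pattern-based detection above could work, using the detected patterns listed in each section (illustrative only; the actual service uses a more sophisticated weighted scoring system with negative patterns):

```javascript
// Hypothetical sketch: match each context's detected patterns in order,
// falling back to human_communication as the default.
const CONTEXT_PATTERNS = {
  image_generation: /--ar|--v|midjourney|dall-e|photorealistic|4k/i,
  llm_interaction: /\bact as\b|\byou are a\b|role:|persona:|behave like/i,
  technical_automation: /\bdef\b|\bfunction\b|\bexecute\b|\bscript\b|\bautomation\b/i,
};

function detectContext(prompt) {
  for (const [context, pattern] of Object.entries(CONTEXT_PATTERNS)) {
    if (pattern.test(prompt)) return context;
  }
  return "human_communication"; // default context
}
```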

📊 Enhanced Optimization Features

Professional Optimization (All Users)

🎯 Optimized Prompt

Create a comprehensive technical blog post about artificial intelligence that systematically explores current real-world applications, evidence-based benefits, existing limitations and challenges, and data-driven future implications for businesses and society.

Confidence: 87.3%
Plan: Creator
AI Context: Human Communication
Goals Enhanced: Yes (clarity → clarity, specificity, actionability)

🧠 AI Context Benefits Applied
- ✅ Standard optimization rules applied
- ✅ Human communication optimized

✅ Auto-saved as template (ID: tmp_abc123)
*High-confidence optimization automatically saved for future use*

📋 Similar Templates Found
1. AI Article Writing Template (92.1% similarity)
2. Technical Blog Post Structure (85.6% similarity)
*Use `search_templates` tool to explore your template library*

📊 Optimization Insights

Performance Analysis:
- Clarity improvement: +21.9%
- Specificity boost: +17.3%  
- Length optimization: +15.2%

Prompt Analysis:
- Complexity level: intermediate
- Optimization confidence: 87.3%

AI Recommendations:
- Optimization achieved 87.3% confidence
- Template automatically saved for future reference
- Prompt optimized from 15 to 23 words

*Professional analytics and improvement recommendations*

---
*Professional cloud-based AI optimization with context awareness*
💡 Manage account & configure models: https://promptoptimizer-blog.vercel.app/dashboard
📊 Check quota: Use `get_quota_status` tool
🔍 Search templates: Use `search_templates` tool

🔧 Universal MCP Client Support

Claude Desktop

{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["mcp-prompt-optimizer"],
      "env": {
        "OPTIMIZER_API_KEY": "sk-opt-your-key-here"
      }
    }
  }
}

Cursor IDE

Add to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx", 
      "args": ["mcp-prompt-optimizer"],
      "env": {
        "OPTIMIZER_API_KEY": "sk-opt-your-key-here"
      }
    }
  }
}

Windsurf

Configure in the IDE settings or add the server to Windsurf's MCP configuration file.

Other MCP Clients

  • Cline: Standard MCP configuration
  • VS Code: MCP extension setup
  • Zed: MCP server configuration
  • Replit: Environment variable setup
  • JetBrains IDEs: MCP plugin configuration
  • Emacs/Vim/Neovim: MCP client setup

🛠️ Available Tools

optimize_prompt

Professional AI optimization with context detection

{
  "prompt": "Your prompt text",
  "goals": ["clarity", "specificity"], // Optional
  "ai_context": "llm_interaction" // Auto-detected if not specified
}

get_quota_status

Check subscription status and usage

// No parameters needed

search_templates

Search your saved template library

{
  "query": "blog post", // Optional
  "ai_context": "human_communication", // Optional filter
  "limit": 5 // Max results
}

🎯 Optimization Goals

  • clarity - Make prompts clearer and more understandable
  • conciseness - Remove unnecessary words while preserving meaning
  • creativity - Enhance creative and imaginative aspects
  • specificity - Add specific details and concrete examples
  • actionability - Make prompts more directive and actionable
  • technical_accuracy - Improve technical precision and terminology
  • keyword_density - Optimize keyword placement and density
  • parameter_preservation - Preserve technical parameters (for image gen/code)
  • context_specificity - Enhance context-specific clarity
  • token_efficiency - Optimize for AI token usage

🤖 AI Context Types

  • human_communication - Emails, letters, social content
  • llm_interaction - AI conversations, role-playing prompts
  • image_generation - Image/art generation prompts (Midjourney, DALL-E)
  • technical_automation - DevOps, system administration, scripts
  • structured_output - JSON, data formats, templates
  • code_generation - Programming and development prompts
  • api_automation - API calls, integrations, workflows

🔧 Professional CLI Tools

Enhanced command-line tools for power users:

# Check API key and quota status
mcp-prompt-optimizer check-status

# Validate API key with backend
mcp-prompt-optimizer validate-key

# Test backend integration
mcp-prompt-optimizer test

# Run comprehensive diagnostic
mcp-prompt-optimizer diagnose

# Clear validation cache
mcp-prompt-optimizer clear-cache

# Show help and setup instructions
mcp-prompt-optimizer help

# Show version information
mcp-prompt-optimizer version

🏢 Team Collaboration Features

Team API Keys (sk-team-*)

  • Shared quotas across team members
  • Centralized billing and management
  • Team template libraries for consistency
  • Role-based access control
  • Team usage analytics

Individual API Keys (sk-opt-*)

  • Personal quotas and billing
  • Individual template libraries
  • Personal usage tracking
  • Account self-management

🔐 Security & Privacy

  • Enterprise-grade security with encrypted data transmission
  • API key validation with secure backend authentication
  • Quota enforcement with real-time usage tracking
  • Professional uptime with 99.9% availability SLA
  • GDPR compliant data handling and processing
  • No data retention - prompts processed and optimized immediately

📈 Advanced Features

Automatic Template Management

  • Auto-save high-confidence optimizations (>70% confidence)
  • Intelligent categorization by AI context and content type
  • Similarity search to find related templates
  • Template analytics with usage patterns and effectiveness
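
The >70% auto-save rule above can be pictured as a simple threshold check (hypothetical helper names, not the package's code):

```javascript
// Auto-save gate for the >70% confidence threshold described above.
const AUTO_SAVE_THRESHOLD = 0.7;

function shouldAutoSave(confidence) {
  // Strictly greater than the threshold, per ">70% confidence".
  return confidence > AUTO_SAVE_THRESHOLD;
}
```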

Real-time Optimization Insights

  • Performance metrics - clarity, specificity, length improvements
  • Confidence scoring with detailed analysis
  • AI-powered recommendations for continuous improvement
  • Usage analytics and optimization patterns

Intelligent Context Routing

  • Automatic detection of prompt context and intent
  • Goal enhancement based on detected context
  • Parameter preservation for technical prompts
  • Context-specific optimizations for better results

🚀 Getting Started

🏃‍♂️ Fast Start (System Defaults)

  1. Sign up at promptoptimizer-blog.vercel.app/pricing
  2. Install the MCP server: npm install -g mcp-prompt-optimizer
  3. Configure your MCP client with your API key
  4. Start optimizing with intelligent AI context detection!

🎛️ Advanced Start (Custom Models)

  1. Sign up at promptoptimizer-blog.vercel.app/pricing
  2. Configure your OpenRouter key & models in the WebUI dashboard
  3. Install the MCP server: npm install -g mcp-prompt-optimizer
  4. Configure your MCP client with your API key
  5. Enjoy enhanced optimization with your chosen models!

📞 Support & Resources

🌟 Why Choose MCP Prompt Optimizer?

Professional Quality - Enterprise-grade optimization with consistent results
Universal Compatibility - Works with 10+ MCP clients out of the box
AI Context Awareness - Intelligent optimization based on prompt type
Personal Model Choice - Use your own OpenRouter models & pay-per-use
Template Management - Build and reuse optimization patterns
Team Collaboration - Shared resources and centralized management
Real-time Analytics - Track performance and improvement over time
Startup Validation - Comprehensive error handling and troubleshooting
Professional Support - From community to enterprise-level assistance


🚀 Professional MCP Server - Built for serious AI development with intelligent context detection, comprehensive template management, personal model configuration, and enterprise-grade reliability.

Get started with 5 free optimizations at promptoptimizer-blog.vercel.app/pricing

changelog

Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[1.5.0] - 2025-09-25

Added

  • 🧠 Bayesian Optimization Support: Advanced parameter tuning and performance prediction
  • AG-UI Real-Time Features: Streaming optimization and WebSocket support
  • 🎯 Enhanced AI Context Detection: Improved weighted scoring system with 7 contexts
  • 📊 Advanced Analytics: New get_optimization_insights tool for Bayesian metrics
  • 🚀 Real-Time Status: New get_real_time_status tool for live optimization monitoring
  • 🔧 Feature Flags: ENABLE_BAYESIAN_OPTIMIZATION and ENABLE_AGUI_FEATURES environment variables
  • 📋 Enhanced Template Search: AI-aware filtering by sophistication, complexity, and strategy
  • 🎨 Rich Formatting: Improved output formatting with better visual organization

Changed

  • 🔄 Backend API Alignment: Updated to align with FastAPI Backend production-v2.1.0-bayesian
  • 🎯 Context Detection: Upgraded algorithm with weighted scoring and negative patterns
  • 📊 Quota Display: Enhanced quota status with visual indicators and feature breakdown
  • 🔍 Template Search: Expanded search parameters and improved result formatting
  • 🚀 Startup Process: Enhanced validation with feature status reporting

Fixed

  • API Endpoints: Corrected backend endpoint URLs for full compatibility
  • 🛡️ Error Handling: Improved fallback mechanisms for network issues
  • 📝 Template Display: Fixed template preview and confidence score formatting
  • 🔧 Environment Variables: Better handling of feature flag defaults

Technical

  • 📦 Dependencies: Updated to latest MCP SDK version
  • 🏗️ Architecture: Modular feature system with conditional tool loading
  • 🧪 Testing: Enhanced mock responses for development mode
  • 📖 Documentation: Updated tool descriptions and parameter schemas

Backend Compatibility

  • API Version: v1 (aligned with FastAPI backend)
  • Endpoint Mapping: /api/v1/mcp/* endpoints fully supported
  • Feature Parity: All backend features now accessible via MCP
  • Error Codes: Proper HTTP status code handling and user-friendly messages

[1.4.2] - 2025-09-17

Added

  • Basic template search functionality
  • Improved error handling for network issues
  • Development mode support

Changed

  • Updated API key validation process
  • Enhanced quota status display

Fixed

  • Connection timeout issues
  • Cache expiration handling

[1.4.1] - 2025-09-15

Fixed

  • API key format validation
  • Template auto-save threshold

[1.4.0] - 2025-09-10

Added

  • Template auto-save feature
  • Basic optimization insights
  • Cross-platform support improvements

Changed

  • Improved context detection
  • Enhanced error messages

[1.3.x] - Previous Versions

Historical versions with basic prompt optimization functionality.