Configure endpoints for OpenAI, Anthropic, and local LLMs.

OpenAI Provider

Configure the OpenAI-compatible endpoint for /openai/v1/* requests.
providers:
  openai:
    base_url: https://api.openai.com/v1
    # api_key: ${OPENAI_API_KEY}  # Optional fallback
Option | Description
base_url | API endpoint (any OpenAI-compatible URL)
api_key | Optional. Used if the client doesn't send an Authorization header
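
The `${OPENAI_API_KEY}` syntax refers to an environment variable. PasteGuard's exact expansion rules aren't specified here, but a minimal sketch of the usual interpretation (a `${VAR}` placeholder replaced with the variable's value, with unset variables expanding to an empty string; `expand_env` is a hypothetical helper, not part of PasteGuard):

```python
import os
import re


def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment variable values.

    Unset variables expand to an empty string (an assumption, not
    documented behavior).
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)


# Demo: expand the placeholder used in the config above.
os.environ["OPENAI_API_KEY"] = "sk-test"
print(expand_env("${OPENAI_API_KEY}"))  # sk-test
```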

Compatible APIs

Any OpenAI-compatible API works:
# OpenAI
providers:
  openai:
    base_url: https://api.openai.com/v1

# Azure OpenAI
providers:
  openai:
    base_url: https://your-resource.openai.azure.com/openai/v1

# OpenRouter
providers:
  openai:
    base_url: https://openrouter.ai/api/v1
    api_key: ${OPENROUTER_API_KEY}

# LiteLLM Proxy (self-hosted)
providers:
  openai:
    base_url: http://localhost:4000

# Together AI
providers:
  openai:
    base_url: https://api.together.xyz/v1

# Groq
providers:
  openai:
    base_url: https://api.groq.com/openai/v1

Anthropic Provider

Configure the Anthropic endpoint for /anthropic/v1/* requests.
providers:
  anthropic:
    base_url: https://api.anthropic.com
    # api_key: ${ANTHROPIC_API_KEY}  # Optional fallback
Option | Description
base_url | Anthropic API endpoint
api_key | Optional. Used if the client doesn't send an x-api-key header

Local LLM

Required only in route mode. Defines the local LLM that handles requests containing PII.
local:
  type: ollama
  base_url: http://localhost:11434
  model: llama3.2
Option | Description
type | ollama or openai (for OpenAI-compatible servers)
base_url | Local LLM endpoint
model | Model to use for all PII requests
api_key | Only needed for OpenAI-compatible servers that require one
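
In route mode, PasteGuard sends PII requests to this local backend. The internal dispatch isn't shown here, but a hedged sketch of how the `type` field could select the chat endpoint: Ollama exposes its native `/api/chat` route, while OpenAI-compatible servers (vLLM, llama.cpp, LocalAI) expose `/chat/completions` under the configured base_url. `chat_endpoint` is a hypothetical helper for illustration, not PasteGuard's actual code:

```python
def chat_endpoint(local_cfg: dict) -> str:
    """Build the chat URL for the configured local backend.

    Assumes Ollama's native /api/chat route and the standard
    /chat/completions path for OpenAI-compatible servers.
    """
    base = local_cfg["base_url"].rstrip("/")
    if local_cfg["type"] == "ollama":
        return f"{base}/api/chat"
    return f"{base}/chat/completions"


# The Ollama and vLLM configs from this section:
print(chat_endpoint({"type": "ollama", "base_url": "http://localhost:11434"}))
# http://localhost:11434/api/chat
print(chat_endpoint({"type": "openai", "base_url": "http://localhost:8000/v1"}))
# http://localhost:8000/v1/chat/completions
```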

Ollama

local:
  type: ollama
  base_url: http://localhost:11434
  model: llama3.2

vLLM

local:
  type: openai
  base_url: http://localhost:8000/v1
  model: meta-llama/Llama-2-7b-chat-hf

llama.cpp

local:
  type: openai
  base_url: http://localhost:8080/v1
  model: local

LocalAI

local:
  type: openai
  base_url: http://localhost:8080/v1
  model: your-model
  api_key: ${LOCAL_API_KEY}  # if required

API Key Handling

PasteGuard forwards your client’s authentication headers to OpenAI or Anthropic. You can optionally set api_key in config as a fallback:
providers:
  openai:
    base_url: https://api.openai.com/v1
    api_key: ${OPENAI_API_KEY}  # Used if client doesn't send auth

  anthropic:
    base_url: https://api.anthropic.com
    api_key: ${ANTHROPIC_API_KEY}  # Used if client doesn't send x-api-key
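
The fallback behavior above can be sketched in a few lines. `resolve_auth` is a hypothetical helper for illustration, not PasteGuard's actual implementation: it forwards the client's credential when present, and otherwise builds one from the configured key (as a Bearer token for OpenAI, or as a raw x-api-key value for Anthropic):

```python
from typing import Optional


def resolve_auth(
    client_headers: dict,
    fallback_key: Optional[str],
    header: str = "Authorization",
) -> dict:
    """Pick the credential to send upstream.

    Client-supplied headers win; the configured api_key is only a fallback.
    """
    if header in client_headers:
        return {header: client_headers[header]}
    if fallback_key:
        if header == "Authorization":
            return {header: f"Bearer {fallback_key}"}
        return {header: fallback_key}
    return {}


# Client sent its own key: forwarded unchanged.
print(resolve_auth({"Authorization": "Bearer sk-client"}, "sk-fallback"))
# No client header: fall back to the configured key.
print(resolve_auth({}, "sk-fallback"))
```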