
LLM Providers

LLM providers supply the AI models that power ReArch conversations. Before anyone on your team can send a message to an AI agent, at least one provider must be configured with a valid API key.

ReArch supports multiple providers simultaneously. Users select which model to use from the model selector in the chat interface.

| Provider | Models | API Key Source |
| --- | --- | --- |
| Anthropic | Claude Opus, Sonnet, Haiku | console.anthropic.com |
| Google | Gemini Pro, Flash, Ultra | aistudio.google.com |
| OpenAI | GPT-4o, GPT-4, GPT-3.5 | platform.openai.com |
| Custom / OpenAI-compatible | Any model behind an OpenAI-compatible API | Varies |
To add a provider:

  1. Navigate to Administration > Settings > LLM Providers.
  2. Click Add Provider.
  3. Fill in the configuration:

| Field | Description |
| --- | --- |
| Provider | Select from the supported providers list. |
| Name | A display name for this provider configuration (e.g., “Anthropic Production”). |
| API Key | The API key from your provider account. |
| Base URL | (Optional) Override the default API endpoint. Useful for proxies or self-hosted models. |
| Models | Select which models from this provider should be available to users. |

  4. Click Save.

The provider is immediately available. Users will see the enabled models in the model selector on their next conversation.
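For example, a Custom / OpenAI-compatible provider pointed at a locally hosted model server might be configured with values like the following. The field names and values here are illustrative only; enter the equivalents in the form fields above.

```json
{
  "provider": "custom",
  "name": "Local Ollama",
  "baseUrl": "http://localhost:11434/v1",
  "apiKey": "ollama",
  "models": ["llama3.1:8b"]
}
```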

API keys are encrypted at rest using AES-256-GCM before being stored in MongoDB. The encryption key is derived from the ENCRYPTION_KEY environment variable (a 64-character hex string). If ENCRYPTION_KEY is not set, ReArch falls back to deriving a key from JWT_SECRET.

To generate an encryption key:

```sh
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
```
To edit a provider:

  1. Navigate to Administration > Settings > LLM Providers.
  2. Click the Edit button on the provider you want to modify.
  3. Update the fields. To change the API key, enter a new one — the existing key is not displayed.
  4. Click Save.
To delete a provider:

  1. Navigate to Administration > Settings > LLM Providers.
  2. Click the Delete button on the provider.
  3. Confirm the deletion.

Removing a provider does not affect existing conversations that used its models. Historical messages and costs are preserved. New messages can no longer use models from the deleted provider.

ReArch tracks AI usage costs per conversation. After each agent interaction, the backend calculates the cost based on:

  • Input tokens — The number of tokens in the prompt sent to the model.
  • Output tokens — The number of tokens in the model’s response.
  • Reasoning tokens — Additional tokens used for chain-of-thought reasoning (model-dependent).
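A per-message cost calculation over those three token counts can be sketched as follows. The rate table and the convention of billing reasoning tokens at the output rate are assumptions for illustration — actual prices come from each provider, and billing conventions vary by model.

```javascript
// Placeholder rates in USD per million tokens; not real provider prices.
const RATES = {
  'example-model': { inputPerMillion: 3.0, outputPerMillion: 15.0 },
};

// usage: { inputTokens, outputTokens, reasoningTokens? }
// Reasoning tokens are billed here at the output rate, a common
// (but not universal) provider convention.
function messageCost(model, usage) {
  const r = RATES[model];
  const inputCost = (usage.inputTokens / 1e6) * r.inputPerMillion;
  const outputTotal = usage.outputTokens + (usage.reasoningTokens ?? 0);
  const outputCost = (outputTotal / 1e6) * r.outputPerMillion;
  return inputCost + outputCost;
}
```

Summing `messageCost` over every agent interaction in a conversation gives the per-conversation total shown in the session sidebar.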

Costs are displayed in the session sidebar during a conversation and aggregated in the Usage Analytics dashboard.

You can configure multiple providers simultaneously. This is useful for:

  • Cost optimization — Use a cheaper model for simple tasks and a more capable model for complex ones.
  • Redundancy — If one provider has an outage, users can switch to another.
  • Evaluation — Compare model quality across providers on the same codebase.

Users select their preferred model from the model selector in the chat interface. The selection persists for the current conversation but can be changed at any time.