
Key Concepts & Glossary

This page defines the key terms used throughout the ReArch documentation. If you encounter an unfamiliar term, check here first.

Conversation

An interactive session between a user and an AI coding agent. Each conversation is backed by a dedicated Docker container linked to a specific repository. Conversations track messages, agent responses, file changes, costs, and pull requests.

Resource

A connection to a git provider (GitHub or Bitbucket). A resource stores workspace credentials and acts as a parent for imported repositories. Each resource represents one provider workspace.

Repository

A specific git repository imported from a resource. Repositories must be enabled before they can be used in conversations. Each repository has a configured branch, a container template (or custom .rearch/ folder), and a built Docker image.

Container Template

A pre-built environment definition that ReArch injects into a repository’s Docker build. Templates provide a standard development environment without requiring changes to the repository.

| Template | What it includes |
| --- | --- |
| Minimal | VS Code (code-server) + AI coding agent |
| Node.js | Minimal + Node.js runtime with dev server |
| Node.js + Browser | Node.js + Playwright with Chromium for visual verification |

Custom .rearch/ Folder

An alternative to templates. A directory at the root of your repository containing a Dockerfile, entrypoint.sh, and optionally custom agent tools. This gives full control over the container environment, including databases, custom services, and initialization logic.
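As an illustration, a minimal .rearch/ setup might look like the following. The Dockerfile and entrypoint.sh file names come from the description above; the base image, installed packages, and paths are assumptions, not a required layout:

```dockerfile
# Illustrative .rearch/Dockerfile -- contents are a sketch, not a prescribed layout.
FROM node:20-bookworm

# Example of extra tooling a custom environment might add
# (here: PostgreSQL client tools for a database-backed project).
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace

# entrypoint.sh carries project-specific initialization logic
# (starting services, running migrations, seeding data, ...).
COPY .rearch/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```

The key difference from a template is that everything in the image, including any long-running services, is under the repository's control rather than ReArch's.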

Skill

A reusable prompt template that can be attached to conversations or suggested to users. Skills define structured instructions that guide the AI agent’s behavior for specific tasks (e.g., “Write unit tests”, “Refactor to TypeScript”).

Guardrail

A configuration that constrains AI agent behavior. Guardrails define what the agent is allowed or not allowed to do within a conversation, such as restricting file modifications to specific directories or preventing destructive operations.

MCP Server

A service that implements the Model Context Protocol, providing tools that AI agents can call. Examples include GitHub (for issue management), Sentry (for error tracking), and Brave Search (for web searches). MCP servers are configured centrally and shared across all conversations via the MCP proxy.
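ReArch's own admin-side configuration format is not shown here, but many MCP clients declare upstream servers with a JSON shape like the following sketch. The package names are real published MCP servers; the overall structure and environment-variable placeholders are illustrative assumptions, not ReArch settings:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "${BRAVE_API_KEY}" }
    }
  }
}
```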

MCP Proxy

A standalone ReArch service that connects to multiple upstream MCP servers and exposes them as a single endpoint to conversation containers. This avoids duplicating credentials and connections across containers.

Model Provider

A configured AI model provider (Anthropic, Google, OpenAI, etc.) with API credentials stored encrypted in the database. Administrators add providers through the admin panel, and users select which model to use when sending messages.

Docker Swarm

The container orchestration mode used for production deployments. ReArch uses Docker Swarm for service replication, rolling updates, and overlay networking.
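As a sketch of what Swarm-mode replication, rolling updates, and overlay networking look like in a compose file (the service name, image name, and replica count here are hypothetical, not ReArch's actual stack definition):

```yaml
services:
  rearch-api:
    image: rearch/api:latest   # hypothetical image name
    networks:
      - rearch-net
    deploy:
      replicas: 2              # service replication
      update_config:
        parallelism: 1
        order: start-first     # rolling update: start the new task before stopping the old one

networks:
  rearch-net:
    driver: overlay            # overlay network spanning Swarm nodes
```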

Traefik

The reverse proxy that routes incoming requests to the correct service based on subdomain. Traefik also handles TLS termination, automatic Let’s Encrypt certificates, and forward-auth middleware for protecting conversation containers.

Forward Auth

A Traefik middleware pattern where requests to conversation containers are first verified by oauth2-proxy against Keycloak. Only authenticated users can access container services (VS Code, application previews).
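For illustration, a forward-auth middleware of this kind can be declared in Traefik dynamic configuration roughly as follows. The middleware name, oauth2-proxy address, and response headers are assumptions about the deployment, not values taken from ReArch:

```yaml
http:
  middlewares:
    conversation-auth:
      forwardAuth:
        # Traefik sends each incoming request here first; a 2xx response
        # lets the request through, anything else is returned to the client.
        address: "http://oauth2-proxy:4180/oauth2/auth"
        trustForwardHeader: true
        authResponseHeaders:
          - "X-Auth-Request-User"
          - "X-Auth-Request-Email"
```

A conversation container's router would then reference this middleware so that every request passes through oauth2-proxy (and thus Keycloak) before reaching code-server or an application preview.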

BullMQ

The Redis-backed job queue used by ReArch for background processing. BullMQ handles container provisioning, image builds, and scheduled cleanup tasks.

code-server

An open-source implementation of VS Code that runs in the browser. Each conversation container includes code-server, allowing users to browse and edit files directly alongside the AI agent.

OpenCode

The AI coding agent that runs inside each conversation container. OpenCode receives natural language instructions, reads and modifies files, runs commands, and reports results back through the chat interface.