MCP Gateway Stack

This repository now includes an Arion-managed MCP gateway runtime for long-lived local services. Arion evaluates Nix modules and drives Docker Compose under the hood.

Stack definition

  • arion-compose.nix
  • arion-pkgs.nix
  • docker/mcp-router/nginx.conf
  • docker/mcp-gateway/catalogs/agent-hub.yaml
  • docker/mcp-gateway/registry/all.yaml
  • docker/mcp-gateway/config.yaml
  • docker/mcp-gateway/tools.yaml

The stack is declarative and committed to the repository. It runs:

  • traefik as the HTTP ingress on 0.0.0.0:${TRAEFIK_HTTP_PORT:-8811}
  • mcp-router (Nginx) behind Traefik for MCP path routing
  • mcp-gateway behind /mcp
  • mcp-nixos-http behind /mcp/nixos

Optional bundles can also run as part of the same stack:

  • AI local bundle:
      • llama-cpp
      • whisper-cpp
      • llm-proxy (LiteLLM)
      • open-webui
  • Dify bundle:
      • dify-web
      • dify-api
      • dify-worker
      • dify-sandbox
      • dify-postgres
      • dify-redis
      • minio

When llama-cpp is included (the ai or full bundles), startup bootstraps a default small model into .data/llama-cpp/models/ if it is missing. Override the defaults with LLAMA_CPP_MODEL_FILE and LLAMA_CPP_MODEL_URL.

When whisper-cpp is included (the ai or full bundles), startup bootstraps a default STT model into .data/whisper-cpp/models/ if it is missing. Override the defaults with WHISPER_CPP_MODEL_FILE and WHISPER_CPP_MODEL_URL.
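Both bootstraps follow the same "download only if missing" pattern. The sketch below is a hypothetical reimplementation for illustration; the function name is an assumption, and the real logic lives in the stack's startup scripts:

```shell
# Hypothetical sketch of the download-if-missing bootstrap described
# above; not the repo's actual startup code.
bootstrap_model() {
  model_dir=$1; model_file=$2; model_url=$3
  mkdir -p "$model_dir"
  # An already-present model file short-circuits the download.
  if [ -f "$model_dir/$model_file" ]; then
    echo "model already present: $model_dir/$model_file"
    return 0
  fi
  echo "fetching $model_url"
  curl -fsSL -o "$model_dir/$model_file" "$model_url"
}

# e.g. bootstrap_model .data/llama-cpp/models \
#        "$LLAMA_CPP_MODEL_FILE" "$LLAMA_CPP_MODEL_URL"
```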

Gateway state is persisted under .data/mcp-gateway. Optional bundle state is persisted under .data/openwebui, .data/llama-cpp, .data/whisper-cpp, .data/dify, and .data/minio.

Commands

Use the existing Just entrypoints:

just mcp-up
just mcp-up-ai
just mcp-up-dify
just mcp-up-all
just stack-up-all
just mcp-down
just mcp-status
just mcp-logs
just mcp-topology

Command intent:

  • just mcp-up-all: MCP-only behavior. Enables all MCP servers in .mcp.json and brings up the MCP runtime stack.
  • just stack-up-all: full platform behavior. Brings up MCP plus all optional HTTP AI bundles (OpenWebUI, llama.cpp, whisper.cpp, proxy, Dify, Postgres, Redis, MinIO).

These route through scripts/mcp-compose.sh, which executes the flake runner mcp-runtime, and are intended to run from the hermetic Nix shell via the existing just enter-nix behavior.

Current runtime defaults

  • Traefik HTTP port: ${TRAEFIK_HTTP_PORT:-8811}
  • Secrets path in container: /workspace/.env
  • Gateway routes are explicitly declared in arion-compose.nix
  • Server definitions are loaded from local catalog docker/mcp-gateway/catalogs/agent-hub.yaml
  • Registry enrollment is loaded from docker/mcp-gateway/registry/all.yaml
  • Container socket is auto-detected by mcp-runtime from:
      • /var/run/docker.sock
      • /mnt/wsl/shared-docker/docker.sock
      • $XDG_RUNTIME_DIR/podman/podman.sock
      • $HOME/.docker/run/docker.sock
  • Override socket detection explicitly with CONTAINER_SOCKET=/path/to/socket.
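The detection order above can be sketched as a first-match scan over the candidate paths, with the CONTAINER_SOCKET override taking precedence. This is an illustrative reimplementation, not the actual mcp-runtime code:

```shell
# Illustrative sketch of socket auto-detection; the real implementation
# is inside the mcp-runtime flake runner.
detect_container_socket() {
  # An explicit CONTAINER_SOCKET override short-circuits detection.
  if [ -n "${CONTAINER_SOCKET:-}" ]; then
    printf '%s\n' "$CONTAINER_SOCKET"
    return 0
  fi
  for candidate in \
    /var/run/docker.sock \
    /mnt/wsl/shared-docker/docker.sock \
    "${XDG_RUNTIME_DIR:-}/podman/podman.sock" \
    "${HOME:-}/.docker/run/docker.sock"
  do
    # Container endpoints are Unix domain sockets, hence the -S test.
    if [ -S "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  echo "no container socket found" >&2
  return 1
}
```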

Notes

  • The runtime source of truth is the committed Arion files.
  • MCP server images are pinned by digest in docker/mcp-gateway/catalogs/agent-hub.yaml.
  • .mcp.json is client-facing and generated from flake.nix via just sync-ai.
  • .vscode/mcp.json is a symlink to .mcp.json.
  • ref: "" in registry files is intentional: it means resolve by server name against the loaded catalog.
  • Traefik includes a reserved middleware name oidc-auth for future forward auth integration; routes are currently unauthenticated by default.
  • OpenWebUI and Dify prompt exports are generated from canonical repo prompt assets by scripts/export-ai-platform-artifacts.sh and written to:
      • docker/openwebui/agents.json
      • docker/dify/agents.json