# mc-proxy
mc-proxy is a Layer 4 TLS SNI proxy and router for Metacircular Dynamics services. It reads the SNI hostname from incoming TLS ClientHello messages and proxies the raw TCP stream to the matched backend. It does not terminate TLS.
A global firewall (IP, CIDR, GeoIP country blocking) is evaluated before any routing decision. Blocked connections receive a TCP RST with no further information.
## Quick Start

```sh
# Build
make mc-proxy

# Run locally (creates srv/ with example config on first run)
make devserver

# Full CI pipeline: vet → lint → test → build
make all
```
## Configuration

Copy the example config and edit it:

```sh
cp mc-proxy.toml.example /srv/mc-proxy/mc-proxy.toml
```

See ARCHITECTURE.md for the full configuration reference.

Key sections:

- `[database]` — SQLite database path (required)
- `[[listeners]]` — TCP ports to bind and their route tables (seeds DB on first run)
- `[grpc]` — optional gRPC admin API with TLS/mTLS
- `[firewall]` — global blocklist (IP, CIDR, GeoIP country)
- `[proxy]` — connect timeout, idle timeout, shutdown timeout
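A minimal config touching each of these sections might look like the following. The section names come from the list above, but every key name here is a guess — consult `mc-proxy.toml.example` for the real schema.

```toml
# Hypothetical sketch — key names are guesses; see mc-proxy.toml.example.
[database]
path = "/srv/mc-proxy/mc-proxy.db"

[[listeners]]
bind = ":443"

[firewall]
block_countries = ["XX"]

[proxy]
connect_timeout = "5s"
```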
## CLI Commands

| Command | Purpose |
|---|---|
| `mc-proxy server -c <config>` | Start the proxy |
| `mc-proxy status -c <config>` | Query a running instance's health via gRPC |
| `mc-proxy snapshot -c <config>` | Create a database backup (`VACUUM INTO`) |
## Deployment

See RUNBOOK.md for operational procedures.

```sh
# Install on a Linux host
sudo deploy/scripts/install.sh

# Or build and run as a container
make docker
docker run -v /srv/mc-proxy:/srv/mc-proxy mc-proxy server -c /srv/mc-proxy/mc-proxy.toml
```
## Design
mc-proxy intentionally omits a REST API and web frontend; the gRPC admin API is the sole management interface. This is a deliberate departure from the Metacircular engineering standards: mc-proxy is pre-auth infrastructure, so a minimal attack surface is prioritized over interface breadth.
See ARCHITECTURE.md for the full system specification.
## License
Proprietary. Metacircular Dynamics.