PROJECT_PLAN.md
Implementation plan for L7 HTTP/2 proxying and PROXY protocol support as described in ARCHITECTURE.md. The plan brings the existing L4-only codebase to the dual-mode (L4/L7) architecture with PROXY protocol for multi-hop deployments.
Guiding Principles
- Each phase produces a working, testable system. No phase leaves the codebase in a broken state.
- The existing L4 path must remain fully functional throughout. All current tests must continue to pass at every phase boundary.
- Database migrations are forward-only and non-destructive (`ALTER TABLE ADD COLUMN` with defaults).
- New packages are introduced incrementally: `internal/proxyproto/` in phase 2, `internal/l7/` in phase 3.
Phase 1: Database & Config Foundation
Extend the config structs, database schema, and seeding logic to support the new fields. No behavioral changes -- the proxy continues to operate as L4-only, but the data model is ready for phases 2-3.
1.1 Config struct updates
Update internal/config/config.go:
- `Listener`: add `ProxyProtocol bool` field (`toml:"proxy_protocol"`).
- `Route`: add fields:
  - `Mode string` (`toml:"mode"`) -- `"l4"` (default) or `"l7"`.
  - `TLSCert string` (`toml:"tls_cert"`) -- path to PEM certificate.
  - `TLSKey string` (`toml:"tls_key"`) -- path to PEM private key.
  - `BackendTLS bool` (`toml:"backend_tls"`) -- re-encrypt to backend.
  - `SendProxyProtocol bool` (`toml:"send_proxy_protocol"`) -- send PROXY v2.
1.2 Config validation updates
Update config.validate():
- `Route.Mode` must be `""` (treated as `"l4"`), `"l4"`, or `"l7"`.
- If `Mode == "l7"`: `TLSCert` and `TLSKey` are required and must point to readable files. Validate the cert/key pair loads with `tls.LoadX509KeyPair`.
- If `Mode == "l4"`: warn (log) if `TLSCert` or `TLSKey` are set. `BackendTLS` is only meaningful for L7 (ignored on L4, no error).
1.3 Database migration (v2)
Add migration in internal/db/migrations.go:
```sql
ALTER TABLE listeners ADD COLUMN proxy_protocol INTEGER NOT NULL DEFAULT 0;

ALTER TABLE routes ADD COLUMN mode TEXT NOT NULL DEFAULT 'l4'
    CHECK(mode IN ('l4', 'l7'));
ALTER TABLE routes ADD COLUMN tls_cert TEXT NOT NULL DEFAULT '';
ALTER TABLE routes ADD COLUMN tls_key TEXT NOT NULL DEFAULT '';
ALTER TABLE routes ADD COLUMN backend_tls INTEGER NOT NULL DEFAULT 0;
ALTER TABLE routes ADD COLUMN send_proxy_protocol INTEGER NOT NULL DEFAULT 0;
```
1.4 DB struct and CRUD updates
- `db.Listener`: add `ProxyProtocol bool`.
- `db.Route`: add `Mode`, `TLSCert`, `TLSKey`, `BackendTLS`, `SendProxyProtocol` fields.
- Update `ListListeners`, `CreateListener`, `ListRoutes`, `CreateRoute` to read/write the new columns.
- Update `Seed()` to persist the new fields from config.
1.5 Server data loading
Update cmd/mc-proxy/server.go and internal/server/:
- `ListenerData`: add `ProxyProtocol bool`.
- `ListenerState`: store `ProxyProtocol`.
- Route lookup returns a `RouteInfo` struct (hostname, backend, mode, tls_cert, tls_key, backend_tls, send_proxy_protocol) instead of just a backend string. This is the key internal API change that phases 2-3 build on.
1.6 Tests
- Config tests: valid L7 route, missing cert/key, invalid mode.
- DB tests: migration v2, CRUD with new fields, seed with new fields.
- Server tests: existing tests pass unchanged (all routes default to L4).
Phase 2: PROXY Protocol
Implement PROXY protocol v1/v2 parsing and v2 writing. Integrate into the server pipeline: receive on listeners, send on routes (L4 path only in this phase; L7 sending is added in phase 3).
2.1 internal/proxyproto/ package
New package with:
Parser (receive):
- `Parse(r io.Reader, deadline time.Time) (Header, error)` -- reads and parses a PROXY protocol header (v1 or v2) from the connection.
- `Header` struct: `Version` (1 or 2), `SrcAddr`/`DstAddr` (`netip.AddrPort`), `Command` (PROXY or LOCAL).
- v1 parsing: text-based, `PROXY TCP4/TCP6 srcip dstip srcport dstport\r\n`.
- v2 parsing: binary signature (12 bytes), version/command, family/protocol, address length, addresses.
- Enforce maximum header size (536 bytes, per spec).
- 5-second deadline for reading the header.
Writer (send):
- `WriteV2(w io.Writer, src, dst netip.AddrPort) error` -- writes a PROXY protocol v2 header.
- Always writes the PROXY command (not LOCAL).
- Supports IPv4 and IPv6.
Tests:
- Parse v1 TCP4 and TCP6 headers.
- Parse v2 TCP4 and TCP6 headers.
- Parse v2 LOCAL command.
- Reject malformed headers (bad signature, truncated, invalid family).
- Round-trip: write v2, parse it back.
- Timeout handling.
2.2 Server integration (receive)
Update internal/server/server.go handleConn():
After accept, before firewall check:
- If `ls.ProxyProtocol` is true, call `proxyproto.Parse()` with a 5-second deadline.
- On success: use `Header.SrcAddr.Addr()` as the real client IP for firewall checks and logging.
- On failure: reset the connection (malformed or timeout).
- If `ls.ProxyProtocol` is false: use the TCP source IP (existing behavior). No PROXY header parsing -- a connection starting with a PROXY header will fail SNI extraction and be reset (correct behavior).
2.3 Server integration (send, L4)
Update the L4 relay path in handleConn():
After dialing the backend, before writing the peeked ClientHello:
- If `route.SendProxyProtocol` is true, call `proxyproto.WriteV2()` with the real client IP (from step 2.2 or the TCP source) and the backend address.
- Then write the peeked ClientHello bytes.
- Then relay as before.
2.4 Tests
- Server test: listener with `proxy_protocol = true`, client sends a v2 header + TLS ClientHello, verify the backend receives the connection.
- Server test: listener with `proxy_protocol = true`, client sends garbage instead of a PROXY header, verify RST.
- Server test: route with `send_proxy_protocol = true`, verify the backend receives the PROXY v2 header before the ClientHello.
- Server test: route with `send_proxy_protocol = false`, verify no PROXY header is sent.
- Firewall test: verify the real client IP (from the PROXY header) is used for firewall evaluation, not the TCP source IP.
Phase 3: L7 Proxying
Implement TLS termination and HTTP/2 reverse proxying. This is the largest phase.
3.1 internal/l7/ package
prefixconn.go:
- `PrefixConn` struct: wraps `net.Conn`, prepends buffered bytes before reading from the underlying connection.
- `Read()`: returns from the buffer first, then from the underlying conn.
- All other `net.Conn` methods delegate to the underlying conn.
reverseproxy.go:
- `Handler` struct: holds route config, backend transport, logger.
- `NewHandler(route RouteInfo, logger *slog.Logger) *Handler` -- creates an HTTP handler that reverse proxies to the route's backend.
- Uses `httputil.ReverseProxy` internally with:
  - `Director` function: sets scheme and host, injects `X-Forwarded-For`, `X-Forwarded-Proto`, `X-Real-IP` from the real client IP.
  - `Transport`: configured for h2c (when `backend_tls = false`) or h2-over-TLS (when `backend_tls = true`).
  - `ErrorHandler`: returns 502 with a minimal body.
- h2c transport: `http2.Transport` with `AllowHTTP: true` and a custom `DialTLSContext` that returns a plain TCP connection.
- TLS transport: `http2.Transport` with a standard TLS config.
serve.go:
- `Serve(ctx context.Context, conn net.Conn, peeked []byte, route RouteInfo, clientAddr netip.Addr, logger *slog.Logger) error` -- main entry point called from `server.handleConn()` for L7 routes.
- Creates a `PrefixConn` wrapping `conn` with the `peeked` bytes.
- Creates a `tls.Conn` using `tls.Server()` with the route's certificate.
- Completes the TLS handshake (with a timeout).
- Creates an HTTP/2 server (`http2.Server`) and serves a single connection using the reverse proxy handler.
- Injects the real client IP into the request context for header injection.
- Returns when the connection is closed.
Tests:
- `prefixconn_test.go`: verify buffered bytes are read first, then the underlying conn. Verify `Close()`, `RemoteAddr()`, etc. delegate.
- `reverseproxy_test.go`: HTTP/2 reverse proxy to an h2c backend; verify request forwarding, header injection, error handling (502 on dial failure).
- `serve_test.go`: full TLS termination test -- client sends a TLS ClientHello, the proxy terminates and forwards to a plaintext backend.
- gRPC-through-L7 test: gRPC client → L7 proxy → h2c gRPC backend; verify unary RPC, server streaming, and trailers.
3.2 Server integration
Update internal/server/server.go handleConn():
After route lookup, branch on `route.Mode`:
- `"l4"` (or `""`): existing behavior (dial, optional PROXY send, forward ClientHello, relay).
- `"l7"`: call `l7.Serve(ctx, conn, peeked, route, realClientIP, logger)`. The L7 path handles its own backend dialing and PROXY protocol sending internally.
3.3 PROXY protocol sending (L7)
In `l7.Serve()`, after dialing the backend but before starting HTTP traffic: if `route.SendProxyProtocol` is true, write a PROXY v2 header on the backend connection. The HTTP transport then uses the connection with the header already sent.
3.4 Tests
- End-to-end: L4 and L7 routes on the same listener, verify both work.
- L7 with h2c backend serving HTTP/2 responses.
- L7 with `backend_tls = true` (re-encrypt).
- L7 with `send_proxy_protocol = true`.
- L7 TLS handshake failure (expired cert, wrong hostname).
- L7 backend unreachable (verify 502).
- Existing L4 tests unchanged.
Phase 4: gRPC API & CLI Updates
Update the admin API, proto definitions, client library, and CLI tools to support the new route and listener fields.
4.1 Proto updates
Update proto/mc_proxy/v1/admin.proto:
- `Route` message: add `mode`, `tls_cert`, `tls_key`, `backend_tls`, `send_proxy_protocol` fields.
- `AddRouteRequest`: add the same fields.
- `ListenerStatus` message: add a `proxy_protocol` field.
- Regenerate code with `make proto`.
4.2 gRPC server updates
Update internal/grpcserver/grpcserver.go:
- `AddRoute`: accept and validate the new fields. L7 routes require valid cert/key paths. Persist all fields via `db.CreateRoute()`.
- `ListRoutes`: return full route info including the new fields.
- `GetStatus`: include `proxy_protocol` in listener status.
4.3 Client package updates
Update client/mcproxy/client.go:
- `Route` struct: add `Mode`, `TLSCert`, `TLSKey`, `BackendTLS`, `SendProxyProtocol`.
- `AddRoute()`: accept and pass the new fields.
- `ListRoutes()`: return full route info.
- `ListenerStatus`: add `ProxyProtocol`.
4.4 mcproxyctl updates
Update cmd/mcproxyctl/routes.go:
- `routes add`: accept `--mode`, `--tls-cert`, `--tls-key`, `--backend-tls`, `--send-proxy-protocol` flags.
- `routes list`: display the mode and other new fields in the output.
4.5 Tests
- gRPC server tests: add/list L7 routes, validation of cert paths.
- Client tests: round-trip new fields.
- Verify backward compatibility: adding a route without new fields defaults to L4 with no PROXY protocol.
Phase 5: Integration & Polish
End-to-end validation, dev config updates, and documentation cleanup.
5.1 Dev config update
Update srv/mc-proxy.toml with example L7 routes and generate test
certificates for local development.
5.2 Multi-hop integration test
Test the edge→origin deployment pattern:
- Two mc-proxy instances (different configs).
- Edge: L4 passthrough with `send_proxy_protocol = true`.
- Origin: a `proxy_protocol = true` listener with a mix of L4 and L7 routes.
- Verify the real client IP flows through for firewall checks and `X-Forwarded-For`.
5.3 gRPC-through-L7 validation
Test that gRPC services (unary, server-streaming, client-streaming, bidirectional) work correctly through the L7 reverse proxy, including:
- Trailer propagation (gRPC status codes).
- Large messages.
- Deadline/timeout propagation.
5.4 Web UI through L7 validation
Test that htmx-based web UIs work through the L7 proxy:
- Standard HTTP/1.1 and HTTP/2 requests.
- SSE (server-sent events) if used.
- Static asset serving.
5.5 Documentation
- Verify ARCHITECTURE.md matches final implementation.
- Update CLAUDE.md if any package structure or rules changed.
- Update Makefile if new build targets are needed.
Phase 6: Per-Listener Connection Limits
Add configurable maximum concurrent connection limits per listener.
6.1 Config: MaxConnections int64 on Listener (0 = unlimited)
6.2 DB: migration 3 adds listeners.max_connections, CRUD updates
6.3 Server: enforce limit in serve() after Accept, before handleConn
6.4 Proto/gRPC: SetListenerMaxConnections RPC, max_connections in ListenerStatus
6.5 Client/CLI: SetListenerMaxConnections method, status display
6.6 Tests: DB CRUD, server limit enforcement, gRPC round-trip
Phase 7: L7 Policies
Per-route HTTP blocking rules for L7 routes: user-agent blocking (substring match) and required header enforcement.
7.1 Config: L7Policy struct (type + value), L7Policies on Route
7.2 DB: migration 4 creates l7_policies table, new l7policies.go CRUD
7.3 L7 middleware: PolicyMiddleware in internal/l7/policy.go
7.4 Server/L7 integration: thread policies from RouteInfo to RouteConfig
7.5 Proto/gRPC: L7Policy message, ListL7Policies/AddL7Policy/RemoveL7Policy RPCs
7.6 Client/CLI: policy methods, mcproxyctl policies subcommand
7.7 Startup: load L7 policies per route in loadListenersFromDB
7.8 Tests: middleware unit tests, DB CRUD + cascade, gRPC round-trip, e2e
Phase 8: Prometheus Metrics
Instrument the proxy with Prometheus-compatible metrics exposed via a separate HTTP endpoint.