Configurable maximum concurrent connections per listener. When the
limit is reached, new connections are closed immediately after accept.
0 means unlimited (default, preserving existing behavior).
Config: Listener gains max_connections field, validated non-negative.
DB: Migration v3 adds the listeners.max_connections column.
UpdateListenerMaxConns method for runtime changes via gRPC.
CreateListener updated to persist max_connections on seed.
Server: ListenerState/ListenerData gain MaxConnections. Limit checked
in serve() after Accept but before handleConn — if ActiveConnections
>= MaxConnections, connection is closed and the accept loop continues.
SetMaxConnections method for runtime updates.
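The accept-loop check can be sketched as follows. This is a minimal sketch, not the actual implementation: field and method names beyond MaxConnections/ActiveConnections are assumptions.

```go
package main

import (
	"net"
	"sync/atomic"
)

// listenerState carries only the two counters relevant here; the
// commit's ListenerState has more fields.
type listenerState struct {
	active  atomic.Int64 // current connection count
	maxConn atomic.Int64 // 0 means unlimited
}

// allow reports whether a freshly accepted connection fits under the limit.
func (s *listenerState) allow() bool {
	max := s.maxConn.Load()
	return max == 0 || s.active.Load() < max
}

// serve sketches the accept loop: the limit is checked after Accept but
// before the connection is handled, and over-limit connections are
// closed immediately while the loop keeps accepting.
func (s *listenerState) serve(ln net.Listener, handle func(net.Conn)) error {
	for {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		if !s.allow() {
			conn.Close()
			continue
		}
		s.active.Add(1)
		go func() {
			defer s.active.Add(-1)
			handle(conn)
		}()
	}
}
```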
Proto: SetListenerMaxConnections RPC added. ListenerStatus gains
max_connections field. Generated code regenerated.
gRPC server: SetListenerMaxConnections implements write-through
(DB first, then in-memory update). GetStatus includes max_connections.
Client: SetListenerMaxConnections method, MaxConnections in
ListenerStatus.
Tests: DB CRUD and UpdateListenerMaxConns, server connection limit
enforcement (accept 2, reject 3rd, close one, accept again), gRPC
SetListenerMaxConnections round-trip with DB persistence, not-found
error handling.
Also updates PROJECT_PLAN.md with phases 6-8 and PROGRESS.md with
tracking for the new features.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1. DB migration: add CHECK(mode IN ('l4', 'l7')) constraint on the
routes.mode column. ARCHITECTURE.md documented this constraint but
migration v2 omitted it. Enforces mode validity at the database
level in addition to application-level validation.
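A sketch of what this migration looks like: SQLite cannot attach a CHECK constraint to an existing column via ALTER TABLE, so the table is rebuilt. The routes column list here is an assumption pieced together from fields the commits mention, not the real schema.

```go
package main

import "strings"

// migrationAddModeCheck rebuilds the routes table to add the CHECK
// constraint (column list is assumed, not taken from the real schema).
const migrationAddModeCheck = `
CREATE TABLE routes_new (
    id INTEGER PRIMARY KEY,
    listener_id INTEGER NOT NULL REFERENCES listeners(id),
    sni TEXT NOT NULL,
    backend TEXT NOT NULL,
    mode TEXT NOT NULL DEFAULT 'l4' CHECK (mode IN ('l4', 'l7'))
);
INSERT INTO routes_new SELECT id, listener_id, sni, backend, mode FROM routes;
DROP TABLE routes;
ALTER TABLE routes_new RENAME TO routes;`

// hasModeCheck is a trivial guard a migration test could use.
func hasModeCheck(sql string) bool {
	return strings.Contains(sql, "CHECK (mode IN ('l4', 'l7'))")
}
```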
2. L7 reverse proxy: distinguish timeout errors from connection errors
in the ErrorHandler. Backend timeouts now return HTTP 504 Gateway
Timeout instead of 502. Detection uses errors.Is against
context.DeadlineExceeded and the net.Error Timeout() method; added
isTimeoutError unit tests.
3. Config validation: warn when L4 routes have tls_cert or tls_key set
(they are silently ignored). ARCHITECTURE.md documented this warning
but config.validate() did not emit it. Uses slog.Warn.
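A minimal sketch of the warning check; the Route struct here is cut down to the fields this check touches, and the boolean return is added only to make the sketch testable.

```go
package main

import "log/slog"

// Route holds only the fields relevant to this check; the real struct
// has more.
type Route struct {
	SNI     string
	Mode    string // "l4" or "l7"
	TLSCert string
	TLSKey  string
}

// warnIgnoredTLS emits the load-time warning for L4 routes that set
// tls_cert or tls_key (which the proxy silently ignores). It returns
// whether a warning was emitted so the check can be asserted on.
func warnIgnoredTLS(r Route) bool {
	if r.Mode == "l4" && (r.TLSCert != "" || r.TLSKey != "") {
		slog.Warn("tls_cert/tls_key are ignored on l4 routes", "sni", r.SNI)
		return true
	}
	return false
}
```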
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Multi-hop integration tests (server package):
- TestMultiHopProxyProtocol: full edge→origin deployment with two
mc-proxy instances. Edge uses L4 passthrough with send_proxy_protocol,
origin has proxy_protocol listener with L7 route. Verifies the real
client IP (127.0.0.1) flows through PROXY protocol into the origin's
X-Forwarded-For header on the h2c backend.
- TestMultiHopFirewallBlocksRealIP: origin firewall blocks an IP from
the PROXY header while allowing the TCP peer (edge proxy). Verifies
the backend is never reached.
L7 package integration tests:
- TestL7LargeResponse: 1 MB response through the reverse proxy.
- TestL7GRPCTrailers: HTTP/2 trailer propagation (Grpc-Status,
Grpc-Message) through the reverse proxy, validating gRPC
compatibility.
- TestL7HTTP11Fallback: client negotiates HTTP/1.1 only (no h2 ALPN),
verifies the proxy falls back to HTTP/1.1 serving and still
forwards to the h2c backend successfully.
Also updates PROGRESS.md to mark all five phases complete.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New internal/l7 package implements TLS termination and HTTP/2 reverse
proxying for L7 routes. The proxy terminates the client TLS connection
using per-route certificates, then forwards HTTP/2 traffic to backends
over h2c (plaintext HTTP/2) or h2 (re-encrypted TLS).
PrefixConn replays the peeked ClientHello bytes into crypto/tls.Server
so the TLS handshake sees the complete ClientHello despite SNI
extraction having already read it.
Serve() is the L7 entry point: TLS handshake with route certificate,
ALPN negotiation (h2 preferred, HTTP/1.1 fallback), then HTTP reverse
proxy via httputil.ReverseProxy. Backend transport uses h2c by default
(AllowHTTP + plain TCP dial) or h2-over-TLS when backend_tls is set.
Forwarding headers (X-Forwarded-For, X-Forwarded-Proto, X-Real-IP)
are injected from the real client IP in the Rewrite function. PROXY
protocol v2 is sent to backends when send_proxy_protocol is enabled,
using the request context to carry the client address through the
HTTP/2 transport's dial function.
Server integration: handleConn dispatches to handleL7 when route.Mode
is "l7". The L7 handler converts RouteInfo to l7.RouteConfig and
delegates to l7.Serve.
L7 package tests: PrefixConn (4 tests), h2c backend round-trip,
forwarding header injection, backend unreachable (502), multiple
HTTP/2 requests over one connection.
Server integration tests: L7 route through full server pipeline with
TLS client, mixed L4+L7 routes on the same listener verifying both
paths work independently.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New internal/proxyproto package implements PROXY protocol parsing and
writing without buffering past the header boundary (reads exact byte
counts so the connection is correctly positioned for SNI extraction).
Parser: auto-detects v1 (text) and v2 (binary) by first byte. Parses
TCP4/TCP6 for both versions plus v2 LOCAL command. Enforces max header
sizes and read deadlines.
Writer: generates v2 binary headers for IPv4 and IPv6 with PROXY
command.
Server integration:
- Receive: when listener.ProxyProtocol is true, parses PROXY header
before firewall check. Real client IP from header is used for
firewall evaluation and logging. Malformed headers cause RST.
- Send: when route.SendProxyProtocol is true, writes PROXY v2 header
to backend before forwarding the ClientHello bytes.
Tests cover v1/v2 parsing, malformed rejection, timeout, round-trip
write+parse, and five server integration tests: receive with valid
header, receive with garbage, send verification, send-disabled
verification, and firewall evaluation using the real client IP.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extend the config, database schema, and server internals to support
per-route L4/L7 mode selection and PROXY protocol fields. This is the
foundation for L7 HTTP/2 reverse proxying and multi-hop PROXY protocol
support described in the updated ARCHITECTURE.md.
Config: Listener gains ProxyProtocol; Route gains Mode, TLSCert,
TLSKey, BackendTLS, SendProxyProtocol. L7 routes validated at load
time (cert/key pair must exist and parse). Mode defaults to "l4".
DB: Migration v2 adds columns to listeners and routes tables. CRUD
and seeding updated to persist all new fields.
Server: RouteInfo replaces bare backend string in route lookup.
handleConn dispatches on route.Mode (L7 path stubbed with error).
ListenerState and ListenerData carry ProxyProtocol flag.
All existing L4 tests pass unchanged. New tests cover migration v2,
L7 field persistence, config validation for mode/cert/key, and
proxy_protocol flag round-tripping.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Introduces a new command-line tool for managing mc-proxy via the gRPC
admin API over Unix socket. Commands include route and firewall rule
CRUD operations, health checks, and status queries.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove TCP listener support from gRPC server; Unix socket is now the
only transport for the admin API (access controlled via filesystem
permissions)
- Add standard gRPC health check service (grpc.health.v1.Health)
- Implement MCPROXY_* environment variable overrides for config
- Create client/mcproxy package with full API coverage and tests
- Update ARCHITECTURE.md and dev config (srv/mc-proxy.toml)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Rate limiting: per-source-IP connection rate limiter in the firewall
layer with configurable limit and sliding window. Blocklisted IPs are
rejected before rate-limit evaluation to avoid wasting quota.

Unix socket: the gRPC admin API can now listen on a Unix domain socket
(no TLS required), secured by file permissions (0600), as a simpler
alternative for local-only access.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rewrite README with project overview and quick start. Add RUNBOOK with
operational procedures and incident playbooks. Fix Dockerfile for Go 1.25
with version injection. Add docker-compose.yml. Clean up golangci.yaml
for mc-proxy. Add server tests (10) covering the full proxy pipeline with
TCP echo backends, and grpcserver tests (13) covering all admin API RPCs
with bufconn and write-through DB verification.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rename proto/gen directories from mc-proxy to mc_proxy for valid protobuf
package naming. Add CLI status subcommand for querying running instance
health via gRPC. Add systemd backup service/timer and backup pruning
script. Add buf.yaml and proto-lint Makefile target. Add shutdown_timeout
config field.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Database (internal/db) stores listeners, routes, and firewall rules with
WAL mode, foreign keys, and idempotent migrations. First run seeds from
TOML config; subsequent runs load from DB as source of truth.
gRPC admin API now writes to the database before updating in-memory state
(write-through cache pattern). Adds snapshot command for VACUUM INTO
backups. Refactors firewall.New to accept raw rule slices instead of
config struct for flexibility.
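The write-through ordering reduces to a small pattern, sketched here with hypothetical store/apply stand-ins for the DB layer and the server's in-memory update:

```go
package main

import "errors"

// errDBWrite stands in for a real database error in the example below.
var errDBWrite = errors.New("db write failed")

// writeThrough persists to the database first and touches in-memory
// state only after the write succeeds, so memory never runs ahead of
// disk.
func writeThrough(store func() error, apply func()) error {
	if err := store(); err != nil {
		return err // DB rejected the change; memory stays untouched
	}
	apply()
	return nil
}
```

The payoff is crash consistency: a restart reloads from the DB, which by construction never lags what clients were told succeeded.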
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Layer 4 TLS SNI proxy with global firewall (IP/CIDR/GeoIP blocking),
per-listener route tables, bidirectional TCP relay with half-close
propagation, and a gRPC admin API (routes, firewall, status) with
TLS/mTLS support.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>