14 Commits

8b1c89fdc9 Add mcp build command and deploy auto-build
Extends MCP to own the full build-push-deploy lifecycle. When deploying,
the CLI checks whether each component's image tag exists in the registry
and builds/pushes automatically if missing and build config is present.

- Add Build, Push, ImageExists to runtime.Runtime interface (podman impl)
- Add mcp build <service>[/<image>] command
- Add [build] section to CLI config (workspace path)
- Add path and [build.images] to service definitions
- Wire auto-build into mcp deploy before agent RPC
- Update ARCHITECTURE.md with runtime interface and deploy auto-build docs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 01:34:25 -07:00
d7f18a5d90 Add Platform Evolution tracking to PROGRESS_V1.md
Phase A complete: route declarations, port allocation, $PORT env vars.
Phase B in progress: agent mc-proxy route registration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 01:25:26 -07:00
5a802bceb6 Merge pull request 'Add route declarations and automatic port allocation' (#1) from mcp-routes-port-allocation into master 2026-03-27 08:16:20 +00:00
777ba8a0e1 Add route declarations and automatic port allocation to MCP agent
Service definitions can now declare routes per component instead of
manual port mappings:

  [[components.routes]]
  name = "rest"
  port = 8443
  mode = "l4"

The agent allocates free host ports at deploy time and injects
$PORT/$PORT_<NAME> env vars into containers. Backward compatible:
components with old-style ports= work unchanged.

Changes:
- Proto: RouteSpec message, routes + env fields on ComponentSpec
- Servicedef: RouteDef parsing and validation from TOML
- Registry: component_routes table with host_port tracking
- Runtime: Env field on ContainerSpec, -e flag in BuildRunArgs
- Agent: PortAllocator (random 10000-60000, availability check),
  deploy wiring for route→port mapping and env injection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 01:04:47 -07:00
503c52dc26 Update service definition example for convention-driven format
Drop uses_mcdsl, full image URLs, ports, network, user, restart.
Add route declarations and service-level version. Image names and
most config are now derived from conventions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 00:19:12 -07:00
6465da3547 Add build and release lifecycle to ARCHITECTURE.md
Service definitions now include [build] config (path, uses_mcdsl,
images) so MCP owns the full build-push-deploy lifecycle, replacing
mcdeploy.toml. Documents mcp build, mcp sync auto-build, image
versioning policy (explicit tags, never :latest), and workspace
convention.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 23:31:05 -07:00
e18a3647bf Add Nix flake for mcp and mcp-agent
Exposes two packages:
- default (mcp CLI) for operator workstations
- mcp-agent for managed nodes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 22:46:36 -07:00
1e58dcce27 Implement mcp purge command for registry cleanup
Add PurgeComponent RPC to the agent service that removes stale registry
entries for components that are both gone (observed state is removed,
unknown, or exited) and unwanted (not in any current service definition).
Refuses to purge components with running or stopped containers. When all
components of a service are purged, the service row is deleted too.
Supports --dry-run to preview without modifying the database.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 22:30:45 -07:00
1afbf5e1f6 Add purge design to architecture doc
Purge removes stale registry entries — components that are no longer
in service definitions and have no running container. Designed as an
explicit, safe operation separate from sync: sync is additive (push
desired state), purge is subtractive (remove forgotten entries).

Includes safety rules (refuses to purge running containers), dry-run
mode, agent RPC definition, and rationale for why sync should not be
made destructive.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 22:22:27 -07:00
ea8a42a696 P5.2 + P5.3: Bootstrap docs, README, and RUNBOOK
- docs/bootstrap.md: step-by-step bootstrap procedure with lessons
  learned from the first deployment (NixOS sandbox issues, podman
  rootless setup, container naming, MCR auth workaround)
- README.md: quick-start guide, command reference, doc links
- RUNBOOK.md: operational procedures for operators (health checks,
  common operations, unsealing metacrypt, cert renewal, incident
  response, disaster recovery, file locations)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 15:32:22 -07:00
ff9bfc5087 Update PROGRESS_V1.md with deployment status and remaining work
Documents Phase 6 (deployment), bugs fixed during rollout,
remaining work organized by priority (operational, quality,
design, infrastructure), and current platform state.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 15:27:30 -07:00
17ac0f3014 Trim whitespace from token file in CLI
Token files with trailing newlines caused gRPC "non-printable ASCII
characters" errors in the authorization header.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 15:19:27 -07:00
7133871be2 Default CLI config path to ~/.config/mcp/mcp.toml
Eliminates the need to pass --config on every command.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 15:16:34 -07:00
efa32a7712 Fix container name handling for hyphenated service names
Extract ContainerNameFor and SplitContainerName into names.go.
ContainerNameFor handles single-component services where service
name equals component name (e.g., mc-proxy → "mc-proxy" not
"mc-proxy-mc-proxy"). SplitContainerName checks known services
from the registry before falling back to naive split on "-", fixing
mc-proxy being misidentified as service "mc" component "proxy".

Also fixes podman ps JSON parsing (Command field is []string not
string) found during deployment.
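The naming rule this commit describes can be sketched in Go. This is a sketch consistent with the behavior above; the repository's actual names.go may differ in detail:

```go
package main

import (
	"fmt"
	"strings"
)

// ContainerNameFor returns the container name for a service/component pair.
// Single-component services where the component name equals the service name
// collapse to just the service name (mc-proxy, not mc-proxy-mc-proxy).
func ContainerNameFor(service, component string) string {
	if service == component {
		return service
	}
	return service + "-" + component
}

// SplitContainerName reverses the mapping. Known service names from the
// registry are checked first, so hyphenated services like mc-proxy are not
// misread as service "mc", component "proxy".
func SplitContainerName(name string, knownServices []string) (service, component string) {
	for _, svc := range knownServices {
		if name == svc {
			return svc, svc
		}
		if strings.HasPrefix(name, svc+"-") {
			return svc, strings.TrimPrefix(name, svc+"-")
		}
	}
	// Fallback: naive split on the first hyphen.
	if i := strings.Index(name, "-"); i >= 0 {
		return name[:i], name[i+1:]
	}
	return name, name
}

func main() {
	fmt.Println(ContainerNameFor("mc-proxy", "mc-proxy")) // mc-proxy
	fmt.Println(SplitContainerName("mc-proxy", []string{"mc-proxy", "metacrypt"}))
	fmt.Println(SplitContainerName("metacrypt-api", []string{"mc-proxy", "metacrypt"}))
}
```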

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 15:13:20 -07:00
2262 changed files with 6786332 additions and 247 deletions


@@ -192,6 +192,9 @@ for a service by prefix and derive component names automatically
```
mcp login Authenticate to MCIAS, store token
mcp build <service> Build and push images for a service
mcp build <service>/<image> Build and push a single image
mcp deploy <service> Deploy all components from service definition
mcp deploy <service>/<component> Deploy a single component
mcp deploy <service> -f <file> Deploy from explicit file
@@ -203,10 +206,11 @@ mcp list List services from all agents (registry,
mcp ps Live check: query runtime on all agents, show running
containers with uptime and version
mcp status [service] Full picture: live query + drift + recent events
mcp sync Push service definitions to agent (update desired
state without deploying)
mcp sync Push service definitions to agent; build missing
images if source tree is available
mcp adopt <service> Adopt all <service>-* containers into a service
mcp purge [service[/component]] Remove stale registry entries (--dry-run to preview)
mcp service show <service> Print current spec from agent registry
mcp service edit <service> Open service definition in $EDITOR
@@ -234,25 +238,34 @@ Example: `~/.config/mcp/services/metacrypt.toml`
name = "metacrypt"
node = "rift"
active = true
version = "v1.0.0"
[build.images]
metacrypt = "Dockerfile.api"
metacrypt-web = "Dockerfile.web"
[[components]]
name = "api"
image = "mcr.svc.mcp.metacircular.net:8443/metacrypt:latest"
network = "docker_default"
user = "0:0"
restart = "unless-stopped"
ports = ["127.0.0.1:18443:8443", "127.0.0.1:19443:9443"]
volumes = ["/srv/metacrypt:/srv/metacrypt"]
[[components.routes]]
name = "rest"
port = 8443
mode = "l4"
[[components.routes]]
name = "grpc"
port = 9443
mode = "l4"
[[components]]
name = "web"
image = "mcr.svc.mcp.metacircular.net:8443/metacrypt-web:latest"
network = "docker_default"
user = "0:0"
restart = "unless-stopped"
ports = ["127.0.0.1:18080:8080"]
volumes = ["/srv/metacrypt:/srv/metacrypt"]
cmd = ["server", "--config", "/srv/metacrypt/metacrypt.toml"]
[[components.routes]]
port = 443
mode = "l7"
```
### Active State
@@ -286,6 +299,12 @@ chain:
If neither exists (first deploy, no file), the deploy fails with an error
telling the operator to create a service definition.
Before pushing to the agent, the CLI checks that each component's image
tag exists in the registry. If a tag is missing and a `[build]` section
is configured, the CLI builds and pushes the image automatically (same
logic as `mcp sync` auto-build, described below). This makes `mcp deploy`
a single command for the bump-build-push-deploy workflow.
The CLI pushes the resolved spec to the agent. The agent records it in its
registry and executes the deploy. The service definition file on disk is
**not** modified -- it represents the operator's declared intent, not the
@@ -333,6 +352,83 @@ Service definition files can be:
- **Generated by converting from mcdeploy.toml** during initial MCP
migration (one-time).
### Build Configuration
Service definitions include a `[build]` section that tells MCP how to
build container images from source. This replaces the standalone
`mcdeploy.toml` -- MCP owns the full build-push-deploy lifecycle.
Top-level build fields:
| Field | Purpose |
|-------|---------|
| `path` | Source directory relative to the workspace root |
| `uses_mcdsl` | Whether the mcdsl module is needed at build time |
| `images.<name>` | Maps each image name to its Dockerfile path |
The workspace root is configured in `~/.config/mcp/mcp.toml`:
```toml
[build]
workspace = "~/src/metacircular"
```
A service with `path = "mcr"` resolves to `~/src/metacircular/mcr`. The
convention assumes `~/src/metacircular/<path>` on operator workstations
(vade, orion). The workspace path can be overridden but the convention
should hold for all standard machines.
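Under this convention, source resolution is little more than tilde expansion plus a path join. An illustrative sketch (`resolveSource` is a hypothetical helper, not the actual CLI code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// resolveSource expands a leading ~ in the configured workspace and joins
// the service's build path: workspace "~/src/metacircular" + path "mcr"
// resolves to "$HOME/src/metacircular/mcr".
func resolveSource(workspace, path string) string {
	if strings.HasPrefix(workspace, "~/") {
		home, _ := os.UserHomeDir()
		workspace = filepath.Join(home, workspace[2:])
	}
	return filepath.Join(workspace, path)
}

func main() {
	fmt.Println(resolveSource("~/src/metacircular", "mcr"))
}
```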
### Build and Release Workflow
The standard release workflow for a service:
1. **Tag** the release in git (`git tag -a v1.1.0`).
2. **Build** the images: `mcp build <service>` reads the service
definition, locates the source tree via `path`, and runs `docker
build` using each Dockerfile in `[build.images]`. Images are tagged
with the version from the component `image` field and pushed to MCR.
3. **Update** the service definition: bump the version tag in each
component's `image` field.
4. **Deploy**: `mcp sync` or `mcp deploy <service>`.
#### `mcp build` Resolution
`mcp build <service>` does the following:
1. Read the service definition to find `[build.images]` and `path`.
2. Resolve the source tree: `<workspace>/<path>`.
3. For each image in `[build.images]`:
a. Build with the Dockerfile at `<source>/<dockerfile>`.
b. If `uses_mcdsl = true`, include the mcdsl directory in the build
context (or use a multi-module build strategy).
c. Tag as `<registry>/<image>:<version>` (version extracted from the
matching component's `image` field).
d. Push to MCR.
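Steps 3c and 3d hinge on pulling the version out of the matching component's `image` field. A minimal sketch in Go — `versionOf` and `buildTag` are illustrative names, not the CLI's actual functions:

```go
package main

import (
	"fmt"
	"strings"
)

// versionOf extracts the tag from a component image reference, e.g.
// "mcr.svc.mcp.metacircular.net:8443/metacrypt:v1.1.0" -> "v1.1.0".
// Only the colon after the final slash separates name from tag (the
// registry's port colon must not be mistaken for the tag separator).
func versionOf(imageRef string) string {
	slash := strings.LastIndex(imageRef, "/")
	rest := imageRef[slash+1:]
	if i := strings.LastIndex(rest, ":"); i >= 0 {
		return rest[i+1:]
	}
	return "latest"
}

// buildTag composes the full reference pushed to MCR: <registry>/<image>:<version>.
func buildTag(registry, image, version string) string {
	return fmt.Sprintf("%s/%s:%s", registry, image, version)
}

func main() {
	ref := "mcr.svc.mcp.metacircular.net:8443/metacrypt:v1.1.0"
	fmt.Println(versionOf(ref)) // v1.1.0
	fmt.Println(buildTag("mcr.svc.mcp.metacircular.net:8443", "metacrypt-web", versionOf(ref)))
}
```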
#### `mcp sync` Auto-Build
`mcp sync` pushes service definitions to agents. Before deploying, it
checks that each component's image tag exists in the registry:
- **Tag exists** → proceed with deploy.
- **Tag missing, source tree available** → build and push automatically,
then deploy.
- **Tag missing, no source tree** → fail with error:
`"mcr:v1.1.0 not found in registry and no source tree at ~/src/metacircular/mcr"`.
This ensures `mcp sync` is a single command for the common case (tag,
update version, sync) while failing clearly when the build environment
is not available.
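The three outcomes reduce to a small decision function. A sketch under the semantics described above (names are illustrative, not the actual CLI code):

```go
package main

import "fmt"

// buildDecision mirrors the three sync outcomes: tagExists means the image
// tag is already in the registry; haveSource means the source tree resolved
// from the workspace convention is present on disk.
func buildDecision(tagExists, haveSource bool) string {
	switch {
	case tagExists:
		return "deploy"
	case haveSource:
		return "build, push, then deploy"
	default:
		return "fail: tag not in registry and no source tree"
	}
}

func main() {
	fmt.Println(buildDecision(false, true)) // build, push, then deploy
}
```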
#### Image Versioning
Service definitions MUST pin explicit version tags (e.g., `v1.1.0`),
never `:latest`. This ensures:
- `mcp status` shows the actual running version.
- Deployments are reproducible.
- Rollbacks are explicit (change the tag back to the previous version).
---
## Agent
@@ -566,6 +662,29 @@ The agent runs as a dedicated `mcp` system user. Podman runs rootless under
this user. All containers are owned by `mcp`. The NixOS configuration
provisions the `mcp` user with podman access.
#### Runtime Interface
The `runtime.Runtime` interface abstracts the container runtime. The agent
(and the CLI, for build operations) use it for all container operations.
| Method | Used by | Purpose |
|--------|---------|---------|
| `Pull(image)` | Agent | `podman pull <image>` |
| `Run(spec)` | Agent | `podman run -d ...` |
| `Stop(name)` | Agent | `podman stop <name>` |
| `Remove(name)` | Agent | `podman rm <name>` |
| `Inspect(name)` | Agent | `podman inspect <name>` |
| `List()` | Agent | `podman ps -a` |
| `Build(image, contextDir, dockerfile)` | CLI | `podman build -t <image> -f <dockerfile> <contextDir>` |
| `Push(image)` | CLI | `podman push <image>` |
| `ImageExists(image)` | CLI | `podman manifest inspect docker://<image>` (checks remote registry) |
The first six methods are used by the agent during deploy and monitoring.
The last three are used by the CLI during `mcp build` and `mcp deploy`
auto-build. They are on the same interface because the CLI uses the local
podman installation directly -- no gRPC call is needed, since builds happen
on the operator's workstation, not on the deployment node.
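A sketch of what the interface might look like in Go. The method set follows the table above; the spec fields and return types are assumptions, not the actual source:

```go
package main

import "fmt"

// ContainerSpec is a trimmed stand-in for the real spec type; only a few
// illustrative fields are shown.
type ContainerSpec struct {
	Name  string
	Image string
	Env   map[string]string
}

// Runtime abstracts the container runtime. The agent uses the first six
// methods; the CLI uses Build, Push, and ImageExists against local podman.
type Runtime interface {
	Pull(image string) error
	Run(spec ContainerSpec) error
	Stop(name string) error
	Remove(name string) error
	Inspect(name string) (string, error)
	List() ([]string, error)
	Build(image, contextDir, dockerfile string) error
	Push(image string) error
	ImageExists(image string) (bool, error)
}

// fakeRuntime is a test double for the ImageExists check: it reports only
// images in its set as present in the registry.
type fakeRuntime struct{ images map[string]bool }

func (f fakeRuntime) ImageExists(image string) (bool, error) { return f.images[image], nil }

func main() {
	f := fakeRuntime{images: map[string]bool{"mcr:v1.1.0": true}}
	ok, _ := f.ImageExists("mcr:v1.1.0")
	fmt.Println(ok) // true
}
```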
#### Deploy Flow
When the agent receives a `Deploy` RPC:
@@ -1133,6 +1252,7 @@ mcp/
│ ├── mcp/ CLI
│ │ ├── main.go
│ │ ├── login.go
│ │ ├── build.go build and push images
│ │ ├── deploy.go
│ │ ├── lifecycle.go stop, start, restart
│ │ ├── status.go list, ps, status
@@ -1195,6 +1315,147 @@ mcp/
---
## Registry Cleanup: Purge
### Problem
The agent's registry accumulates stale entries over time. A component
that was replaced (e.g., `mcns/coredns` → `mcns/mcns`) or a service
that was decommissioned remains in the registry indefinitely with
`observed=removed` or `observed=unknown`. There is no mechanism to tell
the agent "this component no longer exists and should not be tracked."
This causes:
- Perpetual drift alerts for components that will never return.
- Noise in `mcp status` and `mcp list` output.
- Confusion about what the agent is actually responsible for.
The existing `mcp sync` compares local service definitions against the
agent's registry and updates desired state for components that are
defined. But it does not remove components or services that are *absent*
from the local definitions — sync is additive, not declarative.
### Design: `mcp purge`
Purge removes registry entries that are both **unwanted** (not in any
current service definition) and **gone** (no corresponding container in
the runtime). It is the garbage collector for the registry.
```
mcp purge [--dry-run] Purge all stale entries
mcp purge <service> [--dry-run] Purge stale entries for one service
mcp purge <service>/<component> [--dry-run] Purge a specific component
```
#### Semantics
Purge operates on the agent's registry, not on containers. It never
stops or removes running containers. The rules:
1. **Component purge**: a component is eligible for purge when:
- Its observed state is `removed`, `unknown`, or `exited`, AND
- It is not present in any current service definition file
(i.e., `mcp sync` would not recreate it).
Purging a component deletes its registry entry (from `components`,
`component_ports`, `component_volumes`, `component_cmd`) and its
event history.
2. **Service purge**: a service is eligible for purge when all of its
components have been purged (or it has no components). Purging a
service deletes its `services` row.
3. **Safety**: purge refuses to remove a component whose observed state
is `running` or `stopped` (i.e., a container still exists in the
runtime). This prevents accidentally losing track of live containers.
The operator must `mcp stop` and wait for the container to be removed
before purging, or manually remove it via podman.
4. **Dry run**: `--dry-run` lists what would be purged without modifying
the registry. This is the default-safe way to preview the operation.
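The eligibility and safety rules above reduce to a small predicate. A hedged sketch (function and state names are illustrative):

```go
package main

import "fmt"

// purgeEligible encodes the rules above: a component may be purged only
// when its observed state shows no container (removed/unknown/exited)
// AND no current service definition still declares it. Components with
// running or stopped containers are always refused.
func purgeEligible(observed string, inDefinitions bool) (bool, string) {
	switch observed {
	case "running", "stopped":
		return false, "refused: container still exists in runtime"
	case "removed", "unknown", "exited":
		if inDefinitions {
			return false, "refused: still present in a service definition"
		}
		return true, "observed=" + observed + ", not in service definitions"
	default:
		return false, "refused: unrecognized observed state"
	}
}

func main() {
	ok, reason := purgeEligible("removed", false)
	fmt.Println(ok, reason) // true observed=removed, not in service definitions
}
```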
#### Interaction with Sync
`mcp sync` pushes desired state from service definitions. `mcp purge`
removes entries that sync would never touch. They are complementary:
- `sync` answers: "what should exist?" (additive)
- `purge` answers: "what should be forgotten?" (subtractive)
A full cleanup is: `mcp sync && mcp purge`.
An alternative design would make `mcp sync` itself remove entries not
present in service definitions (fully declarative sync). This was
rejected because:
- Sync currently only operates on services that have local definition
files. A service without a local file is left untouched — this is
desirable when multiple operators or workstations manage different
services.
- Making sync destructive increases the blast radius of a missing file
(accidentally deleting the local `mcr.toml` would cause sync to
purge MCR from the registry).
- Purge as a separate, explicit command with `--dry-run` gives the
operator clear control over what gets cleaned up.
#### Agent RPC
```protobuf
rpc PurgeComponent(PurgeRequest) returns (PurgeResponse);
message PurgeRequest {
string service = 1; // service name (empty = all services)
string component = 2; // component name (empty = all eligible in service)
bool dry_run = 3; // preview only, do not modify registry
}
message PurgeResponse {
repeated PurgeResult results = 1;
}
message PurgeResult {
string service = 1;
string component = 2;
bool purged = 3; // true if removed (or would be, in dry-run)
string reason = 4; // why eligible, or why refused
}
```
The CLI sends the set of currently-defined service/component names
alongside the purge request so the agent can determine what is "not in
any current service definition" without needing access to the CLI's
filesystem.
#### Example
After replacing `mcns/coredns` with `mcns/mcns`:
```
$ mcp purge --dry-run
would purge mcns/coredns (observed=removed, not in service definitions)
$ mcp purge
purged mcns/coredns
$ mcp status
SERVICE COMPONENT DESIRED OBSERVED VERSION
mc-proxy mc-proxy running running latest
mcns mcns running running v1.0.0
mcr api running running latest
mcr web running running latest
metacrypt api running running latest
metacrypt web running running latest
```
#### Interaction with Adopt
Purge also cleans up after the `mcp adopt` workflow. When containers are
adopted and later removed (replaced by a proper deploy), the adopted
entries linger. Purge removes them once the containers are gone and the
service definition no longer references them.
---
## Future Work (v2+)
These are explicitly out of scope for v1 but inform the design:


@@ -47,5 +47,108 @@
## Phase 5: Integration and Polish
- [ ] **P5.1** Integration test suite
- [ ] **P5.2** Bootstrap procedure test
- [ ] **P5.3** Documentation (CLAUDE.md, README.md, RUNBOOK.md)
- [x] **P5.2** Bootstrap procedure — documented in `docs/bootstrap.md`
- [x] **P5.3** Documentation — CLAUDE.md, README.md, RUNBOOK.md
## Phase 6: Deployment (completed 2026-03-26)
- [x] **P6.1** NixOS config for mcp user (rootless podman, subuid/subgid, systemd service)
- [x] **P6.2** TLS cert provisioned from Metacrypt (DNS + IP SANs)
- [x] **P6.3** MCIAS system account (mcp-agent with admin role)
- [x] **P6.4** Container migration (metacrypt, mc-proxy, mcr, mcns → mcp user)
- [x] **P6.5** MCP bootstrap (adopt, sync, export service definitions)
- [x] **P6.6** Service definitions completed with full container specs
## Deployment Bugs Fixed During Rollout
- podman ps JSON: `Command` field is `[]string` not `string`
- Container name handling: `splitContainerName` naive split broke `mc-proxy`
→ extracted `ContainerNameFor`/`SplitContainerName` with registry-aware lookup
- CLI default config path: `~/.config/mcp/mcp.toml`
- Token file whitespace: trim newlines before sending in gRPC metadata
- NixOS systemd sandbox: `ProtectHome` blocks `/run/user`, `ProtectSystem=strict`
blocks podman runtime dir → relaxed to `ProtectSystem=full`, `ProtectHome=false`
- Agent needs `PATH`, `HOME`, `XDG_RUNTIME_DIR` in systemd environment
## Platform Evolution (see PLATFORM_EVOLUTION.md)
### Phase A — COMPLETE (2026-03-27)
- [x] Route declarations in service definitions (`[[components.routes]]`)
- [x] Automatic port allocation by agent (10000-60000, mutex-serialized)
- [x] `$PORT` / `$PORT_<NAME>` env var injection into containers
- [x] Proto: `RouteSpec` message, `routes` + `env` on `ComponentSpec`
- [x] Registry: `component_routes` table with `host_port` tracking
- [x] Backward compatible: old-style `ports` strings still work
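The Phase A allocator can be sketched roughly as follows — a minimal illustration of random-pick-plus-bind-check behind a mutex, not the agent's actual code:

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
	"sync"
)

// PortAllocator picks a random port in [10000, 60000), confirms it is
// actually free by binding it, and serializes allocations behind a mutex
// so concurrent deploys cannot race on the same port.
type PortAllocator struct {
	mu sync.Mutex
}

func (a *PortAllocator) Allocate() (int, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for tries := 0; tries < 100; tries++ {
		port := 10000 + rand.Intn(50000)
		ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
		if err != nil {
			continue // port in use, try another
		}
		ln.Close()
		return port, nil
	}
	return 0, fmt.Errorf("no free port found in 10000-60000")
}

func main() {
	var a PortAllocator
	port, err := a.Allocate()
	if err != nil {
		panic(err)
	}
	fmt.Println(port >= 10000 && port < 60000) // true
}
```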
### Phase B — IN PROGRESS
- [ ] Agent connects to mc-proxy via Unix socket on deploy
- [ ] Agent calls `AddRoute` to register routes with mc-proxy
- [ ] Agent calls `RemoveRoute` on service stop/teardown
- [ ] Agent config: `[mcproxy] socket` field
- [ ] TLS certs: pre-provisioned at convention path (Phase C automates)
## Remaining Work
### Operational — Next Priority
- [ ] **MCR auth for mcp user** — podman pull from MCR requires OCI token
auth. Currently using image save/load workaround. Need either: OCI token
flow support in the agent, or podman login with service account credentials.
- [ ] **Vade DNS routing** — Tailscale MagicDNS intercepts `*.svc.mcp.metacircular.net`
queries on vade, preventing hostname-based TLS connections. CLI currently
uses IP address directly. Fix: Tailscale DNS configuration or split-horizon
setup on vade.
- [ ] **Service export completeness** — `mcp service export` only captures
name + image from the registry. Should include full spec (network, ports,
volumes, user, restart, cmd). Requires the agent's `ListServices` response
to include full `ComponentSpec` data, not just `ComponentInfo`.
### Quality
- [ ] **P5.1** Integration test suite — end-to-end CLI → agent → podman tests
- [ ] **P5.2** Bootstrap procedure test — documented and verified
- [ ] **README.md** — quick-start guide
- [ ] **RUNBOOK.md** — operational procedures (unseal metacrypt, restart
services, disaster recovery)
### Design
- [ ] **Self-management** — how MCP updates mc-proxy and its own agent without
circular dependency. Likely answer: NixOS manages the agent and mc-proxy
binaries; MCP manages their containers. Or: staged restart with health
checks.
- [ ] **ARCHITECTURE.md proto naming** — update spec to match buf-lint-compliant
message names (StopServiceRequest vs ServiceRequest, AdoptContainers vs
AdoptContainer).
- [ ] **mcdsl DefaultPath helper** — `DefaultPath(name) string` for consistent
config file discovery across all services. Root: /srv, /etc. User: XDG, /srv.
- [ ] **Engineering standards update** — document REST+gRPC parity exception
for infrastructure services (MCP agent).
### Infrastructure
- [ ] **Certificate renewal** — MCP-managed cert renewal before expiry.
Agent cert expires 2026-06-24. Need automated renewal via Metacrypt ACME
or REST API.
- [ ] **Monitor alerting** — configure alert_command on rift (ntfy, webhook,
or custom script) for drift/flap notifications.
- [ ] **Backup timer** — install mcp-agent-backup timer via NixOS config.
## Current State (2026-03-26)
MCP is deployed and operational on rift. The agent runs as a systemd service
under the `mcp` user with rootless podman. All platform services (metacrypt,
mc-proxy, mcr, mcns) are managed by MCP with complete service definitions.
```
$ mcp status
SERVICE COMPONENT DESIRED OBSERVED VERSION
mc-proxy mc-proxy running running latest
mcns coredns running running 1.12.1
mcr api running running latest
mcr web running running latest
metacrypt api running running latest
metacrypt web running running latest
```

README.md Normal file

@@ -0,0 +1,119 @@
# MCP — Metacircular Control Plane
MCP is the orchestrator for the [Metacircular](https://metacircular.net)
platform. It manages container lifecycle, tracks what services run where,
and transfers files between the operator's workstation and managed nodes.
## Architecture
**CLI** (`mcp`) — thin client on the operator's workstation. Reads local
service definition files, pushes intent to agents, queries status.
**Agent** (`mcp-agent`) — per-node daemon. Manages containers via rootless
podman, stores a SQLite registry of desired/observed state, monitors for
drift, and alerts the operator.
## Quick Start
### Build
```bash
make all # vet, lint, test, build
make mcp # CLI only
make mcp-agent # agent only
```
### Install the CLI
```bash
cp mcp ~/.local/bin/
mkdir -p ~/.config/mcp/services
```
Create `~/.config/mcp/mcp.toml`:
```toml
[services]
dir = "/home/<user>/.config/mcp/services"
[mcias]
server_url = "https://mcias.metacircular.net:8443"
service_name = "mcp"
[auth]
token_path = "/home/<user>/.config/mcp/token"
[[nodes]]
name = "rift"
address = "100.95.252.120:9444"
```
### Authenticate
```bash
mcp login
```
### Check status
```bash
mcp status # full picture: services, drift, events
mcp ps # live container check with uptime
mcp list # quick registry query
```
### Deploy a service
Write a service definition in `~/.config/mcp/services/<name>.toml`:
```toml
name = "myservice"
node = "rift"
active = true
[[components]]
name = "api"
image = "mcr.svc.mcp.metacircular.net:8443/myservice:v1.0.0"
network = "mcpnet"
user = "0:0"
restart = "unless-stopped"
ports = ["127.0.0.1:8443:8443"]
volumes = ["/srv/myservice:/srv/myservice"]
cmd = ["server", "--config", "/srv/myservice/myservice.toml"]
```
Then deploy:
```bash
mcp deploy myservice
```
## Commands
| Command | Description |
|---------|-------------|
| `mcp login` | Authenticate to MCIAS |
| `mcp deploy <service>[/<component>]` | Deploy from service definition |
| `mcp stop <service>` | Stop all components |
| `mcp start <service>` | Start all components |
| `mcp restart <service>` | Restart all components |
| `mcp list` | List services (registry) |
| `mcp ps` | Live container check |
| `mcp status [service]` | Full status with drift and events |
| `mcp sync` | Push all service definitions |
| `mcp adopt <service>` | Adopt running containers |
| `mcp service show <service>` | Print spec from agent |
| `mcp service edit <service>` | Edit definition in $EDITOR |
| `mcp service export <service>` | Export agent spec to file |
| `mcp push <file> <service> [path]` | Push file to node |
| `mcp pull <service> <path> [file]` | Pull file from node |
| `mcp node list` | List nodes |
| `mcp node add <name> <addr>` | Add a node |
| `mcp node remove <name>` | Remove a node |
## Documentation
- [ARCHITECTURE.md](ARCHITECTURE.md) — design specification
- [RUNBOOK.md](RUNBOOK.md) — operational procedures
- [PROJECT_PLAN_V1.md](PROJECT_PLAN_V1.md) — implementation plan
- [PROGRESS_V1.md](PROGRESS_V1.md) — progress and remaining work

RUNBOOK.md Normal file

@@ -0,0 +1,305 @@
# MCP Runbook
Operational procedures for the Metacircular Control Plane. Written for
operators at 3 AM.
## Service Overview
MCP manages container lifecycle on Metacircular nodes. Two components:
- **mcp-agent** — systemd service on each node (rift). Manages containers
via rootless podman, stores registry in SQLite, monitors for drift.
- **mcp** — CLI on the operator's workstation (vade). Pushes desired state,
queries status.
## Health Checks
### Quick status
```bash
mcp status
```
Shows all services, desired vs observed state, drift, and recent events.
No drift = healthy.
### Agent process
```bash
ssh rift "doas systemctl status mcp-agent"
ssh rift "doas journalctl -u mcp-agent --since '10 min ago' --no-pager"
```
### Individual service
```bash
mcp status metacrypt
```
## Common Operations
### Check what's running
```bash
mcp ps # live check with uptime
mcp list # from registry (no runtime query)
mcp status # full picture with drift and events
```
### Restart a service
```bash
mcp restart metacrypt
```
Restarts all components. Does not change the `active` flag. Metacrypt
will need to be unsealed after restart.
### Stop a service
```bash
mcp stop metacrypt
```
Sets `active = false` in the service definition file and stops all
containers. The agent will not restart them.
### Start a stopped service
```bash
mcp start metacrypt
```
Sets `active = true` and starts all containers.
### Deploy an update
Edit the service definition to update the image tag, then deploy:
```bash
mcp service edit metacrypt # opens in $EDITOR
mcp deploy metacrypt # deploys all components
mcp deploy metacrypt/web # deploy just the web component
```
### Push a config file to a node
```bash
mcp push metacrypt.toml metacrypt # → /srv/metacrypt/metacrypt.toml
mcp push cert.pem metacrypt certs/cert.pem # → /srv/metacrypt/certs/cert.pem
```
### Pull a file from a node
```bash
mcp pull metacrypt metacrypt.toml ./local-copy.toml
```
### Sync desired state
Push all service definitions to the agent without deploying:
```bash
mcp sync
```
### View service definition
```bash
mcp service show metacrypt # from agent registry
cat ~/.config/mcp/services/metacrypt.toml # local file
```
### Export service definition from agent
```bash
mcp service export metacrypt
```
Writes the agent's current spec to the local service definition file.
## Unsealing Metacrypt
Metacrypt starts sealed after any restart. Unseal via the API:
```bash
curl -sk -X POST https://metacrypt.svc.mcp.metacircular.net:8443/v1/unseal \
-H "Content-Type: application/json" \
-d '{"password":"<unseal-password>"}'
```
Or via the web UI at `https://metacrypt.svc.mcp.metacircular.net`.
**Important:** Restarting metacrypt-api requires unsealing. To avoid this
when updating just the UI, deploy only the web component:
```bash
mcp deploy metacrypt/web
```
## Agent Management
### Restart the agent
```bash
ssh rift "doas systemctl restart mcp-agent"
```
Containers keep running — the agent is stateless w.r.t. container
lifecycle. Podman's restart policy keeps containers up.
### View agent logs
```bash
ssh rift "doas journalctl -u mcp-agent -f" # follow
ssh rift "doas journalctl -u mcp-agent --since today" # today's logs
```
### Agent database backup
```bash
ssh rift "doas -u mcp /usr/local/bin/mcp-agent snapshot --config /srv/mcp/mcp-agent.toml"
```
Backups go to `/srv/mcp/backups/`.
### Update the agent binary
```bash
# On vade, in the mcp repo:
make clean && make mcp-agent
scp mcp-agent rift:/tmp/
ssh rift "doas systemctl stop mcp-agent && \
doas cp /tmp/mcp-agent /usr/local/bin/mcp-agent && \
doas systemctl start mcp-agent"
```
### Update the CLI binary
```bash
make clean && make mcp
cp mcp ~/.local/bin/
```
## Node Management
### List nodes
```bash
mcp node list
```
### Add a node
```bash
mcp node add <name> <address:port>
```
### Remove a node
```bash
mcp node remove <name>
```
## TLS Certificate Renewal
The agent's TLS cert is at `/srv/mcp/certs/cert.pem`. Check expiry:
```bash
ssh rift "openssl x509 -in /srv/mcp/certs/cert.pem -noout -enddate"
```
To renew (requires a Metacrypt token):
```bash
export METACRYPT_TOKEN="<token>"
ssh rift "curl -sk -X POST https://127.0.0.1:18443/v1/engine/request \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer $METACRYPT_TOKEN' \
-d '{
\"mount\": \"pki\",
\"operation\": \"issue\",
\"path\": \"web\",
\"data\": {
\"issuer\": \"web\",
\"common_name\": \"mcp-agent.svc.mcp.metacircular.net\",
\"profile\": \"server\",
\"dns_names\": [\"mcp-agent.svc.mcp.metacircular.net\"],
\"ip_addresses\": [\"100.95.252.120\", \"192.168.88.181\"],
\"ttl\": \"2160h\"
}
}'" > /tmp/cert-response.json
# Extract and install cert+key from the JSON response, then:
ssh rift "doas systemctl restart mcp-agent"
```
## Incident Procedures
### Service not running (drift detected)
1. `mcp status` — identify which service/component drifted.
2. Check agent logs: `ssh rift "doas journalctl -u mcp-agent --since '10 min ago'"`
3. Check container logs: `ssh rift "doas -u mcp podman logs <container-name>"`
4. Restart: `mcp restart <service>`
5. If metacrypt: unseal after restart.
### Agent unreachable
1. Check if the agent process is running: `ssh rift "doas systemctl status mcp-agent"`
2. If stopped: `ssh rift "doas systemctl start mcp-agent"`
3. Check logs for crash reason: `ssh rift "doas journalctl -u mcp-agent -n 50"`
4. Containers keep running independently — podman's restart policy handles them.
### Token expired
The MCP CLI returns `UNAUTHENTICATED` or `PERMISSION_DENIED` errors:
1. Check token: the mcp-agent service account token is at `~/.config/mcp/token`
2. Validate: `curl -sk -X POST -H "Authorization: Bearer $(cat ~/.config/mcp/token)" https://mcias.metacircular.net:8443/v1/token/validate`
3. If expired: generate a new service account token from MCIAS admin dashboard.
### Database corruption
The agent's SQLite database is at `/srv/mcp/mcp.db`:
1. Stop the agent: `ssh rift "doas systemctl stop mcp-agent"`
2. Restore from backup: `ssh rift "doas -u mcp cp /srv/mcp/backups/<latest>.db /srv/mcp/mcp.db"`
3. Start the agent: `ssh rift "doas systemctl start mcp-agent"`
4. Run `mcp sync` to re-push desired state.
If no backup exists, delete the database and re-bootstrap:
1. `ssh rift "doas -u mcp rm /srv/mcp/mcp.db"`
2. `ssh rift "doas systemctl start mcp-agent"` (creates fresh database)
3. `mcp sync` (pushes all service definitions)
### Disaster recovery (rift lost)
1. Provision new machine, connect to overlay network.
2. Apply NixOS config (creates mcp user, installs agent).
3. Install mcp-agent binary.
4. Restore `/srv/` from backups (each service's backup timer creates daily snapshots).
5. Provision TLS cert from Metacrypt.
6. Start agent: `doas systemctl start mcp-agent`
7. `mcp sync` from vade to push service definitions.
8. Unseal Metacrypt.
## File Locations
### On rift (agent)
| Path | Purpose |
|------|---------|
| `/srv/mcp/mcp-agent.toml` | Agent config |
| `/srv/mcp/mcp.db` | Registry database |
| `/srv/mcp/certs/` | Agent TLS cert and key |
| `/srv/mcp/backups/` | Database snapshots |
| `/srv/<service>/` | Service data directories |
### On vade (CLI)
| Path | Purpose |
|------|---------|
| `~/.config/mcp/mcp.toml` | CLI config |
| `~/.config/mcp/token` | MCIAS bearer token |
| `~/.config/mcp/services/` | Service definition files |

cmd/mcp/build.go (new file)

@@ -0,0 +1,168 @@
package main
import (
"context"
"fmt"
"path/filepath"
"strings"
"github.com/spf13/cobra"
"git.wntrmute.dev/kyle/mcp/internal/config"
"git.wntrmute.dev/kyle/mcp/internal/runtime"
"git.wntrmute.dev/kyle/mcp/internal/servicedef"
)
func buildCmd() *cobra.Command {
return &cobra.Command{
Use: "build <service>[/<image>]",
Short: "Build and push images for a service",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
cfg, err := config.LoadCLIConfig(cfgPath)
if err != nil {
return fmt.Errorf("load config: %w", err)
}
serviceName, imageFilter := parseServiceArg(args[0])
def, err := loadServiceDef(cmd, cfg, serviceName)
if err != nil {
return err
}
rt := &runtime.Podman{}
return buildServiceImages(cmd.Context(), cfg, def, rt, imageFilter)
},
}
}
// buildServiceImages builds and pushes images for a service definition.
// If imageFilter is non-empty, only the matching image is built.
func buildServiceImages(ctx context.Context, cfg *config.CLIConfig, def *servicedef.ServiceDef, rt *runtime.Podman, imageFilter string) error {
if def.Build == nil || len(def.Build.Images) == 0 {
return fmt.Errorf("service %q has no [build.images] configuration", def.Name)
}
if def.Path == "" {
return fmt.Errorf("service %q has no path configured", def.Name)
}
if cfg.Build.Workspace == "" {
return fmt.Errorf("build.workspace is not configured in %s", cfgPath)
}
sourceDir := filepath.Join(cfg.Build.Workspace, def.Path)
// Validate the filter before building anything so a typo fails fast,
// rather than being reported only after the loop completes.
if imageFilter != "" {
if _, ok := def.Build.Images[imageFilter]; !ok {
return fmt.Errorf("image %q not found in [build.images] for service %q", imageFilter, def.Name)
}
}
for imageName, dockerfile := range def.Build.Images {
if imageFilter != "" && imageName != imageFilter {
continue
}
imageRef := findImageRef(def, imageName)
if imageRef == "" {
return fmt.Errorf("no component references image %q in service %q", imageName, def.Name)
}
fmt.Printf("building %s from %s\n", imageRef, dockerfile)
if err := rt.Build(ctx, imageRef, sourceDir, dockerfile); err != nil {
return fmt.Errorf("build %s: %w", imageRef, err)
}
fmt.Printf("pushing %s\n", imageRef)
if err := rt.Push(ctx, imageRef); err != nil {
return fmt.Errorf("push %s: %w", imageRef, err)
}
}
return nil
}
// findImageRef finds the full image reference for a build image name by
// matching it against component image fields. The image name from
// [build.images] matches the repository name in the component's image
// reference (the path segment after the last slash, before the tag).
func findImageRef(def *servicedef.ServiceDef, imageName string) string {
for _, c := range def.Components {
repoName := extractRepoName(c.Image)
if repoName == imageName {
return c.Image
}
}
return ""
}
// extractRepoName returns the repository name from an image reference.
// Examples:
//
// "mcr.svc.mcp.metacircular.net:8443/mcr:v1.1.0" -> "mcr"
// "mcr.svc.mcp.metacircular.net:8443/mcr-web:v1.2.0" -> "mcr-web"
// "mcr-web:v1.2.0" -> "mcr-web"
// "mcr-web" -> "mcr-web"
func extractRepoName(image string) string {
// Strip registry prefix (everything up to and including the last slash).
name := image
if i := strings.LastIndex(image, "/"); i >= 0 {
name = image[i+1:]
}
// Strip tag.
if i := strings.LastIndex(name, ":"); i >= 0 {
name = name[:i]
}
return name
}
// ensureImages checks that all component images exist in the registry.
// If an image is missing and the service has build configuration, it
// builds and pushes the image. Returns nil if all images are available.
func ensureImages(ctx context.Context, cfg *config.CLIConfig, def *servicedef.ServiceDef, rt *runtime.Podman, component string) error {
if def.Build == nil || len(def.Build.Images) == 0 {
return nil // no build config, skip auto-build
}
for _, c := range def.Components {
if component != "" && c.Name != component {
continue
}
repoName := extractRepoName(c.Image)
dockerfile, ok := def.Build.Images[repoName]
if !ok {
continue // no Dockerfile for this image, skip
}
exists, err := rt.ImageExists(ctx, c.Image)
if err != nil {
return fmt.Errorf("check image %s: %w", c.Image, err)
}
if exists {
continue
}
// Image missing — build and push.
if def.Path == "" {
return fmt.Errorf("image %s not found in registry and service %q has no path configured", c.Image, def.Name)
}
if cfg.Build.Workspace == "" {
return fmt.Errorf("image %s not found in registry and build.workspace is not configured", c.Image)
}
sourceDir := filepath.Join(cfg.Build.Workspace, def.Path)
fmt.Printf("image %s not found, building from %s\n", c.Image, dockerfile)
if err := rt.Build(ctx, c.Image, sourceDir, dockerfile); err != nil {
return fmt.Errorf("auto-build %s: %w", c.Image, err)
}
fmt.Printf("pushing %s\n", c.Image)
if err := rt.Push(ctx, c.Image); err != nil {
return fmt.Errorf("auto-push %s: %w", c.Image, err)
}
}
return nil
}

View File

@@ -10,6 +10,7 @@ import (
mcpv1 "git.wntrmute.dev/kyle/mcp/gen/mcp/v1"
"git.wntrmute.dev/kyle/mcp/internal/config"
"git.wntrmute.dev/kyle/mcp/internal/runtime"
"git.wntrmute.dev/kyle/mcp/internal/servicedef"
)
@@ -31,6 +32,12 @@ func deployCmd() *cobra.Command {
return err
}
// Auto-build missing images if the service has build config.
rt := &runtime.Podman{}
if err := ensureImages(cmd.Context(), cfg, def, rt, component); err != nil {
return err
}
spec := servicedef.ToProto(def)
address, err := findNodeAddress(cfg, def.Node)

View File

@@ -6,6 +6,7 @@ import (
"crypto/x509"
"fmt"
"os"
"strings"
mcpv1 "git.wntrmute.dev/kyle/mcp/gen/mcp/v1"
"git.wntrmute.dev/kyle/mcp/internal/config"
@@ -68,5 +69,5 @@ func loadBearerToken(cfg *config.CLIConfig) (string, error) {
if err != nil {
return "", fmt.Errorf("read token from %q: %w (run 'mcp login' first)", cfg.Auth.TokenPath, err)
}
return string(token), nil
return strings.TrimSpace(string(token)), nil
}

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"log"
"os"
"path/filepath"
"github.com/spf13/cobra"
)
@@ -18,7 +19,11 @@ func main() {
Use: "mcp",
Short: "Metacircular Control Plane CLI",
}
root.PersistentFlags().StringVarP(&cfgPath, "config", "c", "", "config file path")
defaultCfg := ""
if home, err := os.UserHomeDir(); err == nil {
defaultCfg = filepath.Join(home, ".config", "mcp", "mcp.toml")
}
root.PersistentFlags().StringVarP(&cfgPath, "config", "c", defaultCfg, "config file path")
root.AddCommand(&cobra.Command{
Use: "version",
@@ -29,6 +34,7 @@ func main() {
})
root.AddCommand(loginCmd())
root.AddCommand(buildCmd())
root.AddCommand(deployCmd())
root.AddCommand(stopCmd())
root.AddCommand(startCmd())
@@ -42,6 +48,7 @@ func main() {
root.AddCommand(pushCmd())
root.AddCommand(pullCmd())
root.AddCommand(nodeCmd())
root.AddCommand(purgeCmd())
if err := root.Execute(); err != nil {
log.Fatal(err)

cmd/mcp/purge.go (new file)

@@ -0,0 +1,119 @@
package main
import (
"context"
"fmt"
mcpv1 "git.wntrmute.dev/kyle/mcp/gen/mcp/v1"
"git.wntrmute.dev/kyle/mcp/internal/config"
"git.wntrmute.dev/kyle/mcp/internal/servicedef"
"github.com/spf13/cobra"
)
func purgeCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "purge [service[/component]]",
Short: "Remove stale registry entries for components that are gone and undefined",
Long: `Purge removes registry entries that are both unwanted (not in any
current service definition) and gone (no corresponding container in the
runtime). It never stops or removes running containers.
Use --dry-run to preview what would be purged.`,
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
cfg, err := config.LoadCLIConfig(cfgPath)
if err != nil {
return fmt.Errorf("load config: %w", err)
}
dryRun, _ := cmd.Flags().GetBool("dry-run")
var service, component string
if len(args) == 1 {
service, component = parseServiceArg(args[0])
}
// Load all local service definitions to build the set of
// currently-defined service/component pairs.
definedComponents := buildDefinedComponents(cfg)
// If a specific service was given and we can find its node, only talk
// to that node. Otherwise, talk to all nodes.
targetNodes := cfg.Nodes
if service != "" {
if nodeName, addr, err := findServiceNode(cfg, service); err == nil {
targetNodes = []config.NodeConfig{{Name: nodeName, Address: addr}}
}
}
anyResults := false
for _, node := range targetNodes {
client, conn, err := dialAgent(node.Address, cfg)
if err != nil {
return fmt.Errorf("dial %s: %w", node.Name, err)
}
resp, err := client.PurgeComponent(context.Background(), &mcpv1.PurgeRequest{
Service: service,
Component: component,
DryRun: dryRun,
DefinedComponents: definedComponents,
})
_ = conn.Close() // close per iteration; a defer here would hold every conn until return
if err != nil {
return fmt.Errorf("purge on %s: %w", node.Name, err)
}
for _, r := range resp.GetResults() {
anyResults = true
if r.GetPurged() {
if dryRun {
fmt.Printf("would purge %s/%s (%s)\n", r.GetService(), r.GetComponent(), r.GetReason())
} else {
fmt.Printf("purged %s/%s (%s)\n", r.GetService(), r.GetComponent(), r.GetReason())
}
} else {
fmt.Printf("skipped %s/%s (%s)\n", r.GetService(), r.GetComponent(), r.GetReason())
}
}
}
if !anyResults {
fmt.Println("nothing to purge")
}
return nil
},
}
cmd.Flags().Bool("dry-run", false, "preview what would be purged without modifying the registry")
return cmd
}
// buildDefinedComponents reads all local service definition files and returns
// a list of "service/component" strings for every defined component.
func buildDefinedComponents(cfg *config.CLIConfig) []string {
defs, err := servicedef.LoadAll(cfg.Services.Dir)
if err != nil {
// If we can't read service definitions, return an empty list. The
// agent will then treat every component as undefined, making every
// gone component eligible for purge; running containers are still
// never touched.
return nil
}
var defined []string
for _, def := range defs {
for _, comp := range def.Components {
defined = append(defined, def.Name+"/"+comp.Name)
}
}
return defined
}

View File

@@ -11,6 +11,8 @@ RestartSec=5
User=mcp
Group=mcp
Environment=HOME=/srv/mcp
Environment=XDG_RUNTIME_DIR=/run/user/%U
NoNewPrivileges=true
ProtectSystem=strict

docs/bootstrap.md (new file)

@@ -0,0 +1,198 @@
# MCP Bootstrap Procedure
How to bring MCP up on a node for the first time, including migrating
existing containers from another user's podman instance.
## Prerequisites
- NixOS configuration applied with `configs/mcp.nix` (creates `mcp` user
with rootless podman, subuid/subgid, systemd service)
- MCIAS system account with `admin` role (for token validation and cert
provisioning)
- Metacrypt running (for TLS certificate issuance)
## Step 1: Provision TLS Certificate
Issue a cert from Metacrypt with DNS and IP SANs:
```bash
export METACRYPT_TOKEN="<admin-token>"
# From a machine that can reach Metacrypt (e.g., via loopback on rift):
curl -sk -X POST https://127.0.0.1:18443/v1/engine/request \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $METACRYPT_TOKEN" \
-d '{
"mount": "pki",
"operation": "issue",
"path": "web",
"data": {
"issuer": "web",
"common_name": "mcp-agent.svc.mcp.metacircular.net",
"profile": "server",
"dns_names": ["mcp-agent.svc.mcp.metacircular.net"],
"ip_addresses": ["<tailscale-ip>", "<lan-ip>"],
"ttl": "2160h"
}
}' > cert-response.json
# Extract cert and key from the JSON response and install:
doas cp cert.pem /srv/mcp/certs/cert.pem
doas cp key.pem /srv/mcp/certs/key.pem
doas chown mcp:mcp /srv/mcp/certs/cert.pem /srv/mcp/certs/key.pem
doas chmod 600 /srv/mcp/certs/cert.pem /srv/mcp/certs/key.pem
```
## Step 2: Add DNS Record
Add an A record for `mcp-agent.svc.mcp.metacircular.net` pointing to the
node's IP in the MCNS zone file, bump the serial, restart CoreDNS.
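The record itself is a single zone-file line (TTL and IP shown as placeholders; match the conventions already used in the MCNS zone):

```
mcp-agent.svc.mcp.metacircular.net. 300 IN A <tailscale-ip>
```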
## Step 3: Write Agent Config
Create `/srv/mcp/mcp-agent.toml`:
```toml
[server]
grpc_addr = "<tailscale-ip>:9444"
tls_cert = "/srv/mcp/certs/cert.pem"
tls_key = "/srv/mcp/certs/key.pem"
[database]
path = "/srv/mcp/mcp.db"
[mcias]
server_url = "https://mcias.metacircular.net:8443"
service_name = "mcp-agent"
[agent]
node_name = "<node-name>"
container_runtime = "podman"
[monitor]
interval = "60s"
alert_command = []
cooldown = "15m"
flap_threshold = 3
flap_window = "10m"
retention = "30d"
[log]
level = "info"
```
## Step 4: Install Agent Binary
```bash
scp mcp-agent <node>:/tmp/
ssh <node> "doas cp /tmp/mcp-agent /usr/local/bin/mcp-agent"
```
## Step 5: Start the Agent
```bash
ssh <node> "doas systemctl start mcp-agent"
ssh <node> "doas systemctl status mcp-agent"
```
## Step 6: Configure CLI
On the operator's workstation, create `~/.config/mcp/mcp.toml` and save
the MCIAS admin service account token to `~/.config/mcp/token`.
## Step 7: Migrate Containers (if existing)
If containers are running under another user (e.g., `kyle`), migrate them
to the `mcp` user's podman. Process each service in dependency order:
**Dependency order:** Metacrypt → MC-Proxy → MCR → MCNS
For each service:
```bash
# 1. Stop containers under the old user
ssh <node> "podman stop <container> && podman rm <container>"
# 2. Transfer ownership of data directory
ssh <node> "doas chown -R mcp:mcp /srv/<service>"
# 3. Transfer images to mcp's podman
ssh <node> "podman save <image> -o /tmp/<service>.tar"
ssh <node> "doas su -l -s /bin/sh mcp -c 'XDG_RUNTIME_DIR=/run/user/<uid> podman load -i /tmp/<service>.tar'"
# 4. Start containers under mcp (with new naming convention)
ssh <node> "doas su -l -s /bin/sh mcp -c 'XDG_RUNTIME_DIR=/run/user/<uid> podman run -d \
--name <service>-<component> \
--network mcpnet \
--restart unless-stopped \
--user 0:0 \
-p <ports> \
-v /srv/<service>:/srv/<service> \
<image> <cmd>'"
```
**Container naming convention:** `<service>-<component>` (e.g.,
`metacrypt-api`, `metacrypt-web`, `mc-proxy`).
**Network:** Services whose components need to communicate (metacrypt
api↔web, mcr api↔web) must be on the same podman network with DNS
enabled. Create with `podman network create mcpnet`.
**Config updates:** If service configs reference container names for
inter-component communication (e.g., `vault_grpc = "metacrypt:9443"`),
update them to use the new names (e.g., `vault_grpc = "metacrypt-api:9443"`).
**Unseal Metacrypt** after migration — it starts sealed.
## Step 8: Adopt Containers
```bash
mcp adopt metacrypt
mcp adopt mc-proxy
mcp adopt mcr
mcp adopt mcns
```
## Step 9: Export and Complete Service Definitions
```bash
mcp service export metacrypt
mcp service export mc-proxy
mcp service export mcr
mcp service export mcns
```
The exported files will have name + image only. Edit each file to add the
full container spec: network, ports, volumes, user, restart, cmd.
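A completed definition might look like the sketch below. Key names and values are illustrative, assembled from the fields named above and the migration examples in Step 7; check an existing service file for the exact schema:

```toml
name = "mcr"
node = "rift"

[[components]]
name = "api"
image = "mcr.svc.mcp.metacircular.net:8443/mcr:v1.1.0"
network = "mcpnet"
user = "0:0"
restart = "unless-stopped"
ports = ["8443:8443"]
volumes = ["/srv/mcr:/srv/mcr"]
cmd = ["/usr/local/bin/mcr"]
```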
Then sync to push the complete specs:
```bash
mcp sync
```
## Step 10: Verify
```bash
mcp status
```
All services should show `desired: running`, `observed: running`, no drift.
## Lessons Learned (from first deployment, 2026-03-26)
- **NixOS systemd sandbox**: `ProtectHome=true` blocks `/run/user` which
rootless podman needs. Use `ProtectHome=false`. `ProtectSystem=strict`
also blocks it; use `full` instead.
- **PATH**: the agent's systemd unit needs `PATH=/run/current-system/sw/bin`
to find podman.
- **XDG_RUNTIME_DIR**: must be set to `/run/user/<uid>` for rootless podman.
Pin the UID in NixOS config to avoid drift.
- **Podman ps JSON**: the `Command` field is `[]string`, not `string`.
- **Container naming**: `mc-proxy` (service with hyphen) breaks naive split
on `-`. The agent uses registry-aware splitting.
- **Token whitespace**: token files with trailing newlines cause gRPC header
errors. The CLI trims whitespace.
- **MCR auth**: rootless podman under a new user can't pull from MCR without
OCI token auth. Workaround: `podman save` + `podman load` to transfer
images.

flake.lock (generated, new file)

@@ -0,0 +1,27 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1774388614,
"narHash": "sha256-tFwzTI0DdDzovdE9+Ras6CUss0yn8P9XV4Ja6RjA+nU=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "1073dad219cb244572b74da2b20c7fe39cb3fa9e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-25.11",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

flake.nix (new file)

@@ -0,0 +1,48 @@
{
description = "mcp - Metacircular Control Plane";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
};
outputs =
{ self, nixpkgs }:
let
system = "x86_64-linux";
pkgs = nixpkgs.legacyPackages.${system};
version = "0.1.0";
in
{
packages.${system} = {
default = pkgs.buildGoModule {
pname = "mcp";
inherit version;
src = ./.;
vendorHash = null;
subPackages = [
"cmd/mcp"
];
ldflags = [
"-s"
"-w"
"-X main.version=${version}"
];
};
mcp-agent = pkgs.buildGoModule {
pname = "mcp-agent";
inherit version;
src = ./.;
vendorHash = null;
subPackages = [
"cmd/mcp-agent"
];
ldflags = [
"-s"
"-w"
"-X main.version=${version}"
];
};
};
};
}

File diff suppressed because it is too large.

View File

@@ -28,6 +28,7 @@ const (
McpAgentService_GetServiceStatus_FullMethodName = "/mcp.v1.McpAgentService/GetServiceStatus"
McpAgentService_LiveCheck_FullMethodName = "/mcp.v1.McpAgentService/LiveCheck"
McpAgentService_AdoptContainers_FullMethodName = "/mcp.v1.McpAgentService/AdoptContainers"
McpAgentService_PurgeComponent_FullMethodName = "/mcp.v1.McpAgentService/PurgeComponent"
McpAgentService_PushFile_FullMethodName = "/mcp.v1.McpAgentService/PushFile"
McpAgentService_PullFile_FullMethodName = "/mcp.v1.McpAgentService/PullFile"
McpAgentService_NodeStatus_FullMethodName = "/mcp.v1.McpAgentService/NodeStatus"
@@ -50,6 +51,8 @@ type McpAgentServiceClient interface {
LiveCheck(ctx context.Context, in *LiveCheckRequest, opts ...grpc.CallOption) (*LiveCheckResponse, error)
// Adopt
AdoptContainers(ctx context.Context, in *AdoptContainersRequest, opts ...grpc.CallOption) (*AdoptContainersResponse, error)
// Purge
PurgeComponent(ctx context.Context, in *PurgeRequest, opts ...grpc.CallOption) (*PurgeResponse, error)
// File transfer
PushFile(ctx context.Context, in *PushFileRequest, opts ...grpc.CallOption) (*PushFileResponse, error)
PullFile(ctx context.Context, in *PullFileRequest, opts ...grpc.CallOption) (*PullFileResponse, error)
@@ -155,6 +158,16 @@ func (c *mcpAgentServiceClient) AdoptContainers(ctx context.Context, in *AdoptCo
return out, nil
}
func (c *mcpAgentServiceClient) PurgeComponent(ctx context.Context, in *PurgeRequest, opts ...grpc.CallOption) (*PurgeResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(PurgeResponse)
err := c.cc.Invoke(ctx, McpAgentService_PurgeComponent_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *mcpAgentServiceClient) PushFile(ctx context.Context, in *PushFileRequest, opts ...grpc.CallOption) (*PushFileResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(PushFileResponse)
@@ -202,6 +215,8 @@ type McpAgentServiceServer interface {
LiveCheck(context.Context, *LiveCheckRequest) (*LiveCheckResponse, error)
// Adopt
AdoptContainers(context.Context, *AdoptContainersRequest) (*AdoptContainersResponse, error)
// Purge
PurgeComponent(context.Context, *PurgeRequest) (*PurgeResponse, error)
// File transfer
PushFile(context.Context, *PushFileRequest) (*PushFileResponse, error)
PullFile(context.Context, *PullFileRequest) (*PullFileResponse, error)
@@ -244,6 +259,9 @@ func (UnimplementedMcpAgentServiceServer) LiveCheck(context.Context, *LiveCheckR
func (UnimplementedMcpAgentServiceServer) AdoptContainers(context.Context, *AdoptContainersRequest) (*AdoptContainersResponse, error) {
return nil, status.Error(codes.Unimplemented, "method AdoptContainers not implemented")
}
func (UnimplementedMcpAgentServiceServer) PurgeComponent(context.Context, *PurgeRequest) (*PurgeResponse, error) {
return nil, status.Error(codes.Unimplemented, "method PurgeComponent not implemented")
}
func (UnimplementedMcpAgentServiceServer) PushFile(context.Context, *PushFileRequest) (*PushFileResponse, error) {
return nil, status.Error(codes.Unimplemented, "method PushFile not implemented")
}
@@ -436,6 +454,24 @@ func _McpAgentService_AdoptContainers_Handler(srv interface{}, ctx context.Conte
return interceptor(ctx, in, info, handler)
}
func _McpAgentService_PurgeComponent_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PurgeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(McpAgentServiceServer).PurgeComponent(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: McpAgentService_PurgeComponent_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(McpAgentServiceServer).PurgeComponent(ctx, req.(*PurgeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _McpAgentService_PushFile_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PushFileRequest)
if err := dec(in); err != nil {
@@ -533,6 +569,10 @@ var McpAgentService_ServiceDesc = grpc.ServiceDesc{
MethodName: "AdoptContainers",
Handler: _McpAgentService_AdoptContainers_Handler,
},
{
MethodName: "PurgeComponent",
Handler: _McpAgentService_PurgeComponent_Handler,
},
{
MethodName: "PushFile",
Handler: _McpAgentService_PushFile_Handler,

View File

@@ -31,6 +31,7 @@ type Agent struct {
Runtime runtime.Runtime
Monitor *monitor.Monitor
Logger *slog.Logger
PortAlloc *PortAllocator
}
// Run starts the agent: opens the database, sets up the gRPC server with
@@ -56,6 +57,7 @@ func Run(cfg *config.AgentConfig) error {
Runtime: rt,
Monitor: mon,
Logger: logger,
PortAlloc: NewPortAllocator(),
}
tlsCert, err := tls.LoadX509KeyPair(cfg.Server.TLSCert, cfg.Server.TLSKey)

View File

@@ -5,6 +5,7 @@ import (
"database/sql"
"errors"
"fmt"
"strings"
mcpv1 "git.wntrmute.dev/kyle/mcp/gen/mcp/v1"
"git.wntrmute.dev/kyle/mcp/internal/registry"
@@ -49,7 +50,7 @@ func (a *Agent) Deploy(ctx context.Context, req *mcpv1.DeployRequest) (*mcpv1.De
// deployComponent handles the full deploy lifecycle for a single component.
func (a *Agent) deployComponent(ctx context.Context, serviceName string, cs *mcpv1.ComponentSpec, active bool) *mcpv1.ComponentResult {
compName := cs.GetName()
containerName := serviceName + "-" + compName
containerName := ContainerNameFor(serviceName, compName)
desiredState := "running"
if !active {
@@ -58,6 +59,25 @@ func (a *Agent) deployComponent(ctx context.Context, serviceName string, cs *mcp
a.Logger.Info("deploying component", "service", serviceName, "component", compName, "desired", desiredState)
// Convert proto routes to registry routes.
var regRoutes []registry.Route
for _, r := range cs.GetRoutes() {
mode := r.GetMode()
if mode == "" {
mode = "l4"
}
name := r.GetName()
if name == "" {
name = "default"
}
regRoutes = append(regRoutes, registry.Route{
Name: name,
Port: int(r.GetPort()),
Mode: mode,
Hostname: r.GetHostname(),
})
}
regComp := &registry.Component{
Name: compName,
Service: serviceName,
@@ -70,6 +90,7 @@ func (a *Agent) deployComponent(ctx context.Context, serviceName string, cs *mcp
Ports: cs.GetPorts(),
Volumes: cs.GetVolumes(),
Cmd: cs.GetCmd(),
Routes: regRoutes,
}
if err := ensureComponent(a.DB, regComp); err != nil {
@@ -89,16 +110,34 @@ func (a *Agent) deployComponent(ctx context.Context, serviceName string, cs *mcp
_ = a.Runtime.Stop(ctx, containerName) // may not exist yet
_ = a.Runtime.Remove(ctx, containerName) // may not exist yet
// Build the container spec. If the component has routes, use route-based
// port allocation and env injection. Otherwise, fall back to legacy ports.
runSpec := runtime.ContainerSpec{
Name: containerName,
Image: cs.GetImage(),
Network: cs.GetNetwork(),
User: cs.GetUser(),
Restart: cs.GetRestart(),
Ports: cs.GetPorts(),
Volumes: cs.GetVolumes(),
Cmd: cs.GetCmd(),
Env: cs.GetEnv(),
}
if len(regRoutes) > 0 && a.PortAlloc != nil {
ports, env, err := a.allocateRoutePorts(serviceName, compName, regRoutes)
if err != nil {
return &mcpv1.ComponentResult{
Name: compName,
Error: fmt.Sprintf("allocate route ports: %v", err),
}
}
runSpec.Ports = ports
runSpec.Env = append(runSpec.Env, env...)
} else {
// Legacy: use ports directly from the spec.
runSpec.Ports = cs.GetPorts()
}
if err := a.Runtime.Run(ctx, runSpec); err != nil {
_ = registry.UpdateComponentState(a.DB, serviceName, compName, "", "removed")
return &mcpv1.ComponentResult{
@@ -117,6 +156,36 @@ func (a *Agent) deployComponent(ctx context.Context, serviceName string, cs *mcp
}
}
// allocateRoutePorts allocates host ports for each route, stores them in
// the registry, and returns the port mappings and env vars for the container.
func (a *Agent) allocateRoutePorts(service, component string, routes []registry.Route) ([]string, []string, error) {
var ports []string
var env []string
for _, r := range routes {
hostPort, err := a.PortAlloc.Allocate()
if err != nil {
return nil, nil, fmt.Errorf("allocate port for route %q: %w", r.Name, err)
}
if err := registry.UpdateRouteHostPort(a.DB, service, component, r.Name, hostPort); err != nil {
a.PortAlloc.Release(hostPort)
return nil, nil, fmt.Errorf("store host port for route %q: %w", r.Name, err)
}
ports = append(ports, fmt.Sprintf("127.0.0.1:%d:%d", hostPort, r.Port))
if len(routes) == 1 {
env = append(env, fmt.Sprintf("PORT=%d", hostPort))
} else {
envName := "PORT_" + strings.ToUpper(r.Name)
env = append(env, fmt.Sprintf("%s=%d", envName, hostPort))
}
}
return ports, env, nil
}
// ensureService creates the service if it does not exist, or updates its
// active flag if it does.
func ensureService(db *sql.DB, name string, active bool) error {

View File

@@ -27,7 +27,7 @@ func (a *Agent) StopService(ctx context.Context, req *mcpv1.StopServiceRequest)
var results []*mcpv1.ComponentResult
for _, c := range components {
containerName := req.GetName() + "-" + c.Name
containerName := ContainerNameFor(req.GetName(), c.Name)
r := &mcpv1.ComponentResult{Name: c.Name, Success: true}
if err := a.Runtime.Stop(ctx, containerName); err != nil {
@@ -94,7 +94,7 @@ func (a *Agent) RestartService(ctx context.Context, req *mcpv1.RestartServiceReq
// startComponent removes any existing container and runs a fresh one from
// the registry spec, then updates state to running.
func startComponent(ctx context.Context, a *Agent, service string, c *registry.Component) *mcpv1.ComponentResult {
containerName := service + "-" + c.Name
containerName := ContainerNameFor(service, c.Name)
r := &mcpv1.ComponentResult{Name: c.Name, Success: true}
// Remove any pre-existing container; ignore errors for non-existent ones.
@@ -118,7 +118,7 @@ func startComponent(ctx context.Context, a *Agent, service string, c *registry.C
// restartComponent stops, removes, and re-creates a container without
// changing the desired_state in the registry.
func restartComponent(ctx context.Context, a *Agent, service string, c *registry.Component) *mcpv1.ComponentResult {
containerName := service + "-" + c.Name
containerName := ContainerNameFor(service, c.Name)
r := &mcpv1.ComponentResult{Name: c.Name, Success: true}
_ = a.Runtime.Stop(ctx, containerName)
@@ -142,7 +142,7 @@ func restartComponent(ctx context.Context, a *Agent, service string, c *registry
// componentToSpec builds a runtime.ContainerSpec from a registry Component.
func componentToSpec(service string, c *registry.Component) runtime.ContainerSpec {
return runtime.ContainerSpec{
Name: service + "-" + c.Name,
Name: ContainerNameFor(service, c.Name),
Image: c.Image,
Network: c.Network,
User: c.UserSpec,

internal/agent/names.go (new file)

@@ -0,0 +1,34 @@
package agent
import "strings"
// ContainerNameFor returns the expected container name for a service and
// component. For single-component services where the component name equals
// the service name, the container name is just the service name (e.g.,
// "mc-proxy" not "mc-proxy-mc-proxy").
func ContainerNameFor(service, component string) string {
if service == component {
return service
}
return service + "-" + component
}
// SplitContainerName splits a container name into service and component parts.
// It checks known service names first to handle names like "mc-proxy" where a
// naive split on "-" would produce the wrong result. If no known service
// matches, it falls back to splitting on the first "-".
func SplitContainerName(name string, knownServices map[string]bool) (service, component string) {
if knownServices[name] {
return name, name
}
for svc := range knownServices {
prefix := svc + "-"
if strings.HasPrefix(name, prefix) && len(name) > len(prefix) {
return svc, name[len(prefix):]
}
}
if i := strings.Index(name, "-"); i >= 0 {
return name[:i], name[i+1:]
}
return name, name
}

View File

@@ -0,0 +1,69 @@
package agent
import (
"fmt"
"math/rand/v2"
"net"
"sync"
)
const (
portRangeMin = 10000
portRangeMax = 60000
maxRetries = 10
)
// PortAllocator manages host port allocation for route-based deployments.
// It tracks allocated ports within the agent session to avoid double-allocation.
type PortAllocator struct {
mu sync.Mutex
allocated map[int]bool
}
// NewPortAllocator creates a new PortAllocator.
func NewPortAllocator() *PortAllocator {
return &PortAllocator{
allocated: make(map[int]bool),
}
}
// Allocate picks a free port in range [10000, 60000).
// It tries random ports, checks availability with net.Listen, and retries up to 10 times.
func (pa *PortAllocator) Allocate() (int, error) {
pa.mu.Lock()
defer pa.mu.Unlock()
for range maxRetries {
port := portRangeMin + rand.IntN(portRangeMax-portRangeMin)
if pa.allocated[port] {
continue
}
if !isPortFree(port) {
continue
}
pa.allocated[port] = true
return port, nil
}
return 0, fmt.Errorf("failed to allocate port after %d attempts", maxRetries)
}
// Release marks a port as available again.
func (pa *PortAllocator) Release(port int) {
pa.mu.Lock()
defer pa.mu.Unlock()
delete(pa.allocated, port)
}
// isPortFree checks if a TCP port is available by attempting to listen on it.
func isPortFree(port int) bool {
ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
if err != nil {
return false
}
_ = ln.Close()
return true
}

View File

@@ -0,0 +1,65 @@
package agent
import (
"testing"
)
func TestPortAllocator_Allocate(t *testing.T) {
pa := NewPortAllocator()
port, err := pa.Allocate()
if err != nil {
t.Fatalf("allocate: %v", err)
}
if port < portRangeMin || port >= portRangeMax {
t.Fatalf("port %d out of range [%d, %d)", port, portRangeMin, portRangeMax)
}
}
func TestPortAllocator_NoDuplicates(t *testing.T) {
pa := NewPortAllocator()
ports := make(map[int]bool)
for range 20 {
port, err := pa.Allocate()
if err != nil {
t.Fatalf("allocate: %v", err)
}
if ports[port] {
t.Fatalf("duplicate port allocated: %d", port)
}
ports[port] = true
}
}
func TestPortAllocator_Release(t *testing.T) {
pa := NewPortAllocator()
port, err := pa.Allocate()
if err != nil {
t.Fatalf("allocate: %v", err)
}
pa.Release(port)
// After release, the port should no longer be tracked as allocated.
pa.mu.Lock()
if pa.allocated[port] {
t.Fatal("port should not be tracked after release")
}
pa.mu.Unlock()
}
func TestPortAllocator_PortIsFree(t *testing.T) {
pa := NewPortAllocator()
port, err := pa.Allocate()
if err != nil {
t.Fatalf("allocate: %v", err)
}
// The port should be free (we only track it, we don't hold the listener).
if !isPortFree(port) {
t.Fatalf("allocated port %d should be free on the system", port)
}
}

155
internal/agent/purge.go Normal file
View File

@@ -0,0 +1,155 @@
package agent
import (
"context"
"fmt"
mcpv1 "git.wntrmute.dev/kyle/mcp/gen/mcp/v1"
"git.wntrmute.dev/kyle/mcp/internal/registry"
)
// PurgeComponent removes stale registry entries for components that are both
// gone (observed state is removed/unknown/exited) and unwanted (not in any
// current service definition). It never touches running containers.
func (a *Agent) PurgeComponent(ctx context.Context, req *mcpv1.PurgeRequest) (*mcpv1.PurgeResponse, error) {
a.Logger.Info("PurgeComponent",
"service", req.GetService(),
"component", req.GetComponent(),
"dry_run", req.GetDryRun(),
)
// Build a set of defined service/component pairs for quick lookup.
defined := make(map[string]bool, len(req.GetDefinedComponents()))
for _, dc := range req.GetDefinedComponents() {
defined[dc] = true
}
// Determine which services to examine.
var services []registry.Service
if req.GetService() != "" {
svc, err := registry.GetService(a.DB, req.GetService())
if err != nil {
return nil, fmt.Errorf("get service %q: %w", req.GetService(), err)
}
services = []registry.Service{*svc}
} else {
var err error
services, err = registry.ListServices(a.DB)
if err != nil {
return nil, fmt.Errorf("list services: %w", err)
}
}
var results []*mcpv1.PurgeResult
for _, svc := range services {
components, err := registry.ListComponents(a.DB, svc.Name)
if err != nil {
return nil, fmt.Errorf("list components for %q: %w", svc.Name, err)
}
// If a specific component was requested, filter to just that one.
if req.GetComponent() != "" {
var filtered []registry.Component
for _, c := range components {
if c.Name == req.GetComponent() {
filtered = append(filtered, c)
}
}
components = filtered
}
for _, comp := range components {
result := a.evaluatePurge(svc.Name, &comp, defined, req.GetDryRun())
results = append(results, result)
}
// If all components of this service were purged (not dry-run),
// check if the service should be cleaned up too.
if !req.GetDryRun() {
remaining, err := registry.ListComponents(a.DB, svc.Name)
if err != nil {
a.Logger.Warn("failed to check remaining components", "service", svc.Name, "err", err)
continue
}
if len(remaining) == 0 {
if err := registry.DeleteService(a.DB, svc.Name); err != nil {
a.Logger.Warn("failed to delete empty service", "service", svc.Name, "err", err)
} else {
a.Logger.Info("purged empty service", "service", svc.Name)
}
}
}
}
return &mcpv1.PurgeResponse{Results: results}, nil
}
// purgeableStates are observed states that indicate a component's container
// is gone and the registry entry can be safely removed.
var purgeableStates = map[string]bool{
"removed": true,
"unknown": true,
"exited": true,
}
// evaluatePurge checks whether a single component is eligible for purge and,
// if not in dry-run mode, deletes it.
func (a *Agent) evaluatePurge(service string, comp *registry.Component, defined map[string]bool, dryRun bool) *mcpv1.PurgeResult {
key := service + "/" + comp.Name
// Safety: refuse to purge components with a live container.
if !purgeableStates[comp.ObservedState] {
return &mcpv1.PurgeResult{
Service: service,
Component: comp.Name,
Purged: false,
Reason: fmt.Sprintf("observed=%s, container still exists", comp.ObservedState),
}
}
// Don't purge components that are still in service definitions.
if defined[key] {
return &mcpv1.PurgeResult{
Service: service,
Component: comp.Name,
Purged: false,
Reason: "still in service definitions",
}
}
reason := fmt.Sprintf("observed=%s, not in service definitions", comp.ObservedState)
if dryRun {
return &mcpv1.PurgeResult{
Service: service,
Component: comp.Name,
Purged: true,
Reason: reason,
}
}
// Delete events first (events table has no FK to components).
if err := registry.DeleteComponentEvents(a.DB, service, comp.Name); err != nil {
a.Logger.Warn("failed to delete events during purge", "service", service, "component", comp.Name, "err", err)
}
// Delete the component (CASCADE handles ports, volumes, cmd).
if err := registry.DeleteComponent(a.DB, service, comp.Name); err != nil {
return &mcpv1.PurgeResult{
Service: service,
Component: comp.Name,
Purged: false,
Reason: fmt.Sprintf("delete failed: %v", err),
}
}
a.Logger.Info("purged component", "service", service, "component", comp.Name, "reason", reason)
return &mcpv1.PurgeResult{
Service: service,
Component: comp.Name,
Purged: true,
Reason: reason,
}
}

View File

@@ -0,0 +1,405 @@
package agent
import (
"context"
"testing"
mcpv1 "git.wntrmute.dev/kyle/mcp/gen/mcp/v1"
"git.wntrmute.dev/kyle/mcp/internal/registry"
)
func TestPurgeComponentRemoved(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
// Set up a service with a stale component.
if err := registry.CreateService(a.DB, "mcns", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "coredns",
Service: "mcns",
Image: "coredns:latest",
DesiredState: "running",
ObservedState: "removed",
}); err != nil {
t.Fatalf("create component: %v", err)
}
// Insert an event for this component.
if err := registry.InsertEvent(a.DB, "mcns", "coredns", "running", "removed"); err != nil {
t.Fatalf("insert event: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{
DefinedComponents: []string{"mcns/mcns"},
})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if len(resp.Results) != 1 {
t.Fatalf("expected 1 result, got %d", len(resp.Results))
}
r := resp.Results[0]
if !r.Purged {
t.Fatalf("expected purged=true, got reason: %s", r.Reason)
}
if r.Service != "mcns" || r.Component != "coredns" {
t.Fatalf("unexpected result: %s/%s", r.Service, r.Component)
}
// Verify component was deleted.
_, err = registry.GetComponent(a.DB, "mcns", "coredns")
if err == nil {
t.Fatal("component should have been deleted")
}
// Service should also be deleted since it has no remaining components.
_, err = registry.GetService(a.DB, "mcns")
if err == nil {
t.Fatal("service should have been deleted (no remaining components)")
}
}
func TestPurgeRefusesRunning(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
if err := registry.CreateService(a.DB, "mcr", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "api",
Service: "mcr",
Image: "mcr:latest",
DesiredState: "running",
ObservedState: "running",
}); err != nil {
t.Fatalf("create component: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{
Service: "mcr",
Component: "api",
})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if len(resp.Results) != 1 {
t.Fatalf("expected 1 result, got %d", len(resp.Results))
}
if resp.Results[0].Purged {
t.Fatal("should not purge a running component")
}
// Verify component still exists.
_, err = registry.GetComponent(a.DB, "mcr", "api")
if err != nil {
t.Fatalf("component should still exist: %v", err)
}
}
func TestPurgeRefusesStopped(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
if err := registry.CreateService(a.DB, "mcr", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "api",
Service: "mcr",
Image: "mcr:latest",
DesiredState: "stopped",
ObservedState: "stopped",
}); err != nil {
t.Fatalf("create component: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{
Service: "mcr",
Component: "api",
})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if resp.Results[0].Purged {
t.Fatal("should not purge a stopped component")
}
}
func TestPurgeSkipsDefinedComponent(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
if err := registry.CreateService(a.DB, "mcns", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "mcns",
Service: "mcns",
Image: "mcns:latest",
DesiredState: "running",
ObservedState: "exited",
}); err != nil {
t.Fatalf("create component: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{
DefinedComponents: []string{"mcns/mcns"},
})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if len(resp.Results) != 1 {
t.Fatalf("expected 1 result, got %d", len(resp.Results))
}
if resp.Results[0].Purged {
t.Fatal("should not purge a component that is still in service definitions")
}
if resp.Results[0].Reason != "still in service definitions" {
t.Fatalf("unexpected reason: %s", resp.Results[0].Reason)
}
}
func TestPurgeDryRun(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
if err := registry.CreateService(a.DB, "mcns", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "coredns",
Service: "mcns",
Image: "coredns:latest",
DesiredState: "running",
ObservedState: "removed",
}); err != nil {
t.Fatalf("create component: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{
DryRun: true,
DefinedComponents: []string{"mcns/mcns"},
})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if len(resp.Results) != 1 {
t.Fatalf("expected 1 result, got %d", len(resp.Results))
}
if !resp.Results[0].Purged {
t.Fatal("dry run should report purged=true for eligible components")
}
// Verify component was NOT deleted (dry run).
_, err = registry.GetComponent(a.DB, "mcns", "coredns")
if err != nil {
t.Fatalf("component should still exist after dry run: %v", err)
}
}
func TestPurgeServiceFilter(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
// Create two services.
if err := registry.CreateService(a.DB, "mcns", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "coredns", Service: "mcns", Image: "coredns:latest",
DesiredState: "running", ObservedState: "removed",
}); err != nil {
t.Fatalf("create component: %v", err)
}
if err := registry.CreateService(a.DB, "mcr", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "old", Service: "mcr", Image: "old:latest",
DesiredState: "running", ObservedState: "removed",
}); err != nil {
t.Fatalf("create component: %v", err)
}
// Purge only mcns.
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{
Service: "mcns",
})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if len(resp.Results) != 1 {
t.Fatalf("expected 1 result, got %d", len(resp.Results))
}
if resp.Results[0].Service != "mcns" {
t.Fatalf("expected mcns, got %s", resp.Results[0].Service)
}
// mcr/old should still exist.
_, err = registry.GetComponent(a.DB, "mcr", "old")
if err != nil {
t.Fatalf("mcr/old should still exist: %v", err)
}
}
func TestPurgeServiceDeletedWhenEmpty(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
if err := registry.CreateService(a.DB, "mcns", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "coredns", Service: "mcns", Image: "coredns:latest",
DesiredState: "running", ObservedState: "removed",
}); err != nil {
t.Fatalf("create component: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "old-thing", Service: "mcns", Image: "old:latest",
DesiredState: "stopped", ObservedState: "unknown",
}); err != nil {
t.Fatalf("create component: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
// Both components should be purged.
if len(resp.Results) != 2 {
t.Fatalf("expected 2 results, got %d", len(resp.Results))
}
for _, r := range resp.Results {
if !r.Purged {
t.Fatalf("expected purged=true for %s/%s: %s", r.Service, r.Component, r.Reason)
}
}
// Service should be deleted.
_, err = registry.GetService(a.DB, "mcns")
if err == nil {
t.Fatal("service should have been deleted")
}
}
func TestPurgeServiceKeptWhenComponentsRemain(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
if err := registry.CreateService(a.DB, "mcns", true); err != nil {
t.Fatalf("create service: %v", err)
}
// Stale component (will be purged).
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "coredns", Service: "mcns", Image: "coredns:latest",
DesiredState: "running", ObservedState: "removed",
}); err != nil {
t.Fatalf("create component: %v", err)
}
// Live component (will not be purged).
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "mcns", Service: "mcns", Image: "mcns:latest",
DesiredState: "running", ObservedState: "running",
}); err != nil {
t.Fatalf("create component: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if len(resp.Results) != 2 {
t.Fatalf("expected 2 results, got %d", len(resp.Results))
}
// coredns should be purged, mcns should not.
purged := 0
for _, r := range resp.Results {
if r.Purged {
purged++
if r.Component != "coredns" {
t.Fatalf("expected coredns to be purged, got %s", r.Component)
}
}
}
if purged != 1 {
t.Fatalf("expected 1 purged, got %d", purged)
}
// Service should still exist.
_, err = registry.GetService(a.DB, "mcns")
if err != nil {
t.Fatalf("service should still exist: %v", err)
}
}
func TestPurgeExitedState(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
if err := registry.CreateService(a.DB, "test", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "old", Service: "test", Image: "old:latest",
DesiredState: "stopped", ObservedState: "exited",
}); err != nil {
t.Fatalf("create component: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if len(resp.Results) != 1 || !resp.Results[0].Purged {
t.Fatalf("exited component should be purgeable")
}
}
func TestPurgeUnknownState(t *testing.T) {
rt := &fakeRuntime{}
a := newTestAgent(t, rt)
ctx := context.Background()
if err := registry.CreateService(a.DB, "test", true); err != nil {
t.Fatalf("create service: %v", err)
}
if err := registry.CreateComponent(a.DB, &registry.Component{
Name: "ghost", Service: "test", Image: "ghost:latest",
DesiredState: "running", ObservedState: "unknown",
}); err != nil {
t.Fatalf("create component: %v", err)
}
resp, err := a.PurgeComponent(ctx, &mcpv1.PurgeRequest{})
if err != nil {
t.Fatalf("PurgeComponent: %v", err)
}
if len(resp.Results) != 1 || !resp.Results[0].Purged {
t.Fatalf("unknown component should be purgeable")
}
}

View File

@@ -3,7 +3,6 @@ package agent
import (
"context"
"fmt"
"strings"
"time"
mcpv1 "git.wntrmute.dev/kyle/mcp/gen/mcp/v1"
@@ -75,7 +74,10 @@ func (a *Agent) liveCheckServices(ctx context.Context) ([]*mcpv1.ServiceInfo, error) {
}
var result []*mcpv1.ServiceInfo
knownServices := make(map[string]bool, len(services))
for _, svc := range services {
knownServices[svc.Name] = true
components, err := registry.ListComponents(a.DB, svc.Name)
if err != nil {
return nil, fmt.Errorf("list components for %q: %w", svc.Name, err)
@@ -87,7 +89,7 @@ func (a *Agent) liveCheckServices(ctx context.Context) ([]*mcpv1.ServiceInfo, error) {
}
for _, comp := range components {
containerName := svc.Name + "-" + comp.Name
containerName := ContainerNameFor(svc.Name, comp.Name)
ci := &mcpv1.ComponentInfo{
Name: comp.Name,
Image: comp.Image,
@@ -116,7 +118,7 @@ func (a *Agent) liveCheckServices(ctx context.Context) ([]*mcpv1.ServiceInfo, error) {
continue
}
svcName, compName := splitContainerName(c.Name)
svcName, compName := SplitContainerName(c.Name, knownServices)
result = append(result, &mcpv1.ServiceInfo{
Name: svcName,
@@ -210,13 +212,3 @@ func (a *Agent) GetServiceStatus(ctx context.Context, req *mcpv1.GetServiceStatusRequest) (*mcpv1.GetServiceStatusResponse, error) {
RecentEvents: protoEvents,
}, nil
}
// splitContainerName splits a container name like "metacrypt-api" into service
// and component parts. If there is no hyphen, the whole name is used as both
// the service and component name.
func splitContainerName(name string) (service, component string) {
if i := strings.Index(name, "-"); i >= 0 {
return name[:i], name[i+1:]
}
return name, name
}

View File

@@ -253,22 +253,47 @@ func TestGetServiceStatus_IgnoreSkipsDrift(t *testing.T) {
}
func TestSplitContainerName(t *testing.T) {
known := map[string]bool{
"metacrypt": true,
"mc-proxy": true,
"mcr": true,
}
tests := []struct {
name string
service string
comp string
}{
{"metacrypt-api", "metacrypt", "api"},
{"metacrypt-web-ui", "metacrypt", "web-ui"},
{"metacrypt-web", "metacrypt", "web"},
{"mc-proxy", "mc-proxy", "mc-proxy"},
{"mcr-api", "mcr", "api"},
{"standalone", "standalone", "standalone"},
{"unknown-thing", "unknown", "thing"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
svc, comp := splitContainerName(tt.name)
svc, comp := SplitContainerName(tt.name, known)
if svc != tt.service || comp != tt.comp {
t.Fatalf("splitContainerName(%q) = (%q, %q), want (%q, %q)",
t.Fatalf("SplitContainerName(%q) = (%q, %q), want (%q, %q)",
tt.name, svc, comp, tt.service, tt.comp)
}
})
}
}
func TestContainerNameFor(t *testing.T) {
tests := []struct {
service, component, want string
}{
{"metacrypt", "api", "metacrypt-api"},
{"mc-proxy", "mc-proxy", "mc-proxy"},
{"mcr", "web", "mcr-web"},
}
for _, tt := range tests {
got := ContainerNameFor(tt.service, tt.component)
if got != tt.want {
t.Fatalf("ContainerNameFor(%q, %q) = %q, want %q",
tt.service, tt.component, got, tt.want)
}
}
}

View File

@@ -22,6 +22,10 @@ func (f *fakeRuntime) Pull(_ context.Context, _ string) error { return nil }
func (f *fakeRuntime) Run(_ context.Context, _ runtime.ContainerSpec) error { return nil }
func (f *fakeRuntime) Stop(_ context.Context, _ string) error { return nil }
func (f *fakeRuntime) Remove(_ context.Context, _ string) error { return nil }
func (f *fakeRuntime) Build(_ context.Context, _, _, _ string) error { return nil }
func (f *fakeRuntime) Push(_ context.Context, _ string) error { return nil }
func (f *fakeRuntime) ImageExists(_ context.Context, _ string) (bool, error) { return true, nil }
func (f *fakeRuntime) List(_ context.Context) ([]runtime.ContainerInfo, error) {
return f.containers, f.listErr

View File

@@ -3,6 +3,7 @@ package config
import (
"fmt"
"os"
"strings"
toml "github.com/pelletier/go-toml/v2"
)
@@ -10,11 +11,17 @@ import (
// CLIConfig is the configuration for the mcp CLI binary.
type CLIConfig struct {
Services ServicesConfig `toml:"services"`
Build BuildConfig `toml:"build"`
MCIAS MCIASConfig `toml:"mcias"`
Auth AuthConfig `toml:"auth"`
Nodes []NodeConfig `toml:"nodes"`
}
// BuildConfig holds settings for building container images.
type BuildConfig struct {
Workspace string `toml:"workspace"`
}
// ServicesConfig defines where service definition files live.
type ServicesConfig struct {
Dir string `toml:"dir"`
@@ -66,6 +73,9 @@ func applyCLIEnvOverrides(cfg *CLIConfig) {
if v := os.Getenv("MCP_SERVICES_DIR"); v != "" {
cfg.Services.Dir = v
}
if v := os.Getenv("MCP_BUILD_WORKSPACE"); v != "" {
cfg.Build.Workspace = v
}
if v := os.Getenv("MCP_MCIAS_SERVER_URL"); v != "" {
cfg.MCIAS.ServerURL = v
}
@@ -93,5 +103,15 @@ func validateCLIConfig(cfg *CLIConfig) error {
if cfg.Auth.TokenPath == "" {
return fmt.Errorf("auth.token_path is required")
}
// Expand ~ in workspace path.
if strings.HasPrefix(cfg.Build.Workspace, "~/") {
home, err := os.UserHomeDir()
if err != nil {
return fmt.Errorf("expand workspace path: %w", err)
}
cfg.Build.Workspace = home + cfg.Build.Workspace[1:]
}
return nil
}

View File

@@ -47,6 +47,10 @@ func (f *fakeRuntime) Pull(_ context.Context, _ string) error { return nil }
func (f *fakeRuntime) Run(_ context.Context, _ runtime.ContainerSpec) error { return nil }
func (f *fakeRuntime) Stop(_ context.Context, _ string) error { return nil }
func (f *fakeRuntime) Remove(_ context.Context, _ string) error { return nil }
func (f *fakeRuntime) Build(_ context.Context, _, _, _ string) error { return nil }
func (f *fakeRuntime) Push(_ context.Context, _ string) error { return nil }
func (f *fakeRuntime) ImageExists(_ context.Context, _ string) (bool, error) { return true, nil }
func (f *fakeRuntime) Inspect(_ context.Context, _ string) (runtime.ContainerInfo, error) {
return runtime.ContainerInfo{}, nil

View File

@@ -6,6 +6,15 @@ import (
"time"
)
// Route represents a route entry for a component in the registry.
type Route struct {
Name string
Port int
Mode string
Hostname string
HostPort int // agent-assigned host port (0 = not yet allocated)
}
// Component represents a component in the registry.
type Component struct {
Name string
@@ -20,6 +29,7 @@ type Component struct {
Ports []string
Volumes []string
Cmd []string
Routes []Route
CreatedAt time.Time
UpdatedAt time.Time
}
@@ -51,6 +61,9 @@ func CreateComponent(db *sql.DB, c *Component) error {
if err := setCmd(tx, c.Service, c.Name, c.Cmd); err != nil {
return err
}
if err := setRoutes(tx, c.Service, c.Name, c.Routes); err != nil {
return err
}
return tx.Commit()
}
@@ -84,6 +97,10 @@ func GetComponent(db *sql.DB, service, name string) (*Component, error) {
if err != nil {
return nil, err
}
c.Routes, err = getRoutes(db, service, name)
if err != nil {
return nil, err
}
return c, nil
}
@@ -115,6 +132,7 @@ func ListComponents(db *sql.DB, service string) ([]Component, error) {
c.Ports, _ = getPorts(db, c.Service, c.Name)
c.Volumes, _ = getVolumes(db, c.Service, c.Name)
c.Cmd, _ = getCmd(db, c.Service, c.Name)
c.Routes, _ = getRoutes(db, c.Service, c.Name)
components = append(components, c)
}
@@ -168,6 +186,9 @@ func UpdateComponentSpec(db *sql.DB, c *Component) error {
if err := setCmd(tx, c.Service, c.Name, c.Cmd); err != nil {
return err
}
if err := setRoutes(tx, c.Service, c.Name, c.Routes); err != nil {
return err
}
return tx.Commit()
}
@@ -274,3 +295,85 @@ func getCmd(db *sql.DB, service, component string) ([]string, error) {
}
return cmd, rows.Err()
}
// helper: set route definitions (delete + re-insert)
func setRoutes(tx *sql.Tx, service, component string, routes []Route) error {
if _, err := tx.Exec("DELETE FROM component_routes WHERE service = ? AND component = ?", service, component); err != nil {
return fmt.Errorf("clear routes %q/%q: %w", service, component, err)
}
for _, r := range routes {
mode := r.Mode
if mode == "" {
mode = "l4"
}
name := r.Name
if name == "" {
name = "default"
}
if _, err := tx.Exec(
"INSERT INTO component_routes (service, component, name, port, mode, hostname, host_port) VALUES (?, ?, ?, ?, ?, ?, ?)",
service, component, name, r.Port, mode, r.Hostname, r.HostPort,
); err != nil {
return fmt.Errorf("insert route %q/%q %q: %w", service, component, name, err)
}
}
return nil
}
func getRoutes(db *sql.DB, service, component string) ([]Route, error) {
rows, err := db.Query(
"SELECT name, port, mode, hostname, host_port FROM component_routes WHERE service = ? AND component = ? ORDER BY name",
service, component,
)
if err != nil {
return nil, fmt.Errorf("get routes %q/%q: %w", service, component, err)
}
defer func() { _ = rows.Close() }()
var routes []Route
for rows.Next() {
var r Route
if err := rows.Scan(&r.Name, &r.Port, &r.Mode, &r.Hostname, &r.HostPort); err != nil {
return nil, err
}
routes = append(routes, r)
}
return routes, rows.Err()
}
// UpdateRouteHostPort updates the agent-assigned host port for a specific route.
func UpdateRouteHostPort(db *sql.DB, service, component, routeName string, hostPort int) error {
res, err := db.Exec(
"UPDATE component_routes SET host_port = ? WHERE service = ? AND component = ? AND name = ?",
hostPort, service, component, routeName,
)
if err != nil {
return fmt.Errorf("update route host_port %q/%q/%q: %w", service, component, routeName, err)
}
n, _ := res.RowsAffected()
if n == 0 {
return fmt.Errorf("update route host_port %q/%q/%q: %w", service, component, routeName, sql.ErrNoRows)
}
return nil
}
// GetRouteHostPorts returns a map of route name to assigned host port for a component.
func GetRouteHostPorts(db *sql.DB, service, component string) (map[string]int, error) {
rows, err := db.Query(
"SELECT name, host_port FROM component_routes WHERE service = ? AND component = ?",
service, component,
)
if err != nil {
return nil, fmt.Errorf("get route host ports %q/%q: %w", service, component, err)
}
defer func() { _ = rows.Close() }()
result := make(map[string]int)
for rows.Next() {
var name string
var port int
if err := rows.Scan(&name, &port); err != nil {
return nil, err
}
result[name] = port
}
return result, rows.Err()
}

View File

@@ -127,4 +127,19 @@ var migrations = []string{
CREATE INDEX IF NOT EXISTS idx_events_component_time
ON events(service, component, timestamp);
`,
// Migration 2: component routes
`
CREATE TABLE IF NOT EXISTS component_routes (
service TEXT NOT NULL,
component TEXT NOT NULL,
name TEXT NOT NULL,
port INTEGER NOT NULL,
mode TEXT NOT NULL DEFAULT 'l4',
hostname TEXT NOT NULL DEFAULT '',
host_port INTEGER NOT NULL DEFAULT 0,
PRIMARY KEY (service, component, name),
FOREIGN KEY (service, component) REFERENCES components(service, name) ON DELETE CASCADE
);
`,
}

View File

@@ -83,6 +83,15 @@ func CountEvents(db *sql.DB, service, component string, since time.Time) (int, error) {
return count, nil
}
// DeleteComponentEvents deletes all events for a specific component.
func DeleteComponentEvents(db *sql.DB, service, component string) error {
_, err := db.Exec("DELETE FROM events WHERE service = ? AND component = ?", service, component)
if err != nil {
return fmt.Errorf("delete events %q/%q: %w", service, component, err)
}
return nil
}
// PruneEvents deletes events older than the given time.
func PruneEvents(db *sql.DB, before time.Time) (int64, error) {
res, err := db.Exec(

View File

@@ -237,6 +237,160 @@ func TestCascadeDelete(t *testing.T) {
}
}
func TestComponentRoutes(t *testing.T) {
db := openTestDB(t)
if err := CreateService(db, "svc", true); err != nil {
t.Fatalf("create service: %v", err)
}
// Create component with routes
c := &Component{
Name: "api",
Service: "svc",
Image: "img:v1",
Restart: "unless-stopped",
DesiredState: "running",
ObservedState: "unknown",
Routes: []Route{
{Name: "rest", Port: 8443, Mode: "l7", Hostname: "api.example.com"},
{Name: "grpc", Port: 9443, Mode: "l4"},
},
}
if err := CreateComponent(db, c); err != nil {
t.Fatalf("create component: %v", err)
}
// Get and verify routes
got, err := GetComponent(db, "svc", "api")
if err != nil {
t.Fatalf("get: %v", err)
}
if len(got.Routes) != 2 {
t.Fatalf("routes: got %d, want 2", len(got.Routes))
}
// Routes are ordered by name: grpc, rest
if got.Routes[0].Name != "grpc" || got.Routes[0].Port != 9443 || got.Routes[0].Mode != "l4" {
t.Fatalf("route[0]: got %+v", got.Routes[0])
}
if got.Routes[1].Name != "rest" || got.Routes[1].Port != 8443 || got.Routes[1].Mode != "l7" || got.Routes[1].Hostname != "api.example.com" {
t.Fatalf("route[1]: got %+v", got.Routes[1])
}
// Update routes via UpdateComponentSpec
c.Routes = []Route{{Name: "http", Port: 8080, Mode: "l7"}}
if err := UpdateComponentSpec(db, c); err != nil {
t.Fatalf("update spec: %v", err)
}
got, _ = GetComponent(db, "svc", "api")
if len(got.Routes) != 1 || got.Routes[0].Name != "http" {
t.Fatalf("updated routes: got %+v", got.Routes)
}
// List components includes routes
comps, err := ListComponents(db, "svc")
if err != nil {
t.Fatalf("list: %v", err)
}
if len(comps) != 1 || len(comps[0].Routes) != 1 {
t.Fatalf("list routes: got %d components, %d routes", len(comps), len(comps[0].Routes))
}
}
func TestRouteHostPort(t *testing.T) {
db := openTestDB(t)
if err := CreateService(db, "svc", true); err != nil {
t.Fatalf("create service: %v", err)
}
c := &Component{
Name: "api",
Service: "svc",
Image: "img:v1",
Restart: "unless-stopped",
DesiredState: "running",
ObservedState: "unknown",
Routes: []Route{
{Name: "rest", Port: 8443, Mode: "l7"},
{Name: "grpc", Port: 9443, Mode: "l4"},
},
}
if err := CreateComponent(db, c); err != nil {
t.Fatalf("create component: %v", err)
}
// Initially host_port is 0
ports, err := GetRouteHostPorts(db, "svc", "api")
if err != nil {
t.Fatalf("get host ports: %v", err)
}
if ports["rest"] != 0 || ports["grpc"] != 0 {
t.Fatalf("initial host ports should be 0: %+v", ports)
}
// Update host ports
if err := UpdateRouteHostPort(db, "svc", "api", "rest", 12345); err != nil {
t.Fatalf("update rest: %v", err)
}
if err := UpdateRouteHostPort(db, "svc", "api", "grpc", 12346); err != nil {
t.Fatalf("update grpc: %v", err)
}
ports, _ = GetRouteHostPorts(db, "svc", "api")
if ports["rest"] != 12345 {
t.Fatalf("rest host_port: got %d, want 12345", ports["rest"])
}
if ports["grpc"] != 12346 {
t.Fatalf("grpc host_port: got %d, want 12346", ports["grpc"])
}
// Verify host_port is visible via GetComponent
got, _ := GetComponent(db, "svc", "api")
for _, r := range got.Routes {
if r.Name == "rest" && r.HostPort != 12345 {
t.Fatalf("GetComponent rest host_port: got %d", r.HostPort)
}
if r.Name == "grpc" && r.HostPort != 12346 {
t.Fatalf("GetComponent grpc host_port: got %d", r.HostPort)
}
}
// Update nonexistent route should fail
err = UpdateRouteHostPort(db, "svc", "api", "nonexistent", 99999)
if err == nil {
t.Fatal("expected error updating nonexistent route")
}
}
func TestRouteCascadeDelete(t *testing.T) {
db := openTestDB(t)
if err := CreateService(db, "svc", true); err != nil {
t.Fatalf("create service: %v", err)
}
c := &Component{
Name: "api", Service: "svc", Image: "img:v1",
Restart: "unless-stopped", DesiredState: "running", ObservedState: "unknown",
Routes: []Route{{Name: "rest", Port: 8443, Mode: "l4"}},
}
if err := CreateComponent(db, c); err != nil {
t.Fatalf("create component: %v", err)
}
// Delete service cascades to routes
if err := DeleteService(db, "svc"); err != nil {
t.Fatalf("delete service: %v", err)
}
// Routes table should be empty
ports, err := GetRouteHostPorts(db, "svc", "api")
if err != nil {
t.Fatalf("get routes after cascade: %v", err)
}
if len(ports) != 0 {
t.Fatalf("routes should be empty after cascade, got %d", len(ports))
}
}
func TestEvents(t *testing.T) {
db := openTestDB(t)

View File

@@ -3,6 +3,7 @@ package runtime
import (
"context"
"encoding/json"
"errors"
"fmt"
"os/exec"
"strings"
@@ -49,6 +50,9 @@ func (p *Podman) BuildRunArgs(spec ContainerSpec) []string {
for _, vol := range spec.Volumes {
args = append(args, "-v", vol)
}
for _, env := range spec.Env {
args = append(args, "-e", env)
}
args = append(args, spec.Image)
args = append(args, spec.Cmd...)
@@ -174,12 +178,46 @@ func (p *Podman) Inspect(ctx context.Context, name string) (ContainerInfo, error
return info, nil
}
// Build builds a container image from a Dockerfile.
func (p *Podman) Build(ctx context.Context, image, contextDir, dockerfile string) error {
args := []string{"build", "-t", image, "-f", dockerfile, contextDir}
cmd := exec.CommandContext(ctx, p.command(), args...) //nolint:gosec // args built programmatically
cmd.Dir = contextDir
if out, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("podman build %q: %w: %s", image, err, out)
}
return nil
}
// Push pushes a container image to a remote registry.
func (p *Podman) Push(ctx context.Context, image string) error {
cmd := exec.CommandContext(ctx, p.command(), "push", image) //nolint:gosec // args built programmatically
if out, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("podman push %q: %w: %s", image, err, out)
}
return nil
}
// ImageExists checks whether an image tag exists in a remote registry.
func (p *Podman) ImageExists(ctx context.Context, image string) (bool, error) {
cmd := exec.CommandContext(ctx, p.command(), "manifest", "inspect", "docker://"+image) //nolint:gosec // args built programmatically
if err := cmd.Run(); err != nil {
// Exit code 1 means the manifest was not found.
var exitErr *exec.ExitError
if ok := errors.As(err, &exitErr); ok && exitErr.ExitCode() == 1 {
return false, nil
}
return false, fmt.Errorf("podman manifest inspect %q: %w", image, err)
}
return true, nil
}
// podmanPSEntry is a single entry from podman ps --format json.
type podmanPSEntry struct {
Names []string `json:"Names"`
Image string `json:"Image"`
State string `json:"State"`
-Command string `json:"Command"`
+Command []string `json:"Command"`
}
// List returns information about all containers.


@@ -16,6 +16,7 @@ type ContainerSpec struct {
Ports []string // "host:container" port mappings
Volumes []string // "host:container" volume mounts
Cmd []string // command and arguments
Env []string // environment variables (KEY=VALUE)
}
// ContainerInfo describes the observed state of a running or stopped container.
@@ -33,7 +34,9 @@ type ContainerInfo struct {
Started time.Time // when the container started (zero if not running)
}
-// Runtime is the container runtime abstraction.
+// Runtime is the container runtime abstraction. The first six methods are
+// used by the agent for container lifecycle. The last three are used by the
+// CLI for building and pushing images.
type Runtime interface {
Pull(ctx context.Context, image string) error
Run(ctx context.Context, spec ContainerSpec) error
@@ -41,6 +44,10 @@ type Runtime interface {
Remove(ctx context.Context, name string) error
Inspect(ctx context.Context, name string) (ContainerInfo, error)
List(ctx context.Context) ([]ContainerInfo, error)
Build(ctx context.Context, image, contextDir, dockerfile string) error
Push(ctx context.Context, image string) error
ImageExists(ctx context.Context, image string) (bool, error)
}
// ExtractVersion parses the tag from an image reference.


@@ -76,6 +76,38 @@ func TestBuildRunArgs(t *testing.T) {
})
})
t.Run("env vars", func(t *testing.T) {
spec := ContainerSpec{
Name: "test-app",
Image: "img:latest",
Env: []string{"PORT=12345", "PORT_GRPC=12346"},
}
requireEqualArgs(t, p.BuildRunArgs(spec), []string{
"run", "-d", "--name", "test-app",
"-e", "PORT=12345", "-e", "PORT_GRPC=12346",
"img:latest",
})
})
t.Run("full spec with env", func(t *testing.T) {
spec := ContainerSpec{
Name: "svc-api",
Image: "img:latest",
Network: "net",
Ports: []string{"127.0.0.1:12345:8443"},
Volumes: []string{"/srv:/srv"},
Env: []string{"PORT=12345"},
}
requireEqualArgs(t, p.BuildRunArgs(spec), []string{
"run", "-d", "--name", "svc-api",
"--network", "net",
"-p", "127.0.0.1:12345:8443",
"-v", "/srv:/srv",
"-e", "PORT=12345",
"img:latest",
})
})
t.Run("cmd after image", func(t *testing.T) {
spec := ContainerSpec{
Name: "test-app",


@@ -18,9 +18,26 @@ type ServiceDef struct {
Name string `toml:"name"`
Node string `toml:"node"`
Active *bool `toml:"active,omitempty"`
Path string `toml:"path,omitempty"`
Build *BuildDef `toml:"build,omitempty"`
Components []ComponentDef `toml:"components"`
}
// BuildDef describes how to build container images for a service.
type BuildDef struct {
Images map[string]string `toml:"images"`
UsesMCDSL bool `toml:"uses_mcdsl,omitempty"`
}
// RouteDef describes a route for a component, used for automatic port
// allocation and mc-proxy integration.
type RouteDef struct {
Name string `toml:"name,omitempty"`
Port int `toml:"port"`
Mode string `toml:"mode,omitempty"`
Hostname string `toml:"hostname,omitempty"`
}
// ComponentDef describes a single container component within a service.
type ComponentDef struct {
Name string `toml:"name"`
@@ -31,6 +48,8 @@ type ComponentDef struct {
Ports []string `toml:"ports,omitempty"`
Volumes []string `toml:"volumes,omitempty"`
Cmd []string `toml:"cmd,omitempty"`
Routes []RouteDef `toml:"routes,omitempty"`
Env []string `toml:"env,omitempty"`
}
// Load reads and parses a TOML service definition file. If the active field
@@ -129,11 +148,46 @@ func validate(def *ServiceDef) error {
return fmt.Errorf("duplicate component name %q in service %q", c.Name, def.Name)
}
seen[c.Name] = true
if err := validateRoutes(c.Name, def.Name, c.Routes); err != nil {
return err
}
}
return nil
}
// validateRoutes checks that routes within a component are valid.
func validateRoutes(compName, svcName string, routes []RouteDef) error {
if len(routes) == 0 {
return nil
}
routeNames := make(map[string]bool)
for i, r := range routes {
if r.Port <= 0 {
return fmt.Errorf("route port must be > 0 in component %q of service %q", compName, svcName)
}
if r.Mode != "" && r.Mode != "l4" && r.Mode != "l7" {
return fmt.Errorf("route mode must be \"l4\" or \"l7\" in component %q of service %q", compName, svcName)
}
if len(routes) > 1 && r.Name == "" {
return fmt.Errorf("route name is required when component has multiple routes in component %q of service %q", compName, svcName)
}
// Use index-based key for unnamed single routes.
key := r.Name
if key == "" {
key = fmt.Sprintf("_route_%d", i)
}
if routeNames[key] {
return fmt.Errorf("duplicate route name %q in component %q of service %q", r.Name, compName, svcName)
}
routeNames[key] = true
}
return nil
}
// ToProto converts a ServiceDef to a proto ServiceSpec.
func ToProto(def *ServiceDef) *mcpv1.ServiceSpec {
spec := &mcpv1.ServiceSpec{
@@ -142,7 +196,7 @@ func ToProto(def *ServiceDef) *mcpv1.ServiceSpec {
}
for _, c := range def.Components {
-spec.Components = append(spec.Components, &mcpv1.ComponentSpec{
+cs := &mcpv1.ComponentSpec{
Name: c.Name,
Image: c.Image,
Network: c.Network,
@@ -151,8 +205,18 @@ func ToProto(def *ServiceDef) *mcpv1.ServiceSpec {
Ports: c.Ports,
Volumes: c.Volumes,
Cmd: c.Cmd,
Env: c.Env,
}
for _, r := range c.Routes {
cs.Routes = append(cs.Routes, &mcpv1.RouteSpec{
Name: r.Name,
Port: int32(r.Port),
Mode: r.Mode,
Hostname: r.Hostname,
})
}
spec.Components = append(spec.Components, cs)
}
return spec
}
@@ -169,7 +233,7 @@ func FromProto(spec *mcpv1.ServiceSpec, node string) *ServiceDef {
}
for _, c := range spec.GetComponents() {
-def.Components = append(def.Components, ComponentDef{
+cd := ComponentDef{
Name: c.GetName(),
Image: c.GetImage(),
Network: c.GetNetwork(),
@@ -178,8 +242,18 @@ func FromProto(spec *mcpv1.ServiceSpec, node string) *ServiceDef {
Ports: c.GetPorts(),
Volumes: c.GetVolumes(),
Cmd: c.GetCmd(),
Env: c.GetEnv(),
}
for _, r := range c.GetRoutes() {
cd.Routes = append(cd.Routes, RouteDef{
Name: r.GetName(),
Port: int(r.GetPort()),
Mode: r.GetMode(),
Hostname: r.GetHostname(),
})
}
def.Components = append(def.Components, cd)
}
return def
}


@@ -261,6 +261,203 @@ image = "img:latest"
}
}
func TestLoadWriteWithRoutes(t *testing.T) {
def := &ServiceDef{
Name: "myservice",
Node: "rift",
Active: boolPtr(true),
Components: []ComponentDef{
{
Name: "api",
Image: "img:latest",
Network: "docker_default",
Routes: []RouteDef{
{Name: "rest", Port: 8443, Mode: "l7", Hostname: "api.example.com"},
{Name: "grpc", Port: 9443, Mode: "l4"},
},
Env: []string{"FOO=bar"},
},
},
}
dir := t.TempDir()
path := filepath.Join(dir, "myservice.toml")
if err := Write(path, def); err != nil {
t.Fatalf("write: %v", err)
}
got, err := Load(path)
if err != nil {
t.Fatalf("load: %v", err)
}
if len(got.Components[0].Routes) != 2 {
t.Fatalf("routes: got %d, want 2", len(got.Components[0].Routes))
}
r := got.Components[0].Routes[0]
if r.Name != "rest" || r.Port != 8443 || r.Mode != "l7" || r.Hostname != "api.example.com" {
t.Fatalf("route[0] mismatch: %+v", r)
}
r2 := got.Components[0].Routes[1]
if r2.Name != "grpc" || r2.Port != 9443 || r2.Mode != "l4" {
t.Fatalf("route[1] mismatch: %+v", r2)
}
if len(got.Components[0].Env) != 1 || got.Components[0].Env[0] != "FOO=bar" {
t.Fatalf("env mismatch: %v", got.Components[0].Env)
}
}
func TestRouteValidation(t *testing.T) {
tests := []struct {
name string
def *ServiceDef
wantErr string
}{
{
name: "route missing port",
def: &ServiceDef{
Name: "svc", Node: "rift",
Components: []ComponentDef{{
Name: "api",
Image: "img:v1",
Routes: []RouteDef{{Name: "rest", Port: 0}},
}},
},
wantErr: "route port must be > 0",
},
{
name: "route invalid mode",
def: &ServiceDef{
Name: "svc", Node: "rift",
Components: []ComponentDef{{
Name: "api",
Image: "img:v1",
Routes: []RouteDef{{Port: 8443, Mode: "tcp"}},
}},
},
wantErr: "route mode must be",
},
{
name: "multi-route missing name",
def: &ServiceDef{
Name: "svc", Node: "rift",
Components: []ComponentDef{{
Name: "api",
Image: "img:v1",
Routes: []RouteDef{
{Name: "rest", Port: 8443},
{Port: 9443},
},
}},
},
wantErr: "route name is required when component has multiple routes",
},
{
name: "duplicate route name",
def: &ServiceDef{
Name: "svc", Node: "rift",
Components: []ComponentDef{{
Name: "api",
Image: "img:v1",
Routes: []RouteDef{
{Name: "rest", Port: 8443},
{Name: "rest", Port: 9443},
},
}},
},
wantErr: "duplicate route name",
},
{
name: "single unnamed route is valid",
def: &ServiceDef{
Name: "svc", Node: "rift",
Components: []ComponentDef{{
Name: "api",
Image: "img:v1",
Routes: []RouteDef{{Port: 8443}},
}},
},
wantErr: "",
},
{
name: "valid l4 mode",
def: &ServiceDef{
Name: "svc", Node: "rift",
Components: []ComponentDef{{
Name: "api",
Image: "img:v1",
Routes: []RouteDef{{Port: 8443, Mode: "l4"}},
}},
},
wantErr: "",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validate(tt.def)
if tt.wantErr == "" {
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
return
}
if err == nil {
t.Fatal("expected validation error")
}
if got := err.Error(); !strings.Contains(got, tt.wantErr) {
t.Fatalf("error %q does not contain %q", got, tt.wantErr)
}
})
}
}
func TestProtoConversionWithRoutes(t *testing.T) {
def := &ServiceDef{
Name: "svc",
Node: "rift",
Active: boolPtr(true),
Components: []ComponentDef{
{
Name: "api",
Image: "img:v1",
Routes: []RouteDef{
{Name: "rest", Port: 8443, Mode: "l7", Hostname: "api.example.com"},
{Name: "grpc", Port: 9443, Mode: "l4"},
},
Env: []string{"PORT_REST=12345", "PORT_GRPC=12346"},
},
},
}
spec := ToProto(def)
if len(spec.Components[0].Routes) != 2 {
t.Fatalf("proto routes: got %d, want 2", len(spec.Components[0].Routes))
}
r := spec.Components[0].Routes[0]
if r.GetName() != "rest" || r.GetPort() != 8443 || r.GetMode() != "l7" || r.GetHostname() != "api.example.com" {
t.Fatalf("proto route[0] mismatch: %+v", r)
}
if len(spec.Components[0].Env) != 2 {
t.Fatalf("proto env: got %d, want 2", len(spec.Components[0].Env))
}
got := FromProto(spec, "rift")
if len(got.Components[0].Routes) != 2 {
t.Fatalf("round-trip routes: got %d, want 2", len(got.Components[0].Routes))
}
gotR := got.Components[0].Routes[0]
if gotR.Name != "rest" || gotR.Port != 8443 || gotR.Mode != "l7" || gotR.Hostname != "api.example.com" {
t.Fatalf("round-trip route[0] mismatch: %+v", gotR)
}
if len(got.Components[0].Env) != 2 {
t.Fatalf("round-trip env: got %d, want 2", len(got.Components[0].Env))
}
}
func TestProtoConversion(t *testing.T) {
def := sampleDef()


@@ -23,6 +23,9 @@ service McpAgentService {
// Adopt
rpc AdoptContainers(AdoptContainersRequest) returns (AdoptContainersResponse);
// Purge
rpc PurgeComponent(PurgeRequest) returns (PurgeResponse);
// File transfer
rpc PushFile(PushFileRequest) returns (PushFileResponse);
rpc PullFile(PullFileRequest) returns (PullFileResponse);
@@ -33,6 +36,13 @@ service McpAgentService {
// --- Service lifecycle ---
message RouteSpec {
string name = 1; // route name (used for $PORT_<NAME>)
int32 port = 2; // external port on mc-proxy
string mode = 3; // "l4" or "l7"
string hostname = 4; // optional public hostname override
}
message ComponentSpec {
string name = 1;
string image = 2;
@@ -42,6 +52,8 @@ message ComponentSpec {
repeated string ports = 6;
repeated string volumes = 7;
repeated string cmd = 8;
repeated RouteSpec routes = 9;
repeated string env = 10;
}
message ServiceSpec {
@@ -234,3 +246,30 @@ message NodeStatusResponse {
double cpu_usage_percent = 10;
google.protobuf.Timestamp uptime_since = 11;
}
// --- Purge ---
message PurgeRequest {
// Service name (empty = all services).
string service = 1;
// Component name (empty = all eligible in service).
string component = 2;
// Preview only, do not modify registry.
bool dry_run = 3;
// Currently-defined service/component pairs (e.g., "mcns/mcns").
// The agent uses this to determine what is "not in any service definition".
repeated string defined_components = 4;
}
message PurgeResponse {
repeated PurgeResult results = 1;
}
message PurgeResult {
string service = 1;
string component = 2;
// true if removed (or would be, in dry-run).
bool purged = 3;
// Why eligible, or why refused.
string reason = 4;
}

vendor/github.com/dustin/go-humanize/.travis.yml generated vendored Normal file

@@ -0,0 +1,21 @@
sudo: false
language: go
go_import_path: github.com/dustin/go-humanize
go:
- 1.13.x
- 1.14.x
- 1.15.x
- 1.16.x
- stable
- master
matrix:
allow_failures:
- go: master
fast_finish: true
install:
- # Do nothing. This is needed to prevent default install action "go get -t -v ./..." from happening here (we want it to happen inside script step).
script:
- diff -u <(echo -n) <(gofmt -d -s .)
- go vet .
- go install -v -race ./...
- go test -v -race ./...

vendor/github.com/dustin/go-humanize/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
Copyright (c) 2005-2008 Dustin Sallings <dustin@spy.net>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
<http://www.opensource.org/licenses/mit-license.php>

vendor/github.com/dustin/go-humanize/README.markdown generated vendored Normal file

@@ -0,0 +1,124 @@
# Humane Units [![Build Status](https://travis-ci.org/dustin/go-humanize.svg?branch=master)](https://travis-ci.org/dustin/go-humanize) [![GoDoc](https://godoc.org/github.com/dustin/go-humanize?status.svg)](https://godoc.org/github.com/dustin/go-humanize)
Just a few functions for helping humanize times and sizes.
`go get` it as `github.com/dustin/go-humanize`, import it as
`"github.com/dustin/go-humanize"`, use it as `humanize`.
See [godoc](https://pkg.go.dev/github.com/dustin/go-humanize) for
complete documentation.
## Sizes
This lets you take numbers like `82854982` and convert them to useful
strings like, `83 MB` or `79 MiB` (whichever you prefer).
Example:
```go
fmt.Printf("That file is %s.", humanize.Bytes(82854982)) // That file is 83 MB.
```
## Times
This lets you take a `time.Time` and spit it out in relative terms.
For example, `12 seconds ago` or `3 days from now`.
Example:
```go
fmt.Printf("This was touched %s.", humanize.Time(someTimeInstance)) // This was touched 7 hours ago.
```
Thanks to Kyle Lemons for the time implementation from an IRC
conversation one day. It's pretty neat.
## Ordinals
From a [mailing list discussion][odisc] where a user wanted to be able
to label ordinals.
0 -> 0th
1 -> 1st
2 -> 2nd
3 -> 3rd
4 -> 4th
[...]
Example:
```go
fmt.Printf("You're my %s best friend.", humanize.Ordinal(193)) // You are my 193rd best friend.
```
## Commas
Want to shove commas into numbers? Be my guest.
0 -> 0
100 -> 100
1000 -> 1,000
1000000000 -> 1,000,000,000
-100000 -> -100,000
Example:
```go
fmt.Printf("You owe $%s.\n", humanize.Comma(6582491)) // You owe $6,582,491.
```
## Ftoa
Nicer float64 formatter that removes trailing zeros.
```go
fmt.Printf("%f", 2.24) // 2.240000
fmt.Printf("%s", humanize.Ftoa(2.24)) // 2.24
fmt.Printf("%f", 2.0) // 2.000000
fmt.Printf("%s", humanize.Ftoa(2.0)) // 2
```
## SI notation
Format numbers with [SI notation][sinotation].
Example:
```go
humanize.SI(0.00000000223, "M") // 2.23 nM
```
## English-specific functions
The following functions are in the `humanize/english` subpackage.
### Plurals
Simple English pluralization
```go
english.PluralWord(1, "object", "") // object
english.PluralWord(42, "object", "") // objects
english.PluralWord(2, "bus", "") // buses
english.PluralWord(99, "locus", "loci") // loci
english.Plural(1, "object", "") // 1 object
english.Plural(42, "object", "") // 42 objects
english.Plural(2, "bus", "") // 2 buses
english.Plural(99, "locus", "loci") // 99 loci
```
### Word series
Format comma-separated words lists with conjunctions:
```go
english.WordSeries([]string{"foo"}, "and") // foo
english.WordSeries([]string{"foo", "bar"}, "and") // foo and bar
english.WordSeries([]string{"foo", "bar", "baz"}, "and") // foo, bar and baz
english.OxfordWordSeries([]string{"foo", "bar", "baz"}, "and") // foo, bar, and baz
```
[odisc]: https://groups.google.com/d/topic/golang-nuts/l8NhI74jl-4/discussion
[sinotation]: http://en.wikipedia.org/wiki/Metric_prefix

vendor/github.com/dustin/go-humanize/big.go generated vendored Normal file

@@ -0,0 +1,31 @@
package humanize
import (
"math/big"
)
// order of magnitude (to a max order)
func oomm(n, b *big.Int, maxmag int) (float64, int) {
mag := 0
m := &big.Int{}
for n.Cmp(b) >= 0 {
n.DivMod(n, b, m)
mag++
if mag == maxmag && maxmag >= 0 {
break
}
}
return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}
// total order of magnitude
// (same as above, but with no upper limit)
func oom(n, b *big.Int) (float64, int) {
mag := 0
m := &big.Int{}
for n.Cmp(b) >= 0 {
n.DivMod(n, b, m)
mag++
}
return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}

vendor/github.com/dustin/go-humanize/bigbytes.go generated vendored Normal file

@@ -0,0 +1,189 @@
package humanize
import (
"fmt"
"math/big"
"strings"
"unicode"
)
var (
bigIECExp = big.NewInt(1024)
// BigByte is one byte in bit.Ints
BigByte = big.NewInt(1)
// BigKiByte is 1,024 bytes in bit.Ints
BigKiByte = (&big.Int{}).Mul(BigByte, bigIECExp)
// BigMiByte is 1,024 k bytes in bit.Ints
BigMiByte = (&big.Int{}).Mul(BigKiByte, bigIECExp)
// BigGiByte is 1,024 m bytes in bit.Ints
BigGiByte = (&big.Int{}).Mul(BigMiByte, bigIECExp)
// BigTiByte is 1,024 g bytes in bit.Ints
BigTiByte = (&big.Int{}).Mul(BigGiByte, bigIECExp)
// BigPiByte is 1,024 t bytes in bit.Ints
BigPiByte = (&big.Int{}).Mul(BigTiByte, bigIECExp)
// BigEiByte is 1,024 p bytes in bit.Ints
BigEiByte = (&big.Int{}).Mul(BigPiByte, bigIECExp)
// BigZiByte is 1,024 e bytes in bit.Ints
BigZiByte = (&big.Int{}).Mul(BigEiByte, bigIECExp)
// BigYiByte is 1,024 z bytes in bit.Ints
BigYiByte = (&big.Int{}).Mul(BigZiByte, bigIECExp)
// BigRiByte is 1,024 y bytes in bit.Ints
BigRiByte = (&big.Int{}).Mul(BigYiByte, bigIECExp)
// BigQiByte is 1,024 r bytes in bit.Ints
BigQiByte = (&big.Int{}).Mul(BigRiByte, bigIECExp)
)
var (
bigSIExp = big.NewInt(1000)
// BigSIByte is one SI byte in big.Ints
BigSIByte = big.NewInt(1)
// BigKByte is 1,000 SI bytes in big.Ints
BigKByte = (&big.Int{}).Mul(BigSIByte, bigSIExp)
// BigMByte is 1,000 SI k bytes in big.Ints
BigMByte = (&big.Int{}).Mul(BigKByte, bigSIExp)
// BigGByte is 1,000 SI m bytes in big.Ints
BigGByte = (&big.Int{}).Mul(BigMByte, bigSIExp)
// BigTByte is 1,000 SI g bytes in big.Ints
BigTByte = (&big.Int{}).Mul(BigGByte, bigSIExp)
// BigPByte is 1,000 SI t bytes in big.Ints
BigPByte = (&big.Int{}).Mul(BigTByte, bigSIExp)
// BigEByte is 1,000 SI p bytes in big.Ints
BigEByte = (&big.Int{}).Mul(BigPByte, bigSIExp)
// BigZByte is 1,000 SI e bytes in big.Ints
BigZByte = (&big.Int{}).Mul(BigEByte, bigSIExp)
// BigYByte is 1,000 SI z bytes in big.Ints
BigYByte = (&big.Int{}).Mul(BigZByte, bigSIExp)
// BigRByte is 1,000 SI y bytes in big.Ints
BigRByte = (&big.Int{}).Mul(BigYByte, bigSIExp)
// BigQByte is 1,000 SI r bytes in big.Ints
BigQByte = (&big.Int{}).Mul(BigRByte, bigSIExp)
)
var bigBytesSizeTable = map[string]*big.Int{
"b": BigByte,
"kib": BigKiByte,
"kb": BigKByte,
"mib": BigMiByte,
"mb": BigMByte,
"gib": BigGiByte,
"gb": BigGByte,
"tib": BigTiByte,
"tb": BigTByte,
"pib": BigPiByte,
"pb": BigPByte,
"eib": BigEiByte,
"eb": BigEByte,
"zib": BigZiByte,
"zb": BigZByte,
"yib": BigYiByte,
"yb": BigYByte,
"rib": BigRiByte,
"rb": BigRByte,
"qib": BigQiByte,
"qb": BigQByte,
// Without suffix
"": BigByte,
"ki": BigKiByte,
"k": BigKByte,
"mi": BigMiByte,
"m": BigMByte,
"gi": BigGiByte,
"g": BigGByte,
"ti": BigTiByte,
"t": BigTByte,
"pi": BigPiByte,
"p": BigPByte,
"ei": BigEiByte,
"e": BigEByte,
"z": BigZByte,
"zi": BigZiByte,
"y": BigYByte,
"yi": BigYiByte,
"r": BigRByte,
"ri": BigRiByte,
"q": BigQByte,
"qi": BigQiByte,
}
var ten = big.NewInt(10)
func humanateBigBytes(s, base *big.Int, sizes []string) string {
if s.Cmp(ten) < 0 {
return fmt.Sprintf("%d B", s)
}
c := (&big.Int{}).Set(s)
val, mag := oomm(c, base, len(sizes)-1)
suffix := sizes[mag]
f := "%.0f %s"
if val < 10 {
f = "%.1f %s"
}
return fmt.Sprintf(f, val, suffix)
}
// BigBytes produces a human readable representation of an SI size.
//
// See also: ParseBigBytes.
//
// BigBytes(82854982) -> 83 MB
func BigBytes(s *big.Int) string {
sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB", "RB", "QB"}
return humanateBigBytes(s, bigSIExp, sizes)
}
// BigIBytes produces a human readable representation of an IEC size.
//
// See also: ParseBigBytes.
//
// BigIBytes(82854982) -> 79 MiB
func BigIBytes(s *big.Int) string {
sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB", "RiB", "QiB"}
return humanateBigBytes(s, bigIECExp, sizes)
}
// ParseBigBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See also: BigBytes, BigIBytes.
//
// ParseBigBytes("42 MB") -> 42000000, nil
// ParseBigBytes("42 mib") -> 44040192, nil
func ParseBigBytes(s string) (*big.Int, error) {
lastDigit := 0
hasComma := false
for _, r := range s {
if !(unicode.IsDigit(r) || r == '.' || r == ',') {
break
}
if r == ',' {
hasComma = true
}
lastDigit++
}
num := s[:lastDigit]
if hasComma {
num = strings.Replace(num, ",", "", -1)
}
val := &big.Rat{}
_, err := fmt.Sscanf(num, "%f", val)
if err != nil {
return nil, err
}
extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
if m, ok := bigBytesSizeTable[extra]; ok {
mv := (&big.Rat{}).SetInt(m)
val.Mul(val, mv)
rv := &big.Int{}
rv.Div(val.Num(), val.Denom())
return rv, nil
}
return nil, fmt.Errorf("unhandled size name: %v", extra)
}

vendor/github.com/dustin/go-humanize/bytes.go generated vendored Normal file

@@ -0,0 +1,143 @@
package humanize
import (
"fmt"
"math"
"strconv"
"strings"
"unicode"
)
// IEC Sizes.
// kibis of bits
const (
Byte = 1 << (iota * 10)
KiByte
MiByte
GiByte
TiByte
PiByte
EiByte
)
// SI Sizes.
const (
IByte = 1
KByte = IByte * 1000
MByte = KByte * 1000
GByte = MByte * 1000
TByte = GByte * 1000
PByte = TByte * 1000
EByte = PByte * 1000
)
var bytesSizeTable = map[string]uint64{
"b": Byte,
"kib": KiByte,
"kb": KByte,
"mib": MiByte,
"mb": MByte,
"gib": GiByte,
"gb": GByte,
"tib": TiByte,
"tb": TByte,
"pib": PiByte,
"pb": PByte,
"eib": EiByte,
"eb": EByte,
// Without suffix
"": Byte,
"ki": KiByte,
"k": KByte,
"mi": MiByte,
"m": MByte,
"gi": GiByte,
"g": GByte,
"ti": TiByte,
"t": TByte,
"pi": PiByte,
"p": PByte,
"ei": EiByte,
"e": EByte,
}
func logn(n, b float64) float64 {
return math.Log(n) / math.Log(b)
}
func humanateBytes(s uint64, base float64, sizes []string) string {
if s < 10 {
return fmt.Sprintf("%d B", s)
}
e := math.Floor(logn(float64(s), base))
suffix := sizes[int(e)]
val := math.Floor(float64(s)/math.Pow(base, e)*10+0.5) / 10
f := "%.0f %s"
if val < 10 {
f = "%.1f %s"
}
return fmt.Sprintf(f, val, suffix)
}
// Bytes produces a human readable representation of an SI size.
//
// See also: ParseBytes.
//
// Bytes(82854982) -> 83 MB
func Bytes(s uint64) string {
sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB"}
return humanateBytes(s, 1000, sizes)
}
// IBytes produces a human readable representation of an IEC size.
//
// See also: ParseBytes.
//
// IBytes(82854982) -> 79 MiB
func IBytes(s uint64) string {
sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"}
return humanateBytes(s, 1024, sizes)
}
// ParseBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See Also: Bytes, IBytes.
//
// ParseBytes("42 MB") -> 42000000, nil
// ParseBytes("42 mib") -> 44040192, nil
func ParseBytes(s string) (uint64, error) {
lastDigit := 0
hasComma := false
for _, r := range s {
if !(unicode.IsDigit(r) || r == '.' || r == ',') {
break
}
if r == ',' {
hasComma = true
}
lastDigit++
}
num := s[:lastDigit]
if hasComma {
num = strings.Replace(num, ",", "", -1)
}
f, err := strconv.ParseFloat(num, 64)
if err != nil {
return 0, err
}
extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
if m, ok := bytesSizeTable[extra]; ok {
f *= float64(m)
if f >= math.MaxUint64 {
return 0, fmt.Errorf("too large: %v", s)
}
return uint64(f), nil
}
return 0, fmt.Errorf("unhandled size name: %v", extra)
}

vendor/github.com/dustin/go-humanize/comma.go generated vendored Normal file

@@ -0,0 +1,116 @@
package humanize
import (
"bytes"
"math"
"math/big"
"strconv"
"strings"
)
// Comma produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Comma(834142) -> 834,142
func Comma(v int64) string {
sign := ""
// Min int64 can't be negated to a usable value, so it has to be special cased.
if v == math.MinInt64 {
return "-9,223,372,036,854,775,808"
}
if v < 0 {
sign = "-"
v = 0 - v
}
parts := []string{"", "", "", "", "", "", ""}
j := len(parts) - 1
for v > 999 {
parts[j] = strconv.FormatInt(v%1000, 10)
switch len(parts[j]) {
case 2:
parts[j] = "0" + parts[j]
case 1:
parts[j] = "00" + parts[j]
}
v = v / 1000
j--
}
parts[j] = strconv.Itoa(int(v))
return sign + strings.Join(parts[j:], ",")
}
// Commaf produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Commaf(834142.32) -> 834,142.32
func Commaf(v float64) string {
buf := &bytes.Buffer{}
if v < 0 {
buf.Write([]byte{'-'})
v = 0 - v
}
comma := []byte{','}
parts := strings.Split(strconv.FormatFloat(v, 'f', -1, 64), ".")
pos := 0
if len(parts[0])%3 != 0 {
pos += len(parts[0]) % 3
buf.WriteString(parts[0][:pos])
buf.Write(comma)
}
for ; pos < len(parts[0]); pos += 3 {
buf.WriteString(parts[0][pos : pos+3])
buf.Write(comma)
}
buf.Truncate(buf.Len() - 1)
if len(parts) > 1 {
buf.Write([]byte{'.'})
buf.WriteString(parts[1])
}
return buf.String()
}
// CommafWithDigits works like the Commaf but limits the resulting
// string to the given number of decimal places.
//
// e.g. CommafWithDigits(834142.32, 1) -> 834,142.3
func CommafWithDigits(f float64, decimals int) string {
return stripTrailingDigits(Commaf(f), decimals)
}
// BigComma produces a string form of the given big.Int in base 10
// with commas after every three orders of magnitude.
func BigComma(b *big.Int) string {
sign := ""
if b.Sign() < 0 {
sign = "-"
b.Abs(b)
}
athousand := big.NewInt(1000)
c := (&big.Int{}).Set(b)
_, m := oom(c, athousand)
parts := make([]string, m+1)
j := len(parts) - 1
mod := &big.Int{}
for b.Cmp(athousand) >= 0 {
b.DivMod(b, athousand, mod)
parts[j] = strconv.FormatInt(mod.Int64(), 10)
switch len(parts[j]) {
case 2:
parts[j] = "0" + parts[j]
case 1:
parts[j] = "00" + parts[j]
}
j--
}
parts[j] = strconv.Itoa(int(b.Int64()))
return sign + strings.Join(parts[j:], ",")
}

vendor/github.com/dustin/go-humanize/commaf.go generated vendored Normal file

@@ -0,0 +1,41 @@
//go:build go1.6
// +build go1.6
package humanize
import (
"bytes"
"math/big"
"strings"
)
// BigCommaf produces a string form of the given big.Float in base 10
// with commas after every three orders of magnitude.
func BigCommaf(v *big.Float) string {
buf := &bytes.Buffer{}
if v.Sign() < 0 {
buf.Write([]byte{'-'})
v.Abs(v)
}
comma := []byte{','}
parts := strings.Split(v.Text('f', -1), ".")
pos := 0
if len(parts[0])%3 != 0 {
pos += len(parts[0]) % 3
buf.WriteString(parts[0][:pos])
buf.Write(comma)
}
for ; pos < len(parts[0]); pos += 3 {
buf.WriteString(parts[0][pos : pos+3])
buf.Write(comma)
}
buf.Truncate(buf.Len() - 1)
if len(parts) > 1 {
buf.Write([]byte{'.'})
buf.WriteString(parts[1])
}
return buf.String()
}

vendor/github.com/dustin/go-humanize/ftoa.go generated vendored Normal file

@@ -0,0 +1,49 @@
package humanize
import (
"strconv"
"strings"
)
func stripTrailingZeros(s string) string {
if !strings.ContainsRune(s, '.') {
return s
}
offset := len(s) - 1
for offset > 0 {
if s[offset] == '.' {
offset--
break
}
if s[offset] != '0' {
break
}
offset--
}
return s[:offset+1]
}
func stripTrailingDigits(s string, digits int) string {
if i := strings.Index(s, "."); i >= 0 {
if digits <= 0 {
return s[:i]
}
i++
if i+digits >= len(s) {
return s
}
return s[:i+digits]
}
return s
}
// Ftoa converts a float to a string with no trailing zeros.
func Ftoa(num float64) string {
return stripTrailingZeros(strconv.FormatFloat(num, 'f', 6, 64))
}
// FtoaWithDigits converts a float to a string but limits the resulting string
// to the given number of decimal places, and no trailing zeros.
func FtoaWithDigits(num float64, digits int) string {
return stripTrailingZeros(stripTrailingDigits(strconv.FormatFloat(num, 'f', 6, 64), digits))
}

vendor/github.com/dustin/go-humanize/humanize.go generated vendored Normal file

@@ -0,0 +1,8 @@
/*
Package humanize converts boring ugly numbers to human-friendly strings and back.
Durations can be turned into strings such as "3 days ago", numbers
representing sizes like 82854982 into useful strings like, "83 MB" or
"79 MiB" (whichever you prefer).
*/
package humanize

vendor/github.com/dustin/go-humanize/number.go generated vendored Normal file

@@ -0,0 +1,192 @@
package humanize
/*
Slightly adapted from the source to fit go-humanize.
Author: https://github.com/gorhill
Source: https://gist.github.com/gorhill/5285193
*/
import (
"math"
"strconv"
)
var (
renderFloatPrecisionMultipliers = [...]float64{
1,
10,
100,
1000,
10000,
100000,
1000000,
10000000,
100000000,
1000000000,
}
renderFloatPrecisionRounders = [...]float64{
0.5,
0.05,
0.005,
0.0005,
0.00005,
0.000005,
0.0000005,
0.00000005,
0.000000005,
0.0000000005,
}
)
// FormatFloat produces a formatted number as string based on the following user-specified criteria:
// * thousands separator
// * decimal separator
// * decimal precision
//
// Usage: s := FormatFloat(format, n)
// The format parameter tells how to render the number n.
//
// See examples: http://play.golang.org/p/LXc1Ddm1lJ
//
// Examples of format strings, given n = 12345.6789:
// "#,###.##" => "12,345.67"
// "#,###." => "12,345"
// "#,###" => "12345,678"
// "#\u202F###,##" => "12345,68"
// "#.###,###### => 12.345,678900
// "" (aka default format) => 12,345.67
//
// The highest precision allowed is 9 digits after the decimal symbol.
// There is also a version for integer number, FormatInteger(),
// which is convenient for calls within template.
func FormatFloat(format string, n float64) string {
// Special cases:
// NaN = "NaN"
// +Inf = "+Infinity"
// -Inf = "-Infinity"
if math.IsNaN(n) {
return "NaN"
}
if n > math.MaxFloat64 {
return "Infinity"
}
if n < (0.0 - math.MaxFloat64) {
return "-Infinity"
}
// default format
precision := 2
decimalStr := "."
thousandStr := ","
positiveStr := ""
negativeStr := "-"
if len(format) > 0 {
format := []rune(format)
// If there is an explicit format directive,
// then default values are these:
precision = 9
thousandStr = ""
// collect indices of meaningful formatting directives
formatIndx := []int{}
for i, char := range format {
if char != '#' && char != '0' {
formatIndx = append(formatIndx, i)
}
}
if len(formatIndx) > 0 {
// Directive at index 0:
// Must be a '+'
// Raise an error if not the case
// index: 0123456789
// +0.000,000
// +000,000.0
// +0000.00
// +0000
if formatIndx[0] == 0 {
if format[formatIndx[0]] != '+' {
panic("RenderFloat(): invalid positive sign directive")
}
positiveStr = "+"
formatIndx = formatIndx[1:]
}
// Two directives:
// First is thousands separator
// Raise an error if not followed by 3-digit
// 0123456789
// 0.000,000
// 000,000.00
if len(formatIndx) == 2 {
if (formatIndx[1] - formatIndx[0]) != 4 {
panic("RenderFloat(): thousands separator directive must be followed by 3 digit-specifiers")
}
thousandStr = string(format[formatIndx[0]])
formatIndx = formatIndx[1:]
}
// One directive:
// Directive is decimal separator
// The number of digit-specifier following the separator indicates wanted precision
// 0123456789
// 0.00
// 000,0000
if len(formatIndx) == 1 {
decimalStr = string(format[formatIndx[0]])
precision = len(format) - formatIndx[0] - 1
}
}
}
// generate sign part
var signStr string
if n >= 0.000000001 {
signStr = positiveStr
} else if n <= -0.000000001 {
signStr = negativeStr
n = -n
} else {
signStr = ""
n = 0.0
}
// split number into integer and fractional parts
intf, fracf := math.Modf(n + renderFloatPrecisionRounders[precision])
// generate integer part string
intStr := strconv.FormatInt(int64(intf), 10)
// add thousand separator if required
if len(thousandStr) > 0 {
for i := len(intStr); i > 3; {
i -= 3
intStr = intStr[:i] + thousandStr + intStr[i:]
}
}
// no fractional part, we can leave now
if precision == 0 {
return signStr + intStr
}
// generate fractional part
fracStr := strconv.Itoa(int(fracf * renderFloatPrecisionMultipliers[precision]))
// may need padding
if len(fracStr) < precision {
fracStr = "000000000000000"[:precision-len(fracStr)] + fracStr
}
return signStr + intStr + decimalStr + fracStr
}
// FormatInteger produces a formatted number as string.
// See FormatFloat.
func FormatInteger(format string, n int) string {
return FormatFloat(format, float64(n))
}
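The thousands-separator insertion in `FormatFloat` is a small loop that walks the integer string from the right in steps of three. A standalone copy of just that loop, to make the indexing concrete:

```go
package main

import "fmt"

// addThousands inserts sep every three digits from the right,
// mirroring the separator loop inside FormatFloat above.
func addThousands(intStr, sep string) string {
	for i := len(intStr); i > 3; {
		i -= 3
		intStr = intStr[:i] + sep + intStr[i:]
	}
	return intStr
}

func main() {
	fmt.Println(addThousands("12345678", ",")) // 12,345,678
}
```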

vendor/github.com/dustin/go-humanize/ordinals.go generated vendored Normal file

@@ -0,0 +1,25 @@
package humanize
import "strconv"
// Ordinal gives you the input number in a rank/ordinal format.
//
// Ordinal(3) -> 3rd
func Ordinal(x int) string {
suffix := "th"
switch x % 10 {
case 1:
if x%100 != 11 {
suffix = "st"
}
case 2:
if x%100 != 12 {
suffix = "nd"
}
case 3:
if x%100 != 13 {
suffix = "rd"
}
}
return strconv.Itoa(x) + suffix
}
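The switch above handles the teens specially (11th, 12th, 13th rather than 11st, 12nd, 13rd). A quick standalone check, with the function copied verbatim so the snippet runs on its own:

```go
package main

import (
	"fmt"
	"strconv"
)

// ordinal is Ordinal from ordinals.go, copied for a self-contained demo.
func ordinal(x int) string {
	suffix := "th"
	switch x % 10 {
	case 1:
		if x%100 != 11 {
			suffix = "st"
		}
	case 2:
		if x%100 != 12 {
			suffix = "nd"
		}
	case 3:
		if x%100 != 13 {
			suffix = "rd"
		}
	}
	return strconv.Itoa(x) + suffix
}

func main() {
	for _, n := range []int{1, 2, 3, 11, 12, 13, 21, 22, 23} {
		fmt.Println(ordinal(n))
	}
}
```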

vendor/github.com/dustin/go-humanize/si.go generated vendored Normal file

@@ -0,0 +1,127 @@
package humanize
import (
"errors"
"math"
"regexp"
"strconv"
)
var siPrefixTable = map[float64]string{
-30: "q", // quecto
-27: "r", // ronto
-24: "y", // yocto
-21: "z", // zepto
-18: "a", // atto
-15: "f", // femto
-12: "p", // pico
-9: "n", // nano
-6: "µ", // micro
-3: "m", // milli
0: "",
3: "k", // kilo
6: "M", // mega
9: "G", // giga
12: "T", // tera
15: "P", // peta
18: "E", // exa
21: "Z", // zetta
24: "Y", // yotta
27: "R", // ronna
30: "Q", // quetta
}
var revSIPrefixTable = revfmap(siPrefixTable)
// revfmap reverses the map and precomputes the power multiplier
func revfmap(in map[float64]string) map[string]float64 {
rv := map[string]float64{}
for k, v := range in {
rv[v] = math.Pow(10, k)
}
return rv
}
var riParseRegex *regexp.Regexp
func init() {
ri := `^([\-0-9.]+)\s?([`
for _, v := range siPrefixTable {
ri += v
}
ri += `]?)(.*)`
riParseRegex = regexp.MustCompile(ri)
}
// ComputeSI finds the most appropriate SI prefix for the given number
// and returns the prefix along with the value adjusted to be within
// that prefix.
//
// See also: SI, ParseSI.
//
// e.g. ComputeSI(2.2345e-12) -> (2.2345, "p")
func ComputeSI(input float64) (float64, string) {
if input == 0 {
return 0, ""
}
mag := math.Abs(input)
exponent := math.Floor(logn(mag, 10))
exponent = math.Floor(exponent/3) * 3
value := mag / math.Pow(10, exponent)
// Handle special case where value is exactly 1000.0
// Should return 1 M instead of 1000 k
if value == 1000.0 {
exponent += 3
value = mag / math.Pow(10, exponent)
}
value = math.Copysign(value, input)
prefix := siPrefixTable[exponent]
return value, prefix
}
// SI returns a string with default formatting.
//
// SI uses Ftoa to format float value, removing trailing zeros.
//
// See also: ComputeSI, ParseSI.
//
// e.g. SI(1000000, "B") -> 1 MB
// e.g. SI(2.2345e-12, "F") -> 2.2345 pF
func SI(input float64, unit string) string {
value, prefix := ComputeSI(input)
return Ftoa(value) + " " + prefix + unit
}
// SIWithDigits works like SI but limits the resulting string to the
// given number of decimal places.
//
// e.g. SIWithDigits(1000000, 0, "B") -> 1 MB
// e.g. SIWithDigits(2.2345e-12, 2, "F") -> 2.23 pF
func SIWithDigits(input float64, decimals int, unit string) string {
value, prefix := ComputeSI(input)
return FtoaWithDigits(value, decimals) + " " + prefix + unit
}
var errInvalid = errors.New("invalid input")
// ParseSI parses an SI string back into the number and unit.
//
// See also: SI, ComputeSI.
//
// e.g. ParseSI("2.2345 pF") -> (2.2345e-12, "F", nil)
func ParseSI(input string) (float64, string, error) {
found := riParseRegex.FindStringSubmatch(input)
if len(found) != 4 {
return 0, "", errInvalid
}
mag := revSIPrefixTable[found[2]]
unit := found[3]
base, err := strconv.ParseFloat(found[1], 64)
return base * mag, unit, err
}

vendor/github.com/dustin/go-humanize/times.go generated vendored Normal file

@@ -0,0 +1,117 @@
package humanize
import (
"fmt"
"math"
"sort"
"time"
)
// Seconds-based time units
const (
Day = 24 * time.Hour
Week = 7 * Day
Month = 30 * Day
Year = 12 * Month
LongTime = 37 * Year
)
// Time formats a time into a relative string.
//
// Time(someT) -> "3 weeks ago"
func Time(then time.Time) string {
return RelTime(then, time.Now(), "ago", "from now")
}
// A RelTimeMagnitude struct contains a relative time point at which
// the relative format of time will switch to a new format string. A
// slice of these in ascending order by their "D" field is passed to
// CustomRelTime to format durations.
//
// The Format field is a string that may contain a "%s" which will be
// replaced with the appropriate signed label (e.g. "ago" or "from
// now") and a "%d" that will be replaced by the quantity.
//
// The DivBy field is the amount of time the time difference must be
// divided by in order to display correctly.
//
// e.g. if D is 2*time.Minute and you want to display "%d minutes %s"
// DivBy should be time.Minute so whatever the duration is will be
// expressed in minutes.
type RelTimeMagnitude struct {
D time.Duration
Format string
DivBy time.Duration
}
var defaultMagnitudes = []RelTimeMagnitude{
{time.Second, "now", time.Second},
{2 * time.Second, "1 second %s", 1},
{time.Minute, "%d seconds %s", time.Second},
{2 * time.Minute, "1 minute %s", 1},
{time.Hour, "%d minutes %s", time.Minute},
{2 * time.Hour, "1 hour %s", 1},
{Day, "%d hours %s", time.Hour},
{2 * Day, "1 day %s", 1},
{Week, "%d days %s", Day},
{2 * Week, "1 week %s", 1},
{Month, "%d weeks %s", Week},
{2 * Month, "1 month %s", 1},
{Year, "%d months %s", Month},
{18 * Month, "1 year %s", 1},
{2 * Year, "2 years %s", 1},
{LongTime, "%d years %s", Year},
{math.MaxInt64, "a long while %s", 1},
}
// RelTime formats a time into a relative string.
//
// It takes two times and two labels. In addition to the generic time
// delta string (e.g. 5 minutes), the labels are applied so that
// the label corresponding to the smaller time is applied.
//
// RelTime(timeInPast, timeInFuture, "earlier", "later") -> "3 weeks earlier"
func RelTime(a, b time.Time, albl, blbl string) string {
return CustomRelTime(a, b, albl, blbl, defaultMagnitudes)
}
// CustomRelTime formats a time into a relative string.
//
// It takes two times, two labels, and a table of relative time formats.
// In addition to the generic time delta string (e.g. 5 minutes), the
// labels are applied so that the label corresponding to the
// smaller time is applied.
func CustomRelTime(a, b time.Time, albl, blbl string, magnitudes []RelTimeMagnitude) string {
lbl := albl
diff := b.Sub(a)
if a.After(b) {
lbl = blbl
diff = a.Sub(b)
}
n := sort.Search(len(magnitudes), func(i int) bool {
return magnitudes[i].D > diff
})
if n >= len(magnitudes) {
n = len(magnitudes) - 1
}
mag := magnitudes[n]
args := []interface{}{}
escaped := false
for _, ch := range mag.Format {
if escaped {
switch ch {
case 's':
args = append(args, lbl)
case 'd':
args = append(args, diff/mag.DivBy)
}
escaped = false
} else {
escaped = ch == '%'
}
}
return fmt.Sprintf(mag.Format, args...)
}

vendor/github.com/google/uuid/CHANGELOG.md generated vendored Normal file

@@ -0,0 +1,41 @@
# Changelog
## [1.6.0](https://github.com/google/uuid/compare/v1.5.0...v1.6.0) (2024-01-16)
### Features
* add Max UUID constant ([#149](https://github.com/google/uuid/issues/149)) ([c58770e](https://github.com/google/uuid/commit/c58770eb495f55fe2ced6284f93c5158a62e53e3))
### Bug Fixes
* fix typo in version 7 uuid documentation ([#153](https://github.com/google/uuid/issues/153)) ([016b199](https://github.com/google/uuid/commit/016b199544692f745ffc8867b914129ecb47ef06))
* Monotonicity in UUIDv7 ([#150](https://github.com/google/uuid/issues/150)) ([a2b2b32](https://github.com/google/uuid/commit/a2b2b32373ff0b1a312b7fdf6d38a977099698a6))
## [1.5.0](https://github.com/google/uuid/compare/v1.4.0...v1.5.0) (2023-12-12)
### Features
* Validate UUID without creating new UUID ([#141](https://github.com/google/uuid/issues/141)) ([9ee7366](https://github.com/google/uuid/commit/9ee7366e66c9ad96bab89139418a713dc584ae29))
## [1.4.0](https://github.com/google/uuid/compare/v1.3.1...v1.4.0) (2023-10-26)
### Features
* UUIDs slice type with Strings() convenience method ([#133](https://github.com/google/uuid/issues/133)) ([cd5fbbd](https://github.com/google/uuid/commit/cd5fbbdd02f3e3467ac18940e07e062be1f864b4))
### Fixes
* Clarify that Parse's job is to parse but not necessarily validate strings. (Documents current behavior)
## [1.3.1](https://github.com/google/uuid/compare/v1.3.0...v1.3.1) (2023-08-18)
### Bug Fixes
* Use .EqualFold() to parse urn prefixed UUIDs ([#118](https://github.com/google/uuid/issues/118)) ([574e687](https://github.com/google/uuid/commit/574e6874943741fb99d41764c705173ada5293f0))
## Changelog

vendor/github.com/google/uuid/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,26 @@
# How to contribute
We definitely welcome patches and contribution to this project!
### Tips
Commits must be formatted according to the [Conventional Commits Specification](https://www.conventionalcommits.org).
Always try to include a test case! If it is not possible or not necessary,
please explain why in the pull request description.
### Releasing
Commits that would precipitate a SemVer change, as described in the Conventional
Commits Specification, will trigger [`release-please`](https://github.com/google-github-actions/release-please-action)
to create a release candidate pull request. Once submitted, `release-please`
will create a release.
For tips on how to work with `release-please`, see its documentation.
### Legal requirements
In order to protect both you and ourselves, you will need to sign the
[Contributor License Agreement](https://cla.developers.google.com/clas).
You may have already signed it for other Google projects.

vendor/github.com/google/uuid/CONTRIBUTORS generated vendored Normal file

@@ -0,0 +1,9 @@
Paul Borman <borman@google.com>
bmatsuo
shawnps
theory
jboverfelt
dsymonds
cd1
wallclockbuilder
dansouza

vendor/github.com/google/uuid/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2009,2014 Google Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/google/uuid/README.md generated vendored Normal file

@@ -0,0 +1,21 @@
# uuid
The uuid package generates and inspects UUIDs based on
[RFC 4122](https://datatracker.ietf.org/doc/html/rfc4122)
and DCE 1.1: Authentication and Security Services.
This package is based on the github.com/pborman/uuid package (previously named
code.google.com/p/go-uuid). It differs from these earlier packages in that
a UUID is a 16 byte array rather than a byte slice. One loss due to this
change is the ability to represent an invalid UUID (vs a NIL UUID).
###### Install
```sh
go get github.com/google/uuid
```
###### Documentation
[![Go Reference](https://pkg.go.dev/badge/github.com/google/uuid.svg)](https://pkg.go.dev/github.com/google/uuid)
Full `go doc` style documentation for the package can be viewed online without
installing this package by using the GoDoc site here:
http://pkg.go.dev/github.com/google/uuid

vendor/github.com/google/uuid/dce.go generated vendored Normal file

@@ -0,0 +1,80 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"encoding/binary"
"fmt"
"os"
)
// A Domain represents a Version 2 domain
type Domain byte
// Domain constants for DCE Security (Version 2) UUIDs.
const (
Person = Domain(0)
Group = Domain(1)
Org = Domain(2)
)
// NewDCESecurity returns a DCE Security (Version 2) UUID.
//
// The domain should be one of Person, Group or Org.
// On a POSIX system the id should be the user's UID for the Person
// domain and the user's GID for the Group. The meaning of id for
// the domain Org or on non-POSIX systems is site defined.
//
// For a given domain/id pair the same token may be returned for up to
// 7 minutes and 10 seconds.
func NewDCESecurity(domain Domain, id uint32) (UUID, error) {
uuid, err := NewUUID()
if err == nil {
uuid[6] = (uuid[6] & 0x0f) | 0x20 // Version 2
uuid[9] = byte(domain)
binary.BigEndian.PutUint32(uuid[0:], id)
}
return uuid, err
}
// NewDCEPerson returns a DCE Security (Version 2) UUID in the person
// domain with the id returned by os.Getuid.
//
// NewDCESecurity(Person, uint32(os.Getuid()))
func NewDCEPerson() (UUID, error) {
return NewDCESecurity(Person, uint32(os.Getuid()))
}
// NewDCEGroup returns a DCE Security (Version 2) UUID in the group
// domain with the id returned by os.Getgid.
//
// NewDCESecurity(Group, uint32(os.Getgid()))
func NewDCEGroup() (UUID, error) {
return NewDCESecurity(Group, uint32(os.Getgid()))
}
// Domain returns the domain for a Version 2 UUID. Domains are only defined
// for Version 2 UUIDs.
func (uuid UUID) Domain() Domain {
return Domain(uuid[9])
}
// ID returns the id for a Version 2 UUID. IDs are only defined for Version 2
// UUIDs.
func (uuid UUID) ID() uint32 {
return binary.BigEndian.Uint32(uuid[0:4])
}
func (d Domain) String() string {
switch d {
case Person:
return "Person"
case Group:
return "Group"
case Org:
return "Org"
}
return fmt.Sprintf("Domain%d", int(d))
}

vendor/github.com/google/uuid/doc.go generated vendored Normal file

@@ -0,0 +1,12 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package uuid generates and inspects UUIDs.
//
// UUIDs are based on RFC 4122 and DCE 1.1: Authentication and Security
// Services.
//
// A UUID is a 16 byte (128 bit) array. UUIDs may be used as keys to
// maps or compared directly.
package uuid

vendor/github.com/google/uuid/hash.go generated vendored Normal file

@@ -0,0 +1,59 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"crypto/md5"
"crypto/sha1"
"hash"
)
// Well known namespace IDs and UUIDs
var (
NameSpaceDNS = Must(Parse("6ba7b810-9dad-11d1-80b4-00c04fd430c8"))
NameSpaceURL = Must(Parse("6ba7b811-9dad-11d1-80b4-00c04fd430c8"))
NameSpaceOID = Must(Parse("6ba7b812-9dad-11d1-80b4-00c04fd430c8"))
NameSpaceX500 = Must(Parse("6ba7b814-9dad-11d1-80b4-00c04fd430c8"))
Nil UUID // empty UUID, all zeros
// The Max UUID is a special form of UUID that is specified to have all 128 bits set to 1.
Max = UUID{
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
}
)
// NewHash returns a new UUID derived from the hash of space concatenated with
// data generated by h. The hash should be at least 16 bytes in length. The
// first 16 bytes of the hash are used to form the UUID. The version of the
// UUID will be the lower 4 bits of version. NewHash is used to implement
// NewMD5 and NewSHA1.
func NewHash(h hash.Hash, space UUID, data []byte, version int) UUID {
h.Reset()
h.Write(space[:]) //nolint:errcheck
h.Write(data) //nolint:errcheck
s := h.Sum(nil)
var uuid UUID
copy(uuid[:], s)
uuid[6] = (uuid[6] & 0x0f) | uint8((version&0xf)<<4)
uuid[8] = (uuid[8] & 0x3f) | 0x80 // RFC 4122 variant
return uuid
}
// NewMD5 returns a new MD5 (Version 3) UUID based on the
// supplied name space and data. It is the same as calling:
//
// NewHash(md5.New(), space, data, 3)
func NewMD5(space UUID, data []byte) UUID {
return NewHash(md5.New(), space, data, 3)
}
// NewSHA1 returns a new SHA1 (Version 5) UUID based on the
// supplied name space and data. It is the same as calling:
//
// NewHash(sha1.New(), space, data, 5)
func NewSHA1(space UUID, data []byte) UUID {
return NewHash(sha1.New(), space, data, 5)
}
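The version and variant stamping in `NewHash` is two bit operations on the raw hash output. A self-contained sketch of the SHA-1 (version 5) path, with `NameSpaceDNS` copied from the table above; the assertions check only the stamped bits, not a specific UUID value:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// nameSpaceDNS is the well-known DNS namespace UUID from hash.go.
var nameSpaceDNS = [16]byte{
	0x6b, 0xa7, 0xb8, 0x10, 0x9d, 0xad, 0x11, 0xd1,
	0x80, 0xb4, 0x00, 0xc0, 0x4f, 0xd4, 0x30, 0xc8,
}

// newSHA1 mirrors NewHash(sha1.New(), space, data, 5) from the file above.
func newSHA1(space [16]byte, data []byte) [16]byte {
	h := sha1.New()
	h.Write(space[:])
	h.Write(data)
	s := h.Sum(nil)
	var u [16]byte
	copy(u[:], s) // first 16 of 20 SHA-1 bytes
	u[6] = (u[6] & 0x0f) | 0x50 // stamp version 5 in the high nibble
	u[8] = (u[8] & 0x3f) | 0x80 // stamp the RFC 4122 variant
	return u
}

func main() {
	u := newSHA1(nameSpaceDNS, []byte("example.com"))
	fmt.Printf("version=%d variantOK=%v\n", u[6]>>4, u[8]&0xc0 == 0x80)
}
```

The same name always yields the same UUID, which is the point of name-based versions 3 and 5.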

vendor/github.com/google/uuid/marshal.go generated vendored Normal file

@@ -0,0 +1,38 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import "fmt"
// MarshalText implements encoding.TextMarshaler.
func (uuid UUID) MarshalText() ([]byte, error) {
var js [36]byte
encodeHex(js[:], uuid)
return js[:], nil
}
// UnmarshalText implements encoding.TextUnmarshaler.
func (uuid *UUID) UnmarshalText(data []byte) error {
id, err := ParseBytes(data)
if err != nil {
return err
}
*uuid = id
return nil
}
// MarshalBinary implements encoding.BinaryMarshaler.
func (uuid UUID) MarshalBinary() ([]byte, error) {
return uuid[:], nil
}
// UnmarshalBinary implements encoding.BinaryUnmarshaler.
func (uuid *UUID) UnmarshalBinary(data []byte) error {
if len(data) != 16 {
return fmt.Errorf("invalid UUID (got %d bytes)", len(data))
}
copy(uuid[:], data)
return nil
}

vendor/github.com/google/uuid/node.go generated vendored Normal file

@@ -0,0 +1,90 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"sync"
)
var (
nodeMu sync.Mutex
ifname string // name of interface being used
nodeID [6]byte // hardware for version 1 UUIDs
zeroID [6]byte // nodeID with only 0's
)
// NodeInterface returns the name of the interface from which the NodeID was
// derived. The interface "user" is returned if the NodeID was set by
// SetNodeID.
func NodeInterface() string {
defer nodeMu.Unlock()
nodeMu.Lock()
return ifname
}
// SetNodeInterface selects the hardware address to be used for Version 1 UUIDs.
// If name is "" then the first usable interface found will be used or a random
// Node ID will be generated. If a named interface cannot be found then false
// is returned.
//
// SetNodeInterface never fails when name is "".
func SetNodeInterface(name string) bool {
defer nodeMu.Unlock()
nodeMu.Lock()
return setNodeInterface(name)
}
func setNodeInterface(name string) bool {
iname, addr := getHardwareInterface(name) // null implementation for js
if iname != "" && addr != nil {
ifname = iname
copy(nodeID[:], addr)
return true
}
// We found no interfaces with a valid hardware address. If name
// does not specify a specific interface generate a random Node ID
// (section 4.1.6)
if name == "" {
ifname = "random"
randomBits(nodeID[:])
return true
}
return false
}
// NodeID returns a slice of a copy of the current Node ID, setting the Node ID
// if not already set.
func NodeID() []byte {
defer nodeMu.Unlock()
nodeMu.Lock()
if nodeID == zeroID {
setNodeInterface("")
}
nid := nodeID
return nid[:]
}
// SetNodeID sets the Node ID to be used for Version 1 UUIDs. The first 6 bytes
// of id are used. If id is less than 6 bytes then false is returned and the
// Node ID is not set.
func SetNodeID(id []byte) bool {
if len(id) < 6 {
return false
}
defer nodeMu.Unlock()
nodeMu.Lock()
copy(nodeID[:], id)
ifname = "user"
return true
}
// NodeID returns the 6 byte node id encoded in uuid. It returns nil if uuid is
// not valid. The NodeID is only well defined for version 1 and 2 UUIDs.
func (uuid UUID) NodeID() []byte {
var node [6]byte
copy(node[:], uuid[10:])
return node[:]
}

vendor/github.com/google/uuid/node_js.go generated vendored Normal file

@@ -0,0 +1,12 @@
// Copyright 2017 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build js
package uuid
// getHardwareInterface returns nil values for the JS version of the code.
// This removes the "net" dependency, because it is not used in the browser.
// Using the "net" library inflates the size of the transpiled JS code by 673k bytes.
func getHardwareInterface(name string) (string, []byte) { return "", nil }

vendor/github.com/google/uuid/node_net.go generated vendored Normal file

@@ -0,0 +1,33 @@
// Copyright 2017 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !js
package uuid
import "net"
var interfaces []net.Interface // cached list of interfaces
// getHardwareInterface returns the name and hardware address of interface name.
// If name is "" then the name and hardware address of one of the system's
// interfaces is returned. If no interfaces are found (name does not exist or
// there are no interfaces) then "", nil is returned.
//
// Only addresses of at least 6 bytes are returned.
func getHardwareInterface(name string) (string, []byte) {
if interfaces == nil {
var err error
interfaces, err = net.Interfaces()
if err != nil {
return "", nil
}
}
for _, ifs := range interfaces {
if len(ifs.HardwareAddr) >= 6 && (name == "" || name == ifs.Name) {
return ifs.Name, ifs.HardwareAddr
}
}
return "", nil
}

vendor/github.com/google/uuid/null.go generated vendored Normal file

@@ -0,0 +1,118 @@
// Copyright 2021 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"bytes"
"database/sql/driver"
"encoding/json"
"fmt"
)
var jsonNull = []byte("null")
// NullUUID represents a UUID that may be null.
// NullUUID implements the SQL driver.Scanner interface so
// it can be used as a scan destination:
//
// var u uuid.NullUUID
// err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&u)
// ...
// if u.Valid {
// // use u.UUID
// } else {
// // NULL value
// }
//
type NullUUID struct {
UUID UUID
Valid bool // Valid is true if UUID is not NULL
}
// Scan implements the SQL driver.Scanner interface.
func (nu *NullUUID) Scan(value interface{}) error {
if value == nil {
nu.UUID, nu.Valid = Nil, false
return nil
}
err := nu.UUID.Scan(value)
if err != nil {
nu.Valid = false
return err
}
nu.Valid = true
return nil
}
// Value implements the driver Valuer interface.
func (nu NullUUID) Value() (driver.Value, error) {
if !nu.Valid {
return nil, nil
}
// Delegate to UUID Value function
return nu.UUID.Value()
}
// MarshalBinary implements encoding.BinaryMarshaler.
func (nu NullUUID) MarshalBinary() ([]byte, error) {
if nu.Valid {
return nu.UUID[:], nil
}
return []byte(nil), nil
}
// UnmarshalBinary implements encoding.BinaryUnmarshaler.
func (nu *NullUUID) UnmarshalBinary(data []byte) error {
if len(data) != 16 {
return fmt.Errorf("invalid UUID (got %d bytes)", len(data))
}
copy(nu.UUID[:], data)
nu.Valid = true
return nil
}
// MarshalText implements encoding.TextMarshaler.
func (nu NullUUID) MarshalText() ([]byte, error) {
if nu.Valid {
return nu.UUID.MarshalText()
}
return jsonNull, nil
}
// UnmarshalText implements encoding.TextUnmarshaler.
func (nu *NullUUID) UnmarshalText(data []byte) error {
id, err := ParseBytes(data)
if err != nil {
nu.Valid = false
return err
}
nu.UUID = id
nu.Valid = true
return nil
}
// MarshalJSON implements json.Marshaler.
func (nu NullUUID) MarshalJSON() ([]byte, error) {
if nu.Valid {
return json.Marshal(nu.UUID)
}
return jsonNull, nil
}
// UnmarshalJSON implements json.Unmarshaler.
func (nu *NullUUID) UnmarshalJSON(data []byte) error {
if bytes.Equal(data, jsonNull) {
*nu = NullUUID{}
return nil // valid null UUID
}
err := json.Unmarshal(data, &nu.UUID)
nu.Valid = err == nil
return err
}

vendor/github.com/google/uuid/sql.go generated vendored Normal file

@@ -0,0 +1,59 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"database/sql/driver"
"fmt"
)
// Scan implements sql.Scanner so UUIDs can be read from databases transparently.
// Currently, database types that map to string and []byte are supported. Please
// consult database-specific driver documentation for matching types.
func (uuid *UUID) Scan(src interface{}) error {
switch src := src.(type) {
case nil:
return nil
case string:
// if an empty UUID comes from a table, we return a null UUID
if src == "" {
return nil
}
// see Parse for required string format
u, err := Parse(src)
if err != nil {
return fmt.Errorf("Scan: %v", err)
}
*uuid = u
case []byte:
// if an empty UUID comes from a table, we return a null UUID
if len(src) == 0 {
return nil
}
// assumes a simple slice of bytes if 16 bytes
// otherwise attempts to parse
if len(src) != 16 {
return uuid.Scan(string(src))
}
copy((*uuid)[:], src)
default:
return fmt.Errorf("Scan: unable to scan type %T into UUID", src)
}
return nil
}
// Value implements sql.Valuer so that UUIDs can be written to databases
// transparently. Currently, UUIDs map to strings. Please consult
// database-specific driver documentation for matching types.
func (uuid UUID) Value() (driver.Value, error) {
return uuid.String(), nil
}

vendor/github.com/google/uuid/time.go generated vendored Normal file

@@ -0,0 +1,134 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"encoding/binary"
"sync"
"time"
)
// A Time represents a time as the number of 100's of nanoseconds since 15 Oct
// 1582.
type Time int64
const (
lillian = 2299160 // Julian day of 15 Oct 1582
unix = 2440587 // Julian day of 1 Jan 1970
epoch = unix - lillian // Days between epochs
g1582 = epoch * 86400 // seconds between epochs
g1582ns100 = g1582 * 10000000 // 100s of nanoseconds between epochs
)
var (
timeMu sync.Mutex
lasttime uint64 // last time we returned
clockSeq uint16 // clock sequence for this run
timeNow = time.Now // for testing
)
// UnixTime converts t to the number of seconds and nanoseconds since the
// Unix epoch of 1 Jan 1970.
func (t Time) UnixTime() (sec, nsec int64) {
sec = int64(t - g1582ns100)
nsec = (sec % 10000000) * 100
sec /= 10000000
return sec, nsec
}
// GetTime returns the current Time (100s of nanoseconds since 15 Oct 1582) and
// clock sequence as well as adjusting the clock sequence as needed. An error
// is returned if the current time cannot be determined.
func GetTime() (Time, uint16, error) {
defer timeMu.Unlock()
timeMu.Lock()
return getTime()
}
func getTime() (Time, uint16, error) {
t := timeNow()
// If we don't have a clock sequence already, set one.
if clockSeq == 0 {
setClockSequence(-1)
}
now := uint64(t.UnixNano()/100) + g1582ns100
// If time has gone backwards with this clock sequence then we
// increment the clock sequence
if now <= lasttime {
clockSeq = ((clockSeq + 1) & 0x3fff) | 0x8000
}
lasttime = now
return Time(now), clockSeq, nil
}
// ClockSequence returns the current clock sequence, generating one if not
// already set. The clock sequence is only used for Version 1 UUIDs.
//
// The uuid package does not use global static storage for the clock sequence or
// the last time a UUID was generated. Unless SetClockSequence is used, a new
// random clock sequence is generated the first time a clock sequence is
// requested by ClockSequence, GetTime, or NewUUID. (section 4.2.1.1)
func ClockSequence() int {
defer timeMu.Unlock()
timeMu.Lock()
return clockSequence()
}
func clockSequence() int {
if clockSeq == 0 {
setClockSequence(-1)
}
return int(clockSeq & 0x3fff)
}
// SetClockSequence sets the clock sequence to the lower 14 bits of seq. Setting to
// -1 causes a new sequence to be generated.
func SetClockSequence(seq int) {
defer timeMu.Unlock()
timeMu.Lock()
setClockSequence(seq)
}
func setClockSequence(seq int) {
if seq == -1 {
var b [2]byte
randomBits(b[:]) // clock sequence
seq = int(b[0])<<8 | int(b[1])
}
oldSeq := clockSeq
clockSeq = uint16(seq&0x3fff) | 0x8000 // Set our variant
if oldSeq != clockSeq {
lasttime = 0
}
}
// Time returns the time in 100s of nanoseconds since 15 Oct 1582 encoded in
// uuid. The time is only defined for version 1, 2, 6 and 7 UUIDs.
func (uuid UUID) Time() Time {
var t Time
switch uuid.Version() {
case 6:
time := binary.BigEndian.Uint64(uuid[:8]) // Ignore uuid[6] version b0110
t = Time(time)
case 7:
time := binary.BigEndian.Uint64(uuid[:8])
t = Time((time>>16)*10000 + g1582ns100)
default: // forward compatible
time := int64(binary.BigEndian.Uint32(uuid[0:4]))
time |= int64(binary.BigEndian.Uint16(uuid[4:6])) << 32
time |= int64(binary.BigEndian.Uint16(uuid[6:8])&0xfff) << 48
t = Time(time)
}
return t
}
// ClockSequence returns the clock sequence encoded in uuid.
// The clock sequence is only well defined for version 1 and 2 UUIDs.
func (uuid UUID) ClockSequence() int {
return int(binary.BigEndian.Uint16(uuid[8:10])) & 0x3fff
}

vendor/github.com/google/uuid/util.go generated vendored Normal file

@@ -0,0 +1,43 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"io"
)
// randomBits completely fills slice b with random data.
func randomBits(b []byte) {
if _, err := io.ReadFull(rander, b); err != nil {
panic(err.Error()) // rand should never fail
}
}
// xvalues returns the value of a byte as a hexadecimal digit or 255.
var xvalues = [256]byte{
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 255, 255, 255, 255, 255, 255,
255, 10, 11, 12, 13, 14, 15, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 10, 11, 12, 13, 14, 15, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
}
// xtob converts hex characters x1 and x2 into a byte.
func xtob(x1, x2 byte) (byte, bool) {
b1 := xvalues[x1]
b2 := xvalues[x2]
return (b1 << 4) | b2, b1 != 255 && b2 != 255
}

vendor/github.com/google/uuid/uuid.go generated vendored Normal file

@@ -0,0 +1,365 @@
// Copyright 2018 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"bytes"
"crypto/rand"
"encoding/hex"
"errors"
"fmt"
"io"
"strings"
"sync"
)
// A UUID is a 128 bit (16 byte) Universal Unique IDentifier as defined in RFC
// 4122.
type UUID [16]byte
// A Version represents a UUID's version.
type Version byte
// A Variant represents a UUID's variant.
type Variant byte
// Constants returned by Variant.
const (
Invalid = Variant(iota) // Invalid UUID
RFC4122 // The variant specified in RFC4122
Reserved // Reserved, NCS backward compatibility.
Microsoft // Reserved, Microsoft Corporation backward compatibility.
Future // Reserved for future definition.
)
const randPoolSize = 16 * 16
var (
rander = rand.Reader // random function
poolEnabled = false
poolMu sync.Mutex
poolPos = randPoolSize // protected with poolMu
pool [randPoolSize]byte // protected with poolMu
)
type invalidLengthError struct{ len int }
func (err invalidLengthError) Error() string {
return fmt.Sprintf("invalid UUID length: %d", err.len)
}
// IsInvalidLengthError reports whether err is the custom invalidLengthError.
func IsInvalidLengthError(err error) bool {
_, ok := err.(invalidLengthError)
return ok
}
// Parse decodes s into a UUID or returns an error if it cannot be parsed. Both
// the standard UUID forms defined in RFC 4122
// (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and
// urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx) are decoded. In addition,
// Parse accepts non-standard strings such as the raw hex encoding
// xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx and 38 byte "Microsoft style" encodings,
// e.g. {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}. Only the middle 36 bytes are
// examined in the latter case. Parse should not be used to validate strings as
// it parses non-standard encodings as indicated above.
func Parse(s string) (UUID, error) {
var uuid UUID
switch len(s) {
// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
case 36:
// urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
case 36 + 9:
if !strings.EqualFold(s[:9], "urn:uuid:") {
return uuid, fmt.Errorf("invalid urn prefix: %q", s[:9])
}
s = s[9:]
// {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
case 36 + 2:
s = s[1:]
// xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
case 32:
var ok bool
for i := range uuid {
uuid[i], ok = xtob(s[i*2], s[i*2+1])
if !ok {
return uuid, errors.New("invalid UUID format")
}
}
return uuid, nil
default:
return uuid, invalidLengthError{len(s)}
}
// s is now at least 36 bytes long
// it must be of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
return uuid, errors.New("invalid UUID format")
}
for i, x := range [16]int{
0, 2, 4, 6,
9, 11,
14, 16,
19, 21,
24, 26, 28, 30, 32, 34,
} {
v, ok := xtob(s[x], s[x+1])
if !ok {
return uuid, errors.New("invalid UUID format")
}
uuid[i] = v
}
return uuid, nil
}
// ParseBytes is like Parse, except it parses a byte slice instead of a string.
func ParseBytes(b []byte) (UUID, error) {
var uuid UUID
switch len(b) {
case 36: // xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
case 36 + 9: // urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
if !bytes.EqualFold(b[:9], []byte("urn:uuid:")) {
return uuid, fmt.Errorf("invalid urn prefix: %q", b[:9])
}
b = b[9:]
case 36 + 2: // {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
b = b[1:]
case 32: // xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
var ok bool
for i := 0; i < 32; i += 2 {
uuid[i/2], ok = xtob(b[i], b[i+1])
if !ok {
return uuid, errors.New("invalid UUID format")
}
}
return uuid, nil
default:
return uuid, invalidLengthError{len(b)}
}
// s is now at least 36 bytes long
// it must be of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
if b[8] != '-' || b[13] != '-' || b[18] != '-' || b[23] != '-' {
return uuid, errors.New("invalid UUID format")
}
for i, x := range [16]int{
0, 2, 4, 6,
9, 11,
14, 16,
19, 21,
24, 26, 28, 30, 32, 34,
} {
v, ok := xtob(b[x], b[x+1])
if !ok {
return uuid, errors.New("invalid UUID format")
}
uuid[i] = v
}
return uuid, nil
}
// MustParse is like Parse but panics if the string cannot be parsed.
// It simplifies safe initialization of global variables holding compiled UUIDs.
func MustParse(s string) UUID {
uuid, err := Parse(s)
if err != nil {
panic(`uuid: Parse(` + s + `): ` + err.Error())
}
return uuid
}
// FromBytes creates a new UUID from a byte slice. Returns an error if the slice
// does not have a length of 16. The bytes are copied from the slice.
func FromBytes(b []byte) (uuid UUID, err error) {
err = uuid.UnmarshalBinary(b)
return uuid, err
}
// Must returns uuid if err is nil and panics otherwise.
func Must(uuid UUID, err error) UUID {
if err != nil {
panic(err)
}
return uuid
}
// Validate returns an error if s is not a properly formatted UUID in one of the following formats:
// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
// urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
// xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
// {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
// It returns an error if the format is invalid, otherwise nil.
func Validate(s string) error {
switch len(s) {
// Standard UUID format
case 36:
// UUID with "urn:uuid:" prefix
case 36 + 9:
if !strings.EqualFold(s[:9], "urn:uuid:") {
return fmt.Errorf("invalid urn prefix: %q", s[:9])
}
s = s[9:]
// UUID enclosed in braces
case 36 + 2:
if s[0] != '{' || s[len(s)-1] != '}' {
return fmt.Errorf("invalid bracketed UUID format")
}
s = s[1 : len(s)-1]
// UUID without hyphens
case 32:
for i := 0; i < len(s); i += 2 {
_, ok := xtob(s[i], s[i+1])
if !ok {
return errors.New("invalid UUID format")
}
}
default:
return invalidLengthError{len(s)}
}
// Check for standard UUID format
if len(s) == 36 {
if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
return errors.New("invalid UUID format")
}
for _, x := range []int{0, 2, 4, 6, 9, 11, 14, 16, 19, 21, 24, 26, 28, 30, 32, 34} {
if _, ok := xtob(s[x], s[x+1]); !ok {
return errors.New("invalid UUID format")
}
}
}
return nil
}
// String returns the string form of uuid, xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
// , or "" if uuid is invalid.
func (uuid UUID) String() string {
var buf [36]byte
encodeHex(buf[:], uuid)
return string(buf[:])
}
// URN returns the RFC 2141 URN form of uuid,
// urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, or "" if uuid is invalid.
func (uuid UUID) URN() string {
var buf [36 + 9]byte
copy(buf[:], "urn:uuid:")
encodeHex(buf[9:], uuid)
return string(buf[:])
}
func encodeHex(dst []byte, uuid UUID) {
hex.Encode(dst, uuid[:4])
dst[8] = '-'
hex.Encode(dst[9:13], uuid[4:6])
dst[13] = '-'
hex.Encode(dst[14:18], uuid[6:8])
dst[18] = '-'
hex.Encode(dst[19:23], uuid[8:10])
dst[23] = '-'
hex.Encode(dst[24:], uuid[10:])
}
// Variant returns the variant encoded in uuid.
func (uuid UUID) Variant() Variant {
switch {
case (uuid[8] & 0xc0) == 0x80:
return RFC4122
case (uuid[8] & 0xe0) == 0xc0:
return Microsoft
case (uuid[8] & 0xe0) == 0xe0:
return Future
default:
return Reserved
}
}
// Version returns the version of uuid.
func (uuid UUID) Version() Version {
return Version(uuid[6] >> 4)
}
func (v Version) String() string {
if v > 15 {
return fmt.Sprintf("BAD_VERSION_%d", v)
}
return fmt.Sprintf("VERSION_%d", v)
}
func (v Variant) String() string {
switch v {
case RFC4122:
return "RFC4122"
case Reserved:
return "Reserved"
case Microsoft:
return "Microsoft"
case Future:
return "Future"
case Invalid:
return "Invalid"
}
return fmt.Sprintf("BadVariant%d", int(v))
}
// SetRand sets the random number generator to r, which implements io.Reader.
// If r.Read returns an error when the package requests random data then
// a panic will be issued.
//
// Calling SetRand with nil sets the random number generator to the default
// generator.
func SetRand(r io.Reader) {
if r == nil {
rander = rand.Reader
return
}
rander = r
}
// EnableRandPool enables internal randomness pool used for Random
// (Version 4) UUID generation. The pool contains random bytes read from
// the random number generator on demand in batches. Enabling the pool
// may improve the UUID generation throughput significantly.
//
// Since the pool is stored on the Go heap, this feature may be a bad fit
// for security sensitive applications.
//
// Both EnableRandPool and DisableRandPool are not thread-safe and should
// only be called when there is no possibility that New or any other
// UUID Version 4 generation function will be called concurrently.
func EnableRandPool() {
poolEnabled = true
}
// DisableRandPool disables the randomness pool if it was previously
// enabled with EnableRandPool.
//
// Both EnableRandPool and DisableRandPool are not thread-safe and should
// only be called when there is no possibility that New or any other
// UUID Version 4 generation function will be called concurrently.
func DisableRandPool() {
poolEnabled = false
defer poolMu.Unlock()
poolMu.Lock()
poolPos = randPoolSize
}
// UUIDs is a slice of UUID types.
type UUIDs []UUID
// Strings returns a string slice containing the string form of each UUID in uuids.
func (uuids UUIDs) Strings() []string {
var uuidStrs = make([]string, len(uuids))
for i, uuid := range uuids {
uuidStrs[i] = uuid.String()
}
return uuidStrs
}

vendor/github.com/google/uuid/version1.go generated vendored Normal file

@@ -0,0 +1,44 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"encoding/binary"
)
// NewUUID returns a Version 1 UUID based on the current NodeID and clock
// sequence, and the current time. If the NodeID has not been set by SetNodeID
// or SetNodeInterface then it will be set automatically. If the NodeID cannot
// be set, NewUUID returns Nil. If the clock sequence has not been set by
// SetClockSequence then it will be set automatically. If GetTime fails to
// return the current time, NewUUID returns Nil and an error.
//
// In most cases, New should be used.
func NewUUID() (UUID, error) {
var uuid UUID
now, seq, err := GetTime()
if err != nil {
return uuid, err
}
timeLow := uint32(now & 0xffffffff)
timeMid := uint16((now >> 32) & 0xffff)
timeHi := uint16((now >> 48) & 0x0fff)
timeHi |= 0x1000 // Version 1
binary.BigEndian.PutUint32(uuid[0:], timeLow)
binary.BigEndian.PutUint16(uuid[4:], timeMid)
binary.BigEndian.PutUint16(uuid[6:], timeHi)
binary.BigEndian.PutUint16(uuid[8:], seq)
nodeMu.Lock()
if nodeID == zeroID {
setNodeInterface("")
}
copy(uuid[10:], nodeID[:])
nodeMu.Unlock()
return uuid, nil
}

vendor/github.com/google/uuid/version4.go generated vendored Normal file

@@ -0,0 +1,76 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import "io"
// New creates a new random UUID or panics. New is equivalent to
// the expression
//
// uuid.Must(uuid.NewRandom())
func New() UUID {
return Must(NewRandom())
}
// NewString creates a new random UUID and returns it as a string or panics.
// NewString is equivalent to the expression
//
// uuid.New().String()
func NewString() string {
return Must(NewRandom()).String()
}
// NewRandom returns a Random (Version 4) UUID.
//
// The strength of the UUIDs is based on the strength of the crypto/rand
// package.
//
// Uses the randomness pool if it was enabled with EnableRandPool.
//
// A note about uniqueness derived from the UUID Wikipedia entry:
//
// Randomly generated UUIDs have 122 random bits. One's annual risk of being
// hit by a meteorite is estimated to be one chance in 17 billion, that
// means the probability is about 0.00000000006 (6 × 10⁻¹¹),
// equivalent to the odds of creating a few tens of trillions of UUIDs in a
// year and having one duplicate.
func NewRandom() (UUID, error) {
if !poolEnabled {
return NewRandomFromReader(rander)
}
return newRandomFromPool()
}
// NewRandomFromReader returns a UUID based on bytes read from a given io.Reader.
func NewRandomFromReader(r io.Reader) (UUID, error) {
var uuid UUID
_, err := io.ReadFull(r, uuid[:])
if err != nil {
return Nil, err
}
uuid[6] = (uuid[6] & 0x0f) | 0x40 // Version 4
uuid[8] = (uuid[8] & 0x3f) | 0x80 // Variant is 10
return uuid, nil
}
func newRandomFromPool() (UUID, error) {
var uuid UUID
poolMu.Lock()
if poolPos == randPoolSize {
_, err := io.ReadFull(rander, pool[:])
if err != nil {
poolMu.Unlock()
return Nil, err
}
poolPos = 0
}
copy(uuid[:], pool[poolPos:(poolPos+16)])
poolPos += 16
poolMu.Unlock()
uuid[6] = (uuid[6] & 0x0f) | 0x40 // Version 4
uuid[8] = (uuid[8] & 0x3f) | 0x80 // Variant is 10
return uuid, nil
}

vendor/github.com/google/uuid/version6.go generated vendored Normal file

@@ -0,0 +1,56 @@
// Copyright 2023 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import "encoding/binary"
// UUID version 6 is a field-compatible version of UUIDv1, reordered for improved DB locality.
// It is expected that UUIDv6 will primarily be used in contexts where there are existing v1 UUIDs.
// Systems that do not involve legacy UUIDv1 SHOULD consider using UUIDv7 instead.
//
// see https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-03#uuidv6
//
// NewV6 returns a Version 6 UUID based on the current NodeID and clock
// sequence, and the current time. If the NodeID has not been set by SetNodeID
// or SetNodeInterface then it will be set automatically. If the NodeID cannot
// be set, NewV6 falls back to random NodeID bits automatically. If the clock
// sequence has not been set by SetClockSequence then it will be set
// automatically. If GetTime fails to return the current time, NewV6 returns
// Nil and an error.
func NewV6() (UUID, error) {
var uuid UUID
now, seq, err := GetTime()
if err != nil {
return uuid, err
}
/*
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| time_high |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| time_mid | time_low_and_version |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|clk_seq_hi_res | clk_seq_low | node (0-1) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| node (2-5) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
*/
binary.BigEndian.PutUint64(uuid[0:], uint64(now))
binary.BigEndian.PutUint16(uuid[8:], seq)
uuid[6] = 0x60 | (uuid[6] & 0x0F)
uuid[8] = 0x80 | (uuid[8] & 0x3F)
nodeMu.Lock()
if nodeID == zeroID {
setNodeInterface("")
}
copy(uuid[10:], nodeID[:])
nodeMu.Unlock()
return uuid, nil
}

vendor/github.com/google/uuid/version7.go generated vendored Normal file

@@ -0,0 +1,104 @@
// Copyright 2023 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"io"
)
// UUID version 7 features a time-ordered value field derived from the widely
// implemented and well known Unix Epoch timestamp source,
// the number of milliseconds since midnight 1 Jan 1970 UTC, leap seconds excluded.
// It also has improved entropy characteristics over versions 1 and 6.
//
// see https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-03#name-uuid-version-7
//
// Implementations SHOULD utilize UUID version 7 over UUID version 1 and 6 if possible.
//
// NewV7 returns a Version 7 UUID based on the current time (Unix Epoch).
// Uses the randomness pool if it was enabled with EnableRandPool.
// On error, NewV7 returns Nil and an error
func NewV7() (UUID, error) {
uuid, err := NewRandom()
if err != nil {
return uuid, err
}
makeV7(uuid[:])
return uuid, nil
}
// NewV7FromReader returns a Version 7 UUID based on the current time (Unix Epoch).
// It uses NewRandomFromReader to fill the random bits.
// On error, NewV7FromReader returns Nil and an error.
func NewV7FromReader(r io.Reader) (UUID, error) {
uuid, err := NewRandomFromReader(r)
if err != nil {
return uuid, err
}
makeV7(uuid[:])
return uuid, nil
}
// makeV7 fills in the 48-bit time (uuid[0] - uuid[5]) and sets the version to
// b0111 (uuid[6]); uuid[8] already has the right variant bits (10).
// See NewV7 and NewV7FromReader.
func makeV7(uuid []byte) {
/*
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| unix_ts_ms |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| unix_ts_ms | ver | rand_a (12 bit seq) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|var| rand_b |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| rand_b |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
*/
_ = uuid[15] // bounds check
t, s := getV7Time()
uuid[0] = byte(t >> 40)
uuid[1] = byte(t >> 32)
uuid[2] = byte(t >> 24)
uuid[3] = byte(t >> 16)
uuid[4] = byte(t >> 8)
uuid[5] = byte(t)
uuid[6] = 0x70 | (0x0F & byte(s>>8))
uuid[7] = byte(s)
}
// lastV7time is the last time we returned stored as:
//
// 52 bits of time in milliseconds since epoch
// 12 bits of (fractional nanoseconds) >> 8
var lastV7time int64
const nanoPerMilli = 1000000
// getV7Time returns the time in milliseconds and nanoseconds / 256.
// The returned (milli << 12 + seq) is guaranteed to be greater than
// (milli << 12 + seq) returned by any previous call to getV7Time.
func getV7Time() (milli, seq int64) {
timeMu.Lock()
defer timeMu.Unlock()
nano := timeNow().UnixNano()
milli = nano / nanoPerMilli
// Sequence number is between 0 and 3906 (nanoPerMilli>>8)
seq = (nano - milli*nanoPerMilli) >> 8
now := milli<<12 + seq
if now <= lastV7time {
now = lastV7time + 1
milli = now >> 12
seq = now & 0xfff
}
lastV7time = now
return milli, seq
}

vendor/github.com/inconshreveable/mousetrap/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2022 Alan Shreve (@inconshreveable)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/inconshreveable/mousetrap/README.md generated vendored Normal file

@@ -0,0 +1,23 @@
# mousetrap
mousetrap is a tiny library that answers a single question.
On a Windows machine, was the process invoked by someone double clicking on
the executable file while browsing in explorer?
### Motivation
Windows developers unfamiliar with command line tools will often "double-click"
the executable for a tool. Because most CLI tools print the help and then exit
when invoked without arguments, this is often very frustrating for those users.
mousetrap provides a way to detect these invocations so that you can provide
more helpful behavior and instructions on how to run the CLI tool. To see what
this looks like, both from an organizational and a technical perspective, see
https://inconshreveable.com/09-09-2014/sweat-the-small-stuff/
### The interface
The library exposes a single interface:
func StartedByExplorer() (bool)


@@ -0,0 +1,16 @@
//go:build !windows
// +build !windows
package mousetrap
// StartedByExplorer returns true if the program was invoked by the user
// double-clicking on the executable from explorer.exe
//
// It is conservative and returns false if any of the internal calls fail.
// It does not guarantee that the program was run from a terminal; it can only
// tell you whether it was launched from explorer.exe.
//
// On non-Windows platforms, it always returns false.
func StartedByExplorer() bool {
return false
}


@@ -0,0 +1,42 @@
package mousetrap
import (
"syscall"
"unsafe"
)
func getProcessEntry(pid int) (*syscall.ProcessEntry32, error) {
snapshot, err := syscall.CreateToolhelp32Snapshot(syscall.TH32CS_SNAPPROCESS, 0)
if err != nil {
return nil, err
}
defer syscall.CloseHandle(snapshot)
var procEntry syscall.ProcessEntry32
procEntry.Size = uint32(unsafe.Sizeof(procEntry))
if err = syscall.Process32First(snapshot, &procEntry); err != nil {
return nil, err
}
for {
if procEntry.ProcessID == uint32(pid) {
return &procEntry, nil
}
err = syscall.Process32Next(snapshot, &procEntry)
if err != nil {
return nil, err
}
}
}
// StartedByExplorer returns true if the program was invoked by the user double-clicking
// on the executable from explorer.exe
//
// It is conservative and returns false if any of the internal calls fail.
// It does not guarantee that the program was run from a terminal; it can only
// tell you whether it was launched from explorer.exe.
func StartedByExplorer() bool {
pe, err := getProcessEntry(syscall.Getppid())
if err != nil {
return false
}
return "explorer.exe" == syscall.UTF16ToString(pe.ExeFile[:])
}

vendor/github.com/mattn/go-isatty/LICENSE generated vendored Normal file

@@ -0,0 +1,9 @@
Copyright (c) Yasuhiro MATSUMOTO <mattn.jp@gmail.com>
MIT License (Expat)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/mattn/go-isatty/README.md generated vendored Normal file

@@ -0,0 +1,50 @@
# go-isatty
[![Godoc Reference](https://godoc.org/github.com/mattn/go-isatty?status.svg)](http://godoc.org/github.com/mattn/go-isatty)
[![Codecov](https://codecov.io/gh/mattn/go-isatty/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-isatty)
[![Coverage Status](https://coveralls.io/repos/github/mattn/go-isatty/badge.svg?branch=master)](https://coveralls.io/github/mattn/go-isatty?branch=master)
[![Go Report Card](https://goreportcard.com/badge/mattn/go-isatty)](https://goreportcard.com/report/mattn/go-isatty)
isatty for golang
## Usage
```go
package main
import (
"fmt"
"github.com/mattn/go-isatty"
"os"
)
func main() {
if isatty.IsTerminal(os.Stdout.Fd()) {
fmt.Println("Is Terminal")
} else if isatty.IsCygwinTerminal(os.Stdout.Fd()) {
fmt.Println("Is Cygwin/MSYS2 Terminal")
} else {
fmt.Println("Is Not Terminal")
}
}
```
## Installation
```
$ go get github.com/mattn/go-isatty
```
## License
MIT
## Author
Yasuhiro Matsumoto (a.k.a mattn)
## Thanks
* k-takata: base idea for IsCygwinTerminal
https://github.com/k-takata/go-iscygpty

vendor/github.com/mattn/go-isatty/doc.go generated vendored Normal file

@@ -0,0 +1,2 @@
// Package isatty implements an interface to isatty.
package isatty

vendor/github.com/mattn/go-isatty/go.test.sh generated vendored Normal file

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -e
echo "" > coverage.txt
for d in $(go list ./... | grep -v vendor); do
go test -race -coverprofile=profile.out -covermode=atomic "$d"
if [ -f profile.out ]; then
cat profile.out >> coverage.txt
rm profile.out
fi
done

vendor/github.com/mattn/go-isatty/isatty_bsd.go generated vendored Normal file

@@ -0,0 +1,20 @@
//go:build (darwin || freebsd || openbsd || netbsd || dragonfly || hurd) && !appengine && !tinygo
// +build darwin freebsd openbsd netbsd dragonfly hurd
// +build !appengine
// +build !tinygo
package isatty
import "golang.org/x/sys/unix"
// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
_, err := unix.IoctlGetTermios(int(fd), unix.TIOCGETA)
return err == nil
}
// IsCygwinTerminal returns true if the file descriptor is a Cygwin or MSYS2
// terminal. It is always false in this environment.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

vendor/github.com/mattn/go-isatty/isatty_others.go generated vendored Normal file

@@ -0,0 +1,17 @@
//go:build (appengine || js || nacl || tinygo || wasm) && !windows
// +build appengine js nacl tinygo wasm
// +build !windows
package isatty
// IsTerminal returns true if the file descriptor is a terminal. It is always
// false on js and App Engine classic, which are sandboxed PaaS environments.
func IsTerminal(fd uintptr) bool {
return false
}
// IsCygwinTerminal returns true if the file descriptor is a Cygwin or MSYS2
// terminal. It is always false in this environment.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

vendor/github.com/mattn/go-isatty/isatty_plan9.go generated vendored Normal file

@@ -0,0 +1,23 @@
//go:build plan9
// +build plan9
package isatty
import (
"syscall"
)
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
path, err := syscall.Fd2path(int(fd))
if err != nil {
return false
}
return path == "/dev/cons" || path == "/mnt/term/dev/cons"
}
// IsCygwinTerminal returns true if the file descriptor is a Cygwin or MSYS2
// terminal. It is always false in this environment.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

vendor/github.com/mattn/go-isatty/isatty_solaris.go generated vendored Normal file

@@ -0,0 +1,21 @@
//go:build solaris && !appengine
// +build solaris,!appengine
package isatty
import (
"golang.org/x/sys/unix"
)
// IsTerminal returns true if the given file descriptor is a terminal.
// see: https://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libc/port/gen/isatty.c
func IsTerminal(fd uintptr) bool {
_, err := unix.IoctlGetTermio(int(fd), unix.TCGETA)
return err == nil
}
// IsCygwinTerminal returns true if the file descriptor is a Cygwin or MSYS2
// terminal. It is always false in this environment.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

vendor/github.com/mattn/go-isatty/isatty_tcgets.go generated vendored Normal file

@@ -0,0 +1,20 @@
//go:build (linux || aix || zos) && !appengine && !tinygo
// +build linux aix zos
// +build !appengine
// +build !tinygo
package isatty
import "golang.org/x/sys/unix"
// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
_, err := unix.IoctlGetTermios(int(fd), unix.TCGETS)
return err == nil
}
// IsCygwinTerminal returns true if the file descriptor is a Cygwin or MSYS2
// terminal. It is always false in this environment.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

vendor/github.com/mattn/go-isatty/isatty_windows.go generated vendored Normal file

@@ -0,0 +1,125 @@
//go:build windows && !appengine
// +build windows,!appengine
package isatty
import (
"errors"
"strings"
"syscall"
"unicode/utf16"
"unsafe"
)
const (
objectNameInfo uintptr = 1
fileNameInfo = 2
fileTypePipe = 3
)
var (
kernel32 = syscall.NewLazyDLL("kernel32.dll")
ntdll = syscall.NewLazyDLL("ntdll.dll")
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
procGetFileInformationByHandleEx = kernel32.NewProc("GetFileInformationByHandleEx")
procGetFileType = kernel32.NewProc("GetFileType")
procNtQueryObject = ntdll.NewProc("NtQueryObject")
)
func init() {
// Check if GetFileInformationByHandleEx is available.
if procGetFileInformationByHandleEx.Find() != nil {
procGetFileInformationByHandleEx = nil
}
}
// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, fd, uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0
}
// isCygwinPipeName reports whether the pipe name is one used by a Cygwin/MSYS2 pty.
// A Cygwin/MSYS2 pty has a name of the form:
// \{cygwin,msys}-XXXXXXXXXXXXXXXX-ptyN-{from,to}-master
func isCygwinPipeName(name string) bool {
token := strings.Split(name, "-")
if len(token) < 5 {
return false
}
if token[0] != `\msys` &&
token[0] != `\cygwin` &&
token[0] != `\Device\NamedPipe\msys` &&
token[0] != `\Device\NamedPipe\cygwin` {
return false
}
if token[1] == "" {
return false
}
if !strings.HasPrefix(token[2], "pty") {
return false
}
if token[3] != `from` && token[3] != `to` {
return false
}
if token[4] != "master" {
return false
}
return true
}
// getFileNameByHandle uses the undocumented ntdll NtQueryObject call to get the
// full file name from a file handle. GetFileInformationByHandleEx is not
// available before Windows Vista, so this serves as a workaround for older
// systems; it also works on Windows Vista through 10.
// See https://stackoverflow.com/a/18792477 for details.
func getFileNameByHandle(fd uintptr) (string, error) {
if procNtQueryObject == nil {
return "", errors.New("ntdll.dll: NtQueryObject not supported")
}
var buf [4 + syscall.MAX_PATH]uint16
var result int
r, _, e := syscall.Syscall6(procNtQueryObject.Addr(), 5,
fd, objectNameInfo, uintptr(unsafe.Pointer(&buf)), uintptr(2*len(buf)), uintptr(unsafe.Pointer(&result)), 0)
if r != 0 {
return "", e
}
return string(utf16.Decode(buf[4 : 4+buf[0]/2])), nil
}
// IsCygwinTerminal returns true if the file descriptor is a Cygwin or MSYS2
// terminal.
func IsCygwinTerminal(fd uintptr) bool {
if procGetFileInformationByHandleEx == nil {
name, err := getFileNameByHandle(fd)
if err != nil {
return false
}
return isCygwinPipeName(name)
}
// Cygwin/msys's pty is a pipe.
ft, _, e := syscall.Syscall(procGetFileType.Addr(), 1, fd, 0, 0)
if ft != fileTypePipe || e != 0 {
return false
}
var buf [2 + syscall.MAX_PATH]uint16
r, _, e := syscall.Syscall6(procGetFileInformationByHandleEx.Addr(),
4, fd, fileNameInfo, uintptr(unsafe.Pointer(&buf)),
uintptr(len(buf)*2), 0, 0)
if r == 0 || e != 0 {
return false
}
l := *(*uint32)(unsafe.Pointer(&buf))
return isCygwinPipeName(string(utf16.Decode(buf[2 : 2+l/2])))
}
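The pipe-name check above is pure string parsing, so its logic can be sanity-checked on any platform. Below is a hypothetical stdlib-only re-derivation (the function name `cygwinPipeName` and the sample names are illustrative, not part of the vendored package):

```go
package main

import (
	"fmt"
	"strings"
)

// cygwinPipeName reproduces the token checks from isCygwinPipeName:
// names look like \{cygwin,msys}-XXXX-ptyN-{from,to}-master.
func cygwinPipeName(name string) bool {
	token := strings.Split(name, "-")
	if len(token) < 5 {
		return false
	}
	switch token[0] {
	case `\msys`, `\cygwin`, `\Device\NamedPipe\msys`, `\Device\NamedPipe\cygwin`:
		// recognized prefix; fall through to the remaining token checks
	default:
		return false
	}
	return token[1] != "" &&
		strings.HasPrefix(token[2], "pty") &&
		(token[3] == "from" || token[3] == "to") &&
		token[4] == "master"
}

func main() {
	fmt.Println(cygwinPipeName(`\msys-1888ae32e00d56aa-pty0-to-master`))           // true
	fmt.Println(cygwinPipeName(`\Device\NamedPipe\cygwin-deadbeef-pty1-from-master`)) // true
	fmt.Println(cygwinPipeName(`\pipe\not-a-pty`))                                  // false
}
```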

vendor/github.com/ncruces/go-strftime/.gitignore generated vendored Normal file

@@ -0,0 +1,15 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/

vendor/github.com/ncruces/go-strftime/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022 Nuno Cruces
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

vendor/github.com/ncruces/go-strftime/README.md generated vendored Normal file

@@ -0,0 +1,5 @@
# `strftime`/`strptime` compatible time formatting and parsing for Go
[![Go Reference](https://pkg.go.dev/badge/image)](https://pkg.go.dev/github.com/ncruces/go-strftime)
[![Go Report](https://goreportcard.com/badge/github.com/ncruces/go-strftime)](https://goreportcard.com/report/github.com/ncruces/go-strftime)
[![Go Coverage](https://github.com/ncruces/go-strftime/wiki/coverage.svg)](https://raw.githack.com/wiki/ncruces/go-strftime/coverage.html)

vendor/github.com/ncruces/go-strftime/parser.go generated vendored Normal file

@@ -0,0 +1,107 @@
package strftime
import "unicode/utf8"
type parser struct {
format func(spec, flag byte) error
literal func(byte) error
}
func (p *parser) parse(fmt string) error {
const (
initial = iota
percent
flagged
modified
)
var flag, modifier byte
var err error
state := initial
start := 0
for i, b := range []byte(fmt) {
switch state {
default:
if b == '%' {
state = percent
start = i
continue
}
err = p.literal(b)
case percent:
if b == '-' || b == ':' {
state = flagged
flag = b
continue
}
if b == 'E' || b == 'O' {
state = modified
modifier = b
flag = 0
continue
}
err = p.format(b, 0)
state = initial
case flagged:
if b == 'E' || b == 'O' {
state = modified
modifier = b
continue
}
err = p.format(b, flag)
state = initial
case modified:
if okModifier(modifier, b) {
err = p.format(b, flag)
} else {
err = p.literals(fmt[start : i+1])
}
state = initial
}
if err != nil {
if err, ok := err.(formatError); ok {
err.setDirective(fmt, start, i)
return err
}
return err
}
}
if state != initial {
return p.literals(fmt[start:])
}
return nil
}
func (p *parser) literals(literal string) error {
for _, b := range []byte(literal) {
if err := p.literal(b); err != nil {
return err
}
}
return nil
}
type literalErr string
func (e literalErr) Error() string {
return "strftime: unsupported literal: " + string(e)
}
type formatError struct {
message string
directive string
}
func (e formatError) Error() string {
return "strftime: unsupported directive: " + e.directive + " " + e.message
}
func (e *formatError) setDirective(str string, i, j int) {
_, n := utf8.DecodeRuneInString(str[j:])
e.directive = str[i : j+n]
}

vendor/github.com/ncruces/go-strftime/pkg.go generated vendored Normal file

@@ -0,0 +1,96 @@
/*
Package strftime provides strftime/strptime compatible time formatting and parsing.
The following formatting specifiers are available:
Date (Year, Month, Day):
%Y - Year with century (can be negative, at least 4 digits)
-0001, 0000, 1995, 2009, 14292, etc.
%C - year / 100 (round down, 20 in 2009)
%y - year % 100 (00..99)
%m - Month of the year, zero-padded (01..12)
%-m no-padded (1..12)
%B - Full month name (January)
%b - Abbreviated month name (Jan)
%h - Equivalent to %b
%d - Day of the month, zero-padded (01..31)
%-d no-padded (1..31)
%e - Day of the month, blank-padded ( 1..31)
%j - Day of the year (001..366)
%-j no-padded (1..366)
Time (Hour, Minute, Second, Subsecond):
%H - Hour of the day, 24-hour clock, zero-padded (00..23)
%-H no-padded (0..23)
%k - Hour of the day, 24-hour clock, blank-padded ( 0..23)
%I - Hour of the day, 12-hour clock, zero-padded (01..12)
%-I no-padded (1..12)
%l - Hour of the day, 12-hour clock, blank-padded ( 1..12)
%P - Meridian indicator, lowercase (am or pm)
%p - Meridian indicator, uppercase (AM or PM)
%M - Minute of the hour (00..59)
%-M no-padded (0..59)
%S - Second of the minute (00..60)
%-S no-padded (0..60)
%L - Millisecond of the second (000..999)
%f - Microsecond of the second (000000..999999)
%N - Nanosecond of the second (000000000..999999999)
Time zone:
%z - Time zone as hour and minute offset from UTC (e.g. +0900)
%:z - hour and minute offset from UTC with a colon (e.g. +09:00)
%Z - Time zone abbreviation (e.g. MST)
Weekday:
%A - Full weekday name (Sunday)
%a - Abbreviated weekday name (Sun)
%u - Day of the week (Monday is 1, 1..7)
%w - Day of the week (Sunday is 0, 0..6)
ISO 8601 week-based year and week number:
Week 1 of YYYY starts with a Monday and includes YYYY-01-04.
The days in the year before the first week are in the last week of
the previous year.
%G - Week-based year
%g - Last 2 digits of the week-based year (00..99)
%V - Week number of the week-based year (01..53)
%-V no-padded (1..53)
Week number:
Week 1 of YYYY starts with a Sunday or Monday (according to %U or %W).
The days in the year before the first week are in week 0.
%U - Week number of the year. The week starts with Sunday. (00..53)
%-U no-padded (0..53)
%W - Week number of the year. The week starts with Monday. (00..53)
%-W no-padded (0..53)
Seconds since the Unix Epoch:
%s - Number of seconds since 1970-01-01 00:00:00 UTC.
%Q - Number of milliseconds since 1970-01-01 00:00:00 UTC.
Literal string:
%n - Newline character (\n)
%t - Tab character (\t)
%% - Literal % character
Combination:
%c - date and time (%a %b %e %T %Y)
%D - Date (%m/%d/%y)
%F - ISO 8601 date format (%Y-%m-%d)
%v - VMS date (%e-%b-%Y)
%x - Same as %D
%X - Same as %T
%r - 12-hour time (%I:%M:%S %p)
%R - 24-hour time (%H:%M)
%T - 24-hour time (%H:%M:%S)
%+ - date(1) (%a %b %e %H:%M:%S %Z %Y)
The modifiers “E” and “O” are ignored.
*/
package strftime

vendor/github.com/ncruces/go-strftime/specifiers.go generated vendored Normal file

@@ -0,0 +1,241 @@
package strftime
import "strings"
// https://strftime.org/
func goLayout(spec, flag byte, parsing bool) string {
switch spec {
default:
return ""
case 'B':
return "January"
case 'b', 'h':
return "Jan"
case 'm':
if flag == '-' || parsing {
return "1"
}
return "01"
case 'A':
return "Monday"
case 'a':
return "Mon"
case 'e':
return "_2"
case 'd':
if flag == '-' || parsing {
return "2"
}
return "02"
case 'j':
if flag == '-' {
if parsing {
return "__2"
}
return ""
}
return "002"
case 'I':
if flag == '-' || parsing {
return "3"
}
return "03"
case 'H':
if flag == '-' && !parsing {
return ""
}
return "15"
case 'M':
if flag == '-' || parsing {
return "4"
}
return "04"
case 'S':
if flag == '-' || parsing {
return "5"
}
return "05"
case 'y':
return "06"
case 'Y':
return "2006"
case 'p':
return "PM"
case 'P':
return "pm"
case 'Z':
return "MST"
case 'z':
if flag == ':' {
if parsing {
return "Z07:00"
}
return "-07:00"
}
if parsing {
return "Z0700"
}
return "-0700"
case '+':
if parsing {
return "Mon Jan _2 15:4:5 MST 2006"
}
return "Mon Jan _2 15:04:05 MST 2006"
case 'c':
if parsing {
return "Mon Jan _2 15:4:5 2006"
}
return "Mon Jan _2 15:04:05 2006"
case 'v':
return "_2-Jan-2006"
case 'F':
if parsing {
return "2006-1-2"
}
return "2006-01-02"
case 'D', 'x':
if parsing {
return "1/2/06"
}
return "01/02/06"
case 'r':
if parsing {
return "3:4:5 PM"
}
return "03:04:05 PM"
case 'T', 'X':
if parsing {
return "15:4:5"
}
return "15:04:05"
case 'R':
if parsing {
return "15:4"
}
return "15:04"
case '%':
return "%"
case 't':
return "\t"
case 'n':
return "\n"
}
}
// https://nsdateformatter.com/
func uts35Pattern(spec, flag byte) string {
switch spec {
default:
return ""
case 'B':
return "MMMM"
case 'b', 'h':
return "MMM"
case 'm':
if flag == '-' {
return "M"
}
return "MM"
case 'A':
return "EEEE"
case 'a':
return "E"
case 'd':
if flag == '-' {
return "d"
}
return "dd"
case 'j':
if flag == '-' {
return "D"
}
return "DDD"
case 'I':
if flag == '-' {
return "h"
}
return "hh"
case 'H':
if flag == '-' {
return "H"
}
return "HH"
case 'M':
if flag == '-' {
return "m"
}
return "mm"
case 'S':
if flag == '-' {
return "s"
}
return "ss"
case 'y':
return "yy"
case 'Y':
return "yyyy"
case 'g':
return "YY"
case 'G':
return "YYYY"
case 'V':
if flag == '-' {
return "w"
}
return "ww"
case 'p':
return "a"
case 'Z':
return "zzz"
case 'z':
if flag == ':' {
return "xxx"
}
return "xx"
case 'L':
return "SSS"
case 'f':
return "SSSSSS"
case 'N':
return "SSSSSSSSS"
case '+':
return "E MMM d HH:mm:ss zzz yyyy"
case 'c':
return "E MMM d HH:mm:ss yyyy"
case 'v':
return "d-MMM-yyyy"
case 'F':
return "yyyy-MM-dd"
case 'D', 'x':
return "MM/dd/yy"
case 'r':
return "hh:mm:ss a"
case 'T', 'X':
return "HH:mm:ss"
case 'R':
return "HH:mm"
case '%':
return "%"
case 't':
return "\t"
case 'n':
return "\n"
}
}
// http://man.he.net/man3/strftime
func okModifier(mod, spec byte) bool {
if mod == 'E' {
return strings.Contains("cCxXyY", string(spec))
}
if mod == 'O' {
return strings.Contains("deHImMSuUVwWy", string(spec))
}
return false
}
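The modifier check above is a simple whitelist: `E` is valid only before locale-era date specifiers, `O` only before numeric ones, and anything else makes the parser fall back to emitting the directive as literals. A self-contained mirror of that table (reproduced from the function above for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// okModifier mirrors the whitelist in specifiers.go: E applies to a
// handful of date specifiers, O to numeric ones; anything else is invalid.
func okModifier(mod, spec byte) bool {
	if mod == 'E' {
		return strings.ContainsRune("cCxXyY", rune(spec))
	}
	if mod == 'O' {
		return strings.ContainsRune("deHImMSuUVwWy", rune(spec))
	}
	return false
}

func main() {
	fmt.Println(okModifier('E', 'Y')) // true: %EY is accepted
	fmt.Println(okModifier('O', 'H')) // true: %OH is accepted
	fmt.Println(okModifier('E', 'H')) // false: E does not modify %H
}
```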

vendor/github.com/ncruces/go-strftime/strftime.go generated vendored Normal file

@@ -0,0 +1,346 @@
package strftime
import (
"bytes"
"strconv"
"time"
)
// Format returns a textual representation of the time value
// formatted according to the strftime format specification.
func Format(fmt string, t time.Time) string {
buf := buffer(fmt)
return string(AppendFormat(buf, fmt, t))
}
// AppendFormat is like Format, but appends the textual representation
// to dst and returns the extended buffer.
func AppendFormat(dst []byte, fmt string, t time.Time) []byte {
var parser parser
parser.literal = func(b byte) error {
dst = append(dst, b)
return nil
}
parser.format = func(spec, flag byte) error {
switch spec {
case 'A':
dst = append(dst, t.Weekday().String()...)
return nil
case 'a':
dst = append(dst, t.Weekday().String()[:3]...)
return nil
case 'B':
dst = append(dst, t.Month().String()...)
return nil
case 'b', 'h':
dst = append(dst, t.Month().String()[:3]...)
return nil
case 'm':
dst = appendInt2(dst, int(t.Month()), flag)
return nil
case 'd':
dst = appendInt2(dst, int(t.Day()), flag)
return nil
case 'e':
dst = appendInt2(dst, int(t.Day()), ' ')
return nil
case 'I':
dst = append12Hour(dst, t, flag)
return nil
case 'l':
dst = append12Hour(dst, t, ' ')
return nil
case 'H':
dst = appendInt2(dst, t.Hour(), flag)
return nil
case 'k':
dst = appendInt2(dst, t.Hour(), ' ')
return nil
case 'M':
dst = appendInt2(dst, t.Minute(), flag)
return nil
case 'S':
dst = appendInt2(dst, t.Second(), flag)
return nil
case 'L':
dst = append(dst, t.Format(".000")[1:]...)
return nil
case 'f':
dst = append(dst, t.Format(".000000")[1:]...)
return nil
case 'N':
dst = append(dst, t.Format(".000000000")[1:]...)
return nil
case 'y':
dst = t.AppendFormat(dst, "06")
return nil
case 'Y':
dst = t.AppendFormat(dst, "2006")
return nil
case 'C':
dst = t.AppendFormat(dst, "2006")
dst = dst[:len(dst)-2]
return nil
case 'U':
dst = appendWeekNumber(dst, t, flag, true)
return nil
case 'W':
dst = appendWeekNumber(dst, t, flag, false)
return nil
case 'V':
_, w := t.ISOWeek()
dst = appendInt2(dst, w, flag)
return nil
case 'g':
y, _ := t.ISOWeek()
dst = year(y).AppendFormat(dst, "06")
return nil
case 'G':
y, _ := t.ISOWeek()
dst = year(y).AppendFormat(dst, "2006")
return nil
case 's':
dst = strconv.AppendInt(dst, t.Unix(), 10)
return nil
case 'Q':
dst = strconv.AppendInt(dst, t.UnixMilli(), 10)
return nil
case 'w':
w := t.Weekday()
dst = appendInt1(dst, int(w))
return nil
case 'u':
if w := t.Weekday(); w == 0 {
dst = append(dst, '7')
} else {
dst = appendInt1(dst, int(w))
}
return nil
case 'j':
if flag == '-' {
dst = strconv.AppendInt(dst, int64(t.YearDay()), 10)
} else {
dst = t.AppendFormat(dst, "002")
}
return nil
}
if layout := goLayout(spec, flag, false); layout != "" {
dst = t.AppendFormat(dst, layout)
return nil
}
dst = append(dst, '%')
if flag != 0 {
dst = append(dst, flag)
}
dst = append(dst, spec)
return nil
}
parser.parse(fmt)
return dst
}
// Parse converts a textual representation of time to the time value it represents
// according to the strptime format specification.
//
// The following specifiers are not supported for parsing:
//
// %g %k %l %s %u %w %C %G %Q %U %V %W
//
// You must also avoid digits and these letter sequences
// in fmt literals:
//
// Jan Mon MST PM pm
func Parse(fmt, value string) (time.Time, error) {
pattern, err := layout(fmt, true)
if err != nil {
return time.Time{}, err
}
return time.Parse(pattern, value)
}
// Layout converts a strftime format specification
// to a Go time pattern specification.
//
// The following specifiers are not supported by Go patterns:
//
// %f %g %k %l %s %u %w %C %G %L %N %Q %U %V %W
//
// You must also avoid digits and these letter sequences
// in fmt literals:
//
// Jan Mon MST PM pm
func Layout(fmt string) (string, error) {
return layout(fmt, false)
}
func layout(fmt string, parsing bool) (string, error) {
dst := buffer(fmt)
var parser parser
parser.literal = func(b byte) error {
if '0' <= b && b <= '9' {
return literalErr(b)
}
dst = append(dst, b)
if b == 'M' || b == 'T' || b == 'm' || b == 'n' {
switch {
case bytes.HasSuffix(dst, []byte("Jan")):
return literalErr("Jan")
case bytes.HasSuffix(dst, []byte("Mon")):
return literalErr("Mon")
case bytes.HasSuffix(dst, []byte("MST")):
return literalErr("MST")
case bytes.HasSuffix(dst, []byte("PM")):
return literalErr("PM")
case bytes.HasSuffix(dst, []byte("pm")):
return literalErr("pm")
}
}
return nil
}
parser.format = func(spec, flag byte) error {
if layout := goLayout(spec, flag, parsing); layout != "" {
dst = append(dst, layout...)
return nil
}
switch spec {
default:
return formatError{}
case 'L', 'f', 'N':
if bytes.HasSuffix(dst, []byte(".")) || bytes.HasSuffix(dst, []byte(",")) {
switch spec {
default:
dst = append(dst, "000"...)
case 'f':
dst = append(dst, "000000"...)
case 'N':
dst = append(dst, "000000000"...)
}
return nil
}
return formatError{message: "must follow '.' or ','"}
}
}
if err := parser.parse(fmt); err != nil {
return "", err
}
return string(dst), nil
}
// UTS35 converts a strftime format specification
// to a Unicode Technical Standard #35 Date Format Pattern.
//
// The following specifiers are not supported by UTS35:
//
// %e %k %l %u %w %C %P %U %W
func UTS35(fmt string) (string, error) {
const quote = '\''
var quoted bool
dst := buffer(fmt)
var parser parser
parser.literal = func(b byte) error {
if b == quote {
dst = append(dst, quote, quote)
return nil
}
if !quoted && ('a' <= b && b <= 'z' || 'A' <= b && b <= 'Z') {
dst = append(dst, quote)
quoted = true
}
dst = append(dst, b)
return nil
}
parser.format = func(spec, flag byte) error {
if quoted {
dst = append(dst, quote)
quoted = false
}
if pattern := uts35Pattern(spec, flag); pattern != "" {
dst = append(dst, pattern...)
return nil
}
return formatError{}
}
if err := parser.parse(fmt); err != nil {
return "", err
}
if quoted {
dst = append(dst, quote)
}
return string(dst), nil
}
func buffer(format string) (buf []byte) {
const bufSize = 64
max := len(format) + 10
if max < bufSize {
var b [bufSize]byte
buf = b[:0]
} else {
buf = make([]byte, 0, max)
}
return
}
func year(y int) time.Time {
return time.Date(y, time.January, 1, 0, 0, 0, 0, time.UTC)
}
func appendWeekNumber(dst []byte, t time.Time, flag byte, sunday bool) []byte {
offset := int(t.Weekday())
if sunday {
offset = 6 - offset
} else if offset != 0 {
offset = 7 - offset
}
return appendInt2(dst, (t.YearDay()+offset)/7, flag)
}
func append12Hour(dst []byte, t time.Time, flag byte) []byte {
h := t.Hour()
if h == 0 {
h = 12
} else if h > 12 {
h -= 12
}
return appendInt2(dst, h, flag)
}
func appendInt1(dst []byte, i int) []byte {
return append(dst, byte('0'+i))
}
func appendInt2(dst []byte, i int, flag byte) []byte {
if flag == 0 || i >= 10 {
return append(dst, smallsString[i*2:i*2+2]...)
}
if flag == ' ' {
dst = append(dst, flag)
}
return appendInt1(dst, i)
}
const smallsString = "" +
"00010203040506070809" +
"10111213141516171819" +
"20212223242526272829" +
"30313233343536373839" +
"40414243444546474849" +
"50515253545556575859" +
"60616263646566676869" +
"70717273747576777879" +
"80818283848586878889" +
"90919293949596979899"
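The `smallsString` table lets `appendInt2` emit any zero-padded two-digit value with a single slice instead of a `strconv` call: for `0 <= i < 100`, the digits live at `smallsString[i*2 : i*2+2]`. A minimal stdlib-only demonstration of the same trick (the helper name `twoDigits` is illustrative, not from the package):

```go
package main

import "fmt"

// smalls is the same 0..99 lookup table as smallsString above.
const smalls = "" +
	"00010203040506070809" +
	"10111213141516171819" +
	"20212223242526272829" +
	"30313233343536373839" +
	"40414243444546474849" +
	"50515253545556575859" +
	"60616263646566676869" +
	"70717273747576777879" +
	"80818283848586878889" +
	"90919293949596979899"

// twoDigits returns the zero-padded decimal form of i (0 <= i < 100)
// by slicing the table, exactly as appendInt2 does.
func twoDigits(i int) string {
	return smalls[i*2 : i*2+2]
}

func main() {
	fmt.Println(twoDigits(7))  // 07
	fmt.Println(twoDigits(42)) // 42
}
```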

vendor/github.com/pelletier/go-toml/v2/.dockerignore generated vendored Normal file

@@ -0,0 +1,2 @@
cmd/tomll/tomll
cmd/tomljson/tomljson


@@ -0,0 +1,4 @@
* text=auto
benchmark/benchmark.toml text eol=lf
testdata/** text eol=lf

vendor/github.com/pelletier/go-toml/v2/.gitignore generated vendored Normal file

@@ -0,0 +1,8 @@
test_program/test_program_bin
fuzz/
cmd/tomll/tomll
cmd/tomljson/tomljson
cmd/tomltestgen/tomltestgen
dist
tests/
test-results

vendor/github.com/pelletier/go-toml/v2/.golangci.toml generated vendored Normal file

@@ -0,0 +1,76 @@
version = "2"
[linters]
default = "none"
enable = [
"asciicheck",
"bodyclose",
"dogsled",
"dupl",
"durationcheck",
"errcheck",
"errorlint",
"exhaustive",
"forbidigo",
"gochecknoinits",
"goconst",
"gocritic",
"godoclint",
"goheader",
"gomodguard",
"goprintffuncname",
"gosec",
"govet",
"importas",
"ineffassign",
"lll",
"makezero",
"mirror",
"misspell",
"nakedret",
"nilerr",
"noctx",
"nolintlint",
"perfsprint",
"prealloc",
"predeclared",
"revive",
"rowserrcheck",
"sqlclosecheck",
"staticcheck",
"thelper",
"tparallel",
"unconvert",
"unparam",
"unused",
"usetesting",
"wastedassign",
"whitespace",
]
[linters.settings.exhaustive]
default-signifies-exhaustive = true
[linters.settings.lll]
line-length = 150
[[linters.exclusions.rules]]
path = ".test.go"
linters = ["goconst", "gosec"]
[[linters.exclusions.rules]]
path = "main.go"
linters = ["forbidigo"]
[[linters.exclusions.rules]]
path = "internal"
linters = ["revive"]
text = "(exported|indent-error-flow): "
[formatters]
enable = [
"gci",
"gofmt",
"gofumpt",
"goimports",
]

vendor/github.com/pelletier/go-toml/v2/.goreleaser.yaml generated vendored Normal file

@@ -0,0 +1,124 @@
version: 2
before:
  hooks:
    - go mod tidy
    - go fmt ./...
    - go test ./...
builds:
  - id: tomll
    main: ./cmd/tomll
    binary: tomll
    env:
      - CGO_ENABLED=0
    flags:
      - -trimpath
    ldflags:
      - -X main.version={{.Version}} -X main.commit={{.Commit}} -X main.date={{.CommitDate}}
    mod_timestamp: '{{ .CommitTimestamp }}'
    targets:
      - linux_amd64
      - linux_arm64
      - linux_arm
      - linux_riscv64
      - windows_amd64
      - windows_arm64
      - darwin_amd64
      - darwin_arm64
  - id: tomljson
    main: ./cmd/tomljson
    binary: tomljson
    env:
      - CGO_ENABLED=0
    flags:
      - -trimpath
    ldflags:
      - -X main.version={{.Version}} -X main.commit={{.Commit}} -X main.date={{.CommitDate}}
    mod_timestamp: '{{ .CommitTimestamp }}'
    targets:
      - linux_amd64
      - linux_arm64
      - linux_arm
      - linux_riscv64
      - windows_amd64
      - windows_arm64
      - darwin_amd64
      - darwin_arm64
  - id: jsontoml
    main: ./cmd/jsontoml
    binary: jsontoml
    env:
      - CGO_ENABLED=0
    flags:
      - -trimpath
    ldflags:
      - -X main.version={{.Version}} -X main.commit={{.Commit}} -X main.date={{.CommitDate}}
    mod_timestamp: '{{ .CommitTimestamp }}'
    targets:
      - linux_amd64
      - linux_arm64
      - linux_riscv64
      - linux_arm
      - windows_amd64
      - windows_arm64
      - darwin_amd64
      - darwin_arm64
universal_binaries:
  - id: tomll
    replace: true
    name_template: tomll
  - id: tomljson
    replace: true
    name_template: tomljson
  - id: jsontoml
    replace: true
    name_template: jsontoml
archives:
  - id: jsontoml
    format: tar.xz
    builds:
      - jsontoml
    files:
      - none*
    name_template: "{{ .Binary }}_{{.Version}}_{{ .Os }}_{{ .Arch }}"
  - id: tomljson
    format: tar.xz
    builds:
      - tomljson
    files:
      - none*
    name_template: "{{ .Binary }}_{{.Version}}_{{ .Os }}_{{ .Arch }}"
  - id: tomll
    format: tar.xz
    builds:
      - tomll
    files:
      - none*
    name_template: "{{ .Binary }}_{{.Version}}_{{ .Os }}_{{ .Arch }}"
dockers:
  - id: tools
    goos: linux
    goarch: amd64
    ids:
      - jsontoml
      - tomljson
      - tomll
    image_templates:
      - "ghcr.io/pelletier/go-toml:latest"
      - "ghcr.io/pelletier/go-toml:{{ .Tag }}"
      - "ghcr.io/pelletier/go-toml:v{{ .Major }}"
    skip_push: false
checksum:
  name_template: 'sha256sums.txt'
snapshot:
  version_template: "{{ incpatch .Version }}-next"
release:
  github:
    owner: pelletier
    name: go-toml
  draft: true
  prerelease: auto
  mode: replace
changelog:
  use: github-native
announce:
  skip: true

vendor/github.com/pelletier/go-toml/v2/AGENTS.md generated vendored Normal file

@@ -0,0 +1,64 @@
# Agent Guidelines for go-toml
This file provides guidelines for AI agents contributing to go-toml. All agents must follow these rules derived from [CONTRIBUTING.md](./CONTRIBUTING.md).
## Project Overview
go-toml is a TOML library for Go. The goal is to provide an easy-to-use and efficient TOML implementation that gets the job done without getting in the way.
## Code Change Rules
### Backward Compatibility
- **No backward-incompatible changes** unless explicitly discussed and approved
- Avoid breaking people's programs unless absolutely necessary
### Testing Requirements
- **All bug fixes must include regression tests**
- **All new code must be tested**
- Run tests before submitting: `go test -race ./...`
- Test coverage must not decrease. Check with:
```bash
go test -covermode=atomic -coverprofile=coverage.out
go tool cover -func=coverage.out
```
- All lines of code touched by changes should be covered by tests
### Performance Requirements
- go-toml aims to stay efficient; avoid performance regressions
- Run benchmarks to verify: `go test ./... -bench=. -count=10`
- Compare results using [benchstat](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat)
### Documentation
- New features or feature extensions must include documentation
- Documentation lives in [README.md](./README.md) and throughout source code
### Code Style
- Follow existing code format and structure
- Code must pass `go fmt`
- Code must pass linting with the same golangci-lint version as CI (see version in `.github/workflows/lint.yml`):
```bash
# Install specific version (check lint.yml for current version)
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/HEAD/install.sh | sh -s -- -b $(go env GOPATH)/bin <version>
# Run linter
golangci-lint run ./...
```
### Commit Messages
- Commit messages must explain **why** the change is needed
- Keep messages clear and informative even if details are in the PR description
## Pull Request Checklist
Before submitting:
1. Tests pass (`go test -race ./...`)
2. No backward-incompatible changes (unless discussed)
3. Relevant documentation added/updated
4. No performance regression (verify with benchmarks)
5. Title is clear and understandable for changelog

Some files were not shown because too many files have changed in this diff.