13 Commits

33e0f9b8bd Fix SSO clients page template wrapper
Page templates must define a named block that invokes {{template "base"}}
(e.g. {{define "sso_clients"}}{{template "base" .}}{{end}}). Without this,
ExecuteTemplate cannot find the named template and returns an error.

Also add logging to the SSO clients handler for easier diagnosis.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 07:59:06 -07:00
44a1b9ad3a Register SSO client templates
The sso_clients page and sso_client_row fragment were missing from the
template registration lists, causing a 500 on GET /sso-clients.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 07:49:45 -07:00
df7773229c Move SSO clients from config to database
- Add sso_clients table (migration 000010) with client_id, redirect_uri,
  tags (JSON), enabled flag, and audit timestamps
- Add SSOClient model struct and audit events
- Implement DB CRUD with 10 unit tests
- Add REST API: GET/POST/PATCH/DELETE /v1/sso/clients (policy-gated)
- Add gRPC SSOClientService with 5 RPCs (admin-only)
- Add mciasctl sso list/create/get/update/delete commands
- Add web UI admin page at /sso-clients with HTMX create/toggle/delete
- Migrate handleSSOAuthorize and handleSSOTokenExchange to use DB
- Remove SSOConfig, SSOClient struct, lookup methods from config
- Simplify: client_id = service_name for policy evaluation

Security:
- SSO client CRUD is admin-only (policy-gated REST, requireAdmin gRPC)
- redirect_uri must use https:// (validated at DB layer)
- Disabled clients are rejected at both authorize and token exchange
- All mutations write audit events

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 23:47:53 -07:00
4430ce38a4 Allow htmx swap styles in CSP
Add 'unsafe-hashes' with the htmx swap indicator style hash to the
style-src CSP directive. Without this, htmx swap transitions are
blocked by CSP, which can prevent HX-Redirect from being processed
on the SSO login flow.

Security:
- Uses 'unsafe-hashes' (not 'unsafe-inline') so only the specific
  htmx style hash is permitted, not arbitrary inline styles

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 17:43:53 -07:00
4ed2cecec5 Fix SSO redirect failing with htmx login form
The login form uses hx-post, so htmx sends the POST via fetch. A 302
redirect to the cross-origin service callback URL fails silently because
fetch follows the redirect but gets blocked by CORS. Use HX-Redirect
header instead, which tells htmx to perform a full page navigation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 17:32:44 -07:00
9385c3846d Fix protobuf runtime panic on startup
Regenerated protobuf stubs and bumped google.golang.org/protobuf from
v1.36.10 to v1.36.11 to match protoc-gen-go v1.36.11. The version
mismatch caused a panic in filedesc.unmarshalSeed during init().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 15:55:15 -07:00
e450ade988 Add SSO authorization code flow (Phase 1)
MCIAS now acts as an SSO provider for downstream services. Services
redirect users to /sso/authorize, MCIAS handles login (password, TOTP,
or passkey), then redirects back with an authorization code that the
service exchanges for a JWT via POST /v1/sso/token.

- Add SSO client registry to config (client_id, redirect_uri,
  service_name, tags) with validation
- Add internal/sso package: authorization code and session stores
  using sync.Map with TTL, single-use LoadAndDelete, cleanup goroutines
- Add GET /sso/authorize endpoint (validates client, creates session,
  redirects to /login?sso=<nonce>)
- Add POST /v1/sso/token endpoint (exchanges code for JWT with policy
  evaluation using client's service_name/tags from config)
- Thread SSO nonce through password→TOTP and WebAuthn login flows
- Update login.html, totp_step.html, and webauthn.js for SSO nonce
  passthrough

Security:
- Authorization codes are 256-bit random, single-use, 60-second TTL
- redirect_uri validated as exact match against registered config
- Policy context comes from MCIAS config, not the calling service
- SSO sessions are server-side only; nonce is the sole client-visible value
- WebAuthn SSO returns redirect URL as JSON (not HTTP redirect) for JS compat

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 15:21:48 -07:00
5b5e1a7ed6 Use mcdsl/terminal for all password prompts
Replace direct golang.org/x/term calls with mcdsl/terminal.ReadPassword
across mciasctl (6 sites), mciasgrpcctl (1 site), and mciasdb (1 site).
Aligns with the new CLI security standard in engineering-standards.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 11:40:11 -07:00
e4220b840e flake: install shell completions for both mciasctl and mciasgrpcctl
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 21:49:10 -07:00
cff7276293 mciasctl: convert from flag to cobra
Adds shell completion support (zsh, bash, fish) via cobra's built-in
completion command. All existing behavior and security measures are
preserved.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 21:48:24 -07:00
be3bc807b7 mciasgrpcctl: convert from flag to cobra
Adds shell completion support (zsh, bash, fish) via cobra's built-in
completion command. All existing behavior and security measures are
preserved.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 21:48:02 -07:00
ead32f72f8 Add MCR registry tagging and push target to Makefile
Add MCR variable. Replace local mcias:VERSION tags with MCR registry
URL. Remove :latest tag. Add push target.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 14:32:21 -07:00
d7d80c0f25 Bump flake.nix version to match latest tag
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 02:16:34 -07:00
267 changed files with 28542 additions and 9224 deletions


@@ -20,6 +20,7 @@
 # Variables
 # ---------------------------------------------------------------------------
 MODULE := git.wntrmute.dev/mc/mcias
+MCR := mcr.svc.mcp.metacircular.net:8443
 BINARIES := mciassrv mciasctl mciasdb mciasgrpcctl
 BIN_DIR := bin
 MAN_DIR := man/man1
@@ -163,9 +164,12 @@ dist: man
 # ---------------------------------------------------------------------------
 # docker — build the Docker image
 # ---------------------------------------------------------------------------
-.PHONY: docker
+.PHONY: docker push
 docker:
-	docker build --force-rm -t mcias:$(VERSION) -t mcias:latest .
+	docker build --force-rm -t $(MCR)/mcias:$(VERSION) .
+
+push: docker
+	docker push $(MCR)/mcias:$(VERSION)

 # ---------------------------------------------------------------------------
 # docker-clean — remove local mcias Docker images

File diff suppressed because it is too large.


@@ -8,7 +8,8 @@ import (
 	"git.wntrmute.dev/mc/mcias/internal/auth"
 	"git.wntrmute.dev/mc/mcias/internal/model"
-	"golang.org/x/term"
+
+	"git.wntrmute.dev/mc/mcdsl/terminal"
 )

 func (t *tool) runAccount(args []string) {
@@ -233,20 +234,14 @@ func (t *tool) accountResetTOTP(args []string) {
 // readPassword reads a password from the terminal without echo.
 // Falls back to a regular line read if stdin is not a terminal (e.g. in tests).
 func readPassword(prompt string) (string, error) {
+	pw, err := terminal.ReadPassword(prompt)
+	if err == nil {
+		return pw, nil
+	}
+	// Fallback for piped input (e.g. tests).
 	fmt.Fprint(os.Stderr, prompt)
-	fd := int(os.Stdin.Fd()) //nolint:gosec // G115: file descriptors are non-negative and fit in int on all supported platforms
-	if term.IsTerminal(fd) {
-		pw, err := term.ReadPassword(fd)
-		fmt.Fprintln(os.Stderr) // newline after hidden input
-		if err != nil {
-			return "", fmt.Errorf("read password from terminal: %w", err)
-		}
-		return string(pw), nil
-	}
-	// Not a terminal: read a plain line (for piped input in tests).
 	var line string
-	_, err := fmt.Fscanln(os.Stdin, &line)
-	if err != nil {
+	if _, err := fmt.Fscanln(os.Stdin, &line); err != nil {
 		return "", fmt.Errorf("read password: %w", err)
 	}
 	return line, nil

File diff suppressed because it is too large.


@@ -10,7 +10,7 @@
     let
       system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
-      version = "1.7.0";
+      version = "1.8.0";
     in
     {
       packages.${system} = {
@@ -28,6 +28,17 @@
             "-w"
             "-X main.version=${version}"
           ];
+          postInstall = ''
+            mkdir -p $out/share/zsh/site-functions
+            mkdir -p $out/share/bash-completion/completions
+            mkdir -p $out/share/fish/vendor_completions.d
+            $out/bin/mciasctl completion zsh > $out/share/zsh/site-functions/_mciasctl
+            $out/bin/mciasctl completion bash > $out/share/bash-completion/completions/mciasctl
+            $out/bin/mciasctl completion fish > $out/share/fish/vendor_completions.d/mciasctl.fish
+            $out/bin/mciasgrpcctl completion zsh > $out/share/zsh/site-functions/_mciasgrpcctl
+            $out/bin/mciasgrpcctl completion bash > $out/share/bash-completion/completions/mciasgrpcctl
+            $out/bin/mciasgrpcctl completion fish > $out/share/fish/vendor_completions.d/mciasgrpcctl.fish
+          '';
         };
       };
     };


@@ -4,7 +4,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // protoc-gen-go v1.36.11
-// protoc v3.20.3
+// protoc v6.32.1
 // source: mcias/v1/account.proto

 package mciasv1
@@ -1080,7 +1080,7 @@ const file_mcias_v1_account_proto_rawDesc = "" +
 "\n" +
 "GetPGCreds\x12\x1b.mcias.v1.GetPGCredsRequest\x1a\x1c.mcias.v1.GetPGCredsResponse\x12G\n" +
 "\n" +
-"SetPGCreds\x12\x1b.mcias.v1.SetPGCredsRequest\x1a\x1c.mcias.v1.SetPGCredsResponseB2Z0git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"
+"SetPGCreds\x12\x1b.mcias.v1.SetPGCredsRequest\x1a\x1c.mcias.v1.SetPGCredsResponseB0Z.git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"

 var (
 file_mcias_v1_account_proto_rawDescOnce sync.Once


@@ -4,7 +4,7 @@
 // Code generated by protoc-gen-go-grpc. DO NOT EDIT.
 // versions:
 // - protoc-gen-go-grpc v1.6.1
-// - protoc v3.20.3
+// - protoc v6.32.1
 // source: mcias/v1/account.proto

 package mciasv1


@@ -4,7 +4,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // protoc-gen-go v1.36.11
-// protoc v3.20.3
+// protoc v6.32.1
 // source: mcias/v1/admin.proto

 package mciasv1
@@ -238,7 +238,7 @@ const file_mcias_v1_admin_proto_rawDesc = "" +
 "\x01x\x18\x05 \x01(\tR\x01x2\x9a\x01\n" +
 "\fAdminService\x12;\n" +
 "\x06Health\x12\x17.mcias.v1.HealthRequest\x1a\x18.mcias.v1.HealthResponse\x12M\n" +
-"\fGetPublicKey\x12\x1d.mcias.v1.GetPublicKeyRequest\x1a\x1e.mcias.v1.GetPublicKeyResponseB2Z0git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"
+"\fGetPublicKey\x12\x1d.mcias.v1.GetPublicKeyRequest\x1a\x1e.mcias.v1.GetPublicKeyResponseB0Z.git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"

 var (
 file_mcias_v1_admin_proto_rawDescOnce sync.Once


@@ -4,7 +4,7 @@
 // Code generated by protoc-gen-go-grpc. DO NOT EDIT.
 // versions:
 // - protoc-gen-go-grpc v1.6.1
-// - protoc v3.20.3
+// - protoc v6.32.1
 // source: mcias/v1/admin.proto

 package mciasv1


@@ -3,7 +3,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // protoc-gen-go v1.36.11
-// protoc v3.20.3
+// protoc v6.32.1
 // source: mcias/v1/auth.proto

 package mciasv1
@@ -919,7 +919,7 @@ const file_mcias_v1_auth_proto_rawDesc = "" +
 "\n" +
 "RemoveTOTP\x12\x1b.mcias.v1.RemoveTOTPRequest\x1a\x1c.mcias.v1.RemoveTOTPResponse\x12n\n" +
 "\x17ListWebAuthnCredentials\x12(.mcias.v1.ListWebAuthnCredentialsRequest\x1a).mcias.v1.ListWebAuthnCredentialsResponse\x12q\n" +
-"\x18RemoveWebAuthnCredential\x12).mcias.v1.RemoveWebAuthnCredentialRequest\x1a*.mcias.v1.RemoveWebAuthnCredentialResponseB2Z0git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"
+"\x18RemoveWebAuthnCredential\x12).mcias.v1.RemoveWebAuthnCredentialRequest\x1a*.mcias.v1.RemoveWebAuthnCredentialResponseB0Z.git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"

 var (
 file_mcias_v1_auth_proto_rawDescOnce sync.Once


@@ -3,7 +3,7 @@
 // Code generated by protoc-gen-go-grpc. DO NOT EDIT.
 // versions:
 // - protoc-gen-go-grpc v1.6.1
-// - protoc v3.20.3
+// - protoc v6.32.1
 // source: mcias/v1/auth.proto

 package mciasv1


@@ -3,7 +3,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // protoc-gen-go v1.36.11
-// protoc v3.20.3
+// protoc v6.32.1
 // source: mcias/v1/common.proto

 package mciasv1
@@ -349,7 +349,7 @@ const file_mcias_v1_common_proto_rawDesc = "" +
 "\x04port\x18\x05 \x01(\x05R\x04port\"5\n" +
 "\x05Error\x12\x18\n" +
 "\amessage\x18\x01 \x01(\tR\amessage\x12\x12\n" +
-"\x04code\x18\x02 \x01(\tR\x04codeB2Z0git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"
+"\x04code\x18\x02 \x01(\tR\x04codeB0Z.git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"

 var (
 file_mcias_v1_common_proto_rawDescOnce sync.Once


@@ -3,7 +3,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // protoc-gen-go v1.36.11
-// protoc v3.20.3
+// protoc v6.32.1
 // source: mcias/v1/policy.proto

 package mciasv1
@@ -703,7 +703,7 @@ const file_mcias_v1_policy_proto_rawDesc = "" +
 "\x10CreatePolicyRule\x12!.mcias.v1.CreatePolicyRuleRequest\x1a\".mcias.v1.CreatePolicyRuleResponse\x12P\n" +
 "\rGetPolicyRule\x12\x1e.mcias.v1.GetPolicyRuleRequest\x1a\x1f.mcias.v1.GetPolicyRuleResponse\x12Y\n" +
 "\x10UpdatePolicyRule\x12!.mcias.v1.UpdatePolicyRuleRequest\x1a\".mcias.v1.UpdatePolicyRuleResponse\x12Y\n" +
-"\x10DeletePolicyRule\x12!.mcias.v1.DeletePolicyRuleRequest\x1a\".mcias.v1.DeletePolicyRuleResponseB2Z0git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"
+"\x10DeletePolicyRule\x12!.mcias.v1.DeletePolicyRuleRequest\x1a\".mcias.v1.DeletePolicyRuleResponseB0Z.git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"

 var (
 file_mcias_v1_policy_proto_rawDescOnce sync.Once


@@ -3,7 +3,7 @@
 // Code generated by protoc-gen-go-grpc. DO NOT EDIT.
 // versions:
 // - protoc-gen-go-grpc v1.6.1
-// - protoc v3.20.3
+// - protoc v6.32.1
 // source: mcias/v1/policy.proto

 package mciasv1


@@ -0,0 +1,703 @@
// SSOClientService: CRUD management of SSO client registrations.
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc v6.32.1
// source: mcias/v1/sso_client.proto
package mciasv1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// SSOClient is the wire representation of an SSO client registration.
type SSOClient struct {
state protoimpl.MessageState `protogen:"open.v1"`
ClientId string `protobuf:"bytes,1,opt,name=client_id,json=clientId,proto3" json:"client_id,omitempty"`
RedirectUri string `protobuf:"bytes,2,opt,name=redirect_uri,json=redirectUri,proto3" json:"redirect_uri,omitempty"`
Tags []string `protobuf:"bytes,3,rep,name=tags,proto3" json:"tags,omitempty"`
Enabled bool `protobuf:"varint,4,opt,name=enabled,proto3" json:"enabled,omitempty"`
CreatedAt string `protobuf:"bytes,5,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"` // RFC3339
UpdatedAt string `protobuf:"bytes,6,opt,name=updated_at,json=updatedAt,proto3" json:"updated_at,omitempty"` // RFC3339
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SSOClient) Reset() {
*x = SSOClient{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SSOClient) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SSOClient) ProtoMessage() {}
func (x *SSOClient) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SSOClient.ProtoReflect.Descriptor instead.
func (*SSOClient) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{0}
}
func (x *SSOClient) GetClientId() string {
if x != nil {
return x.ClientId
}
return ""
}
func (x *SSOClient) GetRedirectUri() string {
if x != nil {
return x.RedirectUri
}
return ""
}
func (x *SSOClient) GetTags() []string {
if x != nil {
return x.Tags
}
return nil
}
func (x *SSOClient) GetEnabled() bool {
if x != nil {
return x.Enabled
}
return false
}
func (x *SSOClient) GetCreatedAt() string {
if x != nil {
return x.CreatedAt
}
return ""
}
func (x *SSOClient) GetUpdatedAt() string {
if x != nil {
return x.UpdatedAt
}
return ""
}
type ListSSOClientsRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListSSOClientsRequest) Reset() {
*x = ListSSOClientsRequest{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListSSOClientsRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListSSOClientsRequest) ProtoMessage() {}
func (x *ListSSOClientsRequest) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListSSOClientsRequest.ProtoReflect.Descriptor instead.
func (*ListSSOClientsRequest) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{1}
}
type ListSSOClientsResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Clients []*SSOClient `protobuf:"bytes,1,rep,name=clients,proto3" json:"clients,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListSSOClientsResponse) Reset() {
*x = ListSSOClientsResponse{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListSSOClientsResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListSSOClientsResponse) ProtoMessage() {}
func (x *ListSSOClientsResponse) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListSSOClientsResponse.ProtoReflect.Descriptor instead.
func (*ListSSOClientsResponse) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{2}
}
func (x *ListSSOClientsResponse) GetClients() []*SSOClient {
if x != nil {
return x.Clients
}
return nil
}
type CreateSSOClientRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
ClientId string `protobuf:"bytes,1,opt,name=client_id,json=clientId,proto3" json:"client_id,omitempty"`
RedirectUri string `protobuf:"bytes,2,opt,name=redirect_uri,json=redirectUri,proto3" json:"redirect_uri,omitempty"`
Tags []string `protobuf:"bytes,3,rep,name=tags,proto3" json:"tags,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CreateSSOClientRequest) Reset() {
*x = CreateSSOClientRequest{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CreateSSOClientRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateSSOClientRequest) ProtoMessage() {}
func (x *CreateSSOClientRequest) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateSSOClientRequest.ProtoReflect.Descriptor instead.
func (*CreateSSOClientRequest) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{3}
}
func (x *CreateSSOClientRequest) GetClientId() string {
if x != nil {
return x.ClientId
}
return ""
}
func (x *CreateSSOClientRequest) GetRedirectUri() string {
if x != nil {
return x.RedirectUri
}
return ""
}
func (x *CreateSSOClientRequest) GetTags() []string {
if x != nil {
return x.Tags
}
return nil
}
type CreateSSOClientResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Client *SSOClient `protobuf:"bytes,1,opt,name=client,proto3" json:"client,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CreateSSOClientResponse) Reset() {
*x = CreateSSOClientResponse{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CreateSSOClientResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateSSOClientResponse) ProtoMessage() {}
func (x *CreateSSOClientResponse) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateSSOClientResponse.ProtoReflect.Descriptor instead.
func (*CreateSSOClientResponse) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{4}
}
func (x *CreateSSOClientResponse) GetClient() *SSOClient {
if x != nil {
return x.Client
}
return nil
}
type GetSSOClientRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
ClientId string `protobuf:"bytes,1,opt,name=client_id,json=clientId,proto3" json:"client_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetSSOClientRequest) Reset() {
*x = GetSSOClientRequest{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetSSOClientRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetSSOClientRequest) ProtoMessage() {}
func (x *GetSSOClientRequest) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetSSOClientRequest.ProtoReflect.Descriptor instead.
func (*GetSSOClientRequest) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{5}
}
func (x *GetSSOClientRequest) GetClientId() string {
if x != nil {
return x.ClientId
}
return ""
}
type GetSSOClientResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Client *SSOClient `protobuf:"bytes,1,opt,name=client,proto3" json:"client,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetSSOClientResponse) Reset() {
*x = GetSSOClientResponse{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetSSOClientResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetSSOClientResponse) ProtoMessage() {}
func (x *GetSSOClientResponse) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetSSOClientResponse.ProtoReflect.Descriptor instead.
func (*GetSSOClientResponse) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{6}
}
func (x *GetSSOClientResponse) GetClient() *SSOClient {
if x != nil {
return x.Client
}
return nil
}
type UpdateSSOClientRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
ClientId string `protobuf:"bytes,1,opt,name=client_id,json=clientId,proto3" json:"client_id,omitempty"`
RedirectUri *string `protobuf:"bytes,2,opt,name=redirect_uri,json=redirectUri,proto3,oneof" json:"redirect_uri,omitempty"`
Tags []string `protobuf:"bytes,3,rep,name=tags,proto3" json:"tags,omitempty"`
Enabled *bool `protobuf:"varint,4,opt,name=enabled,proto3,oneof" json:"enabled,omitempty"`
UpdateTags bool `protobuf:"varint,5,opt,name=update_tags,json=updateTags,proto3" json:"update_tags,omitempty"` // when true, tags field is applied (allows clearing)
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *UpdateSSOClientRequest) Reset() {
*x = UpdateSSOClientRequest{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *UpdateSSOClientRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*UpdateSSOClientRequest) ProtoMessage() {}
func (x *UpdateSSOClientRequest) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use UpdateSSOClientRequest.ProtoReflect.Descriptor instead.
func (*UpdateSSOClientRequest) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{7}
}
func (x *UpdateSSOClientRequest) GetClientId() string {
if x != nil {
return x.ClientId
}
return ""
}
func (x *UpdateSSOClientRequest) GetRedirectUri() string {
if x != nil && x.RedirectUri != nil {
return *x.RedirectUri
}
return ""
}
func (x *UpdateSSOClientRequest) GetTags() []string {
if x != nil {
return x.Tags
}
return nil
}
func (x *UpdateSSOClientRequest) GetEnabled() bool {
if x != nil && x.Enabled != nil {
return *x.Enabled
}
return false
}
func (x *UpdateSSOClientRequest) GetUpdateTags() bool {
if x != nil {
return x.UpdateTags
}
return false
}
type UpdateSSOClientResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Client *SSOClient `protobuf:"bytes,1,opt,name=client,proto3" json:"client,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *UpdateSSOClientResponse) Reset() {
*x = UpdateSSOClientResponse{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *UpdateSSOClientResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*UpdateSSOClientResponse) ProtoMessage() {}
func (x *UpdateSSOClientResponse) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use UpdateSSOClientResponse.ProtoReflect.Descriptor instead.
func (*UpdateSSOClientResponse) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{8}
}
func (x *UpdateSSOClientResponse) GetClient() *SSOClient {
if x != nil {
return x.Client
}
return nil
}
type DeleteSSOClientRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
ClientId string `protobuf:"bytes,1,opt,name=client_id,json=clientId,proto3" json:"client_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteSSOClientRequest) Reset() {
*x = DeleteSSOClientRequest{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteSSOClientRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteSSOClientRequest) ProtoMessage() {}
func (x *DeleteSSOClientRequest) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[9]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteSSOClientRequest.ProtoReflect.Descriptor instead.
func (*DeleteSSOClientRequest) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{9}
}
func (x *DeleteSSOClientRequest) GetClientId() string {
if x != nil {
return x.ClientId
}
return ""
}
type DeleteSSOClientResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteSSOClientResponse) Reset() {
*x = DeleteSSOClientResponse{}
mi := &file_mcias_v1_sso_client_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteSSOClientResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteSSOClientResponse) ProtoMessage() {}
func (x *DeleteSSOClientResponse) ProtoReflect() protoreflect.Message {
mi := &file_mcias_v1_sso_client_proto_msgTypes[10]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteSSOClientResponse.ProtoReflect.Descriptor instead.
func (*DeleteSSOClientResponse) Descriptor() ([]byte, []int) {
return file_mcias_v1_sso_client_proto_rawDescGZIP(), []int{10}
}
var File_mcias_v1_sso_client_proto protoreflect.FileDescriptor
const file_mcias_v1_sso_client_proto_rawDesc = "" +
"\n" +
"\x19mcias/v1/sso_client.proto\x12\bmcias.v1\"\xb7\x01\n" +
"\tSSOClient\x12\x1b\n" +
"\tclient_id\x18\x01 \x01(\tR\bclientId\x12!\n" +
"\fredirect_uri\x18\x02 \x01(\tR\vredirectUri\x12\x12\n" +
"\x04tags\x18\x03 \x03(\tR\x04tags\x12\x18\n" +
"\aenabled\x18\x04 \x01(\bR\aenabled\x12\x1d\n" +
"\n" +
"created_at\x18\x05 \x01(\tR\tcreatedAt\x12\x1d\n" +
"\n" +
"updated_at\x18\x06 \x01(\tR\tupdatedAt\"\x17\n" +
"\x15ListSSOClientsRequest\"G\n" +
"\x16ListSSOClientsResponse\x12-\n" +
"\aclients\x18\x01 \x03(\v2\x13.mcias.v1.SSOClientR\aclients\"l\n" +
"\x16CreateSSOClientRequest\x12\x1b\n" +
"\tclient_id\x18\x01 \x01(\tR\bclientId\x12!\n" +
"\fredirect_uri\x18\x02 \x01(\tR\vredirectUri\x12\x12\n" +
"\x04tags\x18\x03 \x03(\tR\x04tags\"F\n" +
"\x17CreateSSOClientResponse\x12+\n" +
"\x06client\x18\x01 \x01(\v2\x13.mcias.v1.SSOClientR\x06client\"2\n" +
"\x13GetSSOClientRequest\x12\x1b\n" +
"\tclient_id\x18\x01 \x01(\tR\bclientId\"C\n" +
"\x14GetSSOClientResponse\x12+\n" +
"\x06client\x18\x01 \x01(\v2\x13.mcias.v1.SSOClientR\x06client\"\xce\x01\n" +
"\x16UpdateSSOClientRequest\x12\x1b\n" +
"\tclient_id\x18\x01 \x01(\tR\bclientId\x12&\n" +
"\fredirect_uri\x18\x02 \x01(\tH\x00R\vredirectUri\x88\x01\x01\x12\x12\n" +
"\x04tags\x18\x03 \x03(\tR\x04tags\x12\x1d\n" +
"\aenabled\x18\x04 \x01(\bH\x01R\aenabled\x88\x01\x01\x12\x1f\n" +
"\vupdate_tags\x18\x05 \x01(\bR\n" +
"updateTagsB\x0f\n" +
"\r_redirect_uriB\n" +
"\n" +
"\b_enabled\"F\n" +
"\x17UpdateSSOClientResponse\x12+\n" +
"\x06client\x18\x01 \x01(\v2\x13.mcias.v1.SSOClientR\x06client\"5\n" +
"\x16DeleteSSOClientRequest\x12\x1b\n" +
"\tclient_id\x18\x01 \x01(\tR\bclientId\"\x19\n" +
"\x17DeleteSSOClientResponse2\xbe\x03\n" +
"\x10SSOClientService\x12S\n" +
"\x0eListSSOClients\x12\x1f.mcias.v1.ListSSOClientsRequest\x1a .mcias.v1.ListSSOClientsResponse\x12V\n" +
"\x0fCreateSSOClient\x12 .mcias.v1.CreateSSOClientRequest\x1a!.mcias.v1.CreateSSOClientResponse\x12M\n" +
"\fGetSSOClient\x12\x1d.mcias.v1.GetSSOClientRequest\x1a\x1e.mcias.v1.GetSSOClientResponse\x12V\n" +
"\x0fUpdateSSOClient\x12 .mcias.v1.UpdateSSOClientRequest\x1a!.mcias.v1.UpdateSSOClientResponse\x12V\n" +
"\x0fDeleteSSOClient\x12 .mcias.v1.DeleteSSOClientRequest\x1a!.mcias.v1.DeleteSSOClientResponseB0Z.git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"
var (
file_mcias_v1_sso_client_proto_rawDescOnce sync.Once
file_mcias_v1_sso_client_proto_rawDescData []byte
)
func file_mcias_v1_sso_client_proto_rawDescGZIP() []byte {
file_mcias_v1_sso_client_proto_rawDescOnce.Do(func() {
file_mcias_v1_sso_client_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_mcias_v1_sso_client_proto_rawDesc), len(file_mcias_v1_sso_client_proto_rawDesc)))
})
return file_mcias_v1_sso_client_proto_rawDescData
}
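The rawDescGZIP accessor above compresses the raw descriptor lazily and caches the result. The sync.Once idiom it relies on, in miniature (identifiers here are illustrative stand-ins, not the generated names):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	rawDescOnce sync.Once
	rawDescData []byte
)

// rawDescGZIP runs the expensive step at most once, even under concurrent
// callers; every later call returns the same cached slice.
func rawDescGZIP() []byte {
	rawDescOnce.Do(func() {
		rawDescData = []byte("compressed descriptor") // stand-in for protoimpl.X.CompressGZIP
	})
	return rawDescData
}

func main() {
	fmt.Println(string(rawDescGZIP()))
}
```

The generated code uses exactly this shape so the gzip pass is paid only if something (e.g. `Descriptor()`) actually asks for the compressed form.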
var file_mcias_v1_sso_client_proto_msgTypes = make([]protoimpl.MessageInfo, 11)
var file_mcias_v1_sso_client_proto_goTypes = []any{
(*SSOClient)(nil), // 0: mcias.v1.SSOClient
(*ListSSOClientsRequest)(nil), // 1: mcias.v1.ListSSOClientsRequest
(*ListSSOClientsResponse)(nil), // 2: mcias.v1.ListSSOClientsResponse
(*CreateSSOClientRequest)(nil), // 3: mcias.v1.CreateSSOClientRequest
(*CreateSSOClientResponse)(nil), // 4: mcias.v1.CreateSSOClientResponse
(*GetSSOClientRequest)(nil), // 5: mcias.v1.GetSSOClientRequest
(*GetSSOClientResponse)(nil), // 6: mcias.v1.GetSSOClientResponse
(*UpdateSSOClientRequest)(nil), // 7: mcias.v1.UpdateSSOClientRequest
(*UpdateSSOClientResponse)(nil), // 8: mcias.v1.UpdateSSOClientResponse
(*DeleteSSOClientRequest)(nil), // 9: mcias.v1.DeleteSSOClientRequest
(*DeleteSSOClientResponse)(nil), // 10: mcias.v1.DeleteSSOClientResponse
}
var file_mcias_v1_sso_client_proto_depIdxs = []int32{
0, // 0: mcias.v1.ListSSOClientsResponse.clients:type_name -> mcias.v1.SSOClient
0, // 1: mcias.v1.CreateSSOClientResponse.client:type_name -> mcias.v1.SSOClient
0, // 2: mcias.v1.GetSSOClientResponse.client:type_name -> mcias.v1.SSOClient
0, // 3: mcias.v1.UpdateSSOClientResponse.client:type_name -> mcias.v1.SSOClient
1, // 4: mcias.v1.SSOClientService.ListSSOClients:input_type -> mcias.v1.ListSSOClientsRequest
3, // 5: mcias.v1.SSOClientService.CreateSSOClient:input_type -> mcias.v1.CreateSSOClientRequest
5, // 6: mcias.v1.SSOClientService.GetSSOClient:input_type -> mcias.v1.GetSSOClientRequest
7, // 7: mcias.v1.SSOClientService.UpdateSSOClient:input_type -> mcias.v1.UpdateSSOClientRequest
9, // 8: mcias.v1.SSOClientService.DeleteSSOClient:input_type -> mcias.v1.DeleteSSOClientRequest
2, // 9: mcias.v1.SSOClientService.ListSSOClients:output_type -> mcias.v1.ListSSOClientsResponse
4, // 10: mcias.v1.SSOClientService.CreateSSOClient:output_type -> mcias.v1.CreateSSOClientResponse
6, // 11: mcias.v1.SSOClientService.GetSSOClient:output_type -> mcias.v1.GetSSOClientResponse
8, // 12: mcias.v1.SSOClientService.UpdateSSOClient:output_type -> mcias.v1.UpdateSSOClientResponse
10, // 13: mcias.v1.SSOClientService.DeleteSSOClient:output_type -> mcias.v1.DeleteSSOClientResponse
9, // [9:14] is the sub-list for method output_type
4, // [4:9] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
4, // [4:4] is the sub-list for extension extendee
0, // [0:4] is the sub-list for field type_name
}
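The trailer comments above describe how one flat slice packs several sub-lists: `[0:4]` holds field type_name indexes, `[4:9]` method input_types, `[9:14]` method output_types. A small sketch of how those ranges partition the table (layout assumed from the comments; the real decoder lives inside the protobuf runtime):

```go
package main

import "fmt"

// subListLens slices a dependency-index table the way the generated
// trailer describes. Illustrative only.
func subListLens(depIdxs []int32) (typeNames, inputs, outputs int) {
	return len(depIdxs[0:4]), len(depIdxs[4:9]), len(depIdxs[9:14])
}

func main() {
	// The same values as file_mcias_v1_sso_client_proto_depIdxs above:
	// 4 message-typed fields, then 5 RPC inputs, then 5 RPC outputs.
	depIdxs := []int32{0, 0, 0, 0, 1, 3, 5, 7, 9, 2, 4, 6, 8, 10}
	fmt.Println(subListLens(depIdxs))
}
```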
func init() { file_mcias_v1_sso_client_proto_init() }
func file_mcias_v1_sso_client_proto_init() {
if File_mcias_v1_sso_client_proto != nil {
return
}
file_mcias_v1_sso_client_proto_msgTypes[7].OneofWrappers = []any{}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_mcias_v1_sso_client_proto_rawDesc), len(file_mcias_v1_sso_client_proto_rawDesc)),
NumEnums: 0,
NumMessages: 11,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_mcias_v1_sso_client_proto_goTypes,
DependencyIndexes: file_mcias_v1_sso_client_proto_depIdxs,
MessageInfos: file_mcias_v1_sso_client_proto_msgTypes,
}.Build()
File_mcias_v1_sso_client_proto = out.File
file_mcias_v1_sso_client_proto_goTypes = nil
file_mcias_v1_sso_client_proto_depIdxs = nil
}


@@ -0,0 +1,289 @@
// SSOClientService: CRUD management of SSO client registrations.
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.6.1
// - protoc v6.32.1
// source: mcias/v1/sso_client.proto
package mciasv1
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
SSOClientService_ListSSOClients_FullMethodName = "/mcias.v1.SSOClientService/ListSSOClients"
SSOClientService_CreateSSOClient_FullMethodName = "/mcias.v1.SSOClientService/CreateSSOClient"
SSOClientService_GetSSOClient_FullMethodName = "/mcias.v1.SSOClientService/GetSSOClient"
SSOClientService_UpdateSSOClient_FullMethodName = "/mcias.v1.SSOClientService/UpdateSSOClient"
SSOClientService_DeleteSSOClient_FullMethodName = "/mcias.v1.SSOClientService/DeleteSSOClient"
)
// SSOClientServiceClient is the client API for SSOClientService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
//
// SSOClientService manages SSO client registrations (admin only).
type SSOClientServiceClient interface {
// ListSSOClients returns all registered SSO clients.
ListSSOClients(ctx context.Context, in *ListSSOClientsRequest, opts ...grpc.CallOption) (*ListSSOClientsResponse, error)
// CreateSSOClient registers a new SSO client.
CreateSSOClient(ctx context.Context, in *CreateSSOClientRequest, opts ...grpc.CallOption) (*CreateSSOClientResponse, error)
// GetSSOClient returns a single SSO client by client_id.
GetSSOClient(ctx context.Context, in *GetSSOClientRequest, opts ...grpc.CallOption) (*GetSSOClientResponse, error)
// UpdateSSOClient applies a partial update to an SSO client.
UpdateSSOClient(ctx context.Context, in *UpdateSSOClientRequest, opts ...grpc.CallOption) (*UpdateSSOClientResponse, error)
// DeleteSSOClient removes an SSO client registration.
DeleteSSOClient(ctx context.Context, in *DeleteSSOClientRequest, opts ...grpc.CallOption) (*DeleteSSOClientResponse, error)
}
type sSOClientServiceClient struct {
cc grpc.ClientConnInterface
}
func NewSSOClientServiceClient(cc grpc.ClientConnInterface) SSOClientServiceClient {
return &sSOClientServiceClient{cc}
}
func (c *sSOClientServiceClient) ListSSOClients(ctx context.Context, in *ListSSOClientsRequest, opts ...grpc.CallOption) (*ListSSOClientsResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListSSOClientsResponse)
err := c.cc.Invoke(ctx, SSOClientService_ListSSOClients_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *sSOClientServiceClient) CreateSSOClient(ctx context.Context, in *CreateSSOClientRequest, opts ...grpc.CallOption) (*CreateSSOClientResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(CreateSSOClientResponse)
err := c.cc.Invoke(ctx, SSOClientService_CreateSSOClient_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *sSOClientServiceClient) GetSSOClient(ctx context.Context, in *GetSSOClientRequest, opts ...grpc.CallOption) (*GetSSOClientResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(GetSSOClientResponse)
err := c.cc.Invoke(ctx, SSOClientService_GetSSOClient_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *sSOClientServiceClient) UpdateSSOClient(ctx context.Context, in *UpdateSSOClientRequest, opts ...grpc.CallOption) (*UpdateSSOClientResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(UpdateSSOClientResponse)
err := c.cc.Invoke(ctx, SSOClientService_UpdateSSOClient_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *sSOClientServiceClient) DeleteSSOClient(ctx context.Context, in *DeleteSSOClientRequest, opts ...grpc.CallOption) (*DeleteSSOClientResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DeleteSSOClientResponse)
err := c.cc.Invoke(ctx, SSOClientService_DeleteSSOClient_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// SSOClientServiceServer is the server API for SSOClientService service.
// All implementations must embed UnimplementedSSOClientServiceServer
// for forward compatibility.
//
// SSOClientService manages SSO client registrations (admin only).
type SSOClientServiceServer interface {
// ListSSOClients returns all registered SSO clients.
ListSSOClients(context.Context, *ListSSOClientsRequest) (*ListSSOClientsResponse, error)
// CreateSSOClient registers a new SSO client.
CreateSSOClient(context.Context, *CreateSSOClientRequest) (*CreateSSOClientResponse, error)
// GetSSOClient returns a single SSO client by client_id.
GetSSOClient(context.Context, *GetSSOClientRequest) (*GetSSOClientResponse, error)
// UpdateSSOClient applies a partial update to an SSO client.
UpdateSSOClient(context.Context, *UpdateSSOClientRequest) (*UpdateSSOClientResponse, error)
// DeleteSSOClient removes an SSO client registration.
DeleteSSOClient(context.Context, *DeleteSSOClientRequest) (*DeleteSSOClientResponse, error)
mustEmbedUnimplementedSSOClientServiceServer()
}
// UnimplementedSSOClientServiceServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedSSOClientServiceServer struct{}
func (UnimplementedSSOClientServiceServer) ListSSOClients(context.Context, *ListSSOClientsRequest) (*ListSSOClientsResponse, error) {
return nil, status.Error(codes.Unimplemented, "method ListSSOClients not implemented")
}
func (UnimplementedSSOClientServiceServer) CreateSSOClient(context.Context, *CreateSSOClientRequest) (*CreateSSOClientResponse, error) {
return nil, status.Error(codes.Unimplemented, "method CreateSSOClient not implemented")
}
func (UnimplementedSSOClientServiceServer) GetSSOClient(context.Context, *GetSSOClientRequest) (*GetSSOClientResponse, error) {
return nil, status.Error(codes.Unimplemented, "method GetSSOClient not implemented")
}
func (UnimplementedSSOClientServiceServer) UpdateSSOClient(context.Context, *UpdateSSOClientRequest) (*UpdateSSOClientResponse, error) {
return nil, status.Error(codes.Unimplemented, "method UpdateSSOClient not implemented")
}
func (UnimplementedSSOClientServiceServer) DeleteSSOClient(context.Context, *DeleteSSOClientRequest) (*DeleteSSOClientResponse, error) {
return nil, status.Error(codes.Unimplemented, "method DeleteSSOClient not implemented")
}
func (UnimplementedSSOClientServiceServer) mustEmbedUnimplementedSSOClientServiceServer() {}
func (UnimplementedSSOClientServiceServer) testEmbeddedByValue() {}
// UnsafeSSOClientServiceServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to SSOClientServiceServer will
// result in compilation errors.
type UnsafeSSOClientServiceServer interface {
mustEmbedUnimplementedSSOClientServiceServer()
}
func RegisterSSOClientServiceServer(s grpc.ServiceRegistrar, srv SSOClientServiceServer) {
// If the following call panics, it indicates UnimplementedSSOClientServiceServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&SSOClientService_ServiceDesc, srv)
}
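The NOTE above about embedding UnimplementedSSOClientServiceServer by value, demonstrated with stand-in types (nothing below is from the generated file): a value-embedded stub is callable from its zero value, while a nil pointer embed panics on first use, which is exactly the mistake the `testEmbeddedByValue` probe in `RegisterSSOClientServiceServer` surfaces at registration time instead of mid-request.

```go
package main

import "fmt"

// unimplemented stands in for the generated Unimplemented* struct:
// value-receiver methods, so the zero value is safe to call.
type unimplemented struct{}

func (unimplemented) list() string { return "unimplemented" }

type byValue struct{ unimplemented }    // recommended: embed by value
type byPointer struct{ *unimplemented } // risky: nil until explicitly set

// safeCall converts a panic from a nil embedded pointer into a string,
// so both cases can be shown side by side.
func safeCall(f func() string) (out string) {
	defer func() {
		if recover() != nil {
			out = "panic: nil embedded pointer"
		}
	}()
	return f()
}

func main() {
	var v byValue
	fmt.Println(safeCall(func() string { return v.list() })) // zero value works
	var p byPointer
	fmt.Println(safeCall(func() string { return p.list() })) // nil embed panics
}
```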
func _SSOClientService_ListSSOClients_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListSSOClientsRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SSOClientServiceServer).ListSSOClients(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: SSOClientService_ListSSOClients_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SSOClientServiceServer).ListSSOClients(ctx, req.(*ListSSOClientsRequest))
}
return interceptor(ctx, in, info, handler)
}
func _SSOClientService_CreateSSOClient_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CreateSSOClientRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SSOClientServiceServer).CreateSSOClient(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: SSOClientService_CreateSSOClient_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SSOClientServiceServer).CreateSSOClient(ctx, req.(*CreateSSOClientRequest))
}
return interceptor(ctx, in, info, handler)
}
func _SSOClientService_GetSSOClient_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetSSOClientRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SSOClientServiceServer).GetSSOClient(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: SSOClientService_GetSSOClient_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SSOClientServiceServer).GetSSOClient(ctx, req.(*GetSSOClientRequest))
}
return interceptor(ctx, in, info, handler)
}
func _SSOClientService_UpdateSSOClient_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(UpdateSSOClientRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SSOClientServiceServer).UpdateSSOClient(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: SSOClientService_UpdateSSOClient_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SSOClientServiceServer).UpdateSSOClient(ctx, req.(*UpdateSSOClientRequest))
}
return interceptor(ctx, in, info, handler)
}
func _SSOClientService_DeleteSSOClient_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteSSOClientRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SSOClientServiceServer).DeleteSSOClient(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: SSOClientService_DeleteSSOClient_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SSOClientServiceServer).DeleteSSOClient(ctx, req.(*DeleteSSOClientRequest))
}
return interceptor(ctx, in, info, handler)
}
// SSOClientService_ServiceDesc is the grpc.ServiceDesc for SSOClientService service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var SSOClientService_ServiceDesc = grpc.ServiceDesc{
ServiceName: "mcias.v1.SSOClientService",
HandlerType: (*SSOClientServiceServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "ListSSOClients",
Handler: _SSOClientService_ListSSOClients_Handler,
},
{
MethodName: "CreateSSOClient",
Handler: _SSOClientService_CreateSSOClient_Handler,
},
{
MethodName: "GetSSOClient",
Handler: _SSOClientService_GetSSOClient_Handler,
},
{
MethodName: "UpdateSSOClient",
Handler: _SSOClientService_UpdateSSOClient_Handler,
},
{
MethodName: "DeleteSSOClient",
Handler: _SSOClientService_DeleteSSOClient_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "mcias/v1/sso_client.proto",
}


@@ -3,7 +3,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // protoc-gen-go v1.36.11
-// protoc v3.20.3
+// protoc v6.32.1
 // source: mcias/v1/token.proto
 package mciasv1
@@ -346,7 +346,7 @@ const file_mcias_v1_token_proto_rawDesc = "" +
 "\fTokenService\x12P\n" +
 "\rValidateToken\x12\x1e.mcias.v1.ValidateTokenRequest\x1a\x1f.mcias.v1.ValidateTokenResponse\x12\\\n" +
 "\x11IssueServiceToken\x12\".mcias.v1.IssueServiceTokenRequest\x1a#.mcias.v1.IssueServiceTokenResponse\x12J\n" +
-"\vRevokeToken\x12\x1c.mcias.v1.RevokeTokenRequest\x1a\x1d.mcias.v1.RevokeTokenResponseB2Z0git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"
+"\vRevokeToken\x12\x1c.mcias.v1.RevokeTokenRequest\x1a\x1d.mcias.v1.RevokeTokenResponseB0Z.git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1b\x06proto3"
 var (
 file_mcias_v1_token_proto_rawDescOnce sync.Once


@@ -3,7 +3,7 @@
 // Code generated by protoc-gen-go-grpc. DO NOT EDIT.
 // versions:
 // - protoc-gen-go-grpc v1.6.1
-// - protoc v3.20.3
+// - protoc v6.32.1
 // source: mcias/v1/token.proto
 package mciasv1

go.mod

@@ -3,17 +3,18 @@ module git.wntrmute.dev/mc/mcias
 go 1.26.0
 require (
+	git.wntrmute.dev/mc/mcdsl v1.4.0
 	github.com/go-webauthn/webauthn v0.16.1
 	github.com/golang-jwt/jwt/v5 v5.3.1
 	github.com/golang-migrate/migrate/v4 v4.19.1
 	github.com/google/uuid v1.6.0
-	github.com/pelletier/go-toml/v2 v2.2.4
+	github.com/pelletier/go-toml/v2 v2.3.0
 	github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e
+	github.com/spf13/cobra v1.10.2
 	golang.org/x/crypto v0.49.0
-	golang.org/x/term v0.41.0
-	google.golang.org/grpc v1.74.2
-	google.golang.org/protobuf v1.36.7
-	modernc.org/sqlite v1.46.1
+	google.golang.org/grpc v1.79.3
+	google.golang.org/protobuf v1.36.11
+	modernc.org/sqlite v1.47.0
 )
 require (
@@ -22,16 +23,18 @@ require (
 	github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
 	github.com/go-webauthn/x v0.2.2 // indirect
 	github.com/google/go-tpm v0.9.8 // indirect
+	github.com/inconshreveable/mousetrap v1.1.0 // indirect
 	github.com/mattn/go-isatty v0.0.20 // indirect
 	github.com/ncruces/go-strftime v1.0.0 // indirect
 	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
+	github.com/spf13/pflag v1.0.9 // indirect
 	github.com/x448/float16 v0.8.4 // indirect
+	golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
 	golang.org/x/net v0.51.0 // indirect
 	golang.org/x/sys v0.42.0 // indirect
+	golang.org/x/term v0.41.0 // indirect
 	golang.org/x/text v0.35.0 // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20250818200422-3122310a409c // indirect
-	modernc.org/libc v1.67.6 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 // indirect
+	modernc.org/libc v1.70.0 // indirect
 	modernc.org/mathutil v1.7.1 // indirect
 	modernc.org/memory v1.11.0 // indirect
 )

go.sum

@@ -1,3 +1,8 @@
+git.wntrmute.dev/mc/mcdsl v1.4.0 h1:PsEIyskcjBduwHSRwNB/U/uSeU/cv3C8MVr0SRjBRLg=
+git.wntrmute.dev/mc/mcdsl v1.4.0/go.mod h1:MhYahIu7Sg53lE2zpQ20nlrsoNRjQzOJBAlCmom2wJc=
+github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
+github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
@@ -32,42 +37,48 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
 github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
+github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
+github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
 github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
 github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
 github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
 github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
 github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
-github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
-github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
+github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
+github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
 github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
+github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
 github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e h1:MRM5ITcdelLK2j1vwZ3Je0FKVCfqOLp5zO6trqMLYs0=
 github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e/go.mod h1:XV66xRDqSt+GTGFMVlhk3ULuV0y9ZmzeVGR4mloJI3M=
+github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
+github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
+github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
+github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
 github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
 github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
 github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
 github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
-go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
-go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
-go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
-go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
-go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
-go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
-go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
-go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
-go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
-go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
-go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
-go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
+go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
+go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
+go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
+go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
+go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
+go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
+go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
+go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
+go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8=
+go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
+go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
+go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
 go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
 go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
+go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
 golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
 golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
+golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
+golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
 golang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=
 golang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w=
 golang.org/x/net v0.51.0 h1:94R/GTO7mt3/4wIKpcR5gkGmRLOuE/2hNGeWq/GBIFo=
@@ -83,28 +94,31 @@ golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8=
 golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA=
 golang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=
 golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20250818200422-3122310a409c h1:qXWI/sQtv5UKboZ/zUk7h+mrf/lXORyI+n9DKDAusdg=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20250818200422-3122310a409c/go.mod h1:gw1tLEfykwDz2ET4a12jcXt4couGAm7IwsVaTy0Sflo=
+gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
+gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/grpc v1.74.2 h1:WoosgB65DlWVC9FqI82dGsZhWFNBSLjQ84bjROOpMu4= google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
google.golang.org/grpc v1.74.2/go.mod h1:CtQ+BGjaAIXHs/5YS3i473GqwBBa1zGQNevxdeBEXrM= google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/protobuf v1.36.7 h1:IgrO7UwFQGJdRNXH/sQux4R1Dj1WAKcLElzeeRaXV2A= google.golang.org/grpc v1.79.3 h1:sybAEdRIEtvcD68Gx7dmnwjZKlyfuc61Dyo9pGXXkKE=
google.golang.org/protobuf v1.36.7/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY= google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis= modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0= modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc= modernc.org/ccgo/v4 v4.32.0 h1:hjG66bI/kqIPX1b2yT6fr/jt+QedtP2fqojG2VrFuVw=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM= modernc.org/ccgo/v4 v4.32.0/go.mod h1:6F08EBCx5uQc38kMGl+0Nm0oWczoo1c7cgpzEry7Uc0=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA= modernc.org/fileutil v1.4.0 h1:j6ZzNTftVS054gi281TyLjHPp6CPHr2KCxEXjEbD6SM=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc= modernc.org/fileutil v1.4.0/go.mod h1:EqdKFDxiByqxLk8ozOxObDSfcVOv/54xDs/DUHdvCUU=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI= modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito= modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE= modernc.org/gc/v3 v3.1.2 h1:ZtDCnhonXSZexk/AYsegNRV1lJGgaNZJuKjJSWKyEqo=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY= modernc.org/gc/v3 v3.1.2/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks= modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI= modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI= modernc.org/libc v1.70.0 h1:U58NawXqXbgpZ/dcdS9kMshu08aiA6b7gusEusqzNkw=
modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE= modernc.org/libc v1.70.0/go.mod h1:OVmxFGP1CI/Z4L3E0Q3Mf1PDE0BucwMkcXjjLntvHJo=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU= modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg= modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI= modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
@@ -113,8 +127,8 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns= modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w= modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE= modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.46.1 h1:eFJ2ShBLIEnUWlLy12raN0Z1plqmFX9Qe3rjQTKt6sU= modernc.org/sqlite v1.47.0 h1:R1XyaNpoW4Et9yly+I2EeX7pBza/w+pmYee/0HJDyKk=
modernc.org/sqlite v1.46.1/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA= modernc.org/sqlite v1.47.0/go.mod h1:hWjRO6Tj/5Ik8ieqxQybiEOUXy0NJFNp2tpvVpKlvig=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0= modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A= modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y= modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=


@@ -22,7 +22,7 @@ var migrationsFS embed.FS
// LatestSchemaVersion is the highest migration version defined in the
// migrations/ directory. Update this constant whenever a new migration file
// is added.
const LatestSchemaVersion = 10
// newMigrate constructs a migrate.Migrate instance backed by the embedded SQL
// files. It opens a dedicated *sql.DB using the same DSN as the main


@@ -0,0 +1,10 @@
CREATE TABLE sso_clients (
id INTEGER PRIMARY KEY AUTOINCREMENT,
client_id TEXT NOT NULL UNIQUE,
redirect_uri TEXT NOT NULL,
tags_json TEXT NOT NULL DEFAULT '[]',
enabled INTEGER NOT NULL DEFAULT 1 CHECK (enabled IN (0,1)),
created_by INTEGER REFERENCES accounts(id),
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%SZ','now')),
updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%SZ','now'))
);

internal/db/sso_clients.go

@@ -0,0 +1,206 @@
package db
import (
"database/sql"
"encoding/json"
"errors"
"fmt"
"strings"
"git.wntrmute.dev/mc/mcias/internal/model"
)
const ssoClientCols = `id, client_id, redirect_uri, tags_json, enabled, created_by, created_at, updated_at`
// CreateSSOClient inserts a new SSO client. The client_id must be unique
// and the redirect_uri must start with "https://".
func (db *DB) CreateSSOClient(clientID, redirectURI string, tags []string, createdBy *int64) (*model.SSOClient, error) {
if clientID == "" {
return nil, fmt.Errorf("db: client_id is required")
}
if !strings.HasPrefix(redirectURI, "https://") {
return nil, fmt.Errorf("db: redirect_uri must start with https://")
}
if tags == nil {
tags = []string{}
}
tagsJSON, err := json.Marshal(tags)
if err != nil {
return nil, fmt.Errorf("db: marshal tags: %w", err)
}
n := now()
result, err := db.sql.Exec(`
INSERT INTO sso_clients (client_id, redirect_uri, tags_json, enabled, created_by, created_at, updated_at)
VALUES (?, ?, ?, 1, ?, ?, ?)
`, clientID, redirectURI, string(tagsJSON), createdBy, n, n)
if err != nil {
return nil, fmt.Errorf("db: create SSO client: %w", err)
}
id, err := result.LastInsertId()
if err != nil {
return nil, fmt.Errorf("db: create SSO client last insert id: %w", err)
}
createdAt, err := parseTime(n)
if err != nil {
return nil, err
}
return &model.SSOClient{
ID: id,
ClientID: clientID,
RedirectURI: redirectURI,
Tags: tags,
Enabled: true,
CreatedBy: createdBy,
CreatedAt: createdAt,
UpdatedAt: createdAt,
}, nil
}
// GetSSOClient retrieves an SSO client by client_id.
// Returns ErrNotFound if no such client exists.
func (db *DB) GetSSOClient(clientID string) (*model.SSOClient, error) {
return scanSSOClient(db.sql.QueryRow(`
SELECT `+ssoClientCols+`
FROM sso_clients WHERE client_id = ?
`, clientID))
}
// ListSSOClients returns all SSO clients ordered by client_id.
func (db *DB) ListSSOClients() ([]*model.SSOClient, error) {
rows, err := db.sql.Query(`
SELECT ` + ssoClientCols + `
FROM sso_clients ORDER BY client_id ASC
`)
if err != nil {
return nil, fmt.Errorf("db: list SSO clients: %w", err)
}
defer func() { _ = rows.Close() }()
var clients []*model.SSOClient
for rows.Next() {
c, err := scanSSOClientRow(rows)
if err != nil {
return nil, err
}
clients = append(clients, c)
}
return clients, rows.Err()
}
// UpdateSSOClient updates the mutable fields of an SSO client.
// Only non-nil fields are changed.
func (db *DB) UpdateSSOClient(clientID string, redirectURI *string, tags *[]string, enabled *bool) error {
n := now()
setClauses := "updated_at = ?"
args := []interface{}{n}
if redirectURI != nil {
if !strings.HasPrefix(*redirectURI, "https://") {
return fmt.Errorf("db: redirect_uri must start with https://")
}
setClauses += ", redirect_uri = ?"
args = append(args, *redirectURI)
}
if tags != nil {
tagsJSON, err := json.Marshal(*tags)
if err != nil {
return fmt.Errorf("db: marshal tags: %w", err)
}
setClauses += ", tags_json = ?"
args = append(args, string(tagsJSON))
}
if enabled != nil {
enabledInt := 0
if *enabled {
enabledInt = 1
}
setClauses += ", enabled = ?"
args = append(args, enabledInt)
}
args = append(args, clientID)
res, err := db.sql.Exec(`UPDATE sso_clients SET `+setClauses+` WHERE client_id = ?`, args...)
if err != nil {
return fmt.Errorf("db: update SSO client %s: %w", clientID, err)
}
n2, _ := res.RowsAffected()
if n2 == 0 {
return ErrNotFound
}
return nil
}
// DeleteSSOClient removes an SSO client by client_id.
func (db *DB) DeleteSSOClient(clientID string) error {
res, err := db.sql.Exec(`DELETE FROM sso_clients WHERE client_id = ?`, clientID)
if err != nil {
return fmt.Errorf("db: delete SSO client %s: %w", clientID, err)
}
n, _ := res.RowsAffected()
if n == 0 {
return ErrNotFound
}
return nil
}
// scanSSOClient scans a single SSO client from a *sql.Row.
func scanSSOClient(row *sql.Row) (*model.SSOClient, error) {
var c model.SSOClient
var enabledInt int
var tagsJSON, createdAtStr, updatedAtStr string
var createdBy *int64
err := row.Scan(&c.ID, &c.ClientID, &c.RedirectURI, &tagsJSON,
&enabledInt, &createdBy, &createdAtStr, &updatedAtStr)
if errors.Is(err, sql.ErrNoRows) {
return nil, ErrNotFound
}
if err != nil {
return nil, fmt.Errorf("db: scan SSO client: %w", err)
}
return finishSSOClientScan(&c, enabledInt, createdBy, tagsJSON, createdAtStr, updatedAtStr)
}
// scanSSOClientRow scans a single SSO client from *sql.Rows.
func scanSSOClientRow(rows *sql.Rows) (*model.SSOClient, error) {
var c model.SSOClient
var enabledInt int
var tagsJSON, createdAtStr, updatedAtStr string
var createdBy *int64
err := rows.Scan(&c.ID, &c.ClientID, &c.RedirectURI, &tagsJSON,
&enabledInt, &createdBy, &createdAtStr, &updatedAtStr)
if err != nil {
return nil, fmt.Errorf("db: scan SSO client row: %w", err)
}
return finishSSOClientScan(&c, enabledInt, createdBy, tagsJSON, createdAtStr, updatedAtStr)
}
func finishSSOClientScan(c *model.SSOClient, enabledInt int, createdBy *int64, tagsJSON, createdAtStr, updatedAtStr string) (*model.SSOClient, error) {
c.Enabled = enabledInt == 1
c.CreatedBy = createdBy
var err error
if c.CreatedAt, err = parseTime(createdAtStr); err != nil {
return nil, err
}
if c.UpdatedAt, err = parseTime(updatedAtStr); err != nil {
return nil, err
}
if err := json.Unmarshal([]byte(tagsJSON), &c.Tags); err != nil {
return nil, fmt.Errorf("db: unmarshal SSO client tags: %w", err)
}
if c.Tags == nil {
c.Tags = []string{}
}
return c, nil
}
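UpdateSSOClient builds its SQL incrementally: the `SET` string always starts with `updated_at = ?`, and each non-nil optional field appends one clause and one argument, with `client_id` appended last for the `WHERE`. A self-contained sketch of that pattern (function and values here are illustrative, not the actual package API):

```go
package main

import "fmt"

// buildUpdate mirrors the partial-update pattern in UpdateSSOClient:
// start from the always-present updated_at clause, append a SET clause
// and argument per non-nil optional field, then append the WHERE value.
func buildUpdate(now, clientID string, redirectURI *string, enabled *bool) (string, []interface{}) {
	set := "updated_at = ?"
	args := []interface{}{now}
	if redirectURI != nil {
		set += ", redirect_uri = ?"
		args = append(args, *redirectURI)
	}
	if enabled != nil {
		v := 0
		if *enabled {
			v = 1
		}
		set += ", enabled = ?"
		args = append(args, v)
	}
	args = append(args, clientID)
	return "UPDATE sso_clients SET " + set + " WHERE client_id = ?", args
}

func main() {
	uri := "https://svc.example.com/cb"
	q, args := buildUpdate("2026-04-01T00:00:00Z", "mcr", &uri, nil)
	fmt.Println(q)         // UPDATE sso_clients SET updated_at = ?, redirect_uri = ? WHERE client_id = ?
	fmt.Println(len(args)) // 3
}
```

Because the clause and argument lists grow in lockstep, placeholder count always matches argument count, and untouched fields never appear in the statement.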


@@ -0,0 +1,192 @@
package db
import (
"errors"
"testing"
)
func TestCreateAndGetSSOClient(t *testing.T) {
db := openTestDB(t)
c, err := db.CreateSSOClient("mcr", "https://mcr.example.com/sso/callback", []string{"env:prod"}, nil)
if err != nil {
t.Fatalf("CreateSSOClient: %v", err)
}
if c.ID == 0 {
t.Error("expected non-zero ID")
}
if c.ClientID != "mcr" {
t.Errorf("client_id = %q, want %q", c.ClientID, "mcr")
}
if !c.Enabled {
t.Error("new client should be enabled by default")
}
if len(c.Tags) != 1 || c.Tags[0] != "env:prod" {
t.Errorf("tags = %v, want [env:prod]", c.Tags)
}
got, err := db.GetSSOClient("mcr")
if err != nil {
t.Fatalf("GetSSOClient: %v", err)
}
if got.RedirectURI != "https://mcr.example.com/sso/callback" {
t.Errorf("redirect_uri = %q", got.RedirectURI)
}
if len(got.Tags) != 1 || got.Tags[0] != "env:prod" {
t.Errorf("tags = %v after round-trip", got.Tags)
}
}
func TestCreateSSOClient_DuplicateClientID(t *testing.T) {
db := openTestDB(t)
_, err := db.CreateSSOClient("mcr", "https://mcr.example.com/cb", nil, nil)
if err != nil {
t.Fatalf("first create: %v", err)
}
_, err = db.CreateSSOClient("mcr", "https://other.example.com/cb", nil, nil)
if err == nil {
t.Error("expected error for duplicate client_id")
}
}
func TestCreateSSOClient_Validation(t *testing.T) {
db := openTestDB(t)
_, err := db.CreateSSOClient("", "https://example.com/cb", nil, nil)
if err == nil {
t.Error("expected error for empty client_id")
}
_, err = db.CreateSSOClient("mcr", "http://example.com/cb", nil, nil)
if err == nil {
t.Error("expected error for non-https redirect_uri")
}
}
func TestGetSSOClient_NotFound(t *testing.T) {
db := openTestDB(t)
_, err := db.GetSSOClient("nonexistent")
if !errors.Is(err, ErrNotFound) {
t.Errorf("expected ErrNotFound, got %v", err)
}
}
func TestListSSOClients(t *testing.T) {
db := openTestDB(t)
clients, err := db.ListSSOClients()
if err != nil {
t.Fatalf("ListSSOClients (empty): %v", err)
}
if len(clients) != 0 {
t.Errorf("expected 0 clients, got %d", len(clients))
}
_, _ = db.CreateSSOClient("mcat", "https://mcat.example.com/cb", nil, nil)
_, _ = db.CreateSSOClient("mcr", "https://mcr.example.com/cb", nil, nil)
clients, err = db.ListSSOClients()
if err != nil {
t.Fatalf("ListSSOClients: %v", err)
}
if len(clients) != 2 {
t.Fatalf("expected 2 clients, got %d", len(clients))
}
// Ordered by client_id ASC.
if clients[0].ClientID != "mcat" {
t.Errorf("first client = %q, want %q", clients[0].ClientID, "mcat")
}
}
func TestUpdateSSOClient(t *testing.T) {
db := openTestDB(t)
_, err := db.CreateSSOClient("mcr", "https://mcr.example.com/cb", []string{"a"}, nil)
if err != nil {
t.Fatalf("create: %v", err)
}
newURI := "https://mcr.example.com/sso/callback"
newTags := []string{"b", "c"}
disabled := false
if err := db.UpdateSSOClient("mcr", &newURI, &newTags, &disabled); err != nil {
t.Fatalf("UpdateSSOClient: %v", err)
}
got, err := db.GetSSOClient("mcr")
if err != nil {
t.Fatalf("get after update: %v", err)
}
if got.RedirectURI != newURI {
t.Errorf("redirect_uri = %q, want %q", got.RedirectURI, newURI)
}
if len(got.Tags) != 2 || got.Tags[0] != "b" {
t.Errorf("tags = %v, want [b c]", got.Tags)
}
if got.Enabled {
t.Error("expected enabled=false after update")
}
}
func TestUpdateSSOClient_NotFound(t *testing.T) {
db := openTestDB(t)
uri := "https://x.example.com/cb"
err := db.UpdateSSOClient("nonexistent", &uri, nil, nil)
if !errors.Is(err, ErrNotFound) {
t.Errorf("expected ErrNotFound, got %v", err)
}
}
func TestDeleteSSOClient(t *testing.T) {
db := openTestDB(t)
_, err := db.CreateSSOClient("mcr", "https://mcr.example.com/cb", nil, nil)
if err != nil {
t.Fatalf("create: %v", err)
}
if err := db.DeleteSSOClient("mcr"); err != nil {
t.Fatalf("DeleteSSOClient: %v", err)
}
_, err = db.GetSSOClient("mcr")
if !errors.Is(err, ErrNotFound) {
t.Errorf("expected ErrNotFound after delete, got %v", err)
}
}
func TestDeleteSSOClient_NotFound(t *testing.T) {
db := openTestDB(t)
err := db.DeleteSSOClient("nonexistent")
if !errors.Is(err, ErrNotFound) {
t.Errorf("expected ErrNotFound, got %v", err)
}
}
func TestCreateSSOClient_NilTags(t *testing.T) {
db := openTestDB(t)
c, err := db.CreateSSOClient("mcr", "https://mcr.example.com/cb", nil, nil)
if err != nil {
t.Fatalf("create: %v", err)
}
if c.Tags == nil {
t.Error("Tags should be empty slice, not nil")
}
if len(c.Tags) != 0 {
t.Errorf("expected 0 tags, got %d", len(c.Tags))
}
got, err := db.GetSSOClient("mcr")
if err != nil {
t.Fatalf("get: %v", err)
}
if got.Tags == nil || len(got.Tags) != 0 {
t.Errorf("Tags round-trip: got %v", got.Tags)
}
}


@@ -118,6 +118,7 @@ func (s *Server) buildServer(extra ...grpc.ServerOption) *grpc.Server {
mciasv1.RegisterAccountServiceServer(srv, &accountServiceServer{s: s})
mciasv1.RegisterCredentialServiceServer(srv, &credentialServiceServer{s: s})
mciasv1.RegisterPolicyServiceServer(srv, &policyServiceServer{s: s})
mciasv1.RegisterSSOClientServiceServer(srv, &ssoClientServiceServer{s: s})
return srv
}


@@ -0,0 +1,187 @@
// ssoClientServiceServer implements mciasv1.SSOClientServiceServer.
// All handlers are admin-only and delegate to the same db package used by
// the REST SSO client handlers in internal/server/handlers_sso_clients.go.
package grpcserver
import (
"context"
"errors"
"fmt"
"time"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
mciasv1 "git.wntrmute.dev/mc/mcias/gen/mcias/v1"
"git.wntrmute.dev/mc/mcias/internal/db"
"git.wntrmute.dev/mc/mcias/internal/model"
)
type ssoClientServiceServer struct {
mciasv1.UnimplementedSSOClientServiceServer
s *Server
}
func ssoClientToProto(c *model.SSOClient) *mciasv1.SSOClient {
return &mciasv1.SSOClient{
ClientId: c.ClientID,
RedirectUri: c.RedirectURI,
Tags: c.Tags,
Enabled: c.Enabled,
CreatedAt: c.CreatedAt.UTC().Format(time.RFC3339),
UpdatedAt: c.UpdatedAt.UTC().Format(time.RFC3339),
}
}
func (ss *ssoClientServiceServer) ListSSOClients(ctx context.Context, _ *mciasv1.ListSSOClientsRequest) (*mciasv1.ListSSOClientsResponse, error) {
if err := ss.s.requireAdmin(ctx); err != nil {
return nil, err
}
clients, err := ss.s.db.ListSSOClients()
if err != nil {
ss.s.logger.Error("list SSO clients", "error", err)
return nil, status.Error(codes.Internal, "internal error")
}
resp := &mciasv1.ListSSOClientsResponse{
Clients: make([]*mciasv1.SSOClient, 0, len(clients)),
}
for _, c := range clients {
resp.Clients = append(resp.Clients, ssoClientToProto(c))
}
return resp, nil
}
func (ss *ssoClientServiceServer) CreateSSOClient(ctx context.Context, req *mciasv1.CreateSSOClientRequest) (*mciasv1.CreateSSOClientResponse, error) {
if err := ss.s.requireAdmin(ctx); err != nil {
return nil, err
}
if req.ClientId == "" {
return nil, status.Error(codes.InvalidArgument, "client_id is required")
}
if req.RedirectUri == "" {
return nil, status.Error(codes.InvalidArgument, "redirect_uri is required")
}
claims := claimsFromContext(ctx)
var createdBy *int64
if claims != nil {
if actor, err := ss.s.db.GetAccountByUUID(claims.Subject); err == nil {
createdBy = &actor.ID
}
}
c, err := ss.s.db.CreateSSOClient(req.ClientId, req.RedirectUri, req.Tags, createdBy)
if err != nil {
ss.s.logger.Error("create SSO client", "error", err)
return nil, status.Error(codes.InvalidArgument, err.Error())
}
ss.s.db.WriteAuditEvent(model.EventSSOClientCreated, createdBy, nil, peerIP(ctx), //nolint:errcheck
fmt.Sprintf(`{"client_id":%q}`, c.ClientID))
return &mciasv1.CreateSSOClientResponse{Client: ssoClientToProto(c)}, nil
}
func (ss *ssoClientServiceServer) GetSSOClient(ctx context.Context, req *mciasv1.GetSSOClientRequest) (*mciasv1.GetSSOClientResponse, error) {
if err := ss.s.requireAdmin(ctx); err != nil {
return nil, err
}
if req.ClientId == "" {
return nil, status.Error(codes.InvalidArgument, "client_id is required")
}
c, err := ss.s.db.GetSSOClient(req.ClientId)
if errors.Is(err, db.ErrNotFound) {
return nil, status.Error(codes.NotFound, "SSO client not found")
}
if err != nil {
ss.s.logger.Error("get SSO client", "error", err)
return nil, status.Error(codes.Internal, "internal error")
}
return &mciasv1.GetSSOClientResponse{Client: ssoClientToProto(c)}, nil
}
func (ss *ssoClientServiceServer) UpdateSSOClient(ctx context.Context, req *mciasv1.UpdateSSOClientRequest) (*mciasv1.UpdateSSOClientResponse, error) {
if err := ss.s.requireAdmin(ctx); err != nil {
return nil, err
}
if req.ClientId == "" {
return nil, status.Error(codes.InvalidArgument, "client_id is required")
}
var redirectURI *string
if req.RedirectUri != nil {
v := req.GetRedirectUri()
redirectURI = &v
}
var tags *[]string
if req.UpdateTags {
t := req.Tags
tags = &t
}
var enabled *bool
if req.Enabled != nil {
v := req.GetEnabled()
enabled = &v
}
err := ss.s.db.UpdateSSOClient(req.ClientId, redirectURI, tags, enabled)
if errors.Is(err, db.ErrNotFound) {
return nil, status.Error(codes.NotFound, "SSO client not found")
}
if err != nil {
ss.s.logger.Error("update SSO client", "error", err)
return nil, status.Error(codes.InvalidArgument, err.Error())
}
claims := claimsFromContext(ctx)
var actorID *int64
if claims != nil {
if actor, err := ss.s.db.GetAccountByUUID(claims.Subject); err == nil {
actorID = &actor.ID
}
}
ss.s.db.WriteAuditEvent(model.EventSSOClientUpdated, actorID, nil, peerIP(ctx), //nolint:errcheck
fmt.Sprintf(`{"client_id":%q}`, req.ClientId))
updated, err := ss.s.db.GetSSOClient(req.ClientId)
if err != nil {
return nil, status.Error(codes.Internal, "internal error")
}
return &mciasv1.UpdateSSOClientResponse{Client: ssoClientToProto(updated)}, nil
}
func (ss *ssoClientServiceServer) DeleteSSOClient(ctx context.Context, req *mciasv1.DeleteSSOClientRequest) (*mciasv1.DeleteSSOClientResponse, error) {
if err := ss.s.requireAdmin(ctx); err != nil {
return nil, err
}
if req.ClientId == "" {
return nil, status.Error(codes.InvalidArgument, "client_id is required")
}
err := ss.s.db.DeleteSSOClient(req.ClientId)
if errors.Is(err, db.ErrNotFound) {
return nil, status.Error(codes.NotFound, "SSO client not found")
}
if err != nil {
ss.s.logger.Error("delete SSO client", "error", err)
return nil, status.Error(codes.Internal, "internal error")
}
claims := claimsFromContext(ctx)
var actorID *int64
if claims != nil {
if actor, err := ss.s.db.GetAccountByUUID(claims.Subject); err == nil {
actorID = &actor.ID
}
}
ss.s.db.WriteAuditEvent(model.EventSSOClientDeleted, actorID, nil, peerIP(ctx), //nolint:errcheck
fmt.Sprintf(`{"client_id":%q}`, req.ClientId))
return &mciasv1.DeleteSSOClientResponse{}, nil
}


@@ -218,8 +218,29 @@ const (
EventWebAuthnRemoved = "webauthn_removed"
EventWebAuthnLoginOK = "webauthn_login_ok"
EventWebAuthnLoginFail = "webauthn_login_fail"
EventSSOAuthorize = "sso_authorize"
EventSSOLoginOK = "sso_login_ok"
EventSSOClientCreated = "sso_client_created"
EventSSOClientUpdated = "sso_client_updated"
EventSSOClientDeleted = "sso_client_deleted"
)
// SSOClient represents a registered relying-party application that may use
// the MCIAS SSO authorization code flow. The ClientID serves as both the
// unique identifier and the service_name for policy evaluation.
type SSOClient struct {
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CreatedBy *int64 `json:"-"`
ClientID string `json:"client_id"`
RedirectURI string `json:"redirect_uri"`
Tags []string `json:"tags"`
ID int64 `json:"-"`
Enabled bool `json:"enabled"`
}
// ServiceAccountDelegate records that a specific account has been granted
// permission to issue tokens for a given system account. Only admins can
// add or remove delegates; delegates can issue/rotate tokens for that specific


@@ -51,6 +51,8 @@ const (
ActionEnrollWebAuthn Action = "webauthn:enroll" // self-service
ActionRemoveWebAuthn Action = "webauthn:remove" // admin
ActionManageSSOClients Action = "sso_clients:manage" // admin
)
// ResourceType identifies what kind of object a request targets.
@@ -64,6 +66,7 @@ const (
ResourceTOTP ResourceType = "totp"
ResourcePolicy ResourceType = "policy"
ResourceWebAuthn ResourceType = "webauthn"
ResourceSSOClient ResourceType = "sso_client"
)
// Effect is the outcome of policy evaluation.


@@ -0,0 +1,145 @@
package server
import (
"net/http"
"git.wntrmute.dev/mc/mcias/internal/audit"
"git.wntrmute.dev/mc/mcias/internal/middleware"
"git.wntrmute.dev/mc/mcias/internal/model"
"git.wntrmute.dev/mc/mcias/internal/policy"
"git.wntrmute.dev/mc/mcias/internal/sso"
"git.wntrmute.dev/mc/mcias/internal/token"
)
// ssoTokenRequest is the request body for POST /v1/sso/token.
type ssoTokenRequest struct {
Code string `json:"code"`
ClientID string `json:"client_id"`
RedirectURI string `json:"redirect_uri"`
}
// handleSSOTokenExchange exchanges an SSO authorization code for a JWT token.
//
// Security design:
// - The authorization code is single-use (consumed via LoadAndDelete).
// - The client_id and redirect_uri must match the values stored when the code
// was issued, preventing a stolen code from being exchanged by a different
// service.
// - Policy evaluation uses the service_name and tags from the registered SSO
// client config (not from the request), preventing identity spoofing.
// - The code expires after 60 seconds to limit the interception window.
func (s *Server) handleSSOTokenExchange(w http.ResponseWriter, r *http.Request) {
var req ssoTokenRequest
if !decodeJSON(w, r, &req) {
return
}
if req.Code == "" || req.ClientID == "" || req.RedirectURI == "" {
middleware.WriteError(w, http.StatusBadRequest, "code, client_id, and redirect_uri are required", "bad_request")
return
}
// Consume the authorization code (single-use).
ac, ok := sso.Consume(req.Code)
if !ok {
middleware.WriteError(w, http.StatusUnauthorized, "invalid or expired authorization code", "invalid_code")
return
}
// Security: verify client_id and redirect_uri match the stored values.
if ac.ClientID != req.ClientID || ac.RedirectURI != req.RedirectURI {
s.logger.Warn("sso: token exchange parameter mismatch",
"expected_client", ac.ClientID, "got_client", req.ClientID)
middleware.WriteError(w, http.StatusUnauthorized, "invalid or expired authorization code", "invalid_code")
return
}
// Look up the registered SSO client from the database for policy context.
client, clientErr := s.db.GetSSOClient(req.ClientID)
if clientErr != nil {
// Should not happen if the authorize endpoint validated, but defend in depth.
middleware.WriteError(w, http.StatusUnauthorized, "unknown client", "invalid_code")
return
}
if !client.Enabled {
middleware.WriteError(w, http.StatusForbidden, "SSO client is disabled", "client_disabled")
return
}
// Load account.
acct, err := s.db.GetAccountByID(ac.AccountID)
if err != nil {
s.logger.Error("sso: load account for token exchange", "error", err, "account_id", ac.AccountID)
middleware.WriteError(w, http.StatusInternalServerError, "internal error", "internal_error")
return
}
if acct.Status != model.AccountStatusActive {
middleware.WriteError(w, http.StatusForbidden, "account is not active", "account_inactive")
return
}
// Load roles for policy evaluation and expiry decision.
roles, err := s.db.GetRoles(acct.ID)
if err != nil {
middleware.WriteError(w, http.StatusInternalServerError, "internal error", "internal_error")
return
}
// Policy evaluation: client_id serves as both identifier and service_name.
{
input := policy.PolicyInput{
Subject: acct.UUID,
AccountType: string(acct.AccountType),
Roles: roles,
Action: policy.ActionLogin,
Resource: policy.Resource{
ServiceName: client.ClientID,
Tags: client.Tags,
},
}
if effect, _ := s.polEng.Evaluate(input); effect == policy.Deny {
s.writeAudit(r, model.EventLoginFail, &acct.ID, nil,
audit.JSON("reason", "policy_deny", "service_name", client.ClientID, "via", "sso"))
middleware.WriteError(w, http.StatusForbidden, "access denied by policy", "policy_denied")
return
}
}
// Determine expiry.
expiry := s.cfg.DefaultExpiry()
for _, rol := range roles {
if rol == "admin" {
expiry = s.cfg.AdminExpiry()
break
}
}
privKey, err := s.vault.PrivKey()
if err != nil {
middleware.WriteError(w, http.StatusServiceUnavailable, "vault sealed", "vault_sealed")
return
}
tokenStr, claims, err := token.IssueToken(privKey, s.cfg.Tokens.Issuer, acct.UUID, roles, expiry)
if err != nil {
s.logger.Error("sso: issue token", "error", err)
middleware.WriteError(w, http.StatusInternalServerError, "internal error", "internal_error")
return
}
if err := s.db.TrackToken(claims.JTI, acct.ID, claims.IssuedAt, claims.ExpiresAt); err != nil {
s.logger.Error("sso: track token", "error", err)
middleware.WriteError(w, http.StatusInternalServerError, "internal error", "internal_error")
return
}
s.writeAudit(r, model.EventSSOLoginOK, &acct.ID, nil,
audit.JSON("jti", claims.JTI, "client_id", client.ClientID))
s.writeAudit(r, model.EventTokenIssued, &acct.ID, nil,
audit.JSON("jti", claims.JTI, "via", "sso"))
writeJSON(w, http.StatusOK, loginResponse{
Token: tokenStr,
ExpiresAt: claims.ExpiresAt.UTC().Format("2006-01-02T15:04:05Z"),
})
}


@@ -0,0 +1,175 @@
package server
import (
"errors"
"fmt"
"net/http"
"time"
"git.wntrmute.dev/mc/mcias/internal/db"
"git.wntrmute.dev/mc/mcias/internal/middleware"
"git.wntrmute.dev/mc/mcias/internal/model"
)
type ssoClientResponse struct {
ClientID string `json:"client_id"`
RedirectURI string `json:"redirect_uri"`
Tags []string `json:"tags"`
Enabled bool `json:"enabled"`
CreatedAt string `json:"created_at"`
UpdatedAt string `json:"updated_at"`
}
func ssoClientToResponse(c *model.SSOClient) ssoClientResponse {
return ssoClientResponse{
ClientID: c.ClientID,
RedirectURI: c.RedirectURI,
Tags: c.Tags,
Enabled: c.Enabled,
CreatedAt: c.CreatedAt.Format(time.RFC3339),
UpdatedAt: c.UpdatedAt.Format(time.RFC3339),
}
}
func (s *Server) handleListSSOClients(w http.ResponseWriter, r *http.Request) {
clients, err := s.db.ListSSOClients()
if err != nil {
middleware.WriteError(w, http.StatusInternalServerError, "internal error", "internal_error")
return
}
resp := make([]ssoClientResponse, 0, len(clients))
for _, c := range clients {
resp = append(resp, ssoClientToResponse(c))
}
writeJSON(w, http.StatusOK, resp)
}
type createSSOClientRequest struct {
ClientID string `json:"client_id"`
RedirectURI string `json:"redirect_uri"`
Tags []string `json:"tags"`
}
func (s *Server) handleCreateSSOClient(w http.ResponseWriter, r *http.Request) {
var req createSSOClientRequest
if !decodeJSON(w, r, &req) {
return
}
if req.ClientID == "" {
middleware.WriteError(w, http.StatusBadRequest, "client_id is required", "bad_request")
return
}
if req.RedirectURI == "" {
middleware.WriteError(w, http.StatusBadRequest, "redirect_uri is required", "bad_request")
return
}
claims := middleware.ClaimsFromContext(r.Context())
var createdBy *int64
if claims != nil {
if actor, err := s.db.GetAccountByUUID(claims.Subject); err == nil {
createdBy = &actor.ID
}
}
c, err := s.db.CreateSSOClient(req.ClientID, req.RedirectURI, req.Tags, createdBy)
if err != nil {
s.logger.Error("create SSO client", "error", err)
middleware.WriteError(w, http.StatusBadRequest, err.Error(), "bad_request")
return
}
s.writeAudit(r, model.EventSSOClientCreated, createdBy, nil,
fmt.Sprintf(`{"client_id":%q}`, c.ClientID))
writeJSON(w, http.StatusCreated, ssoClientToResponse(c))
}
func (s *Server) handleGetSSOClient(w http.ResponseWriter, r *http.Request) {
clientID := r.PathValue("clientId")
c, err := s.db.GetSSOClient(clientID)
if errors.Is(err, db.ErrNotFound) {
middleware.WriteError(w, http.StatusNotFound, "SSO client not found", "not_found")
return
}
if err != nil {
middleware.WriteError(w, http.StatusInternalServerError, "internal error", "internal_error")
return
}
writeJSON(w, http.StatusOK, ssoClientToResponse(c))
}
type updateSSOClientRequest struct {
RedirectURI *string `json:"redirect_uri,omitempty"`
Tags *[]string `json:"tags,omitempty"`
Enabled *bool `json:"enabled,omitempty"`
}
func (s *Server) handleUpdateSSOClient(w http.ResponseWriter, r *http.Request) {
clientID := r.PathValue("clientId")
var req updateSSOClientRequest
if !decodeJSON(w, r, &req) {
return
}
err := s.db.UpdateSSOClient(clientID, req.RedirectURI, req.Tags, req.Enabled)
if errors.Is(err, db.ErrNotFound) {
middleware.WriteError(w, http.StatusNotFound, "SSO client not found", "not_found")
return
}
if err != nil {
s.logger.Error("update SSO client", "error", err)
middleware.WriteError(w, http.StatusBadRequest, err.Error(), "bad_request")
return
}
claims := middleware.ClaimsFromContext(r.Context())
var actorID *int64
if claims != nil {
if actor, err := s.db.GetAccountByUUID(claims.Subject); err == nil {
actorID = &actor.ID
}
}
s.writeAudit(r, model.EventSSOClientUpdated, actorID, nil,
fmt.Sprintf(`{"client_id":%q}`, clientID))
c, err := s.db.GetSSOClient(clientID)
if err != nil {
middleware.WriteError(w, http.StatusInternalServerError, "internal error", "internal_error")
return
}
writeJSON(w, http.StatusOK, ssoClientToResponse(c))
}
func (s *Server) handleDeleteSSOClient(w http.ResponseWriter, r *http.Request) {
clientID := r.PathValue("clientId")
err := s.db.DeleteSSOClient(clientID)
if errors.Is(err, db.ErrNotFound) {
middleware.WriteError(w, http.StatusNotFound, "SSO client not found", "not_found")
return
}
if err != nil {
middleware.WriteError(w, http.StatusInternalServerError, "internal error", "internal_error")
return
}
claims := middleware.ClaimsFromContext(r.Context())
var actorID *int64
if claims != nil {
if actor, err := s.db.GetAccountByUUID(claims.Subject); err == nil {
actorID = &actor.ID
}
}
s.writeAudit(r, model.EventSSOClientDeleted, actorID, nil,
fmt.Sprintf(`{"client_id":%q}`, clientID))
w.WriteHeader(http.StatusNoContent)
}


@@ -215,6 +215,7 @@ func (s *Server) Handler() http.Handler {
mux.HandleFunc("GET /v1/health", s.handleHealth)
mux.HandleFunc("GET /v1/keys/public", s.handlePublicKey)
mux.Handle("POST /v1/auth/login", loginRateLimit(http.HandlerFunc(s.handleLogin)))
mux.Handle("POST /v1/sso/token", loginRateLimit(http.HandlerFunc(s.handleSSOTokenExchange)))
mux.Handle("POST /v1/token/validate", loginRateLimit(http.HandlerFunc(s.handleTokenValidate)))
// API documentation: Swagger UI at /docs and raw spec at /docs/openapi.yaml.
@@ -372,6 +373,18 @@ func (s *Server) Handler() http.Handler {
mux.Handle("DELETE /v1/policy/rules/{id}",
requirePolicy(policy.ActionManageRules, policy.ResourcePolicy, nil)(http.HandlerFunc(s.handleDeletePolicyRule)))
// SSO client management (admin-only).
mux.Handle("GET /v1/sso/clients",
requirePolicy(policy.ActionManageSSOClients, policy.ResourceSSOClient, nil)(http.HandlerFunc(s.handleListSSOClients)))
mux.Handle("POST /v1/sso/clients",
requirePolicy(policy.ActionManageSSOClients, policy.ResourceSSOClient, nil)(http.HandlerFunc(s.handleCreateSSOClient)))
mux.Handle("GET /v1/sso/clients/{clientId}",
requirePolicy(policy.ActionManageSSOClients, policy.ResourceSSOClient, nil)(http.HandlerFunc(s.handleGetSSOClient)))
mux.Handle("PATCH /v1/sso/clients/{clientId}",
requirePolicy(policy.ActionManageSSOClients, policy.ResourceSSOClient, nil)(http.HandlerFunc(s.handleUpdateSSOClient)))
mux.Handle("DELETE /v1/sso/clients/{clientId}",
requirePolicy(policy.ActionManageSSOClients, policy.ResourceSSOClient, nil)(http.HandlerFunc(s.handleDeleteSSOClient)))
// UI routes (HTMX-based management frontend).
uiSrv, err := ui.New(s.db, s.cfg, s.vault, s.logger)
if err != nil {

internal/sso/session.go

@@ -0,0 +1,91 @@
package sso
import (
"fmt"
"sync"
"time"
"git.wntrmute.dev/mc/mcias/internal/crypto"
)
const (
sessionTTL = 5 * time.Minute
sessionBytes = 16 // 128 bits of entropy for the nonce
)
// Session holds the SSO parameters between /sso/authorize and login completion.
// The nonce is embedded as a hidden form field in the login page and carried
// through the multi-step login flow (password → TOTP, or WebAuthn).
type Session struct { //nolint:govet // fieldalignment: field order matches logical grouping
ClientID string
RedirectURI string
State string
ExpiresAt time.Time
}
// pendingSessions stores SSO sessions created at /sso/authorize.
var pendingSessions sync.Map //nolint:gochecknoglobals
func init() {
go cleanupSessions()
}
func cleanupSessions() {
ticker := time.NewTicker(cleanupPeriod)
defer ticker.Stop()
for range ticker.C {
now := time.Now()
pendingSessions.Range(func(key, value any) bool {
s, ok := value.(*Session)
if !ok || now.After(s.ExpiresAt) {
pendingSessions.Delete(key)
}
return true
})
}
}
// StoreSession creates and stores a new SSO session, returning the hex-encoded
// nonce that should be embedded in the login form.
func StoreSession(clientID, redirectURI, state string) (string, error) {
raw, err := crypto.RandomBytes(sessionBytes)
if err != nil {
return "", fmt.Errorf("sso: generate session nonce: %w", err)
}
nonce := fmt.Sprintf("%x", raw)
pendingSessions.Store(nonce, &Session{
ClientID: clientID,
RedirectURI: redirectURI,
State: state,
ExpiresAt: time.Now().Add(sessionTTL),
})
return nonce, nil
}
// ConsumeSession retrieves and deletes an SSO session by nonce.
// Returns the Session and true if valid, or (nil, false) if unknown or expired.
func ConsumeSession(nonce string) (*Session, bool) {
v, ok := pendingSessions.LoadAndDelete(nonce)
if !ok {
return nil, false
}
s, ok2 := v.(*Session)
if !ok2 || time.Now().After(s.ExpiresAt) {
return nil, false
}
return s, true
}
// GetSession retrieves an SSO session without consuming it (for read-only checks
// during multi-step login). Returns nil if unknown or expired.
func GetSession(nonce string) *Session {
v, ok := pendingSessions.Load(nonce)
if !ok {
return nil
}
s, ok2 := v.(*Session)
if !ok2 || time.Now().After(s.ExpiresAt) {
return nil
}
return s
}

internal/sso/store.go

@@ -0,0 +1,93 @@
// Package sso implements the authorization code store for the SSO redirect flow.
//
// MCIAS acts as the SSO provider: downstream services (MCR, MCAT, Metacrypt)
// redirect users to MCIAS for login, and MCIAS issues a short-lived, single-use
// authorization code that the service exchanges for a JWT token.
//
// Security design:
// - Authorization codes are 32 random bytes (256 bits), hex-encoded.
// - Codes are single-use: consumed via sync.Map LoadAndDelete on first exchange.
// - Codes expire after 60 seconds to limit the window for interception.
// - A background goroutine evicts expired codes every 5 minutes.
// - The code is bound to the client_id and redirect_uri presented at authorize
// time; the token exchange endpoint must verify both match.
package sso
import (
"fmt"
"sync"
"time"
"git.wntrmute.dev/mc/mcias/internal/crypto"
)
const (
codeTTL = 60 * time.Second
codeBytes = 32 // 256 bits of entropy
cleanupPeriod = 5 * time.Minute
)
// AuthCode is a pending authorization code awaiting exchange for a JWT.
type AuthCode struct { //nolint:govet // fieldalignment: field order matches logical grouping
ClientID string
RedirectURI string
State string
AccountID int64
ExpiresAt time.Time
}
// pendingCodes stores issued authorization codes awaiting exchange.
var pendingCodes sync.Map //nolint:gochecknoglobals
func init() {
go cleanupCodes()
}
func cleanupCodes() {
ticker := time.NewTicker(cleanupPeriod)
defer ticker.Stop()
for range ticker.C {
now := time.Now()
pendingCodes.Range(func(key, value any) bool {
ac, ok := value.(*AuthCode)
if !ok || now.After(ac.ExpiresAt) {
pendingCodes.Delete(key)
}
return true
})
}
}
// Store creates and stores a new authorization code bound to the given
// client_id, redirect_uri, state, and account. Returns the hex-encoded code.
func Store(clientID, redirectURI, state string, accountID int64) (string, error) {
raw, err := crypto.RandomBytes(codeBytes)
if err != nil {
return "", fmt.Errorf("sso: generate authorization code: %w", err)
}
code := fmt.Sprintf("%x", raw)
pendingCodes.Store(code, &AuthCode{
ClientID: clientID,
RedirectURI: redirectURI,
State: state,
AccountID: accountID,
ExpiresAt: time.Now().Add(codeTTL),
})
return code, nil
}
// Consume retrieves and deletes an authorization code. Returns the AuthCode
// and true if the code was valid and not expired, or (nil, false) otherwise.
//
// Security: LoadAndDelete ensures single-use; the code cannot be replayed.
func Consume(code string) (*AuthCode, bool) {
v, ok := pendingCodes.LoadAndDelete(code)
if !ok {
return nil, false
}
ac, ok2 := v.(*AuthCode)
if !ok2 || time.Now().After(ac.ExpiresAt) {
return nil, false
}
return ac, true
}

internal/sso/store_test.go

@@ -0,0 +1,132 @@
package sso
import (
"testing"
"time"
)
func TestStoreAndConsume(t *testing.T) {
code, err := Store("mcr", "https://mcr.example.com/cb", "state123", 42)
if err != nil {
t.Fatalf("Store: %v", err)
}
if code == "" {
t.Fatal("Store returned empty code")
}
ac, ok := Consume(code)
if !ok {
t.Fatal("Consume returned false for valid code")
}
if ac.ClientID != "mcr" {
t.Errorf("ClientID = %q, want %q", ac.ClientID, "mcr")
}
if ac.RedirectURI != "https://mcr.example.com/cb" {
t.Errorf("RedirectURI = %q", ac.RedirectURI)
}
if ac.State != "state123" {
t.Errorf("State = %q", ac.State)
}
if ac.AccountID != 42 {
t.Errorf("AccountID = %d, want 42", ac.AccountID)
}
}
func TestConsumeSingleUse(t *testing.T) {
code, err := Store("mcr", "https://mcr.example.com/cb", "s", 1)
if err != nil {
t.Fatalf("Store: %v", err)
}
if _, ok := Consume(code); !ok {
t.Fatal("first Consume should succeed")
}
if _, ok := Consume(code); ok {
t.Error("second Consume should fail (single-use)")
}
}
func TestConsumeUnknownCode(t *testing.T) {
if _, ok := Consume("nonexistent"); ok {
t.Error("Consume should fail for unknown code")
}
}
func TestConsumeExpiredCode(t *testing.T) {
code, err := Store("mcr", "https://mcr.example.com/cb", "s", 1)
if err != nil {
t.Fatalf("Store: %v", err)
}
// Manually expire the code.
v, loaded := pendingCodes.Load(code)
if !loaded {
t.Fatal("code not found in pendingCodes")
}
ac, ok := v.(*AuthCode)
if !ok {
t.Fatal("unexpected type in pendingCodes")
}
ac.ExpiresAt = time.Now().Add(-1 * time.Second)
if _, ok := Consume(code); ok {
t.Error("Consume should fail for expired code")
}
}
func TestStoreSessionAndConsume(t *testing.T) {
nonce, err := StoreSession("mcr", "https://mcr.example.com/cb", "state456")
if err != nil {
t.Fatalf("StoreSession: %v", err)
}
if nonce == "" {
t.Fatal("StoreSession returned empty nonce")
}
// GetSession should return it without consuming.
s := GetSession(nonce)
if s == nil {
t.Fatal("GetSession returned nil")
}
if s.ClientID != "mcr" {
t.Errorf("ClientID = %q", s.ClientID)
}
// Still available after GetSession.
s2, ok := ConsumeSession(nonce)
if !ok {
t.Fatal("ConsumeSession returned false")
}
if s2.State != "state456" {
t.Errorf("State = %q", s2.State)
}
// Consumed — should be gone.
if _, ok := ConsumeSession(nonce); ok {
t.Error("second ConsumeSession should fail")
}
if GetSession(nonce) != nil {
t.Error("GetSession should return nil after consume")
}
}
func TestConsumeSessionExpired(t *testing.T) {
nonce, err := StoreSession("mcr", "https://mcr.example.com/cb", "s")
if err != nil {
t.Fatalf("StoreSession: %v", err)
}
v, loaded := pendingSessions.Load(nonce)
if !loaded {
t.Fatal("session not found in pendingSessions")
}
sess, ok := v.(*Session)
if !ok {
t.Fatal("unexpected type in pendingSessions")
}
sess.ExpiresAt = time.Now().Add(-1 * time.Second)
if _, ok := ConsumeSession(nonce); ok {
t.Error("ConsumeSession should fail for expired session")
}
}


@@ -15,6 +15,7 @@ import (
func (u *UIServer) handleLoginPage(w http.ResponseWriter, r *http.Request) {
u.render(w, "login", LoginData{
WebAuthnEnabled: u.cfg.WebAuthnEnabled(),
SSONonce: r.URL.Query().Get("sso"),
})
}
@@ -97,6 +98,8 @@ func (u *UIServer) handleLoginPost(w http.ResponseWriter, r *http.Request) {
return
}
ssoNonce := r.FormValue("sso_nonce")
// TOTP required: issue a server-side nonce and show the TOTP step form.
// Security: the nonce replaces the password hidden field (F-02). The password
// is not stored anywhere after this point; only the account ID is retained.
@@ -110,11 +113,12 @@ func (u *UIServer) handleLoginPost(w http.ResponseWriter, r *http.Request) {
u.render(w, "totp_step", LoginData{
Username: username,
Nonce: nonce,
SSONonce: ssoNonce,
})
return
}
u.finishLogin(w, r, acct, ssoNonce)
}
// handleTOTPStep handles the second POST when totp_step=1 is set.
@@ -129,6 +133,7 @@ func (u *UIServer) handleTOTPStep(w http.ResponseWriter, r *http.Request) {
username := r.FormValue("username") //nolint:gosec // body already limited by caller
nonce := r.FormValue("totp_nonce") //nolint:gosec // body already limited by caller
totpCode := r.FormValue("totp_code") //nolint:gosec // body already limited by caller
ssoNonce := r.FormValue("sso_nonce") //nolint:gosec // body already limited by caller
// Security: consume the nonce (single-use); reject if unknown or expired.
accountID, ok := u.consumeTOTPNonce(nonce)
@@ -172,6 +177,7 @@ func (u *UIServer) handleTOTPStep(w http.ResponseWriter, r *http.Request) {
Error: "invalid TOTP code",
Username: username,
Nonce: newNonce,
SSONonce: ssoNonce,
})
return
}
@@ -189,15 +195,36 @@ func (u *UIServer) handleTOTPStep(w http.ResponseWriter, r *http.Request) {
Error: "invalid TOTP code",
Username: username,
Nonce: newNonce,
SSONonce: ssoNonce,
})
return
}
u.finishLogin(w, r, acct, ssoNonce)
}
// finishLogin issues a JWT, sets the session cookie, and redirects to dashboard.
// When ssoNonce is non-empty, the login is part of an SSO redirect flow: instead
// of setting a session cookie, an authorization code is issued and the user is
// redirected back to the service's callback URL.
func (u *UIServer) finishLogin(w http.ResponseWriter, r *http.Request, acct *model.Account, ssoNonce string) {
// SSO redirect flow: issue authorization code and redirect to service.
if ssoNonce != "" {
if callbackURL, ok := u.buildSSOCallback(r, ssoNonce, acct.ID); ok {
// Security: htmx follows 302 redirects via fetch, which fails
// cross-origin (no CORS on the service callback). Use HX-Redirect
// so htmx performs a full page navigation instead.
if isHTMX(r) {
w.Header().Set("HX-Redirect", callbackURL)
w.WriteHeader(http.StatusOK)
return
}
http.Redirect(w, r, callbackURL, http.StatusFound)
return
}
// SSO session expired/consumed — fall through to normal login.
}
// Determine token expiry based on admin role.
expiry := u.cfg.DefaultExpiry()
roles, err := u.db.GetRoles(acct.ID)


@@ -0,0 +1,90 @@
package ui
import (
"net/http"
"net/url"
"git.wntrmute.dev/mc/mcias/internal/audit"
"git.wntrmute.dev/mc/mcias/internal/model"
"git.wntrmute.dev/mc/mcias/internal/sso"
)
// handleSSOAuthorize validates the SSO request parameters against registered
// clients, creates an SSO session, and redirects to /login with the SSO nonce.
//
// Security: the client_id and redirect_uri are validated against the registered
// SSO clients in the database (exact match). The state parameter is opaque and carried through
// unchanged. An SSO session is created server-side so the nonce is the only
// value embedded in the login form.
func (u *UIServer) handleSSOAuthorize(w http.ResponseWriter, r *http.Request) {
clientID := r.URL.Query().Get("client_id")
redirectURI := r.URL.Query().Get("redirect_uri")
state := r.URL.Query().Get("state")
if clientID == "" || redirectURI == "" || state == "" {
http.Error(w, "missing required parameters: client_id, redirect_uri, state", http.StatusBadRequest)
return
}
// Security: validate client_id against registered SSO clients in the database.
client, err := u.db.GetSSOClient(clientID)
if err != nil {
u.logger.Warn("sso: unknown client_id", "client_id", clientID)
http.Error(w, "unknown client_id", http.StatusBadRequest)
return
}
if !client.Enabled {
u.logger.Warn("sso: disabled client", "client_id", clientID)
http.Error(w, "SSO client is disabled", http.StatusForbidden)
return
}
// Security: redirect_uri must exactly match the registered URI to prevent
// open-redirect attacks.
if redirectURI != client.RedirectURI {
u.logger.Warn("sso: redirect_uri mismatch",
"client_id", clientID,
"expected", client.RedirectURI,
"got", redirectURI)
http.Error(w, "redirect_uri does not match registered URI", http.StatusBadRequest)
return
}
nonce, err := sso.StoreSession(clientID, redirectURI, state)
if err != nil {
u.logger.Error("sso: store session", "error", err)
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
u.writeAudit(r, model.EventSSOAuthorize, nil, nil,
audit.JSON("client_id", clientID))
http.Redirect(w, r, "/login?sso="+url.QueryEscape(nonce), http.StatusFound)
}
// buildSSOCallback consumes the SSO session, generates an authorization code,
// and returns the callback URL with code and state parameters. Returns ("", false)
// if the SSO session is expired or already consumed.
//
// Security: the SSO session is consumed (single-use) and the authorization code
// is stored server-side for exchange via POST /v1/sso/token. The state parameter
// is carried through unchanged for the service to validate.
func (u *UIServer) buildSSOCallback(r *http.Request, ssoNonce string, accountID int64) (string, bool) {
sess, ok := sso.ConsumeSession(ssoNonce)
if !ok {
return "", false
}
code, err := sso.Store(sess.ClientID, sess.RedirectURI, sess.State, accountID)
if err != nil {
u.logger.Error("sso: store auth code", "error", err)
return "", false
}
u.writeAudit(r, model.EventSSOLoginOK, &accountID, nil,
audit.JSON("client_id", sess.ClientID))
return sess.RedirectURI + "?code=" + url.QueryEscape(code) + "&state=" + url.QueryEscape(sess.State), true
}
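From the downstream service's side, entering this flow is just a browser redirect to /sso/authorize with the three query parameters handleSSOAuthorize requires. A small sketch of assembling that URL (the parameter names and path match the handler above; the hosts and values are placeholders):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildAuthorizeURL assembles the redirect a downstream service sends the
// browser to. The /sso/authorize path and the client_id, redirect_uri, and
// state parameters match handleSSOAuthorize; base is the MCIAS origin.
func buildAuthorizeURL(base, clientID, redirectURI, state string) string {
	q := url.Values{}
	q.Set("client_id", clientID)
	q.Set("redirect_uri", redirectURI)
	q.Set("state", state)
	// url.Values.Encode sorts keys and percent-encodes values.
	return base + "/sso/authorize?" + q.Encode()
}

func main() {
	fmt.Println(buildAuthorizeURL("https://mcias.example.com", "mcr",
		"https://mcr.example.com/cb", "state123"))
	// → https://mcias.example.com/sso/authorize?client_id=mcr&redirect_uri=https%3A%2F%2Fmcr.example.com%2Fcb&state=state123
}
```

After login, MCIAS redirects back to the registered redirect_uri with code and state, and the service exchanges the code at POST /v1/sso/token, verifying that the returned state matches what it sent.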


@@ -0,0 +1,131 @@
package ui
import (
"fmt"
"net/http"
"strings"
"git.wntrmute.dev/mc/mcias/internal/audit"
"git.wntrmute.dev/mc/mcias/internal/model"
)
func (u *UIServer) handleSSOClientsPage(w http.ResponseWriter, r *http.Request) {
csrfToken, err := u.setCSRFCookies(w)
if err != nil {
u.logger.Error("sso-clients: set CSRF cookies", "error", err)
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
clients, err := u.db.ListSSOClients()
if err != nil {
u.logger.Error("sso-clients: list clients", "error", err)
u.renderError(w, r, http.StatusInternalServerError, "failed to load SSO clients")
return
}
u.render(w, "sso_clients", SSOClientsData{
PageData: PageData{CSRFToken: csrfToken, ActorName: u.actorName(r), IsAdmin: isAdmin(r)},
Clients: clients,
})
}
func (u *UIServer) handleCreateSSOClientUI(w http.ResponseWriter, r *http.Request) {
r.Body = http.MaxBytesReader(w, r.Body, maxFormBytes)
if err := r.ParseForm(); err != nil {
u.renderError(w, r, http.StatusBadRequest, "invalid form submission")
return
}
clientID := strings.TrimSpace(r.FormValue("client_id"))
redirectURI := strings.TrimSpace(r.FormValue("redirect_uri"))
tagsStr := strings.TrimSpace(r.FormValue("tags"))
if clientID == "" || redirectURI == "" {
u.renderError(w, r, http.StatusBadRequest, "client_id and redirect_uri are required")
return
}
var tags []string
if tagsStr != "" {
for _, t := range strings.Split(tagsStr, ",") {
t = strings.TrimSpace(t)
if t != "" {
tags = append(tags, t)
}
}
}
claims := claimsFromContext(r.Context())
var actorID *int64
if claims != nil {
if acct, err := u.db.GetAccountByUUID(claims.Subject); err == nil {
actorID = &acct.ID
}
}
c, err := u.db.CreateSSOClient(clientID, redirectURI, tags, actorID)
if err != nil {
u.renderError(w, r, http.StatusBadRequest, err.Error())
return
}
u.writeAudit(r, model.EventSSOClientCreated, actorID, nil,
audit.JSON("client_id", c.ClientID))
u.render(w, "sso_client_row", c)
}
func (u *UIServer) handleToggleSSOClient(w http.ResponseWriter, r *http.Request) {
clientID := r.PathValue("clientId")
r.Body = http.MaxBytesReader(w, r.Body, maxFormBytes)
if err := r.ParseForm(); err != nil {
u.renderError(w, r, http.StatusBadRequest, "invalid form")
return
}
enabled := r.FormValue("enabled") == "true"
if err := u.db.UpdateSSOClient(clientID, nil, nil, &enabled); err != nil {
u.renderError(w, r, http.StatusInternalServerError, "failed to update SSO client")
return
}
claims := claimsFromContext(r.Context())
var actorID *int64
if claims != nil {
if acct, err := u.db.GetAccountByUUID(claims.Subject); err == nil {
actorID = &acct.ID
}
}
u.writeAudit(r, model.EventSSOClientUpdated, actorID, nil,
fmt.Sprintf(`{"client_id":%q,"enabled":%v}`, clientID, enabled))
c, err := u.db.GetSSOClient(clientID)
if err != nil {
u.renderError(w, r, http.StatusInternalServerError, "failed to reload SSO client")
return
}
u.render(w, "sso_client_row", c)
}
func (u *UIServer) handleDeleteSSOClientUI(w http.ResponseWriter, r *http.Request) {
clientID := r.PathValue("clientId")
if err := u.db.DeleteSSOClient(clientID); err != nil {
u.renderError(w, r, http.StatusInternalServerError, "failed to delete SSO client")
return
}
claims := claimsFromContext(r.Context())
var actorID *int64
if claims != nil {
if acct, err := u.db.GetAccountByUUID(claims.Subject); err == nil {
actorID = &acct.ID
}
}
u.writeAudit(r, model.EventSSOClientDeleted, actorID, nil,
audit.JSON("client_id", clientID))
// Return empty response so HTMX removes the row.
w.WriteHeader(http.StatusOK)
}


@@ -27,10 +27,11 @@ const (
)
// webauthnCeremony holds a pending WebAuthn ceremony.
type webauthnCeremony struct { //nolint:govet // fieldalignment: field order matches logical grouping
expiresAt time.Time
session *libwebauthn.SessionData
accountID int64
ssoNonce string // non-empty when login is part of an SSO redirect flow
}
// pendingWebAuthnCeremonies stores in-flight WebAuthn ceremonies for the UI.
@@ -55,7 +56,7 @@ func cleanupUIWebAuthnCeremonies() {
}
}
func storeUICeremony(session *libwebauthn.SessionData, accountID int64, ssoNonce string) (string, error) {
raw, err := crypto.RandomBytes(webauthnNonceBytes)
if err != nil {
return "", fmt.Errorf("webauthn: generate ceremony nonce: %w", err)
@@ -64,6 +65,7 @@ func storeUICeremony(session *libwebauthn.SessionData, accountID int64) (string,
pendingUIWebAuthnCeremonies.Store(nonce, &webauthnCeremony{
session: session,
accountID: accountID,
ssoNonce: ssoNonce,
expiresAt: time.Now().Add(webauthnCeremonyTTL),
})
return nonce, nil
@@ -170,7 +172,7 @@ func (u *UIServer) handleWebAuthnBegin(w http.ResponseWriter, r *http.Request) {
return
}
nonce, err := storeUICeremony(session, acct.ID, "")
if err != nil {
writeJSONError(w, http.StatusInternalServerError, "internal error")
return
@@ -352,6 +354,7 @@ func (u *UIServer) handleWebAuthnLoginBegin(w http.ResponseWriter, r *http.Reque
r.Body = http.MaxBytesReader(w, r.Body, maxFormBytes)
var req struct {
Username string `json:"username"`
SSONonce string `json:"sso_nonce"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeJSONError(w, http.StatusBadRequest, "invalid JSON")
@@ -413,7 +416,7 @@ func (u *UIServer) handleWebAuthnLoginBegin(w http.ResponseWriter, r *http.Reque
return
}
nonce, err := storeUICeremony(session, accountID, req.SSONonce)
if err != nil {
writeJSONError(w, http.StatusInternalServerError, "internal error")
return
@@ -582,6 +585,17 @@ func (u *UIServer) handleWebAuthnLoginFinish(w http.ResponseWriter, r *http.Requ
_ = u.db.ClearLoginFailures(acct.ID)
// SSO redirect flow: issue authorization code and return redirect URL as JSON.
if ceremony.ssoNonce != "" {
if callbackURL, ok := u.buildSSOCallback(r, ceremony.ssoNonce, acct.ID); ok {
u.writeAudit(r, model.EventWebAuthnLoginOK, &acct.ID, nil, "")
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]string{"redirect": callbackURL})
return
}
// SSO session expired — fall through to normal login.
}
// Issue JWT and set session cookie.
expiry := u.cfg.DefaultExpiry()
roles, err := u.db.GetRoles(acct.ID)


@@ -267,6 +267,7 @@ func New(database *db.DB, cfg *config.Config, v *vault.Vault, logger *slog.Logge
"templates/fragments/webauthn_enroll.html",
"templates/fragments/totp_section.html",
"templates/fragments/totp_enroll_qr.html",
"templates/fragments/sso_client_row.html",
}
base, err := template.New("").Funcs(funcMap).ParseFS(web.TemplateFS, sharedFiles...)
if err != nil {
@@ -287,6 +288,7 @@ func New(database *db.DB, cfg *config.Config, v *vault.Vault, logger *slog.Logge
"profile": "templates/profile.html",
"unseal": "templates/unseal.html",
"service_accounts": "templates/service_accounts.html",
"sso_clients": "templates/sso_clients.html",
}
tmpls := make(map[string]*template.Template, len(pageFiles))
for name, file := range pageFiles {
@@ -445,6 +447,9 @@ func (u *UIServer) Register(mux *http.ServeMux) {
uiMux.HandleFunc("GET /unseal", u.handleUnsealPage)
uiMux.Handle("POST /unseal", unsealRateLimit(http.HandlerFunc(u.handleUnsealPost)))
// SSO authorize route (no session required, rate-limited).
uiMux.Handle("GET /sso/authorize", loginRateLimit(http.HandlerFunc(u.handleSSOAuthorize)))
// Auth routes (no session required).
uiMux.HandleFunc("GET /login", u.handleLoginPage)
uiMux.Handle("POST /login", loginRateLimit(http.HandlerFunc(u.handleLoginPost)))
@@ -498,6 +503,10 @@ func (u *UIServer) Register(mux *http.ServeMux) {
uiMux.Handle("DELETE /policies/{id}", admin(u.handleDeletePolicyRule))
uiMux.Handle("PUT /accounts/{id}/tags", admin(u.handleSetAccountTags))
uiMux.Handle("PUT /accounts/{id}/password", admin(u.handleAdminResetPassword))
uiMux.Handle("GET /sso-clients", adminGet(u.handleSSOClientsPage))
uiMux.Handle("POST /sso-clients", admin(u.handleCreateSSOClientUI))
uiMux.Handle("PATCH /sso-clients/{clientId}/toggle", admin(u.handleToggleSSOClient))
uiMux.Handle("DELETE /sso-clients/{clientId}", admin(u.handleDeleteSSOClientUI))
// Service accounts page — accessible to any authenticated user; shows only // Service accounts page — accessible to any authenticated user; shows only
// the service accounts for which the current user is a token-issue delegate. // the service accounts for which the current user is a token-issue delegate.
@@ -746,8 +755,11 @@ func noDirListing(next http.Handler) http.Handler {
func securityHeaders(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
h := w.Header()
// Security: 'unsafe-hashes' with the htmx swap indicator style hash
// allows htmx to apply its settling/swapping CSS transitions without
// opening the door to arbitrary inline styles.
h.Set("Content-Security-Policy",
- "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' data:; font-src 'self'")
+ "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-hashes' 'sha256-bsV5JivYxvGywDAZ22EZJKBFip65Ng9xoJVLbBg7bdo='; img-src 'self' data:; font-src 'self'")
h.Set("X-Content-Type-Options", "nosniff")
h.Set("X-Frame-Options", "DENY")
h.Set("Strict-Transport-Security", "max-age=63072000; includeSubDomains")
@@ -810,6 +822,7 @@ type PageData struct {
type LoginData struct {
Error string
Username string // pre-filled on TOTP step
SSONonce string // SSO session nonce (hidden field for SSO redirect flow)
// Security (F-02): Password is no longer carried in the HTML form. Instead
// a short-lived server-side nonce is issued after successful password
// verification, and only the nonce is embedded in the TOTP step form.
@@ -916,6 +929,12 @@ type PolicyRuleView struct {
IsPending bool // true if not_before is in the future
}
// SSOClientsData is the view model for the SSO clients admin page.
type SSOClientsData struct {
PageData
Clients []*model.SSOClient
}
// PoliciesData is the view model for the policies list page.
type PoliciesData struct {
PageData

View File

@@ -6,5 +6,5 @@
//
// Prerequisites: protoc, protoc-gen-go, protoc-gen-go-grpc must be in PATH.
//
- //go:generate protoc --proto_path=../proto --go_out=../gen --go_opt=paths=source_relative --go-grpc_out=../gen --go-grpc_opt=paths=source_relative mcias/v1/common.proto mcias/v1/admin.proto mcias/v1/auth.proto mcias/v1/token.proto mcias/v1/account.proto mcias/v1/policy.proto
+ //go:generate protoc --proto_path=../proto --go_out=../gen --go_opt=paths=source_relative --go-grpc_out=../gen --go-grpc_opt=paths=source_relative mcias/v1/common.proto mcias/v1/admin.proto mcias/v1/auth.proto mcias/v1/token.proto mcias/v1/account.proto mcias/v1/policy.proto mcias/v1/sso_client.proto
package proto

View File

@@ -0,0 +1,86 @@
// SSOClientService: CRUD management of SSO client registrations.
syntax = "proto3";
package mcias.v1;
option go_package = "git.wntrmute.dev/mc/mcias/gen/mcias/v1;mciasv1";
// SSOClient is the wire representation of an SSO client registration.
message SSOClient {
string client_id = 1;
string redirect_uri = 2;
repeated string tags = 3;
bool enabled = 4;
string created_at = 5; // RFC3339
string updated_at = 6; // RFC3339
}
// --- List ---
message ListSSOClientsRequest {}
message ListSSOClientsResponse {
repeated SSOClient clients = 1;
}
// --- Create ---
message CreateSSOClientRequest {
string client_id = 1;
string redirect_uri = 2;
repeated string tags = 3;
}
message CreateSSOClientResponse {
SSOClient client = 1;
}
// --- Get ---
message GetSSOClientRequest {
string client_id = 1;
}
message GetSSOClientResponse {
SSOClient client = 1;
}
// --- Update ---
message UpdateSSOClientRequest {
string client_id = 1;
optional string redirect_uri = 2;
repeated string tags = 3;
optional bool enabled = 4;
bool update_tags = 5; // when true, tags field is applied (allows clearing)
}
message UpdateSSOClientResponse {
SSOClient client = 1;
}
// --- Delete ---
message DeleteSSOClientRequest {
string client_id = 1;
}
message DeleteSSOClientResponse {}
// SSOClientService manages SSO client registrations (admin only).
service SSOClientService {
// ListSSOClients returns all registered SSO clients.
rpc ListSSOClients(ListSSOClientsRequest) returns (ListSSOClientsResponse);
// CreateSSOClient registers a new SSO client.
rpc CreateSSOClient(CreateSSOClientRequest) returns (CreateSSOClientResponse);
// GetSSOClient returns a single SSO client by client_id.
rpc GetSSOClient(GetSSOClientRequest) returns (GetSSOClientResponse);
// UpdateSSOClient applies a partial update to an SSO client.
rpc UpdateSSOClient(UpdateSSOClientRequest) returns (UpdateSSOClientResponse);
// DeleteSSOClient removes an SSO client registration.
rpc DeleteSSOClient(DeleteSSOClientRequest) returns (DeleteSSOClientResponse);
}
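The `update_tags` flag in `UpdateSSOClientRequest` exists because proto3 cannot distinguish an absent repeated field from an explicitly empty one, so clearing tags needs an out-of-band signal. A sketch of how a server might apply these partial-update semantics, using simplified stand-ins for the generated proto structs:

```go
package main

import "fmt"

// SSOClient and UpdateRequest are simplified stand-ins for the generated
// proto types; only the fields relevant to the update semantics are kept.
type SSOClient struct {
	RedirectURI string
	Tags        []string
	Enabled     bool
}

type UpdateRequest struct {
	RedirectURI *string  // optional: nil means "leave unchanged"
	Tags        []string // applied only when UpdateTags is true
	UpdateTags  bool     // allows clearing tags (empty list is meaningful)
	Enabled     *bool    // optional: nil means "leave unchanged"
}

func applyUpdate(c *SSOClient, req UpdateRequest) {
	if req.RedirectURI != nil {
		c.RedirectURI = *req.RedirectURI
	}
	if req.UpdateTags {
		c.Tags = req.Tags
	}
	if req.Enabled != nil {
		c.Enabled = *req.Enabled
	}
}

func main() {
	c := &SSOClient{RedirectURI: "https://a.example/cb", Tags: []string{"eng"}, Enabled: true}
	// Clear tags without touching redirect_uri or enabled.
	applyUpdate(c, UpdateRequest{UpdateTags: true})
	fmt.Println(len(c.Tags), c.Enabled) // 0 true
}
```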

View File

@@ -0,0 +1,36 @@
// Package terminal provides secure terminal input helpers for CLI tools.
package terminal
import (
"fmt"
"os"
"golang.org/x/term"
)
// ReadPassword prints the given prompt to stderr and reads a password
// from the terminal with echo disabled. It prints a newline after the
// input is complete so the cursor advances normally.
func ReadPassword(prompt string) (string, error) {
b, err := readRaw(prompt)
if err != nil {
return "", err
}
return string(b), nil
}
// ReadPasswordBytes is like ReadPassword but returns a []byte so the
// caller can zeroize the buffer after use.
func ReadPasswordBytes(prompt string) ([]byte, error) {
return readRaw(prompt)
}
func readRaw(prompt string) ([]byte, error) {
fmt.Fprint(os.Stderr, prompt)
b, err := term.ReadPassword(int(os.Stdin.Fd())) //nolint:gosec // fd fits in int
fmt.Fprintln(os.Stderr)
if err != nil {
return nil, err
}
return b, nil
}
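ReadPasswordBytes exists so callers can wipe the secret after use. A minimal sketch of the intended zeroization pattern — the real read requires a terminal, so a literal stands in for the returned buffer:

```go
package main

import "fmt"

// zeroize overwrites a secret buffer in place. Callers of
// ReadPasswordBytes would defer this immediately after a successful read.
func zeroize(b []byte) {
	for i := range b {
		b[i] = 0
	}
}

func main() {
	pw := []byte("s3cret") // stand-in for terminal.ReadPasswordBytes output
	defer zeroize(pw)      // wiped when main returns
	fmt.Println(len(pw))   // use the secret while it is live
}
```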

201
vendor/github.com/inconshreveable/mousetrap/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2022 Alan Shreve (@inconshreveable)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

23
vendor/github.com/inconshreveable/mousetrap/README.md generated vendored Normal file
View File

@@ -0,0 +1,23 @@
# mousetrap
mousetrap is a tiny library that answers a single question.
On a Windows machine, was the process invoked by someone double clicking on
the executable file while browsing in explorer?
### Motivation
Windows developers unfamiliar with command line tools will often "double-click"
the executable for a tool. Because most CLI tools print the help and then exit
when invoked without arguments, this is often very frustrating for those users.
mousetrap provides a way to detect these invocations so that you can provide
more helpful behavior and instructions on how to run the CLI tool. To see what
this looks like, both from an organizational and a technical perspective, see
https://inconshreveable.com/09-09-2014/sweat-the-small-stuff/
### The interface
The library exposes a single interface:
func StartedByExplorer() (bool)

View File

@@ -0,0 +1,16 @@
//go:build !windows
// +build !windows
package mousetrap
// StartedByExplorer returns true if the program was invoked by the user
// double-clicking on the executable from explorer.exe
//
// It is conservative and returns false if any of the internal calls fail.
// It does not guarantee that the program was run from a terminal. It only can tell you
// whether it was launched from explorer.exe
//
// On non-Windows platforms, it always returns false.
func StartedByExplorer() bool {
return false
}

View File

@@ -0,0 +1,42 @@
package mousetrap
import (
"syscall"
"unsafe"
)
func getProcessEntry(pid int) (*syscall.ProcessEntry32, error) {
snapshot, err := syscall.CreateToolhelp32Snapshot(syscall.TH32CS_SNAPPROCESS, 0)
if err != nil {
return nil, err
}
defer syscall.CloseHandle(snapshot)
var procEntry syscall.ProcessEntry32
procEntry.Size = uint32(unsafe.Sizeof(procEntry))
if err = syscall.Process32First(snapshot, &procEntry); err != nil {
return nil, err
}
for {
if procEntry.ProcessID == uint32(pid) {
return &procEntry, nil
}
err = syscall.Process32Next(snapshot, &procEntry)
if err != nil {
return nil, err
}
}
}
// StartedByExplorer returns true if the program was invoked by the user double-clicking
// on the executable from explorer.exe
//
// It is conservative and returns false if any of the internal calls fail.
// It does not guarantee that the program was run from a terminal. It only can tell you
// whether it was launched from explorer.exe
func StartedByExplorer() bool {
pe, err := getProcessEntry(syscall.Getppid())
if err != nil {
return false
}
return "explorer.exe" == syscall.UTF16ToString(pe.ExeFile[:])
}

View File

@@ -5,3 +5,4 @@ cmd/tomljson/tomljson
cmd/tomltestgen/tomltestgen
dist
tests/
test-results

View File

@@ -1,84 +1,76 @@
- [service]
- golangci-lint-version = "1.39.0"
+ version = "2"
- [linters-settings.wsl]
- allow-assign-and-anything = true
- [linters-settings.exhaustive]
- default-signifies-exhaustive = true
[linters]
- disable-all = true
+ default = "none"
enable = [
"asciicheck",
"bodyclose",
- "cyclop",
- "deadcode",
- "depguard",
"dogsled",
"dupl",
"durationcheck",
"errcheck",
"errorlint",
"exhaustive",
- # "exhaustivestruct",
- "exportloopref",
"forbidigo",
- # "forcetypeassert",
- "funlen",
- "gci",
- # "gochecknoglobals",
"gochecknoinits",
- "gocognit",
"goconst",
"gocritic",
- "gocyclo",
+ "godoclint",
- "godot",
- "godox",
- # "goerr113",
- "gofmt",
- "gofumpt",
"goheader",
- "goimports",
- "golint",
- "gomnd",
- # "gomoddirectives",
"gomodguard",
"goprintffuncname",
"gosec",
- "gosimple",
"govet",
- # "ifshort",
"importas",
"ineffassign",
"lll",
"makezero",
+ "mirror",
"misspell",
"nakedret",
- "nestif",
"nilerr",
- # "nlreturn",
"noctx",
"nolintlint",
- #"paralleltest",
+ "perfsprint",
"prealloc",
"predeclared",
"revive",
"rowserrcheck",
"sqlclosecheck",
"staticcheck",
- "structcheck",
- "stylecheck",
- # "testpackage",
"thelper",
"tparallel",
- "typecheck",
"unconvert",
"unparam",
"unused",
- "varcheck",
+ "usetesting",
"wastedassign",
"whitespace",
- # "wrapcheck",
- # "wsl"
- ]
+ ]
+ [linters.settings.exhaustive]
+ default-signifies-exhaustive = true
+ [linters.settings.lll]
+ line-length = 150
+ [[linters.exclusions.rules]]
+ path = ".test.go"
+ linters = ["goconst", "gosec"]
+ [[linters.exclusions.rules]]
+ path = "main.go"
+ linters = ["forbidigo"]
+ [[linters.exclusions.rules]]
+ path = "internal"
+ linters = ["revive"]
+ text = "(exported|indent-error-flow): "
+ [formatters]
+ enable = [
+ "gci",
+ "gofmt",
+ "gofumpt",
+ "goimports",
+ ]

View File

@@ -22,7 +22,6 @@ builds:
- linux_riscv64
- windows_amd64
- windows_arm64
- windows_arm
- darwin_amd64
- darwin_arm64
- id: tomljson
@@ -42,7 +41,6 @@ builds:
- linux_riscv64
- windows_amd64
- windows_arm64
- windows_arm
- darwin_amd64
- darwin_arm64
- id: jsontoml
@@ -62,7 +60,6 @@ builds:
- linux_arm
- windows_amd64
- windows_arm64
- windows_arm
- darwin_amd64
- darwin_arm64
universal_binaries:

64
vendor/github.com/pelletier/go-toml/v2/AGENTS.md generated vendored Normal file
View File

@@ -0,0 +1,64 @@
# Agent Guidelines for go-toml
This file provides guidelines for AI agents contributing to go-toml. All agents must follow these rules derived from [CONTRIBUTING.md](./CONTRIBUTING.md).
## Project Overview
go-toml is a TOML library for Go. The goal is to provide an easy-to-use and efficient TOML implementation that gets the job done without getting in the way.
## Code Change Rules
### Backward Compatibility
- **No backward-incompatible changes** unless explicitly discussed and approved
- Avoid breaking people's programs unless absolutely necessary
### Testing Requirements
- **All bug fixes must include regression tests**
- **All new code must be tested**
- Run tests before submitting: `go test -race ./...`
- Test coverage must not decrease. Check with:
```bash
go test -covermode=atomic -coverprofile=coverage.out
go tool cover -func=coverage.out
```
- All lines of code touched by changes should be covered by tests
### Performance Requirements
- go-toml aims to stay efficient; avoid performance regressions
- Run benchmarks to verify: `go test ./... -bench=. -count=10`
- Compare results using [benchstat](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat)
### Documentation
- New features or feature extensions must include documentation
- Documentation lives in [README.md](./README.md) and throughout source code
### Code Style
- Follow existing code format and structure
- Code must pass `go fmt`
- Code must pass linting with the same golangci-lint version as CI (see version in `.github/workflows/lint.yml`):
```bash
# Install specific version (check lint.yml for current version)
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/HEAD/install.sh | sh -s -- -b $(go env GOPATH)/bin <version>
# Run linter
golangci-lint run ./...
```
### Commit Messages
- Commit messages must explain **why** the change is needed
- Keep messages clear and informative even if details are in the PR description
## Pull Request Checklist
Before submitting:
1. Tests pass (`go test -race ./...`)
2. No backward-incompatible changes (unless discussed)
3. Relevant documentation added/updated
4. No performance regression (verify with benchmarks)
5. Title is clear and understandable for changelog

View File

@@ -33,7 +33,7 @@ The documentation is present in the [README][readme] and thorough the source
code. On release, it gets updated on [pkg.go.dev][pkg.go.dev]. To make a change
to the documentation, create a pull request with your proposed changes. For
simple changes like that, the easiest way to go is probably the "Fork this
- project and edit the file" button on Github, displayed at the top right of the
+ project and edit the file" button on GitHub, displayed at the top right of the
file. Unless it's a trivial change (for example a typo), provide a little bit of
context in your pull request description or commit message.
@@ -92,6 +92,48 @@ However, given GitHub's new policy to _not_ run Actions on pull requests until a
maintainer clicks on button, it is highly recommended that you run them locally
as you make changes.
### Test across Go versions
The repository includes tooling to test go-toml across multiple Go versions
(1.11 through 1.25) both locally and in GitHub Actions.
#### Local testing with Docker
Prerequisites: Docker installed and running, Bash shell, `rsync` command.
```bash
# Test all Go versions in parallel (default)
./test-go-versions.sh
# Test specific versions
./test-go-versions.sh 1.21 1.22 1.23
# Test sequentially (slower but uses less resources)
./test-go-versions.sh --sequential
# Verbose output with custom results directory
./test-go-versions.sh --verbose --output ./my-results 1.24 1.25
# Show all options
./test-go-versions.sh --help
```
The script creates Docker containers for each Go version and runs the full test
suite. Results are saved to a `test-results/` directory with individual logs and
a comprehensive summary report.
The script only exits with a non-zero status code if either of the two most
recent Go versions fail.
#### GitHub Actions testing (maintainers)
1. Go to the **Actions** tab in the GitHub repository
2. Select **"Go Versions Compatibility Test"** from the workflow list
3. Click **"Run workflow"**
4. Optionally customize:
- **Go versions**: Space-separated list (e.g., `1.21 1.22 1.23`)
- **Execution mode**: Parallel (faster) or sequential (more stable)
### Check coverage
We use `go tool cover` to compute test coverage. Most code editors have a way to
@@ -111,7 +153,7 @@ code lowers the coverage.
Go-toml aims to stay efficient. We rely on a set of scenarios executed with Go's
builtin benchmark systems. Because of their noisy nature, containers provided by
- Github Actions cannot be reliably used for benchmarking. As a result, you are
+ GitHub Actions cannot be reliably used for benchmarking. As a result, you are
responsible for checking that your changes do not incur a performance penalty.
You can run their following to execute benchmarks:
@@ -168,13 +210,13 @@ Checklist:
1. Decide on the next version number. Use semver. Review commits since last
version to assess.
2. Tag release. For example:
```
git checkout v2
git pull
git tag v2.2.0
git push --tags
```
- 3. CI automatically builds a draft Github release. Review it and edit as
+ 3. CI automatically builds a draft GitHub release. Review it and edit as
necessary. Look for "Other changes". That would indicate a pull request not
labeled properly. Tweak labels and pull request titles until changelog looks
good for users.

View File

@@ -107,7 +107,11 @@ type MyConfig struct {
### Unmarshaling
[`Unmarshal`][unmarshal] reads a TOML document and fills a Go structure with its
- content. For example:
+ content.
Note that the struct variable names are _capitalized_, while the variables in the toml document are _lowercase_.
For example:
```go
doc := `
@@ -133,6 +137,62 @@ fmt.Println("tags:", cfg.Tags)
[unmarshal]: https://pkg.go.dev/github.com/pelletier/go-toml/v2#Unmarshal
Here is an example using tables with some simple nesting:
```go
doc := `
age = 45
fruits = ["apple", "pear"]
# these are very important!
[my-variables]
first = 1
second = 0.2
third = "abc"
# this is not so important.
[my-variables.b]
bfirst = 123
`
var Document struct {
Age int
Fruits []string
Myvariables struct {
First int
Second float64
Third string
B struct {
Bfirst int
}
} `toml:"my-variables"`
}
err := toml.Unmarshal([]byte(doc), &Document)
if err != nil {
panic(err)
}
fmt.Println("age:", Document.Age)
fmt.Println("fruits:", Document.Fruits)
fmt.Println("my-variables.first:", Document.Myvariables.First)
fmt.Println("my-variables.second:", Document.Myvariables.Second)
fmt.Println("my-variables.third:", Document.Myvariables.Third)
fmt.Println("my-variables.B.Bfirst:", Document.Myvariables.B.Bfirst)
// Output:
// age: 45
// fruits: [apple pear]
// my-variables.first: 1
// my-variables.second: 0.2
// my-variables.third: abc
// my-variables.B.Bfirst: 123
```
### Marshaling ### Marshaling
[`Marshal`][marshal] is the opposite of Unmarshal: it represents a Go structure [`Marshal`][marshal] is the opposite of Unmarshal: it represents a Go structure
@@ -179,12 +239,12 @@ Execution time speedup compared to other Go TOML libraries:
<tr><th>Benchmark</th><th>go-toml v1</th><th>BurntSushi/toml</th></tr>
</thead>
<tbody>
-<tr><td>Marshal/HugoFrontMatter-2</td><td>1.9x</td><td>2.2x</td></tr>
+<tr><td>Marshal/HugoFrontMatter-2</td><td>2.1x</td><td>2.0x</td></tr>
-<tr><td>Marshal/ReferenceFile/map-2</td><td>1.7x</td><td>2.1x</td></tr>
+<tr><td>Marshal/ReferenceFile/map-2</td><td>2.0x</td><td>2.0x</td></tr>
-<tr><td>Marshal/ReferenceFile/struct-2</td><td>2.2x</td><td>3.0x</td></tr>
+<tr><td>Marshal/ReferenceFile/struct-2</td><td>2.3x</td><td>2.5x</td></tr>
-<tr><td>Unmarshal/HugoFrontMatter-2</td><td>2.9x</td><td>2.7x</td></tr>
+<tr><td>Unmarshal/HugoFrontMatter-2</td><td>3.3x</td><td>2.8x</td></tr>
-<tr><td>Unmarshal/ReferenceFile/map-2</td><td>2.6x</td><td>2.7x</td></tr>
+<tr><td>Unmarshal/ReferenceFile/map-2</td><td>2.9x</td><td>3.0x</td></tr>
-<tr><td>Unmarshal/ReferenceFile/struct-2</td><td>4.6x</td><td>5.1x</td></tr>
+<tr><td>Unmarshal/ReferenceFile/struct-2</td><td>4.8x</td><td>5.0x</td></tr>
</tbody>
</table>
<details><summary>See more</summary>
@@ -197,17 +257,17 @@ provided for completeness.</p>
<tr><th>Benchmark</th><th>go-toml v1</th><th>BurntSushi/toml</th></tr>
</thead>
<tbody>
-<tr><td>Marshal/SimpleDocument/map-2</td><td>1.8x</td><td>2.7x</td></tr>
+<tr><td>Marshal/SimpleDocument/map-2</td><td>2.0x</td><td>2.9x</td></tr>
-<tr><td>Marshal/SimpleDocument/struct-2</td><td>2.7x</td><td>3.8x</td></tr>
+<tr><td>Marshal/SimpleDocument/struct-2</td><td>2.5x</td><td>3.6x</td></tr>
-<tr><td>Unmarshal/SimpleDocument/map-2</td><td>3.8x</td><td>3.0x</td></tr>
+<tr><td>Unmarshal/SimpleDocument/map-2</td><td>4.2x</td><td>3.4x</td></tr>
-<tr><td>Unmarshal/SimpleDocument/struct-2</td><td>5.6x</td><td>4.1x</td></tr>
+<tr><td>Unmarshal/SimpleDocument/struct-2</td><td>5.9x</td><td>4.4x</td></tr>
-<tr><td>UnmarshalDataset/example-2</td><td>3.0x</td><td>3.2x</td></tr>
+<tr><td>UnmarshalDataset/example-2</td><td>3.2x</td><td>2.9x</td></tr>
-<tr><td>UnmarshalDataset/code-2</td><td>2.3x</td><td>2.9x</td></tr>
+<tr><td>UnmarshalDataset/code-2</td><td>2.4x</td><td>2.8x</td></tr>
-<tr><td>UnmarshalDataset/twitter-2</td><td>2.6x</td><td>2.7x</td></tr>
+<tr><td>UnmarshalDataset/twitter-2</td><td>2.7x</td><td>2.5x</td></tr>
-<tr><td>UnmarshalDataset/citm_catalog-2</td><td>2.2x</td><td>2.3x</td></tr>
+<tr><td>UnmarshalDataset/citm_catalog-2</td><td>2.3x</td><td>2.3x</td></tr>
-<tr><td>UnmarshalDataset/canada-2</td><td>1.8x</td><td>1.5x</td></tr>
+<tr><td>UnmarshalDataset/canada-2</td><td>1.9x</td><td>1.5x</td></tr>
-<tr><td>UnmarshalDataset/config-2</td><td>4.1x</td><td>2.9x</td></tr>
+<tr><td>UnmarshalDataset/config-2</td><td>5.4x</td><td>3.0x</td></tr>
-<tr><td>geomean</td><td>2.7x</td><td>2.8x</td></tr>
+<tr><td>geomean</td><td>2.9x</td><td>2.8x</td></tr>
</tbody>
</table>
<p>This table can be generated with <code>./ci.sh benchmark -a -html</code>.</p>


@@ -147,7 +147,7 @@ bench() {
pushd "$dir"
if [ "${replace}" != "" ]; then
-find ./benchmark/ -iname '*.go' -exec sed -i -E "s|github.com/pelletier/go-toml/v2|${replace}|g" {} \;
+find ./benchmark/ -iname '*.go' -exec sed -i -E "s|github.com/pelletier/go-toml/v2\"|${replace}\"|g" {} \;
go get "${replace}"
fi
@@ -195,6 +195,11 @@ for line in reversed(lines[2:]):
"%.1fx" % (float(line[3])/v2), # v1
"%.1fx" % (float(line[7])/v2), # bs
])
if not results:
print("No benchmark results to display.", file=sys.stderr)
sys.exit(1)
# move geomean to the end
results.append(results[0])
del results[0]


@@ -230,8 +230,8 @@ func parseLocalTime(b []byte) (LocalTime, []byte, error) {
return t, nil, err
}
-if t.Second > 60 {
-return t, nil, unstable.NewParserError(b[6:8], "seconds cannot be greater 60")
+if t.Second > 59 {
+return t, nil, unstable.NewParserError(b[6:8], "seconds cannot be greater than 59")
}
b = b[8:]
@@ -279,7 +279,6 @@ func parseLocalTime(b []byte) (LocalTime, []byte, error) {
return t, b, nil
}
-//nolint:cyclop
func parseFloat(b []byte) (float64, error) {
if len(b) == 4 && (b[0] == '+' || b[0] == '-') && b[1] == 'n' && b[2] == 'a' && b[3] == 'n' {
return math.NaN(), nil


@@ -2,10 +2,10 @@ package toml
import (
"fmt"
+"reflect"
"strconv"
"strings"
-"github.com/pelletier/go-toml/v2/internal/danger"
"github.com/pelletier/go-toml/v2/unstable"
)
@@ -54,6 +54,18 @@ func (s *StrictMissingError) String() string {
return buf.String()
}
// Unwrap returns wrapped decode errors
//
// Implements errors.Join() interface.
func (s *StrictMissingError) Unwrap() []error {
errs := make([]error, len(s.Errors))
for i := range s.Errors {
errs[i] = &s.Errors[i]
}
return errs
}
+// Key represents a TOML key as a sequence of key parts.
type Key []string
// Error returns the error message contained in the DecodeError.
@@ -78,7 +90,7 @@ func (e *DecodeError) Key() Key {
return e.key
}
-// decodeErrorFromHighlight creates a DecodeError referencing a highlighted
+// wrapDecodeError creates a DecodeError referencing a highlighted
// range of bytes from document.
//
// highlight needs to be a sub-slice of document, or this function panics.
@@ -88,7 +100,7 @@ func (e *DecodeError) Key() Key {
//
//nolint:funlen
func wrapDecodeError(document []byte, de *unstable.ParserError) *DecodeError {
-offset := danger.SubsliceOffset(document, de.Highlight)
+offset := subsliceOffset(document, de.Highlight)
errMessage := de.Error()
errLine, errColumn := positionAtEnd(document[:offset])
@@ -248,5 +260,24 @@ func positionAtEnd(b []byte) (row int, column int) {
}
}
-return
+return row, column
}
// subsliceOffset returns the byte offset of subslice within data.
// subslice must share the same backing array as data.
func subsliceOffset(data []byte, subslice []byte) int {
if len(subslice) == 0 {
return 0
}
// Use reflect to get the data pointers of both slices.
// This is safe because we're only reading the pointer values for comparison.
dataPtr := reflect.ValueOf(data).Pointer()
subPtr := reflect.ValueOf(subslice).Pointer()
offset := int(subPtr - dataPtr)
if offset < 0 || offset > len(data) {
panic("subslice is not within data")
}
return offset
}
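The reflect-based replacement for `danger.SubsliceOffset` can be exercised on its own. This sketch reproduces the function with a small driver; `reflect.Value.Pointer` yields the address of a slice's first element, so the difference of the two pointers is the byte offset:

```go
package main

import (
	"fmt"
	"reflect"
)

// subsliceOffset returns the byte offset of sub within data.
// Both slices must share the same backing array.
func subsliceOffset(data, sub []byte) int {
	if len(sub) == 0 {
		return 0
	}
	// Only the pointer values are read, for comparison.
	offset := int(reflect.ValueOf(sub).Pointer() - reflect.ValueOf(data).Pointer())
	if offset < 0 || offset > len(data) {
		panic("subslice is not within data")
	}
	return offset
}

func main() {
	doc := []byte("name = \"go-toml\"")
	highlight := doc[7:] // the quoted value
	fmt.Println(subsliceOffset(doc, highlight)) // 7
}
```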


@@ -1,6 +1,6 @@
package characters
-var invalidAsciiTable = [256]bool{
+var invalidASCIITable = [256]bool{
0x00: true,
0x01: true,
0x02: true,
@@ -37,6 +37,6 @@ var invalidAsciiTable = [256]bool{
0x7F: true,
}
-func InvalidAscii(b byte) bool {
-return invalidAsciiTable[b]
+func InvalidASCII(b byte) bool {
+return invalidASCIITable[b]
}


@@ -1,20 +1,12 @@
// Package characters provides functions for working with string encodings.
package characters package characters
import ( import (
"unicode/utf8" "unicode/utf8"
) )
type utf8Err struct { // Utf8TomlValidAlreadyEscaped verifies that a given string is only made of
Index int // valid UTF-8 characters allowed by the TOML spec:
Size int
}
func (u utf8Err) Zero() bool {
return u.Size == 0
}
// Verified that a given string is only made of valid UTF-8 characters allowed
// by the TOML spec:
// //
// Any Unicode character may be used except those that must be escaped: // Any Unicode character may be used except those that must be escaped:
// quotation mark, backslash, and the control characters other than tab (U+0000 // quotation mark, backslash, and the control characters other than tab (U+0000
@@ -23,8 +15,8 @@ func (u utf8Err) Zero() bool {
// It is a copy of the Go 1.17 utf8.Valid implementation, tweaked to exit early // It is a copy of the Go 1.17 utf8.Valid implementation, tweaked to exit early
// when a character is not allowed. // when a character is not allowed.
// //
// The returned utf8Err is Zero() if the string is valid, or contains the byte // The returned slice is empty if the string is valid, or contains the bytes
// index and size of the invalid character. // of the invalid character.
// //
// quotation mark => already checked // quotation mark => already checked
// backslash => already checked // backslash => already checked
@@ -32,9 +24,8 @@ func (u utf8Err) Zero() bool {
// 0x9 => tab, ok // 0x9 => tab, ok
// 0xA - 0x1F => invalid // 0xA - 0x1F => invalid
// 0x7F => invalid // 0x7F => invalid
func Utf8TomlValidAlreadyEscaped(p []byte) (err utf8Err) { func Utf8TomlValidAlreadyEscaped(p []byte) []byte {
// Fast path. Check for and skip 8 bytes of ASCII characters per iteration. // Fast path. Check for and skip 8 bytes of ASCII characters per iteration.
offset := 0
for len(p) >= 8 { for len(p) >= 8 {
// Combining two 32 bit loads allows the same code to be used // Combining two 32 bit loads allows the same code to be used
// for 32 and 64 bit platforms. // for 32 and 64 bit platforms.
@@ -48,24 +39,19 @@ func Utf8TomlValidAlreadyEscaped(p []byte) (err utf8Err) {
} }
for i, b := range p[:8] { for i, b := range p[:8] {
if InvalidAscii(b) { if InvalidASCII(b) {
err.Index = offset + i return p[i : i+1]
err.Size = 1
return
} }
} }
p = p[8:] p = p[8:]
offset += 8
} }
n := len(p) n := len(p)
for i := 0; i < n; { for i := 0; i < n; {
pi := p[i] pi := p[i]
if pi < utf8.RuneSelf { if pi < utf8.RuneSelf {
if InvalidAscii(pi) { if InvalidASCII(pi) {
err.Index = offset + i return p[i : i+1]
err.Size = 1
return
} }
i++ i++
continue continue
@@ -73,44 +59,34 @@ func Utf8TomlValidAlreadyEscaped(p []byte) (err utf8Err) {
x := first[pi] x := first[pi]
if x == xx { if x == xx {
// Illegal starter byte. // Illegal starter byte.
err.Index = offset + i return p[i : i+1]
err.Size = 1
return
} }
size := int(x & 7) size := int(x & 7)
if i+size > n { if i+size > n {
// Short or invalid. // Short or invalid.
err.Index = offset + i return p[i:n]
err.Size = n - i
return
} }
accept := acceptRanges[x>>4] accept := acceptRanges[x>>4]
if c := p[i+1]; c < accept.lo || accept.hi < c { if c := p[i+1]; c < accept.lo || accept.hi < c {
err.Index = offset + i return p[i : i+2]
err.Size = 2 } else if size == 2 { //revive:disable:empty-block
return
} else if size == 2 {
} else if c := p[i+2]; c < locb || hicb < c { } else if c := p[i+2]; c < locb || hicb < c {
err.Index = offset + i return p[i : i+3]
err.Size = 3 } else if size == 3 { //revive:disable:empty-block
return
} else if size == 3 {
} else if c := p[i+3]; c < locb || hicb < c { } else if c := p[i+3]; c < locb || hicb < c {
err.Index = offset + i return p[i : i+4]
err.Size = 4
return
} }
i += size i += size
} }
return return nil
} }
// Return the size of the next rune if valid, 0 otherwise. // Utf8ValidNext returns the size of the next rune if valid, 0 otherwise.
func Utf8ValidNext(p []byte) int { func Utf8ValidNext(p []byte) int {
c := p[0] c := p[0]
if c < utf8.RuneSelf { if c < utf8.RuneSelf {
if InvalidAscii(c) { if InvalidASCII(c) {
return 0 return 0
} }
return 1 return 1
@@ -129,10 +105,10 @@ func Utf8ValidNext(p []byte) int {
accept := acceptRanges[x>>4] accept := acceptRanges[x>>4]
if c := p[1]; c < accept.lo || accept.hi < c { if c := p[1]; c < accept.lo || accept.hi < c {
return 0 return 0
} else if size == 2 { } else if size == 2 { //nolint:revive
} else if c := p[2]; c < locb || hicb < c { } else if c := p[2]; c < locb || hicb < c {
return 0 return 0
} else if size == 3 { } else if size == 3 { //nolint:revive
} else if c := p[3]; c < locb || hicb < c { } else if c := p[3]; c < locb || hicb < c {
return 0 return 0
} }

View File

@@ -1,65 +0,0 @@
package danger
import (
"fmt"
"reflect"
"unsafe"
)
const maxInt = uintptr(int(^uint(0) >> 1))
func SubsliceOffset(data []byte, subslice []byte) int {
datap := (*reflect.SliceHeader)(unsafe.Pointer(&data))
hlp := (*reflect.SliceHeader)(unsafe.Pointer(&subslice))
if hlp.Data < datap.Data {
panic(fmt.Errorf("subslice address (%d) is before data address (%d)", hlp.Data, datap.Data))
}
offset := hlp.Data - datap.Data
if offset > maxInt {
panic(fmt.Errorf("slice offset larger than int (%d)", offset))
}
intoffset := int(offset)
if intoffset > datap.Len {
panic(fmt.Errorf("slice offset (%d) is farther than data length (%d)", intoffset, datap.Len))
}
if intoffset+hlp.Len > datap.Len {
panic(fmt.Errorf("slice ends (%d+%d) is farther than data length (%d)", intoffset, hlp.Len, datap.Len))
}
return intoffset
}
func BytesRange(start []byte, end []byte) []byte {
if start == nil || end == nil {
panic("cannot call BytesRange with nil")
}
startp := (*reflect.SliceHeader)(unsafe.Pointer(&start))
endp := (*reflect.SliceHeader)(unsafe.Pointer(&end))
if startp.Data > endp.Data {
panic(fmt.Errorf("start pointer address (%d) is after end pointer address (%d)", startp.Data, endp.Data))
}
l := startp.Len
endLen := int(endp.Data-startp.Data) + endp.Len
if endLen > l {
l = endLen
}
if l > startp.Cap {
panic(fmt.Errorf("range length is larger than capacity"))
}
return start[:l]
}
func Stride(ptr unsafe.Pointer, size uintptr, offset int) unsafe.Pointer {
// TODO: replace with unsafe.Add when Go 1.17 is released
// https://github.com/golang/go/issues/40481
return unsafe.Pointer(uintptr(ptr) + uintptr(int(size)*offset))
}


@@ -1,23 +0,0 @@
package danger
import (
"reflect"
"unsafe"
)
// typeID is used as key in encoder and decoder caches to enable using
// the optimize runtime.mapaccess2_fast64 function instead of the more
// expensive lookup if we were to use reflect.Type as map key.
//
// typeID holds the pointer to the reflect.Type value, which is unique
// in the program.
//
// https://github.com/segmentio/encoding/blob/master/json/codec.go#L59-L61
type TypeID unsafe.Pointer
func MakeTypeID(t reflect.Type) TypeID {
// reflect.Type has the fields:
// typ unsafe.Pointer
// ptr unsafe.Pointer
return TypeID((*[2]unsafe.Pointer)(unsafe.Pointer(&t))[1])
}


@@ -36,7 +36,7 @@ func (t *KeyTracker) Pop(node *unstable.Node) {
}
}
-// Key returns the current key
+// Key returns the current key.
func (t *KeyTracker) Key() []string {
k := make([]string, len(t.k))
copy(k, t.k)


@@ -288,11 +288,12 @@ func (s *SeenTracker) checkKeyValue(node *unstable.Node) (bool, error) {
idx = s.create(parentIdx, k, tableKind, false, true) idx = s.create(parentIdx, k, tableKind, false, true)
} else { } else {
entry := s.entries[idx] entry := s.entries[idx]
if it.IsLast() { switch {
case it.IsLast():
return false, fmt.Errorf("toml: key %s is already defined", string(k)) return false, fmt.Errorf("toml: key %s is already defined", string(k))
} else if entry.kind != tableKind { case entry.kind != tableKind:
return false, fmt.Errorf("toml: expected %s to be a table, not a %s", string(k), entry.kind) return false, fmt.Errorf("toml: expected %s to be a table, not a %s", string(k), entry.kind)
} else if entry.explicit { case entry.explicit:
return false, fmt.Errorf("toml: cannot redefine table %s that has already been explicitly defined", string(k)) return false, fmt.Errorf("toml: cannot redefine table %s that has already been explicitly defined", string(k))
} }
} }
@@ -309,16 +310,16 @@ func (s *SeenTracker) checkKeyValue(node *unstable.Node) (bool, error) {
return s.checkInlineTable(value) return s.checkInlineTable(value)
case unstable.Array: case unstable.Array:
return s.checkArray(value) return s.checkArray(value)
} default:
return false, nil return false, nil
}
} }
func (s *SeenTracker) checkArray(node *unstable.Node) (first bool, err error) { func (s *SeenTracker) checkArray(node *unstable.Node) (first bool, err error) {
it := node.Children() it := node.Children()
for it.Next() { for it.Next() {
n := it.Node() n := it.Node()
switch n.Kind { switch n.Kind { //nolint:exhaustive
case unstable.InlineTable: case unstable.InlineTable:
first, err = s.checkInlineTable(n) first, err = s.checkInlineTable(n)
if err != nil { if err != nil {


@@ -1 +1,2 @@
+// Package tracker provides functions for keeping track of AST nodes.
package tracker


@@ -45,7 +45,7 @@ func (d *LocalDate) UnmarshalText(b []byte) error {
type LocalTime struct {
Hour int // Hour of the day: [0; 24[
Minute int // Minute of the hour: [0; 60[
-Second int // Second of the minute: [0; 60[
+Second int // Second of the minute: [0; 59]
Nanosecond int // Nanoseconds within the second: [0, 1000000000[
Precision int // Number of digits to display for Nanosecond.
}


@@ -4,6 +4,7 @@ import (
"bytes" "bytes"
"encoding" "encoding"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"io" "io"
"math" "math"
@@ -42,7 +43,7 @@ type Encoder struct {
arraysMultiline bool arraysMultiline bool
indentSymbol string indentSymbol string
indentTables bool indentTables bool
marshalJsonNumbers bool marshalJSONNumbers bool
} }
// NewEncoder returns a new Encoder that writes to w. // NewEncoder returns a new Encoder that writes to w.
@@ -89,14 +90,14 @@ func (enc *Encoder) SetIndentTables(indent bool) *Encoder {
return enc return enc
} }
// SetMarshalJsonNumbers forces the encoder to serialize `json.Number` as a // SetMarshalJSONNumbers forces the encoder to serialize `json.Number` as a
// float or integer instead of relying on TextMarshaler to emit a string. // float or integer instead of relying on TextMarshaler to emit a string.
// //
// *Unstable:* This method does not follow the compatibility guarantees of // *Unstable:* This method does not follow the compatibility guarantees of
// semver. It can be changed or removed without a new major version being // semver. It can be changed or removed without a new major version being
// issued. // issued.
func (enc *Encoder) SetMarshalJsonNumbers(indent bool) *Encoder { func (enc *Encoder) SetMarshalJSONNumbers(indent bool) *Encoder {
enc.marshalJsonNumbers = indent enc.marshalJSONNumbers = indent
return enc return enc
} }
@@ -161,6 +162,8 @@ func (enc *Encoder) SetMarshalJsonNumbers(indent bool) *Encoder {
// //
// The "omitempty" option prevents empty values or groups from being emitted. // The "omitempty" option prevents empty values or groups from being emitted.
// //
// The "omitzero" option prevents zero values or groups from being emitted.
//
// The "commented" option prefixes the value and all its children with a comment // The "commented" option prefixes the value and all its children with a comment
// symbol. // symbol.
// //
@@ -177,7 +180,7 @@ func (enc *Encoder) Encode(v interface{}) error {
ctx.inline = enc.tablesInline ctx.inline = enc.tablesInline
if v == nil { if v == nil {
return fmt.Errorf("toml: cannot encode a nil interface") return errors.New("toml: cannot encode a nil interface")
} }
b, err := enc.encode(b, ctx, reflect.ValueOf(v)) b, err := enc.encode(b, ctx, reflect.ValueOf(v))
@@ -196,6 +199,7 @@ func (enc *Encoder) Encode(v interface{}) error {
type valueOptions struct { type valueOptions struct {
multiline bool multiline bool
omitempty bool omitempty bool
omitzero bool
commented bool commented bool
comment string comment string
} }
@@ -266,16 +270,15 @@ func (enc *Encoder) encode(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, e
case LocalDateTime: case LocalDateTime:
return append(b, x.String()...), nil return append(b, x.String()...), nil
case json.Number: case json.Number:
if enc.marshalJsonNumbers { if enc.marshalJSONNumbers {
if x == "" { /// Useful zero value. if x == "" { /// Useful zero value.
return append(b, "0"...), nil return append(b, "0"...), nil
} else if v, err := x.Int64(); err == nil { } else if v, err := x.Int64(); err == nil {
return enc.encode(b, ctx, reflect.ValueOf(v)) return enc.encode(b, ctx, reflect.ValueOf(v))
} else if f, err := x.Float64(); err == nil { } else if f, err := x.Float64(); err == nil {
return enc.encode(b, ctx, reflect.ValueOf(f)) return enc.encode(b, ctx, reflect.ValueOf(f))
} else {
return nil, fmt.Errorf("toml: unable to convert %q to int64 or float64", x)
} }
return nil, fmt.Errorf("toml: unable to convert %q to int64 or float64", x)
} }
} }
@@ -309,7 +312,7 @@ func (enc *Encoder) encode(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, e
return enc.encodeSlice(b, ctx, v) return enc.encodeSlice(b, ctx, v)
case reflect.Interface: case reflect.Interface:
if v.IsNil() { if v.IsNil() {
return nil, fmt.Errorf("toml: encoding a nil interface is not supported") return nil, errors.New("toml: encoding a nil interface is not supported")
} }
return enc.encode(b, ctx, v.Elem()) return enc.encode(b, ctx, v.Elem())
@@ -326,28 +329,30 @@ func (enc *Encoder) encode(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, e
case reflect.Float32: case reflect.Float32:
f := v.Float() f := v.Float()
if math.IsNaN(f) { switch {
case math.IsNaN(f):
b = append(b, "nan"...) b = append(b, "nan"...)
} else if f > math.MaxFloat32 { case f > math.MaxFloat32:
b = append(b, "inf"...) b = append(b, "inf"...)
} else if f < -math.MaxFloat32 { case f < -math.MaxFloat32:
b = append(b, "-inf"...) b = append(b, "-inf"...)
} else if math.Trunc(f) == f { case math.Trunc(f) == f:
b = strconv.AppendFloat(b, f, 'f', 1, 32) b = strconv.AppendFloat(b, f, 'f', 1, 32)
} else { default:
b = strconv.AppendFloat(b, f, 'f', -1, 32) b = strconv.AppendFloat(b, f, 'f', -1, 32)
} }
case reflect.Float64: case reflect.Float64:
f := v.Float() f := v.Float()
if math.IsNaN(f) { switch {
case math.IsNaN(f):
b = append(b, "nan"...) b = append(b, "nan"...)
} else if f > math.MaxFloat64 { case f > math.MaxFloat64:
b = append(b, "inf"...) b = append(b, "inf"...)
} else if f < -math.MaxFloat64 { case f < -math.MaxFloat64:
b = append(b, "-inf"...) b = append(b, "-inf"...)
} else if math.Trunc(f) == f { case math.Trunc(f) == f:
b = strconv.AppendFloat(b, f, 'f', 1, 64) b = strconv.AppendFloat(b, f, 'f', 1, 64)
} else { default:
b = strconv.AppendFloat(b, f, 'f', -1, 64) b = strconv.AppendFloat(b, f, 'f', -1, 64)
} }
case reflect.Bool: case reflect.Bool:
@@ -384,6 +389,31 @@ func shouldOmitEmpty(options valueOptions, v reflect.Value) bool {
return options.omitempty && isEmptyValue(v) return options.omitempty && isEmptyValue(v)
} }
func shouldOmitZero(options valueOptions, v reflect.Value) bool {
if !options.omitzero {
return false
}
// Check if the type implements isZeroer interface (has a custom IsZero method).
if v.Type().Implements(isZeroerType) {
return v.Interface().(isZeroer).IsZero()
}
// Check if pointer type implements isZeroer.
if reflect.PointerTo(v.Type()).Implements(isZeroerType) {
if v.CanAddr() {
return v.Addr().Interface().(isZeroer).IsZero()
}
// Create a temporary addressable copy to call the pointer receiver method.
pv := reflect.New(v.Type())
pv.Elem().Set(v)
return pv.Interface().(isZeroer).IsZero()
}
// Fall back to reflect's IsZero for types without custom IsZero method.
return v.IsZero()
}
func (enc *Encoder) encodeKv(b []byte, ctx encoderCtx, options valueOptions, v reflect.Value) ([]byte, error) { func (enc *Encoder) encodeKv(b []byte, ctx encoderCtx, options valueOptions, v reflect.Value) ([]byte, error) {
var err error var err error
@@ -434,8 +464,9 @@ func isEmptyValue(v reflect.Value) bool {
return v.Float() == 0 return v.Float() == 0
case reflect.Interface, reflect.Ptr: case reflect.Interface, reflect.Ptr:
return v.IsNil() return v.IsNil()
} default:
return false return false
}
} }
func isEmptyStruct(v reflect.Value) bool { func isEmptyStruct(v reflect.Value) bool {
@@ -479,7 +510,7 @@ func (enc *Encoder) encodeString(b []byte, v string, options valueOptions) []byt
func needsQuoting(v string) bool { func needsQuoting(v string) bool {
// TODO: vectorize // TODO: vectorize
for _, b := range []byte(v) { for _, b := range []byte(v) {
if b == '\'' || b == '\r' || b == '\n' || characters.InvalidAscii(b) { if b == '\'' || b == '\r' || b == '\n' || characters.InvalidASCII(b) {
return true return true
} }
} }
@@ -517,12 +548,26 @@ func (enc *Encoder) encodeQuotedString(multiline bool, b []byte, v string) []byt
del = 0x7f del = 0x7f
) )
for _, r := range []byte(v) { bv := []byte(v)
for i := 0; i < len(bv); i++ {
r := bv[i]
switch r { switch r {
case '\\': case '\\':
b = append(b, `\\`...) b = append(b, `\\`...)
case '"': case '"':
if multiline {
// Quotation marks do not need escaping in multiline strings unless
// three appear consecutively, which would end the string early. When a
// run of three quotes appears, escape all of them; it also reads better.
if i+2 >= len(bv) || bv[i+1] != '"' || bv[i+2] != '"' {
b = append(b, r)
} else {
b = append(b, `\"\"\"`...)
i += 2
}
} else {
b = append(b, `\"`...) b = append(b, `\"`...)
}
case '\b': case '\b':
b = append(b, `\b`...) b = append(b, `\b`...)
case '\f': case '\f':
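The multiline quoting rule can be illustrated with a self-contained helper; `escapeMultilineQuotes` is a sketch of just this rule, not go-toml's API: lone quotation marks stay literal, but a run of three would terminate a TOML multiline basic string, so all three are escaped.

```go
package main

import "fmt"

// escapeMultilineQuotes leaves single quotation marks literal and
// escapes any run of three consecutive ones.
func escapeMultilineQuotes(s string) string {
	b := []byte(s)
	var out []byte
	for i := 0; i < len(b); i++ {
		c := b[i]
		if c == '"' && i+2 < len(b) && b[i+1] == '"' && b[i+2] == '"' {
			out = append(out, `\"\"\"`...)
			i += 2 // the loop increment consumes the first quote
			continue
		}
		out = append(out, c)
	}
	return string(out)
}

func main() {
	fmt.Println(escapeMultilineQuotes(`say "hi"`))     // say "hi"
	fmt.Println(escapeMultilineQuotes(`end"""marker`)) // end\"\"\"marker
}
```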
@@ -559,9 +604,9 @@ func (enc *Encoder) encodeUnquotedKey(b []byte, v string) []byte {
return append(b, v...) return append(b, v...)
} }
func (enc *Encoder) encodeTableHeader(ctx encoderCtx, b []byte) ([]byte, error) { func (enc *Encoder) encodeTableHeader(ctx encoderCtx, b []byte) []byte {
if len(ctx.parentKey) == 0 { if len(ctx.parentKey) == 0 {
return b, nil return b
} }
b = enc.encodeComment(ctx.indent, ctx.options.comment, b) b = enc.encodeComment(ctx.indent, ctx.options.comment, b)
@@ -581,10 +626,9 @@ func (enc *Encoder) encodeTableHeader(ctx encoderCtx, b []byte) ([]byte, error)
b = append(b, "]\n"...) b = append(b, "]\n"...)
return b, nil return b
} }
//nolint:cyclop
func (enc *Encoder) encodeKey(b []byte, k string) []byte { func (enc *Encoder) encodeKey(b []byte, k string) []byte {
needsQuotation := false needsQuotation := false
cannotUseLiteral := false cannotUseLiteral := false
@@ -621,30 +665,33 @@ func (enc *Encoder) encodeKey(b []byte, k string) []byte {
func (enc *Encoder) keyToString(k reflect.Value) (string, error) { func (enc *Encoder) keyToString(k reflect.Value) (string, error) {
keyType := k.Type() keyType := k.Type()
switch { if keyType.Implements(textMarshalerType) {
case keyType.Kind() == reflect.String:
return k.String(), nil
case keyType.Implements(textMarshalerType):
keyB, err := k.Interface().(encoding.TextMarshaler).MarshalText() keyB, err := k.Interface().(encoding.TextMarshaler).MarshalText()
if err != nil { if err != nil {
return "", fmt.Errorf("toml: error marshalling key %v from text: %w", k, err) return "", fmt.Errorf("toml: error marshalling key %v from text: %w", k, err)
} }
return string(keyB), nil return string(keyB), nil
}
case keyType.Kind() == reflect.Int || keyType.Kind() == reflect.Int8 || keyType.Kind() == reflect.Int16 || keyType.Kind() == reflect.Int32 || keyType.Kind() == reflect.Int64: switch keyType.Kind() {
case reflect.String:
return k.String(), nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return strconv.FormatInt(k.Int(), 10), nil return strconv.FormatInt(k.Int(), 10), nil
case keyType.Kind() == reflect.Uint || keyType.Kind() == reflect.Uint8 || keyType.Kind() == reflect.Uint16 || keyType.Kind() == reflect.Uint32 || keyType.Kind() == reflect.Uint64: case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return strconv.FormatUint(k.Uint(), 10), nil return strconv.FormatUint(k.Uint(), 10), nil
case keyType.Kind() == reflect.Float32: case reflect.Float32:
return strconv.FormatFloat(k.Float(), 'f', -1, 32), nil return strconv.FormatFloat(k.Float(), 'f', -1, 32), nil
case keyType.Kind() == reflect.Float64: case reflect.Float64:
return strconv.FormatFloat(k.Float(), 'f', -1, 64), nil return strconv.FormatFloat(k.Float(), 'f', -1, 64), nil
}
default:
return "", fmt.Errorf("toml: type %s is not supported as a map key", keyType.Kind()) return "", fmt.Errorf("toml: type %s is not supported as a map key", keyType.Kind())
}
} }
func (enc *Encoder) encodeMap(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, error) { func (enc *Encoder) encodeMap(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, error) {
@@ -657,9 +704,19 @@ func (enc *Encoder) encodeMap(b []byte, ctx encoderCtx, v reflect.Value) ([]byte
for iter.Next() { for iter.Next() {
v := iter.Value() v := iter.Value()
if isNil(v) { // Handle nil values: convert nil pointers to zero value,
// skip nil interfaces and nil maps.
switch v.Kind() {
case reflect.Ptr:
if v.IsNil() {
v = reflect.Zero(v.Type().Elem())
}
case reflect.Interface, reflect.Map:
if v.IsNil() {
continue continue
} }
default:
}
k, err := enc.keyToString(iter.Key()) k, err := enc.keyToString(iter.Key())
if err != nil { if err != nil {
@@ -748,9 +805,8 @@ func walkStruct(ctx encoderCtx, t *table, v reflect.Value) {
walkStruct(ctx, t, f.Elem()) walkStruct(ctx, t, f.Elem())
} }
continue continue
} else {
k = fieldType.Name
} }
k = fieldType.Name
} }
if isNil(f) { if isNil(f) {
@@ -760,6 +816,7 @@ func walkStruct(ctx encoderCtx, t *table, v reflect.Value) {
options := valueOptions{ options := valueOptions{
multiline: opts.multiline, multiline: opts.multiline,
omitempty: opts.omitempty, omitempty: opts.omitempty,
omitzero: opts.omitzero,
commented: opts.commented, commented: opts.commented,
comment: fieldType.Tag.Get("comment"), comment: fieldType.Tag.Get("comment"),
} }
@@ -820,6 +877,7 @@ type tagOptions struct {
multiline bool multiline bool
inline bool inline bool
omitempty bool omitempty bool
omitzero bool
commented bool commented bool
} }
@@ -832,7 +890,7 @@ func parseTag(tag string) (string, tagOptions) {
} }
raw := tag[idx+1:] raw := tag[idx+1:]
tag = string(tag[:idx]) tag = tag[:idx]
for raw != "" { for raw != "" {
var o string var o string
i := strings.Index(raw, ",") i := strings.Index(raw, ",")
@@ -848,6 +906,8 @@ func parseTag(tag string) (string, tagOptions) {
opts.inline = true opts.inline = true
case "omitempty": case "omitempty":
opts.omitempty = true opts.omitempty = true
case "omitzero":
opts.omitzero = true
case "commented": case "commented":
opts.commented = true opts.commented = true
} }
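The struct-tag option loop above splits the text after the first comma into flags such as `omitempty` and the new `omitzero`. A standalone sketch of that splitting, using `strings.Cut` instead of go-toml's internals (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseTagOptions splits a struct tag value: the text before the first
// comma is the key name, each remaining token is an option flag.
func parseTagOptions(tag string) (string, map[string]bool) {
	opts := map[string]bool{}
	name, raw, _ := strings.Cut(tag, ",")
	for raw != "" {
		var o string
		o, raw, _ = strings.Cut(raw, ",")
		opts[o] = true
	}
	return name, opts
}

func main() {
	name, opts := parseTagOptions("updated,omitzero,multiline")
	fmt.Println(name, opts["omitzero"], opts["multiline"], opts["omitempty"])
	// updated true true false
}
```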
@@ -866,10 +926,7 @@ func (enc *Encoder) encodeTable(b []byte, ctx encoderCtx, t table) ([]byte, erro
} }
if !ctx.skipTableHeader { if !ctx.skipTableHeader {
b, err = enc.encodeTableHeader(ctx, b) b = enc.encodeTableHeader(ctx, b)
if err != nil {
return nil, err
}
if enc.indentTables && len(ctx.parentKey) > 0 { if enc.indentTables && len(ctx.parentKey) > 0 {
ctx.indent++ ctx.indent++
@@ -882,6 +939,9 @@ func (enc *Encoder) encodeTable(b []byte, ctx encoderCtx, t table) ([]byte, erro
if shouldOmitEmpty(kv.Options, kv.Value) { if shouldOmitEmpty(kv.Options, kv.Value) {
continue continue
} }
if kv.Options.omitzero && shouldOmitZero(kv.Options, kv.Value) {
continue
}
hasNonEmptyKV = true hasNonEmptyKV = true
ctx.setKey(kv.Key) ctx.setKey(kv.Key)
@@ -901,6 +961,9 @@ func (enc *Encoder) encodeTable(b []byte, ctx encoderCtx, t table) ([]byte, erro
if shouldOmitEmpty(table.Options, table.Value) { if shouldOmitEmpty(table.Options, table.Value) {
continue continue
} }
if table.Options.omitzero && shouldOmitZero(table.Options, table.Value) {
continue
}
if first { if first {
first = false first = false
if hasNonEmptyKV { if hasNonEmptyKV {
@@ -935,6 +998,9 @@ func (enc *Encoder) encodeTableInline(b []byte, ctx encoderCtx, t table) ([]byte
if shouldOmitEmpty(kv.Options, kv.Value) { if shouldOmitEmpty(kv.Options, kv.Value) {
continue continue
} }
if kv.Options.omitzero && shouldOmitZero(kv.Options, kv.Value) {
continue
}
if first { if first {
first = false first = false
@@ -963,11 +1029,14 @@ func willConvertToTable(ctx encoderCtx, v reflect.Value) bool {
if !v.IsValid() { if !v.IsValid() {
return false return false
} }
if v.Type() == timeType || v.Type().Implements(textMarshalerType) || (v.Kind() != reflect.Ptr && v.CanAddr() && reflect.PointerTo(v.Type()).Implements(textMarshalerType)) { t := v.Type()
if t == timeType || t.Implements(textMarshalerType) {
return false
}
if v.Kind() != reflect.Ptr && v.CanAddr() && reflect.PointerTo(t).Implements(textMarshalerType) {
return false return false
} }
t := v.Type()
switch t.Kind() { switch t.Kind() {
case reflect.Map, reflect.Struct: case reflect.Map, reflect.Struct:
return !ctx.inline return !ctx.inline


@@ -1,7 +1,6 @@
 package toml
 import (
-	"github.com/pelletier/go-toml/v2/internal/danger"
 	"github.com/pelletier/go-toml/v2/internal/tracker"
 	"github.com/pelletier/go-toml/v2/unstable"
 )
@@ -13,6 +12,9 @@ type strict struct {
 	key     tracker.KeyTracker
 	missing []unstable.ParserError
+
+	// Reference to the document for computing key ranges.
+	doc []byte
 }
 func (s *strict) EnterTable(node *unstable.Node) {
@@ -53,7 +55,7 @@ func (s *strict) MissingTable(node *unstable.Node) {
 	}
 	s.missing = append(s.missing, unstable.ParserError{
-		Highlight: keyLocation(node),
+		Highlight: s.keyLocation(node),
 		Message:   "missing table",
 		Key:       s.key.Key(),
 	})
@@ -65,7 +67,7 @@ func (s *strict) MissingField(node *unstable.Node) {
 	}
 	s.missing = append(s.missing, unstable.ParserError{
-		Highlight: keyLocation(node),
+		Highlight: s.keyLocation(node),
 		Message:   "missing field",
 		Key:       s.key.Key(),
 	})
@@ -88,7 +90,7 @@ func (s *strict) Error(doc []byte) error {
 	return err
 }
-func keyLocation(node *unstable.Node) []byte {
+func (s *strict) keyLocation(node *unstable.Node) []byte {
 	k := node.Key()
 	hasOne := k.Next()
@@ -96,12 +98,17 @@ func keyLocation(node *unstable.Node) []byte {
 		panic("should not be called with empty key")
 	}
-	start := k.Node().Data
-	end := k.Node().Data
+	// Get the range from the first key to the last key.
+	firstRaw := k.Node().Raw
+	lastRaw := firstRaw
 	for k.Next() {
-		end = k.Node().Data
+		lastRaw = k.Node().Raw
 	}
-	return danger.BytesRange(start, end)
+	// Compute the slice from the document using the ranges.
+	start := firstRaw.Offset
+	end := lastRaw.Offset + lastRaw.Length
+	return s.doc[start:end]
 }


@@ -0,0 +1,597 @@
#!/usr/bin/env bash
set -uo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Go versions to test (1.11 through 1.26)
GO_VERSIONS=(
"1.11"
"1.12"
"1.13"
"1.14"
"1.15"
"1.16"
"1.17"
"1.18"
"1.19"
"1.20"
"1.21"
"1.22"
"1.23"
"1.24"
"1.25"
"1.26"
)
# Default values
PARALLEL=true
VERBOSE=false
OUTPUT_DIR="test-results"
DOCKER_TIMEOUT="10m"
usage() {
cat << EOF
Usage: $0 [OPTIONS] [GO_VERSIONS...]
Test go-toml across multiple Go versions using Docker containers.
The script reports the lowest continuous supported Go version (where all subsequent
versions pass) and only exits with non-zero status if either of the two most recent
Go versions fail, indicating immediate attention is needed.
Note: For Go versions < 1.21, the script automatically updates go.mod to match the
target version, but older versions may still fail due to missing standard library
features (e.g., the 'slices' package introduced in Go 1.21).
OPTIONS:
-h, --help Show this help message
-s, --sequential Run tests sequentially instead of in parallel
-v, --verbose Enable verbose output
-o, --output DIR Output directory for test results (default: test-results)
-t, --timeout TIME Docker timeout for each test (default: 10m)
--list List available Go versions and exit
ARGUMENTS:
GO_VERSIONS Specific Go versions to test (default: all supported versions)
Examples: 1.21 1.22 1.23
EXAMPLES:
$0 # Test all Go versions in parallel
$0 --sequential # Test all Go versions sequentially
$0 1.21 1.22 1.23 # Test specific versions
$0 --verbose --output ./results 1.25 1.26 # Verbose output to custom directory
EXIT CODES:
0 Recent Go versions pass (good compatibility)
1 Recent Go versions fail (needs attention) or script error
EOF
}
log() {
echo -e "${BLUE}[$(date +'%H:%M:%S')]${NC} $*" >&2
}
log_success() {
echo -e "${GREEN}[$(date +'%H:%M:%S')] ✓${NC} $*" >&2
}
log_error() {
echo -e "${RED}[$(date +'%H:%M:%S')] ✗${NC} $*" >&2
}
log_warning() {
echo -e "${YELLOW}[$(date +'%H:%M:%S')] ⚠${NC} $*" >&2
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
usage
exit 0
;;
-s|--sequential)
PARALLEL=false
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-o|--output)
OUTPUT_DIR="$2"
shift 2
;;
-t|--timeout)
DOCKER_TIMEOUT="$2"
shift 2
;;
--list)
echo "Available Go versions:"
printf '%s\n' "${GO_VERSIONS[@]}"
exit 0
;;
-*)
echo "Unknown option: $1" >&2
usage
exit 1
;;
*)
# Remaining arguments are Go versions
break
;;
esac
done
# If specific versions provided, use those instead of defaults
if [[ $# -gt 0 ]]; then
GO_VERSIONS=("$@")
fi
# Validate Go versions
for version in "${GO_VERSIONS[@]}"; do
if ! [[ "$version" =~ ^1\.(1[1-9]|2[0-6])$ ]]; then
log_error "Invalid Go version: $version. Supported versions: 1.11-1.26"
exit 1
fi
done
# Check if Docker is available
if ! command -v docker &> /dev/null; then
log_error "Docker is required but not installed or not in PATH"
exit 1
fi
# Check if Docker daemon is running
if ! docker info &> /dev/null; then
log_error "Docker daemon is not running"
exit 1
fi
# Create output directory
mkdir -p "$OUTPUT_DIR"
# Function to test a single Go version
test_go_version() {
local go_version="$1"
local container_name="go-toml-test-${go_version}"
local result_file="${OUTPUT_DIR}/go-${go_version}.txt"
local dockerfile_content
log "Testing Go $go_version..."
# Create a temporary Dockerfile for this version
# For Go versions < 1.21, we need to update go.mod to match the Go version
local needs_go_mod_update=false
if [[ $(echo "$go_version 1.21" | tr ' ' '\n' | sort -V | head -n1) == "$go_version" && "$go_version" != "1.21" ]]; then
needs_go_mod_update=true
fi
dockerfile_content="FROM golang:${go_version}-alpine
# Install git (required for go mod)
RUN apk add --no-cache git
# Set working directory
WORKDIR /app
# Copy source code
COPY . ."
# Add go.mod update step for older Go versions
if [[ "$needs_go_mod_update" == true ]]; then
dockerfile_content="$dockerfile_content
# Update go.mod to match Go version (required for Go < 1.21)
RUN if [ -f go.mod ]; then sed -i 's/^go [0-9]\\+\\.[0-9]\\+\\(\\.[0-9]\\+\\)\\?/go $go_version/' go.mod; fi
# Note: Go versions < 1.21 may fail due to missing standard library packages (e.g., slices)
# This is expected for projects that use Go 1.21+ features"
fi
dockerfile_content="$dockerfile_content
# Run tests
CMD [\"sh\", \"-c\", \"go version && echo '--- Running go test ./... ---' && go test ./...\"]"
# Create temporary directory for this test
local temp_dir
temp_dir=$(mktemp -d)
# Copy source to temp directory (excluding test results and git)
rsync -a --exclude="$OUTPUT_DIR" --exclude=".git" --exclude="*.test" . "$temp_dir/"
# Create Dockerfile in temp directory
echo "$dockerfile_content" > "$temp_dir/Dockerfile"
# Build and run container
local exit_code=0
local output
if $VERBOSE; then
log "Building Docker image for Go $go_version..."
fi
# Capture both stdout and stderr, and the exit code
if output=$(cd "$temp_dir" && timeout "$DOCKER_TIMEOUT" docker build -t "$container_name" . 2>&1 && \
timeout "$DOCKER_TIMEOUT" docker run --rm "$container_name" 2>&1); then
log_success "Go $go_version: PASSED"
echo "PASSED" > "${result_file}.status"
else
exit_code=$?
log_error "Go $go_version: FAILED (exit code: $exit_code)"
echo "FAILED" > "${result_file}.status"
fi
# Save full output
echo "$output" > "$result_file"
# Clean up
docker rmi "$container_name" &> /dev/null || true
rm -rf "$temp_dir"
if $VERBOSE; then
echo "--- Go $go_version output ---"
echo "$output"
echo "--- End Go $go_version output ---"
fi
return $exit_code
}
# Function to run tests in parallel
run_parallel() {
local pids=()
local failed_versions=()
log "Starting parallel tests for ${#GO_VERSIONS[@]} Go versions..."
# Start all tests in background
for version in "${GO_VERSIONS[@]}"; do
test_go_version "$version" &
pids+=($!)
done
# Wait for all tests to complete
for i in "${!pids[@]}"; do
local pid=${pids[$i]}
local version=${GO_VERSIONS[$i]}
if ! wait $pid; then
failed_versions+=("$version")
fi
done
return ${#failed_versions[@]}
}
# Function to run tests sequentially
run_sequential() {
local failed_versions=()
log "Starting sequential tests for ${#GO_VERSIONS[@]} Go versions..."
for version in "${GO_VERSIONS[@]}"; do
if ! test_go_version "$version"; then
failed_versions+=("$version")
fi
done
return ${#failed_versions[@]}
}
# Main execution
main() {
local start_time
start_time=$(date +%s)
log "Starting Go version compatibility tests..."
log "Testing versions: ${GO_VERSIONS[*]}"
log "Output directory: $OUTPUT_DIR"
log "Parallel execution: $PARALLEL"
local failed_count
if $PARALLEL; then
run_parallel
failed_count=$?
else
run_sequential
failed_count=$?
fi
local end_time
end_time=$(date +%s)
local duration=$((end_time - start_time))
# Collect results for display
local passed_versions=()
local failed_versions=()
local unknown_versions=()
local passed_count=0
for version in "${GO_VERSIONS[@]}"; do
local status_file="${OUTPUT_DIR}/go-${version}.txt.status"
if [[ -f "$status_file" ]]; then
local status
status=$(cat "$status_file")
if [[ "$status" == "PASSED" ]]; then
passed_versions+=("$version")
((passed_count++))
else
failed_versions+=("$version")
fi
else
unknown_versions+=("$version")
fi
done
# Generate summary report
local summary_file="${OUTPUT_DIR}/summary.txt"
{
echo "Go Version Compatibility Test Summary"
echo "====================================="
echo "Date: $(date)"
echo "Duration: ${duration}s"
echo "Parallel: $PARALLEL"
echo ""
echo "Results:"
for version in "${GO_VERSIONS[@]}"; do
local status_file="${OUTPUT_DIR}/go-${version}.txt.status"
if [[ -f "$status_file" ]]; then
local status
status=$(cat "$status_file")
if [[ "$status" == "PASSED" ]]; then
echo " Go $version: ✓ PASSED"
else
echo " Go $version: ✗ FAILED"
fi
else
echo " Go $version: ? UNKNOWN (no status file)"
fi
done
echo ""
echo "Summary: $passed_count/${#GO_VERSIONS[@]} versions passed"
if [[ $failed_count -gt 0 ]]; then
echo ""
echo "Failed versions details:"
for version in "${failed_versions[@]}"; do
echo ""
echo "--- Go $version (FAILED) ---"
local result_file="${OUTPUT_DIR}/go-${version}.txt"
if [[ -f "$result_file" ]]; then
tail -n 30 "$result_file"
fi
done
fi
} > "$summary_file"
# Find lowest continuous supported version and check recent versions
local lowest_continuous_version=""
local recent_versions_failed=false
# Sort versions to ensure proper order
local sorted_versions=()
for version in "${GO_VERSIONS[@]}"; do
sorted_versions+=("$version")
done
# Sort versions numerically (1.11, 1.12, ..., 1.26)
IFS=$'\n' sorted_versions=($(sort -V <<< "${sorted_versions[*]}"))
# Find lowest continuous supported version (all versions from this point onwards pass)
for version in "${sorted_versions[@]}"; do
local status_file="${OUTPUT_DIR}/go-${version}.txt.status"
local all_subsequent_pass=true
# Check if this version and all subsequent versions pass
local found_current=false
for check_version in "${sorted_versions[@]}"; do
if [[ "$check_version" == "$version" ]]; then
found_current=true
fi
if [[ "$found_current" == true ]]; then
local check_status_file="${OUTPUT_DIR}/go-${check_version}.txt.status"
if [[ -f "$check_status_file" ]]; then
local status
status=$(cat "$check_status_file")
if [[ "$status" != "PASSED" ]]; then
all_subsequent_pass=false
break
fi
else
all_subsequent_pass=false
break
fi
fi
done
if [[ "$all_subsequent_pass" == true ]]; then
lowest_continuous_version="$version"
break
fi
done
# Check if the two most recent versions failed
local num_versions=${#sorted_versions[@]}
if [[ $num_versions -ge 2 ]]; then
local second_recent="${sorted_versions[$((num_versions-2))]}"
local most_recent="${sorted_versions[$((num_versions-1))]}"
local second_recent_status_file="${OUTPUT_DIR}/go-${second_recent}.txt.status"
local most_recent_status_file="${OUTPUT_DIR}/go-${most_recent}.txt.status"
local second_recent_failed=false
local most_recent_failed=false
if [[ -f "$second_recent_status_file" ]]; then
local status
status=$(cat "$second_recent_status_file")
if [[ "$status" != "PASSED" ]]; then
second_recent_failed=true
fi
else
second_recent_failed=true
fi
if [[ -f "$most_recent_status_file" ]]; then
local status
status=$(cat "$most_recent_status_file")
if [[ "$status" != "PASSED" ]]; then
most_recent_failed=true
fi
else
most_recent_failed=true
fi
if [[ "$second_recent_failed" == true || "$most_recent_failed" == true ]]; then
recent_versions_failed=true
fi
elif [[ $num_versions -eq 1 ]]; then
# Only one version tested, check if it's the most recent and failed
local only_version="${sorted_versions[0]}"
local only_status_file="${OUTPUT_DIR}/go-${only_version}.txt.status"
if [[ -f "$only_status_file" ]]; then
local status
status=$(cat "$only_status_file")
if [[ "$status" != "PASSED" ]]; then
recent_versions_failed=true
fi
else
recent_versions_failed=true
fi
fi
# Display summary
echo ""
log "Test completed in ${duration}s"
log "Summary report: $summary_file"
echo ""
echo "========================================"
echo " FINAL RESULTS"
echo "========================================"
echo ""
# Display passed versions
if [[ ${#passed_versions[@]} -gt 0 ]]; then
log_success "PASSED (${#passed_versions[@]}/${#GO_VERSIONS[@]}):"
# Sort passed versions for display
local sorted_passed=()
for version in "${sorted_versions[@]}"; do
for passed_version in "${passed_versions[@]}"; do
if [[ "$version" == "$passed_version" ]]; then
sorted_passed+=("$version")
break
fi
done
done
for version in "${sorted_passed[@]}"; do
        echo -e "    ${GREEN}✓${NC} Go $version"
done
echo ""
fi
# Display failed versions
if [[ ${#failed_versions[@]} -gt 0 ]]; then
log_error "FAILED (${#failed_versions[@]}/${#GO_VERSIONS[@]}):"
# Sort failed versions for display
local sorted_failed=()
for version in "${sorted_versions[@]}"; do
for failed_version in "${failed_versions[@]}"; do
if [[ "$version" == "$failed_version" ]]; then
sorted_failed+=("$version")
break
fi
done
done
for version in "${sorted_failed[@]}"; do
        echo -e "    ${RED}✗${NC} Go $version"
done
echo ""
# Show failure details
echo "========================================"
echo " FAILURE DETAILS"
echo "========================================"
echo ""
for version in "${sorted_failed[@]}"; do
echo -e "${RED}--- Go $version FAILURE LOGS (last 30 lines) ---${NC}"
local result_file="${OUTPUT_DIR}/go-${version}.txt"
if [[ -f "$result_file" ]]; then
tail -n 30 "$result_file" | sed 's/^/ /'
else
echo " No log file found: $result_file"
fi
echo ""
done
fi
# Display unknown versions
if [[ ${#unknown_versions[@]} -gt 0 ]]; then
log_warning "UNKNOWN (${#unknown_versions[@]}/${#GO_VERSIONS[@]}):"
for version in "${unknown_versions[@]}"; do
echo -e " ${YELLOW}?${NC} Go $version (no status file)"
done
echo ""
fi
echo "========================================"
echo " COMPATIBILITY SUMMARY"
echo "========================================"
echo ""
if [[ -n "$lowest_continuous_version" ]]; then
log_success "Lowest continuous supported version: Go $lowest_continuous_version"
echo " (All versions from Go $lowest_continuous_version onwards pass)"
else
log_error "No continuous version support found"
echo " (No version has all subsequent versions passing)"
fi
echo ""
echo "========================================"
echo "Full detailed logs available in: $OUTPUT_DIR"
echo "========================================"
# Determine exit code based on recent versions
if [[ "$recent_versions_failed" == true ]]; then
log_error "OVERALL RESULT: Recent Go versions failed - this needs attention!"
if [[ -n "$lowest_continuous_version" ]]; then
echo "Note: Continuous support starts from Go $lowest_continuous_version"
fi
exit 1
else
log_success "OVERALL RESULT: Recent Go versions pass - compatibility looks good!"
if [[ -n "$lowest_continuous_version" ]]; then
echo "Continuous support starts from Go $lowest_continuous_version"
fi
exit 0
fi
}
# Trap to clean up on exit
cleanup() {
# Kill any remaining background processes
jobs -p | xargs -r kill 2>/dev/null || true
# Clean up any remaining Docker containers
docker ps -q --filter "name=go-toml-test-" | xargs -r docker stop 2>/dev/null || true
docker images -q --filter "reference=go-toml-test-*" | xargs -r docker rmi 2>/dev/null || true
}
trap cleanup EXIT
# Run main function
main


@@ -6,9 +6,18 @@ import (
 	"time"
 )
-var timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
-var textMarshalerType = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
-var textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
-var mapStringInterfaceType = reflect.TypeOf(map[string]interface{}(nil))
-var sliceInterfaceType = reflect.TypeOf([]interface{}(nil))
-var stringType = reflect.TypeOf("")
+// isZeroer is used to check if a type has a custom IsZero method.
+// This allows custom types to define their own zero-value semantics.
+type isZeroer interface {
+	IsZero() bool
+}
+
+var (
+	timeType               = reflect.TypeOf((*time.Time)(nil)).Elem()
+	textMarshalerType      = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
+	textUnmarshalerType    = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
+	isZeroerType           = reflect.TypeOf((*isZeroer)(nil)).Elem()
+	mapStringInterfaceType = reflect.TypeOf(map[string]interface{}(nil))
+	sliceInterfaceType     = reflect.TypeOf([]interface{}(nil))
+	stringType             = reflect.TypeOf("")
+)


@@ -12,7 +12,6 @@ import (
 	"sync/atomic"
 	"time"
-	"github.com/pelletier/go-toml/v2/internal/danger"
 	"github.com/pelletier/go-toml/v2/internal/tracker"
 	"github.com/pelletier/go-toml/v2/unstable"
 )
@@ -57,13 +56,18 @@ func (d *Decoder) DisallowUnknownFields() *Decoder {
 // EnableUnmarshalerInterface allows to enable unmarshaler interface.
 //
-// With this feature enabled, types implementing the unstable/Unmarshaler
+// With this feature enabled, types implementing the unstable.Unmarshaler
 // interface can be decoded from any structure of the document. It allows types
 // that don't have a straightforward TOML representation to provide their own
 // decoding logic.
 //
-// Currently, types can only decode from a single value. Tables and array tables
-// are not supported.
+// The UnmarshalTOML method receives raw TOML bytes:
+//   - For single values: the raw value bytes (e.g., `"hello"` for a string)
+//   - For tables: all key-value lines belonging to that table
+//   - For inline tables/arrays: the raw bytes of the inline structure
+//
+// The unstable.RawMessage type can be used to capture raw TOML bytes for
+// later processing, similar to json.RawMessage.
 //
 // *Unstable:* This method does not follow the compatibility guarantees of
 // semver. It can be changed or removed without a new major version being
@@ -123,6 +127,7 @@ func (d *Decoder) Decode(v interface{}) error {
 	dec := decoder{
 		strict: strict{
 			Enabled: d.strict,
+			doc:     b,
 		},
 		unmarshalerInterface: d.unmarshalerInterface,
 	}
@@ -226,7 +231,7 @@ func (d *decoder) FromParser(v interface{}) error {
 	}
 	if r.IsNil() {
-		return fmt.Errorf("toml: decoding pointer target cannot be nil")
+		return errors.New("toml: decoding pointer target cannot be nil")
 	}
 	r = r.Elem()
@@ -273,7 +278,7 @@ func (d *decoder) handleRootExpression(expr *unstable.Node, v reflect.Value) err
 	var err error
 	var first bool // used for to clear array tables on first use
-	if !(d.skipUntilTable && expr.Kind == unstable.KeyValue) {
+	if !d.skipUntilTable || expr.Kind != unstable.KeyValue {
 		first, err = d.seen.CheckExpression(expr)
 		if err != nil {
 			return err
@@ -378,7 +383,7 @@ func (d *decoder) handleArrayTableCollectionLast(key unstable.Iterator, v reflec
 	case reflect.Array:
 		idx := d.arrayIndex(true, v)
 		if idx >= v.Len() {
-			return v, fmt.Errorf("%s at position %d", d.typeMismatchError("array table", v.Type()), idx)
+			return v, fmt.Errorf("%w at position %d", d.typeMismatchError("array table", v.Type()), idx)
 		}
 		elem := v.Index(idx)
 		_, err := d.handleArrayTable(key, elem)
@@ -416,27 +421,51 @@ func (d *decoder) handleArrayTableCollection(key unstable.Iterator, v reflect.Va
 		return v, nil
 	case reflect.Slice:
-		elem := v.Index(v.Len() - 1)
+		// Create a new element when the slice is empty; otherwise operate on
+		// the last element.
+		var (
+			elem    reflect.Value
+			created bool
+		)
+		if v.Len() == 0 {
+			created = true
+			elemType := v.Type().Elem()
+			if elemType.Kind() == reflect.Interface {
+				elem = makeMapStringInterface()
+			} else {
+				elem = reflect.New(elemType).Elem()
+			}
+		} else {
+			elem = v.Index(v.Len() - 1)
+		}
 		x, err := d.handleArrayTable(key, elem)
 		if err != nil || d.skipUntilTable {
 			return reflect.Value{}, err
 		}
 		if x.IsValid() {
-			elem.Set(x)
+			if created {
+				elem = x
+			} else {
+				elem.Set(x)
+			}
 		}
+		if created {
+			return reflect.Append(v, elem), nil
+		}
 		return v, err
 	case reflect.Array:
 		idx := d.arrayIndex(false, v)
 		if idx >= v.Len() {
-			return v, fmt.Errorf("%s at position %d", d.typeMismatchError("array table", v.Type()), idx)
+			return v, fmt.Errorf("%w at position %d", d.typeMismatchError("array table", v.Type()), idx)
 		}
 		elem := v.Index(idx)
 		_, err := d.handleArrayTable(key, elem)
 		return v, err
-	}
-	return d.handleArrayTable(key, v)
+	default:
+		return d.handleArrayTable(key, v)
+	}
 }
 func (d *decoder) handleKeyPart(key unstable.Iterator, v reflect.Value, nextFn handlerFn, makeFn valueMakerFn) (reflect.Value, error) {
@@ -470,7 +499,8 @@ func (d *decoder) handleKeyPart(key unstable.Iterator, v reflect.Value, nextFn h
 	mv := v.MapIndex(mk)
 	set := false
-	if !mv.IsValid() {
+	switch {
+	case !mv.IsValid():
 		// If there is no value in the map, create a new one according to
 		// the map type. If the element type is interface, create either a
 		// map[string]interface{} or a []interface{} depending on whether
@@ -483,13 +513,13 @@ func (d *decoder) handleKeyPart(key unstable.Iterator, v reflect.Value, nextFn h
 			mv = reflect.New(t).Elem()
 		}
 		set = true
-	} else if mv.Kind() == reflect.Interface {
+	case mv.Kind() == reflect.Interface:
 		mv = mv.Elem()
 		if !mv.IsValid() {
 			mv = makeFn()
 		}
 		set = true
-	} else if !mv.CanAddr() {
+	case !mv.CanAddr():
 		vt := v.Type()
 		t := vt.Elem()
 		oldmv := mv
@@ -574,9 +604,8 @@ func (d *decoder) handleArrayTablePart(key unstable.Iterator, v reflect.Value) (
 // cannot handle it.
 func (d *decoder) handleTable(key unstable.Iterator, v reflect.Value) (reflect.Value, error) {
 	if v.Kind() == reflect.Slice {
-		if v.Len() == 0 {
-			return reflect.Value{}, unstable.NewParserError(key.Node().Data, "cannot store a table in a slice")
-		}
+		// For non-empty slices, work with the last element
+		if v.Len() > 0 {
 			elem := v.Index(v.Len() - 1)
 			x, err := d.handleTable(key, elem)
 			if err != nil {
@@ -587,6 +616,17 @@ func (d *decoder) handleTable(key unstable.Iterator, v reflect.Value) (reflect.V
 			}
 			return reflect.Value{}, nil
+		}
+
+		// Empty slice - check if it implements Unmarshaler (e.g., RawMessage)
+		// and we're at the end of the key path
+		if d.unmarshalerInterface && !key.Next() {
+			if v.CanAddr() && v.Addr().CanInterface() {
+				if outi, ok := v.Addr().Interface().(unstable.Unmarshaler); ok {
+					return d.handleKeyValuesUnmarshaler(outi)
+				}
+			}
+		}
+		return reflect.Value{}, unstable.NewParserError(key.Node().Data, "cannot store a table in a slice")
 	}
 	if key.Next() {
 		// Still scoping the key
 		return d.handleTablePart(key, v)
@@ -599,6 +639,24 @@ func (d *decoder) handleTable(key unstable.Iterator, v reflect.Value) (reflect.V
 // Handle root expressions until the end of the document or the next
 // non-key-value.
 func (d *decoder) handleKeyValues(v reflect.Value) (reflect.Value, error) {
+	// Check if target implements Unmarshaler before processing key-values.
+	// This allows types to handle entire tables themselves.
+	if d.unmarshalerInterface {
+		vv := v
+		for vv.Kind() == reflect.Ptr {
+			if vv.IsNil() {
+				vv.Set(reflect.New(vv.Type().Elem()))
+			}
+			vv = vv.Elem()
+		}
+		if vv.CanAddr() && vv.Addr().CanInterface() {
+			if outi, ok := vv.Addr().Interface().(unstable.Unmarshaler); ok {
+				// Collect all key-value expressions for this table
+				return d.handleKeyValuesUnmarshaler(outi)
+			}
+		}
+	}
 	var rv reflect.Value
 	for d.nextExpr() {
 		expr := d.expr()
@@ -628,6 +686,41 @@ func (d *decoder) handleKeyValues(v reflect.Value) (reflect.Value, error) {
 	return rv, nil
 }
+// handleKeyValuesUnmarshaler collects all key-value expressions for a table
+// and passes them to the Unmarshaler as raw TOML bytes.
+func (d *decoder) handleKeyValuesUnmarshaler(u unstable.Unmarshaler) (reflect.Value, error) {
+	// Collect raw bytes from all key-value expressions for this table.
+	// We use the Raw field on each KeyValue expression to preserve the
+	// original formatting (whitespace, quoting style, etc.) from the document.
+	var buf []byte
+	for d.nextExpr() {
+		expr := d.expr()
+		if expr.Kind != unstable.KeyValue {
+			d.stashExpr()
+			break
+		}
+		_, err := d.seen.CheckExpression(expr)
+		if err != nil {
+			return reflect.Value{}, err
+		}
+		// Use the raw bytes from the original document to preserve formatting
+		if expr.Raw.Length > 0 {
+			raw := d.p.Raw(expr.Raw)
+			buf = append(buf, raw...)
+		}
+		buf = append(buf, '\n')
+	}
+	if err := u.UnmarshalTOML(buf); err != nil {
+		return reflect.Value{}, err
+	}
+	return reflect.Value{}, nil
+}
+
 type (
 	handlerFn    func(key unstable.Iterator, v reflect.Value) (reflect.Value, error)
 	valueMakerFn func() reflect.Value
@@ -672,15 +765,22 @@ func (d *decoder) handleValue(value *unstable.Node, v reflect.Value) error {
 	if d.unmarshalerInterface {
 		if v.CanAddr() && v.Addr().CanInterface() {
 			if outi, ok := v.Addr().Interface().(unstable.Unmarshaler); ok {
-				return outi.UnmarshalTOML(value)
+				// Pass raw bytes from the original document
+				return outi.UnmarshalTOML(d.p.Raw(value.Raw))
 			}
 		}
 	}
+	// Only try TextUnmarshaler for scalar types. For Array and InlineTable,
+	// fall through to struct/map unmarshaling to allow flexible unmarshaling
+	// where a type can implement UnmarshalText for string values but still
+	// be populated field-by-field from a table. See issue #974.
+	if value.Kind != unstable.Array && value.Kind != unstable.InlineTable {
 		ok, err := d.tryTextUnmarshaler(value, v)
 		if ok || err != nil {
 			return err
 		}
+	}
 	switch value.Kind {
 	case unstable.String:
@@ -821,6 +921,9 @@ func (d *decoder) unmarshalDateTime(value *unstable.Node, v reflect.Value) error
 		return err
 	}
+	if v.Kind() != reflect.Interface && v.Type() != timeType {
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("datetime", v.Type()))
+	}
 	v.Set(reflect.ValueOf(dt))
 	return nil
 }
@@ -831,14 +934,14 @@ func (d *decoder) unmarshalLocalDate(value *unstable.Node, v reflect.Value) erro
 		return err
 	}
+	if v.Kind() != reflect.Interface && v.Type() != timeType {
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("local date", v.Type()))
+	}
 	if v.Type() == timeType {
-		cast := ld.AsTime(time.Local)
-		v.Set(reflect.ValueOf(cast))
+		v.Set(reflect.ValueOf(ld.AsTime(time.Local)))
 		return nil
 	}
 	v.Set(reflect.ValueOf(ld))
 	return nil
 }
@@ -852,6 +955,9 @@ func (d *decoder) unmarshalLocalTime(value *unstable.Node, v reflect.Value) erro
 		return unstable.NewParserError(rest, "extra characters at the end of a local time")
 	}
+	if v.Kind() != reflect.Interface {
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("local time", v.Type()))
+	}
 	v.Set(reflect.ValueOf(lt))
 	return nil
 }
@@ -866,15 +972,14 @@ func (d *decoder) unmarshalLocalDateTime(value *unstable.Node, v reflect.Value)
 		return unstable.NewParserError(rest, "extra characters at the end of a local date time")
 	}
+	if v.Kind() != reflect.Interface && v.Type() != timeType {
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("local datetime", v.Type()))
+	}
 	if v.Type() == timeType {
-		cast := ldt.AsTime(time.Local)
-		v.Set(reflect.ValueOf(cast))
+		v.Set(reflect.ValueOf(ldt.AsTime(time.Local)))
 		return nil
 	}
 	v.Set(reflect.ValueOf(ldt))
 	return nil
 }
@@ -929,8 +1034,9 @@ const (
 // compile time, so it is computed during initialization.
 var maxUint int64 = math.MaxInt64
-func init() {
+func init() { //nolint:gochecknoinits
 	m := uint64(^uint(0))
+	// #nosec G115
 	if m < uint64(maxUint) {
 		maxUint = int64(m)
 	}
@@ -1010,7 +1116,7 @@ func (d *decoder) unmarshalInteger(value *unstable.Node, v reflect.Value) error
 	case reflect.Interface:
 		r = reflect.ValueOf(i)
 	default:
-		return unstable.NewParserError(d.p.Raw(value.Raw), d.typeMismatchString("integer", v.Type()))
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("integer", v.Type()))
 	}
 	if !r.Type().AssignableTo(v.Type()) {
@@ -1029,7 +1135,7 @@ func (d *decoder) unmarshalString(value *unstable.Node, v reflect.Value) error {
 	case reflect.Interface:
 		v.Set(reflect.ValueOf(string(value.Data)))
 	default:
-		return unstable.NewParserError(d.p.Raw(value.Raw), d.typeMismatchString("string", v.Type()))
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("string", v.Type()))
 	}
 	return nil
@@ -1080,35 +1186,39 @@ func (d *decoder) keyFromData(keyType reflect.Type, data []byte) (reflect.Value,
 		return reflect.Value{}, fmt.Errorf("toml: error unmarshalling key type %s from text: %w", stringType, err)
 	}
 	return mk.Elem(), nil
+	}
-	case keyType.Kind() == reflect.Int || keyType.Kind() == reflect.Int8 || keyType.Kind() == reflect.Int16 || keyType.Kind() == reflect.Int32 || keyType.Kind() == reflect.Int64:
+	switch keyType.Kind() {
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
 		key, err := strconv.ParseInt(string(data), 10, 64)
 		if err != nil {
 			return reflect.Value{}, fmt.Errorf("toml: error parsing key of type %s from integer: %w", stringType, err)
 		}
 		return reflect.ValueOf(key).Convert(keyType), nil
-	case keyType.Kind() == reflect.Uint || keyType.Kind() == reflect.Uint8 || keyType.Kind() == reflect.Uint16 || keyType.Kind() == reflect.Uint32 || keyType.Kind() == reflect.Uint64:
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
 		key, err := strconv.ParseUint(string(data), 10, 64)
 		if err != nil {
 			return reflect.Value{}, fmt.Errorf("toml: error parsing key of type %s from unsigned integer: %w", stringType, err)
 		}
 		return reflect.ValueOf(key).Convert(keyType), nil
-	case keyType.Kind() == reflect.Float32:
+	case reflect.Float32:
 		key, err := strconv.ParseFloat(string(data), 32)
 		if err != nil {
 			return reflect.Value{}, fmt.Errorf("toml: error parsing key of type %s from float: %w", stringType, err)
 		}
 		return reflect.ValueOf(float32(key)), nil
-	case keyType.Kind() == reflect.Float64:
+	case reflect.Float64:
 		key, err := strconv.ParseFloat(string(data), 64)
 		if err != nil {
 			return reflect.Value{}, fmt.Errorf("toml: error parsing key of type %s from float: %w", stringType, err)
 		}
 		return reflect.ValueOf(float64(key)), nil
-	}
+	default:
 		return reflect.Value{}, fmt.Errorf("toml: cannot convert map key of type %s to expected type %s", stringType, keyType)
+	}
 }
 func (d *decoder) handleKeyValuePart(key unstable.Iterator, value *unstable.Node, v reflect.Value) (reflect.Value, error) {
@@ -1154,6 +1264,18 @@ func (d *decoder) handleKeyValuePart(key unstable.Iterator, value *unstable.Node
 	case reflect.Struct:
 		path, found := structFieldPath(v, string(key.Node().Data))
 		if !found {
+			// If no matching struct field is found but the target implements the
+			// unstable.Unmarshaler interface (and it is enabled), delegate the
+			// decoding of this value to the custom unmarshaler.
+			if d.unmarshalerInterface {
+				if v.CanAddr() && v.Addr().CanInterface() {
+					if outi, ok := v.Addr().Interface().(unstable.Unmarshaler); ok {
+						// Pass raw bytes from the original document
+						return reflect.Value{}, outi.UnmarshalTOML(d.p.Raw(value.Raw))
+					}
+				}
+			}
+			// Otherwise, keep previous behavior and skip until the next table.
 			d.skipUntilTable = true
 			break
 		}
@@ -1259,13 +1381,13 @@ func fieldByIndex(v reflect.Value, path []int) reflect.Value {
 type fieldPathsMap = map[string][]int

-var globalFieldPathsCache atomic.Value // map[danger.TypeID]fieldPathsMap
+var globalFieldPathsCache atomic.Value // map[reflect.Type]fieldPathsMap

 func structFieldPath(v reflect.Value, name string) ([]int, bool) {
 	t := v.Type()

-	cache, _ := globalFieldPathsCache.Load().(map[danger.TypeID]fieldPathsMap)
-	fieldPaths, ok := cache[danger.MakeTypeID(t)]
+	cache, _ := globalFieldPathsCache.Load().(map[reflect.Type]fieldPathsMap)
+	fieldPaths, ok := cache[t]
 	if !ok {
 		fieldPaths = map[string][]int{}
@@ -1276,8 +1398,8 @@ func structFieldPath(v reflect.Value, name string) ([]int, bool) {
 		fieldPaths[strings.ToLower(name)] = path
 	})

-	newCache := make(map[danger.TypeID]fieldPathsMap, len(cache)+1)
-	newCache[danger.MakeTypeID(t)] = fieldPaths
+	newCache := make(map[reflect.Type]fieldPathsMap, len(cache)+1)
+	newCache[t] = fieldPaths
 	for k, v := range cache {
 		newCache[k] = v
 	}
@@ -1301,7 +1423,9 @@ func forEachField(t reflect.Type, path []int, do func(name string, path []int))
 			continue
 		}
-		fieldPath := append(path, i)
+		fieldPath := make([]int, 0, len(path)+1)
+		fieldPath = append(fieldPath, path...)
+		fieldPath = append(fieldPath, i)
 		fieldPath = fieldPath[:len(fieldPath):len(fieldPath)]
 		name := f.Tag.Get("toml")
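The `forEachField` change above replaces `append(path, i)` with an explicit copy plus a full slice expression. The reason: `append` on a shared prefix slice with spare capacity can reuse the same backing array for two sibling paths, so building one path silently overwrites another. A minimal sketch of the hazard and the fix (helper names are hypothetical):

```go
package main

import "fmt"

// buildPathUnsafe mimics the old code: appending to a shared prefix can
// make two sibling paths alias the same backing array.
func buildPathUnsafe(prefix []int, i int) []int {
	return append(prefix, i)
}

// buildPathSafe mimics the new code: copy the prefix into a fresh slice,
// then cap it with a full slice expression so later appends to the result
// cannot clobber anything either.
func buildPathSafe(prefix []int, i int) []int {
	p := make([]int, 0, len(prefix)+1)
	p = append(p, prefix...)
	p = append(p, i)
	return p[:len(p):len(p)]
}

func main() {
	prefix := make([]int, 1, 4) // spare capacity, like a reused path buffer

	a := buildPathUnsafe(prefix, 1)
	_ = buildPathUnsafe(prefix, 2) // reuses the same backing array...
	fmt.Println(a[1])              // ...so a's element was overwritten: prints 2

	c := buildPathSafe(prefix, 1)
	_ = buildPathSafe(prefix, 2)
	fmt.Println(c[1]) // prints 1: each path owns its storage
}
```

The trailing `fieldPath[:len(fieldPath):len(fieldPath)]` sets capacity equal to length, so any future `append` on a cached path is forced to allocate rather than write into shared memory.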


@@ -1,10 +1,8 @@
 package unstable

 import (
+	"errors"
 	"fmt"
-	"unsafe"
-
-	"github.com/pelletier/go-toml/v2/internal/danger"
 )
 // Iterator over a sequence of nodes.
@@ -19,30 +17,43 @@ import (
 //	// do something with n
 //	}
 type Iterator struct {
+	nodes   *[]Node
+	idx     int32
 	started bool
-	node    *Node
 }

 // Next moves the iterator forward and returns true if points to a
 // node, false otherwise.
 func (c *Iterator) Next() bool {
+	if c.nodes == nil {
+		return false
+	}
+	nodes := *c.nodes
 	if !c.started {
 		c.started = true
-	} else if c.node.Valid() {
-		c.node = c.node.Next()
+	} else {
+		idx := c.idx
+		if idx >= 0 && int(idx) < len(nodes) {
+			c.idx = nodes[idx].next
+		}
 	}
-	return c.node.Valid()
+	return c.idx >= 0 && int(c.idx) < len(nodes)
 }

 // IsLast returns true if the current node of the iterator is the last
 // one. Subsequent calls to Next() will return false.
 func (c *Iterator) IsLast() bool {
-	return c.node.next == 0
+	return c.nodes == nil || c.idx < 0 || (*c.nodes)[c.idx].next < 0
 }

 // Node returns a pointer to the node pointed at by the iterator.
 func (c *Iterator) Node() *Node {
-	return c.node
+	if c.nodes == nil || c.idx < 0 {
+		return nil
+	}
+	n := &(*c.nodes)[c.idx]
+	n.nodes = c.nodes
+	return n
 }
 // Node in a TOML expression AST.
@@ -65,11 +76,12 @@ type Node struct {
 	Raw  Range  // Raw bytes from the input.
 	Data []byte // Node value (either allocated or referencing the input).

-	// References to other nodes, as offsets in the backing array
-	// from this node. References can go backward, so those can be
-	// negative.
-	next  int // 0 if last element
-	child int // 0 if no child
+	// Absolute indices into the backing nodes slice. -1 means none.
+	next  int32
+	child int32
+
+	// Reference to the backing nodes slice for navigation.
+	nodes *[]Node
 }
// Range of bytes in the document. // Range of bytes in the document.
@@ -80,24 +92,24 @@ type Range struct {
// Next returns a pointer to the next node, or nil if there is no next node. // Next returns a pointer to the next node, or nil if there is no next node.
func (n *Node) Next() *Node { func (n *Node) Next() *Node {
if n.next == 0 { if n.next < 0 {
return nil return nil
} }
ptr := unsafe.Pointer(n) next := &(*n.nodes)[n.next]
size := unsafe.Sizeof(Node{}) next.nodes = n.nodes
return (*Node)(danger.Stride(ptr, size, n.next)) return next
} }
// Child returns a pointer to the first child node of this node. Other children // Child returns a pointer to the first child node of this node. Other children
// can be accessed calling Next on the first child. Returns an nil if this Node // can be accessed calling Next on the first child. Returns nil if this Node
// has no child. // has no child.
func (n *Node) Child() *Node { func (n *Node) Child() *Node {
if n.child == 0 { if n.child < 0 {
return nil return nil
} }
ptr := unsafe.Pointer(n) child := &(*n.nodes)[n.child]
size := unsafe.Sizeof(Node{}) child.nodes = n.nodes
return (*Node)(danger.Stride(ptr, size, n.child)) return child
} }
 // Valid returns true if the node's kind is set (not to Invalid).
@@ -111,13 +123,14 @@ func (n *Node) Valid() bool {
 func (n *Node) Key() Iterator {
 	switch n.Kind {
 	case KeyValue:
-		value := n.Child()
-		if !value.Valid() {
-			panic(fmt.Errorf("KeyValue should have at least two children"))
+		child := n.child
+		if child < 0 {
+			panic(errors.New("KeyValue should have at least two children"))
 		}
-		return Iterator{node: value.Next()}
+		valueNode := &(*n.nodes)[child]
+		return Iterator{nodes: n.nodes, idx: valueNode.next}
 	case Table, ArrayTable:
-		return Iterator{node: n.Child()}
+		return Iterator{nodes: n.nodes, idx: n.child}
 	default:
 		panic(fmt.Errorf("Key() is not supported on a %s", n.Kind))
 	}
@@ -132,5 +145,5 @@ func (n *Node) Value() *Node {
 // Children returns an iterator over a node's children.
 func (n *Node) Children() Iterator {
-	return Iterator{node: n.Child()}
+	return Iterator{nodes: n.nodes, idx: n.child}
 }


@@ -7,15 +7,6 @@ type root struct {
 	nodes []Node
 }

-// Iterator over the top level nodes.
-func (r *root) Iterator() Iterator {
-	it := Iterator{}
-	if len(r.nodes) > 0 {
-		it.node = &r.nodes[0]
-	}
-	return it
-}

 func (r *root) at(idx reference) *Node {
 	return &r.nodes[idx]
 }
@@ -33,12 +24,10 @@ type builder struct {
 	lastIdx int
 }

-func (b *builder) Tree() *root {
-	return &b.tree
-}

 func (b *builder) NodeAt(ref reference) *Node {
-	return b.tree.at(ref)
+	n := b.tree.at(ref)
+	n.nodes = &b.tree.nodes
+	return n
 }

 func (b *builder) Reset() {
@@ -48,24 +37,28 @@ func (b *builder) Reset() {

 func (b *builder) Push(n Node) reference {
 	b.lastIdx = len(b.tree.nodes)
+	n.next = -1
+	n.child = -1
 	b.tree.nodes = append(b.tree.nodes, n)
 	return reference(b.lastIdx)
 }

 func (b *builder) PushAndChain(n Node) reference {
 	newIdx := len(b.tree.nodes)
+	n.next = -1
+	n.child = -1
 	b.tree.nodes = append(b.tree.nodes, n)
 	if b.lastIdx >= 0 {
-		b.tree.nodes[b.lastIdx].next = newIdx - b.lastIdx
+		b.tree.nodes[b.lastIdx].next = int32(newIdx) //nolint:gosec // TOML ASTs are small
 	}
 	b.lastIdx = newIdx
 	return reference(b.lastIdx)
 }

 func (b *builder) AttachChild(parent reference, child reference) {
-	b.tree.nodes[parent].child = int(child) - int(parent)
+	b.tree.nodes[parent].child = int32(child) //nolint:gosec // TOML ASTs are small
 }

 func (b *builder) Chain(from reference, to reference) {
-	b.tree.nodes[from].next = int(to) - int(from)
+	b.tree.nodes[from].next = int32(to) //nolint:gosec // TOML ASTs are small
 }
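The builder changes above switch from node-relative offsets (with pointer arithmetic on the reader side) to absolute `int32` indices into the backing slice, with `-1` as the "no link" sentinel. A toy sketch of that pattern, with types simplified well below the real API:

```go
package main

import "fmt"

// node mirrors the pattern above: an absolute index into a backing slice,
// with -1 meaning "no next node".
type node struct {
	val  string
	next int32
}

type builder struct {
	nodes   []node
	lastIdx int
}

// pushAndChain appends a node and links the previous one to it by
// absolute index, like PushAndChain above.
func (b *builder) pushAndChain(v string) {
	newIdx := len(b.nodes)
	b.nodes = append(b.nodes, node{val: v, next: -1})
	if b.lastIdx >= 0 {
		b.nodes[b.lastIdx].next = int32(newIdx)
	}
	b.lastIdx = newIdx
}

// walk follows the chain from index 0, the way the index-based Iterator does.
func walk(nodes []node) []string {
	var out []string
	for i := int32(0); i >= 0 && int(i) < len(nodes); i = nodes[i].next {
		out = append(out, nodes[i].val)
	}
	return out
}

func main() {
	b := &builder{lastIdx: -1}
	b.pushAndChain("a")
	b.pushAndChain("b")
	b.pushAndChain("c")
	fmt.Println(walk(b.nodes)) // [a b c]
}
```

The design payoff: because links are indices rather than pointers or byte offsets, `append` is free to grow and relocate the backing array without invalidating any link, which is what lets the `unsafe`/`danger.Stride` navigation be removed.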


@@ -6,28 +6,40 @@ import "fmt"
 type Kind int

 const (
-	// Meta
+	// Invalid represents an invalid meta node.
 	Invalid Kind = iota
+	// Comment represents a comment meta node.
 	Comment
+	// Key represents a key meta node.
 	Key

-	// Top level structures
+	// Table represents a top-level table.
 	Table
+	// ArrayTable represents a top-level array table.
 	ArrayTable
+	// KeyValue represents a top-level key value.
 	KeyValue

-	// Containers values
+	// Array represents an array container value.
 	Array
+	// InlineTable represents an inline table container value.
 	InlineTable

-	// Values
+	// String represents a string value.
 	String
+	// Bool represents a boolean value.
 	Bool
+	// Float represents a floating point value.
 	Float
+	// Integer represents an integer value.
 	Integer
+	// LocalDate represents a local date value.
 	LocalDate
+	// LocalTime represents a local time value.
 	LocalTime
+	// LocalDateTime represents a local date/time value.
 	LocalDateTime
+	// DateTime represents a date/time value.
 	DateTime
 )


@@ -6,7 +6,6 @@ import (
 	"unicode"

 	"github.com/pelletier/go-toml/v2/internal/characters"
-	"github.com/pelletier/go-toml/v2/internal/danger"
 )

 // ParserError describes an error relative to the content of the document.
@@ -70,11 +69,26 @@ func (p *Parser) Data() []byte {
 // panics.
 func (p *Parser) Range(b []byte) Range {
 	return Range{
-		Offset: uint32(danger.SubsliceOffset(p.data, b)),
-		Length: uint32(len(b)),
+		Offset: uint32(p.subsliceOffset(b)), //nolint:gosec // TOML documents are small
+		Length: uint32(len(b)),              //nolint:gosec // TOML documents are small
 	}
 }

+// rangeOfToken computes the Range of a token given the remaining bytes after the token.
+// This is used when the token was extracted from the beginning of some position,
+// and 'rest' is what remains after the token.
+func (p *Parser) rangeOfToken(token, rest []byte) Range {
+	offset := len(p.data) - len(token) - len(rest)
+	return Range{Offset: uint32(offset), Length: uint32(len(token))} //nolint:gosec // TOML documents are small
+}
+
+// subsliceOffset returns the byte offset of subslice b within p.data.
+// b must be a suffix (tail) of p.data.
+func (p *Parser) subsliceOffset(b []byte) int {
+	// b is a suffix of p.data, so its offset is len(p.data) - len(b)
+	return len(p.data) - len(b)
+}

 // Raw returns the slice corresponding to the bytes in the given range.
 func (p *Parser) Raw(raw Range) []byte {
 	return p.data[raw.Offset : raw.Offset+raw.Length]
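The `rangeOfToken`/`subsliceOffset` helpers above replace pointer subtraction with pure length arithmetic: since the parser always consumes from the front of the document, any `token` followed by its remaining `rest` starts at `len(data) - len(token) - len(rest)`. A quick standalone check of that arithmetic (the free function here is a hypothetical stand-in for the method):

```go
package main

import "fmt"

// rangeOfToken mirrors the arithmetic above: with the document laid out as
// ...prefix + token + rest, the token's offset is
// len(data) - len(token) - len(rest).
func rangeOfToken(data, token, rest []byte) (offset, length int) {
	return len(data) - len(token) - len(rest), len(token)
}

func main() {
	data := []byte(`key = "value"`)
	token := data[6:13] // the quoted string `"value"`
	rest := data[13:]   // empty tail after the token
	off, n := rangeOfToken(data, token, rest)
	fmt.Println(off, n, string(data[off:off+n])) // 6 7 "value"
}
```

This only holds when `token` and `rest` are contiguous slices of the original buffer with `rest` immediately following `token`, which is exactly the invariant the parser maintains; in exchange, the `danger` package's unsafe pointer math can be dropped entirely.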
@@ -158,9 +172,17 @@ type Shape struct {
 	End   Position
 }

-func (p *Parser) position(b []byte) Position {
-	offset := danger.SubsliceOffset(p.data, b)
+// Shape returns the shape of the given range in the input. Will
+// panic if the range is not a subslice of the input.
+func (p *Parser) Shape(r Range) Shape {
+	return Shape{
+		Start: p.positionAt(int(r.Offset)),
+		End:   p.positionAt(int(r.Offset + r.Length)),
+	}
+}
+
+// positionAt returns the position at the given byte offset in the document.
+func (p *Parser) positionAt(offset int) Position {
 	lead := p.data[:offset]
 	return Position{
@@ -170,16 +192,6 @@ func (p *Parser) position(b []byte) Position {
 	}
 }

-// Shape returns the shape of the given range in the input. Will
-// panic if the range is not a subslice of the input.
-func (p *Parser) Shape(r Range) Shape {
-	raw := p.Raw(r)
-	return Shape{
-		Start: p.position(raw),
-		End:   p.position(raw[r.Length:]),
-	}
-}

 func (p *Parser) parseNewline(b []byte) ([]byte, error) {
 	if b[0] == '\n' {
 		return b[1:], nil
@@ -199,7 +211,7 @@ func (p *Parser) parseComment(b []byte) (reference, []byte, error) {
 	if p.KeepComments && err == nil {
 		ref = p.builder.Push(Node{
 			Kind: Comment,
-			Raw:  p.Range(data),
+			Raw:  p.rangeOfToken(data, rest),
 			Data: data,
 		})
 	}
@@ -316,6 +328,9 @@ func (p *Parser) parseStdTable(b []byte) (reference, []byte, error) {
 func (p *Parser) parseKeyval(b []byte) (reference, []byte, error) {
 	// keyval = key keyval-sep val

+	// Track the start position for Raw range
+	startB := b
+
 	ref := p.builder.Push(Node{
 		Kind: KeyValue,
 	})
@@ -330,7 +345,7 @@ func (p *Parser) parseKeyval(b []byte) (reference, []byte, error) {
 	b = p.parseWhitespace(b)

 	if len(b) == 0 {
-		return invalidReference, nil, NewParserError(b, "expected = after a key, but the document ends there")
+		return invalidReference, nil, NewParserError(startB[:len(startB)-len(b)], "expected = after a key, but the document ends there")
 	}

 	b, err = expect('=', b)
@@ -348,6 +363,11 @@ func (p *Parser) parseKeyval(b []byte) (reference, []byte, error) {
 	p.builder.Chain(valRef, key)
 	p.builder.AttachChild(ref, valRef)

+	// Set Raw to span the entire key-value expression.
+	// Access the node directly in the slice to avoid the write barrier
+	// that NodeAt's nodes-pointer setup would trigger.
+	p.builder.tree.nodes[ref].Raw = p.rangeOfToken(startB[:len(startB)-len(b)], b)
+
 	return ref, b, err
 }
@@ -376,7 +396,7 @@ func (p *Parser) parseVal(b []byte) (reference, []byte, error) {
 	if err == nil {
 		ref = p.builder.Push(Node{
 			Kind: String,
-			Raw:  p.Range(raw),
+			Raw:  p.rangeOfToken(raw, b),
 			Data: v,
 		})
 	}
@@ -394,7 +414,7 @@ func (p *Parser) parseVal(b []byte) (reference, []byte, error) {
 	if err == nil {
 		ref = p.builder.Push(Node{
 			Kind: String,
-			Raw:  p.Range(raw),
+			Raw:  p.rangeOfToken(raw, b),
 			Data: v,
 		})
 	}
@@ -456,7 +476,7 @@ func (p *Parser) parseInlineTable(b []byte) (reference, []byte, error) {
 	// inline-table-keyvals = keyval [ inline-table-sep inline-table-keyvals ]
 	parent := p.builder.Push(Node{
 		Kind: InlineTable,
-		Raw:  p.Range(b[:1]),
+		Raw:  p.rangeOfToken(b[:1], b[1:]),
 	})
 	first := true
@@ -542,7 +562,7 @@ func (p *Parser) parseValArray(b []byte) (reference, []byte, error) {
 	var err error
 	for len(b) > 0 {
-		cref := invalidReference
+		var cref reference
 		cref, b, err = p.parseOptionalWhitespaceCommentNewline(b)
 		if err != nil {
 			return parent, nil, err
@@ -611,12 +631,13 @@ func (p *Parser) parseOptionalWhitespaceCommentNewline(b []byte) (reference, []b
 	latestCommentRef := invalidReference

 	addComment := func(ref reference) {
-		if rootCommentRef == invalidReference {
+		switch {
+		case rootCommentRef == invalidReference:
 			rootCommentRef = ref
-		} else if latestCommentRef == invalidReference {
+		case latestCommentRef == invalidReference:
 			p.builder.AttachChild(rootCommentRef, ref)
 			latestCommentRef = ref
-		} else {
+		default:
 			p.builder.Chain(latestCommentRef, ref)
 			latestCommentRef = ref
 		}
@@ -704,11 +725,11 @@ func (p *Parser) parseMultilineBasicString(b []byte) ([]byte, []byte, []byte, er
 	if !escaped {
 		str := token[startIdx:endIdx]
-		verr := characters.Utf8TomlValidAlreadyEscaped(str)
-		if verr.Zero() {
+		highlight := characters.Utf8TomlValidAlreadyEscaped(str)
+		if len(highlight) == 0 {
 			return token, str, rest, nil
 		}
-		return nil, nil, nil, NewParserError(str[verr.Index:verr.Index+verr.Size], "invalid UTF-8")
+		return nil, nil, nil, NewParserError(highlight, "invalid UTF-8")
 	}

 	var builder bytes.Buffer
@@ -744,7 +765,7 @@ func (p *Parser) parseMultilineBasicString(b []byte) ([]byte, []byte, []byte, er
 	i += j
 	for ; i < len(token)-3; i++ {
 		c := token[i]
-		if !(c == '\n' || c == '\r' || c == ' ' || c == '\t') {
+		if c != '\n' && c != '\r' && c != ' ' && c != '\t' {
 			i--
 			break
 		}
@@ -820,7 +841,7 @@ func (p *Parser) parseKey(b []byte) (reference, []byte, error) {
 	ref := p.builder.Push(Node{
 		Kind: Key,
-		Raw:  p.Range(raw),
+		Raw:  p.rangeOfToken(raw, b),
 		Data: key,
 	})
@@ -836,7 +857,7 @@ func (p *Parser) parseKey(b []byte) (reference, []byte, error) {
 	p.builder.PushAndChain(Node{
 		Kind: Key,
-		Raw:  p.Range(raw),
+		Raw:  p.rangeOfToken(raw, b),
 		Data: key,
 	})
 } else {
@@ -897,11 +918,11 @@ func (p *Parser) parseBasicString(b []byte) ([]byte, []byte, []byte, error) {
 	// validate the string and return a direct reference to the buffer.
 	if !escaped {
 		str := token[startIdx:endIdx]
-		verr := characters.Utf8TomlValidAlreadyEscaped(str)
-		if verr.Zero() {
+		highlight := characters.Utf8TomlValidAlreadyEscaped(str)
+		if len(highlight) == 0 {
 			return token, str, rest, nil
 		}
-		return nil, nil, nil, NewParserError(str[verr.Index:verr.Index+verr.Size], "invalid UTF-8")
+		return nil, nil, nil, NewParserError(highlight, "invalid UTF-8")
 	}

 	i := startIdx
@@ -972,7 +993,7 @@ func hexToRune(b []byte, length int) (rune, error) {
 	var r uint32
 	for i, c := range b {
-		d := uint32(0)
+		var d uint32
 		switch {
 		case '0' <= c && c <= '9':
 			d = uint32(c - '0')
@@ -1013,7 +1034,7 @@ func (p *Parser) parseIntOrFloatOrDateTime(b []byte) (reference, []byte, error)
 		return p.builder.Push(Node{
 			Kind: Float,
 			Data: b[:3],
-			Raw:  p.Range(b[:3]),
+			Raw:  p.rangeOfToken(b[:3], b[3:]),
 		}), b[3:], nil
 	case 'n':
 		if !scanFollowsNan(b) {
@@ -1023,7 +1044,7 @@ func (p *Parser) parseIntOrFloatOrDateTime(b []byte) (reference, []byte, error)
 		return p.builder.Push(Node{
 			Kind: Float,
 			Data: b[:3],
-			Raw:  p.Range(b[:3]),
+			Raw:  p.rangeOfToken(b[:3], b[3:]),
 		}), b[3:], nil
 	case '+', '-':
 		return p.scanIntOrFloat(b)
@@ -1076,7 +1097,7 @@ byteLoop:
 		}
 	case c == 'T' || c == 't' || c == ':' || c == '.':
 		hasTime = true
-	case c == '+' || c == '-' || c == 'Z' || c == 'z':
+	case c == '+' || c == 'Z' || c == 'z':
 		hasTz = true
 	case c == ' ':
 		if !seenSpace && i+1 < len(b) && isDigit(b[i+1]) {
@@ -1148,7 +1169,7 @@ func (p *Parser) scanIntOrFloat(b []byte) (reference, []byte, error) {
 		return p.builder.Push(Node{
 			Kind: Integer,
 			Data: b[:i],
-			Raw:  p.Range(b[:i]),
+			Raw:  p.rangeOfToken(b[:i], b[i:]),
 		}), b[i:], nil
 	}
@@ -1172,7 +1193,7 @@ func (p *Parser) scanIntOrFloat(b []byte) (reference, []byte, error) {
 		return p.builder.Push(Node{
 			Kind: Float,
 			Data: b[:i+3],
-			Raw:  p.Range(b[:i+3]),
+			Raw:  p.rangeOfToken(b[:i+3], b[i+3:]),
 		}), b[i+3:], nil
 	}
@@ -1184,7 +1205,7 @@ func (p *Parser) scanIntOrFloat(b []byte) (reference, []byte, error) {
 		return p.builder.Push(Node{
 			Kind: Float,
 			Data: b[:i+3],
-			Raw:  p.Range(b[:i+3]),
+			Raw:  p.rangeOfToken(b[:i+3], b[i+3:]),
 		}), b[i+3:], nil
 	}
@@ -1207,7 +1228,7 @@ func (p *Parser) scanIntOrFloat(b []byte) (reference, []byte, error) {
 		return p.builder.Push(Node{
 			Kind: kind,
 			Data: b[:i],
-			Raw:  p.Range(b[:i]),
+			Raw:  p.rangeOfToken(b[:i], b[i:]),
 		}), b[i:], nil
 	}

View File

@@ -1,7 +1,32 @@
 package unstable

-// The Unmarshaler interface may be implemented by types to customize their
-// behavior when being unmarshaled from a TOML document.
+// Unmarshaler is implemented by types that can unmarshal a TOML
+// description of themselves. The input is a valid TOML document
+// containing the relevant portion of the parsed document.
+//
+// For tables (including split tables defined in multiple places),
+// the data contains the raw key-value bytes from the original document
+// with adjusted table headers to be relative to the unmarshaling target.
 type Unmarshaler interface {
-	UnmarshalTOML(value *Node) error
+	UnmarshalTOML(data []byte) error
+}
+
+// RawMessage is a raw encoded TOML value. It implements Unmarshaler
+// and can be used to delay TOML decoding or capture raw content.
+//
+// Example usage:
+//
+//	type Config struct {
+//		Plugin RawMessage `toml:"plugin"`
+//	}
+//
+//	var cfg Config
+//	toml.NewDecoder(r).EnableUnmarshalerInterface().Decode(&cfg)
+//	// cfg.Plugin now contains the raw TOML bytes for [plugin]
+type RawMessage []byte
+
+// UnmarshalTOML implements Unmarshaler.
+func (m *RawMessage) UnmarshalTOML(data []byte) error {
+	*m = append((*m)[0:0], data...)
+	return nil
 }
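The `RawMessage` implementation above copies the incoming bytes with `append((*m)[0:0], data...)` rather than storing the slice directly, so the captured value cannot alias a parser buffer that is later reused or mutated. A standalone sketch of just that copy semantics, with the type redefined locally rather than imported:

```go
package main

import "fmt"

// rawMessage mirrors RawMessage above: it stores a private copy of the
// raw TOML bytes it is handed, reusing its own backing array when possible.
type rawMessage []byte

func (m *rawMessage) UnmarshalTOML(data []byte) error {
	*m = append((*m)[0:0], data...)
	return nil
}

func main() {
	buf := []byte(`name = "demo"`)
	var m rawMessage
	_ = m.UnmarshalTOML(buf)

	buf[0] = 'X' // mutate the source buffer after unmarshaling
	fmt.Println(string(m)) // the stored copy is unaffected: name = "demo"
}
```

The `(*m)[0:0]` trick keeps any previously allocated capacity of the receiver, so repeated unmarshals into the same `rawMessage` avoid reallocating when the new payload fits.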

vendor/github.com/spf13/cobra/.gitignore generated vendored Normal file

@@ -0,0 +1,39 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
# Vim files https://github.com/github/gitignore/blob/master/Global/Vim.gitignore
# swap
[._]*.s[a-w][a-z]
[._]s[a-w][a-z]
# session
Session.vim
# temporary
.netrwhist
*~
# auto-generated tag files
tags
*.exe
cobra.test
bin
.idea/
*.iml

vendor/github.com/spf13/cobra/.golangci.yml generated vendored Normal file

@@ -0,0 +1,66 @@
# Copyright 2013-2023 The Cobra Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
version: "2"
run:
timeout: 5m
formatters:
enable:
- gofmt
- goimports
linters:
default: none
enable:
#- bodyclose
#- depguard
#- dogsled
#- dupl
- errcheck
#- exhaustive
#- funlen
#- gochecknoinits
- goconst
- gocritic
#- gocyclo
#- goprintffuncname
- gosec
- govet
- ineffassign
#- lll
- misspell
#- mnd
#- nakedret
#- noctx
- nolintlint
#- rowserrcheck
- staticcheck
- unconvert
#- unparam
- unused
#- whitespace
exclusions:
presets:
- common-false-positives
- legacy
- std-error-handling
settings:
govet:
# Disable buildtag check to allow dual build tag syntax (both //go:build and // +build).
# This is necessary for Go 1.15 compatibility since //go:build was introduced in Go 1.17.
# This can be removed once Cobra requires Go 1.17 or higher.
disable:
- buildtag

vendor/github.com/spf13/cobra/.mailmap generated vendored Normal file

@@ -0,0 +1,3 @@
Steve Francia <steve.francia@gmail.com>
Bjørn Erik Pedersen <bjorn.erik.pedersen@gmail.com>
Fabiano Franz <ffranz@redhat.com> <contact@fabianofranz.com>

vendor/github.com/spf13/cobra/CONDUCT.md generated vendored Normal file

@@ -0,0 +1,37 @@
## Cobra User Contract
### Versioning
Cobra will follow a steady release cadence. Non breaking changes will be released as minor versions quarterly. Patch bug releases are at the discretion of the maintainers. Users can expect security patch fixes to be released within relatively short order of a CVE becoming known. For more information on security patch fixes see the CVE section below. Releases will follow [Semantic Versioning](https://semver.org/). Users tracking the Master branch should expect unpredictable breaking changes as the project continues to move forward. For stability, it is highly recommended to use a release.
### Backward Compatibility
We will maintain two major releases in a moving window. The N-1 release will only receive bug fixes and security updates and will be dropped once N+1 is released.
### Deprecation
Deprecation of Go versions or dependent packages will only occur in major releases. To reduce the chance of this taking users by surprise, any large deprecation will be preceded by an announcement in the [#cobra slack channel](https://gophers.slack.com/archives/CD3LP1199) and an Issue on GitHub.
### CVE
Maintainers will make every effort to release security patches in the case of a medium to high severity CVE directly impacting the library. The speed in which these patches reach a release is up to the discretion of the maintainers. A low severity CVE may be a lower priority than a high severity one.
### Communication
Cobra maintainers will use GitHub issues and the [#cobra slack channel](https://gophers.slack.com/archives/CD3LP1199) as the primary means of communication with the community. This is to foster open communication with all users and contributors.
### Breaking Changes
Breaking changes are generally allowed in the master branch, as this is the branch used to develop the next release of Cobra.
There may be times, however, when master is closed for breaking changes. This is likely to happen as we near the release of a new version.
Breaking changes are not allowed in release branches, as these represent minor versions that have already been released. These versions have consumers who expect the APIs, behaviors, etc., to remain stable during the lifetime of the patch stream for the minor release.
Examples of breaking changes include:
- Removing or renaming an exported constant, variable, type, or function.
- Updating the version of critical libraries such as `spf13/pflag`, `spf13/viper`, etc.
- Some version updates may be acceptable for picking up bug fixes, but maintainers must exercise caution when reviewing.
There may, at times, need to be exceptions where breaking changes are allowed in release branches. These are at the discretion of the project's maintainers, and must be carefully considered before merging.
### CI Testing
Maintainers will ensure the Cobra test suite utilizes the currently supported versions of Go.
### Disclaimer
Changes to this document and the contents therein are at the discretion of the maintainers.
None of the contents of this document are legally binding in any way to the maintainers or the users.

vendor/github.com/spf13/cobra/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,50 @@
# Contributing to Cobra
Thank you so much for contributing to Cobra. We appreciate your time and help.
Here are some guidelines to help you get started.
## Code of Conduct
Be kind and respectful to the members of the community. Take time to educate
others who are seeking help. Harassment of any kind will not be tolerated.
## Questions
If you have questions regarding Cobra, feel free to ask in the community
[#cobra Slack channel][cobra-slack].
## Filing a bug or feature
1. Before filing an issue, please check the existing issues to see if a
similar one was already opened. If there is one already opened, feel free
to comment on it.
1. If you believe you've found a bug, please provide detailed steps of
reproduction, the version of Cobra and anything else you believe will be
useful to help troubleshoot it (e.g. OS environment, environment variables,
etc...). Also state the current behavior vs. the expected behavior.
1. If you'd like to see a feature or an enhancement please open an issue with
a clear title and description of what the feature is and why it would be
beneficial to the project and its users.
## Submitting changes
1. CLA: Upon submitting a Pull Request (PR), contributors will be prompted to
sign a CLA. Please sign the CLA :slightly_smiling_face:
1. Tests: If you are submitting code, please ensure you have adequate tests
for the feature. Tests can be run via `go test ./...` or `make test`.
1. Since this is a Go project, ensure the new code is properly formatted for
   code consistency. Run `make all`.
### Quick steps to contribute
1. Fork the project.
1. Download your fork to your PC (`git clone https://github.com/your_username/cobra && cd cobra`)
1. Create your feature branch (`git checkout -b my-new-feature`)
1. Make changes and run tests (`make test`)
1. Add them to staging (`git add .`)
1. Commit your changes (`git commit -m 'Add some feature'`)
1. Push to the branch (`git push origin my-new-feature`)
1. Create new pull request
<!-- Links -->
[cobra-slack]: https://gophers.slack.com/archives/CD3LP1199

vendor/github.com/spf13/cobra/LICENSE.txt generated vendored Normal file

@@ -0,0 +1,174 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

vendor/github.com/spf13/cobra/MAINTAINERS generated vendored Normal file

@@ -0,0 +1,13 @@
maintainers:
- spf13
- johnSchnake
- jpmcb
- marckhouzam
inactive:
- anthonyfok
- bep
- bogem
- broady
- eparis
- jharshman
- wfernandes

vendor/github.com/spf13/cobra/Makefile generated vendored Normal file

@@ -0,0 +1,35 @@
BIN="./bin"
SRC=$(shell find . -name "*.go")

ifeq (, $(shell which golangci-lint))
$(warning "could not find golangci-lint in $(PATH), run: curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh")
endif

.PHONY: fmt lint test install_deps clean

default: all

all: fmt test

fmt:
	$(info ******************** checking formatting ********************)
	@test -z $(shell gofmt -l $(SRC)) || (gofmt -d $(SRC); exit 1)

lint:
	$(info ******************** running lint tools ********************)
	golangci-lint run -v

test: install_deps
	$(info ******************** running tests ********************)
	go test -v ./...

richtest: install_deps
	$(info ******************** running tests with kyoh86/richgo ********************)
	richgo test -v ./...

install_deps:
	$(info ******************** downloading dependencies ********************)
	go get -v ./...

clean:
	rm -rf $(BIN)

vendor/github.com/spf13/cobra/README.md generated vendored Normal file

@@ -0,0 +1,133 @@
<div align="center">
<a href="https://cobra.dev">
<img width="512" height="535" alt="cobra-logo" src="https://github.com/user-attachments/assets/c8bf9aad-b5ae-41d3-8899-d83baec10af8" />
</a>
</div>
Cobra is a library for creating powerful modern CLI applications.
<a href="https://cobra.dev">Visit Cobra.dev for extensive documentation</a>
Cobra is used in many Go projects such as [Kubernetes](https://kubernetes.io/),
[Hugo](https://gohugo.io), and [GitHub CLI](https://github.com/cli/cli) to
name a few. [This list](site/content/projects_using_cobra.md) contains a more extensive list of projects using Cobra.
[![](https://img.shields.io/github/actions/workflow/status/spf13/cobra/test.yml?branch=main&longCache=true&label=Test&logo=github%20actions&logoColor=fff)](https://github.com/spf13/cobra/actions?query=workflow%3ATest)
[![Go Reference](https://pkg.go.dev/badge/github.com/spf13/cobra.svg)](https://pkg.go.dev/github.com/spf13/cobra)
[![Go Report Card](https://goreportcard.com/badge/github.com/spf13/cobra)](https://goreportcard.com/report/github.com/spf13/cobra)
[![Slack](https://img.shields.io/badge/Slack-cobra-brightgreen)](https://gophers.slack.com/archives/CD3LP1199)
<hr>
<div align="center" markdown="1">
<sup>Supported by:</sup>
<br>
<br>
<a href="https://www.warp.dev/cobra">
<img alt="Warp sponsorship" width="400" src="https://github.com/user-attachments/assets/ab8dd143-b0fd-4904-bdc5-dd7ecac94eae">
</a>
### [Warp, the AI terminal for devs](https://www.warp.dev/cobra)
[Try Cobra in Warp today](https://www.warp.dev/cobra)<br>
</div>
<hr>
# Overview
Cobra is a library providing a simple interface to create powerful modern CLI
interfaces similar to git & go tools.
Cobra provides:
* Easy subcommand-based CLIs: `app server`, `app fetch`, etc.
* Fully POSIX-compliant flags (including short & long versions)
* Nested subcommands
* Global, local and cascading flags
* Intelligent suggestions (`app srver`... did you mean `app server`?)
* Automatic help generation for commands and flags
* Grouping help for subcommands
* Automatic help flag recognition of `-h`, `--help`, etc.
* Automatically generated shell autocomplete for your application (bash, zsh, fish, powershell)
* Automatically generated man pages for your application
* Command aliases so you can change things without breaking them
* The flexibility to define your own help, usage, etc.
* Optional seamless integration with [viper](https://github.com/spf13/viper) for 12-factor apps
# Concepts
Cobra is built on a structure of commands, arguments & flags.
**Commands** represent actions, **Args** are things and **Flags** are modifiers for those actions.
The best applications read like sentences when used, and as a result, users
intuitively know how to interact with them.
The pattern to follow is
`APPNAME VERB NOUN --ADJECTIVE`
or
`APPNAME COMMAND ARG --FLAG`.
A few good real-world examples may better illustrate this point.
In the following example, 'server' is a command, and 'port' is a flag:

    hugo server --port=1313

In this command we are telling Git to clone the URL bare:

    git clone URL --bare
## Commands
Command is the central point of the application. Each interaction that
the application supports will be contained in a Command. A command can
have children commands and optionally run an action.
In the example above, 'server' is the command.
[More about cobra.Command](https://pkg.go.dev/github.com/spf13/cobra#Command)
## Flags
A flag is a way to modify the behavior of a command. Cobra supports
fully POSIX-compliant flags as well as the Go [flag package](https://golang.org/pkg/flag/).
A Cobra command can define flags that persist through to children commands
and flags that are only available to that command.
In the example above, 'port' is the flag.
Flag functionality is provided by the [pflag
library](https://github.com/spf13/pflag), a fork of the flag standard library
which maintains the same interface while adding POSIX compliance.
# Installing
Using Cobra is easy. First, use `go get` to install the latest version
of the library.
```
go get -u github.com/spf13/cobra@latest
```
Next, include Cobra in your application:
```go
import "github.com/spf13/cobra"
```
# Usage
`cobra-cli` is a command line program to generate cobra applications and command files.
It will bootstrap your application scaffolding to rapidly
develop a Cobra-based application. It is the easiest way to incorporate Cobra into your application.
It can be installed by running:
```
go install github.com/spf13/cobra-cli@latest
```
For complete details on using the Cobra-CLI generator, please read [The Cobra Generator README](https://github.com/spf13/cobra-cli/blob/main/README.md)
For complete details on using the Cobra library, please read [The Cobra User Guide](site/content/user_guide.md).
# License
Cobra is released under the Apache 2.0 license. See [LICENSE.txt](LICENSE.txt)

vendor/github.com/spf13/cobra/SECURITY.md generated vendored Normal file

@@ -0,0 +1,105 @@
# Security Policy
## Reporting a Vulnerability
The `cobra` maintainers take security issues seriously and
we appreciate your efforts to _**responsibly**_ disclose your findings.
We will make every effort to swiftly respond and address concerns.
To report a security vulnerability:
1. **DO NOT** create a public GitHub issue for the vulnerability!
2. **DO NOT** create a public GitHub Pull Request with a fix for the vulnerability!
3. Send an email to `cobra-security@googlegroups.com`.
4. Include the following details in your report:
- Description of the vulnerability
- Steps to reproduce
- Potential impact of the vulnerability (to your downstream project, to the Go ecosystem, etc.)
- Any potential mitigations you've already identified
5. Allow up to 7 days for an initial response.
You should receive an acknowledgment of your report and an estimated timeline for a fix.
6. (Optional) If you have a fix and would like to contribute your patch, please work
directly with the maintainers via `cobra-security@googlegroups.com` to
coordinate pushing the patch to GitHub, cutting a new release, and disclosing the change.
## Response Process
When a security vulnerability report is received, the `cobra` maintainers will:
1. Confirm receipt of the vulnerability report within 7 days.
2. Assess the report to determine if it constitutes a security vulnerability.
3. If confirmed, assign the vulnerability a severity level and create a timeline for addressing it.
4. Develop and test a fix.
5. Patch the vulnerability and make a new GitHub release: the maintainers will coordinate disclosure with the reporter.
6. Create a new GitHub Security Advisory to inform the broader Go ecosystem.
## Disclosure Policy
The `cobra` maintainers follow a coordinated disclosure process:
1. Security vulnerabilities will be addressed as quickly as possible.
2. A CVE (Common Vulnerabilities and Exposures) identifier will be requested for significant vulnerabilities
that are within `cobra` itself.
3. Once a fix is ready, the maintainers will:
- Release a new version containing the fix.
- Update the security advisory with details about the vulnerability.
- Credit the reporter (unless they wish to remain anonymous).
- Credit the fixer (unless they wish to remain anonymous, this may be the same as the reporter).
- Announce the vulnerability through appropriate channels
(GitHub Security Advisory, mailing lists, GitHub Releases, etc.)
## Supported Versions
Security fixes will typically only be released for the most recent major release.
## Upstream Security Issues
`cobra` generally will not accept vulnerability reports that originate in upstream
dependencies. I.e., if there is a problem in Go code that `cobra` depends on,
it is best to engage that project's maintainers and owners.
This security policy primarily pertains only to `cobra` itself but if you believe you've
identified a problem that originates in an upstream dependency and is being widely
distributed by `cobra`, please follow the disclosure procedure above: the `cobra`
maintainers will work with you to determine the severity and ecosystem impact.
## Security Updates and CVEs
Information about known security vulnerabilities and CVEs affecting `cobra` will
be published as GitHub Security Advisories at
https://github.com/spf13/cobra/security/advisories.
All users are encouraged to watch the repository and upgrade promptly when
security releases are published.
## `cobra` Security Best Practices for Users
When using `cobra` in your CLIs, the `cobra` maintainers recommend the following:
1. Always use the latest version of `cobra`.
2. [Use Go modules](https://go.dev/blog/using-go-modules) for dependency management.
3. Always use the latest possible version of Go.
## Security Best Practices for Contributors
When contributing to `cobra`:
1. Be mindful of security implications when adding new features or modifying existing ones.
2. Be aware of `cobra`'s extremely large reach: it is used in nearly every Go CLI
(like Kubernetes, Docker, Prometheus, etc. etc.)
3. Write tests that explicitly cover edge cases and potential issues.
4. If you discover a security issue while working on `cobra`, please report it
following the process above rather than opening a public pull request or issue that
addresses the vulnerability.
5. Take personal sec-ops seriously and secure your GitHub account: use [two-factor authentication](https://docs.github.com/en/authentication/securing-your-account-with-two-factor-authentication-2fa),
[sign your commits with a GPG or SSH key](https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification),
etc.
## Acknowledgments
The `cobra` maintainers would like to thank all security researchers and
community members who help keep cobra, its users, and the entire Go ecosystem secure through responsible disclosures!!
---
*This security policy is inspired by the [Open Web Application Security Project (OWASP)](https://owasp.org/) guidelines and security best practices.*

vendor/github.com/spf13/cobra/active_help.go generated vendored Normal file

@@ -0,0 +1,60 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cobra

import (
	"fmt"
	"os"
)

const (
	activeHelpMarker = "_activeHelp_ "
	// The below values should not be changed: programs will be using them explicitly
	// in their user documentation, and users will be using them explicitly.
	activeHelpEnvVarSuffix  = "ACTIVE_HELP"
	activeHelpGlobalEnvVar  = configEnvVarGlobalPrefix + "_" + activeHelpEnvVarSuffix
	activeHelpGlobalDisable = "0"
)

// AppendActiveHelp adds the specified string to the specified array to be used as ActiveHelp.
// Such strings will be processed by the completion script and will be shown as ActiveHelp
// to the user.
// The array parameter should be the array that will contain the completions.
// This function can be called multiple times before and/or after completions are added to
// the array. Each time this function is called with the same array, the new
// ActiveHelp line will be shown below the previous ones when completion is triggered.
func AppendActiveHelp(compArray []Completion, activeHelpStr string) []Completion {
	return append(compArray, fmt.Sprintf("%s%s", activeHelpMarker, activeHelpStr))
}

// GetActiveHelpConfig returns the value of the ActiveHelp environment variable
// <PROGRAM>_ACTIVE_HELP where <PROGRAM> is the name of the root command in upper
// case, with all non-ASCII-alphanumeric characters replaced by `_`.
// It will always return "0" if the global environment variable COBRA_ACTIVE_HELP
// is set to "0".
func GetActiveHelpConfig(cmd *Command) string {
	activeHelpCfg := os.Getenv(activeHelpGlobalEnvVar)
	if activeHelpCfg != activeHelpGlobalDisable {
		activeHelpCfg = os.Getenv(activeHelpEnvVar(cmd.Root().Name()))
	}
	return activeHelpCfg
}

// activeHelpEnvVar returns the name of the program-specific ActiveHelp environment
// variable. It has the format <PROGRAM>_ACTIVE_HELP where <PROGRAM> is the name of the
// root command in upper case, with all non-ASCII-alphanumeric characters replaced by `_`.
func activeHelpEnvVar(name string) string {
	return configEnvVar(name, activeHelpEnvVarSuffix)
}

vendor/github.com/spf13/cobra/args.go generated vendored Normal file

@@ -0,0 +1,131 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cobra

import (
	"fmt"
	"strings"
)

type PositionalArgs func(cmd *Command, args []string) error

// legacyArgs validation has the following behaviour:
// - root commands with no subcommands can take arbitrary arguments
// - root commands with subcommands will do subcommand validity checking
// - subcommands will always accept arbitrary arguments
func legacyArgs(cmd *Command, args []string) error {
	// no subcommand, always take args
	if !cmd.HasSubCommands() {
		return nil
	}

	// root command with subcommands, do subcommand checking.
	if !cmd.HasParent() && len(args) > 0 {
		return fmt.Errorf("unknown command %q for %q%s", args[0], cmd.CommandPath(), cmd.findSuggestions(args[0]))
	}
	return nil
}

// NoArgs returns an error if any args are included.
func NoArgs(cmd *Command, args []string) error {
	if len(args) > 0 {
		return fmt.Errorf("unknown command %q for %q", args[0], cmd.CommandPath())
	}
	return nil
}

// OnlyValidArgs returns an error if there are any positional args that are not in
// the `ValidArgs` field of `Command`
func OnlyValidArgs(cmd *Command, args []string) error {
	if len(cmd.ValidArgs) > 0 {
		// Remove any description that may be included in ValidArgs.
		// A description is following a tab character.
		validArgs := make([]string, 0, len(cmd.ValidArgs))
		for _, v := range cmd.ValidArgs {
			validArgs = append(validArgs, strings.SplitN(v, "\t", 2)[0])
		}
		for _, v := range args {
			if !stringInSlice(v, validArgs) {
				return fmt.Errorf("invalid argument %q for %q%s", v, cmd.CommandPath(), cmd.findSuggestions(args[0]))
			}
		}
	}
	return nil
}

// ArbitraryArgs never returns an error.
func ArbitraryArgs(cmd *Command, args []string) error {
	return nil
}

// MinimumNArgs returns an error if there is not at least N args.
func MinimumNArgs(n int) PositionalArgs {
	return func(cmd *Command, args []string) error {
		if len(args) < n {
			return fmt.Errorf("requires at least %d arg(s), only received %d", n, len(args))
		}
		return nil
	}
}

// MaximumNArgs returns an error if there are more than N args.
func MaximumNArgs(n int) PositionalArgs {
	return func(cmd *Command, args []string) error {
		if len(args) > n {
			return fmt.Errorf("accepts at most %d arg(s), received %d", n, len(args))
		}
		return nil
	}
}

// ExactArgs returns an error if there are not exactly n args.
func ExactArgs(n int) PositionalArgs {
	return func(cmd *Command, args []string) error {
		if len(args) != n {
			return fmt.Errorf("accepts %d arg(s), received %d", n, len(args))
		}
		return nil
	}
}

// RangeArgs returns an error if the number of args is not within the expected range.
func RangeArgs(min int, max int) PositionalArgs {
	return func(cmd *Command, args []string) error {
		if len(args) < min || len(args) > max {
			return fmt.Errorf("accepts between %d and %d arg(s), received %d", min, max, len(args))
		}
		return nil
	}
}

// MatchAll allows combining several PositionalArgs to work in concert.
func MatchAll(pargs ...PositionalArgs) PositionalArgs {
	return func(cmd *Command, args []string) error {
		for _, parg := range pargs {
			if err := parg(cmd, args); err != nil {
				return err
			}
		}
		return nil
	}
}

// ExactValidArgs returns an error if there are not exactly N positional args OR
// there are any positional args that are not in the `ValidArgs` field of `Command`
//
// Deprecated: use MatchAll(ExactArgs(n), OnlyValidArgs) instead
func ExactValidArgs(n int) PositionalArgs {
	return MatchAll(ExactArgs(n), OnlyValidArgs)
}
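The validators above compose via `MatchAll`: each runs in order and the first error wins. As an illustration of that combinator pattern in isolation, here is a stdlib-only sketch with a simplified signature (it drops the `*Command` parameter the real `PositionalArgs` takes; the lowercase names are this sketch's, not cobra's):

```go
package main

import "fmt"

// argsValidator is a simplified stand-in for cobra's PositionalArgs.
type argsValidator func(args []string) error

// exactArgs mirrors cobra.ExactArgs.
func exactArgs(n int) argsValidator {
	return func(args []string) error {
		if len(args) != n {
			return fmt.Errorf("accepts %d arg(s), received %d", n, len(args))
		}
		return nil
	}
}

// minimumNArgs mirrors cobra.MinimumNArgs.
func minimumNArgs(n int) argsValidator {
	return func(args []string) error {
		if len(args) < n {
			return fmt.Errorf("requires at least %d arg(s), only received %d", n, len(args))
		}
		return nil
	}
}

// matchAll mirrors cobra.MatchAll: run validators in order, return the
// first error encountered.
func matchAll(validators ...argsValidator) argsValidator {
	return func(args []string) error {
		for _, v := range validators {
			if err := v(args); err != nil {
				return err
			}
		}
		return nil
	}
}

func main() {
	v := matchAll(minimumNArgs(1), exactArgs(2))
	fmt.Println(v([]string{"a", "b"})) // <nil>
	fmt.Println(v([]string{"a"}))      // accepts 2 arg(s), received 1
}
```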

vendor/github.com/spf13/cobra/bash_completions.go generated vendored Normal file

@@ -0,0 +1,709 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cobra
import (
"bytes"
"fmt"
"io"
"os"
"sort"
"strings"
"github.com/spf13/pflag"
)
// Annotations for Bash completion.
const (
BashCompFilenameExt = "cobra_annotation_bash_completion_filename_extensions"
BashCompCustom = "cobra_annotation_bash_completion_custom"
BashCompOneRequiredFlag = "cobra_annotation_bash_completion_one_required_flag"
BashCompSubdirsInDir = "cobra_annotation_bash_completion_subdirs_in_dir"
)
func writePreamble(buf io.StringWriter, name string) {
WriteStringAndCheck(buf, fmt.Sprintf("# bash completion for %-36s -*- shell-script -*-\n", name))
WriteStringAndCheck(buf, fmt.Sprintf(`
__%[1]s_debug()
{
if [[ -n ${BASH_COMP_DEBUG_FILE:-} ]]; then
echo "$*" >> "${BASH_COMP_DEBUG_FILE}"
fi
}
# Homebrew on Macs has version 1.3 of bash-completion, which doesn't include
# _init_completion. This is a very minimal version of that function.
__%[1]s_init_completion()
{
COMPREPLY=()
_get_comp_words_by_ref "$@" cur prev words cword
}
__%[1]s_index_of_word()
{
local w word=$1
shift
index=0
for w in "$@"; do
[[ $w = "$word" ]] && return
index=$((index+1))
done
index=-1
}
__%[1]s_contains_word()
{
local w word=$1; shift
for w in "$@"; do
[[ $w = "$word" ]] && return
done
return 1
}
__%[1]s_handle_go_custom_completion()
{
__%[1]s_debug "${FUNCNAME[0]}: cur is ${cur}, words[*] is ${words[*]}, #words[@] is ${#words[@]}"
local shellCompDirectiveError=%[3]d
local shellCompDirectiveNoSpace=%[4]d
local shellCompDirectiveNoFileComp=%[5]d
local shellCompDirectiveFilterFileExt=%[6]d
local shellCompDirectiveFilterDirs=%[7]d
local out requestComp lastParam lastChar comp directive args
# Prepare the command to request completions for the program.
# Calling ${words[0]} instead of directly %[1]s allows handling aliases
args=("${words[@]:1}")
# Disable ActiveHelp which is not supported for bash completion v1
requestComp="%[8]s=0 ${words[0]} %[2]s ${args[*]}"
lastParam=${words[$((${#words[@]}-1))]}
lastChar=${lastParam:$((${#lastParam}-1)):1}
__%[1]s_debug "${FUNCNAME[0]}: lastParam ${lastParam}, lastChar ${lastChar}"
if [ -z "${cur}" ] && [ "${lastChar}" != "=" ]; then
# If the last parameter is complete (there is a space following it),
# we add an extra empty parameter so we can indicate this to the go method.
__%[1]s_debug "${FUNCNAME[0]}: Adding extra empty parameter"
requestComp="${requestComp} \"\""
fi
__%[1]s_debug "${FUNCNAME[0]}: calling ${requestComp}"
# Use eval to handle any environment variables and such
out=$(eval "${requestComp}" 2>/dev/null)
# Extract the directive integer at the very end of the output following a colon (:)
directive=${out##*:}
# Remove the directive
out=${out%%:*}
if [ "${directive}" = "${out}" ]; then
# There is no directive specified
directive=0
fi
__%[1]s_debug "${FUNCNAME[0]}: the completion directive is: ${directive}"
__%[1]s_debug "${FUNCNAME[0]}: the completions are: ${out}"
if [ $((directive & shellCompDirectiveError)) -ne 0 ]; then
# Error code. No completion.
__%[1]s_debug "${FUNCNAME[0]}: received error from custom completion go code"
return
else
if [ $((directive & shellCompDirectiveNoSpace)) -ne 0 ]; then
if [[ $(type -t compopt) = "builtin" ]]; then
__%[1]s_debug "${FUNCNAME[0]}: activating no space"
compopt -o nospace
fi
fi
if [ $((directive & shellCompDirectiveNoFileComp)) -ne 0 ]; then
if [[ $(type -t compopt) = "builtin" ]]; then
__%[1]s_debug "${FUNCNAME[0]}: activating no file completion"
compopt +o default
fi
fi
fi
if [ $((directive & shellCompDirectiveFilterFileExt)) -ne 0 ]; then
# File extension filtering
local fullFilter filter filteringCmd
# Do not use quotes around the $out variable or else newline
# characters will be kept.
for filter in ${out}; do
fullFilter+="$filter|"
done
filteringCmd="_filedir $fullFilter"
__%[1]s_debug "File filtering command: $filteringCmd"
$filteringCmd
elif [ $((directive & shellCompDirectiveFilterDirs)) -ne 0 ]; then
# File completion for directories only
local subdir
# Use printf to strip any trailing newline
subdir=$(printf "%%s" "${out}")
if [ -n "$subdir" ]; then
__%[1]s_debug "Listing directories in $subdir"
__%[1]s_handle_subdirs_in_dir_flag "$subdir"
else
__%[1]s_debug "Listing directories in ."
_filedir -d
fi
else
while IFS='' read -r comp; do
COMPREPLY+=("$comp")
done < <(compgen -W "${out}" -- "$cur")
fi
}
__%[1]s_handle_reply()
{
__%[1]s_debug "${FUNCNAME[0]}"
local comp
case $cur in
-*)
if [[ $(type -t compopt) = "builtin" ]]; then
compopt -o nospace
fi
local allflags
if [ ${#must_have_one_flag[@]} -ne 0 ]; then
allflags=("${must_have_one_flag[@]}")
else
allflags=("${flags[*]} ${two_word_flags[*]}")
fi
while IFS='' read -r comp; do
COMPREPLY+=("$comp")
done < <(compgen -W "${allflags[*]}" -- "$cur")
if [[ $(type -t compopt) = "builtin" ]]; then
[[ "${COMPREPLY[0]}" == *= ]] || compopt +o nospace
fi
# complete after --flag=abc
if [[ $cur == *=* ]]; then
if [[ $(type -t compopt) = "builtin" ]]; then
compopt +o nospace
fi
local index flag
flag="${cur%%=*}"
__%[1]s_index_of_word "${flag}" "${flags_with_completion[@]}"
COMPREPLY=()
if [[ ${index} -ge 0 ]]; then
PREFIX=""
cur="${cur#*=}"
${flags_completion[${index}]}
if [ -n "${ZSH_VERSION:-}" ]; then
# zsh completion needs --flag= prefix
eval "COMPREPLY=( \"\${COMPREPLY[@]/#/${flag}=}\" )"
fi
fi
fi
if [[ -z "${flag_parsing_disabled}" ]]; then
# If flag parsing is enabled, we have completed the flags and can return.
# If flag parsing is disabled, we may not know all (or any) of the flags, so we fallthrough
# to possibly call handle_go_custom_completion.
return 0;
fi
;;
esac
# check if we are handling a flag with a special completion handler
local index
__%[1]s_index_of_word "${prev}" "${flags_with_completion[@]}"
if [[ ${index} -ge 0 ]]; then
${flags_completion[${index}]}
return
fi
# we are parsing a flag and don't have a special handler, no completion
if [[ ${cur} != "${words[cword]}" ]]; then
return
fi
local completions
completions=("${commands[@]}")
if [[ ${#must_have_one_noun[@]} -ne 0 ]]; then
completions+=("${must_have_one_noun[@]}")
elif [[ -n "${has_completion_function}" ]]; then
# if a go completion function is provided, defer to that function
__%[1]s_handle_go_custom_completion
fi
if [[ ${#must_have_one_flag[@]} -ne 0 ]]; then
completions+=("${must_have_one_flag[@]}")
fi
while IFS='' read -r comp; do
COMPREPLY+=("$comp")
done < <(compgen -W "${completions[*]}" -- "$cur")
if [[ ${#COMPREPLY[@]} -eq 0 && ${#noun_aliases[@]} -gt 0 && ${#must_have_one_noun[@]} -ne 0 ]]; then
while IFS='' read -r comp; do
COMPREPLY+=("$comp")
done < <(compgen -W "${noun_aliases[*]}" -- "$cur")
fi
if [[ ${#COMPREPLY[@]} -eq 0 ]]; then
if declare -F __%[1]s_custom_func >/dev/null; then
# try command name qualified custom func
__%[1]s_custom_func
else
# otherwise fall back to unqualified for compatibility
declare -F __custom_func >/dev/null && __custom_func
fi
fi
# available in bash-completion >= 2, not always present on macOS
if declare -F __ltrim_colon_completions >/dev/null; then
__ltrim_colon_completions "$cur"
fi
# If there is only 1 completion and it is a flag with an = it will be completed
# but we don't want a space after the =
if [[ "${#COMPREPLY[@]}" -eq "1" ]] && [[ $(type -t compopt) = "builtin" ]] && [[ "${COMPREPLY[0]}" == --*= ]]; then
compopt -o nospace
fi
}
# The arguments should be in the form "ext1|ext2|extn"
__%[1]s_handle_filename_extension_flag()
{
local ext="$1"
_filedir "@(${ext})"
}
__%[1]s_handle_subdirs_in_dir_flag()
{
local dir="$1"
pushd "${dir}" >/dev/null 2>&1 && _filedir -d && popd >/dev/null 2>&1 || return
}
__%[1]s_handle_flag()
{
__%[1]s_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}"
# if a command required a flag, and we found it, unset must_have_one_flag()
local flagname=${words[c]}
local flagvalue=""
# if the word contained an =
if [[ ${words[c]} == *"="* ]]; then
flagvalue=${flagname#*=} # take in as flagvalue after the =
flagname=${flagname%%=*} # strip everything after the =
flagname="${flagname}=" # but put the = back
fi
__%[1]s_debug "${FUNCNAME[0]}: looking for ${flagname}"
if __%[1]s_contains_word "${flagname}" "${must_have_one_flag[@]}"; then
must_have_one_flag=()
fi
# if you set a flag which only applies to this command, don't show subcommands
if __%[1]s_contains_word "${flagname}" "${local_nonpersistent_flags[@]}"; then
commands=()
fi
# keep flag value with flagname as flaghash
# flaghash variable is an associative array which is only supported in bash > 3.
if [[ -z "${BASH_VERSION:-}" || "${BASH_VERSINFO[0]:-}" -gt 3 ]]; then
if [ -n "${flagvalue}" ] ; then
flaghash[${flagname}]=${flagvalue}
elif [ -n "${words[ $((c+1)) ]}" ] ; then
flaghash[${flagname}]=${words[ $((c+1)) ]}
else
flaghash[${flagname}]="true" # pad "true" for bool flag
fi
fi
# skip the argument to a two word flag
if [[ ${words[c]} != *"="* ]] && __%[1]s_contains_word "${words[c]}" "${two_word_flags[@]}"; then
__%[1]s_debug "${FUNCNAME[0]}: found a flag ${words[c]}, skip the next argument"
c=$((c+1))
# if we are looking for a flags value, don't show commands
if [[ $c -eq $cword ]]; then
commands=()
fi
fi
c=$((c+1))
}
__%[1]s_handle_noun()
{
__%[1]s_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}"
if __%[1]s_contains_word "${words[c]}" "${must_have_one_noun[@]}"; then
must_have_one_noun=()
elif __%[1]s_contains_word "${words[c]}" "${noun_aliases[@]}"; then
must_have_one_noun=()
fi
nouns+=("${words[c]}")
c=$((c+1))
}
__%[1]s_handle_command()
{
__%[1]s_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}"
local next_command
if [[ -n ${last_command} ]]; then
next_command="_${last_command}_${words[c]//:/__}"
else
if [[ $c -eq 0 ]]; then
next_command="_%[1]s_root_command"
else
next_command="_${words[c]//:/__}"
fi
fi
c=$((c+1))
__%[1]s_debug "${FUNCNAME[0]}: looking for ${next_command}"
declare -F "$next_command" >/dev/null && $next_command
}
__%[1]s_handle_word()
{
if [[ $c -ge $cword ]]; then
__%[1]s_handle_reply
return
fi
__%[1]s_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}"
if [[ "${words[c]}" == -* ]]; then
__%[1]s_handle_flag
elif __%[1]s_contains_word "${words[c]}" "${commands[@]}"; then
__%[1]s_handle_command
elif [[ $c -eq 0 ]]; then
__%[1]s_handle_command
elif __%[1]s_contains_word "${words[c]}" "${command_aliases[@]}"; then
# aliashash variable is an associative array which is only supported in bash > 3.
if [[ -z "${BASH_VERSION:-}" || "${BASH_VERSINFO[0]:-}" -gt 3 ]]; then
words[c]=${aliashash[${words[c]}]}
__%[1]s_handle_command
else
__%[1]s_handle_noun
fi
else
__%[1]s_handle_noun
fi
__%[1]s_handle_word
}
`, name, ShellCompNoDescRequestCmd,
ShellCompDirectiveError, ShellCompDirectiveNoSpace, ShellCompDirectiveNoFileComp,
ShellCompDirectiveFilterFileExt, ShellCompDirectiveFilterDirs, activeHelpEnvVar(name)))
}
func writePostscript(buf io.StringWriter, name string) {
name = strings.ReplaceAll(name, ":", "__")
WriteStringAndCheck(buf, fmt.Sprintf("__start_%s()\n", name))
WriteStringAndCheck(buf, fmt.Sprintf(`{
local cur prev words cword split
declare -A flaghash 2>/dev/null || :
declare -A aliashash 2>/dev/null || :
if declare -F _init_completion >/dev/null 2>&1; then
_init_completion -s || return
else
__%[1]s_init_completion -n "=" || return
fi
local c=0
local flag_parsing_disabled=
local flags=()
local two_word_flags=()
local local_nonpersistent_flags=()
local flags_with_completion=()
local flags_completion=()
local commands=("%[1]s")
local command_aliases=()
local must_have_one_flag=()
local must_have_one_noun=()
local has_completion_function=""
local last_command=""
local nouns=()
local noun_aliases=()
__%[1]s_handle_word
}
`, name))
WriteStringAndCheck(buf, fmt.Sprintf(`if [[ $(type -t compopt) = "builtin" ]]; then
complete -o default -F __start_%s %s
else
complete -o default -o nospace -F __start_%s %s
fi
`, name, name, name, name))
WriteStringAndCheck(buf, "# ex: ts=4 sw=4 et filetype=sh\n")
}
func writeCommands(buf io.StringWriter, cmd *Command) {
WriteStringAndCheck(buf, " commands=()\n")
for _, c := range cmd.Commands() {
if !c.IsAvailableCommand() && c != cmd.helpCommand {
continue
}
WriteStringAndCheck(buf, fmt.Sprintf(" commands+=(%q)\n", c.Name()))
writeCmdAliases(buf, c)
}
WriteStringAndCheck(buf, "\n")
}
func writeFlagHandler(buf io.StringWriter, name string, annotations map[string][]string, cmd *Command) {
for key, value := range annotations {
switch key {
case BashCompFilenameExt:
WriteStringAndCheck(buf, fmt.Sprintf(" flags_with_completion+=(%q)\n", name))
var ext string
if len(value) > 0 {
ext = fmt.Sprintf("__%s_handle_filename_extension_flag ", cmd.Root().Name()) + strings.Join(value, "|")
} else {
ext = "_filedir"
}
WriteStringAndCheck(buf, fmt.Sprintf(" flags_completion+=(%q)\n", ext))
case BashCompCustom:
WriteStringAndCheck(buf, fmt.Sprintf(" flags_with_completion+=(%q)\n", name))
if len(value) > 0 {
handlers := strings.Join(value, "; ")
WriteStringAndCheck(buf, fmt.Sprintf(" flags_completion+=(%q)\n", handlers))
} else {
WriteStringAndCheck(buf, " flags_completion+=(:)\n")
}
case BashCompSubdirsInDir:
WriteStringAndCheck(buf, fmt.Sprintf(" flags_with_completion+=(%q)\n", name))
var ext string
if len(value) == 1 {
ext = fmt.Sprintf("__%s_handle_subdirs_in_dir_flag ", cmd.Root().Name()) + value[0]
} else {
ext = "_filedir -d"
}
WriteStringAndCheck(buf, fmt.Sprintf(" flags_completion+=(%q)\n", ext))
}
}
}
const cbn = "\")\n"
func writeShortFlag(buf io.StringWriter, flag *pflag.Flag, cmd *Command) {
name := flag.Shorthand
format := " "
if len(flag.NoOptDefVal) == 0 {
format += "two_word_"
}
format += "flags+=(\"-%s" + cbn
WriteStringAndCheck(buf, fmt.Sprintf(format, name))
writeFlagHandler(buf, "-"+name, flag.Annotations, cmd)
}
func writeFlag(buf io.StringWriter, flag *pflag.Flag, cmd *Command) {
name := flag.Name
format := " flags+=(\"--%s"
if len(flag.NoOptDefVal) == 0 {
format += "="
}
format += cbn
WriteStringAndCheck(buf, fmt.Sprintf(format, name))
if len(flag.NoOptDefVal) == 0 {
format = " two_word_flags+=(\"--%s" + cbn
WriteStringAndCheck(buf, fmt.Sprintf(format, name))
}
writeFlagHandler(buf, "--"+name, flag.Annotations, cmd)
}
func writeLocalNonPersistentFlag(buf io.StringWriter, flag *pflag.Flag) {
name := flag.Name
format := " local_nonpersistent_flags+=(\"--%[1]s" + cbn
if len(flag.NoOptDefVal) == 0 {
format += " local_nonpersistent_flags+=(\"--%[1]s=" + cbn
}
WriteStringAndCheck(buf, fmt.Sprintf(format, name))
if len(flag.Shorthand) > 0 {
WriteStringAndCheck(buf, fmt.Sprintf(" local_nonpersistent_flags+=(\"-%s\")\n", flag.Shorthand))
}
}
// prepareCustomAnnotationsForFlags sets up annotations for Go custom completions on registered flags
func prepareCustomAnnotationsForFlags(cmd *Command) {
flagCompletionMutex.RLock()
defer flagCompletionMutex.RUnlock()
for flag := range flagCompletionFunctions {
// Make sure the completion script calls the __*_go_custom_completion function for
// every registered flag. We need to do this here (and not when the flag was registered
// for completion) so that we can know the root command name for the prefix
// of __<prefix>_go_custom_completion
if flag.Annotations == nil {
flag.Annotations = map[string][]string{}
}
flag.Annotations[BashCompCustom] = []string{fmt.Sprintf("__%[1]s_handle_go_custom_completion", cmd.Root().Name())}
}
}
func writeFlags(buf io.StringWriter, cmd *Command) {
prepareCustomAnnotationsForFlags(cmd)
WriteStringAndCheck(buf, ` flags=()
two_word_flags=()
local_nonpersistent_flags=()
flags_with_completion=()
flags_completion=()
`)
if cmd.DisableFlagParsing {
WriteStringAndCheck(buf, " flag_parsing_disabled=1\n")
}
localNonPersistentFlags := cmd.LocalNonPersistentFlags()
cmd.NonInheritedFlags().VisitAll(func(flag *pflag.Flag) {
if nonCompletableFlag(flag) {
return
}
writeFlag(buf, flag, cmd)
if len(flag.Shorthand) > 0 {
writeShortFlag(buf, flag, cmd)
}
// localNonPersistentFlags are used to stop the completion of subcommands when one is set
// if TraverseChildren is true we should allow to complete subcommands
if localNonPersistentFlags.Lookup(flag.Name) != nil && !cmd.Root().TraverseChildren {
writeLocalNonPersistentFlag(buf, flag)
}
})
cmd.InheritedFlags().VisitAll(func(flag *pflag.Flag) {
if nonCompletableFlag(flag) {
return
}
writeFlag(buf, flag, cmd)
if len(flag.Shorthand) > 0 {
writeShortFlag(buf, flag, cmd)
}
})
WriteStringAndCheck(buf, "\n")
}
func writeRequiredFlag(buf io.StringWriter, cmd *Command) {
WriteStringAndCheck(buf, " must_have_one_flag=()\n")
flags := cmd.NonInheritedFlags()
flags.VisitAll(func(flag *pflag.Flag) {
if nonCompletableFlag(flag) {
return
}
if _, ok := flag.Annotations[BashCompOneRequiredFlag]; ok {
format := " must_have_one_flag+=(\"--%s"
if flag.Value.Type() != "bool" {
format += "="
}
format += cbn
WriteStringAndCheck(buf, fmt.Sprintf(format, flag.Name))
if len(flag.Shorthand) > 0 {
WriteStringAndCheck(buf, fmt.Sprintf(" must_have_one_flag+=(\"-%s"+cbn, flag.Shorthand))
}
}
})
}
func writeRequiredNouns(buf io.StringWriter, cmd *Command) {
WriteStringAndCheck(buf, " must_have_one_noun=()\n")
sort.Strings(cmd.ValidArgs)
for _, value := range cmd.ValidArgs {
// Remove any description that may be included following a tab character.
// Descriptions are not supported by bash completion.
value = strings.SplitN(value, "\t", 2)[0]
WriteStringAndCheck(buf, fmt.Sprintf(" must_have_one_noun+=(%q)\n", value))
}
if cmd.ValidArgsFunction != nil {
WriteStringAndCheck(buf, " has_completion_function=1\n")
}
}
func writeCmdAliases(buf io.StringWriter, cmd *Command) {
if len(cmd.Aliases) == 0 {
return
}
sort.Strings(cmd.Aliases)
WriteStringAndCheck(buf, fmt.Sprint(` if [[ -z "${BASH_VERSION:-}" || "${BASH_VERSINFO[0]:-}" -gt 3 ]]; then`, "\n"))
for _, value := range cmd.Aliases {
WriteStringAndCheck(buf, fmt.Sprintf(" command_aliases+=(%q)\n", value))
WriteStringAndCheck(buf, fmt.Sprintf(" aliashash[%q]=%q\n", value, cmd.Name()))
}
WriteStringAndCheck(buf, ` fi`)
WriteStringAndCheck(buf, "\n")
}
func writeArgAliases(buf io.StringWriter, cmd *Command) {
WriteStringAndCheck(buf, " noun_aliases=()\n")
sort.Strings(cmd.ArgAliases)
for _, value := range cmd.ArgAliases {
WriteStringAndCheck(buf, fmt.Sprintf(" noun_aliases+=(%q)\n", value))
}
}
func gen(buf io.StringWriter, cmd *Command) {
for _, c := range cmd.Commands() {
if !c.IsAvailableCommand() && c != cmd.helpCommand {
continue
}
gen(buf, c)
}
commandName := cmd.CommandPath()
commandName = strings.ReplaceAll(commandName, " ", "_")
commandName = strings.ReplaceAll(commandName, ":", "__")
if cmd.Root() == cmd {
WriteStringAndCheck(buf, fmt.Sprintf("_%s_root_command()\n{\n", commandName))
} else {
WriteStringAndCheck(buf, fmt.Sprintf("_%s()\n{\n", commandName))
}
WriteStringAndCheck(buf, fmt.Sprintf(" last_command=%q\n", commandName))
WriteStringAndCheck(buf, "\n")
WriteStringAndCheck(buf, " command_aliases=()\n")
WriteStringAndCheck(buf, "\n")
writeCommands(buf, cmd)
writeFlags(buf, cmd)
writeRequiredFlag(buf, cmd)
writeRequiredNouns(buf, cmd)
writeArgAliases(buf, cmd)
WriteStringAndCheck(buf, "}\n\n")
}
// GenBashCompletion generates a bash completion script and writes it to the passed writer.
func (c *Command) GenBashCompletion(w io.Writer) error {
buf := new(bytes.Buffer)
writePreamble(buf, c.Name())
if len(c.BashCompletionFunction) > 0 {
buf.WriteString(c.BashCompletionFunction + "\n")
}
gen(buf, c)
writePostscript(buf, c.Name())
_, err := buf.WriteTo(w)
return err
}
func nonCompletableFlag(flag *pflag.Flag) bool {
return flag.Hidden || len(flag.Deprecated) > 0
}
// GenBashCompletionFile generates bash completion file.
func (c *Command) GenBashCompletionFile(filename string) error {
outFile, err := os.Create(filename)
if err != nil {
return err
}
defer outFile.Close()
return c.GenBashCompletion(outFile)
}

vendor/github.com/spf13/cobra/bash_completionsV2.go generated vendored Normal file

@@ -0,0 +1,484 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cobra
import (
"bytes"
"fmt"
"io"
"os"
)
func (c *Command) genBashCompletion(w io.Writer, includeDesc bool) error {
buf := new(bytes.Buffer)
genBashComp(buf, c.Name(), includeDesc)
_, err := buf.WriteTo(w)
return err
}
func genBashComp(buf io.StringWriter, name string, includeDesc bool) {
compCmd := ShellCompRequestCmd
if !includeDesc {
compCmd = ShellCompNoDescRequestCmd
}
WriteStringAndCheck(buf, fmt.Sprintf(`# bash completion V2 for %-36[1]s -*- shell-script -*-
__%[1]s_debug()
{
if [[ -n ${BASH_COMP_DEBUG_FILE-} ]]; then
echo "$*" >> "${BASH_COMP_DEBUG_FILE}"
fi
}
# Macs have bash3 for which the bash-completion package doesn't include
# _init_completion. This is a minimal version of that function.
__%[1]s_init_completion()
{
COMPREPLY=()
_get_comp_words_by_ref "$@" cur prev words cword
}
# This function calls the %[1]s program to obtain the completion
# results and the directive. It fills the 'out' and 'directive' vars.
__%[1]s_get_completion_results() {
local requestComp lastParam lastChar args
# Prepare the command to request completions for the program.
# Calling ${words[0]} instead of directly %[1]s allows handling aliases
args=("${words[@]:1}")
requestComp="${words[0]} %[2]s ${args[*]}"
lastParam=${words[$((${#words[@]}-1))]}
lastChar=${lastParam:$((${#lastParam}-1)):1}
__%[1]s_debug "lastParam ${lastParam}, lastChar ${lastChar}"
if [[ -z ${cur} && ${lastChar} != = ]]; then
# If the last parameter is complete (there is a space following it),
# we add an extra empty parameter so we can indicate this to the go method.
__%[1]s_debug "Adding extra empty parameter"
requestComp="${requestComp} ''"
fi
# When completing a flag with an = (e.g., %[1]s -n=<TAB>)
# bash focuses on the part after the =, so we need to remove
# the flag part from $cur
if [[ ${cur} == -*=* ]]; then
cur="${cur#*=}"
fi
__%[1]s_debug "Calling ${requestComp}"
# Use eval to handle any environment variables and such
out=$(eval "${requestComp}" 2>/dev/null)
# Extract the directive integer at the very end of the output following a colon (:)
directive=${out##*:}
# Remove the directive
out=${out%%:*}
if [[ ${directive} == "${out}" ]]; then
# There is no directive specified
directive=0
fi
__%[1]s_debug "The completion directive is: ${directive}"
__%[1]s_debug "The completions are: ${out}"
}
__%[1]s_process_completion_results() {
local shellCompDirectiveError=%[3]d
local shellCompDirectiveNoSpace=%[4]d
local shellCompDirectiveNoFileComp=%[5]d
local shellCompDirectiveFilterFileExt=%[6]d
local shellCompDirectiveFilterDirs=%[7]d
local shellCompDirectiveKeepOrder=%[8]d
if (((directive & shellCompDirectiveError) != 0)); then
# Error code. No completion.
__%[1]s_debug "Received error from custom completion go code"
return
else
if (((directive & shellCompDirectiveNoSpace) != 0)); then
if [[ $(type -t compopt) == builtin ]]; then
__%[1]s_debug "Activating no space"
compopt -o nospace
else
__%[1]s_debug "No space directive not supported in this version of bash"
fi
fi
if (((directive & shellCompDirectiveKeepOrder) != 0)); then
if [[ $(type -t compopt) == builtin ]]; then
# nosort isn't supported for bash versions older than 4.4
if [[ ${BASH_VERSINFO[0]} -lt 4 || ( ${BASH_VERSINFO[0]} -eq 4 && ${BASH_VERSINFO[1]} -lt 4 ) ]]; then
__%[1]s_debug "No sort directive not supported in this version of bash"
else
__%[1]s_debug "Activating keep order"
compopt -o nosort
fi
else
__%[1]s_debug "No sort directive not supported in this version of bash"
fi
fi
if (((directive & shellCompDirectiveNoFileComp) != 0)); then
if [[ $(type -t compopt) == builtin ]]; then
__%[1]s_debug "Activating no file completion"
compopt +o default
else
__%[1]s_debug "No file completion directive not supported in this version of bash"
fi
fi
fi
# Separate activeHelp from normal completions
local completions=()
local activeHelp=()
__%[1]s_extract_activeHelp
if (((directive & shellCompDirectiveFilterFileExt) != 0)); then
# File extension filtering
local fullFilter="" filter filteringCmd
# Do not use quotes around the $completions variable or else newline
# characters will be kept.
for filter in ${completions[*]}; do
fullFilter+="$filter|"
done
filteringCmd="_filedir $fullFilter"
__%[1]s_debug "File filtering command: $filteringCmd"
$filteringCmd
elif (((directive & shellCompDirectiveFilterDirs) != 0)); then
# File completion for directories only
local subdir
subdir=${completions[0]}
if [[ -n $subdir ]]; then
__%[1]s_debug "Listing directories in $subdir"
pushd "$subdir" >/dev/null 2>&1 && _filedir -d && popd >/dev/null 2>&1 || return
else
__%[1]s_debug "Listing directories in ."
_filedir -d
fi
else
__%[1]s_handle_completion_types
fi
__%[1]s_handle_special_char "$cur" :
__%[1]s_handle_special_char "$cur" =
# Print the activeHelp statements before we finish
__%[1]s_handle_activeHelp
}
__%[1]s_handle_activeHelp() {
# Print the activeHelp statements
if ((${#activeHelp[*]} != 0)); then
if [ -z $COMP_TYPE ]; then
# Bash v3 does not set the COMP_TYPE variable.
printf "\n";
printf "%%s\n" "${activeHelp[@]}"
printf "\n"
__%[1]s_reprint_commandLine
return
fi
# Only print ActiveHelp on the second TAB press
if [ $COMP_TYPE -eq 63 ]; then
printf "\n"
printf "%%s\n" "${activeHelp[@]}"
if ((${#COMPREPLY[*]} == 0)); then
# When there are no completion choices from the program, file completion
# may kick in if the program has not disabled it. In that case we want
# to know whether any files match what the user typed, so we know whether
# completions will be presented and how to handle ActiveHelp.
# To find out, we trigger the file completion ourselves;
# the call to _filedir will fill COMPREPLY if files match.
if (((directive & shellCompDirectiveNoFileComp) == 0)); then
__%[1]s_debug "Listing files"
_filedir
fi
fi
if ((${#COMPREPLY[*]} != 0)); then
# If there are completion choices to be shown, print a delimiter.
# Re-printing the command-line will automatically be done
# by the shell when it prints the completion choices.
printf -- "--"
else
# When there are no completion choices at all, we need
# to re-print the command-line since the shell will
# not be doing it itself.
__%[1]s_reprint_commandLine
fi
elif [ $COMP_TYPE -eq 37 ] || [ $COMP_TYPE -eq 42 ]; then
# For completion type: menu-complete/menu-complete-backward and insert-completions
# the completions are immediately inserted into the command-line, so we first
# print the activeHelp message and reprint the command-line since the shell won't.
printf "\n"
printf "%%s\n" "${activeHelp[@]}"
__%[1]s_reprint_commandLine
fi
fi
}
__%[1]s_reprint_commandLine() {
# The prompt format is only available from bash 4.4.
# We test if it is available before using it.
if (x=${PS1@P}) 2> /dev/null; then
printf "%%s" "${PS1@P}${COMP_LINE[@]}"
else
# Can't print the prompt. Just print the
# text the user had typed, it is workable enough.
printf "%%s" "${COMP_LINE[@]}"
fi
}
# Separate activeHelp lines from real completions.
# Fills the $activeHelp and $completions arrays.
__%[1]s_extract_activeHelp() {
local activeHelpMarker="%[9]s"
local endIndex=${#activeHelpMarker}
while IFS='' read -r comp; do
[[ -z $comp ]] && continue
if [[ ${comp:0:endIndex} == $activeHelpMarker ]]; then
comp=${comp:endIndex}
__%[1]s_debug "ActiveHelp found: $comp"
if [[ -n $comp ]]; then
activeHelp+=("$comp")
fi
else
# Not an activeHelp line but a normal completion
completions+=("$comp")
fi
done <<<"${out}"
}
__%[1]s_handle_completion_types() {
__%[1]s_debug "__%[1]s_handle_completion_types: COMP_TYPE is $COMP_TYPE"
case $COMP_TYPE in
37|42)
# Type: menu-complete/menu-complete-backward and insert-completions
# If the user requested inserting one completion at a time, or all
# completions at once on the command-line we must remove the descriptions.
# https://github.com/spf13/cobra/issues/1508
# If there are no completions, we don't need to do anything
(( ${#completions[@]} == 0 )) && return 0
local tab=$'\t'
# Strip any description and escape the completion to handle special characters
IFS=$'\n' read -ra completions -d '' < <(printf "%%q\n" "${completions[@]%%%%$tab*}")
# Only consider the completions that match
IFS=$'\n' read -ra COMPREPLY -d '' < <(IFS=$'\n'; compgen -W "${completions[*]}" -- "${cur}")
# compgen loses the escaping so we need to escape all completions again since they will
# all be inserted on the command-line.
IFS=$'\n' read -ra COMPREPLY -d '' < <(printf "%%q\n" "${COMPREPLY[@]}")
;;
*)
# Type: complete (normal completion)
__%[1]s_handle_standard_completion_case
;;
esac
}
__%[1]s_handle_standard_completion_case() {
local tab=$'\t'
# If there are no completions, we don't need to do anything
(( ${#completions[@]} == 0 )) && return 0
# Short circuit to optimize if we don't have descriptions
if [[ "${completions[*]}" != *$tab* ]]; then
# First, escape the completions to handle special characters
IFS=$'\n' read -ra completions -d '' < <(printf "%%q\n" "${completions[@]}")
# Only consider the completions that match what the user typed
IFS=$'\n' read -ra COMPREPLY -d '' < <(IFS=$'\n'; compgen -W "${completions[*]}" -- "${cur}")
# compgen loses the escaping so, if there is only a single completion, we need to
# escape it again because it will be inserted on the command-line. If there are multiple
# completions, we don't want to escape them because they will be printed in a list
# and we don't want to show escape characters in that list.
if (( ${#COMPREPLY[@]} == 1 )); then
COMPREPLY[0]=$(printf "%%q" "${COMPREPLY[0]}")
fi
return 0
fi
local longest=0
local compline
# Look for the longest completion so that we can format things nicely
while IFS='' read -r compline; do
[[ -z $compline ]] && continue
# Before checking if the completion matches what the user typed,
# we need to strip any description and escape the completion to handle special
# characters because those escape characters are part of what the user typed.
# Don't call "printf" in a sub-shell because it will be much slower
# since we are in a loop.
printf -v comp "%%q" "${compline%%%%$tab*}" &>/dev/null || comp=$(printf "%%q" "${compline%%%%$tab*}")
# Only consider the completions that match
[[ $comp == "$cur"* ]] || continue
# The completion matches. Add it to the list of full completions including
# its description. We don't escape the completion because it may get printed
# in a list if there is more than one, and we don't want to show escape characters
# in that list.
COMPREPLY+=("$compline")
# Strip any description before checking the length, and again, don't escape
# the completion because this length is only used when printing the completions
# in a list and we don't want to show escape characters in that list.
comp=${compline%%%%$tab*}
if ((${#comp}>longest)); then
longest=${#comp}
fi
done < <(printf "%%s\n" "${completions[@]}")
# If there is a single completion left, remove the description text and escape any special characters
if ((${#COMPREPLY[*]} == 1)); then
__%[1]s_debug "COMPREPLY[0]: ${COMPREPLY[0]}"
COMPREPLY[0]=$(printf "%%q" "${COMPREPLY[0]%%%%$tab*}")
__%[1]s_debug "Removed description from single completion, which is now: ${COMPREPLY[0]}"
else
# Format the descriptions
__%[1]s_format_comp_descriptions $longest
fi
}
__%[1]s_handle_special_char()
{
local comp="$1"
local char=$2
if [[ "$comp" == *${char}* && "$COMP_WORDBREAKS" == *${char}* ]]; then
local word=${comp%%"${comp##*${char}}"}
local idx=${#COMPREPLY[*]}
while ((--idx >= 0)); do
COMPREPLY[idx]=${COMPREPLY[idx]#"$word"}
done
fi
}
__%[1]s_format_comp_descriptions()
{
local tab=$'\t'
local comp desc maxdesclength
local longest=$1
local i ci
for ci in ${!COMPREPLY[*]}; do
comp=${COMPREPLY[ci]}
# Properly format the description string which follows a tab character if there is one
if [[ "$comp" == *$tab* ]]; then
__%[1]s_debug "Original comp: $comp"
desc=${comp#*$tab}
comp=${comp%%%%$tab*}
# $COLUMNS stores the current shell width.
# Remove an extra 4 because we add 2 spaces and 2 parentheses.
maxdesclength=$(( COLUMNS - longest - 4 ))
# Make sure we can fit a description of at least 8 characters
# if we are to align the descriptions.
if ((maxdesclength > 8)); then
# Add the proper number of spaces to align the descriptions
for ((i = ${#comp} ; i < longest ; i++)); do
comp+=" "
done
else
# Don't pad the descriptions so we can fit more text after the completion
maxdesclength=$(( COLUMNS - ${#comp} - 4 ))
fi
# If there is enough space for any description text,
# truncate the descriptions that are too long for the shell width
if ((maxdesclength > 0)); then
if ((${#desc} > maxdesclength)); then
desc=${desc:0:$(( maxdesclength - 1 ))}
desc+="…"
fi
comp+=" ($desc)"
fi
COMPREPLY[ci]=$comp
__%[1]s_debug "Final comp: $comp"
fi
done
}
__start_%[1]s()
{
local cur prev words cword split
COMPREPLY=()
# Call _init_completion from the bash-completion package
# to prepare the arguments properly
if declare -F _init_completion >/dev/null 2>&1; then
_init_completion -n =: || return
else
__%[1]s_init_completion -n =: || return
fi
__%[1]s_debug
__%[1]s_debug "========= starting completion logic =========="
__%[1]s_debug "cur is ${cur}, words[*] is ${words[*]}, #words[@] is ${#words[@]}, cword is $cword"
# The user could have moved the cursor backwards on the command-line.
# We need to trigger completion from the $cword location, so we need
# to truncate the command-line ($words) up to the $cword location.
words=("${words[@]:0:$cword+1}")
__%[1]s_debug "Truncated words[*]: ${words[*]},"
local out directive
__%[1]s_get_completion_results
__%[1]s_process_completion_results
}
if [[ $(type -t compopt) = "builtin" ]]; then
complete -o default -F __start_%[1]s %[1]s
else
complete -o default -o nospace -F __start_%[1]s %[1]s
fi
# ex: ts=4 sw=4 et filetype=sh
`, name, compCmd,
ShellCompDirectiveError, ShellCompDirectiveNoSpace, ShellCompDirectiveNoFileComp,
ShellCompDirectiveFilterFileExt, ShellCompDirectiveFilterDirs, ShellCompDirectiveKeepOrder,
activeHelpMarker))
}
// GenBashCompletionFileV2 generates Bash completion version 2.
func (c *Command) GenBashCompletionFileV2(filename string, includeDesc bool) error {
outFile, err := os.Create(filename)
if err != nil {
return err
}
defer outFile.Close()
return c.GenBashCompletionV2(outFile, includeDesc)
}
// GenBashCompletionV2 generates Bash completion file version 2
// and writes it to the passed writer.
func (c *Command) GenBashCompletionV2(w io.Writer, includeDesc bool) error {
return c.genBashCompletion(w, includeDesc)
}

vendor/github.com/spf13/cobra/cobra.go (generated, vendored, new file, 246 lines)

@@ -0,0 +1,246 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Commands similar to git, go tools and other modern CLI tools
// inspired by go, go-Commander, gh and subcommand
package cobra
import (
"fmt"
"io"
"os"
"reflect"
"strconv"
"strings"
"text/template"
"time"
"unicode"
)
var templateFuncs = template.FuncMap{
"trim": strings.TrimSpace,
"trimRightSpace": trimRightSpace,
"trimTrailingWhitespaces": trimRightSpace,
"appendIfNotPresent": appendIfNotPresent,
"rpad": rpad,
"gt": Gt,
"eq": Eq,
}
var initializers []func()
var finalizers []func()
const (
defaultPrefixMatching = false
defaultCommandSorting = true
defaultCaseInsensitive = false
defaultTraverseRunHooks = false
)
// EnablePrefixMatching allows setting automatic prefix matching. Automatic prefix matching can be a dangerous thing
// to automatically enable in CLI tools.
// Set this to true to enable it.
var EnablePrefixMatching = defaultPrefixMatching
// EnableCommandSorting controls sorting of the slice of commands, which is turned on by default.
// To disable sorting, set it to false.
var EnableCommandSorting = defaultCommandSorting
// EnableCaseInsensitive allows case-insensitive command names. (case-sensitive by default)
var EnableCaseInsensitive = defaultCaseInsensitive
// EnableTraverseRunHooks executes persistent pre-run and post-run hooks from all parents.
// By default this is disabled, which means only the first run hook to be found is executed.
var EnableTraverseRunHooks = defaultTraverseRunHooks
// MousetrapHelpText enables an information splash screen on Windows
// if the CLI is started from explorer.exe.
// To disable the mousetrap, just set this variable to blank string ("").
// Works only on Microsoft Windows.
var MousetrapHelpText = `This is a command line tool.
You need to open cmd.exe and run it from there.
`
// MousetrapDisplayDuration controls how long the MousetrapHelpText message is displayed on Windows
// if the CLI is started from explorer.exe. Set to 0 to wait for the return key to be pressed.
// To disable the mousetrap, just set MousetrapHelpText to blank string ("").
// Works only on Microsoft Windows.
var MousetrapDisplayDuration = 5 * time.Second
// AddTemplateFunc adds a template function that's available to Usage and Help
// template generation.
func AddTemplateFunc(name string, tmplFunc interface{}) {
templateFuncs[name] = tmplFunc
}
// AddTemplateFuncs adds multiple template functions that are available to Usage and
// Help template generation.
func AddTemplateFuncs(tmplFuncs template.FuncMap) {
for k, v := range tmplFuncs {
templateFuncs[k] = v
}
}
// OnInitialize sets the passed functions to be run when each command's
// Execute method is called.
func OnInitialize(y ...func()) {
initializers = append(initializers, y...)
}
// OnFinalize sets the passed functions to be run when each command's
// Execute method is terminated.
func OnFinalize(y ...func()) {
finalizers = append(finalizers, y...)
}
// FIXME Gt is unused by cobra and should be removed in a version 2. It exists only for compatibility with users of cobra.
// Gt takes two types and checks whether the first type is greater than the second. In case of types Arrays, Chans,
// Maps and Slices, Gt will compare their lengths. Ints are compared directly while strings are first parsed as
// ints and then compared.
func Gt(a interface{}, b interface{}) bool {
var left, right int64
av := reflect.ValueOf(a)
switch av.Kind() {
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice:
left = int64(av.Len())
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
left = av.Int()
case reflect.String:
left, _ = strconv.ParseInt(av.String(), 10, 64)
}
bv := reflect.ValueOf(b)
switch bv.Kind() {
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice:
right = int64(bv.Len())
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
right = bv.Int()
case reflect.String:
right, _ = strconv.ParseInt(bv.String(), 10, 64)
}
return left > right
}
// FIXME Eq is unused by cobra and should be removed in a version 2. It exists only for compatibility with users of cobra.
// Eq takes two types and checks whether they are equal. Supported types are int and string. Unsupported types will panic.
func Eq(a interface{}, b interface{}) bool {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice:
panic("Eq called on unsupported type")
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() == bv.Int()
case reflect.String:
return av.String() == bv.String()
}
return false
}
func trimRightSpace(s string) string {
return strings.TrimRightFunc(s, unicode.IsSpace)
}
// FIXME appendIfNotPresent is unused by cobra and should be removed in a version 2. It exists only for compatibility with users of cobra.
// appendIfNotPresent will append stringToAppend to the end of s, but only if it's not yet present in s.
func appendIfNotPresent(s, stringToAppend string) string {
if strings.Contains(s, stringToAppend) {
return s
}
return s + " " + stringToAppend
}
// rpad adds padding to the right of a string.
func rpad(s string, padding int) string {
formattedString := fmt.Sprintf("%%-%ds", padding)
return fmt.Sprintf(formattedString, s)
}
func tmpl(text string) *tmplFunc {
return &tmplFunc{
tmpl: text,
fn: func(w io.Writer, data interface{}) error {
t := template.New("top")
t.Funcs(templateFuncs)
template.Must(t.Parse(text))
return t.Execute(w, data)
},
}
}
// ld compares two strings and returns the levenshtein distance between them.
func ld(s, t string, ignoreCase bool) int {
if ignoreCase {
s = strings.ToLower(s)
t = strings.ToLower(t)
}
d := make([][]int, len(s)+1)
for i := range d {
d[i] = make([]int, len(t)+1)
d[i][0] = i
}
for j := range d[0] {
d[0][j] = j
}
for j := 1; j <= len(t); j++ {
for i := 1; i <= len(s); i++ {
if s[i-1] == t[j-1] {
d[i][j] = d[i-1][j-1]
} else {
min := d[i-1][j]
if d[i][j-1] < min {
min = d[i][j-1]
}
if d[i-1][j-1] < min {
min = d[i-1][j-1]
}
d[i][j] = min + 1
}
}
}
return d[len(s)][len(t)]
}
func stringInSlice(a string, list []string) bool {
for _, b := range list {
if b == a {
return true
}
}
return false
}
// CheckErr prints the msg with the prefix 'Error:' and exits with error code 1. If the msg is nil, it does nothing.
func CheckErr(msg interface{}) {
if msg != nil {
fmt.Fprintln(os.Stderr, "Error:", msg)
os.Exit(1)
}
}
// WriteStringAndCheck writes a string into a buffer, and checks if the error is not nil.
func WriteStringAndCheck(b io.StringWriter, s string) {
_, err := b.WriteString(s)
CheckErr(err)
}

vendor/github.com/spf13/cobra/command.go (generated, vendored, new file, 2072 lines)
File diff suppressed because it is too large.

vendor/github.com/spf13/cobra/command_notwin.go (generated, vendored, new file, 20 lines)

@@ -0,0 +1,20 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build !windows
// +build !windows
package cobra
var preExecHookFn func(*Command)

vendor/github.com/spf13/cobra/command_win.go (generated, vendored, new file, 41 lines)

@@ -0,0 +1,41 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build windows
// +build windows
package cobra
import (
"fmt"
"os"
"time"
"github.com/inconshreveable/mousetrap"
)
var preExecHookFn = preExecHook
func preExecHook(c *Command) {
if MousetrapHelpText != "" && mousetrap.StartedByExplorer() {
c.Print(MousetrapHelpText)
if MousetrapDisplayDuration > 0 {
time.Sleep(MousetrapDisplayDuration)
} else {
c.Println("Press return to continue...")
fmt.Scanln()
}
os.Exit(1)
}
}

vendor/github.com/spf13/cobra/completions.go (generated, vendored, new file, 1020 lines)
File diff suppressed because it is too large.

vendor/github.com/spf13/cobra/fish_completions.go (generated, vendored, new file, 292 lines)

@@ -0,0 +1,292 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cobra
import (
"bytes"
"fmt"
"io"
"os"
"strings"
)
func genFishComp(buf io.StringWriter, name string, includeDesc bool) {
// Variables should not contain a '-' or ':' character
nameForVar := name
nameForVar = strings.ReplaceAll(nameForVar, "-", "_")
nameForVar = strings.ReplaceAll(nameForVar, ":", "_")
compCmd := ShellCompRequestCmd
if !includeDesc {
compCmd = ShellCompNoDescRequestCmd
}
WriteStringAndCheck(buf, fmt.Sprintf("# fish completion for %-36s -*- shell-script -*-\n", name))
WriteStringAndCheck(buf, fmt.Sprintf(`
function __%[1]s_debug
set -l file "$BASH_COMP_DEBUG_FILE"
if test -n "$file"
echo "$argv" >> $file
end
end
function __%[1]s_perform_completion
__%[1]s_debug "Starting __%[1]s_perform_completion"
# Extract all args except the last one
set -l args (commandline -opc)
# Extract the last arg and escape it in case it is a space
set -l lastArg (string escape -- (commandline -ct))
__%[1]s_debug "args: $args"
__%[1]s_debug "last arg: $lastArg"
# Disable ActiveHelp which is not supported for fish shell
set -l requestComp "%[10]s=0 $args[1] %[3]s $args[2..-1] $lastArg"
__%[1]s_debug "Calling $requestComp"
set -l results (eval $requestComp 2> /dev/null)
# Some programs may output extra empty lines after the directive.
# Let's ignore them or else it will break completion.
# Ref: https://github.com/spf13/cobra/issues/1279
for line in $results[-1..1]
if test (string trim -- $line) = ""
# Found an empty line, remove it
set results $results[1..-2]
else
# Found non-empty line, we have our proper output
break
end
end
set -l comps $results[1..-2]
set -l directiveLine $results[-1]
# For Fish, when completing a flag with an = (e.g., <program> -n=<TAB>)
# completions must be prefixed with the flag
set -l flagPrefix (string match -r -- '-.*=' "$lastArg")
__%[1]s_debug "Comps: $comps"
__%[1]s_debug "DirectiveLine: $directiveLine"
__%[1]s_debug "flagPrefix: $flagPrefix"
for comp in $comps
printf "%%s%%s\n" "$flagPrefix" "$comp"
end
printf "%%s\n" "$directiveLine"
end
# this function limits calls to __%[1]s_perform_completion, by caching the result behind $__%[1]s_perform_completion_once_result
function __%[1]s_perform_completion_once
__%[1]s_debug "Starting __%[1]s_perform_completion_once"
if test -n "$__%[1]s_perform_completion_once_result"
__%[1]s_debug "Seems like a valid result already exists, skipping __%[1]s_perform_completion"
return 0
end
set --global __%[1]s_perform_completion_once_result (__%[1]s_perform_completion)
if test -z "$__%[1]s_perform_completion_once_result"
__%[1]s_debug "No completions, probably due to a failure"
return 1
end
__%[1]s_debug "Performed completions and set __%[1]s_perform_completion_once_result"
return 0
end
# this function is used to clear the $__%[1]s_perform_completion_once_result variable after completions are run
function __%[1]s_clear_perform_completion_once_result
__%[1]s_debug ""
__%[1]s_debug "========= clearing previously set __%[1]s_perform_completion_once_result variable =========="
set --erase __%[1]s_perform_completion_once_result
__%[1]s_debug "Successfully erased the variable __%[1]s_perform_completion_once_result"
end
function __%[1]s_requires_order_preservation
__%[1]s_debug ""
__%[1]s_debug "========= checking if order preservation is required =========="
__%[1]s_perform_completion_once
if test -z "$__%[1]s_perform_completion_once_result"
__%[1]s_debug "Error determining if order preservation is required"
return 1
end
set -l directive (string sub --start 2 $__%[1]s_perform_completion_once_result[-1])
__%[1]s_debug "Directive is: $directive"
set -l shellCompDirectiveKeepOrder %[9]d
set -l keeporder (math (math --scale 0 $directive / $shellCompDirectiveKeepOrder) %% 2)
__%[1]s_debug "Keeporder is: $keeporder"
if test $keeporder -ne 0
__%[1]s_debug "This does require order preservation"
return 0
end
__%[1]s_debug "This doesn't require order preservation"
return 1
end
# This function does two things:
# - Obtain the completions and store them in the global __%[1]s_comp_results
# - Return false if file completion should be performed
function __%[1]s_prepare_completions
__%[1]s_debug ""
__%[1]s_debug "========= starting completion logic =========="
# Start fresh
set --erase __%[1]s_comp_results
__%[1]s_perform_completion_once
__%[1]s_debug "Completion results: $__%[1]s_perform_completion_once_result"
if test -z "$__%[1]s_perform_completion_once_result"
__%[1]s_debug "No completion, probably due to a failure"
# Might as well do file completion, in case it helps
return 1
end
set -l directive (string sub --start 2 $__%[1]s_perform_completion_once_result[-1])
set --global __%[1]s_comp_results $__%[1]s_perform_completion_once_result[1..-2]
__%[1]s_debug "Completions are: $__%[1]s_comp_results"
__%[1]s_debug "Directive is: $directive"
set -l shellCompDirectiveError %[4]d
set -l shellCompDirectiveNoSpace %[5]d
set -l shellCompDirectiveNoFileComp %[6]d
set -l shellCompDirectiveFilterFileExt %[7]d
set -l shellCompDirectiveFilterDirs %[8]d
if test -z "$directive"
set directive 0
end
set -l compErr (math (math --scale 0 $directive / $shellCompDirectiveError) %% 2)
if test $compErr -eq 1
__%[1]s_debug "Received error directive: aborting."
# Might as well do file completion, in case it helps
return 1
end
set -l filefilter (math (math --scale 0 $directive / $shellCompDirectiveFilterFileExt) %% 2)
set -l dirfilter (math (math --scale 0 $directive / $shellCompDirectiveFilterDirs) %% 2)
if test $filefilter -eq 1; or test $dirfilter -eq 1
__%[1]s_debug "File extension filtering or directory filtering not supported"
# Do full file completion instead
return 1
end
set -l nospace (math (math --scale 0 $directive / $shellCompDirectiveNoSpace) %% 2)
set -l nofiles (math (math --scale 0 $directive / $shellCompDirectiveNoFileComp) %% 2)
__%[1]s_debug "nospace: $nospace, nofiles: $nofiles"
# If we want to prevent a space, or if file completion is NOT disabled,
# we need to count the number of valid completions.
# To do so, we will filter on prefix as the completions we have received
# may not already be filtered so as to allow fish to match on different
# criteria than the prefix.
if test $nospace -ne 0; or test $nofiles -eq 0
set -l prefix (commandline -t | string escape --style=regex)
__%[1]s_debug "prefix: $prefix"
set -l completions (string match -r -- "^$prefix.*" $__%[1]s_comp_results)
set --global __%[1]s_comp_results $completions
__%[1]s_debug "Filtered completions are: $__%[1]s_comp_results"
# Important not to quote the variable for count to work
set -l numComps (count $__%[1]s_comp_results)
__%[1]s_debug "numComps: $numComps"
if test $numComps -eq 1; and test $nospace -ne 0
# We must first split on \t to get rid of the descriptions to be
# able to check what the actual completion will be.
# We don't need descriptions anyway since there is only a single
# real completion which the shell will expand immediately.
set -l split (string split --max 1 \t $__%[1]s_comp_results[1])
# Fish won't add a space if the completion ends with any
# of the following characters: @=/:.,
set -l lastChar (string sub -s -1 -- $split)
if not string match -r -q "[@=/:.,]" -- "$lastChar"
# In other cases, to support the "nospace" directive we trick the shell
# by outputting an extra, longer completion.
__%[1]s_debug "Adding second completion to perform nospace directive"
set --global __%[1]s_comp_results $split[1] $split[1].
__%[1]s_debug "Completions are now: $__%[1]s_comp_results"
end
end
if test $numComps -eq 0; and test $nofiles -eq 0
# To be consistent with bash and zsh, we only trigger file
# completion when there are no other completions
__%[1]s_debug "Requesting file completion"
return 1
end
end
return 0
end
# Since Fish completions are only loaded once the user triggers them, we trigger them ourselves
# so we can properly delete any completions provided by another script.
# Only do this if the program can be found, or else fish may print some errors; besides,
# the existing completions will only be loaded if the program can be found.
if type -q "%[2]s"
# The space after the program name is essential to trigger completion for the program
# and not completion of the program name itself.
# Also, we use '> /dev/null 2>&1' since '&>' is not supported in older versions of fish.
complete --do-complete "%[2]s " > /dev/null 2>&1
end
# Remove any pre-existing completions for the program since we will be handling all of them.
complete -c %[2]s -e
# this will get called after the two calls below and clear the $__%[1]s_perform_completion_once_result global
complete -c %[2]s -n '__%[1]s_clear_perform_completion_once_result'
# The call to __%[1]s_prepare_completions will setup __%[1]s_comp_results
# which provides the program's completion choices.
# If this doesn't require order preservation, we don't use the -k flag
complete -c %[2]s -n 'not __%[1]s_requires_order_preservation && __%[1]s_prepare_completions' -f -a '$__%[1]s_comp_results'
# otherwise we use the -k flag
complete -k -c %[2]s -n '__%[1]s_requires_order_preservation && __%[1]s_prepare_completions' -f -a '$__%[1]s_comp_results'
`, nameForVar, name, compCmd,
ShellCompDirectiveError, ShellCompDirectiveNoSpace, ShellCompDirectiveNoFileComp,
ShellCompDirectiveFilterFileExt, ShellCompDirectiveFilterDirs, ShellCompDirectiveKeepOrder, activeHelpEnvVar(name)))
}
// GenFishCompletion generates fish completion file and writes to the passed writer.
func (c *Command) GenFishCompletion(w io.Writer, includeDesc bool) error {
buf := new(bytes.Buffer)
genFishComp(buf, c.Name(), includeDesc)
_, err := buf.WriteTo(w)
return err
}
// GenFishCompletionFile generates fish completion file.
func (c *Command) GenFishCompletionFile(filename string, includeDesc bool) error {
outFile, err := os.Create(filename)
if err != nil {
return err
}
defer outFile.Close()
return c.GenFishCompletion(outFile, includeDesc)
}

vendor/github.com/spf13/cobra/flag_groups.go (generated, vendored, new file, 290 lines)

@@ -0,0 +1,290 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cobra
import (
"fmt"
"sort"
"strings"
flag "github.com/spf13/pflag"
)
const (
requiredAsGroupAnnotation = "cobra_annotation_required_if_others_set"
oneRequiredAnnotation = "cobra_annotation_one_required"
mutuallyExclusiveAnnotation = "cobra_annotation_mutually_exclusive"
)
// MarkFlagsRequiredTogether marks the given flags with annotations so that Cobra errors
// if the command is invoked with a subset (but not all) of the given flags.
func (c *Command) MarkFlagsRequiredTogether(flagNames ...string) {
c.mergePersistentFlags()
for _, v := range flagNames {
f := c.Flags().Lookup(v)
if f == nil {
panic(fmt.Sprintf("Failed to find flag %q and mark it as being required in a flag group", v))
}
if err := c.Flags().SetAnnotation(v, requiredAsGroupAnnotation, append(f.Annotations[requiredAsGroupAnnotation], strings.Join(flagNames, " "))); err != nil {
// Only errs if the flag isn't found.
panic(err)
}
}
}
// MarkFlagsOneRequired marks the given flags with annotations so that Cobra errors
// if the command is invoked without at least one flag from the given set of flags.
func (c *Command) MarkFlagsOneRequired(flagNames ...string) {
c.mergePersistentFlags()
for _, v := range flagNames {
f := c.Flags().Lookup(v)
if f == nil {
panic(fmt.Sprintf("Failed to find flag %q and mark it as being in a one-required flag group", v))
}
if err := c.Flags().SetAnnotation(v, oneRequiredAnnotation, append(f.Annotations[oneRequiredAnnotation], strings.Join(flagNames, " "))); err != nil {
// Only errs if the flag isn't found.
panic(err)
}
}
}
// MarkFlagsMutuallyExclusive marks the given flags with annotations so that Cobra errors
// if the command is invoked with more than one flag from the given set of flags.
func (c *Command) MarkFlagsMutuallyExclusive(flagNames ...string) {
c.mergePersistentFlags()
for _, v := range flagNames {
f := c.Flags().Lookup(v)
if f == nil {
panic(fmt.Sprintf("Failed to find flag %q and mark it as being in a mutually exclusive flag group", v))
}
// Each time this is called is a single new entry; this allows it to be a member of multiple groups if needed.
if err := c.Flags().SetAnnotation(v, mutuallyExclusiveAnnotation, append(f.Annotations[mutuallyExclusiveAnnotation], strings.Join(flagNames, " "))); err != nil {
panic(err)
}
}
}
// ValidateFlagGroups validates the mutuallyExclusive/oneRequired/requiredAsGroup logic and returns the
// first error encountered.
func (c *Command) ValidateFlagGroups() error {
if c.DisableFlagParsing {
return nil
}
flags := c.Flags()
// groupStatus format is the list of flags as a unique ID,
// then a map of each flag name and whether it is set or not.
groupStatus := map[string]map[string]bool{}
oneRequiredGroupStatus := map[string]map[string]bool{}
mutuallyExclusiveGroupStatus := map[string]map[string]bool{}
flags.VisitAll(func(pflag *flag.Flag) {
processFlagForGroupAnnotation(flags, pflag, requiredAsGroupAnnotation, groupStatus)
processFlagForGroupAnnotation(flags, pflag, oneRequiredAnnotation, oneRequiredGroupStatus)
processFlagForGroupAnnotation(flags, pflag, mutuallyExclusiveAnnotation, mutuallyExclusiveGroupStatus)
})
if err := validateRequiredFlagGroups(groupStatus); err != nil {
return err
}
if err := validateOneRequiredFlagGroups(oneRequiredGroupStatus); err != nil {
return err
}
if err := validateExclusiveFlagGroups(mutuallyExclusiveGroupStatus); err != nil {
return err
}
return nil
}
func hasAllFlags(fs *flag.FlagSet, flagnames ...string) bool {
for _, fname := range flagnames {
f := fs.Lookup(fname)
if f == nil {
return false
}
}
return true
}
func processFlagForGroupAnnotation(flags *flag.FlagSet, pflag *flag.Flag, annotation string, groupStatus map[string]map[string]bool) {
groupInfo, found := pflag.Annotations[annotation]
if found {
for _, group := range groupInfo {
if groupStatus[group] == nil {
flagnames := strings.Split(group, " ")
// Only consider this flag group at all if all the flags are defined.
if !hasAllFlags(flags, flagnames...) {
continue
}
groupStatus[group] = make(map[string]bool, len(flagnames))
for _, name := range flagnames {
groupStatus[group][name] = false
}
}
groupStatus[group][pflag.Name] = pflag.Changed
}
}
}
func validateRequiredFlagGroups(data map[string]map[string]bool) error {
keys := sortedKeys(data)
for _, flagList := range keys {
flagnameAndStatus := data[flagList]
unset := []string{}
for flagname, isSet := range flagnameAndStatus {
if !isSet {
unset = append(unset, flagname)
}
}
if len(unset) == len(flagnameAndStatus) || len(unset) == 0 {
continue
}
// Sort values, so they can be tested/scripted against consistently.
sort.Strings(unset)
return fmt.Errorf("if any flags in the group [%v] are set they must all be set; missing %v", flagList, unset)
}
return nil
}
func validateOneRequiredFlagGroups(data map[string]map[string]bool) error {
keys := sortedKeys(data)
for _, flagList := range keys {
flagnameAndStatus := data[flagList]
var set []string
for flagname, isSet := range flagnameAndStatus {
if isSet {
set = append(set, flagname)
}
}
if len(set) >= 1 {
continue
}
// Sort values, so they can be tested/scripted against consistently.
sort.Strings(set)
return fmt.Errorf("at least one of the flags in the group [%v] is required", flagList)
}
return nil
}
func validateExclusiveFlagGroups(data map[string]map[string]bool) error {
keys := sortedKeys(data)
for _, flagList := range keys {
flagnameAndStatus := data[flagList]
var set []string
for flagname, isSet := range flagnameAndStatus {
if isSet {
set = append(set, flagname)
}
}
if len(set) == 0 || len(set) == 1 {
continue
}
// Sort values, so they can be tested/scripted against consistently.
sort.Strings(set)
return fmt.Errorf("if any flags in the group [%v] are set none of the others can be; %v were all set", flagList, set)
}
return nil
}
func sortedKeys(m map[string]map[string]bool) []string {
keys := make([]string, len(m))
i := 0
for k := range m {
keys[i] = k
i++
}
sort.Strings(keys)
return keys
}
// enforceFlagGroupsForCompletion will do the following:
// - when a flag in a group is present, other flags in the group will be marked required
// - when none of the flags in a one-required group are present, all flags in the group will be marked required
// - when a flag in a mutually exclusive group is present, other flags in the group will be marked as hidden
// This allows the standard completion logic to behave appropriately for flag groups
func (c *Command) enforceFlagGroupsForCompletion() {
if c.DisableFlagParsing {
return
}
flags := c.Flags()
groupStatus := map[string]map[string]bool{}
oneRequiredGroupStatus := map[string]map[string]bool{}
mutuallyExclusiveGroupStatus := map[string]map[string]bool{}
c.Flags().VisitAll(func(pflag *flag.Flag) {
processFlagForGroupAnnotation(flags, pflag, requiredAsGroupAnnotation, groupStatus)
processFlagForGroupAnnotation(flags, pflag, oneRequiredAnnotation, oneRequiredGroupStatus)
processFlagForGroupAnnotation(flags, pflag, mutuallyExclusiveAnnotation, mutuallyExclusiveGroupStatus)
})
// If a flag that is part of a group is present, we make all the other flags
// of that group required so that the shell completion suggests them automatically
for flagList, flagnameAndStatus := range groupStatus {
for _, isSet := range flagnameAndStatus {
if isSet {
// One of the flags of the group is set, mark the other ones as required
for _, fName := range strings.Split(flagList, " ") {
_ = c.MarkFlagRequired(fName)
}
}
}
}
// If none of the flags of a one-required group are present, we make all the flags
// of that group required so that the shell completion suggests them automatically
for flagList, flagnameAndStatus := range oneRequiredGroupStatus {
isSet := false
for _, isSet = range flagnameAndStatus {
if isSet {
break
}
}
// None of the flags of the group are set, mark all flags in the group
// as required
if !isSet {
for _, fName := range strings.Split(flagList, " ") {
_ = c.MarkFlagRequired(fName)
}
}
}
// If a flag that is mutually exclusive to others is present, we hide the other
// flags of that group so the shell completion does not suggest them
for flagList, flagnameAndStatus := range mutuallyExclusiveGroupStatus {
for flagName, isSet := range flagnameAndStatus {
if isSet {
// One of the flags of the mutually exclusive group is set, mark the other ones as hidden
// Don't mark the flag that is already set as hidden because it may be an
// array or slice flag and therefore must continue being suggested
for _, fName := range strings.Split(flagList, " ") {
if fName != flagName {
flag := c.Flags().Lookup(fName)
flag.Hidden = true
}
}
}
}
}
}

vendor/github.com/spf13/cobra/powershell_completions.go (generated, vendored, new file, 350 lines)

@@ -0,0 +1,350 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// The generated scripts require PowerShell v5.0+ (which comes with Windows 10, but
// can be downloaded separately for Windows 7 or 8.1).
package cobra
import (
"bytes"
"fmt"
"io"
"os"
"strings"
)
func genPowerShellComp(buf io.StringWriter, name string, includeDesc bool) {
// Variables should not contain a '-' or ':' character
nameForVar := name
nameForVar = strings.ReplaceAll(nameForVar, "-", "_")
nameForVar = strings.ReplaceAll(nameForVar, ":", "_")
compCmd := ShellCompRequestCmd
if !includeDesc {
compCmd = ShellCompNoDescRequestCmd
}
WriteStringAndCheck(buf, fmt.Sprintf(`# powershell completion for %-36[1]s -*- shell-script -*-
function __%[1]s_debug {
if ($env:BASH_COMP_DEBUG_FILE) {
"$args" | Out-File -Append -FilePath "$env:BASH_COMP_DEBUG_FILE"
}
}
filter __%[1]s_escapeStringWithSpecialChars {
`+" $_ -replace '\\s|#|@|\\$|;|,|''|\\{|\\}|\\(|\\)|\"|`|\\||<|>|&','`$&'"+`
}
[scriptblock]${__%[2]sCompleterBlock} = {
param(
$WordToComplete,
$CommandAst,
$CursorPosition
)
# Get the current command line and convert into a string
$Command = $CommandAst.CommandElements
$Command = "$Command"
__%[1]s_debug ""
__%[1]s_debug "========= starting completion logic =========="
__%[1]s_debug "WordToComplete: $WordToComplete Command: $Command CursorPosition: $CursorPosition"
# The user could have moved the cursor backwards on the command-line.
# We need to trigger completion from the $CursorPosition location, so we need
# to truncate the command-line ($Command) up to the $CursorPosition location.
# Make sure the $Command is longer than the $CursorPosition before we truncate.
# This happens because the $Command does not include the last space.
if ($Command.Length -gt $CursorPosition) {
$Command=$Command.Substring(0,$CursorPosition)
}
__%[1]s_debug "Truncated command: $Command"
$ShellCompDirectiveError=%[4]d
$ShellCompDirectiveNoSpace=%[5]d
$ShellCompDirectiveNoFileComp=%[6]d
$ShellCompDirectiveFilterFileExt=%[7]d
$ShellCompDirectiveFilterDirs=%[8]d
$ShellCompDirectiveKeepOrder=%[9]d
# Prepare the command to request completions for the program.
# Split the command at the first space to separate the program and arguments.
$Program,$Arguments = $Command.Split(" ",2)
$RequestComp="$Program %[3]s $Arguments"
__%[1]s_debug "RequestComp: $RequestComp"
# we cannot use $WordToComplete because it
# has the wrong values if the cursor was moved
# so use the last argument
if ($WordToComplete -ne "" ) {
$WordToComplete = $Arguments.Split(" ")[-1]
}
__%[1]s_debug "New WordToComplete: $WordToComplete"
# Check for flag with equal sign
$IsEqualFlag = ($WordToComplete -Like "--*=*" )
if ( $IsEqualFlag ) {
__%[1]s_debug "Completing equal sign flag"
# Remove the flag part
$Flag,$WordToComplete = $WordToComplete.Split("=",2)
}
if ( $WordToComplete -eq "" -And ( -Not $IsEqualFlag )) {
# If the last parameter is complete (there is a space following it)
# We add an extra empty parameter so we can indicate this to the go method.
__%[1]s_debug "Adding extra empty parameter"
# PowerShell 7.2+ changed the way how the arguments are passed to executables,
# so for pre-7.2 or when Legacy argument passing is enabled we need to use
`+" # `\"`\" to pass an empty argument, a \"\" or '' does not work!!!"+`
if ($PSVersionTable.PsVersion -lt [version]'7.2.0' -or
($PSVersionTable.PsVersion -lt [version]'7.3.0' -and -not [ExperimentalFeature]::IsEnabled("PSNativeCommandArgumentPassing")) -or
(($PSVersionTable.PsVersion -ge [version]'7.3.0' -or [ExperimentalFeature]::IsEnabled("PSNativeCommandArgumentPassing")) -and
$PSNativeCommandArgumentPassing -eq 'Legacy')) {
`+" $RequestComp=\"$RequestComp\" + ' `\"`\"'"+`
} else {
$RequestComp="$RequestComp" + ' ""'
}
}
__%[1]s_debug "Calling $RequestComp"
# First disable ActiveHelp which is not supported for PowerShell
${env:%[10]s}=0
# Call the command, store the output in $Out, and redirect stderr and stdout to null
# $Out is an array that contains one line per element
Invoke-Expression -OutVariable out "$RequestComp" 2>&1 | Out-Null
# get directive from last line
[int]$Directive = $Out[-1].TrimStart(':')
if ($Directive -eq "") {
# There is no directive specified
$Directive = 0
}
__%[1]s_debug "The completion directive is: $Directive"
# remove directive (last element) from out
$Out = $Out | Where-Object { $_ -ne $Out[-1] }
__%[1]s_debug "The completions are: $Out"
if (($Directive -band $ShellCompDirectiveError) -ne 0 ) {
# Error code. No completion.
__%[1]s_debug "Received error from custom completion go code"
return
}
$Longest = 0
[Array]$Values = $Out | ForEach-Object {
#Split the output in name and description
`+" $Name, $Description = $_.Split(\"`t\",2)"+`
__%[1]s_debug "Name: $Name Description: $Description"
# Look for the longest completion so that we can format things nicely
if ($Longest -lt $Name.Length) {
$Longest = $Name.Length
}
# Set the description to a one space string if there is none set.
# This is needed because the CompletionResult does not accept an empty string as argument
if (-Not $Description) {
$Description = " "
}
New-Object -TypeName PSCustomObject -Property @{
Name = "$Name"
Description = "$Description"
}
}
$Space = " "
if (($Directive -band $ShellCompDirectiveNoSpace) -ne 0 ) {
# remove the space here
__%[1]s_debug "ShellCompDirectiveNoSpace is called"
$Space = ""
}
if ((($Directive -band $ShellCompDirectiveFilterFileExt) -ne 0 ) -or
(($Directive -band $ShellCompDirectiveFilterDirs) -ne 0 )) {
__%[1]s_debug "ShellCompDirectiveFilterFileExt ShellCompDirectiveFilterDirs are not supported"
# return here to prevent the completion of the extensions
return
}
$Values = $Values | Where-Object {
# filter the result
$_.Name -like "$WordToComplete*"
# Join the flag back if we have an equal sign flag
if ( $IsEqualFlag ) {
__%[1]s_debug "Join the equal sign flag back to the completion value"
$_.Name = $Flag + "=" + $_.Name
}
}
# we sort the values in ascending order by name if keep order isn't passed
if (($Directive -band $ShellCompDirectiveKeepOrder) -eq 0 ) {
$Values = $Values | Sort-Object -Property Name
}
if (($Directive -band $ShellCompDirectiveNoFileComp) -ne 0 ) {
__%[1]s_debug "ShellCompDirectiveNoFileComp is called"
if ($Values.Length -eq 0) {
# Just print an empty string here so the
# shell does not start to complete paths.
# We cannot use CompletionResult here because
# it does not accept an empty string as argument.
""
return
}
}
# Get the current mode
$Mode = (Get-PSReadLineKeyHandler | Where-Object {$_.Key -eq "Tab" }).Function
__%[1]s_debug "Mode: $Mode"
$Values | ForEach-Object {
# store temporary because switch will overwrite $_
$comp = $_
# PowerShell supports three different completion modes
# - TabCompleteNext (default windows style - on each key press the next option is displayed)
# - Complete (works like bash)
# - MenuComplete (works like zsh)
# You set the mode with Set-PSReadLineKeyHandler -Key Tab -Function <mode>
# CompletionResult Arguments:
# 1) CompletionText text to be used as the auto completion result
# 2) ListItemText text to be displayed in the suggestion list
# 3) ResultType type of completion result
# 4) ToolTip text for the tooltip with details about the object
switch ($Mode) {
# bash like
"Complete" {
if ($Values.Length -eq 1) {
__%[1]s_debug "Only one completion left"
# insert space after value
$CompletionText = $($comp.Name | __%[1]s_escapeStringWithSpecialChars) + $Space
if ($ExecutionContext.SessionState.LanguageMode -eq "FullLanguage"){
[System.Management.Automation.CompletionResult]::new($CompletionText, "$($comp.Name)", 'ParameterValue', "$($comp.Description)")
} else {
$CompletionText
}
} else {
# Add the proper number of spaces to align the descriptions
while($comp.Name.Length -lt $Longest) {
$comp.Name = $comp.Name + " "
}
# Check for empty description and only add parentheses if needed
if ($($comp.Description) -eq " " ) {
$Description = ""
} else {
$Description = " ($($comp.Description))"
}
$CompletionText = "$($comp.Name)$Description"
if ($ExecutionContext.SessionState.LanguageMode -eq "FullLanguage"){
[System.Management.Automation.CompletionResult]::new($CompletionText, "$($comp.Name)$Description", 'ParameterValue', "$($comp.Description)")
} else {
$CompletionText
}
}
}
# zsh like
"MenuComplete" {
# insert space after value
# MenuComplete will automatically show the ToolTip of
# the highlighted value at the bottom of the suggestions.
$CompletionText = $($comp.Name | __%[1]s_escapeStringWithSpecialChars) + $Space
if ($ExecutionContext.SessionState.LanguageMode -eq "FullLanguage"){
[System.Management.Automation.CompletionResult]::new($CompletionText, "$($comp.Name)", 'ParameterValue', "$($comp.Description)")
} else {
$CompletionText
}
}
# TabCompleteNext and in case we get something unknown
Default {
# Like MenuComplete but we don't want to add a space here because
# the user needs to press space anyway to get the completion.
# Description will not be shown because that's not possible with TabCompleteNext
$CompletionText = $($comp.Name | __%[1]s_escapeStringWithSpecialChars)
if ($ExecutionContext.SessionState.LanguageMode -eq "FullLanguage"){
[System.Management.Automation.CompletionResult]::new($CompletionText, "$($comp.Name)", 'ParameterValue', "$($comp.Description)")
} else {
$CompletionText
}
}
}
}
}
Register-ArgumentCompleter -CommandName '%[1]s' -ScriptBlock ${__%[2]sCompleterBlock}
`, name, nameForVar, compCmd,
ShellCompDirectiveError, ShellCompDirectiveNoSpace, ShellCompDirectiveNoFileComp,
ShellCompDirectiveFilterFileExt, ShellCompDirectiveFilterDirs, ShellCompDirectiveKeepOrder, activeHelpEnvVar(name)))
}
func (c *Command) genPowerShellCompletion(w io.Writer, includeDesc bool) error {
buf := new(bytes.Buffer)
genPowerShellComp(buf, c.Name(), includeDesc)
_, err := buf.WriteTo(w)
return err
}
func (c *Command) genPowerShellCompletionFile(filename string, includeDesc bool) error {
outFile, err := os.Create(filename)
if err != nil {
return err
}
defer outFile.Close()
return c.genPowerShellCompletion(outFile, includeDesc)
}
// GenPowerShellCompletionFile generates powershell completion file without descriptions.
func (c *Command) GenPowerShellCompletionFile(filename string) error {
return c.genPowerShellCompletionFile(filename, false)
}
// GenPowerShellCompletion generates powershell completion file without descriptions
// and writes it to the passed writer.
func (c *Command) GenPowerShellCompletion(w io.Writer) error {
return c.genPowerShellCompletion(w, false)
}
// GenPowerShellCompletionFileWithDesc generates powershell completion file with descriptions.
func (c *Command) GenPowerShellCompletionFileWithDesc(filename string) error {
return c.genPowerShellCompletionFile(filename, true)
}
// GenPowerShellCompletionWithDesc generates powershell completion file with descriptions
// and writes it to the passed writer.
func (c *Command) GenPowerShellCompletionWithDesc(w io.Writer) error {
return c.genPowerShellCompletion(w, true)
}

vendor/github.com/spf13/cobra/shell_completions.go generated vendored Normal file
@@ -0,0 +1,98 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cobra
import (
"github.com/spf13/pflag"
)
// MarkFlagRequired instructs the various shell completion implementations to
// prioritize the named flag when performing completion,
// and causes your command to report an error if invoked without the flag.
func (c *Command) MarkFlagRequired(name string) error {
return MarkFlagRequired(c.Flags(), name)
}
// MarkPersistentFlagRequired instructs the various shell completion implementations to
// prioritize the named persistent flag when performing completion,
// and causes your command to report an error if invoked without the flag.
func (c *Command) MarkPersistentFlagRequired(name string) error {
return MarkFlagRequired(c.PersistentFlags(), name)
}
// MarkFlagRequired instructs the various shell completion implementations to
// prioritize the named flag when performing completion,
// and causes your command to report an error if invoked without the flag.
func MarkFlagRequired(flags *pflag.FlagSet, name string) error {
return flags.SetAnnotation(name, BashCompOneRequiredFlag, []string{"true"})
}
// MarkFlagFilename instructs the various shell completion implementations to
// limit completions for the named flag to the specified file extensions.
func (c *Command) MarkFlagFilename(name string, extensions ...string) error {
return MarkFlagFilename(c.Flags(), name, extensions...)
}
// MarkFlagCustom adds the BashCompCustom annotation to the named flag, if it exists.
// The bash completion script will call the bash function f for the flag.
//
// This will only work for bash completion.
// It is recommended to instead use c.RegisterFlagCompletionFunc(...) which allows
// to register a Go function which will work across all shells.
func (c *Command) MarkFlagCustom(name string, f string) error {
return MarkFlagCustom(c.Flags(), name, f)
}
// MarkPersistentFlagFilename instructs the various shell completion
// implementations to limit completions for the named persistent flag to the
// specified file extensions.
func (c *Command) MarkPersistentFlagFilename(name string, extensions ...string) error {
return MarkFlagFilename(c.PersistentFlags(), name, extensions...)
}
// MarkFlagFilename instructs the various shell completion implementations to
// limit completions for the named flag to the specified file extensions.
func MarkFlagFilename(flags *pflag.FlagSet, name string, extensions ...string) error {
return flags.SetAnnotation(name, BashCompFilenameExt, extensions)
}
// MarkFlagCustom adds the BashCompCustom annotation to the named flag, if it exists.
// The bash completion script will call the bash function f for the flag.
//
// This will only work for bash completion.
// It is recommended to instead use c.RegisterFlagCompletionFunc(...) which allows
// to register a Go function which will work across all shells.
func MarkFlagCustom(flags *pflag.FlagSet, name string, f string) error {
return flags.SetAnnotation(name, BashCompCustom, []string{f})
}
// MarkFlagDirname instructs the various shell completion implementations to
// limit completions for the named flag to directory names.
func (c *Command) MarkFlagDirname(name string) error {
return MarkFlagDirname(c.Flags(), name)
}
// MarkPersistentFlagDirname instructs the various shell completion
// implementations to limit completions for the named persistent flag to
// directory names.
func (c *Command) MarkPersistentFlagDirname(name string) error {
return MarkFlagDirname(c.PersistentFlags(), name)
}
// MarkFlagDirname instructs the various shell completion implementations to
// limit completions for the named flag to directory names.
func MarkFlagDirname(flags *pflag.FlagSet, name string) error {
return flags.SetAnnotation(name, BashCompSubdirsInDir, []string{})
}

vendor/github.com/spf13/cobra/zsh_completions.go generated vendored Normal file
@@ -0,0 +1,308 @@
// Copyright 2013-2023 The Cobra Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cobra
import (
"bytes"
"fmt"
"io"
"os"
)
// GenZshCompletionFile generates zsh completion file including descriptions.
func (c *Command) GenZshCompletionFile(filename string) error {
return c.genZshCompletionFile(filename, true)
}
// GenZshCompletion generates zsh completion file including descriptions
// and writes it to the passed writer.
func (c *Command) GenZshCompletion(w io.Writer) error {
return c.genZshCompletion(w, true)
}
// GenZshCompletionFileNoDesc generates zsh completion file without descriptions.
func (c *Command) GenZshCompletionFileNoDesc(filename string) error {
return c.genZshCompletionFile(filename, false)
}
// GenZshCompletionNoDesc generates zsh completion file without descriptions
// and writes it to the passed writer.
func (c *Command) GenZshCompletionNoDesc(w io.Writer) error {
return c.genZshCompletion(w, false)
}
// MarkZshCompPositionalArgumentFile only worked for zsh and its behavior was
// not consistent with Bash completion. It has therefore been disabled.
// Instead, when no other completion is specified, file completion is done by
// default for every argument. One can disable file completion on a per-argument
// basis by using ValidArgsFunction and ShellCompDirectiveNoFileComp.
// To achieve file extension filtering, one can use ValidArgsFunction and
// ShellCompDirectiveFilterFileExt.
//
// Deprecated
func (c *Command) MarkZshCompPositionalArgumentFile(argPosition int, patterns ...string) error {
return nil
}
// MarkZshCompPositionalArgumentWords only worked for zsh. It has therefore
// been disabled.
// To achieve the same behavior across all shells, one can use
// ValidArgs (for the first argument only) or ValidArgsFunction for
// any argument (can include the first one also).
//
// Deprecated
func (c *Command) MarkZshCompPositionalArgumentWords(argPosition int, words ...string) error {
return nil
}
func (c *Command) genZshCompletionFile(filename string, includeDesc bool) error {
outFile, err := os.Create(filename)
if err != nil {
return err
}
defer outFile.Close()
return c.genZshCompletion(outFile, includeDesc)
}
func (c *Command) genZshCompletion(w io.Writer, includeDesc bool) error {
buf := new(bytes.Buffer)
genZshComp(buf, c.Name(), includeDesc)
_, err := buf.WriteTo(w)
return err
}
func genZshComp(buf io.StringWriter, name string, includeDesc bool) {
compCmd := ShellCompRequestCmd
if !includeDesc {
compCmd = ShellCompNoDescRequestCmd
}
WriteStringAndCheck(buf, fmt.Sprintf(`#compdef %[1]s
compdef _%[1]s %[1]s
# zsh completion for %-36[1]s -*- shell-script -*-
__%[1]s_debug()
{
local file="$BASH_COMP_DEBUG_FILE"
if [[ -n ${file} ]]; then
echo "$*" >> "${file}"
fi
}
_%[1]s()
{
local shellCompDirectiveError=%[3]d
local shellCompDirectiveNoSpace=%[4]d
local shellCompDirectiveNoFileComp=%[5]d
local shellCompDirectiveFilterFileExt=%[6]d
local shellCompDirectiveFilterDirs=%[7]d
local shellCompDirectiveKeepOrder=%[8]d
local lastParam lastChar flagPrefix requestComp out directive comp lastComp noSpace keepOrder
local -a completions
__%[1]s_debug "\n========= starting completion logic =========="
__%[1]s_debug "CURRENT: ${CURRENT}, words[*]: ${words[*]}"
# The user could have moved the cursor backwards on the command-line.
# We need to trigger completion from the $CURRENT location, so we need
# to truncate the command-line ($words) up to the $CURRENT location.
# (We cannot use $CURSOR as its value does not work when a command is an alias.)
words=("${=words[1,CURRENT]}")
__%[1]s_debug "Truncated words[*]: ${words[*]},"
lastParam=${words[-1]}
lastChar=${lastParam[-1]}
__%[1]s_debug "lastParam: ${lastParam}, lastChar: ${lastChar}"
# For zsh, when completing a flag with an = (e.g., %[1]s -n=<TAB>)
# completions must be prefixed with the flag
setopt local_options BASH_REMATCH
if [[ "${lastParam}" =~ '-.*=' ]]; then
# We are dealing with a flag with an =
flagPrefix="-P ${BASH_REMATCH}"
fi
# Prepare the command to obtain completions
requestComp="${words[1]} %[2]s ${words[2,-1]}"
if [ "${lastChar}" = "" ]; then
# If the last parameter is complete (there is a space following it)
# We add an extra empty parameter so we can indicate this to the go completion code.
__%[1]s_debug "Adding extra empty parameter"
requestComp="${requestComp} \"\""
fi
__%[1]s_debug "About to call: eval ${requestComp}"
# Use eval to handle any environment variables and such
out=$(eval ${requestComp} 2>/dev/null)
__%[1]s_debug "completion output: ${out}"
# Extract the directive integer following a : from the last line
local lastLine
while IFS='\n' read -r line; do
lastLine=${line}
done < <(printf "%%s\n" "${out[@]}")
__%[1]s_debug "last line: ${lastLine}"
if [ "${lastLine[1]}" = : ]; then
directive=${lastLine[2,-1]}
# Remove the directive including the : and the newline
local suffix
(( suffix=${#lastLine}+2))
out=${out[1,-$suffix]}
else
# There is no directive specified. Leave $out as is.
__%[1]s_debug "No directive found. Setting to default"
directive=0
fi
__%[1]s_debug "directive: ${directive}"
__%[1]s_debug "completions: ${out}"
__%[1]s_debug "flagPrefix: ${flagPrefix}"
if [ $((directive & shellCompDirectiveError)) -ne 0 ]; then
__%[1]s_debug "Completion received error. Ignoring completions."
return
fi
local activeHelpMarker="%[9]s"
local endIndex=${#activeHelpMarker}
local startIndex=$((${#activeHelpMarker}+1))
local hasActiveHelp=0
while IFS='\n' read -r comp; do
# Check if this is an activeHelp statement (i.e., prefixed with $activeHelpMarker)
if [ "${comp[1,$endIndex]}" = "$activeHelpMarker" ];then
__%[1]s_debug "ActiveHelp found: $comp"
comp="${comp[$startIndex,-1]}"
if [ -n "$comp" ]; then
compadd -x "${comp}"
__%[1]s_debug "ActiveHelp will need delimiter"
hasActiveHelp=1
fi
continue
fi
if [ -n "$comp" ]; then
# If requested, completions are returned with a description.
# The description is preceded by a TAB character.
# For zsh's _describe, we need to use a : instead of a TAB.
# We first need to escape any : as part of the completion itself.
comp=${comp//:/\\:}
local tab="$(printf '\t')"
comp=${comp//$tab/:}
__%[1]s_debug "Adding completion: ${comp}"
completions+=${comp}
lastComp=$comp
fi
done < <(printf "%%s\n" "${out[@]}")
# Add a delimiter after the activeHelp statements, but only if:
# - there are completions following the activeHelp statements, or
# - file completion will be performed (so there will be choices after the activeHelp)
if [ $hasActiveHelp -eq 1 ]; then
if [ ${#completions} -ne 0 ] || [ $((directive & shellCompDirectiveNoFileComp)) -eq 0 ]; then
__%[1]s_debug "Adding activeHelp delimiter"
compadd -x "--"
hasActiveHelp=0
fi
fi
if [ $((directive & shellCompDirectiveNoSpace)) -ne 0 ]; then
__%[1]s_debug "Activating nospace."
noSpace="-S ''"
fi
if [ $((directive & shellCompDirectiveKeepOrder)) -ne 0 ]; then
__%[1]s_debug "Activating keep order."
keepOrder="-V"
fi
if [ $((directive & shellCompDirectiveFilterFileExt)) -ne 0 ]; then
# File extension filtering
local filteringCmd
filteringCmd='_files'
for filter in ${completions[@]}; do
if [ ${filter[1]} != '*' ]; then
# zsh requires a glob pattern to do file filtering
filter="\*.$filter"
fi
filteringCmd+=" -g $filter"
done
filteringCmd+=" ${flagPrefix}"
__%[1]s_debug "File filtering command: $filteringCmd"
_arguments '*:filename:'"$filteringCmd"
elif [ $((directive & shellCompDirectiveFilterDirs)) -ne 0 ]; then
# File completion for directories only
local subdir
subdir="${completions[1]}"
if [ -n "$subdir" ]; then
__%[1]s_debug "Listing directories in $subdir"
pushd "${subdir}" >/dev/null 2>&1
else
__%[1]s_debug "Listing directories in ."
fi
local result
_arguments '*:dirname:_files -/'" ${flagPrefix}"
result=$?
if [ -n "$subdir" ]; then
popd >/dev/null 2>&1
fi
return $result
else
__%[1]s_debug "Calling _describe"
if eval _describe $keepOrder "completions" completions $flagPrefix $noSpace; then
__%[1]s_debug "_describe found some completions"
# Return the success of having called _describe
return 0
else
__%[1]s_debug "_describe did not find completions."
__%[1]s_debug "Checking if we should do file completion."
if [ $((directive & shellCompDirectiveNoFileComp)) -ne 0 ]; then
__%[1]s_debug "deactivating file completion"
# We must return an error code here to let zsh know that there were no
# completions found by _describe; this is what will trigger other
# matching algorithms to attempt to find completions.
# For example zsh can match letters in the middle of words.
return 1
else
# Perform file completion
__%[1]s_debug "Activating file completion"
# We must return the result of this command, so it must be the
# last command, or else we must store its result to return it.
_arguments '*:filename:_files'" ${flagPrefix}"
fi
fi
fi
}
# don't run the completion function when being source-ed or eval-ed
if [ "$funcstack[1]" = "_%[1]s" ]; then
_%[1]s
fi
`, name, compCmd,
ShellCompDirectiveError, ShellCompDirectiveNoSpace, ShellCompDirectiveNoFileComp,
ShellCompDirectiveFilterFileExt, ShellCompDirectiveFilterDirs, ShellCompDirectiveKeepOrder,
activeHelpMarker))
}
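For zsh's `_describe`, the loop in the script above must turn each `name<TAB>description` completion into `name:description`, first escaping any literal `:` in the completion itself. The same transformation, sketched in stdlib Go (`toDescribeForm` is an illustrative name):

```go
package main

import (
	"fmt"
	"strings"
)

// toDescribeForm converts a "name<TAB>description" completion into the
// "name:description" form zsh's _describe expects, escaping colons that are
// part of the completion itself, as the script's parameter expansions do.
func toDescribeForm(comp string) string {
	comp = strings.ReplaceAll(comp, ":", `\:`)
	return strings.ReplaceAll(comp, "\t", ":")
}

func main() {
	fmt.Println(toDescribeForm("host:port\tserver address"))
}
```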

vendor/github.com/spf13/pflag/.editorconfig generated vendored Normal file
@@ -0,0 +1,12 @@
root = true
[*]
charset = utf-8
end_of_line = lf
indent_size = 4
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
[*.go]
indent_style = tab

vendor/github.com/spf13/pflag/.gitignore generated vendored Normal file
@@ -0,0 +1,2 @@
.idea/*

vendor/github.com/spf13/pflag/.golangci.yaml generated vendored Normal file
@@ -0,0 +1,4 @@
linters:
disable-all: true
enable:
- nolintlint

Some files were not shown because too many files have changed in this diff.