dataset.Loader[T] is now func() (*T, error) — a closure capturing its own
paths/config, so multi-file cases (LoadFiles(paths...)) work naturally.
Dataset.Close func(*T) is called with the old value after each swap, enabling
resource cleanup (e.g. geoip2.Reader.Close).
Sources.Datasets() builds a dataset.Group + three typed *Dataset[ipcohort.Cohort].
main.go now uses blGroup.Run / cityDS.Run / asnDS.Run instead of hand-rolled
atomic.Pointer + polling loops. containsInbound/containsOutbound accept *Dataset[Cohort].
nopSyncer handles file-only GeoIP paths (no download, just open).
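A minimal sketch of the resulting shape (field layout and the reload helper
are assumptions; only Loader, Close, and the atomic pointer are stated above):

    type Loader[T any] func() (*T, error)

    type Dataset[T any] struct {
        Syncer httpcache.Syncer // Fetch() (bool, error)
        Loader Loader[T]        // closure capturing its own paths/config
        Close  func(*T)         // called with the old value after each swap
        ptr    atomic.Pointer[T]
    }

    // reload runs the Loader and swaps the result in, closing the
    // replaced value if a Close hook is set.
    func (d *Dataset[T]) reload() error {
        v, err := d.Loader()
        if err != nil {
            return err
        }
        if old := d.ptr.Swap(v); old != nil && d.Close != nil {
            d.Close(old)
        }
        return nil
    }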
gitshallow.Repo.GCInterval int:
- 0 (default): git auto gc (no explicit call)
- N: aggressive gc + prune every Nth successful pull
GC() simplified to always aggressive+prune (the only mode we use).
Sync(), Init(), Fetch() all parameter-free; GCInterval baked into Repo.
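Illustrative configuration (the Dir field and Sync's return shape are
assumptions):

    repo := &gitshallow.Repo{
        Dir:        "/var/lib/app/repo",
        GCInterval: 10, // every 10th successful pull: aggressive gc + prune
    }
    if err := repo.Sync(); err != nil { // parameter-free; gc policy comes from the struct
        log.Fatal(err)
    }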
Dataset[T]: one Syncer + one Loader + one atomic.Pointer. Init/Sync/Run.
Group: one Syncer driving N datasets — single Fetch, all reloads fire
together. Add[T](g, loader, path) registers a typed dataset in the group.
Discovered organically: the reload+atomic-swap pattern repeated across
every cmd is exactly this abstraction.
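Sketch of group wiring (NewGroup, Run, and the loader signatures are
assumptions; Add's parameter order is as stated above):

    g := dataset.NewGroup(repoSyncer)
    cityDS := dataset.Add(g, loadCity, "GeoLite2-City.mmdb")
    asnDS := dataset.Add(g, loadASN, "GeoLite2-ASN.mmdb")
    go g.Run(ctx, time.Minute) // one Fetch per tick; every dataset in g reloads together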
httpcache.Syncer interface: Fetch() (bool, error) — satisfied by both
*httpcache.Cacher and *gitshallow.Repo (new Fetch method + LightGC field).
httpcache.Cacher.Fetch now errors on zero-length 200 response instead of
clobbering the existing file with empty content.
Sources.Fetch/Init drop the lightGC param (baked into Repo.LightGC).
Sources.syncs []httpcache.Syncer replaces the separate git/httpInbound/
httpOutbound fields — Fetch iterates syncs uniformly, no more switch.
Sources itself satisfies httpcache.Syncer.
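The interface is one method; the fan-out below is a sketch of how Sources
might satisfy it (loop details are assumptions):

    type Syncer interface {
        Fetch() (bool, error) // reports whether new content arrived
    }

    func (s *Sources) Fetch() (bool, error) {
        var changed bool
        for _, sy := range s.syncs {
            ok, err := sy.Fetch()
            if err != nil {
                return changed, err
            }
            changed = changed || ok
        }
        return changed, nil
    }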
geoip.ParseConf() extracted from geoip-update into the geoip package so
both cmds can read GeoIP.conf without duplication.
check-ip gains -geoip-conf flag: reads AccountID+LicenseKey, resolves
mmdb paths into data-dir, builds httpcache.Cachers with geoip.NewCacher.
Background runLoop now refreshes both blocklists and GeoIP DBs on each
tick, hot-swapping geoip2.Reader via atomic.Pointer.Swap + old.Close().
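Sketch of the swap, using github.com/oschwald/geoip2-golang's Open/Close
(the helper itself is made up):

    var cityDB atomic.Pointer[geoip2.Reader]

    func swapCityDB(path string) error {
        r, err := geoip2.Open(path)
        if err != nil {
            return err
        }
        if old := cityDB.Swap(r); old != nil {
            old.Close() // release the replaced reader's file handle
        }
        return nil
    }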
httpcache.Cacher gains:
- Username/Password: Basic Auth, stripped before following redirects
- MaxAge: skip HTTP if local file mtime is within this duration
- MinInterval: skip HTTP if last Fetch attempt was within this duration
- Transform: post-process response body (e.g. extract .mmdb from tar.gz)
geoip.Downloader now builds an httpcache.Cacher via NewCacher(), removing
its own HTTP client. ExtractMMDB is now exported for use as a Transform.
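An illustrative Cacher using the new fields (the URL/Path field names are
assumptions; Transform as geoip.ExtractMMDB is grounded by the note above):

    c := &httpcache.Cacher{
        URL:         dbURL,
        Path:        filepath.Join(dataDir, "GeoLite2-City.mmdb"),
        Username:    accountID, // Basic Auth, dropped on redirect
        Password:    licenseKey,
        MaxAge:      24 * time.Hour,    // fresh local file: skip HTTP entirely
        MinInterval: 5 * time.Minute,   // rate-limit repeated Fetch attempts
        Transform:   geoip.ExtractMMDB, // unpack the .mmdb from the tar.gz body
    }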
check-ip-blacklist renamed to check-ip; adds -city-db / -asn-db flags
for GeoLite2 lookup (country, city, subdivision, ASN) printed after each
blocklist result.
Sources (blacklist.go) now owns only fetch/load logic — no atomic state.
main.go holds the three atomic.Pointer[Cohort] vars, calls reload() on
startup, and runs the background ticker directly. This makes the dataset
pattern (fetch → load → atomic.Store → poll) visible at the call site.
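The pattern at the call site, one pointer shown (Sources method names and
signatures are assumptions):

    var inbound atomic.Pointer[ipcohort.Cohort]

    func reload(src *Sources) error {
        c, err := src.LoadInbound()
        if err != nil {
            return err
        }
        inbound.Store(c)
        return nil
    }

    // in main: fetch → load → atomic.Store → poll
    for range ticker.C {
        if err := src.Fetch(); err != nil {
            log.Println("fetch:", err)
            continue
        }
        if err := reload(src); err != nil {
            log.Println("reload:", err)
        }
    }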
Top-layer callers (IPFilter) now drive all reloads directly after
Sync/Fetch return. gitshallow.Init now returns (bool, error).
httpcache drops Init and Sync — callers just call Fetch.
Blacklist → IPFilter with three separate atomic cohorts: whitelist
(never blocked), inbound, and outbound. ContainsInbound/ContainsOutbound
check the whitelist first, so whitelisted IPs are never reported as blocked.
HTTP sync fetches all cachers before a single
reload to avoid double-load. Also fixes httpcache.Init calling c.Fetch().
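Sketch of the whitelist-first check (field names and the netip types are
assumptions):

    func (f *IPFilter) ContainsInbound(ip netip.Addr) bool {
        if wl := f.whitelist.Load(); wl != nil && wl.Contains(ip) {
            return false // whitelisted IPs are never blocked
        }
        in := f.inbound.Load()
        return in != nil && in.Contains(ip)
    }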
fs/dataset deleted — generic File[T] wrapper didn't earn its abstraction layer
gitshallow.ShallowRepo → Repo (redundant with package name)
gitshallow.Repo.Register(func() error) — callbacks fire after each sync
gitshallow.Repo.Init/Run — full lifecycle in one package
caller (check-ip-blacklist) holds atomic.Pointer[Cohort] directly
gitshallow: fix double-fetch (pull already fetches), drop redundant -C flags
gitdataset: split into GitDataset[T] (file+atomic) and GitRepo (git+multi-dataset)
- NewDataset for file-only use, AddDataset to register with a GitRepo
- one clone/fetch per repo regardless of how many datasets it has
ipcohort: split Cohort into hosts (sorted /32, binary search) + nets (CIDRs, linear)
- fixes false negatives when broad CIDRs (e.g. /8) precede specific entries
- fixes Parse() sort-before-copy order bug
- ReadAll always sorts; unsorted param removed (was dead code)
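Sketch of the resulting two-tier lookup (method and field names assumed;
uses net/netip and slices):

    func (c *Cohort) Contains(ip netip.Addr) bool {
        // hosts: sorted /32 entries, binary search
        if _, ok := slices.BinarySearchFunc(c.hosts, ip, netip.Addr.Compare); ok {
            return true
        }
        // nets: CIDRs scanned linearly, so a broad /8 earlier in the
        // input can no longer shadow a more specific host entry
        for _, n := range c.nets {
            if n.Contains(ip) {
                return true
            }
        }
        return false
    }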
Add Schema string field to Migrator. When set, Applied() constructs a
schema-qualified table name via pgx.Identifier.Sanitize() rather than
the bare "_migrations". New() signature is unchanged.
Usage:

    runner := pgmigrate.New(conn)
    runner.Schema = "authz"
Update idFromInsert regex to match schema-qualified table references
such as INSERT INTO authz._migrations, in addition to the existing
unqualified INSERT INTO _migrations form.
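An illustrative pattern for that match (the real regex and its captures may
differ):

    re := regexp.MustCompile(`(?i)INSERT\s+INTO\s+(?:\w+\.)?_migrations\b`)
    re.MatchString(`INSERT INTO _migrations ...`)       // true
    re.MatchString(`INSERT INTO authz._migrations ...`) // true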
Principal identity is the subject (who), not the credential instance
(which token). The hashID suffix was an internal cache fingerprint that
leaked into the public ID. Callers that need to distinguish individual
token instances must use a separate mechanism.
TSV serialization in ToRecord() still writes Name~hashID when hashID is
set so the credential file round-trips correctly.
Verify X-Hub-Signature-256 (and SHA-1) webhook signatures. Middleware
buffers and re-exposes the body for downstream handlers. Errors honor the
Accept header: TSV by default (text/plain for browsers), or JSON, CSV, or
Markdown. Each error carries three fields (error, description, hint), with
pseudocode in the hint field.
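The core check is standard HMAC (a sketch; header parsing and body
buffering omitted, names made up):

    func validSignature(secret, body []byte, sigHeader string) bool {
        mac := hmac.New(sha256.New, secret)
        mac.Write(body)
        want := "sha256=" + hex.EncodeToString(mac.Sum(nil))
        return hmac.Equal([]byte(want), []byte(sigHeader)) // constant-time compare
    }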
Across all four backends:
- TestAppliedOrdering: insert rows out of order, verify Applied()
returns them sorted by name. Guards against the ORDER BY clause
being dropped or the query returning rows in arbitrary order.
- TestEndToEndCycle: Collect → Up → Applied → Down → Applied via
the sqlmigrate orchestrator with real migration files. Catches
wiring bugs between Migrator and orchestrator that the in-package
mockMigrator tests cannot.
- TestDMLRollback: multi-statement DML migration where the last
statement fails, verifies earlier INSERTs are rolled back. MySQL
note: DML-only because MySQL implicitly commits DDL.
Dialect-specific:
- mymigrate TestMultiStatementsRequired: strip multiStatements=true
from the DSN, verify ExecUp fails with a clear error mentioning
multiStatements (rather than silently running only the first
statement of a multi-statement migration).
- litemigrate TestForeignKeyEnforcement: verifies FK constraints
are enforced when the DSN includes _pragma=foreign_keys(1).
Test fixture fix: cleanup closures now use context.Background()
instead of the test context. t.Context() is canceled before
t.Cleanup runs, so DB cleanup silently failed. Previously the
_migrations cleanup appeared to work because the next test's
connect() re-ran DROP TABLE at setup, but domain tables (test_*)
leaked across runs. New tests also pre-clean at setup for
self-healing after interrupted runs.
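The fix in miniature (db and the table are stand-ins):

    t.Cleanup(func() {
        // t.Context() is already canceled here; use a fresh context.
        ctx := context.Background()
        _, _ = db.ExecContext(ctx, `DROP TABLE IF EXISTS _migrations`)
    })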
Hosted MariaDB users (e.g. todo_test_*) typically have access to a single
database/schema and cannot CREATE/DROP DATABASE. Drop the per-test database
isolation pattern in favour of dropping _migrations directly on entry and
exit, matching msmigrate's approach. Tests must not run concurrently
against the same DSN.
Apply the same lazy-error pattern fix to all backends, plus regression
tests that catch the bug.
pgmigrate is the confirmed-broken case (pgx/v5's Conn.Query is lazy and
surfaces 42P01 at rows.Err() once the prepared statement cache is primed).
The defensive check at rows.Err() is also added to mymigrate and msmigrate
in case their drivers exhibit similar behavior in some configurations.
litemigrate is refactored to probe sqlite_master with errors.Is(sql.ErrNoRows)
instead of string-matching the error message — SQLite returns the generic
SQLITE_ERROR code for "no such table" so a typed-error approach isn't
possible at the driver layer; the probe lets us use idiomatic errors.Is.
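Sketch of the probe (query shape assumed):

    var name string
    err := db.QueryRowContext(ctx,
        `SELECT name FROM sqlite_master WHERE type = 'table' AND name = '_migrations'`,
    ).Scan(&name)
    if errors.Is(err, sql.ErrNoRows) {
        return nil, nil // no _migrations table yet: nothing applied
    }
    if err != nil {
        return nil, err
    }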
Tests:
- litemigrate: in-memory SQLite, runs on every go test (no infra)
- pgmigrate: PG_TEST_URL env-gated; verified against real Postgres,
TestAppliedAfterDropTable reproduces the agent's exact error
message ("reading rows: ... 42P01") without the fix
- mymigrate: MYSQL_TEST_DSN env-gated
- msmigrate: MSSQL_TEST_URL env-gated; verified against real SQL Server
Each backend has four cases: missing table, populated table, empty table,
and table-dropped-after-cache-primed (the lazy-error scenario).
pgx/v5's Conn.Query is lazy — when the queried table doesn't exist,
the 42P01 error doesn't surface at Query() time, it surfaces at
rows.Err() after the iteration loop. The original code only checked
for 42P01 at the Query() site, so first-run migrations against an
empty database failed with:
reading rows: ERROR: relation "_migrations" does not exist (SQLSTATE 42P01)
Apply the typed-error check at both sites via a shared helper.
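Sketch of the helper and both call sites (the helper name is illustrative;
uses pgx/v5's pgconn.PgError):

    func isUndefinedTable(err error) bool {
        var pgErr *pgconn.PgError
        return errors.As(err, &pgErr) && pgErr.Code == "42P01"
    }

    rows, err := conn.Query(ctx, query)
    if isUndefinedTable(err) {
        return nil, nil // first run: table not created yet
    }
    // ... iterate rows ...
    if err := rows.Err(); isUndefinedTable(err) {
        return nil, nil // the lazy path: error only surfaces here
    } else if err != nil {
        return nil, fmt.Errorf("reading rows: %w", err)
    }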
Index skill (use-sqlmigrate) plus focused skills for CLI usage, Go
library integration, and per-database conventions (PostgreSQL,
MySQL/MariaDB, SQLite, SQL Server).
Replace directives pointing to local paths prevent `go install` from
working. Use published module versions (sqlmigrate v1.0.2, shmigrate
v1.0.2) so users can install with:
go install github.com/therootcompany/golib/cmd/sql-migrate/v2@latest
- Use full flag names in command defaults (--no-align, --tuples-only, etc.)
- Add sqlite/sqlite3/lite alias for SQLite
- Add sqlcmd/mssql/sqlserver alias for SQL Server (go-sqlcmd, TDS 8.0)
- Add SQL Server help text with SQLCMD env vars and TLS docs
- Add sqlite and sqlcmd SELECT expressions for id+name migration log
Same issue as pgmigrate: *sql.DB is a connection pool, so each call
may land on a different connection. Migrations need a pinned connection
for session state (SET search_path, temp tables, etc.) to persist
across sequential calls. *sql.Conn (from db.Conn(ctx)) pins one
underlying connection for its lifetime.
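The standard-library shape (stock database/sql; the SET statement is just
an example of session state):

    conn, err := db.Conn(ctx) // *sql.Conn: one underlying connection for its lifetime
    if err != nil {
        return err
    }
    defer conn.Close()
    // session state set here persists across every later call on conn
    if _, err := conn.ExecContext(ctx, `SET search_path TO authz`); err != nil {
        return err
    }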
Migrations run sequentially on a single connection — a pool adds
unnecessary complexity and forces callers to create one. This also
drops the puddle/v2 and x/sync transitive dependencies.
Migration{ID, Name} is the identity type returned by Up/Down/Latest/Drop.
Script{Migration, Up, Down} holds collected SQL content from Collect().
Migrator interface now takes SQL as a separate parameter:
ExecUp(ctx, Migration, sql string)
ExecDown(ctx, Migration, sql string)
This separates identity from content — callers that track what ran
don't need to carry around SQL strings they'll never use.
Updates shmigrate to match (ignores the sql parameter, references
files on disk instead).
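The shapes as described (ID's type and the error returns are assumptions):

    type Migration struct {
        ID   int64
        Name string
    }

    type Script struct {
        Migration
        Up   string // collected SQL content from Collect()
        Down string
    }

    type Migrator interface {
        ExecUp(ctx context.Context, m Migration, sql string) error
        ExecDown(ctx context.Context, m Migration, sql string) error
        // ... Applied, etc.
    }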