Snapshot object model
Every snapshot contains:

| Field | Type | Description |
|---|---|---|
| `snapshotId` | UUID | Unique identifier |
| `timestamp` | ISO 8601 | Creation time |
| `parentSnapshotId` | UUID or null | Previous snapshot in chain |
| `reason` | enum | `manual`, `autosave`, `before-upgrade`, `before-restore` |
| `appVersion` | string | App version at time of snapshot |
| `stateDigest` | SHA-256 | Hash over all snapshot content |
| `volumeManifests` | map | Per-volume metadata (consistency type, format, size, digest) |
| `labels` | map | User-defined key-value labels |
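The fields above map naturally onto a small record type. A minimal Python sketch (the class name and all field values, including the app version, are illustrative, not part of the spec):

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Snapshot:
    """Illustrative in-memory shape of one snapshot record."""
    snapshot_id: str                   # UUID
    timestamp: str                     # ISO 8601 creation time
    parent_snapshot_id: Optional[str]  # previous snapshot in chain, or None
    reason: str                        # manual | autosave | before-upgrade | before-restore
    app_version: str
    state_digest: str                  # SHA-256 over all snapshot content
    volume_manifests: dict = field(default_factory=dict)
    labels: dict = field(default_factory=dict)

# A root snapshot has no parent, so the chain terminates at None.
root = Snapshot(
    snapshot_id=str(uuid.uuid4()),
    timestamp=datetime.now(timezone.utc).isoformat(),
    parent_snapshot_id=None,
    reason="manual",
    app_version="1.4.2",
    state_digest="0" * 64,
    labels={"env": "dev"},
)
```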
Save algorithm
The save flow executes 10 steps in order:

1. Resolve target project — look up the project and confirm it has a valid state path
2. Block concurrent operations — acquire an exclusive save lock; reject concurrent start/stop/restore/save for this project
3. Run pre-save hooks — execute consistency hooks per service based on each volume's `consistency` type
4. Wait for quiesce — confirm all pre-save hooks completed and data is flushed
5. Checkpoint state — create a point-in-time marker; for generic volumes, pause or snapshot at the filesystem level
6. Copy to snapshot workspace — copy `current/files/` and `current/volumes/` into `snapshots/<timestamp>/`
7. Compute metadata — hash all snapshot content, build volume manifests, generate `metadata.json`
8. Atomic index update — write the new index to a temp file, fsync, rename over the existing `index.json`
9. Release lock — allow lifecycle operations to proceed
10. Resume services — unpause any services paused for consistency

`current/` is never modified during a save. The last good snapshot remains intact. Partial snapshot directories are cleaned up.
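The atomic index update (step 8) is the crux of crash safety: a reader sees either the old index or the new one, never a torn write. A sketch of that step under the temp-write/fsync/rename pattern (the function name is hypothetical; the directory fsync is POSIX-specific and may need adjusting on Windows):

```python
import json
import os
import tempfile

def atomic_write_index(index_path: str, index: dict) -> None:
    """Replace index.json atomically: temp file, fsync, rename."""
    dir_path = os.path.dirname(index_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_path, prefix=".index-")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(index, f)
            f.flush()
            os.fsync(f.fileno())          # data durable before the rename
        os.replace(tmp_path, index_path)  # atomic rename over the old index
        dfd = os.open(dir_path, os.O_RDONLY)  # fsync the directory so the
        try:                                  # rename itself is durable
            os.fsync(dfd)
        finally:
            os.close(dfd)
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)  # never leave a stray temp file behind
        raise
```

Writing to a temp file in the *same* directory matters: `os.replace` is only atomic within a filesystem.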
Consistency per volume type
SQLite (`consistency: sqlite`)
- Check no schema migration is actively writing
- `PRAGMA wal_checkpoint(TRUNCATE)` — flush WAL into the main DB file
- fsync the DB file and its parent directory
- Copy the `.db` file (WAL is now empty) into the snapshot workspace
- Run `PRAGMA integrity_check` on the copy and record the result
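The SQLite steps can be sketched with the standard `sqlite3` module (the function name is hypothetical; the migration check and the fsync calls are elided for brevity):

```python
import hashlib
import shutil
import sqlite3

def snapshot_sqlite(db_path: str, dest_path: str) -> str:
    """Flush the WAL, copy the DB file, verify the copy, return its digest."""
    con = sqlite3.connect(db_path)
    try:
        # Flush and truncate the WAL so the main .db file is complete.
        con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
    finally:
        con.close()
    shutil.copy2(db_path, dest_path)  # WAL is now empty; the .db file is the state
    check = sqlite3.connect(dest_path)
    try:
        # Verify the copy and record the result.
        result = check.execute("PRAGMA integrity_check").fetchone()[0]
        if result != "ok":
            raise RuntimeError(f"integrity_check failed: {result}")
    finally:
        check.close()
    with open(dest_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```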
Postgres (`consistency: postgres`)
- Execute `pg_dump` in custom format against the running Postgres instance
- Store the dump in `snapshots/<timestamp>/volumes/`
- The logical dump provides a transaction-consistent view without stopping the database
- If the dump fails, the save fails — no partial snapshots
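A minimal sketch of the Postgres path via `subprocess` (helper names are hypothetical; only the standard `pg_dump` flags are used). `check=True` is what turns a failed dump into a failed save:

```python
import subprocess

def pg_dump_command(dsn: str, out_path: str) -> list:
    """Build the pg_dump invocation: custom format, written to out_path."""
    # --format=custom (-Fc): compressed archive restorable with pg_restore
    return ["pg_dump", "--format=custom", "--file", out_path, "--dbname", dsn]

def dump_postgres(dsn: str, out_path: str) -> None:
    """Take a transaction-consistent logical dump of a running instance.
    Raises CalledProcessError on failure, aborting the save."""
    subprocess.run(pg_dump_command(dsn, out_path), check=True)
```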
Generic (`consistency: generic`)
- Pause or stop write-heavy containers (or run a registered pre-save hook)
- Tar-copy the entire volume directory
- Compute content digest
- Optionally compress in background after commit
- Resume paused containers
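The tar-copy and digest steps for a generic volume, as a sketch (the function name is hypothetical; pausing and resuming containers is assumed to happen around this call):

```python
import hashlib
import os
import tarfile

def snapshot_generic(volume_dir: str, archive_path: str) -> str:
    """Tar-copy an entire volume directory and return its content digest."""
    with tarfile.open(archive_path, "w") as tar:
        tar.add(volume_dir, arcname=os.path.basename(volume_dir))
    # Stream the archive through SHA-256 rather than reading it whole.
    h = hashlib.sha256()
    with open(archive_path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()
```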
Chunked content-addressed storage
Snapshot data is stored using a content-addressed chunk store to control disk growth across many snapshots.

- Large files are split into ~1 MiB chunks, each identified by its SHA-256 digest
- Identical chunks across snapshots are stored only once (deduplication)
- The chunk store is shared across all snapshots for a project
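The split-hash-dedup scheme can be illustrated with fixed-size chunks and an in-memory store (the real store lives on disk; the names here are illustrative):

```python
import hashlib

CHUNK_SIZE = 1 << 20  # ~1 MiB chunks, fixed-size for simplicity

def store_file(path: str, chunk_store: dict) -> list:
    """Split a file into chunks keyed by SHA-256 digest. Identical chunks
    are stored once; returns the digest list a manifest would reference."""
    digests = []
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK_SIZE), b""):
            d = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(d, chunk)  # dedup: keep the first copy only
            digests.append(d)
    return digests
```

Two snapshots of an unchanged file produce identical digest lists, so the second snapshot adds no new chunks.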
`metadata.json` references chunk digests rather than storing full file copies.
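A hypothetical manifest fragment showing what such references might look like (field names follow the snapshot table above; digests are fake and truncated for readability):

```json
{
  "volumeManifests": {
    "db": {
      "consistency": "sqlite",
      "format": "file",
      "size": 3145728,
      "digest": "sha256:9f86d081a3b4",
      "chunks": [
        "sha256:2c26b46b68ff",
        "sha256:fcde2b2edba5",
        "sha256:2c26b46b68ff"
      ]
    }
  }
}
```

The repeated chunk digest is the point: that chunk exists once in the store, however many times manifests reference it.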
Garbage collection
- A chunk is reclaimable when no snapshot manifest references it
- GC runs periodically or on explicit trigger
- Mark-and-sweep: mark all referenced chunks, sweep unreferenced — safe to run concurrently with reads
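Mark-and-sweep over an in-memory chunk store, as a sketch (the function name is hypothetical; the manifest shape follows the chunk-store description above):

```python
def collect_garbage(manifests: list, chunk_store: dict) -> int:
    """Mark every chunk referenced by any snapshot manifest, sweep the
    rest. Returns the number of chunks reclaimed."""
    marked = set()
    for manifest in manifests:          # mark phase: union of all references
        marked.update(manifest.get("chunks", []))
    swept = [d for d in chunk_store if d not in marked]
    for d in swept:                     # sweep phase: drop unreferenced chunks
        del chunk_store[d]
    return len(swept)
```

Sweeping only deletes chunks no manifest references, which is why it is safe to run concurrently with reads of live snapshots.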
