Validate & Compile
Validator rules and compile step for producing per-node artifacts in C0
Validator Rules
`flor validate` is the only safety net against hand-edit errors, so it must be strict.
- Schema — every file matches its JSON Schema.
- Cross-references — every node/service/group/role name referenced exists and is unique across the whole cluster.
- Principal registry — every principal name (users, services, nodes) is unique across all principal kinds; no name collision across YAML files.
- Enrollment log consistency — every principal in `users.yaml` and `services.yaml`, and every entry in `nodes.yaml`, has a matching sign-event in `enrollment.log` that hasn't been superseded by a revoke-event.
- Service placement — every service's `at` field names a known node in `nodes.yaml`.
- Management-node integrity — services named `config-server` and `config-publisher` must exist in groups `control-plane` and `control-plane-write` respectively, both hosted on the same node (the management node) with a publicly reachable `addresses` entry. If either is missing or malformed, the validator fails with a clear message rather than silently producing artifacts that can't sync.
- Reserved-name protection — `node` and `operator` (roles) and `control-plane` and `control-plane-write` (groups) must be present in the YAML with their canonical `allow`/member definitions. The validator fails if they're missing or redefined with a different structure. The compiler never writes YAML — these live in the repo as part of the initial cluster template.
- Operator presence — at least one user in `users.yaml` must have `role: operator`. Without it, no one can call `config-publisher`, so the cluster can't receive new state after the initial bootstrap — worth catching at validate time.
- Reachability feasibility — every node hosting a workload service (other than `control-plane` services) must have at least one Internet-reachable `addresses` entry. User-only nodes (pure initiators) may omit addresses; referencing them as a service host is an error.
- Access consistency — every principal's role permits only groups defined in `groups.yaml`; every service's `group` is defined; every service with an egress `role` has that role defined.
- Principal role coherence — services that act as clients (outbound) must have a `role` declared; services without a `role` may only appear as targets.
Cert chain verification happens at runtime during mTLS, not at validate time — the repo doesn't hold individual certs.
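Two of the rules above — principal-registry uniqueness and service placement — can be sketched as a pure function over an in-memory view of the repo YAML. This is illustrative only: the field names `name` and `at` come from the document, while the function shape and error messages are assumptions.

```python
# Sketch of two validator rules over a simplified in-memory view of the repo.
# 'name' and 'at' mirror the YAML fields in the document; everything else is
# hypothetical structure, not florctl's actual API.

def validate(users, services, nodes):
    errors = []

    # Principal registry: every principal name unique across all kinds.
    seen = {}
    for kind, principals in (("user", users), ("service", services), ("node", nodes)):
        for p in principals:
            name = p["name"]
            if name in seen:
                errors.append(f"{kind} '{name}' collides with existing {seen[name]} '{name}'")
            else:
                seen[name] = kind

    # Service placement: every service's 'at' field names a known node.
    node_names = {n["name"] for n in nodes}
    for s in services:
        if s.get("at") not in node_names:
            errors.append(f"service '{s['name']}' placed at unknown node '{s.get('at')}'")

    return errors


errors = validate(
    users=[{"name": "alice"}],
    services=[{"name": "api", "at": "alpha"}, {"name": "ghost", "at": "omega"}],
    nodes=[{"name": "alpha"}, {"name": "api"}],  # node 'api' collides with the service
)
```

Running this yields one collision error and one placement error — the kind of explicit failure the validator favors over silently compiling broken artifacts.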
Compile Step
`florctl compile` produces one artifact per node: `<node>.json`. It carries an envelope common to all Florete compiled artifacts:
```json
{
  "schema_version": "1.0",
  "layer": "cluster",
  "version": 42,
  "node": "alpha",
  "generated_at": "2026-04-20T12:00:00Z",
  "payload": { ... }
}
```

The `schema_version` tracks the compiled-artifact public contract (semver). The `version` is a monotonic per-compilation number — it lets future delta-based distribution ask "what version do you have?" without re-sending the full artifact. `layer: "cluster"` distinguishes this artifact from the additional `layer: "link"` artifact C1 will introduce.
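A hypothetical agent-side acceptance check shows how the envelope fields are meant to be used: match the schema major version, match the layer, and apply only strictly newer compilation versions. The function and constant names are assumptions for illustration.

```python
# Hypothetical agent-side check of the compiled-artifact envelope: accept only
# a compatible schema major version, the expected layer, and a strictly newer
# monotonic compilation version (so stale or replayed artifacts are ignored).

SUPPORTED_SCHEMA_MAJOR = 1  # assumed constant, tracking the "1.x" contract

def should_apply(envelope, current_version):
    major = int(envelope["schema_version"].split(".")[0])
    if major != SUPPORTED_SCHEMA_MAJOR:
        return False  # incompatible public contract
    if envelope["layer"] != "cluster":
        return False  # a C0 agent only understands the cluster layer
    return envelope["version"] > current_version  # monotonic per-compilation number

env = {"schema_version": "1.0", "layer": "cluster", "version": 42}
```

With this shape, a node holding version 41 applies the artifact; a node already at 42 does nothing — exactly the "what version do you have?" exchange the envelope is designed to enable.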
Terminology
The payload uses two pairs of terms consistently, both from flor's point of view at the network boundary:
- `ingress` / `egress` — peer-to-local and local-to-peer QUIC sessions respectively. `ingress` lists which remote principals may connect to our local principals; `egress` lists which of our local principals may connect to a given remote target, along with how to dial that target.
- `upstream_addr` / `socks5_proxy` — the two local-workload sockets. `upstream_addr` is the loopback address where a service itself listens (flor forwards decrypted ingress traffic here); `socks5_proxy` is the loopback listener flor exposes for outbound SOCKS5 requests from a local principal.
Only services that accept inbound traffic have `upstream_addr`; only principals that initiate outbound traffic have `socks5_proxy`. Most services have both; users typically have only `socks5_proxy`; pure-target services like `mongodb` (when `role` is absent) have only `upstream_addr`.
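The socket rule above can be captured in a few lines. This is an illustrative mapping, not compiler code: the `accepts_inbound` / `initiates` flags stand in for the YAML-level facts (the principal appears as an ingress target / the principal has a `role`).

```python
# Illustrative rule: which of the two local-workload sockets a principal gets.
# accepts_inbound / initiates are assumed flags standing in for YAML facts.

def sockets_for(accepts_inbound, initiates, upstream=None, proxy=None):
    entry = {}
    if accepts_inbound:
        entry["upstream_addr"] = upstream  # flor forwards decrypted ingress here
    if initiates:
        entry["socks5_proxy"] = proxy      # flor's local SOCKS5 listener
    return entry

api = sockets_for(True, True, upstream="127.0.0.1:8000", proxy="127.0.0.1:18000")
alice = sockets_for(False, True, proxy="127.0.0.1:1080")        # user: outbound only
mongodb = sockets_for(True, False, upstream="127.0.0.1:27017")  # pure target, no role
```

The three cases reproduce the document's taxonomy: both sockets for a service that serves and calls out, proxy-only for a user device, upstream-only for a role-less target.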
Example: user-node payload
A user device initiates connections only — no `listen_udp`, no `local_services`:
```json
{
  "node": "alice-laptop",
  "ca_cert_path": "~/.flor/ca.crt",
  "local_users": [
    {
      "name": "alice",
      "spiffe_id": "spiffe://rete-lovers/users/alice",
      "identity": { "cert_path": "~/.flor/alice.crt", "priv_path": "~/.flor/alice.key" },
      "socks5_proxy": "127.0.0.1:1080"
    }
  ],
  "egress": [
    {
      "target_spiffe_id": "spiffe://rete-lovers/services/api",
      "target_node": "alpha",
      "target_udp": "1.2.3.4:4433",
      "allow": ["spiffe://rete-lovers/users/alice"]
    },
    {
      "target_spiffe_id": "spiffe://rete-lovers/services/kafka",
      "target_node": "beta",
      "target_udp": "5.6.7.8:4433",
      "allow": ["spiffe://rete-lovers/users/alice"]
    }
  ]
}
```

Example: server-node payload
A server node hosts services and accepts inbound QUIC on `listen_udp`. Services with a `role` also appear as initiators (`api` calls `mongodb`, producing an egress row). `ssh` binds `0.0.0.0:22` because of the emergency-access exception — the `upstream_addr` in the artifact just mirrors whatever `services.yaml` declared.
```json
{
  "node": "alpha",
  "ca_cert_path": "~/.flor/ca.crt",
  "listen_udp": "0.0.0.0:4433",
  "local_services": [
    {
      "name": "api",
      "spiffe_id": "spiffe://rete-lovers/services/api",
      "identity": { "cert_path": "~/.flor/api.crt", "priv_path": "~/.flor/api.key" },
      "upstream_addr": "127.0.0.1:8000",
      "socks5_proxy": "127.0.0.1:18000"
    },
    {
      "name": "ssh",
      "spiffe_id": "spiffe://rete-lovers/services/alpha/ssh",
      "identity": { "cert_path": "~/.flor/ssh.crt", "priv_path": "~/.flor/ssh.key" },
      "upstream_addr": "0.0.0.0:22"
    }
  ],
  "ingress": [
    {
      "target_spiffe_id": "spiffe://rete-lovers/services/api",
      "allow": [
        "spiffe://rete-lovers/users/alice",
        "spiffe://rete-lovers/users/bob"
      ]
    },
    {
      "target_spiffe_id": "spiffe://rete-lovers/services/alpha/ssh",
      "allow": ["spiffe://rete-lovers/users/bob"]
    }
  ],
  "egress": [
    {
      "target_spiffe_id": "spiffe://rete-lovers/services/mongodb",
      "target_node": "beta",
      "target_udp": "5.6.7.8:4433",
      "allow": ["spiffe://rete-lovers/services/api"]
    }
  ]
}
```

Key points
- No forwarding table, no labels — every tunnel in C0 is a direct QUIC connection between the initiator's flor and the target service's flor. The compiler pre-resolves only (a) which remote peers may initiate to this node's services (`ingress`) and (b) where this node's principals may initiate to (`egress`).
- Egress is advisory, not security-enforcing — the authoritative check happens at the target's `ingress`. `egress` exists so the initiator's flor can dial the correct UDP address and fail fast on disallowed SOCKS5 requests. A compromised initiator could ignore its own egress table; it still cannot pass the target's ingress gate. Do not treat egress as an access-control boundary — it's a dialing table with a local convenience filter.
- Roles expand at compile time — `roles.yaml` is YAML-only; the compiler resolves roles to explicit SPIFFE ID lists in `allow`. Role changes take effect on the next `florctl compile` + push — no cert reissuance needed, no role resolution at runtime on the agent hot path.
- Deterministic — the same `florctl compile --repo X` produces identical output on any machine given identical inputs.
- Per-node filtering — `alpha.json` contains only the identity material and ACL rows relevant to alpha's local services and principals.
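The "dialing table with a local convenience filter" role of egress can be sketched as a lookup the initiator's flor would run on each SOCKS5 request. The table shape copies the alice-laptop payload above; the function itself is an assumed illustration, and the real access decision still happens at the target's ingress gate.

```python
# Sketch of the initiator-side egress lookup: find the row for the requested
# target, apply the local convenience filter, and return the UDP address to
# dial. Advisory only -- the authoritative check is the target's ingress.

egress = [
    {
        "target_spiffe_id": "spiffe://rete-lovers/services/api",
        "target_udp": "1.2.3.4:4433",
        "allow": ["spiffe://rete-lovers/users/alice"],
    },
]

def dial_address(egress, initiator, target):
    for row in egress:
        if row["target_spiffe_id"] == target:
            if initiator in row["allow"]:
                return row["target_udp"]
            return None  # fail fast locally on a disallowed SOCKS5 request
    return None          # unknown target: nothing to dial

addr = dial_address(egress, "spiffe://rete-lovers/users/alice",
                    "spiffe://rete-lovers/services/api")
```

Alice's request resolves to `1.2.3.4:4433`; any other initiator is rejected locally — but even a flor that skipped this check entirely would still be refused by the target's ingress `allow` list.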