Cache tuning
Cache is LRU by default; LFU (frequency-weighted) is an option. What to tune:

| Setting | Effect | How to choose |
|---|---|---|
| `cache.size_gb` | How much disk to dedicate | Larger = higher hit rate up to diminishing returns; typical break-even is 200–1000 GB |
| `cache.eviction` | `lru` or `lfu` | LRU for bursty traffic; LFU for long-tail |
| `cache.min_age_seconds` | Soft lower bound on eviction | Prevents thrashing on a cold cache |
| `cache.probe_hold_duration` | Probe-triggered eviction hold | Do not reduce below the default — shrinking it risks phantom-announcement slashes |
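As a sketch only, assuming a TOML-style config file (the file layout and section name are assumptions; the keys come from the table above), the cache settings might look like:

```toml
# Hypothetical config fragment; only the key names come from the table above.
[cache]
size_gb = 500          # break-even typically 200–1000 GB
eviction = "lfu"       # long-tail workload; use "lru" for bursty traffic
min_age_seconds = 300  # soft lower bound on eviction
# probe_hold_duration is left at its default; do not reduce it
```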
Probe-triggered eviction hold
When you sign `has_blob: true` in a `ProbeResponse`, the blob is held in cache for `probe_hold_duration`. This prevents the failure pattern "probe responded yes, client opened channel, cache pressure evicted the blob, stream request fails, node slashed for phantom announcement."
Monitor `decdn_probe_hold_slots_used` / `decdn_probe_hold_slots_max`. If saturation exceeds ~80%, increase cache size — you have more in-flight promises than your cache can safely hold.
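A minimal sketch of that saturation check, assuming you already scrape the two gauges named above into a dict (the scraping mechanism and the `needs_bigger_cache` helper are assumptions, not part of the node):

```python
# Sketch: alert when probe-hold slot saturation exceeds ~80%.
# Assumes the two gauges named in the text are available in a dict;
# how they are scraped is not specified by this document.

def hold_slot_saturation(metrics: dict[str, float]) -> float:
    """Return used/max saturation of probe-hold slots (0.0 when max is 0)."""
    used = metrics["decdn_probe_hold_slots_used"]
    cap = metrics["decdn_probe_hold_slots_max"]
    return used / cap if cap else 0.0

def needs_bigger_cache(metrics: dict[str, float], threshold: float = 0.80) -> bool:
    # More in-flight probe promises than the cache can safely hold.
    return hold_slot_saturation(metrics) > threshold
```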
Region selection
Your region determines:

- Which regional gossip topic you subscribe to (`cdn/region/{cc}/v1`).
- Which regional takedown entries bind you (takedown compliance).
- Which clients prefer you in the selection score (clients in the same region see lower RTT, making your score better).
Pull-through behavior
On cache miss, a node with `pull_through: true`:

- Issues `cdn/dht/v1` `FIND_VALUE(hash)`.
- Probes the returned candidates in parallel via `cdn/probe/v1`.
- Selects the best candidate by the unified selection score.
- Pulls via `cdn/client/v1`, paying the upstream node per MB.
- Streams bytes to the client while pulling, with BLAKE3 verification on every chunk.
- Caches locally for subsequent requests.

With `pull_through: false`, the node returns a redirect to another `NodeId`, and the client opens a direct channel with that node.
Pull-through generally earns more — you collect the per-MB revenue from the client and pay upstream a smaller per-MB cost. Redirect is useful when you explicitly want to limit your egress or when you are origin-backed and want pure one-hop service.
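The miss-handling flow above can be sketched as follows. The DHT, probe, pull, cache, and serve operations are stubbed as callables, and `score` stands in for the unified selection score (assumed higher-is-better here); none of these signatures are from the spec:

```python
# Sketch of the pull-through miss path; only the step ordering comes from
# the document. Transports are injected as plain callables for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    node_id: str
    score: float  # unified selection score (assumption: higher is better)

def handle_miss(blob_hash: str, find_value, probe, pull, cache, serve) -> str:
    candidates = find_value(blob_hash)             # cdn/dht/v1 FIND_VALUE(hash)
    probed = [probe(c) for c in candidates]        # cdn/probe/v1 (a real node probes in parallel)
    best = max(probed, key=lambda c: c.score)      # unified selection score
    for chunk in pull(best.node_id, blob_hash):    # cdn/client/v1, paid per MB upstream
        serve(chunk)                               # stream to client while pulling
        # (a real node verifies each chunk's BLAKE3 hash before serving it)
    cache(blob_hash)                               # keep for subsequent requests
    return best.node_id
```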
Rate setting
Your rate is advertised in `NodeAnnounce` and `ProbeResponse`. Update it via a `RateChange` gossip message.
- Must be within the governance-set `[MIN_RATE, MAX_RATE]` for the token.
- Rate changes are immediately effective for new streams; existing streams continue at the agreed rate.
- Rate changes carry a `slash_sig`, so the change can serve as counter-evidence in a rate-manipulation dispute.
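A minimal sketch of validating a rate change before gossiping it. The bound values, the `RateChange` shape, and the `sign` callable are assumptions; only the field names `slash_sig` and the `[MIN_RATE, MAX_RATE]` rule come from the text:

```python
# Sketch: bounds-check a rate change before broadcasting it.
from dataclasses import dataclass

MIN_RATE = 1      # hypothetical governance bounds, per-MB token units
MAX_RATE = 1000

@dataclass
class RateChange:
    new_rate: int
    slash_sig: bytes  # lets the message serve as counter-evidence in disputes

def make_rate_change(new_rate: int, sign) -> RateChange:
    if not MIN_RATE <= new_rate <= MAX_RATE:
        raise ValueError(f"rate {new_rate} outside governance bounds "
                         f"[{MIN_RATE}, {MAX_RATE}]")
    # New streams use new_rate immediately; existing streams keep their agreed rate.
    return RateChange(new_rate=new_rate, slash_sig=sign(new_rate))
```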
Concurrency limits
Two limits interact:

- `max_concurrent_streams` — total active `cdn/client/v1` streams.
- `max_concurrent_channel_opens` — active on-chain channel opens in the last block window.
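How the two limits might interact at admission time, as a sketch: a new stream is admitted only if both have headroom. The counter sources, default values, and the `needs_new_channel` flag are assumptions; only the two setting names come from the text:

```python
# Sketch of admission control combining the two limits above.

def can_accept_stream(active_streams: int, recent_channel_opens: int,
                      max_concurrent_streams: int = 256,
                      max_concurrent_channel_opens: int = 32,
                      needs_new_channel: bool = True) -> bool:
    """Admit a new cdn/client/v1 stream only if both limits have headroom."""
    if active_streams >= max_concurrent_streams:
        return False  # stream limit reached
    if needs_new_channel and recent_channel_opens >= max_concurrent_channel_opens:
        return False  # on-chain channel-open budget for this block window exhausted
    return True
```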
Prefetching
Enable `prefetch` to proactively pull popular content:

- Local demand — `prefetch.local_miss_threshold = 3 within 5m`: start prefetching when you see the same miss N times within the window.
- Network popularity — `prefetch.popular_hash_peers = 3 within 10m`: start prefetching when N peers list the hash in their `popular_hashes`.
Backups and crash recovery
Nothing in the cache needs backing up — the cache is rebuildable by its nature. What does need protecting:

- iroh keypair — losing this requires `reclaimNodeId`, which proves ownership of the bound Ethereum address.
- Ethereum keypair — losing this loses access to stake withdrawal.
- State directory — contains in-flight voucher state. On crash, the node resumes from the last flushed voucher state (64 MiB flush cadence for client-side state; node-side is similar).
Graceful shutdown
- Stop accepting new streams.
- Drain active streams.
- Temporarily deregister watchtower monitoring, or notify watchtowers of the expected downtime.
- Flush voucher state.
- Publish a final `NodeAnnounce` with `departing: true`.
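The sequence above, as a sketch. Every method here is a hypothetical stand-in for the node's real subsystems; only the ordering comes from the document:

```python
# Sketch of the graceful-shutdown ordering; method names are illustrative only.
def graceful_shutdown(node) -> None:
    node.stop_accepting_streams()               # 1. no new streams
    node.drain_active_streams()                 # 2. let in-flight streams finish
    node.notify_watchtowers_of_downtime()       # 3. or temporarily deregister monitoring
    node.flush_voucher_state()                  # 4. persist in-flight voucher state
    node.publish_node_announce(departing=True)  # 5. final NodeAnnounce
```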