# Best Practices
This guide provides recommendations for common scenarios when using Warp.
## Scenarios
### 1. High Read, Low Write (e.g., Product Catalog)
Goal: Maximize read performance, tolerate slight staleness.
- Mode: `ModeEventualDistributed`
- Cache: `InMemory` with LRU
- TTL: Long (e.g., 1 hour) with `SlidingTTL` enabled.
- Strategy: Use `Warmup` on startup to load popular products.
### 2. User Sessions (Critical Security)
Goal: Immediate invalidation on logout.
- Mode: `ModeEventualDistributed` (or `StrongDistributed` if you need absolute guarantees).
- Leases: Use Leases to group all session keys for a user. Revoke the lease on logout.
- TTL: Short (e.g., 15 mins) with `SlidingTTL`.
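The lease pattern can be modeled independently of any library: a lease is just a named group of keys that can all be dropped in one call. A toy sketch of that idea (the `LeaseRegistry` type is illustrative, not Warp's actual Lease API):

```go
package main

import "fmt"

// LeaseRegistry groups cache keys under a lease ID (e.g., a user ID)
// so they can all be invalidated together.
type LeaseRegistry struct {
	cache  map[string]string   // key -> cached value
	leases map[string][]string // lease ID -> keys it owns
}

func NewLeaseRegistry() *LeaseRegistry {
	return &LeaseRegistry{cache: map[string]string{}, leases: map[string][]string{}}
}

// SetWithLease stores a value and records the key under the lease.
func (r *LeaseRegistry) SetWithLease(lease, key, value string) {
	r.cache[key] = value
	r.leases[lease] = append(r.leases[lease], key)
}

// Revoke deletes every key owned by the lease and returns how many were removed.
func (r *LeaseRegistry) Revoke(lease string) int {
	keys := r.leases[lease]
	for _, k := range keys {
		delete(r.cache, k)
	}
	delete(r.leases, lease)
	return len(keys)
}

func main() {
	r := NewLeaseRegistry()
	r.SetWithLease("user:7", "session:7:web", "tok-a")
	r.SetWithLease("user:7", "session:7:mobile", "tok-b")
	// Logout: one Revoke drops every session key at once,
	// with no need to enumerate them at the call site.
	fmt.Println("revoked:", r.Revoke("user:7"))
}
```

The security win is that logout needs no knowledge of which keys exist; revoking the lease guarantees nothing owned by that user survives.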
### 3. Distributed Counters / Rate Limiting
Goal: Atomic updates across nodes.
- Mode: `ModeStrongDistributed`
- Merge Engine: Register a custom merge function (e.g., `func(old, new int) int { return old + new }`).
- Quorum: Set quorum to `N/2 + 1` for fault tolerance.
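The merge function above works for counters because addition is commutative: increments arriving from different nodes converge to the same total regardless of order. A runnable sketch of that merge plus the majority-quorum arithmetic (the `mergeCounter` and `quorum` helper names are illustrative, not Warp's API):

```go
package main

import "fmt"

// mergeCounter is the commutative merge from the text: concurrent deltas
// from different nodes combine by addition, so arrival order does not matter.
func mergeCounter(old, new int) int { return old + new }

// quorum returns the majority size N/2 + 1 for an N-node cluster
// (integer division, so e.g. 5 nodes -> 3, 4 nodes -> 3).
func quorum(n int) int { return n/2 + 1 }

func main() {
	total := 0
	for _, delta := range []int{3, 1, 2} { // increments from three nodes
		total = mergeCounter(total, delta)
	}
	fmt.Println("counter:", total)         // 6, in any arrival order
	fmt.Println("quorum of 5:", quorum(5)) // 3
}
```

Note that a non-commutative merge (e.g., last-write-wins) would lose increments under concurrency, which is exactly why counters need a custom merge rather than the default overwrite.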
### 4. Ephemeral Data (e.g., Job Status)
Goal: Fast local access, no persistence needed.
- Mode: `ModeStrongLocal`
- Store: `nil` (or `InMemoryStore` if you need local persistence).
- TTL: Short.
## General Advice
### Don’t
- Don’t use `ModeStrongDistributed` for everything. It adds latency (network round-trips); use it only when strictly necessary.
- Don’t ignore errors, especially from `Set` or `Txn.Commit`. They might indicate a bus failure or quorum issue.
- Don’t cache everything. Caching has overhead; cache only what is read frequently or expensive to compute.
### Do
- Do use `Register` explicitly. It documents your data model and consistency requirements.
- Do monitor metrics. Set up Prometheus alerts for high `warp_core_misses_total` or `warp_bus_errors`.
- Do use `Validator`. It’s cheap insurance against data drift.
- Do use `Batcher`. If your DB supports it, implement the `Batcher` interface in your Adapter for massive write performance gains.
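A batching adapter typically exposes an optional multi-write method alongside the single-write one, and the caller probes for it with a type assertion. A sketch of the shape such an interface might take (the `Writer`, `Batcher`, and `SetMany` names here are illustrative, not Warp's actual definitions):

```go
package main

import "fmt"

// Writer is the single-write path every adapter supports.
type Writer interface {
	Set(key, value string) error
}

// Batcher is the optional fast path: one round-trip for many writes.
type Batcher interface {
	SetMany(items map[string]string) error
}

// memAdapter implements both, so a caller can type-assert for Batcher
// and fall back to per-key Set when an adapter lacks it.
type memAdapter struct{ data map[string]string }

func (a *memAdapter) Set(key, value string) error {
	a.data[key] = value
	return nil
}

func (a *memAdapter) SetMany(items map[string]string) error {
	for k, v := range items { // one logical operation instead of N round-trips
		a.data[k] = v
	}
	return nil
}

// flush uses SetMany when the adapter offers it, otherwise loops over Set.
func flush(w Writer, items map[string]string) error {
	if b, ok := w.(Batcher); ok {
		return b.SetMany(items)
	}
	for k, v := range items {
		if err := w.Set(k, v); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	a := &memAdapter{data: map[string]string{}}
	_ = flush(a, map[string]string{"a": "1", "b": "2"})
	fmt.Println("stored:", len(a.data))
}
```

The optional-interface pattern keeps batching opt-in: adapters for databases without multi-row writes simply omit `SetMany` and still work.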