The Redis distributed lock in packages/modules/providers/locking-redis/src/services/redis-lock.ts (lines 195-200) uses exponential backoff without jitter:
```ts
await setTimeout(retryDelay)
retryDelay = Math.min(retryDelay * this.backoffFactor, this.maximumRetryInterval)
```
When multiple service instances compete for the same lock, they retry at identical intervals (20ms → 40ms → 80ms → ...), creating synchronized contention spikes on Redis.
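To make the failure mode concrete, here is a small sketch (the 20ms/2x/1000ms values are illustrative, not necessarily Medusa's actual config) showing that without jitter, every instance computes the identical delay sequence, so all retries land at the same moments:

```typescript
// With no jitter, N competing instances all compute this same sequence,
// so their retries hit Redis in synchronized bursts.
function delaySequence(
  initial: number,
  factor: number,
  cap: number,
  attempts: number
): number[] {
  const delays: number[] = [];
  let d = initial;
  for (let i = 0; i < attempts; i++) {
    delays.push(d);
    d = Math.min(d * factor, cap);
  }
  return delays;
}

console.log(delaySequence(20, 2, 1000, 6)); // [20, 40, 80, 160, 320, 640]
```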
Add jitter to the sleep duration (this is the "equal jitter" variant from the AWS post: half the delay is deterministic, half is random):

```ts
const jitteredDelay = retryDelay * (0.5 + Math.random() * 0.5)
await setTimeout(jitteredDelay)
```
This preserves the exponential growth curve while spreading retry times. The `retryDelay` variable itself continues to grow normally; only the actual sleep duration is randomized.
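For context, a hedged sketch of how the full acquire loop could look with jitter applied only to the sleep. The option names mirror the fields mentioned in the issue (`retryDelay`, `backoffFactor`, `maximumRetryInterval`), but `tryAcquire`, `maxAttempts`, and the overall shape are illustrative, not Medusa's actual implementation:

```typescript
import { setTimeout as sleep } from "node:timers/promises";

// Equal jitter: a value in [delay / 2, delay). Pure, so it is easy to test.
function equalJitter(delay: number): number {
  return delay * (0.5 + Math.random() * 0.5);
}

// Illustrative acquire loop: the sleep is jittered, but the backoff
// variable itself still follows the plain exponential curve.
async function acquireWithRetry(
  tryAcquire: () => Promise<boolean>,
  opts = { retryDelay: 20, backoffFactor: 2, maximumRetryInterval: 1000, maxAttempts: 10 }
): Promise<boolean> {
  let retryDelay = opts.retryDelay;
  for (let attempt = 0; attempt < opts.maxAttempts; attempt++) {
    if (await tryAcquire()) return true;
    await sleep(equalJitter(retryDelay));
    retryDelay = Math.min(retryDelay * opts.backoffFactor, opts.maximumRetryInterval);
  }
  return false;
}
```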
In multi-instance deployments, deterministic retry creates lock contention storms at predictable intervals. This is a well-documented distributed systems issue (AWS "Exponential Backoff And Jitter", Google Cloud retry best practices).
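For reference, the AWS post also describes a "decorrelated jitter" strategy, where each sleep is drawn relative to the previous sleep rather than the attempt count, which spreads instances apart even faster. A minimal sketch (the function name and parameters are mine, not from Medusa):

```typescript
// AWS "decorrelated jitter": next sleep is drawn from [base, previousSleep * 3),
// then capped. Each instance's delay sequence diverges quickly from the others.
function decorrelatedJitter(base: number, cap: number, previousSleep: number): number {
  return Math.min(cap, base + Math.random() * (previousSleep * 3 - base));
}
```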
Medusa already uses this exact jitter pattern in `stripe-base.ts:205`:

```ts
const delay = baseDelay * Math.pow(2, currentAttempt - 1) * (0.5 + Math.random() * 0.5)
```
This suggestion brings the Redis lock retry in line with the existing stripe provider pattern.
Happy to submit a PR if this approach looks right.