In Part 1, we established why idempotency breaks down in distributed systems and what the conceptual fix looks like. Now let’s get into the mechanics: what makes a good idempotency key, who generates it, where you store it, and how long you keep it.
Who Generates the Key
The client generates the idempotency key. This is the most important design decision, and it is not negotiable. A server-generated key cannot work: the key is only useful if it survives a failure, and a client that never received a response never received the key either. The key must exist before the request is sent so the client can present the same key on every retry.
The client creates a UUID (v4 is fine, v7 is better for sortability) when it first constructs a request. If that request fails with a network error or a 5xx response, the client resends the exact same request body with the exact same key. The server uses that key to recognize the retry.
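This retry contract can be sketched as a loop that generates the key once and reuses it on every attempt. A minimal, self-contained sketch: `send` is a hypothetical stand-in for the real HTTP call, returning `Err` on network errors and 5xx responses.

```rust
// Minimal sketch of the client-side retry contract: one key, many attempts.
// `send` stands in for the real HTTP call; it returns Err on 5xx/network failure.
fn send_with_retries<F>(key: &str, max_attempts: u32, mut send: F) -> Result<String, String>
where
    F: FnMut(&str) -> Result<String, String>,
{
    let mut last_err = String::new();
    for _attempt in 0..max_attempts {
        // The same key accompanies every attempt of this logical operation
        match send(key) {
            Ok(response) => return Ok(response),
            Err(e) => last_err = e, // back off between attempts in real code
        }
    }
    Err(last_err)
}

fn main() {
    // Simulate a server that fails twice, then succeeds
    let mut calls = 0;
    let result = send_with_retries("uuid-abc-123", 5, |key| {
        calls += 1;
        if calls < 3 {
            Err(format!("503 for key {key}"))
        } else {
            Ok(format!("201 Created (key {key})"))
        }
    });
    assert_eq!(calls, 3);
    assert_eq!(result.unwrap(), "201 Created (key uuid-abc-123)");
    println!("ok");
}
```

The point is that the key is created outside the loop: no matter how many attempts it takes, the server sees one logical operation.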
use uuid::Uuid;

// Generate once, persist across retries
let idempotency_key = Uuid::new_v4().to_string();

// Attach to every attempt for this logical operation
client
    .post("/orders")
    .header("Idempotency-Key", &idempotency_key)
    .json(&order_payload)
    .send()
    .await?;

What Makes a Good Key
A good idempotency key has three properties:
- Globally unique per logical operation. Two different operations must never share a key. A fresh UUID per logical operation handles this well. Do not use request timestamps or user IDs alone; they are not unique enough.
- Stable across retries. The client must use the same key for every retry of the same operation. If the key changes, the server sees it as a new request and runs the operation again.
- Scoped to the user or tenant. Store keys namespaced by user ID or tenant ID so keys from one user cannot interfere with another. This is a security concern, not just a data hygiene one.
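The scoping rule can be made concrete with a small helper that builds the storage key, matching the `idempotency:{user}:{key}` layout used in the storage code later in this post. The prefix and separator are conventions, not requirements:

```rust
/// Build the storage key, namespaced by user so one user's keys
/// cannot collide with or replay another user's cached responses.
pub fn namespaced_key(user_id: &str, idempotency_key: &str) -> String {
    format!("idempotency:{}:{}", user_id, idempotency_key)
}

fn main() {
    let a = namespaced_key("user-1", "abc-123");
    let b = namespaced_key("user-2", "abc-123");
    // Same idempotency key from two users resolves to two storage keys
    assert_ne!(a, b);
    assert_eq!(a, "idempotency:user-1:abc-123");
    println!("ok");
}
```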
Storage Options
You need somewhere to store the mapping from key to result. There are two common approaches: Redis for speed, Postgres for durability.
flowchart LR
subgraph Client
A[Generate Key: uuid-xyz]
B[Store key locally]
A --> B
end
subgraph API Server
C{Key in store?}
D[Execute operation]
E[Store key + result]
F[Return cached result]
end
subgraph Storage
G[("Redis\nTTL: 24h")]
H[("Postgres\nidempotency_keys table")]
end
B -->|POST + Idempotency-Key header| C
C -->|No| D --> E --> G
C -->|No| D --> E --> H
C -->|Yes| F
G --> C
H --> C

Redis Storage
Redis is the natural first choice. Fast reads, built-in TTL, and atomic SET NX operations make it well suited for idempotency key storage. The pattern is straightforward: set the key with a TTL when processing starts, update the value with the response when done.
use redis::AsyncCommands;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone)]
pub struct IdempotencyRecord {
    pub status: String, // "processing" | "complete"
    pub status_code: u16,
    pub response_body: String,
}

pub async fn get_or_set_idempotency_key(
    redis: &mut redis::aio::Connection,
    key: &str,
    user_id: &str,
    ttl_seconds: u64,
) -> anyhow::Result<Option<IdempotencyRecord>> {
    let namespaced_key = format!("idempotency:{}:{}", user_id, key);

    // Try to get an existing record
    let existing: Option<String> = redis.get(&namespaced_key).await?;
    if let Some(raw) = existing {
        let record: IdempotencyRecord = serde_json::from_str(&raw)?;
        return Ok(Some(record));
    }

    // Reserve the key atomically -- SET NX with TTL
    let placeholder = IdempotencyRecord {
        status: "processing".to_string(),
        status_code: 0,
        response_body: String::new(),
    };
    let set: bool = redis::cmd("SET")
        .arg(&namespaced_key)
        .arg(serde_json::to_string(&placeholder)?)
        .arg("NX")
        .arg("EX")
        .arg(ttl_seconds)
        .query_async(redis)
        .await?;
    if !set {
        // Another request grabbed the key between our GET and SET.
        // This is the concurrent duplicate case -- surface it to the caller.
        return Err(anyhow::anyhow!("Concurrent request with same idempotency key"));
    }
    Ok(None) // No existing record, safe to proceed
}

pub async fn complete_idempotency_key(
    redis: &mut redis::aio::Connection,
    key: &str,
    user_id: &str,
    ttl_seconds: u64,
    status_code: u16,
    response_body: String,
) -> anyhow::Result<()> {
    let namespaced_key = format!("idempotency:{}:{}", user_id, key);
    let record = IdempotencyRecord {
        status: "complete".to_string(),
        status_code,
        response_body,
    };
    let _: () = redis::cmd("SET")
        .arg(&namespaced_key)
        .arg(serde_json::to_string(&record)?)
        .arg("EX")
        .arg(ttl_seconds)
        .query_async(redis)
        .await?;
    Ok(())
}

Postgres Storage
If you need durability — for example, in financial transactions where you need an audit trail — store idempotency records in Postgres. Redis can lose data on restart without AOF persistence configured correctly. Postgres gives you a permanent record.
// Migration: create the idempotency_keys table
//
// CREATE TABLE idempotency_keys (
//     id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
//     key TEXT NOT NULL,
//     user_id UUID NOT NULL,
//     status TEXT NOT NULL DEFAULT 'processing',
//     status_code SMALLINT,
//     response_body JSONB,
//     created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
//     expires_at TIMESTAMPTZ NOT NULL,
//     UNIQUE (key, user_id)
// );
//
// CREATE INDEX ON idempotency_keys (expires_at); -- for cleanup job

use sqlx::PgPool;
use uuid::Uuid;
use chrono::{Utc, Duration};

pub struct IdempotencyRow {
    pub status: String,
    pub status_code: Option<i16>,                 // SMALLINT maps to i16
    pub response_body: Option<serde_json::Value>, // JSONB maps to serde_json::Value
}

pub async fn lookup_idempotency_key(
    pool: &PgPool,
    key: &str,
    user_id: Uuid,
) -> anyhow::Result<Option<IdempotencyRow>> {
    let row = sqlx::query_as!(
        IdempotencyRow,
        r#"
        SELECT status, status_code, response_body
        FROM idempotency_keys
        WHERE key = $1 AND user_id = $2 AND expires_at > now()
        "#,
        key,
        user_id,
    )
    .fetch_optional(pool)
    .await?;
    Ok(row)
}

pub async fn insert_idempotency_key(
    pool: &PgPool,
    key: &str,
    user_id: Uuid,
    ttl_hours: i64,
) -> anyhow::Result<bool> {
    let expires_at = Utc::now() + Duration::hours(ttl_hours);
    // ON CONFLICT DO NOTHING returns 0 rows affected on collision
    let result = sqlx::query!(
        r#"
        INSERT INTO idempotency_keys (key, user_id, expires_at)
        VALUES ($1, $2, $3)
        ON CONFLICT (key, user_id) DO NOTHING
        "#,
        key,
        user_id,
        expires_at,
    )
    .execute(pool)
    .await?;
    Ok(result.rows_affected() == 1) // false means key already existed
}

TTL Strategy
How long should you keep idempotency keys? The answer depends on how long clients retry. A few guidelines:
- 24 hours is the most common default. Stripe uses 24 hours. It covers most retry windows without accumulating too much storage.
- Match your retry window. If your client retries for up to 1 hour with exponential backoff, a 2-hour TTL is sufficient. Do not over-retain.
- Never delete on first use. The key must survive the full TTL so late retries still get the cached response. Delete only when the TTL expires.
- Run a cleanup job for Postgres. Unlike Redis, Postgres does not expire rows automatically. Run a periodic job:
DELETE FROM idempotency_keys WHERE expires_at < now();
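The expiry rule both stores implement (Redis via TTL, Postgres via the `expires_at` predicate) can also be expressed application-side. A minimal, self-contained sketch; the function name and signature are illustrative, not from any library:

```rust
use std::time::{Duration, SystemTime};

/// A record is usable only while its TTL has not elapsed.
/// Mirrors the `expires_at > now()` check in the Postgres lookup.
pub fn is_live(created_at: SystemTime, ttl: Duration, now: SystemTime) -> bool {
    match now.duration_since(created_at) {
        Ok(age) => age < ttl,
        Err(_) => true, // created_at is in the future; treat as live
    }
}

fn main() {
    let created = SystemTime::UNIX_EPOCH;
    let ttl = Duration::from_secs(24 * 60 * 60); // the 24-hour default above
    let one_hour_later = created + Duration::from_secs(3600);
    let two_days_later = created + Duration::from_secs(48 * 60 * 60);
    assert!(is_live(created, ttl, one_hour_later));
    assert!(!is_live(created, ttl, two_days_later));
    println!("ok");
}
```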
Handling Concurrent Requests with the Same Key
This is the case most implementations miss. What if two requests with the same key arrive simultaneously — before either has finished processing?
The correct behavior is to return a 409 Conflict for the second concurrent request. The client should wait and retry once the first completes. The SET NX pattern in Redis handles this atomically. In Postgres, the ON CONFLICT DO NOTHING insert followed by checking rows_affected achieves the same thing.
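This decision logic reduces to a pure function over the stored record. A minimal sketch assuming the `IdempotencyRecord` shape from the Redis section; the `Decision` enum is a hypothetical name for the three outcomes:

```rust
pub struct IdempotencyRecord {
    pub status: String, // "processing" | "complete"
    pub status_code: u16,
    pub response_body: String,
}

#[derive(Debug, PartialEq)]
pub enum Decision {
    Proceed,             // no record: execute the operation
    Conflict,            // record still "processing": return 409
    Replay(u16, String), // record "complete": return the cached response
}

pub fn decide(existing: Option<&IdempotencyRecord>) -> Decision {
    match existing {
        None => Decision::Proceed,
        Some(r) if r.status == "processing" => Decision::Conflict,
        Some(r) => Decision::Replay(r.status_code, r.response_body.clone()),
    }
}

fn main() {
    assert_eq!(decide(None), Decision::Proceed);

    let processing = IdempotencyRecord {
        status: "processing".into(), status_code: 0, response_body: String::new(),
    };
    assert_eq!(decide(Some(&processing)), Decision::Conflict); // -> 409 Conflict

    let done = IdempotencyRecord {
        status: "complete".into(), status_code: 201, response_body: "{\"id\":1}".into(),
    };
    assert_eq!(decide(Some(&done)), Decision::Replay(201, "{\"id\":1}".into()));
    println!("ok");
}
```

Keeping this as a pure function makes the concurrent-duplicate path trivially unit-testable, independent of which store backs it.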
sequenceDiagram
participant C1 as Client Retry 1
participant C2 as Client Retry 2
participant S as Server
participant R as Redis
C1->>S: POST /orders (key: abc-123)
C2->>S: POST /orders (key: abc-123)
S->>R: SET NX idempotency:user1:abc-123 "processing"
R-->>S: OK (C1 wins the lock)
S->>R: SET NX idempotency:user1:abc-123 "processing"
R-->>S: nil (C2 loses)
S-->>C2: 409 Conflict (retry after C1 completes)
Note over S: C1 processes the order...
S->>R: SET idempotency:user1:abc-123 "complete" + result
R-->>S: OK
S-->>C1: 201 Created

Key Mismatch: Same Key, Different Body
What if a client reuses a key with a different request body? This should be treated as an error. The server cannot safely process a different operation under the same idempotency key — the semantics are undefined. Return a 422 Unprocessable Entity and log the mismatch. Do not process the request.
// When an existing key is found, verify the request fingerprint matches
pub fn fingerprint_request(body: &[u8]) -> String {
    use sha2::{Sha256, Digest};
    let mut hasher = Sha256::new();
    hasher.update(body);
    format!("{:x}", hasher.finalize())
}

// In your handler: compare the stored fingerprint against the incoming one.
// If they differ, return 422 Unprocessable Entity and log the mismatch.
pub fn is_same_request(stored_fingerprint: &str, incoming_body: &[u8]) -> bool {
    fingerprint_request(incoming_body) == stored_fingerprint
}

Summary
The client owns key generation. Use UUIDs, namespace by user, and pick a TTL that covers your retry window. Redis gives you speed and atomic SET NX; Postgres gives you durability and an audit trail. Handle concurrent duplicate keys with a 409 and mismatched bodies with a 422. In Part 3, we will wire all of this into an Axum middleware so every endpoint gets idempotency protection without boilerplate in each handler.
References
- Stripe API Documentation – Idempotent Requests (https://stripe.com/docs/api/idempotent_requests)
- Redis Documentation – Distributed Locks (https://redis.io/docs/manual/patterns/distributed-locks/)
- SQLx Documentation (https://docs.rs/sqlx/latest/sqlx/)
- AWS Builders Library – Making Retries Safe with Idempotent APIs (https://aws.amazon.com/builders-library/making-retries-safe-with-idempotent-APIs/)
