
Redis Interview Questions — Core Concepts, Data Structures & Real-World Patterns

In Plain English 🔥
Imagine your office has a massive filing cabinet (your database) and a sticky-note board right next to your desk (Redis). Every time you need a document, walking to the cabinet takes 30 seconds. But if you stick the most-requested documents on your board, you grab them in 2 seconds. Redis is that sticky-note board — blazing-fast, lives in memory, and holds your hottest data so your app never has to make the slow trip to the filing cabinet. The catch? Your board has limited space, and if the office loses power, the sticky notes are gone unless you back them up.

Redis shows up in almost every modern backend stack — from session management at Netflix scale to real-time leaderboards in gaming apps to rate-limiting at API gateways. If you're interviewing for any backend, full-stack, or DevOps role, expect at least two or three Redis questions. Interviewers don't ask them to trip you up — they ask because Redis is one of those tools where misuse causes production fires, and they want to know you understand the trade-offs.

What Redis Actually Is — And Why It's Not 'Just a Cache'

Redis stands for Remote Dictionary Server. Yes, it's famous as a cache, but calling it 'just a cache' in an interview is a red flag. Redis is an in-memory data structure store. It natively understands strings, lists, sets, sorted sets, hashes, bitmaps, hyperloglogs, and streams. That means it's not a dumb key-value bucket — it can perform operations directly on those structures without you pulling data out, modifying it in application code, and pushing it back.

Why does this matter? Take a leaderboard. In a relational database you'd SELECT all scores, sort them in application memory, and return the top 10. With Redis Sorted Sets, you call ZREVRANGE leaderboard 0 9 and Redis returns the top 10 in O(log N + M) time, where M is the number of elements returned — atomically, server-side, with no round-trip logic. That's the real power: moving computation closer to the data.

Redis is single-threaded for command execution (since v6.0, network I/O can optionally be handled by multiple threads), which sounds like a weakness but is actually why it's so predictable. No lock contention. No deadlocks. One command finishes before the next starts.
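To make the sorted-set behaviour behind ZREVRANGE concrete, here is a toy Python sketch. MiniSortedSet is a made-up name and this is not how Redis is implemented (Redis pairs a skip list with a hash table, making insertion O(log N); a Python list insert is O(N) because of element shifting) — it only shows why a structure kept ordered by score makes "top N" a cheap slice.

```python
import bisect

class MiniSortedSet:
    """Toy stand-in for a Redis sorted set (Redis itself uses a skip list)."""
    def __init__(self):
        self._entries = []   # (score, member) pairs, kept sorted ascending
        self._scores = {}    # member -> score, for O(1) score lookup

    def zadd(self, score, member):
        if member in self._scores:
            # Updating an existing member: drop its old (score, member) entry
            self._entries.remove((self._scores[member], member))
        # Binary search finds the slot; the list insert itself is O(N),
        # whereas Redis's skip list keeps the whole operation O(log N)
        bisect.insort(self._entries, (score, member))
        self._scores[member] = score

    def zrevrange(self, start, stop):
        # Highest scores first, inclusive range, like the Redis command
        return [member for _, member in self._entries[::-1][start:stop + 1]]
```

Loading the article's leaderboard scores and asking for `zrevrange(0, 2)` returns the three players in descending score order, just as the ZREVRANGE example later in this section does.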

redis_data_structures_demo.sh · BASH
# Connect to Redis CLI
redis-cli

# ── STRING: Simple key-value with TTL (time-to-live) ──
SET user:1001:session_token "abc123xyz" EX 3600
# EX 3600 means this key auto-deletes after 1 hour
# Perfect for session tokens — no manual cleanup needed

GET user:1001:session_token
# Returns: "abc123xyz"

TTL user:1001:session_token
# Returns: 3598 (seconds remaining — live countdown)

# ── HASH: Store a user object without serializing to JSON ──
HSET user:1001 name "Priya Sharma" email "priya@example.com" plan "pro"
# Redis stores each field separately — you can update ONE field
# without reading and rewriting the whole object

HGET user:1001 plan
# Returns: "pro"

HGETALL user:1001
# Returns all fields: name, email, plan

# ── SORTED SET: Real-time leaderboard ──
ZADD game:leaderboard 9450 "alice"
ZADD game:leaderboard 8820 "bob"
ZADD game:leaderboard 9900 "carol"

# Top 3 players, highest score first (0-indexed range)
ZREVRANGE game:leaderboard 0 2 WITHSCORES
# Returns:
# 1) "carol"
# 2) "9900"
# 3) "alice"
# 4) "9450"
# 5) "bob"
# 6) "8820"

# ── LIST: Message queue pattern ──
LPUSH email:queue "welcome:user:1002"   # Push to the LEFT (head)
LPUSH email:queue "receipt:order:5501"
RPOP email:queue                        # Pop from the RIGHT (tail) — FIFO queue
# Returns: "welcome:user:1002"
▶ Output
"abc123xyz"
3598
"pro"
1) "name"
2) "Priya Sharma"
3) "email"
4) "priya@example.com"
5) "plan"
6) "pro"
1) "carol"
2) "9900"
3) "alice"
4) "9450"
5) "bob"
6) "8820"
"welcome:user:1002"
⚠️
Interview Gold: When asked 'what data structures does Redis support?', don't just list them — explain ONE use case per structure. Strings → session tokens with TTL, Hashes → user profiles, Sorted Sets → leaderboards, Lists → job queues, Sets → unique visitor tracking. That answer signals real-world experience, not textbook memorisation.

Persistence, Eviction & the Trade-offs That Cause Production Incidents

The single most dangerous misconception about Redis is treating it as a durable store by default. It isn't. By default, Redis is in-memory only — restart the process and your data is gone. Redis gives you two persistence mechanisms: RDB (Redis Database snapshots) and AOF (Append-Only File), and you need to understand both for interviews and for production.

RDB takes point-in-time snapshots — like a photograph of your data every N seconds if M keys changed. It's compact, fast to restore, but you can lose up to the last snapshot window of writes. AOF logs every write operation — like a transaction log. Slower to restore, larger on disk, but much less data loss risk (configurable to fsync every second or every command).
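The AOF idea — recovery means replaying every logged write — is easy to internalise with a toy model. ToyAOF below is a hypothetical class for illustration only: a real AOF is an fsync'd file of Redis protocol commands, not a Python list, but the replay mechanics are the same.

```python
class ToyAOF:
    """Toy append-only file: log every write, rebuild state by replaying."""
    def __init__(self):
        self.log = []    # in real Redis this is an fsync'd file on disk
        self.data = {}   # the in-memory dataset

    def set(self, key, value):
        self.log.append(("SET", key, value))   # append-only: log the write first
        self.data[key] = value

    def incr(self, key):
        self.log.append(("INCR", key))
        self.data[key] = int(self.data.get(key, 0)) + 1
        return self.data[key]

    def restart_and_replay(self):
        self.data = {}                  # simulate losing all in-memory state
        for op in self.log:             # replay every logged write, in order
            if op[0] == "SET":
                self.data[op[1]] = op[2]
            elif op[0] == "INCR":
                self.data[op[1]] = int(self.data.get(op[1], 0)) + 1
        return self.data
```

This also shows why AOF restores are slower than RDB: an RDB restore loads one snapshot file, while AOF replays the entire command history (which is why Redis periodically rewrites the AOF to compact it).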

Eviction policy is equally critical. When Redis hits its maxmemory limit, it has to decide what to drop. The allkeys-lru policy evicts the least-recently-used key across all keys — great for a pure cache. The volatile-lru policy only evicts keys that have a TTL set — useful when some keys must survive (like a rate limit counter with no TTL). Picking the wrong eviction policy is a silent killer: you'll see cache misses spike with no obvious error.
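What allkeys-lru does at eviction time can be sketched with an OrderedDict. This is a simplification (LRUCache is a made-up class, and real Redis approximates LRU by sampling a handful of keys per eviction, controlled by maxmemory-samples, rather than tracking exact recency), but it shows the core rule: any read or write refreshes a key's recency, and the stalest key is dropped first.

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of allkeys-lru: over capacity, evict the least-recently-used key."""
    def __init__(self, maxkeys):
        self.maxkeys = maxkeys
        self.data = OrderedDict()   # order of entries = recency order

    def get(self, key):
        if key not in self.data:
            return None             # cache miss
        self.data.move_to_end(key)  # touching a key makes it most-recent
        return self.data[key]

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.maxkeys:
            evicted, _ = self.data.popitem(last=False)  # drop the LRU key
            return evicted          # report what was evicted (for the demo)
        return None
```

Note what volatile-lru would change: only keys with a TTL would be candidates in the popitem step, which is exactly why non-TTL keys like a rate-limit counter survive under that policy.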

redis_persistence_and_eviction.sh · BASH
# ── Check current persistence config ──
redis-cli CONFIG GET save
# Default output — snapshot triggers:
# 1) "save"
# 2) "3600 1 300 100 60 10000"
# Meaning: snapshot if 1 change in 3600s, OR 100 changes in 300s, OR 10000 changes in 60s

# ── Enable AOF (Append-Only File) for better durability ──
redis-cli CONFIG SET appendonly yes
redis-cli CONFIG SET appendfsync everysec
# everysec = fsync every second — best balance of performance vs durability
# always  = fsync on every write — safest but ~10x slower
# no      = OS decides when to flush — fastest but data loss risk

# ── Set a memory limit and eviction policy ──
redis-cli CONFIG SET maxmemory 256mb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
# allkeys-lru: when full, evict the least-recently-used key from ANY key
# volatile-lru: only evict keys that have a TTL — non-TTL keys are safe
# noeviction: reject new writes when full — returns OOM error (dangerous for a cache!)

# ── Simulate checking what would be evicted ──
redis-cli OBJECT IDLETIME user:1001:session_token
# Returns idle time in seconds — higher = more likely to be evicted under LRU

# ── Persist config changes to redis.conf so they survive restart ──
redis-cli CONFIG REWRITE
# Without this, all CONFIG SET changes are lost on restart!
▶ Output
1) "save"
2) "3600 1 300 100 60 10000"
OK
OK
OK
OK
(integer) 142
OK
⚠️
Watch Out: CONFIG SET changes are runtime-only by default. If Redis restarts (crash, deploy, maintenance), every CONFIG SET you applied is gone. Always follow up with CONFIG REWRITE to persist changes to redis.conf — or manage config via your infrastructure-as-code tooling. This catches even experienced engineers off guard.

Atomicity, Transactions & the Lua Script Pattern

Redis is single-threaded, so individual commands are always atomic. But what about multi-step operations like 'check a counter, increment it only if it's below 100'? That's where MULTI/EXEC transactions and Lua scripts come in — and this is a favourite interview deep-dive.

MULTI/EXEC queues a batch of commands that execute atomically. No other client can sneak a command in between. But there's a critical gotcha: Redis transactions don't roll back on runtime errors. If one command in the queue fails (e.g. wrong type), the rest still execute. That's intentional and very different from SQL transactions.

For complex conditional logic, Lua scripts are the better tool. A Lua script runs entirely on the Redis server as a single atomic unit. This is how rate limiting is implemented correctly — the check-and-increment happens in one atomic server-side script with no race condition possible.

Understanding the difference between MULTI/EXEC (queue and batch) versus Lua (conditional server-side logic) is what separates candidates who've used Redis in production from those who've only read the docs.

redis_atomic_rate_limiter.sh · BASH
# ── MULTI/EXEC: Batch commands atomically ──
# A transaction lives on ONE connection, so it must run inside a single
# redis-cli session (separate redis-cli invocations each open a new
# connection, and the transaction would never span them)
redis-cli <<'EOF'
MULTI
INCR order:5501:item_count
EXPIRE order:5501:item_count 86400
EXEC
EOF
# Both commands execute atomically — no client can modify order:5501:item_count
# between INCR and EXPIRE

# ── Lua Script: Atomic rate limiter ──
# This is the CORRECT way to implement rate limiting.
# Using separate GET + INCR commands creates a race condition.

redis-cli EVAL "
  local key = KEYS[1]          -- e.g. 'ratelimit:user:1001:minute:2024010112'
  local limit = tonumber(ARGV[1])  -- e.g. 100 (requests per minute)
  local current = redis.call('GET', key)

  if current == false then
    -- Key doesn't exist yet — first request in this window
    redis.call('SET', key, 1, 'EX', 60)  -- Set count=1 with 60s TTL
    return {1, limit - 1}  -- {current_count, remaining}
  end

  current = tonumber(current)

  if current >= limit then
    return {current, 0}  -- Rate limit hit — reject the request
  end

  -- Under the limit — increment and return updated values
  local new_count = redis.call('INCR', key)
  return {new_count, limit - new_count}
" 1 "ratelimit:user:1001:minute:2024010112" 100

# Output on first call: 1) (integer) 1   2) (integer) 99
# Output on 100th call: 1) (integer) 100  2) (integer) 0
# Output on 101st call: 1) (integer) 100  2) (integer) 0  (blocked)

# ── WATCH: Optimistic locking (alternative to Lua for simple cases) ──
# WATCH is also per-connection, so keep it in the same session as MULTI/EXEC
redis-cli <<'EOF'
WATCH user:1001:balance
MULTI
DECRBY user:1001:balance 50
EXEC
EOF
# If user:1001:balance was modified by another client between WATCH and EXEC,
# EXEC returns nil and nothing runs (optimistic lock failed) — your app retries
# Returns an array of results if successful
▶ Output
1) (integer) 1
2) (integer) 1

1) (integer) 1
2) (integer) 99

OK
OK
1) (integer) 1
🔥
Interview Gold: If asked 'how does Redis handle concurrency?', the layered answer is: (1) individual commands are atomic by nature of single-threading, (2) MULTI/EXEC batches commands atomically but doesn't roll back on error, (3) Lua scripts are the gold standard for conditional atomic operations, (4) WATCH provides optimistic locking for CAS (compare-and-swap) patterns. Mentioning all four layers shows real depth.
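If bash plus Lua isn't your interviewer's medium, the same fixed-window logic can be traced in Python with a dict standing in for Redis. FixedWindowLimiter is a hypothetical name, and the crucial caveat is in the comments: a plain dict gives you none of the cross-client atomicity that makes the Lua version correct — this is a model of the logic, not a substitute for it.

```python
import time

class FixedWindowLimiter:
    """Simulation of the Lua fixed-window rate limiter (single-process only:
    real atomicity across clients requires the server-side Lua script)."""
    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.store = {}   # key -> (count, window_expires_at), mimicking SET ... EX

    def hit(self, key, now=None):
        now = time.time() if now is None else now
        count, expires = self.store.get(key, (0, 0.0))
        if now >= expires:
            # Window expired (or first request): start a fresh window at count=1
            self.store[key] = (1, now + self.window)
            return True, self.limit - 1
        if count >= self.limit:
            return False, 0                       # over the limit: reject
        self.store[key] = (count + 1, expires)    # under the limit: count it
        return True, self.limit - (count + 1)
```

The return shape mirrors the Lua script's {current_count, remaining} pair, and the "new window resets the count" branch is the same as the script's SET-with-EX path.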

Redis Replication, Sentinel & Cluster — Knowing When to Use Each

Single-node Redis is fine for development and small applications, but production requires thinking about high availability and horizontal scaling. Interviewers at mid-to-large companies almost always probe here.

Replication is the foundation: one primary node accepts writes, one or more replicas asynchronously receive those writes. It's not synchronous — there's always a tiny lag, meaning replicas can serve slightly stale reads. This is an intentional trade-off for write throughput.

Redis Sentinel adds automatic failover on top of replication. Sentinel is a separate process (or set of processes) that monitors your primary. If the primary goes down, Sentinel promotes the most up-to-date replica to primary and updates your clients. You need at least 3 Sentinel nodes to avoid split-brain scenarios — a quorum of 2 must agree before a failover triggers.

Redis Cluster is a different beast: it's about horizontal scaling, not just failover. It automatically shards your keyspace across multiple primary nodes using 16,384 hash slots. Each primary can have replicas. The trade-off is that multi-key commands (like MGET or Lua scripts touching multiple keys) only work if all keys hash to the same slot — which you control using hash tags like {user:1001}:session and {user:1001}:profile.
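The slot arithmetic, including the hash-tag rule, is small enough to reproduce. The sketch below implements CRC-16/XMODEM (polynomial 0x1021, init 0), which is the variant the Redis Cluster specification describes, plus the rule that only the content of the first non-empty {...} pair is hashed; hash_slot is a made-up helper name for illustration.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM: the checksum Redis Cluster uses for key-to-slot mapping."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of 16384 slots; a non-empty {tag} replaces the key."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:   # the tag must be non-empty
            key = key[start + 1:end]        # hash only what's inside the braces
    return crc16_xmodem(key.encode()) % 16384
```

With this in hand you can see why the hash-tagged keys below land together: both reduce to hashing the string "user:1001", so hash_slot("{user:1001}:session") and hash_slot("{user:1001}:profile") are always equal.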

redis_cluster_and_sentinel_concepts.sh · BASH
# ── Check replication status ──
redis-cli INFO replication
# On primary node output includes:
# role:master
# connected_slaves:2
# slave0:ip=10.0.0.2,port=6379,state=online,offset=1234567,lag=0
# slave1:ip=10.0.0.3,port=6379,state=online,offset=1234560,lag=1
# replication_backlog_size:1048576

# ── Redis Cluster: Hash slot calculation ──
# Redis Cluster uses CRC16(key) % 16384 to decide which shard a key lives on
# Problem: these two keys land on DIFFERENT shards:
redis-cli CLUSTER KEYSLOT "user:1001:session"   # Returns e.g. 11543
redis-cli CLUSTER KEYSLOT "user:1001:profile"   # Returns e.g. 6452
# MGET user:1001:session user:1001:profile would FAIL in cluster mode!

# ── Solution: Hash Tags force keys to the same slot ──
# Wrap the shared part in curly braces — only the part in {} is hashed
redis-cli CLUSTER KEYSLOT "{user:1001}:session"   # Both hash on "user:1001"
redis-cli CLUSTER KEYSLOT "{user:1001}:profile"   # Same slot as above!
# Now MGET {user:1001}:session {user:1001}:profile works correctly

# ── Check cluster node topology ──
redis-cli CLUSTER NODES
# Output shows all nodes, their roles, and which hash slot ranges they own:
# a1b2c3... 10.0.0.1:6379 master - 0 1620000000000 1 connected 0-5460
# d4e5f6... 10.0.0.2:6379 master - 0 1620000000001 2 connected 5461-10922
# g7h8i9... 10.0.0.3:6379 master - 0 1620000000002 3 connected 10923-16383

# ── Check Sentinel state ──
redis-cli -p 26379 SENTINEL MASTERS  # Port 26379 is the default Sentinel port
# Shows the primary being monitored and its current state
# 'num-slaves': 2 — how many replicas exist
# 'num-other-sentinels': 2 — how many other Sentinel nodes are watching
# 'quorum': 2 — how many must agree to trigger failover
▶ Output
# replication INFO
role:master
connected_slaves:2
slave0:ip=10.0.0.2,port=6379,state=online,offset=1234567,lag=0

# hash slots
(integer) 11543
(integer) 6452
(integer) 4847
(integer) 4847

# cluster nodes
a1b2c3 10.0.0.1:6379 master - connected 0-5460
d4e5f6 10.0.0.2:6379 master - connected 5461-10922
⚠️
Pro Tip: Interviewers love asking 'what's the difference between Redis Sentinel and Redis Cluster?' The clean answer: Sentinel = high availability for a single dataset (automatic failover), Cluster = horizontal sharding for a dataset too large for one node (plus built-in HA). They solve different problems. Many companies run both patterns — Cluster for scale, with each shard having Sentinel-like replica failover built in.
| Aspect | RDB Persistence | AOF Persistence |
| --- | --- | --- |
| Mechanism | Point-in-time snapshots (fork + dump) | Logs every write command sequentially |
| Data loss risk | Up to last snapshot interval (minutes) | Up to 1 second (with everysec config) |
| File size | Compact binary format — small | Grows over time — needs periodic rewrite |
| Restart/restore speed | Very fast — single file load | Slower — replays all commands |
| CPU/Memory impact | fork() spike during snapshot | Continuous small overhead per write |
| Best for | Acceptable data loss, fast restarts | Near-zero data loss requirement |
| Use together? | Yes — Redis supports RDB+AOF simultaneously for best of both worlds; AOF used for recovery, RDB for backups | |

🎯 Key Takeaways

  • Redis is a data structure server, not just a cache — its native operations on Sorted Sets, Hashes, and Lists let you move computation to the data layer, which is its real superpower.
  • Redis persistence is opt-in — RDB gives compact snapshots with higher data-loss risk, AOF gives near-zero loss at the cost of file size. Use both together in production for defence-in-depth.
  • MULTI/EXEC does NOT roll back on runtime errors — use Lua scripts (EVAL) for conditional atomic operations like rate limiting, where a race condition between separate commands would cause bugs.
  • In Redis Cluster, multi-key commands only work if all keys share the same hash slot — control this with hash tags ({user:1001}:key) to group related keys onto the same shard deliberately.

⚠ Common Mistakes to Avoid

  • Mistake 1: Using KEYS * in production — The KEYS command scans the entire keyspace and blocks all other commands while it runs (remember: single-threaded). On a Redis instance with 10 million keys, this can block for seconds, causing cascading timeouts. Fix: always use SCAN with a COUNT hint and cursor-based iteration for any key discovery in production code.
  • Mistake 2: Not setting TTLs on cached keys — Developers SET thousands of keys during a traffic spike with no expiry. Redis fills to maxmemory, the eviction policy kicks in, and it starts evicting random keys — possibly evicting important non-cache data. Fix: always pass EX (seconds) or PX (milliseconds) when caching. Make TTL a first-class requirement in your caching strategy, not an afterthought.
  • Mistake 3: Assuming MULTI/EXEC rolls back on error — A developer wraps a payment flow in MULTI/EXEC, one command fails due to a wrong data type, and the other commands still execute — leaving data in a partially-updated state. Redis does NOT roll back on runtime errors (only on syntax errors before EXEC). Fix: use Lua scripts for operations that require true all-or-nothing atomicity with conditional logic, or validate all inputs before entering the MULTI block.
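The cursor-based iteration recommended in Mistake 1 can be sketched without a live Redis. scan_step and scan_all are hypothetical helpers over a plain list; real SCAN returns an opaque cursor into the hash table, COUNT is only a hint, and entries can occasionally be returned more than once — none of which this toy version models. What it does show is the shape of the loop: small batches, with cursor 0 signalling completion.

```python
def scan_step(keys, cursor, count=10):
    """One SCAN-like step: return (next_cursor, batch); cursor 0 means done."""
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(keys) else next_cursor), batch

def scan_all(keys, count=10):
    """Drive the cursor loop the way production code should drive SCAN."""
    cursor, found = 0, []
    while True:
        cursor, batch = scan_step(keys, cursor, count)
        found.extend(batch)   # process a small batch, then yield control
        if cursor == 0:       # iteration complete
            break
    return found
```

Each step touches only `count` keys, so other commands interleave between batches — the property that makes SCAN production-safe where KEYS is not.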

Interview Questions on This Topic

  • Q: Explain how you'd implement a rate limiter using Redis. What commands would you use and why? What race conditions could occur with a naive implementation?
  • Q: Your Redis instance is running out of memory. Walk me through how you'd diagnose what's taking up space, which eviction policy you'd choose and why, and how you'd prevent this from happening again.
  • Q: A developer on your team suggests using Redis Pub/Sub as a reliable message queue for order processing. What would you tell them? What are the limitations of Pub/Sub that make it unsuitable, and what Redis feature would you recommend instead?

Frequently Asked Questions

Is Redis single-threaded and does that make it slow?

Redis uses a single thread for command execution, which means no lock contention and perfectly predictable performance. It's not slow — it handles millions of operations per second because memory access is orders of magnitude faster than disk I/O. Since Redis 6.0, network I/O is handled by multiple threads, but command processing remains single-threaded by design.

What is the difference between Redis cache eviction policies?

The key ones: allkeys-lru evicts the least-recently-used key across all keys (best for a pure cache), volatile-lru evicts only keys with a TTL set (safe for mixed workloads where some keys must never be evicted), and noeviction rejects new writes when full (useful when you'd rather crash loudly than silently lose data). Always pair maxmemory-policy with a maxmemory limit, otherwise no eviction ever triggers.

When should I use Redis Pub/Sub versus Redis Streams?

Use Pub/Sub when you need fire-and-forget real-time messaging where losing messages is acceptable — live notifications, chat, presence indicators. Use Redis Streams when delivery guarantees matter: Streams persist messages, support consumer groups with acknowledgement, and let offline consumers catch up. For anything business-critical like order processing or payment events, Streams is the right choice.
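The Streams-versus-Pub/Sub difference boils down to persistence plus acknowledgement, which a toy model makes concrete. ToyStream is a made-up class: real Streams have timestamp-sequence entry IDs (like 1526919030474-0), per-consumer pending-entries lists, and recovery commands such as XCLAIM — here a dict per group stands in for all of that.

```python
class ToyStream:
    """Sketch of Redis Streams semantics: entries persist, reads are tracked
    per consumer group, and an entry stays 'pending' until acknowledged."""
    def __init__(self):
        self.entries = []    # (id, payload); never dropped just because read
        self.pending = {}    # group -> {id: payload} awaiting an ack
        self.delivered = {}  # group -> index of the next undelivered entry

    def xadd(self, payload):
        entry_id = len(self.entries)        # real Streams use time-seq IDs
        self.entries.append((entry_id, payload))
        return entry_id

    def xreadgroup(self, group, count=1):
        start = self.delivered.get(group, 0)
        batch = self.entries[start:start + count]
        self.delivered[group] = start + len(batch)
        self.pending.setdefault(group, {}).update(dict(batch))  # track until ack
        return batch

    def xack(self, group, entry_id):
        self.pending.get(group, {}).pop(entry_id, None)  # ack: no redelivery needed
```

Contrast with Pub/Sub: a subscriber that is offline at publish time simply never sees the message. Here, an entry added before any consumer reads is still waiting in `entries`, and an unacked entry stays in `pending` where a recovery process can find it.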

🔥
TheCodeForge Editorial Team · Verified Author

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
