Polyglot Persistence
- Polyglot persistence: use the right database for each access pattern, not one database for everything.
- The hardest problem: keeping data consistent across multiple systems.
- CDC (Change Data Capture) + Kafka is the standard pattern for syncing across systems.
Polyglot persistence is the practice of using different database technologies for different storage needs within the same application. User profiles in PostgreSQL, session data in Redis, product search in Elasticsearch, social graph in Neo4j — each database used where it fits best rather than forcing everything into one system.
Database Types and Their Sweet Spots
```python
# Package: io.thecodeforge.python.system_design
# Typical polyglot architecture for an e-commerce system:

# PostgreSQL — relational: users, orders, products, payments
# - ACID transactions: order.create() and inventory.decrement() atomically
# - Complex queries: reporting, joins between entities
# - Example: user creates an order

# Redis — key-value: sessions, caching, rate limiting
# - Sub-millisecond reads: session token → user object
# - TTL-based expiry: sessions expire automatically
# - Example: cache product page for 10 minutes

# Elasticsearch — search: product search with relevance
# - Full-text search with typo tolerance
# - Faceted search: filter by price, brand, rating
# - Example: user searches 'wireless headphones'

# MongoDB (document) — product catalogue
# - Flexible schema: different products have different attributes
# - Laptop has CPU, RAM; T-shirt has size, colour
# - Example: store varied product attributes without schema migration

# Neo4j (graph) — social features, recommendations
# - 'Users who bought X also bought Y'
# - Friend-of-friend queries
# - Example: find all users within 3 hops in social graph

# InfluxDB (time-series) — metrics, analytics
# - Write-optimised for timestamped data
# - Example: page views, API response times
```
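To make the Redis row concrete, here is a minimal cache-aside sketch for the "cache product page for 10 minutes" case. It is a hedged illustration: a small dict-backed `FakeRedis` class stands in for a real Redis client so the example runs standalone, and `load_product_from_db` is a hypothetical relational lookup; with redis-py the equivalent calls would be `setex` and `get`.

```python
import time

class FakeRedis:
    """Stand-in for a Redis client: values with TTL-based expiry."""

    def __init__(self):
        self._store = {}

    def setex(self, key, ttl_seconds, value):
        # Store the value with an absolute expiry time, like Redis SETEX.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: drop lazily on read
            return None
        return value

cache = FakeRedis()
db_calls = 0

def load_product_from_db(product_id):
    # Hypothetical (slow) relational lookup, e.g. a PostgreSQL SELECT.
    global db_calls
    db_calls += 1
    return {'id': product_id, 'name': 'Wireless Headphones'}

def get_product(product_id, ttl=600):
    key = f'product:{product_id}'
    cached = cache.get(key)
    if cached is not None:
        return cached  # cache hit: fast key-value path
    product = load_product_from_db(product_id)  # cache miss: go to the database
    cache.setex(key, ttl, product)              # cache for 10 minutes
    return product

get_product(42)
get_product(42)  # second read is served from the cache; the DB is hit once
```

The point of the sketch is the division of labour: the relational store stays the source of truth, while Redis absorbs repeated reads and expires entries automatically.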
Data Consistency Across Systems
The hardest part of polyglot persistence: keeping data consistent when it exists in multiple systems.
```python
# When a product is created: must update PostgreSQL AND Elasticsearch

# Option 1: Dual write (naive — risks partial failure)
async def create_product_bad(product):
    await postgres.insert('products', product)      # succeeds
    await elasticsearch.index('products', product)  # what if this fails?
    # Product exists in Postgres but not in search — inconsistency

# Option 2: Write to primary, sync via CDC (Change Data Capture)
# Debezium reads PostgreSQL WAL → publishes to Kafka → Elasticsearch consumer
# Primary write is source of truth; Elasticsearch is eventually consistent

# Option 3: Outbox pattern
async def create_product_outbox(product):
    async with postgres.transaction():
        await postgres.insert('products', product)
        await postgres.insert('outbox', {
            'event': 'product_created',
            'data': product,
            'processed': False,
        })
    # Separate process reads outbox and syncs to Elasticsearch
    # If it fails, retries safely (at-least-once delivery)

print('Outbox pattern ensures eventual consistency')
```
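The "separate process" that drains the outbox can be sketched as a simple poll-and-mark loop. This is an illustrative sketch, not a production relay: in-memory lists stand in for the Postgres `outbox` table and the Elasticsearch client, and `sync_to_search` is a hypothetical name for the indexing call.

```python
# Stand-in for rows in the Postgres outbox table.
outbox = [
    {'id': 1, 'event': 'product_created', 'data': {'sku': 'A1'}, 'processed': False},
    {'id': 2, 'event': 'product_created', 'data': {'sku': 'B2'}, 'processed': False},
]
search_index = []  # stand-in for the Elasticsearch index

def sync_to_search(row):
    # In production this would be an Elasticsearch index call,
    # which may raise on network errors.
    search_index.append(row['data'])

def relay_outbox():
    """One polling pass: push unprocessed rows, mark them done on success."""
    for row in outbox:
        if row['processed']:
            continue  # already delivered; skipping makes repeat passes safe
        try:
            sync_to_search(row)       # at-least-once: duplicates are possible
            row['processed'] = True   # mark done only AFTER a successful sync
        except Exception:
            pass  # leave unprocessed; the next polling pass retries it

relay_outbox()
```

Because a row is only marked processed after the sync succeeds, a crash between the two steps causes a redelivery rather than a lost event, which is exactly the at-least-once guarantee the outbox pattern trades for dual-write inconsistency. Consumers therefore need to tolerate duplicates (e.g. by indexing with a stable document ID).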
| Database Type | Best For | Example | Not Good For |
|---|---|---|---|
| Relational (PostgreSQL) | Transactional data, complex queries | Orders, users, payments | Full-text search, graph traversal |
| Document (MongoDB) | Flexible schema, nested data | Product catalogue, CMS | Complex multi-document transactions |
| Key-Value (Redis) | Caching, sessions, queues | Session store, rate limiter | Complex queries, large datasets |
| Search (Elasticsearch) | Full-text search, analytics | Product search, log analytics | Primary storage, ACID transactions |
| Graph (Neo4j) | Relationships, recommendations | Social graph, fraud detection | Write-heavy, simple key-value lookups |
| Time-Series (InfluxDB) | Timestamped metrics | Monitoring, IoT | Relational data, flexible schema |
🎯 Key Takeaways
- Polyglot persistence: use the right database for each access pattern, not one database for everything.
- The hardest problem: keeping data consistent across multiple systems.
- CDC (Change Data Capture) + Kafka is the standard pattern for syncing across systems.
- Outbox pattern ensures at-least-once delivery without dual-write inconsistency.
- Operational complexity increases with each database technology — justify each addition.
Interview Questions on This Topic
- Q: What is polyglot persistence and what are its benefits and drawbacks?
- Q: How do you maintain consistency when data exists in both PostgreSQL and Elasticsearch?
- Q: When would you choose Redis over a relational database for session storage?
Frequently Asked Questions
When is polyglot persistence worth the complexity?
When forcing all data into one database creates significant pain — a relational database struggling with full-text search, a document database used for transactional data with complex consistency requirements. The complexity of polyglot is often preferable to the workarounds required to bend one database to all use cases. Start simple (one database) and add others only when you have a clear performance or modelling problem.