Encryption at Rest and in Transit Explained — How Data Stays Safe
Every time you swipe your card at a coffee shop, your bank account number travels across wires and sits on servers around the world. If that data were plain text — readable by anyone who intercepts it or gains access to a hard drive — a single breach could expose millions of customers in one shot. Encryption is the reason that does not happen (when done right). It is not optional polish; it is the foundation every production system must have before it ships.
Encryption at Rest — Protecting Data When It Is Sitting Still
Encryption at rest means that any data written to persistent storage — a database, a file system, an S3 bucket, a backup tape — is stored in an unreadable ciphertext form. Even if an attacker physically pulls a hard drive from a decommissioned server or dumps a database file, all they see is noise without the decryption key.
The most common approach is AES-256 (Advanced Encryption Standard with a 256-bit key). It is symmetric: the same key encrypts and decrypts. Speed is excellent because AES is hardware-accelerated on virtually every modern CPU.
Key management is where most teams get it wrong. Encrypting data with a key stored right next to that data is like locking your house and leaving the key under the mat. Use a dedicated Key Management Service — AWS KMS, Google Cloud KMS, or HashiCorp Vault — so the key and the data live in completely separate trust zones. Rotate keys on a schedule (annually at minimum, or immediately after any suspected compromise).
import os
import base64
import json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# ── Key Management ──────────────────────────────────────────────────────────
# In production this key would come from AWS KMS / GCP KMS / HashiCorp Vault.
# NEVER hard-code it like this outside of learning examples.
ENCRYPTION_KEY = AESGCM.generate_key(bit_length=256)  # 32 cryptographically random bytes

def encrypt_record(plaintext_record: dict) -> dict:
    """
    Encrypts a user record before writing it to persistent storage.
    Returns a dict containing the ciphertext and the nonce (both base64-encoded).
    The nonce does NOT need to be secret — it just must be unique per encryption.
    """
    aesgcm = AESGCM(ENCRYPTION_KEY)
    # A nonce (Number Used Once) prevents two identical plaintexts
    # from producing the same ciphertext — critical for security.
    nonce = os.urandom(12)  # 96-bit nonce is the GCM standard
    plaintext_bytes = json.dumps(plaintext_record).encode("utf-8")
    # AESGCM also produces an authentication tag automatically.
    # This means decryption will FAIL if the ciphertext has been tampered with.
    ciphertext = aesgcm.encrypt(nonce, plaintext_bytes, None)  # None = no associated data
    return {
        "ciphertext": base64.b64encode(ciphertext).decode("utf-8"),
        "nonce": base64.b64encode(nonce).decode("utf-8"),
    }

def decrypt_record(encrypted_blob: dict) -> dict:
    """Reverses encrypt_record. Raises InvalidTag if data was tampered with."""
    aesgcm = AESGCM(ENCRYPTION_KEY)
    ciphertext = base64.b64decode(encrypted_blob["ciphertext"])
    nonce = base64.b64decode(encrypted_blob["nonce"])
    plaintext_bytes = aesgcm.decrypt(nonce, ciphertext, None)
    return json.loads(plaintext_bytes.decode("utf-8"))

# ── Demo ────────────────────────────────────────────────────────────────────
user_record = {
    "user_id": "usr_8821",
    "email": "alice@example.com",
    "credit_card_last4": "4242",
    "ssn": "123-45-6789"  # Sensitive PII — must never sit unencrypted on disk
}

print("Original record:")
print(json.dumps(user_record, indent=2))

encrypted = encrypt_record(user_record)
print("\nWhat gets written to the database (ciphertext):")
print(json.dumps(encrypted, indent=2))

decrypted = decrypt_record(encrypted)
print("\nDecrypted record (what the app sees after reading from DB):")
print(json.dumps(decrypted, indent=2))
Original record:
{
  "user_id": "usr_8821",
  "email": "alice@example.com",
  "credit_card_last4": "4242",
  "ssn": "123-45-6789"
}

What gets written to the database (ciphertext):
{
  "ciphertext": "base64-encoded noise — unreadable without the key",
  "nonce": "base64-encoded 12-byte nonce — unique per record, safe to store in the clear"
}

Decrypted record (what the app sees after reading from DB):
{
  "user_id": "usr_8821",
  "email": "alice@example.com",
  "credit_card_last4": "4242",
  "ssn": "123-45-6789"
}
Encryption in Transit — Protecting Data While It Moves
Encryption in transit means data is wrapped in a cryptographic tunnel for every hop it takes across a network — browser to server, microservice to microservice, app server to database. Without it, anyone with access to the network path (a shared Wi-Fi router, a malicious ISP, a compromised internal switch) can read every byte using a packet sniffer.
TLS (Transport Layer Security) is the protocol that handles this. It is what the padlock in your browser address bar represents. TLS 1.3 is the current standard; TLS 1.0 and 1.1 are deprecated and should be actively disabled.
TLS does three things simultaneously: it encrypts the payload so eavesdroppers cannot read it, it verifies the server's identity via a certificate (preventing man-in-the-middle attacks), and it ensures integrity so tampered data is detected and rejected.
For internal microservice traffic, many teams mistakenly assume their private VPC is safe enough to skip TLS. It is not. Lateral movement — where an attacker compromises one internal service and sniffs traffic — is one of the most common post-breach attack patterns. Mutual TLS (mTLS) takes this further by requiring both client and server to present certificates, making it the right choice for service mesh architectures.
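To make "both sides present certificates" concrete at the socket level, here is a minimal sketch using Python's standard ssl module. The file paths (server.crt, server.key, clients-ca.crt) are placeholders for whatever your internal PKI issues; in a real service mesh the sidecar proxy builds the equivalent of this context for you:

```python
import ssl

def build_mtls_server_context(certfile=None, keyfile=None, client_ca=None):
    """Build a server-side TLS context that REQUIRES a client certificate (mTLS).

    certfile / keyfile: this server's own identity (placeholder paths).
    client_ca: the CA that signed your clients' certificates.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 handshakes
    ctx.verify_mode = ssl.CERT_REQUIRED            # drop any client with no/invalid cert
    if certfile and keyfile:
        ctx.load_cert_chain(certfile, keyfile)     # server's certificate + private key
    if client_ca:
        ctx.load_verify_locations(client_ca)       # trust anchor for client certs
    return ctx

ctx = build_mtls_server_context()
# CERT_REQUIRED is the whole point of mTLS: the handshake itself fails
# unless the client proves its identity, before any application data flows.
```

The only delta from a plain TLS server is `verify_mode = ssl.CERT_REQUIRED` plus a CA bundle for client certificates; everything else is the same handshake machinery.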
# ── Part 1: HTTPS Server with TLS ────────────────────────────────────────────
# Requires: pip install flask pyopenssl
# Certificate generation (run once in terminal):
#   openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt \
#     -days 365 -nodes -subj "/CN=localhost"

from flask import Flask, jsonify
import ssl

app = Flask(__name__)

@app.route("/api/user-profile")
def get_user_profile():
    # Sensitive data — safe to serve because the transport is encrypted
    return jsonify({
        "user_id": "usr_8821",
        "email": "alice@example.com",
        "account_tier": "premium"
    })

if __name__ == "__main__":
    # Build an SSL context — this is what activates TLS on the server socket
    ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Load the certificate (public) and private key
    ssl_context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    # Enforce TLS 1.2 minimum — explicitly reject older, broken versions
    ssl_context.minimum_version = ssl.TLSVersion.TLSv1_2
    print("Server running on https://localhost:5000 — all traffic is encrypted")
    app.run(host="0.0.0.0", port=5000, ssl_context=ssl_context)

# ── Part 2: Client that enforces certificate verification ────────────────────
# ALWAYS verify the server certificate. Never set verify=False in production.
import requests

def fetch_user_profile_securely(base_url: str, ca_cert_path: str) -> dict:
    """
    Fetches user data over HTTPS.
    ca_cert_path: path to the CA cert that signed the server certificate.
    In production this would be the system CA bundle or your internal PKI cert.
    """
    response = requests.get(
        url=f"{base_url}/api/user-profile",
        verify=ca_cert_path,  # Validates server cert against this CA — prevents MITM
        timeout=10            # Always set a timeout; hanging connections are a DoS risk
    )
    response.raise_for_status()  # Raises exception on 4xx/5xx
    return response.json()

# Example call (would work against the server above):
# profile = fetch_user_profile_securely("https://localhost:5000", "server.crt")
# print(profile)
# When the client calls fetch_user_profile_securely():
# TLS handshake completes — server identity verified against CA cert
# Response received over encrypted channel:
{
  "user_id": "usr_8821",
  "email": "alice@example.com",
  "account_tier": "premium"
}
System Design — Putting Both Layers Together in a Real Architecture
Understanding each layer in isolation is not enough. Real systems need both, and the design decisions around where each layer lives matter enormously.
Consider a typical web application: a React frontend talks to an API gateway, which routes to microservices, which read from a PostgreSQL database and write blobs to S3. Every arrow in that diagram is a transit path; every box is a rest location. You need TLS on every arrow and encryption on every box.
A common pattern is the envelope encryption model: your actual data is encrypted with a Data Encryption Key (DEK). The DEK itself is then encrypted with a Key Encryption Key (KEK) that lives in a KMS. This means your KMS never sees the raw data volume — it only ever handles tiny keys — and you can rotate the KEK without re-encrypting every record; just re-wrap the DEK.
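The mechanics are easy to see in a local sketch using AES-GCM (from the cryptography package) for both layers. In a real system the KEK operations below would each be a single KMS API call and the KEK would never leave the KMS; a local key stands in here purely so the wrap/re-wrap flow is visible:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def envelope_encrypt(kek: bytes, plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)        # fresh DEK per record/object
    data_nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(data_nonce, plaintext, None)
    # "Wrap" the DEK with the KEK — in production this is one KMS call,
    # and the KEK itself never leaves the KMS hardware.
    wrap_nonce = os.urandom(12)
    wrapped_dek = AESGCM(kek).encrypt(wrap_nonce, dek, None)
    return {"ciphertext": ciphertext, "data_nonce": data_nonce,
            "wrapped_dek": wrapped_dek, "wrap_nonce": wrap_nonce}

def envelope_decrypt(kek: bytes, blob: dict) -> bytes:
    dek = AESGCM(kek).decrypt(blob["wrap_nonce"], blob["wrapped_dek"], None)
    return AESGCM(dek).decrypt(blob["data_nonce"], blob["ciphertext"], None)

def rewrap(old_kek: bytes, new_kek: bytes, blob: dict) -> dict:
    """KEK rotation: only the tiny wrapped DEK is re-encrypted.
    The bulk ciphertext — possibly terabytes — is never touched."""
    dek = AESGCM(old_kek).decrypt(blob["wrap_nonce"], blob["wrapped_dek"], None)
    new_nonce = os.urandom(12)
    return {**blob, "wrap_nonce": new_nonce,
            "wrapped_dek": AESGCM(new_kek).encrypt(new_nonce, dek, None)}
```

Note that `rewrap` never looks at `blob["ciphertext"]` at all: rotating the KEK is O(1) per record regardless of how much data the DEK protects, which is exactly why the pattern scales.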
For transit, use a service mesh like Istio or Linkerd to enforce mTLS automatically between every microservice pair. This moves the certificate management burden off individual teams and into infrastructure, and gives you a single policy point to audit.
# ── Envelope Encryption — AWS Architecture Reference ─────────────────────────
# This CloudFormation-style pseudo-config shows the layered key hierarchy.
# It is conceptual but reflects real AWS KMS + S3 + RDS patterns.

KeyManagement:
  CustomerMasterKey:              # Lives ONLY inside AWS KMS — never exported
    alias: "alias/payments-service-cmk"
    rotation: enabled             # AWS auto-rotates annually
    key_policy: least_privilege   # Only the payments-service IAM role can use it

Storage:
  UserDatabase:                   # Amazon RDS PostgreSQL
    encryption_at_rest: true
    encryption_type: envelope
    # RDS generates a DEK per tablespace, wraps it with the CMK above.
    # If the DB volume is stolen, the attacker has ciphertext but no DEK.
    # If the DEK is somehow leaked, it still cannot be unwrapped without KMS access.
    kms_key_ref: "alias/payments-service-cmk"
    tls_in_transit:
      enforce: true
      minimum_tls_version: "1.2"  # Reject 1.0/1.1 connections at engine level
      certificate_authority: "AWS ACM"

  DocumentStore:                  # Amazon S3
    default_encryption: "aws:kms" # Every object encrypted on write automatically
    kms_key_ref: "alias/payments-service-cmk"
    bucket_policy:
      deny_non_https: true        # Bucket policy rejects any HTTP (unencrypted) PUT/GET

Transit:
  APIGateway:
    protocol: HTTPS
    tls_policy: "TLS_1_2"         # AWS managed policy — auto-rejects older handshakes
    certificate: "ACM-managed"    # Auto-renewed, no manual cert rotation

  MicroserviceMesh:
    type: Istio
    mtls_mode: STRICT             # PERMISSIVE allows plaintext fallback — never use in prod
    # STRICT means: if a service does not present a valid cert, the connection is dropped.
    # This stops lateral movement — a compromised sidecar cannot sniff other services.

KeyRotationPolicy:
  cmk_rotation: "annually-automatic"
  dek_rotation: "on-cmk-rotation"                   # Re-wrap DEKs when CMK rotates
  tls_certificate_renewal: "60-days-before-expiry"  # Automated via ACM
  incident_rotation: "immediate"                    # Runbook trigger on suspected key compromise
# When applied:
# - Every byte written to RDS is AES-256 encrypted before hitting disk.
# - Every byte written to S3 is encrypted with a KMS-managed key.
# - All network traffic between services uses mTLS — no plaintext paths exist.
# - The CMK never leaves KMS hardware — even AWS engineers cannot access it.
| Aspect | Encryption at Rest | Encryption in Transit |
|---|---|---|
| What it protects against | Stolen disks, DB dumps, insider access to storage | Network eavesdropping, man-in-the-middle attacks |
| When it applies | Data is idle — written to disk, S3, backup tape | Data is moving — API calls, DB queries, file transfers |
| Primary protocol / algorithm | AES-256-GCM (symmetric) | TLS 1.3 (asymmetric handshake, symmetric session) |
| Key storage | KMS (AWS/GCP/Vault) — separate from data | Private key stays on the server; certificate issued by a Certificate Authority (CA) |
| Managed cloud option | AWS KMS, GCP CMEK, Azure Key Vault | AWS ACM, GCP Certificate Manager, Let's Encrypt |
| Performance cost | Minimal — AES is hardware-accelerated | Slight latency on TLS handshake; negligible per request after |
| Biggest mistake teams make | Storing encryption key next to the data it protects | Setting verify=False in HTTP clients to skip cert validation |
| Compliance relevance | PCI-DSS Req 3, HIPAA §164.312(a)(2)(iv) | PCI-DSS Req 4, HIPAA §164.312(e)(1) |
🎯 Key Takeaways
- At-rest encryption protects against physical theft and unauthorized storage access — AES-256-GCM is the right algorithm, and your key must live in a separate KMS, never alongside the data.
- In-transit encryption (TLS 1.2+) protects against network interception — always enforce certificate validation on the client side; verify=False is a security vulnerability, not a convenience setting.
- Envelope encryption is the production-grade pattern: encrypt data with a DEK, wrap the DEK with a KEK in KMS. This handles large data volumes, keeps keys out of your storage layer, and makes key rotation cheap.
- Compliance is a floor, not a ceiling — PCI-DSS and HIPAA mandate both layers, but best-practice architecture (mTLS between microservices, CMK rotation, deny-non-HTTPS bucket policies) goes further and gives you defense-in-depth when one layer is eventually compromised.
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Storing the encryption key in the same database as the encrypted data — Symptom: A single database breach gives the attacker both the ciphertext and the key, making encryption completely useless — Fix: Always store keys in a dedicated KMS (AWS KMS, HashiCorp Vault). The key and the data it protects must live in separate trust boundaries with separate access controls.
- ✕ Mistake 2: Setting verify=False on HTTP clients to silence SSL errors — Symptom: Development workaround that gets copy-pasted to production; TLS connection is made but the certificate is never validated, leaving you wide open to man-in-the-middle attacks with no warning — Fix: Fix the underlying cert issue (expired cert, wrong hostname, self-signed CA not trusted). In internal systems, add your internal CA cert to the trust store instead of disabling verification.
- ✕ Mistake 3: Encrypting at the application layer but forgetting the database connection is plaintext — Symptom: Data is encrypted on disk and decrypted in the app, but the cleartext travels over a plain TCP connection between app server and database on the internal network — Fix: Require TLS at the database server (hostssl entries in pg_hba.conf, or rds.force_ssl on RDS PostgreSQL; require_secure_transport=ON on MySQL) and verify that the connection string in your app explicitly enables TLS (sslmode=verify-full for PostgreSQL clients, ssl-mode=REQUIRED for MySQL clients). Run a packet capture on your internal network during testing to confirm no plaintext credentials or query results are visible.
Interview Questions on This Topic
- Q: What is the difference between encryption at rest and encryption in transit, and can you walk me through where each applies in a three-tier web application?
- Q: Explain envelope encryption — why do we encrypt the encryption key rather than just using a single master key for all data?
- Q: A developer on your team says "our database is inside a private VPC with no public internet access, so TLS between the app and database is unnecessary overhead." How do you respond, and what specific attack does their assumption leave open?
Frequently Asked Questions
Is HTTPS enough to protect my users' data?
HTTPS (TLS) only protects data while it is moving between the browser and your server. Once data lands in your database or on disk, HTTPS provides zero protection. You need encryption at rest as well — otherwise a database breach exposes everything in plaintext. Think of HTTPS as locking the front door; encryption at rest is locking the safe inside.
What is the difference between TLS 1.2 and TLS 1.3?
TLS 1.3 is faster (one round-trip handshake vs two in TLS 1.2) and more secure — it removed support for weak cipher suites like RC4 and 3DES that TLS 1.2 still technically allows. It also enables 0-RTT resumption for repeat connections. You should serve TLS 1.3 by default and support TLS 1.2 as a fallback for older clients. TLS 1.0 and 1.1 should be disabled entirely.
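In Python, enforcing that floor on a client is one attribute on the default context (a minimal sketch; server-side contexts take the same `minimum_version` attribute, as shown in the server example earlier):

```python
import ssl

# create_default_context() already turns on certificate verification,
# hostname checking, and sensible cipher defaults.
ctx = ssl.create_default_context()

# Pin the floor to TLS 1.2 so TLS 1.0/1.1 handshakes are rejected outright.
# (Recent Python versions already default to this; setting it makes the
# policy explicit and survives older runtimes.)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
assert ctx.check_hostname  # hostname verification is on by default
```

Any socket wrapped with this context will refuse to complete a handshake with a server that only speaks the deprecated versions.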
Do I need to implement encryption myself or can I let AWS/GCP handle it?
For most applications, enabling the cloud provider's managed encryption (AWS KMS with RDS, S3-SSE-KMS, ACM for TLS) is the right call — it is battle-tested, audit-logged, and handles key rotation. You would only implement application-layer encryption yourself when you need end-to-end encryption where even the cloud provider cannot access the cleartext, which is required in some healthcare and financial regulations. The two approaches are not mutually exclusive; many compliance-sensitive systems use both.
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.