Junior 5 min · March 28, 2026

Python print() — Print Buffering Drops Docker Output

Python's 8KB block buffering inside Docker containers can hide print() output for 30 minutes at a time — fix it with PYTHONUNBUFFERED=1 or flush=True and stop chasing false deadlocks.

Naren · Founder
Plain-English first. Then code. Then the interview question.
 ● Production Incident 🔎 Debug Guide
Quick Answer
  • print() converts values to strings and writes them to stdout, which Python buffers before handing to the OS — output is not necessarily displayed immediately
  • Five parameters: *objects (values), sep (between values), end (after last value), file (output destination), flush (force write)
  • flush=True is non-negotiable in Docker, piped, or containerised environments — without it, output may never appear before a crash
  • f-strings are faster than .format() and their templates are syntax-checked at compile time — always prefer them for embedded variable formatting
  • print() writes to stdout by default — errors belong on stderr via file=sys.stderr to keep piped data clean
  • Biggest mistake: using print() as production logging — it has no timestamps, no severity levels, no way to disable output without code changes
Plain-English First

Think of print() as a PA system announcement in a big office building. Your Python program is doing work quietly in a back room, and print() is the moment someone walks to the microphone and broadcasts what's happening to everyone in the building. Without it, work still gets done — but nobody outside that room has any idea what's going on. The PA system doesn't change the work; it just makes it visible. That's all print() does — it makes your program's internal state visible to you.

Every Python developer has done it: spent 45 minutes convinced there's a logic bug, only to realise print() was buffering output and they were reading stale data the entire time. Not a beginner mistake — I've watched a senior engineer waste a morning on this during a live incident because stdout was line-buffered inside a Docker container and nothing was flushing to the log aggregator.

print() looks like the simplest thing in Python. One function, one job — show something on screen. But it has five parameters most beginners never touch, a buffering model that silently eats your debug output at the worst possible moment, and formatting options that, once you know them, make you wonder how you ever lived without them. The gap between knowing print() exists and actually knowing how it works is wider than it looks.

By the end of this article you'll be able to use every parameter print() accepts, format output cleanly using f-strings and the sep and end arguments, write output to files instead of the terminal, force-flush buffered output so it actually appears when you need it, and spot the exact mistakes that cause beginners to think their code isn't running when it's running just fine.

What print() Actually Does — and the Buffering Trap Nobody Warns You About

Before you can use print() well, you need a mental model of what happens when you call it. You type print('hello') and a word appears on screen. Simple. But there are three invisible steps between those two events: Python converts your value to a string, hands it to the operating system's standard output stream (stdout), and the OS decides when to actually display it. That last step is the one that bites people.

Stdout is buffered. That means Python doesn't necessarily write your output to the screen the instant you call print(). It stacks up output in memory and flushes it in chunks — usually when the buffer fills up, when the program ends cleanly, or when a newline character is written. In an interactive terminal, Python uses line-buffering, so you see output after each newline. But pipe that program's output to a file, run it inside Docker, or wrap it in a subprocess, and you get full block-buffering. Your print() calls appear to do nothing until the program exits.
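Before reaching for flush=True, you can ask the stream itself which situation you're in. A minimal sketch — the helper name describe_stdout_buffering is mine, not a standard API:

```python
import sys

def describe_stdout_buffering() -> str:
    """Report roughly how this process's stdout will buffer print() output."""
    if sys.stdout.isatty():
        # Attached to a terminal: line-buffered, output appears per newline.
        return "line-buffered (terminal)"
    # Piped, redirected, or inside a container: block-buffered, unless
    # PYTHONUNBUFFERED=1 or `python -u` forced unbuffered mode.
    return "block-buffered (pipe/file/container)"

print(describe_stdout_buffering())
```

Run it directly in a terminal and then again piped through `cat` — the answer changes, which is exactly the trap described above.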

I've seen this burn people specifically in long-running data pipeline scripts — a developer adds print() calls for progress reporting, runs the script piped to tee to capture logs, and sees nothing for 30 minutes then gets a wall of text all at once when the script finishes. The fix is the flush parameter, which we'll get to. But knowing this buffering model exists is the first thing you need, because without it the symptom looks exactly like a hung process.

BufferingDemo.py
# io.thecodeforge — Python tutorial

import time
import sys

# Simulates a long-running data import job.
# Without flush=True, none of these progress lines appear
# until the entire job finishes — which is useless for monitoring.

def run_import_job(total_records: int) -> None:
    print(f"Starting import of {total_records} records...")

    for batch_number in range(1, total_records + 1):
        # Simulate work being done on each record
        time.sleep(0.1)

        # flush=True forces Python to push this line to stdout RIGHT NOW
        # instead of waiting for the buffer to fill.
        # Without this, in a piped or containerised environment,
        # you'd see nothing until the program exits.
        print(
            f"Processed record {batch_number}/{total_records}",
            flush=True  # <-- this is the line that makes monitoring actually work
        )

    # '\n' at the start gives a blank line after the last progress entry
    # so the completion message stands visually apart.
    print("\nImport complete.", flush=True)


if __name__ == "__main__":
    run_import_job(total_records=5)
Output
Starting import of 5 records...
Processed record 1/5
Processed record 2/5
Processed record 3/5
Processed record 4/5
Processed record 5/5
Import complete.
Production Trap: Silent Buffering in Docker
Running Python inside Docker with output piped to a log driver? Add PYTHONUNBUFFERED=1 to your environment variables (or run Python with the -u flag). Without it, your container logs stay empty until the process dies — and if it crashes, you lose every print() call that never flushed. This has killed post-mortem debugging on more incidents than I can count. Make it a default in your organisation's Dockerfile templates, not something engineers remember on their own.
Production Insight
stdout is block-buffered when not connected to a terminal — Docker triggers this by design, not by accident.
Buffered output is lost entirely if the process is killed with SIGKILL before the buffer drains.
Rule: always set PYTHONUNBUFFERED=1 in Dockerfiles — make it a default in your base image, not an afterthought when an incident exposes the gap.
Key Takeaway
stdout buffering is invisible in a terminal but catastrophic in Docker — your output silently vanishes without a trace.
PYTHONUNBUFFERED=1 in every Dockerfile is the production default — treat it like setting your timezone: do it once in the base image and forget about it.
The flush parameter is the surgical fix for critical progress lines that must appear in real time instead of waiting for the buffer to fill.
Buffering Strategy Selection
If: Running Python interactively in a terminal
Use: Nothing — stdout is line-buffered by default, so output appears after each newline
If: Running Python inside Docker or piped to another command
Use: PYTHONUNBUFFERED=1 in the environment or python -u to force unbuffered output globally — stdout is block-buffered here otherwise
If: Need specific print() calls to appear immediately in a buffered environment
Use: flush=True on those individual calls — a surgical fix that doesn't change global buffering behaviour for the rest of the process
If: Process may receive SIGKILL and you need buffered output preserved
Use: A SIGTERM handler that calls sys.stdout.flush() before exiting — SIGKILL cannot be caught, so graceful shutdown via SIGTERM is the only path

The Full print() Syntax: All Five Parameters, Actually Explained

The full signature of print() is: print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False). Most tutorials show you the first argument and quietly ignore the other four. That's a mistake, because those four parameters are where print() becomes genuinely useful.

*objects means you can pass as many values as you want, separated by commas. Python converts each one to a string using str() before printing. sep is what gets placed between those values — it defaults to a single space, but you can make it anything: a pipe character, a tab, a comma, an empty string. end is what gets appended after the last value — it defaults to a newline character, which is why each print() call appears on its own line. Change it to an empty string and you can print multiple things on one line across multiple calls.
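The sep and end mechanics are easiest to see captured side by side. A small sketch — the function name demo_sep_end and the StringIO capture are mine, used only so the combined output is visible in one place:

```python
import contextlib
import io

def demo_sep_end() -> str:
    """Capture a few print() calls to show what sep and end actually emit."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        print("2024", "08", "12", sep="-")  # sep replaces the default space
        print("Loading", end="")            # end="" suppresses the newline
        print("...", end="")                # continues on the same line
        print(" done")                      # default end="\n" closes the line
    return buf.getvalue()

print(demo_sep_end(), end="")
# 2024-08-12
# Loading... done
```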

file lets you redirect print() output to any object that has a write() method — a file handle, sys.stderr, a StringIO buffer, or any custom stream. This is genuinely useful for writing lightweight scripts that log to a file without importing the logging module. flush we already covered in depth: it forces the stream's buffer to be written out immediately instead of waiting for it to fill. None of these parameters are optional knowledge — they come up the moment you write anything beyond a simple script.
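Two quick illustrations of the file parameter — the StringIO capture shows that "any object with write()" really is the whole contract:

```python
import io
import sys

# Any object with a write() method works as a print() destination.
buffer = io.StringIO()
print("captured, not displayed", file=buffer)

# Diagnostics go to stderr so piped stdout data stays clean.
print("processing row 1", file=sys.stderr)
```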

CheckoutReceiptPrinter.py
# io.thecodeforge — Python tutorial

import sys
from datetime import datetime

# Imagine this is a checkout service generating a transaction receipt.
# Each parameter of print() is doing real work here.

def print_receipt(
    order_id: str,
    items: list[tuple[str, float]],
    log_file_path: str
) -> None:

    # Open a file to log receipts — print() will write there instead of stdout
    with open(log_file_path, "a") as receipt_log:

        # sep='\n' puts each argument on its own line.
        # end='\n' keeps the normal line break after the last item.
        print(
            "=" * 40,
            f"ORDER ID : {order_id}",
            f"TIMESTAMP: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
            sep="\n",   # each argument printed on its own line
            end="\n",  # normal newline after the last item
            file=receipt_log,  # writes to the file, not the terminal
            flush=True  # flush immediately so log isn't lost if service crashes
        )

        # Print each line item — sep is a pipe character for readability
        for item_name, item_price in items:
            print(
                f"  {item_name}",
                f"£{item_price:.2f}",  # :.2f formats float to 2 decimal places
                sep=" | ",  # separator between item name and price
                file=receipt_log,
                flush=True
            )

        # Calculate and print total
        total = sum(price for _, price in items)
        print(f"\nTOTAL: £{total:.2f}", file=receipt_log, flush=True)
        print("=" * 40, file=receipt_log, flush=True)

    # Confirm to the terminal (stdout) — separate from the file log
    print(f"Receipt for order {order_id} written to {log_file_path}")


if __name__ == "__main__":
    cart = [
        ("Wireless Headphones", 79.99),
        ("USB-C Cable", 12.49),
        ("Screen Protector", 8.00),
    ]
    print_receipt(
        order_id="ORD-20240812-7743",
        items=cart,
        log_file_path="receipts.log"
    )
Output
Receipt for order ORD-20240812-7743 written to receipts.log
[Contents of receipts.log:]
========================================
ORDER ID : ORD-20240812-7743
TIMESTAMP: 2024-08-12 14:33:01
  Wireless Headphones | £79.99
  USB-C Cable | £12.49
  Screen Protector | £8.00

TOTAL: £100.48
========================================
Senior Shortcut: Print Errors to stderr, Not stdout
Use print('Something went wrong', file=sys.stderr) for error messages. stdout and stderr are separate streams — keeping them separate means scripts that pipe your output to another command don't get error text mixed into the data. It's a one-word change that makes your scripts composable with standard Unix tooling, and it's the convention every tool from grep to curl follows.
Production Insight
print() to stdout mixes errors with data — breaks downstream piping and corrupts data pipelines silently.
Using file=sys.stderr for errors keeps the data stream clean and composable with any Unix tool.
Rule: error output always goes to stderr — train this reflex early and your scripts become idiomatic from the start.
Key Takeaway
print() has five parameters — ignoring sep, end, file, and flush means you're using 20% of the function.
file=sys.stderr for errors is not optional in production scripts — it's what makes your tool composable with Unix pipelines.
The file parameter turns print() into a lightweight logger without importing anything — appropriate for simple scripts that write to a single output file.
print() Parameter Selection
If: Printing multiple values that need a custom separator
Use: The sep parameter — print(a, b, sep=' | ') instead of string concatenation, which fails on non-string types
If: Building output across multiple print() calls on the same line
Use: end='' to suppress the newline, then an explicit print() with no arguments to start a new line when done
If: Writing output to a file or log stream instead of the terminal
Use: The file parameter — print('msg', file=f) for file handles, print('err', file=sys.stderr) for error output
If: Output must appear immediately in a containerised or piped environment
Use: flush=True — forces buffered output to be written to the stream immediately rather than waiting for the buffer to fill

Formatting Output with f-strings: The Right Tool for the Job

You will format strings inside print() constantly. There are three ways to do it in modern Python: concatenation with +, the old .format() method, and f-strings. Don't use concatenation for anything beyond gluing two things together — it breaks the moment you mix strings and non-strings and it's unreadable at scale. Don't use .format() for new code — it was the right answer before Python 3.6 and it's been outdated since. Use f-strings.

An f-string is a string literal prefixed with f. Any expression inside curly braces gets evaluated and inserted. You can put arithmetic, function calls, method calls, conditional expressions — any valid Python expression — directly inside the braces. You can also add format specifiers after a colon inside the braces to control number formatting, padding, and alignment.

The format specifiers are where beginners usually stop reading, which is a shame because they solve real problems. :.2f gives you a float rounded to two decimal places — essential for money. :, adds thousands separators to integers. :>10 right-aligns a value in a 10-character-wide column. :<10 left-aligns. :^10 centres. These turn chaotic number output into readable, aligned tables without reaching for any external library. The combination :>15,.2f means right-align in a 15-character field, add comma separators, show two decimal places — one specifier replaces what used to take three or four string operations.
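As a quick sanity check, here are those specifiers applied to concrete values — the function name format_examples is mine, chosen just to keep the results inspectable:

```python
def format_examples() -> list[str]:
    price = 1234.5
    count = 9876543
    return [
        f"{price:.2f}",      # '1234.50'     — exactly two decimal places
        f"{count:,}",        # '9,876,543'   — thousands separators
        f"{'Region':<10}|",  # 'Region    |' — left-aligned in a width-10 field
        f"{price:>12,.2f}",  # '    1,234.50' — combined: width, commas, decimals
    ]
```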

SalesReportFormatter.py
# io.thecodeforge — Python tutorial

# Realistic scenario: a sales reporting script that prints
# a formatted summary table to the terminal each morning.
# This is the kind of internal tooling that lives in every company
# and is almost always badly formatted because nobody knew f-string specifiers.

from datetime import date

def print_sales_report(
    report_date: date,
    regional_sales: list[tuple[str, int, float]]
) -> None:

    # Header with dynamic date
    print(f"\n{'DAILY SALES REPORT':^50}")
    print(f"{'Date: ' + str(report_date):^50}")
    print("-" * 50)

    # Column headers — :<20 left-aligns in a 20-char wide column
    # :>10 right-aligns in a 10-char column
    print(f"{'Region':<20} {'Units':>10} {'Revenue':>15}")
    print("-" * 50)

    total_units = 0
    total_revenue = 0.0

    for region_name, units_sold, revenue in regional_sales:
        total_units += units_sold
        total_revenue += revenue

        # :, adds thousands separators: 12000 becomes 12,000
        # :.2f formats to 2 decimal places for currency
        # The full :>15,.2f means: right-align in 15 chars, comma separator, 2 decimal places
        print(
            f"{region_name:<20} "
            f"{units_sold:>10,} "
            f"£{revenue:>14,.2f}"
        )

    print("=" * 50)

    # Conditional expression inside f-string — shows performance status inline
    performance_flag = "✓ On Target" if total_revenue >= 50_000 else "⚠ Below Target"
    print(f"{'TOTAL':<20} {total_units:>10,} £{total_revenue:>14,.2f}")
    print(f"\nStatus: {performance_flag}")


if __name__ == "__main__":
    sales_data = [
        ("North",   4210,  18_430.50),
        ("South",   3875,  15_200.00),
        ("East",    2990,  12_750.75),
        ("West",    5100,  21_880.25),
    ]
    print_sales_report(
        report_date=date(2024, 8, 12),
        regional_sales=sales_data
    )
Output
                DAILY SALES REPORT
                 Date: 2024-08-12
--------------------------------------------------
Region                    Units         Revenue
--------------------------------------------------
North                     4,210 £     18,430.50
South                     3,875 £     15,200.00
East                      2,990 £     12,750.75
West                      5,100 £     21,880.25
==================================================
TOTAL                    16,175 £     68,261.50
Status: ✓ On Target
Interview Gold: f-strings vs .format() Performance
f-strings are faster than .format() — the formatting is compiled directly into the bytecode rather than dispatched through a method call at runtime. In a tight loop printing thousands of lines (logging, report generation), this difference is measurable. More importantly, an f-string's template is syntax-checked at compile time, so a malformed template fails before the program ever runs — a broken .format() template only throws (KeyError for a missing key, ValueError for bad syntax) at runtime, which is a worse time to find out. Compile-time failures are free; runtime failures cost you an incident. (Note the limit of the guarantee: an undefined variable inside an f-string still raises NameError at runtime — compile time only catches template syntax.)
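A minimal sketch of where the two failures land. The `!q` conversion is deliberately invalid; an f-string containing it is rejected when the source is compiled, while the same typo in a .format() template only surfaces when that line executes:

```python
# A malformed f-string template is rejected when the source is compiled:
try:
    compile("f'{value!q}'", "<demo>", "eval")  # !q is not a valid conversion
except SyntaxError as exc:
    print("compile-time failure:", type(exc).__name__)

# The same typo in a .format() template only fails when the line runs:
try:
    "{value!q}".format(value=1)
except ValueError as exc:
    print("runtime failure:", type(exc).__name__)
```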
Production Insight
f-string templates are syntax-checked at compile time — a broken .format() template fails at runtime, in production, under load.
Format specifiers like :.2f and :>10 replace entire formatting libraries for terminal output and internal reporting.
Rule: always use f-strings for new code — they're faster, safer, and more readable than every alternative.
Key Takeaway
f-strings are the only correct choice for new Python code — faster than .format(), and template errors surface at compile time rather than runtime.
Format specifiers (:.2f, :, :>10) solve real formatting problems without external libraries — learn them once and you stop reaching for pandas just to print a table.
Never use string concatenation to build output with mixed types — it raises TypeError on the first non-string value and forces you to litter str() calls everywhere.
String Formatting Selection
If: Embedding variables or expressions into output text
Use: f-strings — f'{value:.2f}' — fastest option, compile-time template checking, most readable in code review
If: Formatting currency or financial values
Use: The :.2f specifier — f'£{amount:.2f}' — guarantees exactly 2 decimal places regardless of the input value
If: Formatting numbers with thousands separators
Use: The :, specifier — f'{count:,}' — 12000 becomes 12,000 without any manual string manipulation
If: Aligning output in columns for terminal tables
Use: :>N (right-align), :<N (left-align), :^N (centre) — f'{name:<20}' pads to exactly 20 characters

Printing Multiple Values, Special Characters, and When to Stop Using print()

There are two small mechanics beginners stumble over: printing multiple values in one call, and dealing with special characters. Passing multiple arguments to print() is cleaner than concatenation — print(first_name, last_name) just works, and you control the separator with sep. You don't need to manually add spaces or call str() on each value — print() handles the conversion internally.

Special characters use escape sequences inside strings. \n is a newline, \t is a tab, \\ is a literal backslash, and \' or \" escape quotes inside a string. You'll use \n constantly. You'll use \t occasionally for quick-and-dirty alignment, though f-string width specifiers are cleaner and more predictable for anything that needs to stay aligned across different-length values.
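All three common escapes in one string — note that each `\\` in the source produces a single backslash in the output:

```python
# \t inserts a tab, \n breaks the line, \\ produces one literal backslash.
line = "Name:\tAlice\nPath:\tC:\\temp\\report.txt"
print(line)
```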

Now — the opinion you didn't ask for but need to hear. print() is a debugging and light-output tool. The moment you're writing a production service, a daemon, a web application, or anything that runs unattended, switch to Python's logging module. It gives you log levels (DEBUG, INFO, WARNING, ERROR, CRITICAL), timestamps, filenames, line numbers, and configurable output destinations — all things print() cannot give you. I've inherited codebases where the entire observability strategy was print() calls sprinkled across 15 files, and debugging a production issue in that codebase meant grepping through 200 identical-looking lines with no timestamps and no severity context. The on-call engineer has no idea which print() is relevant to the current incident and which is leftover from someone's debugging session six months ago. Don't build that codebase. Use print() to learn, use it in one-off scripts, use it for deliberate user-facing terminal output. For everything else: logging.

UserAuthDebugger.py
# io.thecodeforge — Python tutorial

# Shows the difference between print() for user-facing terminal output
# and the point where you should switch to the logging module.
# This is a simplified auth flow for illustration.

import logging
import sys

# Configure logging — this is the pattern you use instead of print()
# for anything that runs in the background or in production.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s [%(levelname)s] %(filename)s:%(lineno)d — %(message)s",
    stream=sys.stdout  # you can also point this at a file handler
)

logger = logging.getLogger(__name__)


def attempt_login(username: str, provided_password: str) -> bool:
    """Simulates a login check with both print() and logging to show the contrast."""

    # print() is fine here — this is deliberate user-facing terminal output
    print(f"\nAttempting login for: {username}")

    # logging.debug is the right call for internals —
    # it includes timestamp, file, and line number automatically
    logger.debug("Login attempt started for user: %s", username)

    # Simulated password check (never store plain passwords — this is just illustration)
    stored_password_hash = "hashed_secret_123"  # placeholder
    password_matches = provided_password == "correct_password"  # simplified check

    if not password_matches:
        # logging.warning is appropriate — it's an event worth recording with severity
        logger.warning("Failed login attempt for user: %s", username)

        # print() for the user-facing message — they need to see this
        print("Login failed. Check your credentials.")
        return False

    logger.info("Successful login for user: %s", username)
    print(f"Welcome back, {username}!")
    return True


if __name__ == "__main__":
    # Demonstrates: multiple values in one print(), sep, and special characters
    print("=" * 40)
    print("Username:", "Status:", "Result:", sep="\t")  # tab-separated header
    print("=" * 40)

    attempt_login(username="alice", provided_password="wrong_password")
    print("-" * 40)
    attempt_login(username="alice", provided_password="correct_password")
Output
========================================
Username: Status: Result:
========================================
Attempting login for: alice
2024-08-12 14:33:01,234 [DEBUG] UserAuthDebugger.py:30 — Login attempt started for user: alice
2024-08-12 14:33:01,235 [WARNING] UserAuthDebugger.py:39 — Failed login attempt for user: alice
Login failed. Check your credentials.
----------------------------------------
Attempting login for: alice
2024-08-12 14:33:01,236 [DEBUG] UserAuthDebugger.py:30 — Login attempt started for user: alice
2024-08-12 14:33:01,237 [INFO] UserAuthDebugger.py:43 — Successful login for user: alice
Welcome back, alice!
Never Do This: print() as Your Production Logger
I've seen a microservice with 400 print() statements as its only observability layer. When it started misbehaving at 2am, the on-call engineer had zero timestamps, zero severity levels, and zero context on which calls were related to which request. Every line looked identical. The fix was a week-long refactor to replace every print() with logging calls. Switch to logging the moment your script runs unattended or handles real users — doing it later costs five times as much.
Production Insight
print() has no timestamps, no severity levels, no request correlation — it produces unstructured output that's invisible to every log aggregator and alerting system.
A microservice with 400 print() calls is un-debuggable under incident pressure — every line looks the same.
Rule: the moment your code runs unattended or serves real users, switch to the logging module — no exceptions, and no 'I'll do it later'.
Key Takeaway
print() is a learning and light-scripting tool — the moment your code runs unattended, you've already outgrown it.
The logging module gives you timestamps, severity levels, and runtime-configurable output — print() gives you none of these.
If you can't disable your debug output without editing source code, you're using print() where you should be using logging — and you'll pay for it at 2am.
print() vs logging Selection
If: One-off script, learning exercise, or deliberate user-facing CLI output
Use: print() — simple, zero setup, appropriate for the context and the audience
If: Long-running service, daemon, or anything that runs unattended
Use: The logging module — timestamps, severity levels, and configurable output destinations without code changes
If: Need to disable debug output without code changes in production
Use: logging with level filtering — set the level to WARNING at runtime and all debug calls disappear without touching a single line of source
If: Need structured output for log aggregators (ELK, Datadog, CloudWatch)
Use: logging with a JSON formatter — print() produces unstructured text that aggregators cannot parse, index, or alert on
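The level-filtering point is worth seeing in action. A small sketch — the logger name and the StringIO handler are mine, used so the filtering effect is directly observable:

```python
import io
import logging

# A throwaway logger writing into a StringIO so the effect is visible.
stream = io.StringIO()
demo_logger = logging.getLogger("level_filter_demo")
demo_logger.addHandler(logging.StreamHandler(stream))
demo_logger.setLevel(logging.WARNING)  # the runtime dial: silence everything below WARNING

demo_logger.debug("invisible at WARNING level")  # filtered out, no code change needed
demo_logger.warning("this one gets through")
```

Flip the level back to DEBUG and the suppressed call reappears — something no amount of print() can do without editing source.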
● Production Incident · Post-Mortem · Severity: High

Data Pipeline Output Completely Missing from Docker Logs — Print Buffering Silently Eats 30 Minutes of Debug Output

Symptom
A nightly ETL script running inside a Docker container with stdout piped to a log aggregator showed zero output for 30 minutes. The on-call engineer, watching the dashboard, assumed the script was deadlocked and killed the container. On restart, the same pattern repeated. The script was actually running fine — it just wasn't flushing output.
Assumption
A deadlock or infinite loop in the ETL logic was preventing the script from producing output.
Root cause
Python uses block-buffering when stdout is not connected to a terminal — inside Docker, it isn't. Every print() call wrote to an in-memory buffer that only flushed when it filled up (typically 8KB) or when the process exited cleanly. The ETL script printed ~2,000 progress lines, each about 40 bytes — roughly 80KB total, but the slow early stages of the job meant the first 8KB block took a long time to fill, so nothing had reached the log aggregator when the engineer intervened. When the container was killed with SIGKILL, the buffer was never drained and the pending output was lost. On the second run, the script completed normally and the buffer flushed on exit, producing the wall of text.
Fix
Three changes: (1) Added PYTHONUNBUFFERED=1 to the Dockerfile ENV to force unbuffered stdout globally. (2) Added flush=True to every print() call inside the progress reporting loop for belt-and-suspenders protection. (3) Added a SIGTERM handler that flushes sys.stdout before exiting gracefully, preventing buffer loss on container stop.
Key lesson
  • Python stdout is block-buffered when not connected to a terminal — Docker, pipes, and subprocesses all trigger this.
  • PYTHONUNBUFFERED=1 or python -u is the fix for containerised Python — add it to every Dockerfile by default, not as an afterthought when an incident forces your hand.
  • flush=True on individual print() calls is the surgical fix for critical progress reporting lines that must appear in real time.
  • SIGKILL kills the process without draining buffers — only SIGTERM with a registered handler preserves buffered output before the process exits.
Production Debug Guide — common symptoms when print() output doesn't appear where or when you expect (4 entries)
Symptom · 01
Print output inside a Docker container appears all at once on container exit
Fix
Python is block-buffering stdout because it's not connected to a terminal. Set PYTHONUNBUFFERED=1 in your Dockerfile ENV, or add flush=True to individual print() calls that must appear in real time.
Symptom · 02
Error messages from your script appear mixed into piped data output
Fix
You're printing errors to stdout instead of stderr. Use print('error msg', file=sys.stderr) for all error output to keep the data stream clean and composable with Unix pipelines.
Symptom · 03
Print output appears in the terminal but not in a redirected file
Fix
The process likely crashed before the file buffer flushed. Add flush=True to critical print() calls that write to the file, or use python -u to disable buffering globally for that process.
Symptom · 04
Printed numbers show too many decimal places or no thousands separators
Fix
Use f-string format specifiers: :.2f for exactly 2 decimal places, :, for thousands separators, :>10 for right-alignment in a fixed-width column. Combine them: f'{value:>15,.2f}' right-aligns with commas and 2 decimal places.
★ Python Output Debugging Cheat Sheet — quick fixes for print() output issues in production Python processes
No print output appearing in Docker container logs
Immediate action
Check if PYTHONUNBUFFERED is set in the container environment
Commands
docker exec <container> env | grep PYTHON
docker exec <container> python -c "import sys; print(sys.stdout.line_buffering)"
Fix now
Add ENV PYTHONUNBUFFERED=1 to Dockerfile, or run with docker run -e PYTHONUNBUFFERED=1
Print output lost when container is stopped
Immediate action
Check if the container received SIGKILL instead of SIGTERM (no graceful shutdown)
Commands
docker inspect <container> --format='{{.State.ExitCode}}'
docker logs <container> --tail 50
Fix now
Use docker stop (sends SIGTERM) instead of docker kill (sends SIGKILL), and add a SIGTERM handler that flushes sys.stdout before the process exits
Mixed error and data output when piping Python script to another command
Immediate action
Check if error messages are landing on stdout instead of stderr
Commands
python script.py 2>/dev/null | head -5   # data only: stderr discarded
python script.py 2>&1 >/dev/null | head -5   # errors only: stderr piped, stdout discarded
Fix now
Change error print() calls to use file=sys.stderr — data appears on stdout, diagnostics on stderr
Print output missing specific lines in a loop
Immediate action
Check if those lines are being overwritten by end='\r' or carriage return characters in a progress-line pattern
Commands
python script.py | cat -v | grep '\^M'
python -u script.py 2>&1 | tee full_output.log
Fix now
Remove end='\r' from progress print() calls, or add a final print() with no end override to reset the cursor to a new line after the loop completes
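The progress-line pattern behind that last symptom looks like this sketch — show_progress is a hypothetical helper, and the final bare print() is the cursor-reset step the fix describes:

```python
import time

def show_progress(total: int) -> None:
    for i in range(1, total + 1):
        # \r returns the cursor to the start of the line so each update
        # overwrites the previous count; flush=True makes it visible now.
        print(f"\rProcessed {i}/{total}", end="", flush=True)
        time.sleep(0.01)
    print()  # final newline so later output starts on a fresh line
```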
print() vs logging Module Comparison
Feature / Aspect | print() | logging module
Setup required | None — works immediately | ~5 lines to configure basicConfig() once at startup
Timestamps | No — you add them manually via f-string if you need them | Automatic via %(asctime)s format specifier
Log levels (DEBUG/INFO/WARNING/ERROR) | No — all output looks identical regardless of severity | Yes — filter by severity at runtime with no code changes
Output destination | stdout or a file via file= parameter | Multiple handlers: file, console, rotating file, network socket
Line numbers and filenames | No — you'd have to add them manually | Yes — automatic via %(filename)s and %(lineno)d in the format string
Disable output without code changes | No — must delete or comment out calls | Yes — set log level to WARNING at runtime, debug calls disappear
Performance in tight loops | Fast — minimal overhead per call | Slightly more overhead per call, negligible in practice outside of extremely tight inner loops
Right tool for user-facing terminal output | Yes — clean and appropriate | Technically works but overkill for simple terminal feedback
Right tool for production services | No — unstructured, no severity, no aggregator support | Yes — built for exactly this purpose
flush= support | Yes — built-in parameter on every call | Configurable per handler via StreamHandler with flush support
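The "~5 lines of setup" the table refers to looks like this sketch — timestamps, severity levels, and file:line location come for free, none of which print() provides:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(filename)s:%(lineno)d %(message)s",
)
log = logging.getLogger("payments")  # illustrative logger name

log.info("service started")          # shown: INFO meets the threshold
log.debug("cache key miss details")  # suppressed with no code changes
```

Raising the level to WARNING at startup silences every info and debug call in the codebase — the runtime filtering print() cannot do.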

Key takeaways

1. flush=True is not optional in containerised or piped environments
Without it, your print() calls sit in a buffer and may never appear if the process crashes before the buffer drains. Add PYTHONUNBUFFERED=1 to every Dockerfile as a default, not as a reactive fix.
2. print() writes to stdout by default
Errors belong on stderr via file=sys.stderr; otherwise, the moment you pipe your program's output anywhere, error messages silently corrupt the data stream and the downstream tool fails with a confusing error that points nowhere near the real problem.
3. Reach for f-strings over concatenation or .format() every time
They're faster, they fail at parse time instead of runtime, and format specifiers like :.2f and :>10 replace entire formatting libraries for simple output.
4. print() is a learning and light-scripting tool
The moment your code runs unattended, handles real users, or needs timestamps and severity levels, you've outgrown it and should be using the logging module. The refactor costs five times as much when you do it after an incident.

Common mistakes to avoid

5 patterns

Printing inside a loop expecting real-time output in Docker

Symptom
Output appears all at once when the container exits instead of line-by-line during execution. The on-call engineer thinks the script is hung when it has actually been running fine for 25 minutes.
Fix
Add flush=True to every print() call inside the loop, or set PYTHONUNBUFFERED=1 in your container's environment variables. For Dockerfiles, add ENV PYTHONUNBUFFERED=1 as a standard practice in your base image so no individual developer has to remember it.

Concatenating a string and an integer directly

Symptom
print('Total: ' + 42) raises TypeError: can only concatenate str (not 'int') to str. The script crashes on a line that looks trivially correct, which is especially confusing for developers coming from JavaScript where + coerces types silently.
Fix
Use an f-string instead: print(f'Total: {42}') — Python handles the conversion automatically. Or pass as separate arguments: print('Total:', 42) and let sep handle the spacing. Never use + to concatenate strings with non-string types.
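Side by side, the failing and working variants:

```python
total = 42

# TypeError: can only concatenate str (not "int") to str
# print('Total: ' + total)

print(f"Total: {total}")   # f-string calls str() on the value for you
print("Total:", total)     # separate arguments; sep=" " adds the space
```

Both working lines print `Total: 42`; only the commented line raises.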

Printing error messages to stdout instead of stderr

Symptom
Downstream tools piping your stdout get error text mixed into data, silently corrupting output. The next command in the pipeline receives a mix of valid data and error strings and fails with a confusing, unrelated error message that points nowhere near the real problem.
Fix
Use print('Error: something failed', file=sys.stderr) for all error output. This keeps stdout clean for data and stderr clean for diagnostics — the Unix convention that every tool from curl to grep follows.

Using end='' to suppress newlines but forgetting to reset

Symptom
The cursor stays on the same line after the inline sequence finishes — the next print() appends to the same line unexpectedly, producing garbled output like 'Loading...Done!Error: connection timeout' all on one line with no separators.
Fix
Always add an explicit print() with no arguments (which prints just a newline) when you're done with the inline sequence. This resets the cursor to a new line and restores normal print() behaviour for subsequent calls.

Formatting floats as currency without a format specifier

Symptom
print(f'Total: £{149.9}') outputs 'Total: £149.9' instead of 'Total: £149.90' — the missing trailing zero looks unprofessional in any user-facing output and outright wrong in financial reporting where two decimal places are a hard requirement.
Fix
Always use :.2f for monetary values: print(f'Total: £{149.9:.2f}'). This guarantees exactly 2 decimal places regardless of the input value, including values that are already round numbers like 150.0.
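The difference in one snippet:

```python
price = 149.9

print(f"Total: £{price}")      # Total: £149.9  (trailing zero lost)
print(f"Total: £{price:.2f}")  # Total: £149.90
print(f"Total: £{150.0:.2f}")  # Total: £150.00 (round values too)
```

The `:.2f` specifier pins exactly two decimal places regardless of the input value.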
INTERVIEW PREP · PRACTICE MODE

Interview Questions on This Topic

Q01 · SENIOR
Python's print() uses stdout buffering. Walk me through exactly what hap...
Q02 · SENIOR
You're writing a CLI tool that both produces machine-readable output for...
Q03 · JUNIOR
What's the difference between print(a, b, c) and print(str(a) + str(b) +...
Q01 of 03 · SENIOR

Python's print() uses stdout buffering. Walk me through exactly what happens to output when a Python script is run inside a Docker container piped to a log aggregator — and what's the specific fix to guarantee every print() call appears in the logs before a crash?

ANSWER
When Python detects that stdout is not connected to a terminal — the case inside Docker with output piped to a log driver — it switches from line buffering to full block buffering. print() then writes to an in-memory buffer that only flushes when it reaches capacity (typically 8KB), when the process exits cleanly, or when an explicit flush occurs.

Inside Docker piped to a log aggregator, this means your print() calls accumulate in the buffer and the aggregator sees nothing until either the buffer fills or the process exits normally. If the process is killed with SIGKILL — which Docker sends on docker kill or on an OOM kill — the buffer is never drained and all accumulated output is permanently lost.

The fix has two layers: (1) Set PYTHONUNBUFFERED=1 via ENV in the Dockerfile — this instructs Python to use unbuffered stdout globally, so every print() call writes to the stream immediately without waiting for the buffer. (2) For critical progress lines where you need absolute certainty, add flush=True to individual print() calls as belt-and-suspenders protection. Together, these guarantee output appears in real time in the aggregator and survives container kills up to the last line executed.
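A handy companion in an interview or a live debug session is a probe that reports which mode the current process is in. This is a best-effort sketch — `stdout_buffering` is an illustrative name, and the `line_buffering`/`write_through` attributes are details of CPython's io.TextIOWrapper:

```python
import sys

def stdout_buffering():
    """Best-effort label for how the current stdout is buffered."""
    if not hasattr(sys.stdout, "line_buffering"):
        return "unknown (sys.stdout has been replaced)"
    if getattr(sys.stdout, "write_through", False):
        # python -u / PYTHONUNBUFFERED=1 typically set write_through
        # on the text layer, so writes reach the OS immediately.
        return "unbuffered"
    if sys.stdout.line_buffering:
        return "line-buffered"  # typical when attached to a terminal
    return "block-buffered"     # typical when piped, e.g. in Docker
```

Running it under `python script.py` in a terminal versus `python script.py | cat` shows the switch the question describes.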
FAQ · 4 QUESTIONS

Frequently Asked Questions

01
How do I print without a newline in Python?
02
What's the difference between print() and f-strings in Python?
03
How do I print to a file in Python using print()?
04
Why is my print() output not showing up in Docker logs until the container exits?