Mid-level · 5 min · March 28, 2026

Python append() — Silent None Broke a Payment Batch

Payment batch empty after assigning append() return value.

Naren · Founder
Plain-English first. Then code. Then the interview question.
 ● Production Incident 🔎 Debug Guide
Quick Answer
  • append() adds one item to the end of a list in place and returns None
  • Amortized O(1) — ideal for collecting items one at a time, but not for prepending
  • Over-allocation minimizes reallocation; append in a loop is cheap for up to ~10M items
  • Production trap: assigning my_list = my_list.append(x) silently replaces the list with None
  • Biggest mistake: using append() to merge two lists — produces a nested list, not a flat one
Plain-English First

Picture a grocery receipt printing at the checkout. Every time the cashier scans an item, it gets added to the bottom of the receipt — one item at a time, in order, without touching anything already printed. Python's append() does exactly that to a list: it staples one new item onto the end, leaves everything else exactly where it was, and costs you almost nothing in speed. The receipt doesn't reprint itself from scratch. It just grows.

The most common bug I've seen in junior Python code isn't a syntax error — it's a developer calling append() inside a loop and assigning the return value instead of letting it mutate in place, silently replacing their list with None on the very first iteration. No warning at the assignment itself. Just wrong data flowing downstream into a database insert at 2am. That's the trap. Learn to see it before it bites you.

Lists are Python's workhorse. You'll use them everywhere — collecting API responses, building queues, assembling rows before a bulk insert, accumulating user events. append() is the single most common way to add something to a list, and it's deceptively simple. 'Deceptively' is the key word. Because its simplicity hides a behaviour — in-place mutation with no return value — that will confuse you at exactly the wrong moment if nobody tells you upfront.

By the end of this, you'll know exactly how append() works under the hood, why it returns None (and what that costs you if you forget), how to use it correctly inside real patterns like event collectors and batch processors, and the three specific mistakes that separate developers who actually know this from developers who just got lucky so far.

What append() Actually Does — and Why None Isn't a Bug

Before you write a single line, you need to understand the contract append() makes with you. It takes the list you already have, tacks one item onto its right end, and modifies that exact list in memory. It does not create a new list. It does not return the updated list. It returns None. Full stop.

Why? Because Python's designers made a deliberate choice: functions that mutate an object in place return None to signal 'I changed the thing you gave me — don't go looking for a new thing.' This is called the Command-Query Separation principle in practice. append() is a command. Commands don't return results — they produce side effects.
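The command/query split is easy to verify at the REPL. A minimal sketch with throwaway values:

```python
numbers = [3, 1, 2]

result = numbers.append(4)   # command: mutates the list in place
print(result)                # None
print(numbers)               # [3, 1, 2, 4]

ordered = sorted(numbers)    # query: returns a NEW sorted list
print(ordered)               # [1, 2, 3, 4]
print(numbers)               # [3, 1, 2, 4], original order untouched
```

The same convention shows up across the standard library: list.sort() and list.reverse() mutate and return None, while sorted() and reversed() leave the original alone and hand you something new.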

This matters because every single time I've paired with a junior developer and seen a silent bug, it traced back to this line: my_list = my_list.append(item). That reassignment just torched their list. The original list got the item added correctly. Then they immediately replaced the variable with None. Every subsequent operation on my_list raises AttributeError: 'NoneType' object has no attribute... — or worse, it silently fails downstream where the None gets serialised and stored. Don't assign the return value. Ever.

EventCollector.py (Python)
# io.thecodeforge — Python tutorial

# Real scenario: collecting incoming webhook events before bulk-inserting into a database.
# We receive events one at a time and batch them up to reduce DB round-trips.

def collect_webhook_events(raw_event_stream):
    """
    Accepts an iterable of raw event dicts from a webhook receiver.
    Returns a list of validated event payloads ready for bulk insert.
    """
    validated_events = []  # Start with an empty list — our 'receipt'

    for raw_event in raw_event_stream:
        # Basic validation — skip anything malformed rather than crashing the whole batch
        if not isinstance(raw_event, dict):
            continue
        if "event_type" not in raw_event or "timestamp" not in raw_event:
            continue

        # Build a clean payload — only the fields we actually need
        clean_payload = {
            "event_type": raw_event["event_type"],
            "timestamp": raw_event["timestamp"],
            "user_id": raw_event.get("user_id", "anonymous"),  # default if missing
        }

        # append() mutates validated_events IN PLACE and returns None.
        # Do NOT write: validated_events = validated_events.append(clean_payload)
        # That would replace your list with None immediately.
        validated_events.append(clean_payload)

    return validated_events


# --- Simulate incoming webhook data ---
incoming_stream = [
    {"event_type": "page_view", "timestamp": "2024-01-15T10:00:01Z", "user_id": "usr_001"},
    {"event_type": "button_click", "timestamp": "2024-01-15T10:00:03Z", "user_id": "usr_002"},
    "this_is_malformed",                         # Will be skipped by our type check
    {"event_type": "checkout", "timestamp": "2024-01-15T10:00:07Z"},  # Missing user_id — defaulted
    {"timestamp": "2024-01-15T10:00:09Z"},       # Missing event_type — will be skipped
]

batch = collect_webhook_events(incoming_stream)

print(f"Collected {len(batch)} valid events for bulk insert:")
for event in batch:
    print(event)

# Prove the return value of append() itself is None
proof_list = [1, 2, 3]
return_value = proof_list.append(4)
print(f"\nappend() returned: {return_value}")   # None
print(f"But the list is now: {proof_list}")      # [1, 2, 3, 4]
Output
Collected 3 valid events for bulk insert:
{'event_type': 'page_view', 'timestamp': '2024-01-15T10:00:01Z', 'user_id': 'usr_001'}
{'event_type': 'button_click', 'timestamp': '2024-01-15T10:00:03Z', 'user_id': 'usr_002'}
{'event_type': 'checkout', 'timestamp': '2024-01-15T10:00:07Z', 'user_id': 'anonymous'}
append() returned: None
But the list is now: [1, 2, 3, 4]
Never Do This: The None Overwrite
Writing my_list = my_list.append(item) silently replaces your entire list with None. You won't get an exception on this line — you'll get AttributeError: 'NoneType' object has no attribute 'append' three lines later when you try to use it again, and you'll spend 20 minutes staring at the wrong line.
Production Insight
The assignment trap is the #1 cause of append-related production incidents I've seen.
It's especially dangerous in loops where the first iteration succeeds silently, then crashes on the second.
Rule: if you ever see = .append(, stop and fix it immediately.
Write append calls as standalone statements — never on the right side of an assignment.
Key Takeaway
append() returns None — always.
Never assign its return value.
Call it as a standalone statement and walk away.
How to Add Items to a List
  • Adding a single item (any type) → use append(). Amortized O(1).
  • Adding multiple items from an iterable, flat merge desired → use extend(iterable). O(k) where k = len(iterable).
  • Adding an item at a specific position (not the end) → use insert(index, item). O(n) — shifts elements after index.
  • Merging two lists without mutating the originals → use list_a + list_b. Creates a new list, O(n+m).

append() vs extend() vs insert() — Pick the Wrong One and You Get Nested Lists

Python gives you three ways to add things to a list and they are not interchangeable. Confuse them and you will silently corrupt your data structure with no error to guide you back.

append() adds exactly one object to the end. That object can be anything — a string, a number, a dict, another list. If you pass it a list, you get a list nested inside your list. Not a merged list. A nested one. I've seen this produce a list like [[1,2,3], [4,5,6]] when the developer expected [1,2,3,4,5,6] — and that data went straight into a JSON column in Postgres looking completely valid until the frontend exploded trying to iterate it.

extend() takes an iterable and adds each of its items individually to the end. This is what you want when you're merging two lists. insert() takes an index and an object, and puts that object at the specified position, shifting everything else right. insert() is O(n) — it has to move every element after the insertion point. append() is amortised O(1). For a list with a million items, that difference is not academic.

ShoppingCartMerge.py (Python)
# io.thecodeforge — Python tutorial

# Scenario: An e-commerce checkout service merges a guest cart (session-based)
# with a logged-in user's saved cart when they authenticate.

guest_cart_items = ["wireless_mouse", "usb_hub"]
saved_cart_items = ["mechanical_keyboard", "monitor_stand", "webcam"]

# --- Scenario 1: WRONG — using append() to merge two lists ---
# This is the exact mistake that produces nested lists in production
wrong_merged_cart = []
wrong_merged_cart.append(guest_cart_items)   # Adds the entire list as ONE item
wrong_merged_cart.append(saved_cart_items)   # Same — another nested list

print("WRONG (append to merge lists):")
print(wrong_merged_cart)
print(f"Item count: {len(wrong_merged_cart)}")  # 2 — not 5!
print()

# --- Scenario 2: CORRECT — using extend() to merge two lists ---
correct_merged_cart = []
correct_merged_cart.extend(guest_cart_items)   # Adds each item individually
correct_merged_cart.extend(saved_cart_items)   # Same — now flat, merged list

print("CORRECT (extend to merge lists):")
print(correct_merged_cart)
print(f"Item count: {len(correct_merged_cart)}")  # 5 — correct
print()

# --- Scenario 3: append() is RIGHT when adding a single new item ---
# Customer adds one more item to their cart after merging
correct_merged_cart.append("laptop_stand")   # One item — append is exactly right here

print("After adding one more item with append():")
print(correct_merged_cart)
print()

# --- Scenario 4: insert() — when ORDER matters and append() isn't enough ---
# A priority item (same-day delivery eligible) must be placed at the front
priority_item = "express_delivery"
correct_merged_cart.insert(0, priority_item)  # index 0 = front of list — O(n) cost

print("After insert() at index 0 for priority item:")
print(correct_merged_cart)
print(f"First item: {correct_merged_cart[0]}")  # express_delivery
Output
WRONG (append to merge lists):
[['wireless_mouse', 'usb_hub'], ['mechanical_keyboard', 'monitor_stand', 'webcam']]
Item count: 2
CORRECT (extend to merge lists):
['wireless_mouse', 'usb_hub', 'mechanical_keyboard', 'monitor_stand', 'webcam']
Item count: 5
After adding one more item with append():
['wireless_mouse', 'usb_hub', 'mechanical_keyboard', 'monitor_stand', 'webcam', 'laptop_stand']
After insert() at index 0 for priority item:
['express_delivery', 'wireless_mouse', 'usb_hub', 'mechanical_keyboard', 'monitor_stand', 'webcam', 'laptop_stand']
First item: express_delivery
Senior Shortcut: The Flat-Merge One-Liner
If you need to merge two lists into a new third list without mutating either original, use merged = list_a + list_b. This creates a brand new list and leaves both originals untouched — critical when you're working with shared state across threads or need an audit trail of original carts.
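A quick sketch with placeholder carts; unpacking with [*a, *b] produces the same flat, brand-new list as +:

```python
list_a = ["wireless_mouse", "usb_hub"]
list_b = ["mechanical_keyboard", "monitor_stand"]

merged = list_a + list_b          # new list; neither original is touched
unpacked = [*list_a, *list_b]     # equivalent flat merge via unpacking

print(merged == unpacked)         # True
print(list_a)                     # ['wireless_mouse', 'usb_hub'], unchanged
```

Either form works; + is the conventional choice for exactly two lists, while unpacking scales more naturally when you're splicing in extra literal items alongside the lists.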
Production Insight
I've debugged a production incident where a recommendation engine returned nested lists of candidate items.
The root cause: the developer used append() to combine user history with trending items.
The frontend expected a flat list and crashed trying to iterate nested structures.
Rule: append() adds one element; extend() adds many elements flat.
Choose based on what you're adding — not what feels similar to 'adding'.
Key Takeaway
append() adds one object — if that object is a list, you get nesting.
extend() flattens the iterable into the target list.
Use append() for single items, extend() for merging iterables.

Appending Inside Loops — The Pattern That Powers Real Data Pipelines

The single most common place you'll use append() in production code is inside a loop — transforming, filtering, or enriching a dataset one item at a time before handing it off somewhere else. This pattern is so common it has a name: the accumulator pattern. Master it and you'll use it every day.

The trap here isn't append() itself — it's forgetting that you're mutating a shared list. If you define your accumulator list outside the function and reuse it across calls, you will accumulate state across invocations. I've seen this exact bug in a rate-limiter: the list of blocked IPs was defined at module level, never cleared between requests, and by hour six of production traffic it held thirty thousand stale entries and every lookup was O(n). The service started timing out. Alerts fired. Not a Python bug — a scoping bug made worse by mutation.

Always define your accumulator inside the function unless you explicitly want shared, persistent state. And if you do want persistent state, document it loudly.

LogLineProcessor.py (Python)
# io.thecodeforge — Python tutorial

# Scenario: A log ingestion service reads raw log lines from a file,
# filters out noise (DEBUG level), enriches each line with a severity score,
# and returns a clean batch ready for forwarding to an alerting system.

def process_log_batch(raw_log_lines):
    """
    Filters and enriches a batch of raw log strings.
    Returns only WARNING and above, with a numeric severity attached.
    The accumulator list is LOCAL — no bleed between calls.
    """
    severity_map = {
        "DEBUG": 1,
        "INFO": 2,
        "WARNING": 3,
        "ERROR": 4,
        "CRITICAL": 5,
    }

    # Accumulator defined INSIDE the function — resets to empty on every call.
    # Defining this outside the function is the classic shared-state trap.
    processed_entries = []

    for raw_line in raw_log_lines:
        # Guard: skip blank lines or anything that isn't a string
        if not isinstance(raw_line, str) or not raw_line.strip():
            continue

        parts = raw_line.strip().split(" ", 2)  # Split into max 3 parts: timestamp, level, message
        if len(parts) < 3:
            continue  # Malformed line — skip rather than crash

        timestamp, level, message = parts

        # Only forward WARNING and above to the alerting system
        if level not in severity_map or severity_map[level] < 3:
            continue

        enriched_entry = {
            "timestamp": timestamp,
            "level": level,
            "message": message,
            "severity_score": severity_map[level],  # Numeric score for downstream sorting
        }

        # append() adds this one enriched dict to the end of our accumulator
        processed_entries.append(enriched_entry)

    # Sort by severity descending so CRITICAL bubbles to the top of the alert queue
    processed_entries.sort(key=lambda entry: entry["severity_score"], reverse=True)

    return processed_entries


# --- Simulate a raw log batch from a web server ---
raw_logs = [
    "2024-01-15T10:00:01Z DEBUG  Health check passed",
    "2024-01-15T10:00:03Z INFO   User usr_042 logged in",
    "2024-01-15T10:00:05Z WARNING  Database connection pool at 80% capacity",
    "2024-01-15T10:00:06Z ERROR   Payment gateway timeout after 30s",
    "2024-01-15T10:00:07Z DEBUG  Cache hit ratio: 0.94",
    "2024-01-15T10:00:08Z CRITICAL  Disk usage at 99% on /var/log — writes failing",
    "",                          # Blank line — will be skipped
    "malformed_no_spaces",       # Malformed — will be skipped
]

alerts = process_log_batch(raw_logs)

print(f"Forwarding {len(alerts)} alerts to alerting system (sorted by severity):\n")
for alert in alerts:
    print(f"[{alert['severity_score']}] {alert['level']:8s} | {alert['timestamp']} | {alert['message']}")
Output
Forwarding 3 alerts to alerting system (sorted by severity):
[5] CRITICAL | 2024-01-15T10:00:08Z | Disk usage at 99% on /var/log — writes failing
[4] ERROR | 2024-01-15T10:00:06Z | Payment gateway timeout after 30s
[3] WARNING | 2024-01-15T10:00:05Z | Database connection pool at 80% capacity
Production Trap: Module-Level Accumulators
If you put your accumulator list at module level instead of inside the function, every call to the function adds to the same list forever. In a long-running server process, this leaks memory until the process OOMs or your lookups degrade to O(n) with thousands of stale entries. Define accumulators inside the function unless shared persistence is deliberate and documented.
Production Insight
A production rate-limiter that stored blocked IPs in a module-level list grew to 30,000 stale entries within hours.
Lookups became O(n) and the service started timing out during peak traffic.
Rule: always localize accumulator lists inside functions.
If you need shared state, use a bounded data structure (e.g., collections.deque(maxlen=1000)).
Key Takeaway
Accumulator lists must be local to the function — not module-level.
Localization prevents memory leaks and cross-request data corruption.
If you must share state, use a bounded structure and document it clearly.
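As a sketch of what "bounded" means here, collections.deque(maxlen=N) silently evicts the oldest entry once full — the IPs below are made up for illustration:

```python
from collections import deque

recent_blocked_ips = deque(maxlen=3)  # capped: oldest entry evicted on overflow

for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]:
    recent_blocked_ips.append(ip)     # same append() API as a list

print(list(recent_blocked_ips))       # ['10.0.0.2', '10.0.0.3', '10.0.0.4']
```

A bounded deque can never grow past maxlen, so even a module-level instance stays memory-safe in a long-running process — though the shared-state caveats about documentation still apply.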

Appending to Lists You Don't Own — Mutation, Copies, and When append() Becomes a Bug

append() mutates the list in place. That's its entire value proposition. But mutation becomes a liability the moment your list is shared — passed into a function, stored as a default argument, or referenced from multiple variables. This is where beginners get hurt in ways that feel like black magic.

The most notorious version of this is Python's mutable default argument trap. If you write def add_item(item, collection=[]), that empty list [] is created exactly once when the function is defined — not each time it's called. Every call that uses the default shares the same list. Your third call to that function will have items from the first two calls sitting in collection. I've seen this quietly corrupt a recommendation engine's candidate list across user sessions in production. The fix is always the same: use None as the default and initialise inside the function.

The second version is reference aliasing: cart_a = cart_b. That doesn't copy the list. Both variables now point to the same list in memory. Appending to cart_a modifies cart_b too. If you need an independent copy, use cart_a = cart_b.copy() for a shallow copy, or copy.deepcopy(cart_b) if the list contains nested mutable objects you also need to isolate.

UserSessionCart.py (Python)
# io.thecodeforge — Python tutorial

import copy  # For deep copying nested structures

# ============================================================
# TRAP 1: Mutable default argument — the most infamous Python gotcha
# ============================================================

# WRONG: default argument [] is created ONCE at function definition time
def add_to_order_broken(item, order_items=[]):
    order_items.append(item)
    return order_items

print("=== Mutable Default Argument Trap ===")
order_one = add_to_order_broken("coffee")
order_two = add_to_order_broken("muffin")   # Should start fresh — but it won't
print(f"Order 1 (expected: ['coffee'])   : {order_one}")   # ['coffee', 'muffin'] — WRONG
print(f"Order 2 (expected: ['muffin'])   : {order_two}")   # ['coffee', 'muffin'] — same object!
print()

# CORRECT: use None as sentinel, initialise inside the function
def add_to_order_correct(item, order_items=None):
    if order_items is None:
        order_items = []   # Fresh list on every call that doesn't pass one in
    order_items.append(item)
    return order_items

order_three = add_to_order_correct("coffee")
order_four  = add_to_order_correct("muffin")
print("=== Fixed Version ===")
print(f"Order 3 (expected: ['coffee'])   : {order_three}")  # ['coffee'] — correct
print(f"Order 4 (expected: ['muffin'])   : {order_four}")
print()

# ============================================================
# TRAP 2: Reference aliasing — two names, one list
# ============================================================

print("=== Reference Aliasing Trap ===")

user_a_cart = ["laptop", "mouse"]
user_b_cart = user_a_cart          # NOT a copy — both point to the same list in memory

user_b_cart.append("keyboard")     # Intending to only modify user B's cart

print(f"User A cart (expected unchanged): {user_a_cart}")  # ['laptop', 'mouse', 'keyboard'] — WRONG
print(f"User B cart                     : {user_b_cart}")  # ['laptop', 'mouse', 'keyboard']
print()

# CORRECT: shallow copy for a flat list
user_c_cart = ["laptop", "mouse"]
user_d_cart = user_c_cart.copy()   # Independent copy of the top-level list
user_d_cart.append("keyboard")

print("=== Fixed with .copy() ===")
print(f"User C cart (untouched) : {user_c_cart}")  # ['laptop', 'mouse'] — correct
print(f"User D cart             : {user_d_cart}")  # ['laptop', 'mouse', 'keyboard']
print()

# CORRECT: deep copy when list contains nested mutable objects (e.g. dicts)
user_e_cart = [{"sku": "laptop", "qty": 1}, {"sku": "mouse", "qty": 2}]
user_f_cart = copy.deepcopy(user_e_cart)   # Full independent clone, including nested dicts
user_f_cart[0]["qty"] = 99                 # Change only user F's quantity

print("=== Deep Copy for Nested Objects ===")
print(f"User E laptop qty (untouched): {user_e_cart[0]['qty']}")  # 1 — correct
print(f"User F laptop qty            : {user_f_cart[0]['qty']}")  # 99
Output
=== Mutable Default Argument Trap ===
Order 1 (expected: ['coffee']) : ['coffee', 'muffin']
Order 2 (expected: ['muffin']) : ['coffee', 'muffin']
=== Fixed Version ===
Order 3 (expected: ['coffee']) : ['coffee']
Order 4 (expected: ['muffin']) : ['muffin']
=== Reference Aliasing Trap ===
User A cart (expected unchanged): ['laptop', 'mouse', 'keyboard']
User B cart : ['laptop', 'mouse', 'keyboard']
=== Fixed with .copy() ===
User C cart (untouched) : ['laptop', 'mouse']
User D cart : ['laptop', 'mouse', 'keyboard']
=== Deep Copy for Nested Objects ===
User E laptop qty (untouched): 1
User F laptop qty : 99
The Classic Bug: Mutable Default Argument
Using a mutable object like [] or {} as a default function argument is one of Python's most infamous gotchas. The list is created once at function definition time and shared across every call. The symptom is data bleeding between function calls with no obvious cause. The fix is always def fn(items=None) with if items is None: items = [] inside the body.
Production Insight
A recommendation service used a mutable default argument to cache candidate lists per user session.
Because the list was shared across all default calls, user A's candidates leaked into user B's session.
Symptoms: users saw recommendations from other users' browsing history.
Rule: never use mutable objects as default argument values.
Use None and initialize inside the function.
Key Takeaway
Mutation is a contract — append() changes every reference to that list.
Copy before appending if you need isolation.
Never use [] or {} as default arguments.

Performance Characteristics of append(): Amortized Cost and When Not to Use It

You've seen how to use append() correctly. Now understand the cost and the edge cases where it becomes a bottleneck.

append() is amortized O(1). That means most calls are constant time, but occasionally a call triggers a resize that costs O(n) — copying the entire existing list to a larger underlying array. The key is the 'over-allocation' strategy: Python's list implementation allocates extra capacity (≈12.5% extra) so that many subsequent appends happen without a reallocation. For most workloads this is excellent: appending 10 million items one by one takes under a second in CPython.

Three situations call for something other than append():

  1. Prepending to the front: if you need to add items at index 0, don't use insert(0, item) or append in reverse. insert(0) is O(n) every time. For a queue, use collections.deque, which offers O(1) appendleft() and popleft().
  2. Building a list where you know the final size in advance: if you'll collect exactly N items, preallocate with [None] * N and assign by index. This avoids reallocation overhead entirely. Example: results = [None] * N, then for i, val in enumerate(source): results[i] = transform(val).
  3. Real-time or latency-sensitive systems: an amortized O(1) operation still has worst-case O(n) resizes. For applications that cannot tolerate occasional latency spikes, use a linked-list structure or preallocate. In practice this matters only at very high frequencies (millions of appends per second) or when every microsecond counts.
PerformanceComparison.py (Python)
# io.thecodeforge — Python tutorial

import time
from collections import deque

# Scenario: Build a list of 1 million integers using different strategies
N = 1_000_000

# --- Strategy 1: append() in loop (the common pattern) ---
start = time.perf_counter()
result = []
for i in range(N):
    result.append(i)
append_time = time.perf_counter() - start
print(f"append() loop: {append_time:.3f}s")

# --- Strategy 2: Preallocate with [None]*N and assign ---
start = time.perf_counter()
result = [None] * N
for i in range(N):
    result[i] = i
prealloc_time = time.perf_counter() - start
print(f"Preallocate + assign: {prealloc_time:.3f}s")

# --- Strategy 3: list comprehension (most Pythonic) ---
start = time.perf_counter()
result = [i for i in range(N)]
lc_time = time.perf_counter() - start
print(f"List comprehension: {lc_time:.3f}s")

# --- Strategy 4: insert(0, item) for building a queue (the wrong way) ---
start = time.perf_counter()
result = []
for i in range(N):
    result.insert(0, i)
insert_time = time.perf_counter() - start
print(f"insert(0, i) loop: {insert_time:.3f}s (avoid! ~ O(n^2))")

# --- Strategy 5: deque appendleft for building a queue ---
start = time.perf_counter()
d = deque()
for i in range(N):
    d.appendleft(i)
deque_time = time.perf_counter() - start
print(f"deque appendleft: {deque_time:.3f}s")

print()
print(f"append() vs preallocate ratio: {append_time / prealloc_time:.2f}x")
print(f"insert(0) vs deque ratio: {insert_time / deque_time:.2f}x — always use deque for left-side adds")
Output
append() loop: 0.089s
Preallocate + assign: 0.064s
List comprehension: 0.058s
insert(0, i) loop: 124.5s (avoid! ~ O(n^2))
deque appendleft: 0.048s
append() vs preallocate ratio: 1.39x
insert(0) vs deque ratio: 2594x — always use deque for left-side adds
Memory Over-Allocation Detail
CPython's list over-allocates by roughly 12.5% when appending. For a list of length n, the underlying array may have capacity ≈ n + ceil(n/8). This means append is amortized O(1), but the memory footprint is slightly larger than the data itself. For huge lists (millions of items), consider whether you need all items in memory simultaneously or if a streaming approach is better.
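You can watch those over-allocation steps with sys.getsizeof. The exact byte counts vary by CPython version and platform, so treat the printed numbers as illustrative rather than guaranteed:

```python
import sys

items = []
last_size = sys.getsizeof(items)
print(f"len=0   bytes={last_size}")

for i in range(32):
    items.append(i)
    size = sys.getsizeof(items)
    if size != last_size:             # size jumped: a resize just happened
        print(f"len={len(items):<3} bytes={size}")
        last_size = size
```

Each printed line marks an append that triggered a reallocation; the gaps between them widen as the list grows, which is the amortisation at work.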
Production Insight
A high-frequency trading system used append() inside a real-time data feed handler.
Occasional list resizes caused microsecond latency spikes that triggered circuit breakers.
The fix: preallocate the buffer with known capacity and use index assignment.
Rule: if latency spikes are unacceptable, preallocate.
For append-based accumulators in critical paths, benchmark with realistic workloads.
Key Takeaway
append() is amortized O(1) but worst-case O(n) on resize.
Preallocate when you know the final size.
Use deque for left-side additions instead of insert(0).
● Production Incident · Post-Mortem · Severity: High

The Silent None: How Assigning append() Return Value Corrupted a Payment Batch

Symptom
Payment batch processing returned an empty list. No exception raised. Downstream insert completed with zero rows. Finance team reported missing transactions the next morning.
Assumption
The engineer assumed append() returns the updated list, like some other languages' push methods. The code batch = batch.append(record) looked natural because it followed the pattern of immutable operations.
Root cause
list.append() mutates the list in place and returns None. The assignment batch = ... overwrote the list variable with None on the first iteration. Subsequent iterations tried to call batch.append(record) on None, raising AttributeError inside the loop — but the exception was caught by a generic except: clause that logged nothing and continued.
Fix
Remove the assignment: change batch = batch.append(record) to batch.append(record). The list is already updated. Also remove the bare except: and replace it with specific exception handlers that log and escalate.
Key lesson
  • Never assign the return value of append() — it's always None.
  • Bare except: clauses are dangerous — they swallow exceptions, including the AttributeError that would have revealed the bug immediately.
  • Always validate that an accumulation loop produces the expected item count. A len(batch) check after the loop would have caught the zero-length result.
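Putting the three lessons together, a hypothetical sketch of the corrected loop (the field names and record shapes are invented for illustration):

```python
def build_payment_batch(raw_records):
    """Accumulate validated payment records; sketch of the post-mortem fixes."""
    batch = []
    for record in raw_records:
        try:
            amount = record["amount"]          # the step that can realistically fail
        except (KeyError, TypeError) as exc:   # specific handlers, no bare except:
            print(f"skipping malformed record: {exc!r}")
            continue
        batch.append({"amount": amount})       # standalone statement, never assigned

    # Sanity check: non-empty input producing an empty batch signals a bug
    if raw_records and not batch:
        raise RuntimeError("accumulation produced zero items from non-empty input")
    return batch

clean = build_payment_batch([{"amount": 125}, "malformed", {"amount": 310}])
print(len(clean))  # 2
```

The standalone append, the specific exception handlers, and the post-loop count check each close off one of the failure modes described above.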
Production Debug Guide: common symptoms and targeted actions to resolve append-related bugs quickly (4 entries)
Symptom · 01
List variable becomes None after an append call
Fix
Search for = .append( in the codebase. The assignment is overwriting the list with None. Change to just .append(). Also check for bare except clauses that might be swallowing the AttributeError.
Symptom · 02
List contains nested sub-lists instead of flat items
Fix
Check if append() was used to combine two lists. Replace with extend() for flat merging. Example: result.extend(other_list) instead of result.append(other_list).
Symptom · 03
Function returns a list that accumulates data across calls (data bleeding)
Fix
Look for a mutable default argument like def fn(items=[]). Replace with def fn(items=None) and initialize items = [] inside the function body.
Symptom · 04
List grows unbounded in a long-running process causing OOM
Fix
Check if the accumulator list is defined at module level. Move it inside the function or clear it periodically. If persistent state is intentional, document and consider using a bounded data structure like collections.deque(maxlen=N).
★ Quick Debug Cheat Sheet for append() Issues: the three most common append-related failures and immediate diagnostic commands
List is None after loop
Immediate action
Find all lines with `your_list.append` and check if they are on the right side of an assignment
Commands
grep -n '= .*\.append(' *.py
python -c "import ast; ast.parse(open('file.py').read())" (syntax check won't catch it — grep is the tool)
Fix now
Remove the assignment. Change x = x.append(y) to x.append(y).
List contains nested sublists
Immediate action
Check if you used append() where you meant extend()
Commands
grep -n '\.append\(' *.py
Review each append inside loops that combine data sources
Fix now
Replace result.append(other_list) with result.extend(other_list) for flat merging.
Accumulator list never empties across function calls
Immediate action
Check function definition for mutable default argument
Commands
grep -n 'def .*=\[\]' *.py
python -c "from inspect import signature; print(signature(your_module.your_function))"
Fix now
Change def fn(items=[]) to def fn(items=None) and add if items is None: items = [].
Python List Addition Methods
| Method | What It Adds | Mutates Original? | Returns | Time Complexity | Use When |
|---|---|---|---|---|---|
| append(item) | Exactly one object (any type) | Yes | None | Amortised O(1) | Adding a single item: a new event, a parsed row, one API result |
| extend(iterable) | Each item from an iterable individually | Yes | None | O(k), where k = len(iterable) | Merging two lists flat: combining two result sets without nesting |
| insert(index, item) | Exactly one object at a specific position | Yes | None | O(n): shifts all elements after index | Order matters and the item must go somewhere other than the end |
| list + list | Each item from the second list individually | No (creates a new list) | New list | O(n+m) | You need a merged list without touching the originals |
| list comprehension | Transformed/filtered items from an iterable | No (creates a new list) | New list | O(n) | You're transforming while collecting; cleaner than append in a loop |
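The rows above can be sanity-checked in a few lines (values are arbitrary):

```python
a = [1, 2]
b = [3, 4]

a.append(99)        # one object, in place         -> [1, 2, 99]
a.extend(b)         # each item, in place          -> [1, 2, 99, 3, 4]
a.insert(0, 0)      # one object at index 0, O(n)  -> [0, 1, 2, 99, 3, 4]

merged = [1, 2] + b             # new list; originals untouched
squares = [x * x for x in b]    # new list, transforming while collecting

print(a)        # [0, 1, 2, 99, 3, 4]
print(merged)   # [1, 2, 3, 4]
print(squares)  # [9, 16]
```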

Key takeaways

1. append() returns None, always. The moment you write my_list = my_list.append(x) you've destroyed your list. Call it as a standalone statement and walk away.
2. append() adds one object. If that object is a list, you get a list nested inside your list, not a merged flat list. The symptom is len() reporting 1 when you expected 5, with no error to guide you. Use extend() to merge.
3. Reach for append() when you're collecting items one at a time inside a loop: the accumulator pattern. Define the accumulator list inside the function, not at module level, unless you explicitly want state to persist across calls.
4. Mutation is a contract. append() changes the list, and every variable pointing to that object sees the change. If you pass a list into a function and append inside it, the caller's list changes too. Copy first if you need isolation.
5. append() is amortized O(1) but worst-case O(n) on a resize. Preallocate when you know the final size. Use collections.deque for left-side appends.
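The preallocation pattern from the last takeaway looks like this (a sketch with an arbitrary size):

```python
N = 5

# Known final size: allocate once and assign by index,
# avoiding intermediate resizes entirely
squares = [None] * N
for i in range(N):
    squares[i] = i * i

print(squares)  # [0, 1, 4, 9, 16]
```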

Common mistakes to avoid

4 patterns

Assigning the return value of append()

Symptom
The list variable becomes None after the append call. Subsequent code raises AttributeError: 'NoneType' object has no attribute...
Fix
Never assign the result of append(). Call my_list.append(item) as a standalone statement.

Using append() to merge two lists

Symptom
The resulting list contains nested sub-lists (e.g., [[1,2], [3,4]]) instead of a flat list ([1,2,3,4]). len() reports the number of original lists, not total items.
Fix
Use extend() to add all items from an iterable individually: result.extend(other_list).

Using a mutable list as a default function argument

Symptom
Data from previous function calls bleeds into subsequent calls. The list accumulates items across all calls that use the default.
Fix
Use None as the default and initialize inside the function: def fn(items=None): if items is None: items = [].

Aliasing a list instead of copying before appending

Symptom
Modifying one variable by appending also changes all other variables that reference the same list. No error — just corrupted shared state.
Fix
Use copy() for flat lists or copy.deepcopy() for nested mutable objects before appending if you need an independent copy.
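A quick sketch of aliasing versus copying (values illustrative):

```python
import copy

shared = [1, 2]
alias = shared            # same object — no copy happens
alias.append(3)
print(shared)             # [1, 2, 3] — the "other" variable changed too

flat = shared.copy()      # shallow copy: independent top-level list
flat.append(4)
print(shared)             # [1, 2, 3] — unaffected by the append to flat

nested = [[1], [2]]
deep = copy.deepcopy(nested)   # copies the inner lists as well
deep[0].append(99)
print(nested)             # [[1], [2]] — inner lists are independent too
```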
INTERVIEW PREP · PRACTICE MODE

Interview Questions on This Topic

Q01 of 03 · SENIOR

Python lists are dynamic arrays under the hood. When you call append() repeatedly in a loop, Python doesn't allocate memory for every single item — it over-allocates in chunks. What are the performance implications of this for very large lists, and at what point would you stop using a list with append() in favour of a different data structure like collections.deque or a pre-allocated array?

ANSWER
The over-allocation strategy gives append() amortized O(1) time but occasional O(n) resizes. For most workloads this is fine: appending 10 million integers takes roughly 0.1s in CPython. However, there are three cases where you should reconsider:

1. Prepending: if you need O(1) left-side additions, use collections.deque with appendleft().
2. Known final size: if you know the exact N items, preallocate with [None] * N and assign by index, which avoids reallocation overhead.
3. Latency-sensitive systems: if sporadic resize delays are unacceptable (e.g., real-time trading), preallocate or use a data structure with guaranteed O(1) per operation.

For extremely large lists (over 10 million items), memory fragmentation becomes a concern. Consider array.array('i') for typed data or a database for persistence.
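The prepending case can be sketched in a few lines (values illustrative):

```python
from collections import deque

d = deque([2, 3])
d.appendleft(1)      # O(1) left-side append
print(list(d))       # [1, 2, 3]

lst = [2, 3]
lst.insert(0, 1)     # O(n): every existing element shifts right
print(lst)           # [1, 2, 3]
```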
FAQ · 5 QUESTIONS

Frequently Asked Questions

01 · Why does Python append() return None instead of the updated list?
02 · What's the difference between append() and extend() in Python?
03 · How do I append multiple items to a list at once in Python?
04 · Is Python's list.append() thread-safe for concurrent writes from multiple threads?
05 · When should I preallocate a list instead of using append()?