
TCP vs UDP Explained: When to Use Each Protocol and Why It Matters

In Plain English 🔥
Imagine you're sending a birthday card versus shouting across a football field. When you mail a card, the postal service confirms it arrived, resends it if it got lost, and makes sure the pages are in order — that's TCP. When you shout to your friend, you just yell and hope they hear it — no confirmation, no retry — that's UDP. One is careful and reliable; the other is fast and fire-and-forget.

Every time you load a webpage, stream a Netflix show, or jump into an online game, your computer is making a silent but critical decision: should I send this data carefully or as fast as possible? That decision comes down to two protocols — TCP and UDP — and picking the wrong one can mean the difference between a snappy app and one that feels broken. Most developers know the names but can't articulate why one exists alongside the other, which is exactly the knowledge gap that trips people up in system design interviews and production outages alike.

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are both built on top of IP, but they solve fundamentally different problems. TCP was designed in 1974 to guarantee that data arrives completely and in order — critical for things like file transfers or HTTP requests where a missing byte corrupts everything. UDP was designed for situations where speed trumps perfection, where a lost packet is better than a late one, and where the application itself can decide how to handle unreliable delivery.

By the end of this article you'll understand the internal mechanics of both protocols, read and run working Java socket examples that show the difference in behaviour, and be able to confidently answer 'which protocol would you use and why?' for any use case thrown at you — in an interview room or a design document.

How TCP Actually Guarantees Delivery — The Three-Way Handshake and Beyond

TCP's reliability isn't magic — it's engineering. Before a single byte of your data travels anywhere, TCP performs a three-way handshake: the client sends SYN, the server replies SYN-ACK, and the client confirms with ACK. Only then does data flow. This ceremony establishes sequence numbers on both sides, which is how TCP tracks every segment and detects anything that goes missing.

Once connected, every segment the sender transmits must be acknowledged by the receiver. If an ACK doesn't arrive within a timeout window, the segment is retransmitted automatically — your application never sees this retry logic because the OS handles it inside the kernel. TCP also uses flow control (the receiver advertises how much buffer space it has) and congestion control (the sender backs off when the network is overwhelmed). Together, these mechanisms make TCP self-healing but inherently slower.
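Java exposes a few of these kernel-level knobs directly on java.net.Socket. As a minimal sketch (the class name TcpTuningPeek is our own, purely illustrative), you can inspect the buffers that flow control is built on and toggle Nagle's batching algorithm:

```java
import java.net.Socket;
import java.net.SocketException;

/**
 * Minimal sketch: peeking at the kernel buffers and knobs behind
 * TCP's flow control. The class name is illustrative, not standard.
 */
public class TcpTuningPeek {

    public static void main(String[] args) throws SocketException {
        Socket socket = new Socket(); // unconnected; options are readable before connect()

        // SO_RCVBUF is the receive buffer whose free space the receiver
        // advertises to the sender: the basis of TCP flow control.
        System.out.println("SO_RCVBUF: " + socket.getReceiveBufferSize() + " bytes");
        System.out.println("SO_SNDBUF: " + socket.getSendBufferSize() + " bytes");

        // TCP_NODELAY disables Nagle's algorithm, which batches small writes.
        // Enabling it trades a little throughput for lower latency.
        socket.setTcpNoDelay(true);
        System.out.println("TCP_NODELAY: " + socket.getTcpNoDelay());
    }
}
```

None of this changes TCP's guarantees; it only tunes how aggressively the kernel buffers and batches on your behalf.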

The cost is latency. Each round trip for a handshake takes time. Each lost packet stalls the entire stream because TCP enforces in-order delivery — a phenomenon called Head-of-Line Blocking. For loading a bank statement or downloading a ZIP file, that's a perfectly acceptable trade-off. For a live video call, it's catastrophic.

Understanding this helps you make smarter architecture decisions. HTTPS runs on TCP because an incomplete HTML response is useless. DNS often uses UDP because a single question-answer fits in one packet and a retry is trivial if it fails.

TcpEchoServer.java · JAVA
import java.io.*;
import java.net.*;

/**
 * A minimal TCP echo server that demonstrates reliable, ordered delivery.
 * Run TcpEchoServer first, then run TcpEchoClient in a separate terminal.
 */
public class TcpEchoServer {

    private static final int PORT = 9090;

    public static void main(String[] args) throws IOException {

        // ServerSocket binds to a port and listens for incoming TCP connections.
        // The OS completes the three-way handshake before accept() returns —
        // by the time we get a Socket object, the connection is already established.
        try (ServerSocket serverSocket = new ServerSocket(PORT)) {

            System.out.println("[Server] Listening on port " + PORT + " (TCP)");

            // accept() blocks until a client connects. Each call returns one
            // dedicated Socket for that client — full duplex, stream-oriented.
            try (Socket clientSocket = serverSocket.accept()) {

                System.out.println("[Server] Client connected from: "
                        + clientSocket.getInetAddress());

                // Wrap the raw byte stream in readers/writers for convenience.
                // The underlying stream guarantees every byte arrives in order.
                BufferedReader inFromClient = new BufferedReader(
                        new InputStreamReader(clientSocket.getInputStream()));
                PrintWriter outToClient = new PrintWriter(
                        clientSocket.getOutputStream(), true); // autoFlush = true

                String receivedMessage;
                // Read lines until the client closes the connection (readLine returns null).
                while ((receivedMessage = inFromClient.readLine()) != null) {
                    System.out.println("[Server] Received: " + receivedMessage);

                    // Echo the message back in uppercase so we can visibly confirm
                    // it made the round trip intact.
                    String response = "ECHO: " + receivedMessage.toUpperCase();
                    outToClient.println(response);
                    System.out.println("[Server] Sent back: " + response);
                }

                System.out.println("[Server] Client disconnected.");
            }
        }
    }
}
▶ Output
[Server] Listening on port 9090 (TCP)
[Server] Client connected from: /127.0.0.1
[Server] Received: hello from tcp client
[Server] Sent back: ECHO: HELLO FROM TCP CLIENT
[Server] Client disconnected.
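The server above tells you to run a companion TcpEchoClient, but the article doesn't show one. Here is a minimal sketch of what it could look like (the sendAndReceive helper is our own structuring choice, not from the original):

```java
import java.io.*;
import java.net.*;

/**
 * Minimal sketch of the companion client referenced by TcpEchoServer.
 * The sendAndReceive helper is our own addition; treat this as one
 * possible implementation, not the article's original.
 */
public class TcpEchoClient {

    private static final String SERVER_HOST = "localhost";
    private static final int SERVER_PORT = 9090;

    /** Opens a connection, sends one line, and returns the server's reply. */
    static String sendAndReceive(String host, int port, String message) throws IOException {
        // The three-way handshake happens inside this constructor;
        // if it returns, the connection is fully established.
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);  // bytes are delivered reliably, in order
            return in.readLine();  // blocks until the echoed line arrives
        }
    }

    public static void main(String[] args) throws IOException {
        String message = "hello from tcp client";
        System.out.println("[Client] Sending: " + message);
        String reply = sendAndReceive(SERVER_HOST, SERVER_PORT, message);
        System.out.println("[Client] Received: " + reply);
    }
}
```

Note the contrast with the UDP client later in this article: if the server isn't running, the Socket constructor throws a ConnectException immediately, because TCP cannot even begin without a completed handshake.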
🔥
Why Head-of-Line Blocking Matters
If packet #4 of a TCP stream is lost, packets #5, #6, and #7 sit in the receiver's buffer waiting — even though they arrived fine. HTTP/3 replaced TCP with QUIC specifically to eliminate this problem for web traffic. Knowing this nuance will impress any interviewer asking about modern protocol design.

How UDP Works — Fast, Lightweight, and Deliberately Unreliable

UDP's design philosophy is the opposite of TCP's: get the data out as fast as possible and let the application decide what to do if something goes wrong. There is no handshake, no acknowledgement, no retransmission, and no ordering guarantee. You send a datagram (a self-contained packet) and it either arrives or it doesn't.

This sounds reckless, but it's actually brilliant for certain workloads. Consider a live video stream. A video frame from two seconds ago is worse than useless — it actively hurts the viewer experience. UDP lets the application discard late or missing frames and move on. The same logic applies to DNS lookups (tiny single-packet exchanges), online gaming (position updates become stale in milliseconds), and VoIP calls.

The UDP header is only 8 bytes (source port, destination port, length, checksum) compared to TCP's minimum 20 bytes. Combined with no connection setup overhead, UDP achieves dramatically lower latency. This is why modern protocols like QUIC (used by HTTP/3), DTLS (secure datagrams), and WebRTC are all built on UDP and implement their own selective reliability on top.

Here's the key mental model: UDP is a raw postcard with no tracking. If you need tracking, you add it yourself at the application layer — and only where you need it. That selective reliability is far more efficient than TCP's blanket guarantees.

UdpEchoServer.java · JAVA
import java.net.*;

/**
 * A UDP echo server. Notice there is NO accept(), NO handshake, NO connection.
 * The server just sits and waits for datagrams to land in its socket buffer.
 * Run UdpEchoServer first, then run UdpEchoClient in a separate terminal.
 */
public class UdpEchoServer {

    private static final int PORT = 9091;
    // Max UDP payload that safely avoids IP fragmentation on most networks.
    private static final int MAX_PACKET_SIZE = 1024;

    public static void main(String[] args) throws Exception {

        // DatagramSocket is connectionless — no client needs to connect first.
        // It simply listens on a port for any datagram that arrives.
        try (DatagramSocket serverSocket = new DatagramSocket(PORT)) {

            System.out.println("[Server] Listening on port " + PORT + " (UDP)");

            byte[] receiveBuffer = new byte[MAX_PACKET_SIZE];

            // UDP servers typically loop forever, processing one datagram at a time.
            while (true) {

                // DatagramPacket is both the envelope and the letter —
                // it carries the data AND the sender's address/port.
                DatagramPacket incomingPacket =
                        new DatagramPacket(receiveBuffer, receiveBuffer.length);

                // receive() blocks until a datagram arrives. Unlike TCP's readLine(),
                // each call here processes exactly one independent packet.
                serverSocket.receive(incomingPacket);

                String receivedMessage = new String(
                        incomingPacket.getData(),
                        0,
                        incomingPacket.getLength() // IMPORTANT: use actual length, not buffer length
                );

                System.out.println("[Server] Received datagram from "
                        + incomingPacket.getAddress() + ":" + incomingPacket.getPort()
                        + " — Message: " + receivedMessage);

                // To reply, we need the sender's address and port from the incoming packet.
                // There is no persistent connection object — we manually address each reply.
                String responseMessage = "ECHO: " + receivedMessage.toUpperCase();
                byte[] responseBytes = responseMessage.getBytes();

                DatagramPacket replyPacket = new DatagramPacket(
                        responseBytes,
                        responseBytes.length,
                        incomingPacket.getAddress(), // send back to whoever sent to us
                        incomingPacket.getPort()
                );

                serverSocket.send(replyPacket);
                System.out.println("[Server] Replied: " + responseMessage);
            }
        }
    }
}
▶ Output
[Server] Listening on port 9091 (UDP)
[Server] Received datagram from /127.0.0.1:52341 — Message: hello from udp client
[Server] Replied: ECHO: HELLO FROM UDP CLIENT
⚠️
Watch Out: The Buffer Length Trap
Always use incomingPacket.getLength() when converting the byte array to a String — NOT receiveBuffer.length. The buffer is pre-allocated at 1024 bytes, but the actual datagram might be 12 bytes. If you use the buffer length, you'll get 1012 null characters appended to every message, causing silent data corruption that's surprisingly hard to debug.

Choosing TCP or UDP in the Real World — A Decision Framework

The choice between TCP and UDP isn't about which is 'better' — it's about which constraints match your problem. Run through this mental checklist every time you're designing a networked feature.

Ask: 'What happens if a packet is lost?' If the answer is 'the entire operation is corrupt or meaningless' (file download, database query, user login), use TCP. If the answer is 'the app can recover or the data is stale anyway' (live video frame, game position update, telemetry ping), UDP is worth considering.

Ask: 'How many messages per second?' UDP's stateless nature means a single server socket can handle thousands of different senders without maintaining connection state for each. A gaming server receiving 60 position updates per second from 1,000 players would be crushed by the per-connection overhead of TCP.

Ask: 'Do I need ordered delivery or just fast delivery?' UDP datagrams can arrive out of order. If your application can sequence them itself (most game engines do this with a simple timestamp comparison), you get the speed of UDP with the ordering your game logic needs.
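That application-level sequencing is small enough to sketch directly. Assuming an 8-byte sequence number prefixed to each datagram (the class and method names here are our own, illustrative only), the receiver simply drops anything older than the newest update it has applied:

```java
import java.nio.ByteBuffer;

/**
 * Illustrative sketch of application-level ordering on top of UDP:
 * each datagram carries an 8-byte sequence number, and the receiver
 * discards anything older than the newest update it has seen.
 * Class and method names are our own, not from the article.
 */
public class SequencedUpdates {

    private long highestSeqSeen = -1;

    /** Sender side: prefix the payload with a monotonically increasing sequence number. */
    public static byte[] pack(long seq, byte[] payload) {
        return ByteBuffer.allocate(Long.BYTES + payload.length)
                .putLong(seq)
                .put(payload)
                .array();
    }

    /**
     * Receiver side: returns the payload if this is the newest update so far,
     * or null if it arrived late or out of order and should be discarded.
     */
    public byte[] accept(byte[] datagram) {
        ByteBuffer buf = ByteBuffer.wrap(datagram);
        long seq = buf.getLong();
        if (seq <= highestSeqSeen) {
            return null; // stale update: newer data has already been applied
        }
        highestSeqSeen = seq;
        byte[] payload = new byte[buf.remaining()];
        buf.get(payload);
        return payload;
    }
}
```

This is exactly the 'last write wins' pattern game engines use for position updates: no retransmission, no blocking, just discard-and-move-on.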

Finally: 'Are you reinventing a solved problem?' If you catch yourself implementing retransmission, acknowledgements, and flow control on top of UDP, you've essentially built a worse version of TCP. Use TCP instead, or use a battle-tested library like KCP or QUIC that already solves this correctly.

UdpEchoClient.java · JAVA
import java.net.*;

/**
 * Companion client for UdpEchoServer.
 * Demonstrates fire-and-forget sending and how to read a reply.
 * Also shows that if the server is down, send() returns immediately —
 * there is NO error thrown. That's the fundamental nature of UDP.
 */
public class UdpEchoClient {

    private static final String SERVER_HOST = "localhost";
    private static final int SERVER_PORT = 9091;
    private static final int TIMEOUT_MS = 3000; // how long to wait for a reply
    private static final int MAX_PACKET_SIZE = 1024;

    public static void main(String[] args) throws Exception {

        // A client DatagramSocket with no arguments lets the OS pick a free port.
        // Unlike TCP, this does NOT send anything to the server — no handshake.
        try (DatagramSocket clientSocket = new DatagramSocket()) {

            // Set a timeout on receive() so we don't block forever if the reply is lost.
            // This is manual reliability — something TCP does for you automatically.
            clientSocket.setSoTimeout(TIMEOUT_MS);

            InetAddress serverAddress = InetAddress.getByName(SERVER_HOST);
            String messageToSend = "hello from udp client";
            byte[] sendBuffer = messageToSend.getBytes();

            // Construct the datagram with the data AND the destination baked in.
            DatagramPacket sendPacket = new DatagramPacket(
                    sendBuffer,
                    sendBuffer.length,
                    serverAddress,
                    SERVER_PORT
            );

            // send() dispatches the packet and returns immediately.
            // If the server isn't running, this line still succeeds with no exception —
            // the packet just disappears into the network. This is UDP's core behaviour.
            clientSocket.send(sendPacket);
            System.out.println("[Client] Sent: " + messageToSend);

            // Prepare a buffer to receive the server's echo reply.
            byte[] receiveBuffer = new byte[MAX_PACKET_SIZE];
            DatagramPacket replyPacket = new DatagramPacket(receiveBuffer, receiveBuffer.length);

            try {
                // receive() will block until a datagram arrives OR the timeout fires.
                clientSocket.receive(replyPacket);
                String reply = new String(replyPacket.getData(), 0, replyPacket.getLength());
                System.out.println("[Client] Received reply: " + reply);

            } catch (SocketTimeoutException timeoutEx) {
                // This is how UDP 'reliability' works at the app layer —
                // you catch the timeout and decide: retry, fail, or move on.
                System.out.println("[Client] No reply within " + TIMEOUT_MS + "ms — packet may be lost.");
            }
        }
    }
}
▶ Output
[Client] Sent: hello from udp client
[Client] Received reply: ECHO: HELLO FROM UDP CLIENT
⚠️
Pro Tip: When to Layer Reliability on UDP
If you need low latency AND some reliability (e.g., game events like 'player died' that must arrive but can't tolerate TCP's head-of-line blocking), look at Reliable UDP libraries like ENet or QUIC. They give you per-stream reliability without blocking the whole connection on a single lost packet — the exact problem that motivated HTTP/3's switch from TCP to QUIC.
| Feature / Aspect | TCP | UDP |
|---|---|---|
| Connection model | Connection-oriented (three-way handshake required) | Connectionless (no setup, fire and forget) |
| Delivery guarantee | Guaranteed — lost segments are automatically retransmitted | No guarantee — packets may be lost silently |
| Ordering guarantee | Yes — bytes always arrive in send order | No — datagrams may arrive out of order |
| Error checking | Checksum + acknowledgement + retransmit | Checksum only (no recovery on failure) |
| Speed / Latency | Higher latency due to handshake and ACK overhead | Lower latency — minimal overhead per packet |
| Header size | 20–60 bytes (minimum 20) | 8 bytes fixed |
| Flow control | Yes (sliding window, receiver advertises buffer size) | No — sender can overwhelm the receiver |
| Congestion control | Yes (slow start, AIMD algorithms built into kernel) | No — application must implement if needed |
| State on server | Per-connection state maintained by OS | Stateless — one socket handles all senders |
| Typical use cases | HTTP/HTTPS, email (SMTP), file transfer (FTP/SFTP), SSH | DNS, live video/audio streaming, online gaming, VoIP, IoT telemetry |
| Modern protocols built on it | HTTP/1.1, HTTP/2, TLS (over TCP) | QUIC (HTTP/3), DTLS, WebRTC, DNS-over-UDP |

🎯 Key Takeaways

  • TCP's reliability comes from sequence numbers + acknowledgements + retransmission, all handled by the OS kernel — your application code never sees the retry logic, but you always pay the latency cost for it.
  • UDP's superpower is statelessness: one server socket handles unlimited senders with zero per-connection overhead, which is why DNS, game servers, and streaming services choose it despite the lack of guarantees.
  • The real question isn't 'which is better?' — it's 'what happens in my application if a packet is lost?' If the answer is 'catastrophic', use TCP. If the answer is 'move on', use UDP.
  • HTTP/3 runs on QUIC which runs on UDP — meaning the future of the web runs on UDP, not TCP. Modern protocol design layers selective reliability on UDP rather than accepting TCP's head-of-line blocking.

⚠ Common Mistakes to Avoid

  • Mistake 1: Defaulting to TCP for everything because it 'feels safer' — Symptom: a real-time game or video stream has noticeable lag spikes because stale retransmitted packets delay the delivery of newer, more relevant data — Fix: Identify whether stale data is worse than no data. If a 200ms-old position update is useless, use UDP and let the application discard outdated packets by comparing timestamps.
  • Mistake 2: Using the receive buffer length instead of the datagram length when reading UDP data — Symptom: every string parsed from a UDP packet ends with hundreds of null characters ('\u0000'), causing string comparison failures and garbled logs — Fix: Always construct your String with new String(packet.getData(), 0, packet.getLength()) — the third argument caps the read at the actual received bytes, not the pre-allocated buffer size.
  • Mistake 3: Assuming send() on a UDP socket throws an exception if the server is unreachable — Symptom: UDP client appears to work fine in testing but silently drops all messages in production when the server is down, with no log output — Fix: Always implement an application-level acknowledgement with a timeout (setSoTimeout) and a retry counter. UDP's send() returns successfully even when the destination doesn't exist — reliability is your responsibility, not the protocol's.
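The fix for Mistake 3 can be sketched as a small retry wrapper around the UDP client pattern shown earlier (the class and method names are our own, illustrative only):

```java
import java.net.*;

/**
 * Illustrative retry wrapper for a UDP request/response exchange:
 * resend up to maxAttempts times, treating a receive timeout as a
 * lost packet. Names are our own, mirroring the fix for Mistake 3.
 */
public class UdpRetry {

    public static String requestWithRetry(InetAddress server, int port,
                                          String message, int timeoutMs,
                                          int maxAttempts) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(timeoutMs);
            byte[] sendBytes = message.getBytes();
            DatagramPacket request =
                    new DatagramPacket(sendBytes, sendBytes.length, server, port);

            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                socket.send(request); // "succeeds" locally even if nobody is listening
                try {
                    byte[] buf = new byte[1024];
                    DatagramPacket reply = new DatagramPacket(buf, buf.length);
                    socket.receive(reply); // throws SocketTimeoutException after timeoutMs
                    return new String(reply.getData(), 0, reply.getLength());
                } catch (SocketTimeoutException lost) {
                    System.out.println("[Client] Attempt " + attempt + " timed out, retrying");
                }
            }
            return null; // every attempt was lost; the caller decides what failure means
        }
    }
}
```

This is the minimum viable version of application-level reliability: a timeout, a resend, and a bounded attempt count so a dead server can't hang the client forever.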

Interview Questions on This Topic

  • Q: You're building a multiplayer racing game. Players send their car's position 60 times per second. Would you use TCP or UDP, and why? What would you do about important game events like 'player finished the race'?
  • Q: Explain what Head-of-Line Blocking is in TCP and describe a real-world protocol that was redesigned specifically to avoid it.
  • Q: A candidate says 'UDP is faster than TCP.' Is that always true? Describe a scenario where TCP could outperform UDP for a specific workload.

Frequently Asked Questions

Is UDP faster than TCP?

Generally yes, because UDP skips the three-way handshake, acknowledgements, and retransmission logic. But the real advantage is latency predictability — UDP doesn't stall when a packet is lost, whereas TCP halts the stream until the missing segment is recovered. For small, frequent, time-sensitive messages, UDP's lower latency is significant.

Can UDP lose data? What happens when it does?

Yes, UDP packets can be silently dropped by any router or switch in the network path when buffers are full. Nothing notifies your application — send() returns successfully and you never know the packet was lost. If you need to detect loss, you must build it yourself: assign sequence numbers to datagrams and have the receiver detect gaps, then request retransmission of only what matters.

Why does DNS use UDP instead of TCP if reliability matters?

A standard DNS query and response both fit comfortably in a single UDP datagram (under 512 bytes historically, 4096 bytes with EDNS). The round trip is one packet each way with no handshake overhead. If a response is lost, the resolver simply resends the query — which is functionally identical to TCP's retransmission but with far less overhead. DNS does switch to TCP for responses exceeding the datagram size limit, such as DNSSEC signatures or zone transfers.

TheCodeForge Editorial Team (Verified Author)

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.

Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged