Table of Contents
- Why the Transport Layer Matters
- TCP: Connection-Oriented Reliability
- UDP: Connectionless Speed
- TCP Header Fields Explained
- UDP Header Fields Explained
- Port Numbers: Identifying Applications
- TCP Flow Control and Windowing
- Choosing TCP vs UDP: The Decision Guide
- CCNA Exam Tips
- Summary Comparison Table
Understanding the Transport Layer is one of the most tested areas on the CCNA 200-301 exam. This guide breaks down TCP and UDP from the ground up — covering headers, port numbers, flow control, and real-world application selection. By the end of this article you will be able to confidently answer any exam question about Layer 4 behavior.
1. Why the Transport Layer Matters
The OSI model divides networking functions into seven layers, each with a specific job. Layer 3 — the Network Layer — is responsible for getting packets from one network to another by routing them based on IP addresses. But routing alone is not enough. When a packet arrives at a destination host, the operating system needs to know which application should receive that data. That is the job of Layer 4, the Transport Layer.
The Transport Layer is responsible for end-to-end communication between applications running on different hosts. It provides process-to-process delivery using port numbers to identify which application on the destination host should receive incoming data. Think of an IP address as a street address for a building, and a port number as the specific apartment number inside that building. The IP address gets the packet to the right machine; the port number gets the data to the right application running on that machine.
Layer 4 Core Responsibilities
Multiplexing / Demultiplexing: Multiple applications can communicate simultaneously over a single network connection using different port numbers.
Segmentation: Large data streams from applications are broken into smaller units called segments (TCP) or datagrams (UDP).
Connection Management: TCP establishes, maintains, and terminates logical connections between applications.
Reliability (TCP only): Ensuring all segments arrive, are in order, and are error-free through sequencing and acknowledgment.
Flow Control (TCP only): Preventing a fast sender from overwhelming a slow receiver.
The two dominant Transport Layer protocols you must know for the CCNA are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). These two protocols represent a fundamental design trade-off in networking: reliability versus speed. TCP provides a reliable, ordered, error-checked connection-oriented service. UDP provides a best-effort, connectionless, low-overhead service. Neither is universally better — the right choice depends entirely on what the application needs.
Key Point: Layer 4 vs Layer 3
Layer 3 (IP) provides host-to-host delivery. It gets a packet from one IP address to another. Layer 4 (TCP/UDP) provides process-to-process delivery. It ensures that data reaches the correct application process on the destination host. Without Layer 4, you could communicate between machines but not between specific applications running on those machines. Both layers are essential and work together for complete communication.
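The building/apartment analogy can be made concrete with a loopback socket pair. The following is a minimal Python sketch (the 127.0.0.1 addresses and message text are illustrative, and binding to port 0 simply asks the OS for any free port):

```python
import socket
import threading

# Loopback demo of process-to-process delivery: the OS routes incoming data
# to the right application using the destination port number.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0 = let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

def serve():
    conn, peer = server.accept()         # peer = (client IP, client ephemeral port)
    conn.sendall(b"hello from port %d" % port)
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))             # the four-tuple now identifies this connection
print("client socket:", client.getsockname())  # (127.0.0.1, OS-chosen ephemeral port)
print("server socket:", client.getpeername())  # (127.0.0.1, the listening port)
data = client.recv(1024)
t.join()
client.close()
server.close()
```

Both endpoints share the same IP address here, yet the OS keeps the two processes separate purely by port number — exactly the Layer 4 job described above.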
Both TCP and UDP are defined in RFCs published by the IETF. TCP is defined in RFC 793 (originally) and updated by RFC 9293. UDP is defined in RFC 768. Both protocols operate at Layer 4 of the OSI model and Layer 4 of the TCP/IP model. On the CCNA exam, you will be expected to know the characteristics of each protocol, their header fields, the port numbers associated with common applications, and how to select the appropriate protocol for a given scenario.
2. TCP: Connection-Oriented Reliability
TCP is a connection-oriented protocol, meaning that before any application data is exchanged, TCP requires that a connection be formally established between the two communicating endpoints. This connection is not a physical circuit but a logical agreement between the two parties that both are ready to communicate and have agreed on initial parameters such as sequence numbers.
The Three-Way Handshake
TCP establishes connections using a process called the three-way handshake. This process uses three messages: SYN, SYN-ACK, and ACK. Understanding each step is critical for the CCNA exam.
Step 1 — SYN (Synchronize): The client sends a TCP segment with the SYN flag set. This segment includes the client's Initial Sequence Number (ISN), which is randomly chosen. The random ISN is a security measure to prevent TCP sequence number prediction attacks. In this example, the client chooses ISN=100.
Step 2 — SYN-ACK (Synchronize-Acknowledge): The server responds with a segment that has both the SYN and ACK flags set. The server acknowledges the client's SYN by setting the Acknowledgment Number to the client's ISN + 1 (101), meaning "I received byte 100, now send me byte 101." The server also announces its own ISN (300 in this example).
Step 3 — ACK (Acknowledge): The client acknowledges the server's SYN by setting the ACK flag and Acknowledgment Number to 301 (server's ISN + 1). The connection is now fully established and data transfer can begin.
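The sequence/acknowledgment arithmetic of the three steps can be sketched as a toy model. The ISNs 100 and 300 mirror the example above; real stacks choose them randomly:

```python
# Toy model of the three-way handshake's sequence/ACK arithmetic.
client_isn, server_isn = 100, 300

syn     = {"flags": {"SYN"},        "seq": client_isn}
syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn,     "ack": syn["seq"] + 1}
ack     = {"flags": {"ACK"},        "seq": client_isn + 1, "ack": syn_ack["seq"] + 1}

assert syn_ack["ack"] == 101   # "I received byte 100, now send me byte 101"
assert ack["ack"] == 301       # acknowledges the server's ISN
```

Note that a SYN consumes one sequence number even though it carries no data, which is why each side acknowledges the other's ISN + 1.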
Exam Tip: SYN Flood Attack
The CCNA exam may reference SYN flood attacks. In a SYN flood, an attacker sends many SYN packets with spoofed source IPs. The server allocates resources and sends SYN-ACK responses, but since the source IPs are fake, no ACK is ever returned. The server's connection table fills up and legitimate connections are refused. This is a Denial-of-Service (DoS) attack. Modern mitigation includes SYN cookies, which allow servers to avoid allocating state until the three-way handshake completes.
The Four-Way Connection Teardown
TCP connections are terminated gracefully using a four-way process. Either side can initiate the teardown. Each side must independently close its end of the connection, which is why it takes four messages instead of two.
After step 4, the side that initiated the close (the client, in this example) enters a TIME-WAIT state lasting twice the Maximum Segment Lifetime (2MSL — commonly 60 to 240 seconds in total, depending on the implementation). This ensures the final ACK was received and prevents old duplicate packets from being mistaken for new connections.
Reliability Through Sequencing and Acknowledgment
TCP's reliability mechanism is built on sequence numbers and acknowledgment numbers. Every byte of data transmitted is assigned a sequence number. The receiver tracks which bytes have been received and sends acknowledgments telling the sender which byte it expects next. If a segment is lost, the sender detects the loss (because the acknowledgment for that sequence range never arrives) and retransmits the missing segment.
TCP uses a cumulative acknowledgment scheme: the acknowledgment number always names the next in-order byte the receiver expects. Simplifying to whole segments, if segments 1, 2, and 3 are sent but only segments 1 and 3 arrive, the receiver keeps sending ACK=2, indicating it still expects segment 2. Modern TCP also supports Selective Acknowledgment (SACK), which allows the receiver to inform the sender exactly which segments were received, even if they are out of order, enabling more efficient retransmission.
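A toy receiver makes the cumulative rule concrete — the ACK always names the first gap. In this Python sketch, whole segment numbers stand in for byte sequence numbers:

```python
def cumulative_ack(arrived, next_expected=1):
    """ACK number = lowest in-order segment not yet received."""
    while next_expected in arrived:
        next_expected += 1
    return next_expected

# Segments 1 and 3 arrive; segment 2 is lost in transit.
assert cumulative_ack({1, 3}) == 2        # receiver keeps asking for segment 2
# Once 2 is retransmitted, the ACK jumps past the already-buffered 3.
assert cumulative_ack({1, 2, 3}) == 4
```

The jump from ACK=2 straight to ACK=4 is why SACK helps: without it, the sender cannot tell whether segment 3 also needs retransmitting.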
Real-World Analogy: Registered Mail
Think of TCP as sending a registered letter with signature confirmation through a postal service. Before the letter leaves, you establish that both sender and recipient are ready. Every package is numbered in order. If a package is lost, the postal service tracks it and resends it. The recipient must sign for each delivery, confirming receipt. You never wonder if your letter arrived — you receive confirmation. This reliability has overhead: it takes more time and resources than simply dropping something in a mailbox, but you have complete assurance of delivery.
TCP Key Characteristics Summary
Connection-oriented: Three-way handshake before data transfer
Reliable: Sequencing and acknowledgment ensure all data arrives
Ordered delivery: Data is reassembled in the correct sequence
Error detection: Checksum on every segment
Retransmission: Lost segments are automatically resent
Flow control: Sliding window prevents buffer overflow
Congestion control: Slow start and congestion avoidance algorithms
Full-duplex: Both sides can send and receive simultaneously
3. UDP: Connectionless Speed
UDP is a connectionless protocol. Unlike TCP, UDP does not establish a connection before sending data. There is no handshake, no session setup, and no formal teardown. The sender simply formats a datagram and sends it to the destination. Whether the datagram arrives is irrelevant to the UDP protocol itself — that concern is either handled by the application layer or simply not handled at all.
Best-Effort Delivery
UDP provides best-effort delivery. This means:
- No guarantee of arrival: Datagrams may be dropped anywhere in the network without any notification to the sender or receiver.
- No guarantee of order: Datagrams may arrive at the destination in a different order than they were sent.
- No duplicate prevention: The same datagram may arrive multiple times due to network conditions, and UDP will deliver all copies to the application.
- No retransmission: If a datagram is lost, UDP makes no attempt to resend it. The data is simply gone.
- No flow control: UDP has no mechanism to throttle the sender based on the receiver's capacity.
Low Overhead and Minimal Latency
The absence of all these reliability features is not simply a deficiency — it is a deliberate design choice that yields significant advantages. UDP's header is only 8 bytes compared to TCP's minimum 20 bytes. There is no connection setup delay, meaning the first datagram can be sent immediately. There is no waiting for acknowledgments before sending the next packet. The cumulative result is dramatically lower latency and higher throughput for applications that can tolerate some data loss.
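The "no connection setup delay" point shows up directly in code: a UDP sender can transmit application data in its very first datagram. A minimal loopback sketch in Python (addresses and message text are illustrative):

```python
import socket

# Loopback UDP demo: no handshake, no session — the first datagram
# already carries application data.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"fire and forget", addr)  # no connect() required

data, peer = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Compare this with the TCP example earlier: there is no `listen()`, no `accept()`, and no connection object — just datagrams addressed to a port.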
Key Point: Application-Level Reliability
Just because UDP itself provides no reliability does not mean applications using UDP are inherently unreliable. Many applications implement their own reliability mechanisms on top of UDP when needed. For example, QUIC (used by HTTP/3) is built on UDP but implements its own ordering, loss detection, and retransmission mechanisms. DNS implements its own timeout and retry logic at the application layer. This gives application developers complete control over what reliability mechanisms to use, rather than being forced to accept TCP's all-or-nothing approach.
Real-World Analogy: Radio Broadcast
UDP is like a radio broadcast. The radio station transmits its signal without knowing who is listening or whether anyone receives it clearly. If you are in a tunnel and miss part of a song, the station does not replay those seconds just for you. Everyone listening either receives the transmission or they do not. This makes radio broadcasts extremely efficient — one transmission reaches millions of listeners simultaneously — but it also means there is no guarantee of reception. This model is perfect for real-time streaming where replaying old data would be worse than simply moving on.
Primary Use Cases for UDP
UDP is the right choice when one or more of the following conditions apply:
- Real-time data where latency matters more than perfection: VoIP calls, video conferencing, online gaming. A retransmitted voice packet arriving 500ms late is useless and disruptive — better to have a brief audio glitch than a noticeable delay.
- Query-response transactions: DNS queries are typically a single small question and answer. The overhead of a TCP three-way handshake would double the latency for something that only takes microseconds otherwise. If a DNS query is lost, the client simply retries.
- Broadcast or multicast: TCP is point-to-point and cannot be used for broadcast. UDP supports sending a single datagram to many recipients simultaneously. DHCP, for example, uses UDP broadcasts because the client does not yet have an IP address to establish a TCP connection.
- Simple request-response protocols: TFTP uses UDP because it implements its own simple stop-and-wait acknowledgment mechanism. SNMP traps use UDP because they are fire-and-forget notifications.
- High-volume telemetry: NTP time synchronization uses UDP because a dropped packet simply means a missed time update, which is acceptable.
Exam Tip: Know Which Applications Use UDP
The CCNA exam frequently tests whether you know which protocol specific applications use. The key UDP applications to memorize are: DNS (53), DHCP (67/68), TFTP (69), NTP (123), SNMP (161), SNMP Trap (162), Syslog (514), VoIP/RTP (various), and video streaming. Remember: anything real-time or broadcast-based typically uses UDP. Anything that requires guaranteed delivery (web browsing, email, file transfer) uses TCP.
4. TCP Header Fields Explained
The TCP header is a minimum of 20 bytes long (without options) and contains all the control information TCP needs to provide its reliable, ordered, connection-oriented service. Understanding each field helps you understand why TCP works the way it does.
| Field | Size | Description |
|---|---|---|
| Source Port | 16 bits | Port number of the sending application. For clients, typically a dynamic/ephemeral port (49152-65535). |
| Destination Port | 16 bits | Port number of the receiving application. For servers, typically a well-known port (e.g., 80 for HTTP, 443 for HTTPS). |
| Sequence Number | 32 bits | Identifies the byte offset of the first byte of data in this segment. Used for ordering and acknowledgment. |
| Acknowledgment Number | 32 bits | The next sequence number the sender of this segment expects to receive. Only valid when the ACK flag is set. |
| Data Offset (Header Length) | 4 bits | Specifies the size of the TCP header in 32-bit words, indicating where data begins. Minimum value is 5 (20 bytes). |
| Reserved | 6 bits | Reserved for future use; must be zero. |
| URG Flag | 1 bit | Urgent: Indicates the Urgent Pointer field is significant. Marks data as urgent. |
| ACK Flag | 1 bit | Acknowledge: Indicates the Acknowledgment Number field is valid. Set in all segments except the initial SYN. |
| PSH Flag | 1 bit | Push: Tells the receiver to pass data to the application immediately without waiting to fill a buffer. |
| RST Flag | 1 bit | Reset: Abruptly terminates a connection. Used when an error occurs or an unexpected packet arrives. |
| SYN Flag | 1 bit | Synchronize: Used during connection establishment to synchronize sequence numbers. |
| FIN Flag | 1 bit | Finish: Indicates the sender has finished sending data and wishes to close the connection. |
| Window Size | 16 bits | The number of bytes the receiver is willing to accept before requiring an acknowledgment. Used for flow control. |
| Checksum | 16 bits | Error-detection field covering the TCP header and data, plus a pseudo-header from the IP layer. |
| Urgent Pointer | 16 bits | Points to the end of urgent data in the segment. Only valid when URG flag is set. |
| Options | 0-320 bits | Variable-length options such as Maximum Segment Size (MSS), Window Scale, SACK, and timestamps. |
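One way to internalize the layout is to build and re-parse a minimal 20-byte header by hand. A Python sketch using the field sizes from the table (all values are illustrative; the checksum is left at zero here, whereas a real stack computes it over header, data, and pseudo-header):

```python
import struct

# Build a minimal 20-byte TCP header in network byte order.
header = struct.pack(
    "!HHIIBBHHH",
    49200,    # source port (ephemeral)
    80,       # destination port (HTTP)
    100,      # sequence number
    0,        # acknowledgment number (not meaningful: ACK flag is clear)
    5 << 4,   # data offset = 5 words (20 bytes), stored in the upper 4 bits
    0x02,     # flags byte: SYN only (FIN=0x01, SYN=0x02, RST=0x04, PSH=0x08, ACK=0x10, URG=0x20)
    65535,    # window size
    0,        # checksum (placeholder)
    0,        # urgent pointer
)
sport, dport, seq, ack, offset, flags, window, csum, urg = struct.unpack("!HHIIBBHHH", header)

assert len(header) == 20
assert (offset >> 4) * 4 == 20   # header length in bytes
assert flags & 0x02              # SYN bit is set
```

This is how tools like Wireshark decode captures: fixed offsets, network byte order, and the Data Offset field telling the parser where options end and data begins.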
Mnemonic: TCP Control Flags — "Unskilled Attackers Pester Real Security Folk"
To remember the six original TCP control flags in order:
Unskilled = URG (Urgent)
Attackers = ACK (Acknowledge)
Pester = PSH (Push)
Real = RST (Reset)
Security = SYN (Synchronize)
Folk = FIN (Finish)
5. UDP Header Fields Explained
UDP's header is remarkably simple — only 8 bytes total, compared to TCP's minimum 20 bytes. This simplicity is a direct result of UDP not providing reliability, ordering, connection management, or flow control. There are only four fields, each 16 bits wide.
| Field | Size | Description |
|---|---|---|
| Source Port | 16 bits | Port number of the sending application. Optional in UDP — may be set to zero if not used. |
| Destination Port | 16 bits | Port number of the receiving application. This is how the OS knows which application gets the datagram. |
| Length | 16 bits | Total length of the UDP header plus data in bytes. Minimum value is 8 (header only, no data). |
| Checksum | 16 bits | Error detection over the UDP header and data. Optional in IPv4 (but recommended), mandatory in IPv6. |
UDP vs TCP Header Overhead Comparison
UDP header: 8 bytes fixed — Source Port (2), Destination Port (2), Length (2), Checksum (2)
TCP header: 20-60 bytes — Source Port (2), Destination Port (2), Sequence Number (4), Acknowledgment Number (4), Data Offset + Reserved + Flags (2), Window Size (2), Checksum (2), Urgent Pointer (2), Options (0-40)
Savings per datagram: at least 12 bytes per packet. For a small DNS query carrying roughly 50 bytes of payload, that saving amounts to about 24% of the payload size.
No connection state: TCP requires both endpoints to maintain state tables for each connection. UDP requires zero state at the transport layer.
The UDP checksum is computed over a "pseudo-header" that includes source IP, destination IP, protocol number, and UDP length — similar to TCP. In IPv4, the checksum is technically optional (a value of zero means no checksum), though virtually all modern implementations compute it. In IPv6, the UDP checksum is mandatory because IPv6 removed the header checksum from the IP header itself.
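Because all four fields are 16 bits, the whole header fits in a single struct format string. A Python sketch (port numbers and payload are illustrative):

```python
import struct

# Build and parse an 8-byte UDP header; all four fields are 16 bits wide.
payload = b"example"  # illustrative 7-byte payload
header = struct.pack(
    "!HHHH",
    49300,             # source port (ephemeral)
    53,                # destination port (DNS-style)
    8 + len(payload),  # Length = header plus data
    0,                 # checksum 0 = "not computed" (legal in IPv4 only)
)
datagram = header + payload

sport, dport, length, csum = struct.unpack("!HHHH", datagram[:8])
assert length == len(datagram)   # the Length field covers header + data
```

Contrast the single `"!HHHH"` format with the TCP format string above — the entire UDP header is smaller than TCP's sequence and acknowledgment numbers combined.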
6. Port Numbers: Identifying Applications
Port numbers are 16-bit integers ranging from 0 to 65535. They are the mechanism by which TCP and UDP enable multiple applications to share a single network connection simultaneously. The combination of an IP address and a port number is called a socket. A TCP connection is uniquely identified by a four-tuple: (Source IP, Source Port, Destination IP, Destination Port).
Port Number Ranges
| Range | Name | Description |
|---|---|---|
| 0 – 1023 | Well-Known Ports | Reserved for common, standardized services. Assigned and maintained by IANA. Servers listen on these ports. Requires root/administrator privileges to bind on most systems. |
| 1024 – 49151 | Registered Ports | Registered by vendors for specific applications. Less strictly controlled than well-known ports. Examples: 3389 (RDP), 8080 (HTTP alternate), 3306 (MySQL). |
| 49152 – 65535 | Dynamic / Ephemeral Ports | Randomly assigned by the OS to client applications when they initiate a connection. Also called private or temporary ports. After the connection closes, the port is returned to the available pool. |
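Ephemeral assignment is directly observable: binding to port 0 asks the OS to pick a free port from its dynamic range. A Python sketch (note the exact range is OS-configurable — Linux defaults to 32768-60999 rather than the IANA range of 49152-65535):

```python
import socket

# Bind three sockets to port 0 and inspect the ports the OS assigned.
socks = [socket.socket(socket.AF_INET, socket.SOCK_STREAM) for _ in range(3)]
ports = []
for s in socks:
    s.bind(("127.0.0.1", 0))           # port 0 = "pick any free port for me"
    ports.append(s.getsockname()[1])
for s in socks:
    s.close()

print(ports)                           # three distinct OS-chosen ports
assert len(set(ports)) == 3            # each socket gets its own port
assert all(p > 1023 for p in ports)    # never a well-known port
```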
Essential Port Numbers for the CCNA Exam
| Port | Protocol | Service | Notes |
|---|---|---|---|
| 20 | TCP | FTP Data | Active mode FTP data channel. The actual file data travels on this port. |
| 21 | TCP | FTP Control | FTP command and control channel. Login, directory listing commands go here. |
| 22 | TCP | SSH | Secure Shell — encrypted remote management. Replaced Telnet for secure access. |
| 23 | TCP | Telnet | Unencrypted remote management. Deprecated for security reasons; avoid in production. |
| 25 | TCP | SMTP | Simple Mail Transfer Protocol — sends email between mail servers. |
| 53 | TCP & UDP | DNS | UDP for standard queries (<512 bytes). TCP for zone transfers and large responses (>512 bytes). |
| 67 | UDP | DHCP Server | DHCP server listens on this port to receive client requests. |
| 68 | UDP | DHCP Client | DHCP client listens on this port to receive server offers and acknowledgments. |
| 69 | UDP | TFTP | Trivial File Transfer Protocol — simple file transfer, no authentication. Used for IOS images, configs. |
| 80 | TCP | HTTP | Hypertext Transfer Protocol — unencrypted web traffic. |
| 110 | TCP | POP3 | Post Office Protocol v3 — downloads email from server to client. |
| 123 | UDP | NTP | Network Time Protocol — synchronizes clocks. Uses UDP because dropped packets are tolerable. |
| 143 | TCP | IMAP | Internet Message Access Protocol — manages email on the server; keeps mail server-side. |
| 161 | UDP | SNMP | Simple Network Management Protocol — manager polls agents for device statistics. |
| 162 | UDP | SNMP Trap | Agent sends unsolicited notifications (traps) to the manager when events occur. |
| 179 | TCP | BGP | Border Gateway Protocol — routing protocol between autonomous systems. Uses TCP for reliability. |
| 443 | TCP | HTTPS | HTTP over TLS/SSL — encrypted web traffic. Default for modern web browsing. |
| 514 | UDP | Syslog | System logging protocol — sends log messages to a central syslog server. |
| 3389 | TCP | RDP | Remote Desktop Protocol — Microsoft's graphical remote access protocol. |
| 8080 | TCP | HTTP Alternate | Alternative HTTP port, commonly used for web proxies and development servers. |
Exam Tip: DNS Uses Both TCP and UDP on Port 53
One of the most common trick questions on the CCNA exam involves DNS port 53. DNS uses both TCP and UDP on port 53. UDP port 53 is used for standard DNS queries where the response is small enough to fit in a single packet (under 512 bytes, or up to 4096 bytes with EDNS0). TCP port 53 is used for DNS zone transfers (AXFR) — when an authoritative server replicates its entire zone database to a secondary server — and for any DNS response that exceeds the UDP size limit. Never answer "DNS uses only UDP" — it uses both.
Key Point: How the OS Uses Port Numbers
When a packet arrives at a host, the OS examines the destination port number to determine which application should receive it. The OS maintains a table of active sockets — combinations of IP address, port, and protocol — and delivers incoming data to the correct application process. When you open a web browser and make two separate HTTP requests to the same server, the OS keeps them separate using different ephemeral source port numbers (e.g., 50234 and 50235), even though both connect to destination port 80.
7. TCP Flow Control and Windowing
TCP's flow control mechanism prevents a fast sender from overwhelming a slow receiver. If a sender transmits data faster than the receiver can process it, the receiver's buffer fills up and data is dropped — causing retransmissions that waste bandwidth and increase latency. TCP's solution is the sliding window mechanism.
The Receive Window
Every TCP segment includes a Window Size field in its header. This field is set by the receiver to advertise how many bytes the receiver is willing to accept before requiring an acknowledgment. This is called the receive window (or rwnd). The sender must not transmit more unacknowledged data than the receiver's advertised window size.
The window "slides" forward as the receiver acknowledges data. As the sender gets ACKs, it can send new segments to keep the pipeline full. The window size dynamically adjusts throughout the connection based on the receiver's current buffer availability.
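The sender-side rule — never more unacknowledged data in flight than the advertised window — reduces to simple arithmetic. A toy Python sketch (byte counts are illustrative):

```python
# Toy sliding-window sender: at most `window` unacknowledged bytes in flight.
def sendable(next_seq, last_acked, window):
    """Bytes the sender may still transmit right now."""
    in_flight = next_seq - last_acked
    return max(0, window - in_flight)

assert sendable(next_seq=1000, last_acked=400, window=1000) == 400
assert sendable(next_seq=1400, last_acked=400, window=1000) == 0  # window full: wait for ACKs
# An arriving ACK slides the window forward, permitting new data.
assert sendable(next_seq=1400, last_acked=900, window=1000) == 500
```

The last two assertions show the "slide": the sender was stalled at zero, and a single ACK covering 500 bytes immediately frees 500 bytes of sending capacity.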
TCP Slow Start
When a TCP connection first starts, the sender does not immediately use the full window size. Instead, TCP implements a slow start algorithm to avoid congesting the network. The sender begins with a small congestion window (cwnd) — typically 1 to 10 Maximum Segment Sizes (MSS) — and doubles it with each round-trip time (RTT) as long as no packet loss is detected.
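Slow start's growth pattern can be traced in a few lines. This Python sketch uses illustrative initial-window and threshold values; real stacks additionally react to loss with fast retransmit and fast recovery:

```python
# Toy slow-start trace: cwnd (in MSS units) doubles per RTT until the
# slow start threshold, then grows by 1 MSS per RTT (congestion avoidance).
def cwnd_trace(initial=1, ssthresh=16, rtts=8):
    cwnd, trace = initial, []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(cwnd_trace())   # exponential growth, then linear after ssthresh
```

Despite the name, "slow start" ramps up exponentially — it is slow only relative to blasting a full window immediately.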
Window Scaling
The TCP Window Size field is only 16 bits, limiting the window to a maximum of 65,535 bytes. On high-speed networks (gigabit and above) or long-latency connections (satellite links), this limitation becomes a bottleneck. The Window Scale option (RFC 7323) allows the window size to be scaled up by a factor of 2^n (up to 2^14), enabling window sizes up to about 1 gigabyte. The Window Scale option is negotiated during the three-way handshake.
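The arithmetic behind the roughly-1-gigabyte figure:

```python
# Maximum receive window without and with the Window Scale option (RFC 7323).
base_max = 2**16 - 1          # 65,535 bytes: the 16-bit Window Size ceiling
scaled_max = base_max << 14   # the largest permitted scale shift is 14
print(scaled_max)             # 1,073,725,440 bytes — just under 1 GiB
assert scaled_max == 65535 * 2**14
```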
Flow Control vs Congestion Control
Flow Control: Prevents the sender from overwhelming the receiver. The receiver advertises its buffer capacity using the Window Size field. This is an end-to-end mechanism between the two communicating hosts.
Congestion Control: Prevents the sender from overwhelming the network (routers and links between sender and receiver). Uses algorithms like Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery. This is based on the sender detecting packet loss as a signal of network congestion.
Both mechanisms work simultaneously. The sender's effective transmission rate is limited to the minimum of the receive window and the congestion window.
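The interaction of the two mechanisms reduces to taking a minimum. A Python sketch with illustrative window values:

```python
# The sender's effective limit is the smaller of the two windows.
def effective_window(rwnd, cwnd):
    return min(rwnd, cwnd)

assert effective_window(rwnd=65535, cwnd=14600) == 14600  # network is the bottleneck
assert effective_window(rwnd=8192,  cwnd=14600) == 8192   # receiver is the bottleneck
```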
8. Choosing TCP vs UDP: The Decision Guide
Selecting the right transport protocol is a fundamental networking design decision. The choice directly impacts application performance, reliability, and complexity. Here is a comprehensive comparison followed by specific application examples with reasoning.
| Characteristic | TCP | UDP |
|---|---|---|
| Connection Setup | Required (3-way handshake) | None — send immediately |
| Reliability | Guaranteed delivery of all data | Best-effort, no guarantee |
| Ordering | Segments delivered in order | May arrive out of order |
| Retransmission | Automatic retransmission of lost segments | No retransmission |
| Flow Control | Yes — sliding window | No |
| Congestion Control | Yes — slow start, AIMD | No |
| Header Size | 20-60 bytes | 8 bytes (fixed) |
| Connection State | Stateful — both endpoints track state | Stateless |
| Speed/Overhead | Higher overhead, lower throughput for small data | Low overhead, minimal latency |
| Error Checking | Checksum + retransmission on error | Checksum only (no correction) |
| Broadcast Support | No — point-to-point only | Yes — supports broadcast and multicast |
| Full-Duplex | Yes | Yes |
| PDU Name | Segment | Datagram |
| RFC | RFC 9293 | RFC 768 |
Real-World Application Protocol Selection
| Application | Protocol | Reason |
|---|---|---|
| Web browsing (HTTP/HTTPS) | TCP | Web pages must arrive completely and correctly. A missing image or corrupted JavaScript would break the page. Reliability is essential. |
| VoIP / Voice calls | UDP | Real-time audio cannot tolerate the delay of retransmission. A 200ms delayed audio packet is more disruptive than a brief glitch. The application handles concealment of lost packets. |
| Video streaming (Netflix/YouTube) | TCP (buffered) or UDP (live) | Buffered video (VOD) uses TCP — the extra latency of retransmission is hidden by the buffer. Live streaming uses UDP — real-time delivery beats reliability. |
| DNS queries | UDP (queries), TCP (zone transfers) | DNS queries are small and need fast responses. UDP eliminates connection setup overhead. Zone transfers need guaranteed delivery of entire database. |
| Email (SMTP, IMAP, POP3) | TCP | Email must arrive completely and without corruption. Users would be upset if emails arrived garbled or incomplete. |
| File transfer (FTP, SCP, SFTP) | TCP | Files must be transferred completely and without data corruption. A partially transferred executable or archive is useless or dangerous. |
| Online gaming | UDP | Game state updates (player positions, events) must be delivered in real time. Stale retransmitted data about where a player was 100ms ago is worthless. |
| DHCP | UDP | Client has no IP address yet and cannot establish a TCP connection. DHCP uses UDP broadcasts on ports 67/68. |
| TFTP (IOS image transfer) | UDP | Simple protocol using UDP with its own stop-and-wait ACK mechanism. No authentication needed, useful for network booting. |
| BGP routing updates | TCP | BGP routing tables must be reliably maintained. A dropped update could cause routing loops or black holes. BGP uses TCP port 179. |
| SNMP monitoring | UDP | Management queries are simple request-response. Dropped polls are simply retried. Trap notifications are fire-and-forget. |
| NTP time sync | UDP | Time synchronization packets are sent frequently. A missed sync is acceptable since the next one arrives shortly. |
Exam Tip: VoIP Uses UDP
The CCNA exam may ask which protocol VoIP uses. The answer is UDP. VoIP uses Real-time Transport Protocol (RTP) which runs over UDP. The reason is that voice communication is extremely latency-sensitive — humans detect audio delays greater than 150ms. If a voice packet is lost in transit, TCP would retransmit it, but by the time the retransmitted packet arrived (one full RTT later), it would be too late to play it back at the right time. It is better to have a tiny glitch in the audio than to have the entire conversation delayed waiting for retransmission. VoIP codecs include packet loss concealment algorithms to handle occasional dropped UDP packets gracefully.
9. CCNA Exam Tips for TCP and UDP
The following exam tips are based on the actual CCNA 200-301 exam objectives and common question patterns. Study these carefully before your exam date.
Exam Tip 1: The Three-Way Handshake Sequence
- Always remember the exact sequence: SYN → SYN-ACK → ACK. Never mix up the order.
- Only the first SYN has no ACK flag set. Every subsequent segment in a TCP connection (including SYN-ACK) has the ACK flag set.
- The ACK number is always the next sequence number the receiver expects. During the handshake that means the other side's ISN + 1 (a SYN consumes one sequence number), i.e. "I received byte X, now send me X+1."
- The connection teardown uses FIN → ACK → FIN → ACK (four steps, not two) because each side independently closes its half of the connection.
- RST immediately terminates a connection without a graceful teardown. You may see this in Wireshark when a connection is refused.
Exam Tip 2: Port Number Ranges and Well-Known Ports
- Well-known ports: 0-1023 — memorize the most common ones listed in the table above.
- Registered ports: 1024-49151 — vendor-assigned ports like 3389 (RDP).
- Ephemeral ports: 49152-65535 — dynamically assigned to clients.
- The highest possible port number is 65535 (2^16 - 1). This is a common fill-in-the-blank question.
- The exam may ask you to identify whether a port number is in the well-known, registered, or ephemeral range — know the boundaries.
- FTP uses TWO ports: 21 for control and 20 for data. Knowing both is frequently tested.
- DHCP uses BOTH port 67 (server) AND 68 (client). Know which is which.
Exam Tip 3: DNS Uses Both TCP and UDP on Port 53
- DNS queries (client asking a resolver) use UDP port 53.
- DNS zone transfers (between authoritative servers) use TCP port 53.
- Large DNS responses that exceed 512 bytes also fall back to TCP.
- If the exam asks "What port does DNS use?" the answer is 53, and if asked protocol, say "both TCP and UDP."
- Do NOT write an ACL that only permits TCP 53 — this will break DNS queries which use UDP 53.
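On Cisco IOS, this tip translates into two ACL entries rather than one. A sketch using a numbered extended ACL (the ACL number 101 is illustrative):

```
! Standard DNS queries use UDP 53
access-list 101 permit udp any any eq 53
! Zone transfers and large responses use TCP 53
access-list 101 permit tcp any any eq 53
```

Omitting the UDP line breaks ordinary name resolution; omitting the TCP line breaks zone transfers and oversized responses.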
Exam Tip 4: Identifying TCP vs UDP by Application Characteristics
- If an application requires guaranteed delivery, it uses TCP: web browsing, email, file transfer, remote login (SSH/Telnet).
- If an application requires low latency and can tolerate loss, it uses UDP: VoIP, streaming, online gaming, DNS, DHCP, TFTP, NTP, SNMP.
- The exam may describe a scenario and ask which protocol to use. Look for keywords: "real-time" and "latency-sensitive" point to UDP; "reliable" and "must not lose data" point to TCP.
- BGP is the one routing protocol here that uses TCP (port 179). OSPF and EIGRP run directly over IP (protocol numbers 89 and 88, respectively), while RIP uses UDP port 520.
Exam Tip 5: TCP Windowing and Flow Control Concepts
- The window size field in the TCP header controls flow control. A larger window means the sender can transmit more data before waiting for ACKs.
- If the receiver's buffer is full, it advertises a window size of zero — this is called a "zero window" and tells the sender to stop transmitting until further notice.
- Window sizes are negotiated dynamically throughout the connection — they are not fixed at the start.
- Slow start begins with a small congestion window and grows exponentially until the slow start threshold (ssthresh) is reached, then grows linearly.
- Packet loss is interpreted by TCP as a sign of congestion, triggering a reduction in the congestion window.
10. Summary Comparison Table: TCP vs UDP
Use this final comparison table as a quick reference when reviewing for the exam. Memorizing these key distinctions will prepare you for any TCP vs UDP question on the CCNA 200-301.
| Feature / Dimension | TCP | UDP |
|---|---|---|
| Full Name | Transmission Control Protocol | User Datagram Protocol |
| RFC | RFC 9293 (original: RFC 793) | RFC 768 |
| OSI Layer | Layer 4 — Transport | Layer 4 — Transport |
| Connection Type | Connection-oriented | Connectionless |
| Handshake | Three-way (SYN, SYN-ACK, ACK) | None |
| Reliability | Guaranteed delivery | Best-effort only |
| Sequencing | Yes — sequence numbers on every byte | No |
| Acknowledgment | Yes — cumulative ACKs | No |
| Retransmission | Yes — automatic on timeout | No |
| Ordering | In-order delivery guaranteed | May arrive out of order |
| Flow Control | Yes — sliding window (receive window) | No |
| Congestion Control | Yes — slow start, congestion avoidance | No |
| Header Size | 20-60 bytes | 8 bytes (fixed) |
| PDU Name | Segment | Datagram |
| Speed | Slower (overhead of reliability) | Faster (minimal overhead) |
| Broadcast Support | No | Yes |
| Multicast Support | No | Yes |
| Checksum | Yes (mandatory) | Yes (optional in IPv4, mandatory in IPv6) |
| Applications | HTTP, HTTPS, SSH, Telnet, FTP, SMTP, IMAP, POP3, BGP | DNS (queries), DHCP, TFTP, NTP, SNMP, Syslog, VoIP, Video streaming |
| Use When | Data integrity and completeness are critical | Speed and low latency are critical; application can handle loss |
Mnemonic: UDP Applications — "DNS Does Trivially Need Simple Network Streaming"
To remember the most important UDP applications:
DNS — Domain Name System (port 53)
DHCP — Dynamic Host Configuration Protocol (ports 67/68)
Trivially = TFTP — Trivial File Transfer Protocol (port 69)
Need = NTP — Network Time Protocol (port 123)
Simple = SNMP — Simple Network Management Protocol (port 161/162)
Network = No connection = UDP characteristic
Streaming = Video/Voice streaming over RTP
Key Point: Putting It All Together
The Transport Layer is where networking becomes truly application-aware. TCP and UDP are both indispensable — TCP provides the reliable foundation that makes the web, email, and file transfer work correctly, while UDP provides the low-latency backbone that makes real-time communication, time synchronization, and network management practical. As a network engineer, your job is to ensure both protocols flow correctly through your network infrastructure, which means understanding firewall rules, access control lists, and QoS policies that must handle both TCP and UDP traffic appropriately for different applications.
Mastering TCP and UDP is not just about passing the CCNA exam — it is foundational knowledge that you will apply every single day as a network engineer. When troubleshooting connectivity, you will use telnet <IP> <port> to test TCP connectivity. When analyzing traffic in Wireshark, you will identify protocols by their port numbers and flags. When writing access control lists on Cisco routers and switches, you will specify whether to permit or deny TCP or UDP traffic on specific ports. This knowledge is truly fundamental.