Understanding Transport Layer Basics – Experiential Learning Series

TCP Streaming

 
TCP at the transport layer is implemented only in end systems. Intermediate routers do not implement TCP (for routers, the network stack is limited to just the link layer and the network layer) and are essentially unaware[1] of TCP data packets.

Reliable delivery in TCP means that it will try to recover from any error that occurs, either at the underlying network layer or at the transport layer, and deliver error-free data at the other end. As we have seen, errors in message communication can occur due to packet loss, packet corruption and/or packet duplication. When TCP receives a corrupted packet (i.e., its own computation of the checksum differs from the checksum in the TCP header), it simply discards the packet and treats it on par with packet loss. The TCP sender waits for acknowledgement of the data it has sent. When it does not receive an acknowledgement within a certain timeout period, it assumes that the packet is lost and retransmits it. Each time it retransmits the packet, it doubles the timeout value. This is because TCP assumes that packets are being lost due to congestion in the network; to ensure that retransmission does not add to the existing congestion, it slows down the rate of retransmission by doubling the timeout period.

Since the network may still be congested, retransmitted packets may also be lost. Successive retransmissions with doubled timeouts continue for a limited number of attempts (as configured/implemented in the TCP stack). If none of the retransmitted packets is acknowledged, TCP finally assumes that the network has encountered a major disturbance and breaks the connection. In such a situation, packet delivery naturally fails, and hence it is erroneous to believe that TCP can guarantee packet delivery. TCP reliability has to be understood in this context. TCP does, however, guarantee that if and when it succeeds in delivering data to the receiver application, the data will be delivered in order, error-free and without any duplication. (There is no guarantee that all the data it has received from the sender will be delivered to the receiver.)
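The doubling of the retransmission timeout is classic exponential backoff. The following minimal Python sketch (an illustration of the idea, not real TCP stack code; the initial value and retry count are assumed, since actual values are implementation-dependent) computes the successive timeout values a sender would use:

```python
def retransmission_schedule(initial_rto_ms, max_retries):
    """Return the successive timeout values (in milliseconds) a TCP-like
    sender would use, doubling the timeout after every unacknowledged
    retransmission (exponential backoff)."""
    timeouts = []
    rto = initial_rto_ms
    for _ in range(max_retries):
        timeouts.append(rto)
        rto *= 2  # double the timeout before the next retransmission
    return timeouts

# Starting from an assumed 200 ms timeout, six attempts:
print(retransmission_schedule(200, 6))  # [200, 400, 800, 1600, 3200, 6400]
```

After the configured number of attempts with no acknowledgement, the stack gives up and breaks the connection, as described above.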
TCP is also characterized as a streaming service, which means that it treats the entire input from the sender as a stream of bytes. In particular, it does not distinguish between the different data segments given to it by the sender application. For example, if the sender sends 100 bytes in the first send request, 200 bytes in the second and 250 bytes in the third, TCP treats all of this data as a single stream of 550 bytes. The receiver application at the other end cannot tell whether the sender sent three segments as above, a single data segment of 550 bytes, or 550 segments of 1 byte each. For TCP, the entire data is a stream of bytes and is delivered as such at the receiving end. All that TCP guarantees is that the receiving application will receive these bytes in the order sent, i.e., byte number N+1 will be given to the application after byte number N and before byte number N+2, for all values of N.

The TCP byte-streaming concept between two end points (applications) can be understood using the following analogy. Consider a water pipe connecting two containers CA and CB so that water can flow from CA to CB. Now, water can be poured in at CA one bottle at a time, or one bucket at a time, or one spoon at a time, etc. When the water comes out at CB’s end, the water pipe does not provide any information about how the water was poured in at CA’s side – it just arrives as a stream of water drops. Further, the water pipe does not guarantee that all the water poured in at CA will be delivered at CB (e.g., the pipe may break).
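This loss of send boundaries can be observed directly on the loopback interface. The following is a minimal Python sketch (our own illustration, not the exercise code): the client issues three send requests of 100, 200 and 250 bytes, yet the receiver sees only a single undifferentiated stream of 550 bytes.

```python
import socket
import threading

def client(port):
    # Three separate send requests: 100, 200 and 250 bytes.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"A" * 100)
        s.sendall(b"B" * 200)
        s.sendall(b"C" * 250)
    # Closing the socket lets the server detect end-of-stream.

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=client, args=(port,))
t.start()

conn, _ = listener.accept()
received = b""
while True:
    chunk = conn.recv(4096)       # read whatever the stream delivers
    if not chunk:
        break                     # empty read: sender closed the stream
    received += chunk
conn.close()
listener.close()
t.join()

print(len(received))              # 550 bytes of in-order data, with no
                                  # record of the three original sends
```

The receiver can verify that the 550 bytes arrive in order, but nothing in the stream marks where one send request ended and the next began.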

Now let us explore this concept with the experimental exercise below. Consider the setup shown in Figure 1, and repeat the exercise discussed in UDP Message Orientation. The Client program sends C distinct messages at an interval of D seconds, where the size of each message is X bytes. The first message consists of the letter ‘A’ repeated X times, the second message consists of the letter ‘B’ repeated X times, and so on. The Server program reads Y bytes at a time from the TCP socket and displays them on the console. As an example, suppose X = 50 and Y = 30. When the Server invokes the first read, it will read 30 letters ‘A’. When it invokes the next read, it will get the remaining 20 letters ‘A’, because TCP is a streaming protocol and all bytes are delivered as a single stream. On the next read, it will get 30 ‘B’s, and so on. If the Server program is changed to use Y = 5, it will read 5 letters at a time: the first read request will return 5 letters ‘A’, the second read the next 5 letters ‘A’, and so on, until the tenth read request returns the last 5 letters ‘A’. The next read will return 5 letters ‘B’, and this will repeat 9 more times, and so on. A similar pattern can be observed with Y = 1. In all three exercises, the Client (sender) program remains the same. This demonstrates that TCP is a streaming protocol and does not honour the message boundaries of data sent by the sender application.
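The fixed-size-read behaviour can be sketched in Python on the loopback interface (again an illustration, not the exercise code). Here X = 50 and Y = 30, with two messages. The helper read_exact is our own illustrative function that loops until exactly Y bytes arrive, so the read pattern is deterministic; a bare recv(Y) may legitimately return fewer bytes depending on timing, which is the behaviour described above. Note how the second read mixes the tail of the ‘A’ message with the start of the ‘B’ message: the boundary is simply not there.

```python
import socket
import threading

def read_exact(conn, n):
    """Read exactly n bytes, or fewer only if the peer closes the stream."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            break                 # peer closed the connection
        data += chunk
    return data

def client(port):
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"A" * 50)      # first message  (X = 50)
        s.sendall(b"B" * 50)      # second message (X = 50)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=client, args=(listener.getsockname()[1],))
t.start()

conn, _ = listener.accept()
reads = [read_exact(conn, 30) for _ in range(4)]   # 30+30+30+10 = 100 bytes
conn.close()
listener.close()
t.join()

for r in reads:
    print(r.decode())
# AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
# AAAAAAAAAAAAAAAAAAAABBBBBBBBBB
# BBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
# BBBBBBBBBB
```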

The steps for experimental exercises to experience TCP streaming are detailed in Exercise 4.

TCP Packet Loss, Timeouts and Recovery

 
In addition to providing a streaming service for data delivery, TCP also provides reliable communication, in the sense that if any packets are lost or corrupted in transit, TCP will retransmit them so as to ensure that the receiver receives the byte stream in-order and uncorrupted. To understand TCP reliable delivery, let us re-implement the exercise UDP Packet Loss using TCP. Suppose the Client application sends 10 TCP data packets (segments) at an interval of 5 seconds between successive transmissions, starting at time T0. Thus, the tenth data packet will be transmitted at time T0+45. Now suppose we break the link between switches S1 and S2 at time T0+16 (say), and restore this link at time T0+32 (say). At time T0+20, when the Client sends the fifth packet, it will be dropped at switch S1, since there is no link to S2. Thus, this packet will not reach server Hs. Hence, client Hc will not receive any acknowledgement and, after the timeout (likely a few milliseconds, since the earlier deliveries and acknowledgements would have completed within a few milliseconds in a LAN setup), the Client will retransmit the packet and double the timeout value. When the acknowledgement for the retransmitted packet also does not arrive, the Client will retransmit again, doubling the acknowledgement timeout once more. The tools tcpdump or wireshark can be used (at the Sender side) to verify these retransmissions at increasing (doubled) intervals. All this retransmission occurs at the TCP layer, and the Client application is unaware of it. At time T0+25, the Client application will send the sixth data segment containing the letter ‘F’, which will lie in the TCP buffer at the Client host. Similarly, at time T0+30, the Client application will send the seventh data segment containing the letter ‘G’, which will again be stored in the TCP connection buffer.
Depending upon the TCP flow control window size, these buffered segments may also be transmitted along with the previous data during the next retransmission. When the next retransmission takes place after time T0+32 (i.e., after the link is restored), the Server (receiver) will receive all the data segments (fifth to seventh) and will send acknowledgements for all the data received so far. TCP uses cumulative acknowledgement, so the acknowledgement from the receiver will be for the seventh segment, which also implies that the fifth and sixth segments have been received. With this acknowledgement, the TCP timeout value will be reset based on the response time of the retransmitted packet, and subsequent TCP data segments will be transmitted normally, as data segments one to four were.
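The cumulative acknowledgement idea can be modelled in a few lines of Python. This is a toy model of the concept, not the real TCP ACK machinery: each received segment is a (first_byte, length) pair, and the receiver acknowledges the next byte it expects, which implicitly confirms everything before it. The sequence numbers below are illustrative.

```python
def cumulative_ack(received_segments, initial_seq=0):
    """Return the next expected byte number, given the segments received
    so far as (first_byte, length) pairs. Acknowledging this number
    implicitly acknowledges every byte before it."""
    expected = initial_seq
    for first, length in sorted(received_segments):
        if first > expected:
            break                       # gap in the stream: stop here
        expected = max(expected, first + length)
    return expected

# Segments five to seven (bytes 400..699, 100 bytes each) all arrive
# together after the link is restored; a single ACK for byte 700 then
# covers all of them at once.
segments = [(0, 100), (100, 100), (200, 100), (300, 100),
            (400, 100), (500, 100), (600, 100)]
print(cumulative_ack(segments))         # 700

# While the fifth segment (bytes 400..499) is still missing, the ACK
# stays at 400 even though later segments are buffered.
print(cumulative_ack([(0, 100), (100, 100), (200, 100), (300, 100),
                      (500, 100), (600, 100)]))  # 400
```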

Analysis of the Server side behaviour will show that it receives the data segment containing ‘A’s at time T0, ‘B’s at T0+5, ‘C’s at T0+10, and ‘D’s at T0+15. During the time window from T0+16 to T0+32, no data is received. At time T0+32, it receives the ‘E’s, ‘F’s and ‘G’s all together (in order), and these are displayed on the Server application terminal. The tcpdump/wireshark captures can be analyzed to demonstrate how TCP implements reliable delivery on packet loss: the capture at the Client side shows multiple retransmissions, while at the Server side only one segment is received after time T0+32.

The experimental steps to understand packet loss and TCP recovery from packet loss are described in Exercise 5.

TCP Server Not Running

 
To develop a comprehensive understanding of TCP protocol basics, we also need to study what happens when the Server program crashes during communication, or when the Server application is not running when the Client starts communication. In all such cases, when the Server host receives a TCP packet for a destination port on which no application is ready to receive data (i.e., no application has bound to this port and is listening on it), the TCP stack generates a TCP Reset response. The TCP Reset is indicated by the flag bit ‘R’ in the TCP header. When the sender receives a segment with the Reset bit set to 1, the TCP stack at the sender host simply closes the connection, and the sender application receives an error (a system call error) when it next makes use of the associated socket.
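This behaviour is easy to observe from the client side. In the Python sketch below (our own illustration), we obtain a port number that is very likely to have no listener by binding to port 0 and immediately closing the socket (a best-effort way to find a free port, with a small race window). Connecting to that port makes the server's TCP stack answer the SYN with a Reset, which surfaces in the application as a "connection refused" error.

```python
import socket

# Best-effort: grab a free port number, then release it so that no
# process is listening on it when we connect.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

try:
    # No listener on free_port: the TCP stack at 127.0.0.1 answers the
    # SYN with a Reset (RST), and the connect attempt fails.
    socket.create_connection(("127.0.0.1", free_port), timeout=2)
    outcome = "connected"
except ConnectionRefusedError:
    outcome = "refused"   # the RST was translated into this error

print(outcome)            # refused
```

The same RST-driven error is what a Client in Exercise 6 sees when the Server program is not running.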

The set of steps to understand TCP Reset are described in Exercise 6.

Summary

 
We have explored the basic working of the transport layer protocols TCP and UDP in the TCP/IP stack, and discussed the key features of both. We have developed hands-on exercises to enhance understanding of UDP (with its unreliable delivery that nevertheless honours message boundaries) and TCP (to demonstrate how it provides a streaming service, reliable data transmission using retransmissions and timeouts, and in-order delivery). We also experimentally investigated the behaviour of both UDP and TCP when a Client application initiates communication with a Server program that is not running. These experimental exercises, with detailed steps, will help the reader assimilate these concepts clearly and develop a deeper understanding of these two transport layer protocols.

In the next article, we will discuss the TCP state transition diagram, and how TCP connections move from one state to another. The understanding of TCP connection states will be very useful for Information Technology professionals to diagnose and debug application connectivity when web applications behave unexpectedly.

[1] If intermediate devices implement Network Address Translation, then these devices may need to look up TCP port numbers.
