
Experiential Learning of Networking Technologies: Understanding Network Delays

Experiment 1: Transmission Delay

 

For this experiment, two machines (e.g., two desktops or laptops) are connected directly on an Ethernet LAN, as shown in Figure 1. By default, laptop/desktop network interfaces are configured to obtain an IP address via DHCP. Since there is no DHCP server in this setup, configure the addresses manually (i.e., assign static IP addresses).
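For illustration, assuming the Ethernet interface on both machines is eth0 (the interface name and addresses below are placeholders; adjust them to your setup), static addresses in the same subnet can be assigned on Ubuntu as follows:

sudo ip addr add 192.168.1.1/24 dev eth0   # on Machine 1
sudo ip addr add 192.168.1.2/24 dev eth0   # on Machine 2
sudo ip link set eth0 up                   # on both machines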

Figure 1: Initial setup for Experiment 1

In general, LAN connectivity by default is either 100 Mbps or 1 Gbps (1000 Mbps). To measure transmission delay with reasonable accuracy, the Ethernet interface needs to be configured with a link speed of 10 Mbps. On an Ubuntu machine with Ethernet interface eth0, we can use the following command (this command should be executed on both machines): sudo ethtool -s eth0 speed 10 duplex full
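To confirm that the new setting has taken effect, the current link parameters of the interface can be inspected (the exact output depends on the driver):

sudo ethtool eth0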

To measure the network delay, we will send request packets from one machine to another and receive the response packets (which echo back the request packets). The elapsed time corresponds to twice the end-to-end network delay. For our experiment, we will send five ping (ICMP) packets of various sizes between the two machines. In the terminal window of one of the machines, enter the following command, where N is the ICMP payload size in bytes and IP is the address of the other machine:
ping -c 5 -s N IP
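For example, with the placeholder addresses assigned earlier, sending five pings with a 200-byte payload from Machine 1 to Machine 2 would look like:

ping -c 5 -s 200 192.168.1.2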

 

Table-1: Response time between two directly connected hosts on a 10 Mbps Ethernet


Using the basic concepts discussed earlier, the theoretical transmission delay for N = 200×2 bytes (200 bytes each for the request and its echo reply), i.e., 400×8 = 3200 bits, with R = 10 Mbps (i.e., 10^7 bits per second) is 3200/10^7 = 0.32×10^-3 seconds, or 0.32 ms.
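The same arithmetic can also be scripted. A minimal sketch in Python (the values simply mirror the numbers used above):

# Transmission delay = number of bits sent / link rate in bits per second
extra_bits = 200 * 2 * 8        # 200 extra bytes each for the request and its echo
link_rate_bps = 10 * 10**6      # 10 Mbps
delay_ms = extra_bits / link_rate_bps * 1000
print(f"{delay_ms:.2f} ms")     # 0.32 ms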

Let us now compare this with the experimental data. Note that the times shown in Table-1 include all four sub-components of the end-to-end delay, although the propagation and queuing delays are negligible in this setup. The key insight for students is that increasing the packet size primarily increases the transmission delay. Thus, the transmission delay for 200×2 byte packets can be estimated by computing the difference between successive rows in Table-1 (as shown in the final column). The difference between the observed value of 0.42 ms and the computed value of 0.32 ms can be attributed partly to measurement error and partly to slightly different processing delays. The reader should note that an Ethernet link has a maximum packet size of 1500 bytes (MTU: Maximum Transmission Unit).
Thus, increasing the ping packet size beyond 1500 bytes would result in a ping packet being split into multiple Ethernet frames at the transmitting machine and reassembled at the receiving machine. This is likely to add more noise to the experimental measurements, but the reader is encouraged to explore this option after developing a basic understanding. This experiment makes it easy for the reader to see that transmission delay is directly proportional to packet size when the link bandwidth is fixed. By repeating this experiment with a bandwidth of 100 Mbps, students can grasp the relationship between transmission delay, packet size, and link bandwidth.
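For instance, repeating the calculation above with R = 100 Mbps (10^8 bits per second), the same 400-byte (3200-bit) increment would take 3200/10^8 seconds, or 0.032 ms, one tenth of the value at 10 Mbps.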
As an extension of this experiment, add one network element, such as a simple Ethernet switch (4-port or higher), between the two machines and again measure the ping response time. With the switch in place, the measured response time corresponds to transmissions over 4 links: Machine 1 to Switch, Switch to Machine 2, Machine 2 to Switch, and Switch to Machine 1. The results of our experiments with one intermediate switch added to the setup in Figure 1 are shown in Table-2.

Table-2: Response time between two hosts connected on a 10 Mbps Ethernet via a switch

 

Let us understand this increased transmission delay. The intermediate Ethernet switch works in store-and-forward mode, i.e., it forwards a packet onto the next link only after it has received the entire packet. Hence, each link contributes to the overall transmission delay. The transmission delay for 200-byte packets over the 4 link traversals would be 0.64 ms (twice the 0.32 ms delay computed earlier, assuming a host sends only one packet at a time to the other host). The values of 0.55 ms and 0.54 ms in the third and fifth rows can likely be attributed to measurement error. The average of these four values is 0.63 ms, which is very close to the theoretical expectation. The reader is encouraged to conduct the experiment multiple times with different packet sizes (less than 1500 bytes).
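The reasoning above generalizes easily. A small Python sketch (the helper function and values here are only illustrative):

def rtt_increment_ms(extra_bytes, link_rate_bps, links_each_way):
    # The extra bytes are transmitted once per link traversal, for both the
    # request and its echo reply (store-and-forward at the switch).
    traversals = links_each_way * 2
    return extra_bytes * 8 * traversals / link_rate_bps * 1000

print(f"{rtt_increment_ms(200, 10 * 10**6, 1):.2f} ms")  # direct cable: 0.32 ms
print(f"{rtt_increment_ms(200, 10 * 10**6, 2):.2f} ms")  # via one switch: 0.64 ms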

Experiment 2: Processing Delay

 

To understand processing delay, we use our own client and server programs (instead of ping) for data communication. This ensures that data is handled at the application level, where processing delay can be introduced in a controlled manner. The key part of the Python code (using the UDP protocol [2]) for a sample client application running on Machine 1 and a server application running on Machine 2 is given in Appendix I. To simulate computation, the server application sleeps for a specified time tsleep. (In an actual application, this time would be spent on some meaningful action depending on the business logic, such as interacting with a payment gateway or querying a database server.) The client sends several packets (e.g., 1000) and measures the response delay.
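The actual code used in our setup is given in Appendix I; purely as an illustration of the idea, a minimal UDP echo server with an artificial sleep might look like the following sketch (the port number and tsleep value are placeholders):

import socket
import time

T_SLEEP = 0.010     # tsleep = 10 ms of simulated processing per request
PORT = 9999         # placeholder port; must match the client

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))                  # listen on all interfaces of Machine 2

while True:
    data, addr = sock.recvfrom(2048)   # wait for a request packet
    time.sleep(T_SLEEP)                # emulate business-logic processing
    sock.sendto(data, addr)            # echo the payload back to the client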
The reader can observe how the delay varies as tsleep changes. In our experiments, using the setup of two machines connected via one switch, we observed a response time of 3.22 ms when tsleep = 0 ms and 13.23 ms when tsleep = 10 ms. The difference of 10.01 ms corresponds to the added processing delay. We recommend that the reader use a Wireshark capture [6] on the server to look at the time when the client's packet is received and the time when the response is transmitted by the server. A Wireshark capture at the client provides insight into the end-to-end network delay, because the time difference between a response packet and the corresponding request packet includes the propagation and transmission delays over all the links.
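For completeness, a matching client-side sketch that sends 1000 packets and reports the average response time (again illustrative only; SERVER_IP and PORT are placeholders for your setup):

import socket
import time

SERVER_IP = "192.168.1.2"   # assumed address of Machine 2
PORT = 9999                 # must match the server
N_PACKETS = 1000
PAYLOAD = b"x" * 200        # 200-byte payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)        # a lost packet will raise socket.timeout

total = 0.0
for _ in range(N_PACKETS):
    start = time.time()
    sock.sendto(PAYLOAD, (SERVER_IP, PORT))
    sock.recvfrom(2048)                # wait for the echoed response
    total += time.time() - start

print(f"average response time: {1000 * total / N_PACKETS:.2f} ms")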
