
Experiential Learning of Networking Technologies: Understanding Web Performance


Domain Sharding

A web browser makes a limited number of concurrent connections to a hostname. For example, the Firefox browser by default makes at most 6 connections to a single hostname. Domain sharding attempts to overcome this limitation by logically dividing the URLs of a website domain (e.g., example.com) across multiple subdomains (e.g., sales.example.com, engineering.example.com, etc.). These subdomains can be hosted on the same web server or on multiple web servers. If we assume that a browser can make only 6 concurrent connections to one hostname, and the web page contains embedded URLs from (say) 3 subdomains, the browser can now make 3 × 6 = 18 concurrent TCP connections instead of just 6, thereby increasing concurrency.

Experiment 4: Domain Sharding.
To experience the use of domain sharding, consider the web page pictures.html with 10 images, as described in [11]. Cluster the first 5 images in one subdomain (e.g., img1.myweb.com) and the remaining 5 images in another subdomain (e.g., img2.myweb.com). Make the changes to the web page as shown in Table 5. To make these hostnames resolvable, either update your DNS server to resolve these names or, more simply, add entries for img1.myweb.com and img2.myweb.com to the /etc/hosts file on the machine where the browser runs, as shown below. Experimentally determine whether domain sharding improves web performance.
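For example, if the web server runs at 192.168.1.10 (an illustrative address; substitute the IP address of your own server), the /etc/hosts entries would be:

192.168.1.10    img1.myweb.com
192.168.1.10    img2.myweb.com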

Table 5: Embedded objects with Domain Sharding
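As a sketch of the sharded page (the image file names here are illustrative), the embedded objects would look like:

<img src="http://img1.myweb.com/pic1.jpg">
...
<img src="http://img1.myweb.com/pic5.jpg">
<img src="http://img2.myweb.com/pic6.jpg">
...
<img src="http://img2.myweb.com/pic10.jpg">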

Resource In-lining

If a website contains multiple small images that cannot be easily agglomerated into sprites (e.g., because they are not rectilinear), the latency associated with multiple HTTP requests for individual files can be reduced by in-lining these resources directly into the HTML page, without any external references. The format for an in-lined resource (a data URI) is:

data:[<mediatype>][;base64],<data>

where mediatype can be image/jpeg or any other MIME type. An example is available at the web page http://rprustagi.com/accs/r-inline.htm. It should be noted that Base64 encoding (which uses the characters A–Z, a–z, 0–9, + and /, each encoding 6 bits of original data) increases the data size by about 33%. Thus, as the size of images increases, one needs to make a judgment call on whether the latency saved by in-lining outweighs the increased size, and hence download time, of the web page.
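For example, a small PNG image could be in-lined as follows (the encoded content is truncated for brevity; iVBORw0KGgo is the Base64 encoding of the PNG file signature):

<img src="data:image/png;base64,iVBORw0KGgo...">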


Experiment 5: Resource in-lining of images
To develop a better understanding of and experience with resource in-lining, the reader is encouraged to create a web page with multiple small images. Encode each image into Base64 format (using the Linux command base64) and incorporate the encoded content into the HTML page, as sketched below. Experimentally determine whether in-lining improves web performance.
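One possible workflow, assuming an image file named icon.png (a hypothetical name for illustration):

base64 -w0 icon.png > icon.b64    # -w0 disables line wrapping in the output
wc -c icon.png icon.b64           # compare original and encoded sizes (~33% larger)

The contents of icon.b64 can then be pasted into the page as <img src="data:image/png;base64,...">.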

Compression

A growing fraction of users access the Internet on mobile devices. These often have low-bandwidth connectivity, so download times play a significant role in web performance for mobile users. To save on download time, content can be compressed before being sent to the end-user device (where it must be decompressed before it can be rendered [3]). This requires HTTP requests to indicate what kind(s) of compression the user device supports. For example, sending the HTTP header "Accept-Encoding: gzip" in the request implies that the user device understands gzip compression. If the web server supports gzip compression, it will send compressed content, and the compression ratio will dictate the improvement in download time. If a user device supports multiple compression methods, it can list them all, separated by commas, e.g., "Accept-Encoding: gzip, deflate, br". Preference among methods can also be expressed with quality values (e.g., "gzip;q=1.0, deflate;q=0.5"), and the web server will pick a supported compression method accordingly.
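A typical request/response exchange looks like the following (the host name and sizes are illustrative):

GET / HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 5496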


Experiment 6: Compression as per user device and bandwidth.
Use the wget command with its command-line option to specify the desired header (by default, wget does not send any compression preference). Invoke wget with the option --header="Accept-Encoding: gzip" against your preferred website and note the size received; then invoke the same request without this header option, note the size received, and compare the download sizes. For example, invoking 'wget -d --no-check-certificate -O /tmp/index.html https://www.google.co.in' will fetch the content from Google without compression, and invoking 'wget -d --no-check-certificate --header="Accept-Encoding: gzip, deflate" -O /tmp/index.html.gz https://www.google.co.in' will fetch the content with compression. The former is likely to show a received size of about 15K ('/tmp/index.html' saved [14724]), and the latter a received size of about 5K ('/tmp/index.html.gz' saved [5496/5496]). Please note that not every website supports compression; a website that does not support it will send the content uncompressed, e.g., the author's website (rprustagi.com) does not support compression.
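Since wget stores the gzip-compressed response as-is, the received file (using the file names above) can be verified and viewed with standard tools:

file /tmp/index.html.gz           # should report "gzip compressed data"
zcat /tmp/index.html.gz | head    # decompress and view the first few lines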


Summary

The default web protocol in use today on the Internet is HTTP/1.1. We have discussed some of the key mechanisms used in this protocol, but performance optimization is not limited to optimizing protocol behaviour alone. A detailed analysis would also consider all the other layers of the networking stack. At the physical and link layers, one needs to consider issues such as bandwidth and latency, beacon management and frame collisions in WiFi, etc. HTTPS requests constitute 59% of web requests (Table 1), so one also needs to consider optimizations at the Transport Layer Security (TLS) level. Similarly, at the transport layer, TCP-level optimizations require an explicit understanding of the three-way handshake, slow start, and the congestion and flow control mechanisms, and of their impact on throughput.

HTTP/2 [16], the new version of the web protocol, was standardized in 2015 with the key design goal of making web applications faster. It does away with some of the workarounds used with HTTP/1.1 and introduces new mechanisms such as server push, multiplexed streams, and HTTP header compression to minimize header communication, while also addressing head-of-line blocking. We will examine this protocol in a subsequent article.


[3] Note that this imposes an additional computational burden on the mobile device, where battery life is also a limiting factor.
