
Experiential Learning of Networking Technologies: Understanding Web Performance

In this article, we will discuss ways to measure web performance, and we will describe a few mechanisms to improve it. The reader must keep in mind that the list of mechanisms discussed here is by no means exhaustive. A complete understanding would span multiple books and would exploit various features of the evolving HTTP protocol [2] (e.g., caching, connection reuse, content encoding, cookies), aspects of TCP behavior (e.g., flow and congestion control, connection setup and teardown, TCP window size), as well as browser characteristics (including preference settings).

Measuring Web Performance


A crucial first step in web performance analysis is to understand how long downloading and rendering a URL takes. An excellent analysis is provided by the website http://webpagetest.org, which gives a detailed breakdown of each component and the overall response time for any website when given its URL [5]. By default, this analysis is based on averaging data from 3 URL accesses, but this can be changed as per the user's preference. Further, the analysis is stored in the site's history and can be accessed later, even after a few months. As an example, the analysis for the website (http://acc.digital) hosting this journal article [6] is presented as a Waterfall model in Figure 2. Modern browsers such as Chrome, Firefox, and Safari also provide some form of waterfall analysis. Details of carrying out such an analysis locally in Chrome, via the Network tab under Developer Tools, are available at [7].
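For a quick local measurement, one can also read the browser's Navigation Timing data from a small inline script. The following is a minimal sketch of such a page (the page being timed is the one containing the script; the reported values appear in the browser console):

    <!DOCTYPE html>
    <html>
    <head><title>Page load time measurement sketch</title></head>
    <body>
    <script>
    // After the load event, query the Navigation Timing entry for this
    // page and report how long fetching and rendering took. The timeout
    // ensures loadEventEnd has been recorded before we read it.
    window.addEventListener("load", function () {
      setTimeout(function () {
        var nav = performance.getEntriesByType("navigation")[0];
        // responseStart: first byte received; loadEventEnd: load complete.
        // Both are in milliseconds relative to the start of navigation.
        console.log("Time to first byte: " + nav.responseStart.toFixed(1) + " ms");
        console.log("Page load time:     " + nav.loadEventEnd.toFixed(1) + " ms");
      }, 0);
    });
    </script>
    </body>
    </html>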

Figure 2: Waterfall model for http://acc.digital

Analysis of http://acc.digital shows that loading the website's main page issues a total of 68 URL requests and makes 18 TCP connections. Based on the data provided in this analysis, a timeline of key events is presented in Figure 3.

Figure 3: Timeline of events when accessing http://acc.digital

This analysis helps us understand exactly where time is being spent, and these parts can be the initial targets of optimization. For this website, the above analysis reveals that a substantial fraction of the overall time (from about 2.5s to about 7.0s) is due to images. Drilling more deeply, we find that across all three test accesses, URL request number 47 took about 4s to download 532KB and URL request number 62 took about 2.7s to download 252KB. These URLs correspond to large images from a third-party domain (not belonging to acc.digital). By optimizing the time for downloading these resources, the overall web page response time can clearly be improved.

Another interesting implication of this analysis is evident by carefully examining the bandwidth analysis in the Connection View (see Figure 4).

Figure 4: Connection View – Bandwidth utilization

This analysis reveals that for the initial 1.6s, the available bandwidth (5Mbps) was not being used at all, whereas from 1.6s to 6.0s it was almost fully used, from 6.0s to 6.6s it was about 50% used, and lastly from 6.6s to 7.5s it was once again nearly unused. This indicates that increasing bandwidth (e.g., to 10Mbps) would have essentially no impact during the first 1.6s and the last 0.9s. In other words, about one third of the total time (2.5s out of 7.5s) is agnostic to the bandwidth of the internet connection. This is consistent with the findings of a study by Belshe [8], which showed that increasing the bandwidth of a network connection is far less likely to significantly improve web performance than decreasing network latency. The latter brings a near-linear improvement in page load time (PLT), i.e., PLT decreases linearly as latency decreases, whereas increasing bandwidth beyond 3Mbps has very little impact on PLT for typical pages.

Experiment 1: Analyzing web performance

Analyze the performance of a web page of interest to you with http://webpagetest.org and identify opportunities for optimizing specific components for better web performance. As an added exercise, drill deeper into the overall and component-wise analysis as we have illustrated.

Web performance improvement is not limited to lowering the time for downloading and rendering content; it involves several other techniques, including HTTP protocol optimizations, judicious use of TCP connections (e.g., using multiple connections concurrently), browser performance optimizations, prioritized URL fetching, caching, speculative optimization (e.g., pre-fetching, DNS pre-resolution), network optimizations, content compression, etc. Steve Souders has summarized the key techniques into 14 rules to help improve web performance [9] (with additional detailed explanations posted in the Yahoo Developer Network forums [10]). We discussed some of these techniques (e.g., persistent HTTP connections, concurrent connections, caching, partial content delivery using chunked transfer encoding) in our previous article [11]. We discuss a few more of these techniques, as well as others, in the subsequent sections.

Prefetching Content

As we have seen, modern web pages typically contain several other URLs, and many of these refer to other host/domain names. The key idea of prefetching is that, instead of waiting for the user to click on one of these links, the browser pre-resolves these host/domain names to IP addresses using DNS. For even better performance, it can also establish TCP connections and even prefetch content from these URLs. Thus, by the time the user clicks on such a link, the content is already available in the browser cache and can be served instantly, resulting in a very satisfactory user experience. Naturally, if the user does not click these links, prefetching becomes wasted effort.

In the Chrome browser, prefetching can be enabled via the Privacy and Security tab under Advanced Settings, as shown in Figure 5.

Figure 5: Enabling prefetching in Chrome

As an exercise, the reader is encouraged to examine and analyze the web traffic (e.g., using Wireshark [7]) for evidence of prefetching while accessing a chosen website. Note that such predictive behavior is browser dependent, and one may not always see the prefetched contents.

As described above, prefetching is enabled by the user's browser setting. The web application developer can also trigger prefetching via the web-page contents. As an example, the URLs http://www.rprustagi.com/index.html and http://www.rprustagi.com/accs/prefetch.html [10] differ only in the text highlighted in bold font in Table 2.

Table 2: HTML content to ensure prefetch (changes in bold font)

The two pages are visually identical – the difference between them lies in subsequent navigation. Clicking on any of the links will provide a better user experience on the latter page. The first two changes inform the browser to pre-resolve the DNS names www.iisc.ac.in and www.cse.iitd.ernet.in. The next change asks the browser to prefetch the PDF file and store it in the browser cache. The final change asks the browser not only to prefetch the content but to prerender it in the browser's memory. The performance improvement from DNS pre-resolution alone may be imperceptible, but it does help.
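A minimal sketch of the kind of <head> markup these changes amount to is shown below (the host names match the text; the PDF and page file names are illustrative):

    <head>
      <!-- Pre-resolve the DNS names of domains that links on this page
           point to, so that name lookup is done before any click -->
      <link rel="dns-prefetch" href="//www.iisc.ac.in">
      <link rel="dns-prefetch" href="//www.cse.iitd.ernet.in">
      <!-- Fetch the PDF in the background and keep it in the cache -->
      <link rel="prefetch" href="http://www.rprustagi.com/accs/example.pdf">
      <!-- Fetch and fully render the page in memory before it is visited -->
      <link rel="prerender" href="http://www.rprustagi.com/accs/next.html">
    </head>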

Experiment 2: Prefetching contents
Develop a web page that incorporates the HTML tag <link> and the attribute "rel" with various prefetch values. Analyze the web traffic using a Wireshark capture and verify the impact on web performance with and without prefetching.

Image Sprites

Web pages of highly interactive websites (e.g., e-commerce sites) include several images. Many of these are small images used to create aesthetic navigation menu icons. In a naïve design, each of these images would be a separate file that is downloaded individually. While each file is individually small, the latency of initiating each download contributes significantly to the overall delay of the web page. The key idea of an image sprite is to combine multiple (say N) images into a single image, replacing the latency of N HTTP requests with that of a single HTTP request. The image sprite is logically divided into multiple parts, each identified by its location coordinates and size. Taking the example of the 9 images from the HTTP Persistent Connection exercise described in [11], we combine these 9 images into the single sprite shown in Figure 6.

Figure 6: An image sprite consisting of 9 images

CSS stylesheets can now be used to extract individual portions of the sprite for separate display as needed. The trick is to define the main source for each image as a transparent empty image (of size 1×1 pixel) using the src attribute of the HTML <img> tag, and to specify the sprite image as a background image. The starting coordinates of each image within the sprite, as well as its size (i.e., its height and width in pixels), together with an identifier, are specified in the style sheet, as shown in Table 3.

Table 3 : Style sheet specifying individual images.
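A minimal sketch of such a style sheet follows (the class names, coordinates, and sizes are illustrative, assuming the combined image is stored as sprite.png):

    /* All icons share the sprite as their background image. */
    img.sprite {
        background-image: url("sprite.png");
        background-repeat: no-repeat;
    }
    /* Each identifier selects one region of the sprite by its
       offset (background-position) and size (width/height). */
    img.img1 {
        width: 100px; height: 100px;
        background-position: 0 0;        /* first image: top-left corner */
    }
    img.img2 {
        width: 100px; height: 100px;
        background-position: -100px 0;   /* second image: 100px to the right */
    }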

Finally, the web page uses these identifiers in HTML <img> tags to render specific segments of the sprite, as shown in Table 4.

Table 4: Web page content using image sprites
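A corresponding minimal sketch of such markup is shown below (trans.gif, an illustrative name, is the 1×1 transparent placeholder image mentioned above):

    <!-- The src is only a 1x1 transparent placeholder; the visible
         pixels come from the sprite set as the CSS background image. -->
    <img class="sprite img1" src="trans.gif" alt="first image">
    <img class="sprite img2" src="trans.gif" alt="second image">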

The code in Table 4 is used in the URL http://rprustagi.com/accs/sprite.htm, and is rendered as expected by the browser as shown in Figure 7.

Figure 7: A web page displaying only two images.


Experiment 3: Image Sprites
As an exercise, the reader should develop a web page with multiple images and compare the web performance of two versions of the site: one with each image in an individual file, and one with all images combined into a single sprite.


[2] HTTP has evolved from the initial version 0.9 (unofficial) to 1.0, and then to HTTP/1.1 (the default version today). The most recent version (HTTP/2) includes additional features, but we will not discuss these in this article.

