
Experiential Learning of Networking Technologies: Understanding Web Performance

Authors


Ram P. Rustagi (rprustagi@cavisson.com)

Mentor – Technology at Cavisson Systems

Viraj Kumar (viraj.kumar@pes.edu)

Department of Computer Science and Engineering, PES University


Abstract


It is almost impossible to overstate the importance of the internet in transforming every aspect of modern life. The number of internet users continues to grow, and users increasingly expect immediate responses while interacting with web content. Any perceived delay is off-putting for users and can result in loss of business and revenue for the organization offering the web service. Thus, optimizing web delivery is an increasingly important challenge. In this article we discuss several modern best practices to improve web performance. As always, we design a series of experiments to help readers gain hands-on experience in web performance enhancement and better understand the underlying mechanisms. The performance mechanisms discussed here are primarily applicable to the HTTP/1.1 protocol, which is the predominant web protocol in use today.

NOTE: Several readers have requested access to the source code described in this series of articles. We are delighted at this response from our readers, and we will be making all the code for this article available at this link: https://github.com/rprustagi/EL-Web-Performance-I.git. The code and example samples for earlier articles will remain accessible at https://github.com/rprustagi/Experiential-Learning.git.

Introduction


As internet and web browser technology has matured, web content has evolved from simple HTML pages in the 1990s, to rich graphical content in the early 2000s, to the interactive multimedia content of today. As of February 2018, a single webpage access typically downloads about 3.8MB of content [1], whose complex composition is detailed in Table 1. This complexity is growing rapidly: in January 2011, a typical page was just 800KB in size and involved only 77 URL requests.

Table 1: A typical web page (February 2018) [1] (figures are averages across surveyed pages)

Sl.No.  Web page component                   Value
1       Number of URL requests               108
2       Total transfer size                  3.8MB
3       Number of HTML requests              10
4       Transfer size of HTML content        72KB
5       Number of JavaScript requests        25
6       Transfer size of JavaScript          508KB
7       Number of CSS requests               7.5
8       Transfer size of CSS content         99KB
9       Number of images                     55
10      Transfer size of images              1.8MB
11      Number of font requests              3.3
12      Transfer size of fonts               119KB
13      Number of TCP connections            32
14      Number of DOM elements               995
15      Number of domains                    20
16      Max requests to a single domain      52
17      Cacheable resources                  51%
18      Sites using Google API (libraries)   20%
19      Sites with custom fonts              71%
20      Fraction of compressed responses     71%
21      Fraction of requests over HTTPS      59%
22      Web pages with errors (4xx, 5xx)     30%
23      Web pages with 3xx redirects         89%
24      Sites hosting content on a CDN       23%
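Readers can gather similar (if rougher) statistics for any page of interest. The sketch below is our own illustration, not the HTTP Archive methodology: it assumes Python 3 with the third-party requests package installed (pip install requests), downloads a page, scans its static HTML for images, scripts and stylesheets, and reports request counts and transfer sizes. Note that resources fetched dynamically by JavaScript will not be counted.

# count_page_resources.py - a rough, do-it-yourself counterpart to Table 1.
# Assumptions: Python 3 with the third-party 'requests' package installed.
# Only resources named in the static HTML are seen; objects fetched
# dynamically by JavaScript are not counted.

import sys
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests

class ResourceCollector(HTMLParser):
    """Collect URLs of images, scripts and stylesheets from a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.resources = {"images": [], "scripts": [], "stylesheets": []}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and attrs.get("src"):
            self.resources["images"].append(urljoin(self.base_url, attrs["src"]))
        elif tag == "script" and attrs.get("src"):
            self.resources["scripts"].append(urljoin(self.base_url, attrs["src"]))
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.resources["stylesheets"].append(urljoin(self.base_url, attrs["href"]))

def main(url):
    page = requests.get(url)
    print("HTML transfer size: %d bytes" % len(page.content))
    collector = ResourceCollector(url)
    collector.feed(page.text)
    for kind, urls in collector.resources.items():
        total = 0
        for u in urls:
            try:
                total += len(requests.get(u).content)
            except requests.RequestException:
                pass  # skip unreachable resources
        print("%s: %d requests, %d bytes" % (kind, len(urls), total))

if __name__ == "__main__":
    main(sys.argv[1])  # e.g. python3 count_page_resources.py http://example.com/

Running this against a few popular sites and comparing the output with the averages in Table 1 is an instructive first experiment.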

The time taken to view a web page in the browser – the Page Load Time¹ (PLT) – naturally depends on the user’s internet access speed, but it is crucial to understand that this is not the whole story. In the 1990s, the typical user’s bandwidth was a few Kbps, and the PLT for a simple page (typically a single HTML file) was dominated by the time it took to download the file. This was just a function of the file size and the bandwidth – the time taken to render the HTML was negligible. In contrast, many users today have access to bandwidth in excess of 10 Mbps (and more than 100 Mbps in advanced countries), but loading a modern web page tasks the browser with downloading multiple embedded objects (images, graphics, style sheets, etc.) while managing multiple concurrent TCP connections, laying out the page contents, and executing scripts (e.g., JavaScript programs) to make the page interactive. This process is shown in Figure 1, and described in greater detail by Grigorik [2].

Figure 1: CSS, HTML, JavaScript processing pipeline
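Before examining rendering, readers can experience the download side of this pipeline directly. The sketch below (a minimal illustration assuming Python 3 with the requests package; the URL list is a placeholder to be replaced with the embedded objects of a page being studied) times sequential downloads against concurrent ones, loosely mimicking a browser that opens multiple TCP connections per host.

# fetch_timing.py - compare sequential vs. concurrent downloads of a page's
# embedded objects, loosely mimicking a browser's use of multiple TCP
# connections. Assumes Python 3 with 'requests' installed; the URL list
# below is a placeholder - substitute objects from a real page.

import time
from concurrent.futures import ThreadPoolExecutor

import requests

URLS = [
    "http://example.com/",           # placeholder objects;
    "http://example.com/style.css",  # replace with resources of a page
    "http://example.com/app.js",     # you want to study
    "http://example.com/logo.png",
]

def fetch(url):
    return len(requests.get(url).content)

# Sequential: one object after another, as a single-connection client would.
start = time.perf_counter()
for url in URLS:
    fetch(url)
print("sequential: %.3f s" % (time.perf_counter() - start))

# Concurrent: several requests in flight at once, as browsers do
# (typically up to 6 parallel connections per host with HTTP/1.1).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=6) as pool:
    list(pool.map(fetch, URLS))
print("concurrent: %.3f s" % (time.perf_counter() - start))

On high-latency links the concurrent version typically finishes far sooner, which is precisely why HTTP/1.1 browsers open several parallel connections per host.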


Rendering a document requires parsing the HTML into the Document Object Model (DOM) and the CSS into the CSS Object Model (CSSOM). JavaScript execution can block this parsing, which delays the rendering engine and in turn increases the PLT. Thus, while download times remain important, an increasing fraction of the PLT is the time spent by the browser in preparing the page before users can interact with the downloaded content.
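A small experiment makes this blocking behaviour visible. The following sketch uses only the Python 3 standard library to serve, on localhost port 8000, a test page whose external script is deliberately delayed at the server. Loading http://localhost:8000/ in a browser, readers should observe (in most browsers) that the content following the script tag does not render until the script arrives.

# blocking_script_demo.py - serve a page whose external script is delayed
# by the server, so that the effect of script execution blocking DOM
# construction can be observed in a browser. A sketch using only the
# Python 3 standard library; visit http://localhost:8000/ after starting it.

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<html><head><title>Blocking script demo</title></head>
<body>
<p>This paragraph appears immediately.</p>
<script src="/slow.js"></script>
<p>This paragraph waits for slow.js.</p>
</body></html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/slow.js":
            time.sleep(5)  # simulate a slow script download
            body, ctype = b"console.log('slow.js loaded');", "application/javascript"
        else:
            body, ctype = PAGE, "text/html"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), Handler).serve_forever()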

For e-commerce websites, lowering the PLT translates quite directly into increased revenues [3]. A study shows that users react very favorably to web pages that render in less than 100ms, and begin to perceive delays when this rises to between 100ms and 300ms [4]. Users appear willing to accept page rendering times between 300ms and 1s, but they tend to lose focus when the delay is between 1s and 10s. Delays beyond 10s are very likely to cause frustration and page abandonment, and to drive users to competitor websites. Thus, web performance is crucial in meeting business goals, and is an increasingly important metric to measure accurately.



¹ Informally, the time until the loading spinner stops spinning in the browser.

