Wednesday, June 20

CSS clean-up @ build time

Here is a simple Maven goal that optimizes CSS delivery at build time, at low cost.

<!-- set path to CSS files -->
<property name="css.cleanup.src.dir" value="path_to_css_files" />

<goal name="cleanup-css" description="Remove comments and blank lines from CSS files">
  <echo message="Removing comments and blank lines from files in ${css.cleanup.src.dir}" />

  <!-- remove single and multi-line comments, see docs@ -->
  <replaceregexp byline="false" flags="gm">
    <regexp pattern="\/\*(.|\s)*?\*\/" />
    <substitution expression="" />
    <fileset dir="${css.cleanup.src.dir}">
      <include name="*.css" />
    </fileset>
  </replaceregexp>

  <!-- collapse runs of blank lines into one -->
  <replaceregexp byline="false" flags="gm">
    <regexp pattern="(\r?\n){2,}" />
    <substitution expression="\1" />
    <fileset dir="${css.cleanup.src.dir}">
      <include name="*.css" />
    </fileset>
  </replaceregexp>
</goal>
Is it really worth it?
I tested it on 5 CSS files (test1.css, test2.css, test3.css, test4.css, test5.css) of different sizes and obtained a respectable file-size reduction. As a bonus, dev comments are no longer shipped and exposed to the outside world.
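Outside the build, the effect of the two regexes can be sanity-checked with a quick Python sketch (the sample stylesheet below is made up for illustration):

```python
import re

def cleanup_css(text):
    """Mimic the two replaceregexp steps: strip comments, collapse blank lines."""
    # remove /* ... */ comments, including multi-line ones (non-greedy)
    text = re.sub(r"/\*(?:.|\s)*?\*/", "", text)
    # collapse runs of blank lines into a single newline
    text = re.sub(r"(\r?\n){2,}", "\n", text)
    return text

sample = "/* dev note */\nbody { color: red; }\n\n\n/* multi\n   line */\nh1 { margin: 0; }\n"
print(cleanup_css(sample))
```

Both comments disappear and the blank-line run collapses, leaving only the two rules.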

Below is the output of the Python stats script I wrote to analyse situations like this (testX_c.css is the compressed version of the original file):

>>> describe(cmpfilesizelist(d, listfiles(d, "*.css")))

file test1.css is 4.57% larger than test1_c.css
file test2.css is 33.40% larger than test2_c.css
file test3.css is 21.93% larger than test3_c.css
file test4.css is 34.55% larger than test4_c.css
file test5.css is 36.54% larger than test5_c.css

('Sample Size: 5', 'Min/Max: ', (4.5725935634192512, 36.543124350536885), 'Mean: 26.20', 'Median: 33.40', 'Std. Dev: 13.37', 'Std. Error: 5.98')
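The helpers listfiles, cmpfilesizelist, and describe used above could be implemented along these lines; the original script is not shown in the post, so the exact behaviour here is an assumption reconstructed from the output:

```python
import fnmatch
import math
import os

def listfiles(d, pattern):
    """List files in directory d matching a glob pattern."""
    return sorted(f for f in os.listdir(d) if fnmatch.fnmatch(f, pattern))

def cmpfilesizelist(d, files):
    """Compare each original file's size against its _c (compressed) version.

    Returns the list of percentage differences.
    """
    pcts = []
    for f in files:
        if f.endswith("_c.css"):  # skip the compressed copies themselves
            continue
        comp = f.replace(".css", "_c.css")
        orig_size = os.path.getsize(os.path.join(d, f))
        comp_size = os.path.getsize(os.path.join(d, comp))
        pct = (orig_size - comp_size) * 100.0 / comp_size
        print("file %s is %.2f%% larger than %s" % (f, pct, comp))
        pcts.append(pct)
    return pcts

def describe(pcts):
    """Basic descriptive statistics over a list of percentages."""
    n = len(pcts)
    mean = sum(pcts) / n
    s = sorted(pcts)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0
    var = sum((x - mean) ** 2 for x in pcts) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    return ("Sample Size: %d" % n, "Min/Max: ", (min(pcts), max(pcts)),
            "Mean: %.2f" % mean, "Median: %.2f" % median,
            "Std. Dev: %.2f" % sd, "Std. Error: %.2f" % (sd / math.sqrt(n)))
```

Feeding describe the five percentages listed above reproduces the mean (26.20) and median (33.40) shown in the session.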

The compressed files also validated on the W3C CSS validator.

Monday, June 18

Measuring client-side performance of Web-apps

About 80% of a Web application's loading time is spent in the browser. Since users prefer faster sites, the user experience is largely shaped by what happens in the browser.
Thus, it is possible to improve the user experience by breaking down, measuring, and optimizing browser-side activity.

Enter Page Detailer, a graphical tool that measures client side performance of Web pages.
Page Detailer assesses performance from the client's perspective by showing how the page was delivered to the browser. It provides detailed graphical and tabular views about the timing, size, data flow, and identity of each item in a page, shown in the order started by the browser.

It has a very simple interface and is very easy to use: it will automatically capture, time, and plot calls made by the browser.

It is very useful to:

  • track number of items requested by page (how many files? From how many different servers?)
  • track response time per request/item
  • track connection time per request/item (loading many small files from too many servers? Requesting too many small files?)
  • check overall page size and load time
  • check page structure and data flow
  • optimize organization of content
  • compare page load time using different transport mechanisms
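Page Detailer does this capture transparently at the network layer. For a very rough feel of the per-request timings it reports, a few page items can be fetched and timed by hand; this sketch is not Page Detailer, and the URLs are placeholders:

```python
import time
import urllib.request

def time_request(url):
    """Fetch a single page item and return (size in bytes, elapsed seconds)."""
    start = time.time()
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read()
    return len(body), time.time() - start

if __name__ == "__main__":
    # placeholder item list; a real page would have HTML, CSS, scripts, images
    for url in ["http://example.com/", "http://example.com/style.css"]:
        try:
            size, secs = time_request(url)
            print("%-40s %7d bytes  %.3f s" % (url, size, secs))
        except Exception as e:
            print("%-40s failed: %s" % (url, e))
```

Unlike Page Detailer, this only sees sequential whole-request timings, not connection setup, parallelism, or the order in which the browser actually issues requests.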

The captured data does not include the browser's rendering time, and does not separately time any intelligent use of lazy-loading techniques. It is therefore not an exact mirror of the end-user experience.

There are trade-offs between client-side performance optimization and improvements to the (holistic) user experience, e.g. uncompressed items 'travel' slower but render faster in the browser, and vice versa.

Download from IBM alphaWorks @

Similar tools

  • Web Page Analyzer - Web-based Website performance tool and Web page speed analysis
  • WebWait - simple Web-based website timer: type a URL and see how long it takes to load; useful to benchmark a website or test the speed of a web connection.
  • Pingdom - Web-based monitoring suite which also tests the load time of a web page including all its objects.
  • eValid - testing & analysis suite with recording and playback options.