
Problem
Let’s pretend you are a 21st century human and I, the writer, am a caveman. We have both been set the same challenge: hammer a nail into a piece of wood.
You, being the smart modern human, know this job needs a hammer. Now for myself, the caveman: I don’t know what a hammer is. I need instructions on how to recreate a hammer using the tools I do know about. Presumably tying a rock to a stick using bamboo leaves and hitting this ‘nail’.
In this example, you are a modern browser like Chrome and I am Internet Explorer.
The instructions I mentioned are what’s known as polyfilling: adding instructions to recreate existing features that a modern browser already has, but which are missing from older browsers.
Let’s talk about a second problem. Reading this you understand English. When you write in English, it reads like this article. But what if you had to write in Shakespearian English?
We like writing in modern English because it’s easier for us. Same applies in JavaScript. It gets updated with features that make it easier and simpler for us to write our code. These features could be shorter ways of writing programming logic, or prebuilt functions that help us when handling data.
They are usually given a name like ES2015, ES6+ and so on. Think of these as modern English in this scenario.
So now we know the programming language evolves so that we no longer have to write these weird old-timey commands. The problem is that old browsers want a Shakespearian translation of our commands.
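To make the “translation” concrete, here is a hedged sketch: the same logic written in modern ES2015+ syntax and in the older ES5 style a transpiler would produce (simplified for illustration, not actual compiler output).

```javascript
// Modern ES2015+: const, arrow function, template literal, Array.prototype.includes
const greet = (name) => `Hello, ${name}!`;
const hasApple = ['apple', 'pear'].includes('apple');

// Roughly what a transpiler emits for older browsers (simplified ES5):
var greetEs5 = function (name) {
  return 'Hello, ' + name + '!';
};
var hasAppleEs5 = ['apple', 'pear'].indexOf('apple') !== -1;

console.log(greet('IE11'));   // "Hello, IE11!"
console.log(greetEs5('IE11')); // "Hello, IE11!"
```

The ES5 version does the same thing with more characters, which is part of why transpiled bundles end up larger.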
Let’s say we have to support Internet Explorer 11. We know it doesn’t understand modern ES6+ JavaScript. We know it’s missing a lot of basic features and requires polyfilling.
So currently we write modern code and translate it down to a JS version IE will understand (ye olde English). We also add polyfills for things IE and Safari might be missing.
Sounds good until you realise we don’t have to do this for modern versions of Chrome/Firefox/Edge etc…
We are dumbing down our modern JS for all of these browsers even if they don’t need it.
We are including polyfills for things most of these browsers already understand. This adds extra kilobytes of JavaScript code which the end user never uses, so why ship it?
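To show what those extra kilobytes look like, here is a hedged, simplified illustration of a polyfill (not the exact polyfill we ship): a guarded fallback for `Array.prototype.includes`, a method IE11 lacks.

```javascript
// The 'caveman instructions': only define the method if the browser is missing it.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement, fromIndex) {
    // Simplified: real polyfills also handle NaN and sparse arrays.
    return this.indexOf(searchElement, fromIndex || 0) !== -1;
  };
}

console.log(['rock', 'stick'].includes('rock')); // true
```

A modern browser skips the guard entirely, yet it still had to download and parse those bytes, which is exactly the waste being described.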
Solution
Why don’t we serve different JavaScript bundles based on which browser the user has? A bundle is what we call the code we deliver to our user.
Implementation
The first option relies on the following script attributes: module and nomodule.
nomodule/module example
<script type="module" src="modern-bundle.js"></script>
<script nomodule src="legacy-bundle.js"></script>
Now Philip Walton has done an amazing job writing about this topic so I will link his post. Read Philip Walton’s post here.
But if you want a dead simple version, a browser which understands module will be able to understand ES2015+ features. It will also be smart enough to not run the nomodule script so that only the modern bundle executes.
The old browsers won’t know what to do with module and will run the nomodule one instead… in theory.
Why this didn’t work for us.
Both scripts ended up being downloaded on desktop. On desktop you will rarely face a limited data plan where this is an issue, but in general, the fewer requests and the less data we have to transfer to our user, the better. That said, this double-download bug could be fixed in the future; when that happens, solution one with nomodule and module will become the standard.
The second option, and the solution we opted for, is doing things server side: when a user requests our website, we make decisions based on what browser they use.
I opted for Bowser as the library of choice to read the useragent value. The useragent contains information about which device and browser version is requesting our website. This space is a bit controversial, as companies have used useragent values to track users (known as fingerprinting). Thankfully, modern browsers are trying to fix this issue, which is great. The downside is that this solution might become outdated in the near future, but having read about the solutions browsers have proposed for this problem, differential serving will still be possible.
Lastly to put you at ease, we only reference useragent for a browser version to send you a better website experience so any privacy improvements are welcomed by us. Read an article on browser fingerprinting.
Back to what happens in this solution: Bowser will check if the user’s browser can handle esmodules (modern English). If yes, we send them the modern JS bundle; otherwise we serve them our legacy bundle (Shakespeare time!).
It does this based on a configuration we set, stating which versions of a browser should be able to handle esmodules, and compares the user’s browser against these.
Example configuration: chrome: ‘>=61’ means Chrome version 61 or above, for the non-technical crowd. The function below will then check if the user’s browser matches our configuration.
bowser example
import bowser from 'bowser'

const es6SupportedBrowsers = {
  chrome: '>=61',
  safari: '>=11',
  firefox: '>=60',
}

export const supportsEs6Features = (useragent: string | undefined): boolean => {
  let isEs6Browser
  if (useragent) {
    const userBrowser = bowser.getParser(useragent)
    isEs6Browser = userBrowser.satisfies(es6SupportedBrowsers)
  }
  return isEs6Browser || false
}
- Is the user using Google Chrome?
- Is the version of Google Chrome 61 or higher?
If yes, we serve the modern bundle.
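Putting it together on the server, here is a hedged sketch of how that decision might route to a bundle. The bundle file names are hypothetical, and a naive Chrome version check stands in for Bowser’s satisfies() purely so the example is self-contained; real code should use a proper parser like Bowser.

```javascript
// Naive stand-in for Bowser's satisfies(): extract the Chrome major version.
// Hypothetical bundle names, for illustration only.
function chooseBundle(useragent) {
  const match = /Chrome\/(\d+)/.exec(useragent || '');
  const supportsEs6 = match !== null && Number(match[1]) >= 61;
  return supportsEs6 ? 'bundle.modern.js' : 'bundle.legacy.js';
}

const modernUa = 'Mozilla/5.0 ... Chrome/90.0.4430.93 Safari/537.36';
const ie11Ua = 'Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko';

console.log(chooseBundle(modernUa)); // bundle.modern.js
console.log(chooseBundle(ie11Ua));   // bundle.legacy.js
```

Note the fallback direction: an unknown or missing useragent gets the legacy bundle, because legacy code runs everywhere while modern code does not.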
Now that we have 2 different bundles, we moved our polyfills (caveman instructions) into a separate file. If a user is on an old browser, we run the code in that file before running our main app. If a user is on a modern browser, that file isn’t used or downloaded (as you can see in my terrible graphic below).

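As a hedged sketch of what each browser group ends up receiving (file names hypothetical), the server-rendered HTML might differ like this:

```javascript
// Build the script tags for each browser group. File names are hypothetical.
function scriptTagsFor(isModernBrowser) {
  if (isModernBrowser) {
    // Modern browsers: just the modern bundle, no polyfill download at all.
    return ['<script src="bundle.modern.js"></script>'];
  }
  // Legacy browsers: polyfills must load and run before the main app.
  return [
    '<script src="polyfills.js"></script>',
    '<script src="bundle.legacy.js"></script>',
  ];
}

console.log(scriptTagsFor(true).length);  // 1
console.log(scriptTagsFor(false).length); // 2
```

The ordering matters for the legacy case: the polyfill file must execute first so the main app can rely on the recreated features.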
Results
So let’s talk metrics.
The javascript bundle we used to serve was 316-317kb (kilobytes) gzipped.
I talked up differential serving, so let’s talk about how many kilobytes we saved by shipping modern esmodules code.
This change cut about 13kb from our bundle, roughly 4%. Combine it with conditional polyfilling and the modern browser bundle is a further 13kb lighter.
A total of 26kb, roughly 8% of our bundle is gone.
What does this mean for our users?
Firstly, they have to download less data, which on limited plans saves them money. Even with unlimited plans we should consider download speeds: on a slow 3G connection you can download roughly 100kb per second; on a fast 3G connection it’s roughly 200kb per second.
In our example on a slow 3G connection, our slightly improved bundle (288-290kb) should take just under 3 seconds to download.
Oversimplified: 100kb ≈ 1 second on a slow 3G connection.
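That back-of-the-envelope maths can be sketched out (throughput figures are the rough ones above; real connections vary):

```javascript
// Rough download-time estimate: size in kilobytes / throughput in kilobytes per second.
function downloadSeconds(sizeKb, kbPerSecond) {
  return sizeKb / kbPerSecond;
}

const SLOW_3G_KBPS = 100; // ~100kb per second, as above

console.log(downloadSeconds(290, SLOW_3G_KBPS)); // 2.9 -> the improved bundle, just under 3s
console.log(downloadSeconds(316, SLOW_3G_KBPS)); // 3.16 -> the old bundle
```

So the ~26kb saved translates to roughly a quarter of a second of download time on the slowest connections.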
There is a hidden improvement beyond the bundle size: the speed at which the browser can read and make sense of our code. As I mentioned earlier, we are no longer shipping old-timey Shakespearian code.
It turns out that translating it down to ES5 (Shakespeare) produces more lines of code, which is why this change saved 13kb (4%).
Infographic of how some key metrics changed comparing weekly Dareboost measurements. (Dareboost is a tool we use to measure and aggregate data to see how our page metrics change)

One metric we value is visually complete. This means 100% of what you see on your screen currently looks ready. This doesn’t mean that the page is fully loaded or complete; if you were to scroll down bits might be missing or you might not be able to click a button. But as far as the user is concerned the things they see look to be ready.
With that in mind, the visually complete metric has decreased for desktop by about 200ms. For our code, a 200ms decrease means this metric has improved by 10%.
We’ve had positive results on TBT (Total Blocking Time), decreasing by around 100ms, roughly 30% improvement across mobile and desktop.
Total Blocking Time shows us how long our code takes to be read and executed by the browser before it can move onto the next task it needs to do. We can really see the difference in serving modern code (Modern English) to the browser.
Another metric we improved by 200-300ms (10-15% faster) was Speed Index. This metric tells us how quickly the contents of a page are visibly populated.
Lastly, for Fully Loaded, meaning the page is now completely ready (you can click and interact with every element of the page), we managed to cut the time by about 100ms on high-end phones with good internet connections.
When we measure these metrics on low-end hardware like a Moto G4 with a slow 3G connection, these changes carry over.
For example, TBT becomes -300ms instead of -100ms. It might seem like we’re cutting more time, but lower-end hardware means reading our code takes longer.
Overall, the percentage changes match: the -300ms is still a 30% improvement, the same as the -100ms on a higher-end phone with a good connection.
Highlights
I want to highlight Total Blocking Time, as you can see in the infographic I gave an oversimplified description of what it does.
Total Blocking Time shows us how long our code takes to be read and executed by the browser before it can move onto the next task it needs to do.
That holds true; however, let’s say you had some code that takes 200ms to execute. Your Total Blocking Time would be 150ms, because only the portion of a task (or code in this example) beyond 50ms is considered to be blocking other tasks. If you have 4 tasks that take 50ms each, your TBT would be 0ms, even though together they take the same amount of time as 1 task taking 200ms.
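The arithmetic from that example, sketched as code (the 50ms threshold comes from the TBT definition):

```javascript
// Total Blocking Time: for each task, only the portion over 50ms counts as blocking.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .map((duration) => Math.max(0, duration - 50))
    .reduce((sum, blocking) => sum + blocking, 0);
}

console.log(totalBlockingTime([200]));            // 150 -> one 200ms task
console.log(totalBlockingTime([50, 50, 50, 50])); // 0   -> four 50ms tasks, nothing blocks
```

Both inputs represent 200ms of total work, yet they score very differently, which is exactly how TBT can be ‘gamed’ by splitting tasks.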
That’s how Total Blocking Time can be ‘gamed’ or decreased, but there is a reason I chose to highlight it and why it’s valid in this scenario.
The code we’re comparing did not split the work into more, shorter tasks. It’s the same code in a more modern syntax, without polyfills. Because nothing was split to run in different tasks, this is a direct comparison: the code works exactly as it did before. And we can see a 30% improvement across mobile and desktop in how quickly a user’s browser can read and execute our code, from just an 8% code size decrease.
Why we care about performance
Performance is ultimately user experience. Using tools like Lighthouse or field data collected through providers such as Dareboost allows us to track how the user experiences our page. But these tools also highlight issues that impact SEO and Google’s opinion of us (Google ranks our page on their Core Web Vitals; read more about Core Web Vitals and what they are).
Example of Core Web Vitals on mobile for the last month which uses real user data from Chrome UX Report.

You might have noticed I didn’t reference any of these in the infographic. Here’s why.
Largest Contentful Paint (LCP)
On a Rightmove property details page, the Largest Contentful Paint is usually the main image of the property. These times can therefore vary based on image size and the response time of the server that image is served from (think of lag in video games). I’m not optimising images with this change.
First Input Delay (FID)
I would have loved to measure this one, but Dareboost had this feature in beta and it didn’t look super reliable. That’s why Total Blocking Time was a good alternative to measure if there is code running preventing the user from interacting with the page.
Cumulative Layout Shift (CLS)
I’m not changing or influencing the layout with this change.
Learnings
Whilst these gains might seem marginal, in a larger application plagued by polyfills and legacy workarounds the results might be more impressive.
I inherited a pretty good FE app when I took over this project. This has allowed me to experiment with things like differential serving. And who knows, one day those 100ms differences will tick us over into the green in all mobile web vitals.
For now, this brings a reduced cost of JavaScript for our end users, mainly benefiting those on data-limited plans and slower connection speeds.