A fast, stable and responsive website keeps users engaged; a slow or janky one drives them away. Google’s Core Web Vitals initiative distils the user experience into a small set of measurable metrics covering loading speed, interactivity and visual stability. In this article, we’ll look at what the Core Web Vitals are, how to measure them, and why getting them in order matters for both your users and your search ranking.


Google plans to use Core Web Vitals as signals in its search ranking index. Originally planned for May 2021, the update has now been postponed into June. If your Web Vitals aren’t yet in order, now’s the time to act before the search changes arrive.

The Three Metrics

The wider Web Vitals project targets several facets of the web experience. For Core Web Vitals, Google focuses on three specific metrics. Poor scores on these vitals flag a site that’s delivering a frustrating user experience.

Here are the three metrics you need to be aware of:

Largest Contentful Paint (LCP) – This metric measures page loading performance. An LCP under 2.5 seconds is “good”; anything above 4.0 seconds is “poor”.

First Input Delay (FID) – First input delay measures the time taken for a page to become responsive to user interactions such as clicks, taps and key presses. A good site will be interactive within 100ms; a poor site takes longer than 300ms.

Cumulative Layout Shift (CLS) – CLS is a measure of the amount of “layout shift” that occurs while a page is loading. Layout shift occurs when asynchronously loaded page elements such as images and ad banners pop into view after the rest of the page, causing the content beneath to visibly jump down. This jarring effect can frustrate and confuse users.
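As a sketch, the thresholds above can be expressed as a simple classifier. The function names here are illustrative, and the middle band between “good” and “poor” is what Google labels “needs improvement”:

```javascript
// Classify LCP and FID measurements (in milliseconds) against the
// Core Web Vitals thresholds. Function names are illustrative.

function rateLCP(ms) {
  if (ms <= 2500) return "good";
  if (ms <= 4000) return "needs improvement";
  return "poor";
}

function rateFID(ms) {
  if (ms <= 100) return "good";
  if (ms <= 300) return "needs improvement";
  return "poor";
}

console.log(rateLCP(2100)); // → "good"
console.log(rateFID(350));  // → "poor"
```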

Optimising for these metrics improves the user experience by reducing loading time and on-page latency, while cutting down on layout shift enhances the visual consistency of your page. Each metric is easy to measure, and improving them should deliver immediate real-world benefits for your site.

Google’s recommended target is to hit the “good” thresholds on the 75th percentile of your page loads. A page will be considered to “pass” the Core Web Vitals tests if it can reach this level of conformance. Sites that consistently fall below the threshold at the 75th percentile could eventually find themselves ranking lower in search results, after the Google indexing change goes live.
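To make the 75th-percentile rule concrete, here’s a small sketch using the nearest-rank method. The function names and sample values are illustrative, not from any Google tooling:

```javascript
// Check whether a set of field samples "passes" a Core Web Vital:
// the 75th-percentile value must meet the "good" threshold.

// Nearest-rank 75th percentile of an array of numbers.
function percentile75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[rank];
}

function passesCoreWebVital(samples, goodThreshold) {
  return percentile75(samples) <= goodThreshold;
}

// Example: LCP samples in milliseconds from eight page loads,
// checked against the 2500ms "good" threshold.
const lcpSamples = [1800, 2100, 2300, 2400, 2600, 3900, 1500, 2200];
console.log(passesCoreWebVital(lcpSamples, 2500)); // → true
```

Note that one bad outlier (the 3900ms load) doesn’t fail the page, because the check only cares about the value three-quarters of users are at or below.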

Measuring Core Web Vitals

You’ve got several options available to help you get on top of your Web Vitals performance. The top choice for developers is the Lighthouse report available in Google Chrome’s F12 Developer Tools.

Head to your website in Chrome. Launch the Developer Tools and switch to the Lighthouse tab. Click the “Generate report” button to begin the Lighthouse analysis. It may take several seconds to complete.

Once the report loads, scroll down to the Performance section. Six metrics are displayed, including Largest Contentful Paint and Cumulative Layout Shift. First Input Delay can’t be measured in a lab test because it requires a real user interaction; Lighthouse approximates it with proxy metrics such as Total Blocking Time and Time to Interactive.

If any of these metrics are coloured orange or red, you may have work to do to improve your site’s position. Lighthouse will display suggestions below each issue. Some changes can be quick and easy wins that give you several extra points for a few minutes of work.

You should run Lighthouse twice: once in Desktop mode and again with the Mobile radio button selected. The device options appear on the Lighthouse tab’s landing page, alongside the “Generate report” button. Metrics are evaluated differently by device type, and Google search respects this segmentation too. If your mobile site performs better than your desktop experience, you might find your website starts to rank higher when viewed from a smartphone (or vice versa).

Several Google web management systems also expose Core Web Vitals data. You can view your metrics within the Chrome User Experience Report, PageSpeed Insights and your Google Search Console dashboard.

Checking these tools is important, even if you get good scores using Lighthouse. Google properties calculate your Web Vitals score from real anonymised user data, via the Chrome User Experience Report. It’s therefore important to check that Google’s stats actually align with the Lighthouse tests you run on your own machine.

Discrepancies can easily occur if you don’t test on hardware representative of your user base. By and large, developers run high-end machines that make light work of even the most complex website. This can give you an encouraging Lighthouse score that doesn’t reflect your real-world user experience. Google might produce a very different number if the majority of your users visit from a tired mid-range smartphone.

Adding Your Own Instrumentation

It’s not just Google’s tools that can measure the Core Web Vitals. The project also provides an npm library which you can integrate with your site’s JavaScript. This lets you programmatically measure Web Vitals performance. You can then send the collected data to your own analytics service.

The library exposes a measurement function for each of the Web Vitals (e.g. getLCP for Largest Contentful Paint). Calling the function starts measuring the metric. Each function accepts a callback that receives the value once the measurement completes.

It may take some time for the metric to report its data. Some metrics are dependent on the user actually interacting with the page. If the user never clicks or scrolls, the measurement won’t be made.

Don’t call each measurement function more than once in a single session. Each call registers a new performance observer; repeated calls won’t trigger a fresh measurement and could lead to excessive memory usage.

Your callback might be invoked multiple times. A change will be reported whenever the measurement value is updated. Some metrics continually monitor the page and will adjust themselves through its lifetime. Your function will be called each time a new value is determined.
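As a sketch of how this wiring might look in practice — note that the /analytics endpoint and payload shape here are my own assumptions, not part of the library:

```javascript
// Report Web Vitals measurements to your own analytics service.
// The /analytics endpoint and payload fields are illustrative assumptions.

// Flatten a web-vitals metric object into a payload for transport.
function toPayload(metric) {
  return {
    name: metric.name,   // e.g. "LCP", "FID" or "CLS"
    value: metric.value, // latest value for the metric
    id: metric.id,       // unique id for this page load
  };
}

// sendBeacon survives page unload; fetch with keepalive is a fallback.
function sendToAnalytics(metric) {
  const body = JSON.stringify(toPayload(metric));
  if (typeof navigator !== "undefined" && navigator.sendBeacon) {
    navigator.sendBeacon("/analytics", body);
  } else if (typeof fetch !== "undefined") {
    fetch("/analytics", { method: "POST", body, keepalive: true }).catch(() => {});
  }
}

// In the browser, wire each metric up once per page load; remember the
// callback may fire several times as values are updated:
//
//   import { getCLS, getFID, getLCP } from "web-vitals";
//   getCLS(sendToAnalytics);
//   getFID(sendToAnalytics);
//   getLCP(sendToAnalytics);
```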

Measuring With Browser APIs

Aside from the web-vitals npm package, you can use the underlying browser APIs directly if you want a hands-on approach to performance testing. This lets you fine-tune exactly how measurements are made, but risks drifting away from the Web Vitals spec.

The PerformanceObserver API is a browser mechanism that lets you hook into performance events occurring on the page; the web-vitals library uses it internally. You can attach your own performance observers to receive new events directly from the browser, with each observer given a type of entry to monitor.

This code snippet demonstrates how to set up your own measurement system for the Largest Contentful Paint metric. The callback passed to the PerformanceObserver receives a list of performance entries describing the state transitions occurring in the browser.
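A minimal version of such a snippet might look like this. The latestLcp helper is my own illustrative name; the observer calls are the standard browser API:

```javascript
// Observe Largest Contentful Paint via the PerformanceObserver API.

// The browser may report several LCP candidates as larger elements render;
// the last entry reported so far is the current LCP value.
function latestLcp(entries) {
  return entries.length > 0 ? entries[entries.length - 1].startTime : undefined;
}

// Browser-only wiring, guarded by feature detection so it's skipped in
// environments that don't support the largest-contentful-paint entry type.
if (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes &&
    PerformanceObserver.supportedEntryTypes.includes("largest-contentful-paint")) {
  const observer = new PerformanceObserver((entryList) => {
    console.log("LCP candidate (ms):", latestLcp(entryList.getEntries()));
  });
  // `buffered: true` replays entries that fired before the observer attached.
  observer.observe({ type: "largest-contentful-paint", buffered: true });
}
```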

Summary

Getting a good score on the Core Web Vitals indicates your site is optimised for the three fundamental aspects that make a good user experience. Reducing loading time, time to interactive and layout shift helps eliminate UX friction. This increases the chance that users will return to your site.

The current Core Web Vitals aren’t static. Additional metrics could be added over time, based on developer community feedback and real-world observation of common performance issues. The current three were chosen because they’re “relevant to all web pages” and have the largest overall impact on the user experience.