Lighthouse performance metrics
Performance is a measure of how quickly a browser can assemble a webpage.
Lighthouse uses the Chromium web browser to build pages and runs tests on them as they load.
The tool is open-source (i.e. maintained by the community and free to use).
Each audit falls into one of five categories:
- Performance.
- Accessibility.
- Best practices.
- SEO.
- Progressive Web App.
Here, "Lighthouse" refers to the series of tests executed by the eponymous GitHub repo, regardless of the method of execution.
Lighthouse and Core Web Vitals:
- The Chromium project announced a set of three metrics with which the Google-backed open-source browser measures performance.
- These metrics, known as Web Vitals, are part of a Google initiative designed to provide unified guidance for quality signals.
- The goal of these metrics is to measure web performance in a user-centred manner.
- Lighthouse v6 was built around a modified version of these Core Web Vitals.
- With the release of Chrome 84, Lighthouse v6's unified metrics were adopted across Google products.
- The Chrome DevTools Audits panel was renamed Lighthouse.
- PageSpeed Insights and Google Search Console also reflect these unified metrics.
- This unified vision sets consistent goals across tools.
- The three metrics prescribed by Core Web Vitals are part of Lighthouse performance scoring.
- Largest Contentful Paint, Total Blocking Time, and Cumulative Layout Shift make up 70% of Lighthouse's weighted performance score.
- These are the same metrics, but measured from a single page load rather than aggregated from real page loads around the world.
- For real users, how quickly a page loads depends on factors such as their network connection, their device's processing power, and their physical distance from the site's servers.
- Lighthouse performance data does not account for all of these factors.
- Instead, the tool emulates a mid-range device and throttles the CPU to mimic an average user.
- These are lab tests collected in a controlled environment with a pre-defined device and network settings.
- Lab data can help with debugging performance issues.
- That doesn't mean the experience of an emulated machine in a controlled environment matches the experiences of real humans in the wild.
- The good news is that there is no need to choose between Lighthouse and Core Web Vitals.
- They are designed as part of a single workflow.
- Always start with field data from the Chrome User Experience Report to identify issues affecting real users (a sample query is sketched after this list).
- Then leverage Lighthouse's expanded testing capabilities to identify the code causing the problem.
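To make that first step concrete, here is a minimal sketch that pulls field data from the Chrome UX Report (CrUX) API for a single URL. The API key, page URL, and form factor are placeholders, and it assumes Node 18+ for the built-in fetch.

```js
// Minimal sketch: query the CrUX API for a page's 75th-percentile field LCP.
// The API key and URL below are placeholders.
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function fetchFieldLcp(pageUrl, apiKey) {
  const response = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: pageUrl, formFactor: 'PHONE' }),
  });
  const { record } = await response.json();
  // p75 LCP (in milliseconds) experienced by real Chrome users on this URL.
  return record.metrics.largest_contentful_paint.percentiles.p75;
}

fetchFieldLcp('https://example.com/', process.env.CRUX_API_KEY)
  .then((p75) => console.log(`Field LCP (p75): ${p75} ms`))
  .catch(console.error);
```

If the field numbers look healthy, lab-only issues are a lower priority; if they don't, Lighthouse is where to dig for the cause.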
Calculation of Lighthouse performance metrics:
- The Lighthouse performance score is made up of six metrics, each of which contributes a weighted percentage to the total performance score.
- Browser extensions, internet connection, A/B tests, or ads displayed during a particular page load can also affect the score.
- To learn more, see the documentation on performance testing variability.
Largest Contentful Paint (LCP):
- Represents: the user's perceived loading experience.
- Lighthouse performance score weighting: 25%.
- Measures the point on the page load timeline when the largest image or text block of the page appears in the viewport.
- Lighthouse captures LCP data from Chrome’s tracing tool.
LCP scoring:
- Goal: Achieve LCP in under 2.5 seconds.
Elements that can be part of LCP:
- Text.
- Images.
- Videos.
- Background images.
How a page's LCP is calculated:
- LCP usually varies depending on the page template.
- Measure a few pages that use the same template to determine the typical LCP element.
- Lighthouse reports the exact HTML of the LCP element, but it is also useful to know the node when communicating with developers.
- The node stays constant for a template, but the exact on-page image or text may vary depending on the content the template renders.
To identify LCP using Chrome DevTools:
- Open the page in Chrome.
- Navigate to the Performance panel of DevTools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
- Hover over the LCP marker in the Timings section.
- The element(s) associated with LCP are listed in the Related Node field.
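For a script-based alternative to the DevTools steps above, a PerformanceObserver can surface the same LCP element directly in the console; this is a minimal sketch using the standard largest-contentful-paint entry type.

```js
// Minimal sketch: log the current LCP candidate and the DOM node that produced it.
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const latest = entries[entries.length - 1]; // the most recent (largest) candidate
  console.log('LCP:', Math.round(latest.startTime), 'ms', latest.element);
}).observe({ type: 'largest-contentful-paint', buffered: true });
```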
Poor LCP usually comes from four problems:
- Slow server response times.
- Render-blocking JavaScript and CSS.
- Resource load times.
- Client-side rendering.
To fix poor LCP:
If the reason is slow server response time:
- Optimize the server.
- Route users to a nearby CDN.
- Cache assets.
- Serve HTML pages cache-first.
- Establish third-party connections early.
If the reason is render-blocking JavaScript and CSS:
- Minify CSS.
- Defer non-critical CSS (a deferral sketch follows this list).
- Inline critical CSS.
- Minify and compress JavaScript files.
- Defer unused JavaScript.
- Minimize unused polyfills.
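As a minimal sketch of the "defer non-critical CSS" item above, the snippet below appends a stylesheet after the load event so it never blocks first paint; the /css/non-critical.css path is a placeholder.

```js
// Minimal sketch: load a non-critical stylesheet after the page has finished loading,
// so it does not block rendering of above-the-fold content.
function loadDeferredStylesheet(href) {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href; // placeholder path
  document.head.appendChild(link);
}

window.addEventListener('load', () => loadDeferredStylesheet('/css/non-critical.css'));
```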
If the reason is resource load times:
- Optimize and compress images.
- Preload important resources.
- Compress text files.
- Deliver different assets based on the network connection (adaptive serving).
- Cache assets using a service worker.
If the reason is client-side rendering:
- Minimize critical JavaScript.
- Use a different rendering strategy, such as server-side rendering or pre-rendering.
Resources to improve LCP:
- Largest Contentful Paint (LCP), web.dev
- Optimize Largest Contentful Paint, web.dev
- Lighthouse: Largest Contentful Paint, web.dev
Total Blocking Time (TBT):
- Represents: responsiveness to user input.
- Lighthouse performance score weighting: 30%.
- TBT measures the time between First Contentful Paint and Time to Interactive.
- TBT is the lab equivalent of First Input Delay (FID) – the field data used in the Chrome User Experience Report and in Google's upcoming page experience ranking signal.
- TBT totals the time from tasks that take more than 50 ms to complete on the main thread. If a task takes 80 ms to execute, 30 ms counts toward TBT; if a task takes 45 ms, 0 ms is added (a worked example follows this list).
- Is Total Blocking Time a Core Web Vital? Yes – it is the lab-data equivalent of First Input Delay (FID).
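To make the arithmetic above explicit, here is a small worked example; the helper function and sample durations are illustrative only.

```js
// TBT sums the portion of each main-thread task that exceeds 50 ms.
const BLOCKING_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .filter((duration) => duration > BLOCKING_THRESHOLD_MS)
    .reduce((sum, duration) => sum + (duration - BLOCKING_THRESHOLD_MS), 0);
}

// 80 ms contributes 30 ms, 45 ms contributes 0 ms, 120 ms contributes 70 ms.
console.log(totalBlockingTime([80, 45, 120])); // 100
```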
TBT scoring:
- Goal: Achieve a TBT score of less than 300 milliseconds.
- First Input Delay, the field-data counterpart of TBT, has different thresholds.
Long tasks and Total Blocking Time:
- TBT measures long tasks – those that take more than 50ms.
- When a browser loads a site, there is essentially a single queue of scripts waiting to execute.
- Any input from the user goes into that same queue.
- When the browser cannot respond to user input because other tasks are running, the user perceives this as lag.
- Essentially, long tasks are like that person at your favourite coffee shop who takes forever to order a drink.
- Just like an order for a 2% venti four-pump vanilla, five-pump mocha, full-fat, no-foam latte, long tasks are a major source of bad experiences.
- The heavier a page's JavaScript, the more it contributes to TBT.
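To see these long tasks on a real page, the Long Tasks API reports every main-thread task over 50 ms; a minimal sketch:

```js
// Minimal sketch: log each main-thread task longer than 50 ms (the tasks TBT counts).
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)} ms`, entry.attribution);
  }
}).observe({ entryTypes: ['longtask'] });
```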
To address poor TBT:
- Break down long tasks (a chunking sketch follows this list).
- Optimize the page for interaction readiness.
- Use a web worker.
- Reduce JavaScript execution time.
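A minimal sketch of the first item, breaking one long task into chunks that yield back to the main thread between batches; the chunk size and helper names are arbitrary.

```js
// Minimal sketch: process a large array in small batches, yielding between batches
// so user input can be handled instead of queuing behind one long task.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handleItem);
    await yieldToMain(); // give the browser a chance to respond to input
  }
}
```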
Resources to improve TBT:
- First Input Delay (FID) web.dev
- Total blocking time (TBT) web.dev
- Optimize the first input delay web.dev
- Lighthouse: Total blocking time web.dev
First Contentful Paint (FCP):
- FCP measures the time at which the first text or image is painted (visible).
- Lighthouse performance score weighting: 10%.
- In user terms: the moment I can see that the page I requested is actually loading, so I can stop hovering over the back button.
- Lighthouse's FCP score is calculated by comparing the page's FCP time to real website data stored in the HTTP Archive.
- The score increases if the page's FCP is faster than that of other pages in the HTTP Archive.
FCP scoring:
- Goal: Achieve FCP in <2 seconds.
Elements that can be part of FCP:
- FCP is the time it takes to render the first visible element in the DOM.
- Anything that happens before an element renders non-white content to the page (excluding iframes) counts toward FCP.
- Iframes are not considered part of FCP.
- If an iframe is the first content to render, FCP keeps counting until the first non-iframe content loads; the iframe's load time is not counted toward FCP.
- The FCP documentation notes that FCP is often affected by font load time and offers tips for improving font loads.
To identify FCP using Chrome DevTools:
- Open the page in Chrome.
- Navigate to the Performance panel of DevTools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
- Click the FCP marker in the Timings section.
- The Summary tab shows the FCP timestamp in ms.
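As with LCP, FCP can also be read in script via the Paint Timing API; a minimal sketch:

```js
// Minimal sketch: read First Contentful Paint from the Paint Timing API.
new PerformanceObserver((entryList) => {
  const fcpEntry = entryList.getEntriesByName('first-contentful-paint')[0];
  if (fcpEntry) {
    console.log('FCP:', Math.round(fcpEntry.startTime), 'ms');
  }
}).observe({ type: 'paint', buffered: true });
```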
To improve FCP:
- Before any content can be displayed to the user, the browser must download, parse, and process every external stylesheet it encounters before it can render anything to the screen.
- The fastest way to bypass the delay of external resources is to use inline styles for above-the-fold content.
- To keep the site scalable on an ongoing basis, use automated tools such as Penthouse and Apache's mod_pagespeed.
- These solutions come with some functional limitations, require testing, and may not be for everyone.
- More broadly, everyone can improve time to First Contentful Paint by reducing the scope and complexity of style calculations.
- If a style is not used, remove it.
- Detect unused CSS with Chrome DevTools' built-in Code Coverage tool.
- Use better data to make better decisions.
- As with TTI, capture real-user FCP measurements in Google Analytics to tie improvements to KPIs (a reporting sketch follows this list).
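A minimal sketch of that last item, assuming gtag.js is already installed on the page; the event name and parameters are hypothetical and can be whatever your reporting setup expects.

```js
// Minimal sketch: forward the real user's FCP to Google Analytics via gtag.js.
// The event name and parameters below are hypothetical examples.
new PerformanceObserver((entryList) => {
  const fcpEntry = entryList.getEntriesByName('first-contentful-paint')[0];
  if (fcpEntry && typeof gtag === 'function') {
    gtag('event', 'web_vitals', {
      metric_name: 'FCP',
      metric_value: Math.round(fcpEntry.startTime),
      non_interaction: true,
    });
  }
}).observe({ type: 'paint', buffered: true });
```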
Resources to improve FCP:
- First Contentful Paint, web.dev
Speed Index:
- Represents: how much of the page is visible at a time during load.
- Lighthouse performance score weighting: 10%.
- What it measures: Speed Index is the average time at which visible parts of the page are displayed.
- How it is measured: Lighthouse's Speed Index measurement comes from a node module called Speedline.
- You'd have to ask the kind wizards at webpagetest.org for the specifics, but at a high level, Speedline's algorithm scores how visually complete each frame is relative to the viewport (read: device screen).
SI scoring:
- Goal: Achieve SI of less than 4.3 seconds.
To improve SI:
- The Speed Index score reflects the site's critical rendering path.
- A "critical" resource is one that is required for first paint or is core to the page's main functionality.
- If the path is long and dense, the site will be slow to render a visible page.
- If the path is optimized, content reaches users faster and the Speed Index score improves.
A slow critical rendering path delays rendering. Lighthouse recommendations usually associated with a slow critical rendering path:
- Reduce main-thread work.
- Reduce JavaScript execution time.
- Reduce the depth of critical requests.
- Remove render-blocking resources.
- Defer offscreen images (a lazy-loading sketch follows this list).
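A minimal sketch of the last item, deferring offscreen images with an IntersectionObserver; the data-src attribute and 200 px margin are illustrative choices.

```js
// Minimal sketch: only assign an image's real src once it approaches the viewport.
const lazyImageObserver = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // real URL stored in a data-src attribute
    observer.unobserve(img);
  }
}, { rootMargin: '200px' });

document.querySelectorAll('img[data-src]').forEach((img) => lazyImageObserver.observe(img));
```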
Resources to improve SI:
- Speed Index, web.dev
Time to Interactive (TTI):
- Represents: load responsiveness; identifying when a page looks responsive but isn't yet.
- Lighthouse performance score weighting: 10%.
- Measures: the time until the page's main resources have loaded and the page can reliably respond to user input.
- TTI measures how long it takes for a page to become fully interactive.
A page is considered fully interactive when:
- 1. The page displays useful content, as measured by First Contentful Paint.
- 2. Event handlers are registered for the most visible page elements.
- 3. The page responds to user interactions within 50 milliseconds.
TTI Scoring:
- Goal: Achieve a TTI score of less than 3.8 seconds.
Resources to improve TTI:
- Time to Interactive, web.dev
Cumulative Layout Shift (CLS):
- Represents: the user's perception of the page's visual stability.
- Lighthouse performance score weighting: 15%.
- Measures: how much page elements shift through the end of page load.
- CLS is not measured in units of time.
- Instead, it is a calculated metric based on the number of frames in which elements move and the total distance (in pixels) the elements move.
CLS Scoring:
- Goal: Achieve a CLS score of less than 0.1.
Elements that can be part of CLS:
- Any visual element that appears above the fold at some point during load.
- That's right – if a page loads its footer first and then the hero content above it, the CLS score suffers.
Causes of poor CLS:
- Images without dimensions.
- Ads, embeds, and iframes without dimensions.
- Dynamically injected content.
- Web fonts that cause FOIT / FOUT.
- Actions that wait for a network response before updating the DOM.
To identify CLS using Chrome DevTools:
- Open the page in Chrome.
- Navigate to the Performance panel of DevTools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
- Hover over the load's screenshots and move from left to right (make sure the Screenshots checkbox is checked).
- Look for elements that jump around after the first paint to identify what is causing CLS.
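The same shifts can be accumulated in script with the Layout Instability API; a minimal sketch that ignores shifts caused by recent user input, as CLS does. (This sums every shift; newer CLS definitions group shifts into session windows, so treat it as an approximation.)

```js
// Minimal sketch: sum layout-shift scores that were not triggered by user input.
let cumulativeLayoutShift = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
    }
  }
  console.log('Current CLS:', cumulativeLayoutShift.toFixed(3));
}).observe({ type: 'layout-shift', buffered: true });
```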
To improve CLS:
Once you have found the offending element(s), update them so that they hold their position as the page loads.
For example, if slow-loading ads cause a high CLS score, use a placeholder of the same size to hold the space while the ad loads, preventing the page from shifting.
Here are some common ways to improve CLS:
- Always include width and height sizing properties on images and video elements.
- Reserve space for ad slots (and do not collapse it) – see the sketch after this list.
- Avoid inserting new content over existing content.
- Take care when placing non-sticky ads near the top of the viewport.
- Preload fonts.
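As a minimal sketch of reserving ad-slot space (per the list above), the snippet below sets a minimum height on a placeholder container before the ad script injects its creative; the #ad-slot selector and 250 px height are placeholders.

```js
// Minimal sketch: hold the slot's final height before the ad loads so nothing shifts
// when the creative arrives. Selector and height are placeholder values.
const adSlot = document.querySelector('#ad-slot');
if (adSlot) {
  adSlot.style.minHeight = '250px';
}
```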
CLS Resources:
- Optimize Cumulative Layout Shift, web.dev
- Cumulative Layout Shift (CLS), web.dev
- Cumulative Layout Shift (CLS) in AMP – AMP Blog
- Cumulative Layout Shift (CLS) Calculator
To test performance using Lighthouse
Methodology tips:
- Out of the box, Lighthouse audits a single page at a time.
- A single page's score does not represent the whole site, and a fast homepage does not mean a fast site.
- Test multiple page types on the site.
- Identify the main page types, templates, and goal conversion points (signup, subscribe, and checkout pages).
- If 40% of the site's pages are blog posts, make 40% of the test URLs blog pages!
- Example Page Test Inventory
- Before starting to optimize, run Lighthouse on each of the sample pages and save the report data.
- Record the list of scores and flagged improvements.
- Prevent data loss by saving the JSON results; use the Lighthouse Viewer when detailed result information is needed later.
- Get the backlog to budge by using ROI.
- It can be difficult to get development resources to act on SEO recommendations.
- An in-house SEO pro could ruin their pancreas eating a slice of birthday cake for every backlogged ticket's birthday – or at least learn to hate cake.
- In my experience as an in-house enterprise SEO pro, the key to getting performance initiatives prioritized is having numbers that support the investment.
- That initial data turns into dollar figures that can be used to justify and reward development efforts.
- With Lighthouse tests, you can recommend specific, actionable changes (preload this font file) and tie each change to a specific metric.
- Chances are that more than one area will be flagged during testing.
- If you are wondering which changes will give the most bang for the buck, check out the Lighthouse Scoring Calculator.
To run Lighthouse tests:
In this context, there are many roads that lead to Oz.
Sure, some scarecrows may be especially vocal about a particular shade of brick, but the right road depends on your goals.
Is a one-off report enough, or is it time to learn some npm? For quick checks, a one-off report or two should do the trick.
Unless there is a specific use case for desktop, run the mobile audit as the default.
For One-Off Reports: PageSpeed Insights.
- Test one page at a time in PageSpeed Insights.
- Enter the URL.
Benefits of running Lighthouse from PageSpeed Insights:
- The detailed Lighthouse report is bundled with URL-specific field data from the Chrome User Experience Report.
- Opportunities and diagnostics are filtered to specific metrics.
- This is exceptionally useful when creating tickets for engineers and tracking the outcome of changes.
- PageSpeed Insights already runs Lighthouse v9.
Disadvantages of running Lighthouse from PageSpeed Insights:
- One report at a time.
- Only the performance tests run (if you need the SEO, accessibility, or best-practices audits, run them separately).
- Cannot test local builds or staging pages.
- Reports cannot be saved in JSON, HTML, or Gist format (saving as PDF via browser functionality is an option).
- Need to save the results manually.
For comparable reports: Chrome DevTools or web.dev
- Use an incognito instance with all extensions disabled and the browser cache disabled, because the report reflects the browser instance that runs it.
- Pro tip: Create a Chrome profile for testing. Keep it local (no sync, password saving, or link to an existing Google account) and do not install extensions for that user.
To run a Lighthouse test using Chrome DevTools:
- Open an incognito instance of Chrome.
- Navigate to the Chrome DevTools Network panel (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
- Tick the Disable cache checkbox.
- Navigate to the Lighthouse panel.
- Click Generate report.
- Click the three-dot menu to the right of the URL in the report.
- Save in the preferred format (JSON, HTML, or Gist).
Keep in mind that the Lighthouse version may change depending on the version of Chrome you are using; v8.5 ships with Chrome 97.
Lighthouse v9 will ship with DevTools in Chrome 98.
To run a Lighthouse test using web.dev:
It's similar to DevTools, but without having to remember to disable all those pesky extensions!
- Go to web.dev/measure.
- Enter the URL.
- Click Run Audit.
- Click to view the report.
Benefits of running Lighthouse from DevTools/web.dev:
- Test local builds or staging pages.
- Saved reports can be compared using the Lighthouse CI Diff tool.
Disadvantages of running Lighthouse from DevTools/web.dev:
- One report at a time.
- Need to save the results manually.
For testing at scale (and sanity): Node command line
- 1. Install npm.
- 2. Install the Lighthouse node module with npm install -g lighthouse.
- 3. Run a single test with lighthouse <url>.
- 4. Run the tests on the full inventory of sample URLs by running Lighthouse programmatically (a sketch follows this list).
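Here is a minimal sketch of running Lighthouse programmatically and saving the JSON report, based on the lighthouse and chrome-launcher Node modules' documented usage; the URL and output filename are placeholders, and newer Lighthouse releases are ESM-only, so check the docs for your version.

```js
// Minimal sketch: run a performance-only Lighthouse audit and save the JSON report.
// URL and output filename are placeholders.
const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function audit(url) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { port: chrome.port, output: 'json', onlyCategories: ['performance'] };

  const runnerResult = await lighthouse(url, options);

  // Save the full report for later comparison in the Lighthouse Viewer.
  fs.writeFileSync('report.json', runnerResult.report);
  console.log(url, 'performance score:', runnerResult.lhr.categories.performance.score * 100);

  await chrome.kill();
}

audit('https://example.com/');
```

Loop this over the sample-page inventory to generate comparable reports for every template.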
Benefits of running Lighthouse from Node:
- Multiple reports can be executed simultaneously.
- Can be set to run automatically to track change over time.
Disadvantages of running Lighthouse from Node:
- Requires some coding knowledge.
- More time-intensive setup.
The complexity of performance metrics reflects the challenges facing all sites.
Use performance metrics as a proxy for user experience – that is, factor in some unicorns.
Tools like Google’s Test My Site can help make customer-based arguments about why conversion and performance are important.
Getting these projects traction requires translating Lighthouse's individual performance metrics into actionable tickets for a skilled, collaborative engineering team.
Track the data and shout it from the rooftops.
No matter how hard Google works to quantify qualitative experiences, it falls to SEO professionals and developers to figure out how to turn those concepts into code.
Test, iterate, and share what you learn!