Using the Chrome DevTools Audit Feature to Measure and Optimize Performance (Part 1)

Jim Medlock - Jan 16 '19 - Dev Community

Creating your Performance Tuning Process

Photo by Christian Kaindl on Unsplash

In an earlier article, “React Application Performance Analysis,” we touched on some of the Chrome DevTools features that are useful when analyzing application performance, and described a workflow for performance measurement and tuning.

Let’s now take a deeper dive into Chrome DevTools to see how the Audit feature can be used to pinpoint and correct performance issues, primarily using these four tools:

  • Audit feature of Chrome Developer Tools
  • React plugin for Chrome
  • Scientific method
  • Your knowledge & experience

As crucial as tools are to easing the “grunt” work associated with any task, what makes the most difference to the outcome is combining your knowledge and experience with a repeatable process. Relying more on these, rather than on the tools, will allow you to swap different tools in and out of the process based on changes in requirements and landscape. It’s also a superior method of expanding your knowledge and experience since it reinforces what you already know to be true and challenges what you think to be true.

Let’s get started!

The Meteorite Explorer Application

The application we’ll be analyzing is Meteorite Explorer. The sole purpose of this application is to present data about meteorite landings from the Meteorite Landings dataset found on NASA’s Open Data Portal. Information about each of the approximately 46K reported meteorite strikes, going back to 301 A.D., is presented in a tabular format, along with the capability to search by the name of the landing location.

Figure 1 — Meteorite Explorer

Rather than pointing out the flaws in the application as it currently stands, try it out and ask yourself:

  1. Are there any areas where performance impacts the user experience (i.e., UX)?
  2. Is performance consistent across all use cases, or are there specific ones where it’s particularly impactful?
  3. Is performance generally acceptable, or is its impact such that it limits the effectiveness of the application?

The Meteorite Explorer repo is hosted on GitHub and the starting point, before any tuning, resides in the branch feature/01-initial-app.

Performance Tuning Workflow

From www.pexels.com

Our performance tuning workflow is conceptually simple, but like most “simple” things it can be complex to implement and follow. The high-level steps that make up our process, or “workflow,” are:

  1. Take a baseline snapshot of performance.
  2. Review the snapshot and identify problem areas.
  3. Develop ideas for how the application can be changed to improve its performance.
  4. Test each idea in isolation and measure performance against the baseline.
  5. Choose the idea with the most positive impact, implement it, and create a new performance baseline as part of your production deployment procedure.

At this point, you might be thinking, “This process sounds like the Scientific Method!” There’s a good reason for that: it follows the Scientific Method’s core principles of observing, hypothesizing, experimenting, and refining.

Practical Shortcuts

It’s important to point out that there are shortcuts that can be taken to make the process faster. The most obvious is that the review of the original baseline (step #2) may not show any problems. If that is the case, then there’s no need to perform any tuning.

Likewise, you may find there are areas of concern, but nothing severe enough to justify additional tuning. In this situation, make a record of your observations so you’ll have the information you need to give the problem the attention it deserves if it becomes more impactful. However, take care if you choose to defer resolving the issue, so it doesn’t become another instance of technical debt.

As you develop ideas (i.e., hypotheses) for how to correct a problem, don’t overanalyze each one. Be creative when identifying possible solutions, but rely on your knowledge of the situation and the application, coupled with the severity of the issue, to create a more focused list of avenues to pursue.

“It’s okay to use your intuition as a starting point.”

Having a short list of ideas to test is more practical, more efficient, and therefore more achievable. It’s okay to use your “intuition” since the process is self-correcting. If your intuition is incorrect, the measurements will reveal that the solution didn’t have the desired impact and another is needed.

It’s Okay to be Wrong

Also, keep in mind that it is entirely acceptable to be wrong, since we often learn more from our mistakes than from our successes. One of the strengths of the observe-hypothesize-test-refine approach is that it both catches and builds upon incorrect ideas.

“You build on failure. You use it as a stepping stone. Close the door on the past. You don’t try to forget the mistakes, but you don’t dwell on it. You don’t let it have any of your energy, or any of your time, or any of your space.” … Johnny Cash

There is no disgrace in being wrong. Any stigma should be the result of repeating the same mistakes over and over and over again.

Baseline the Application

Photo by Sarandy Westfall on Unsplash

Just as a photograph captures the image of a particular moment in time, the baseline captures a profile of the performance of an application at a specific point in its life. The baseline’s purpose is to establish a position from which the effect of changes can be analyzed to determine whether they improved or worsened the original issue.

“If you can’t measure it, you can’t manage it.” … Peter F. Drucker

In a previous article, we examined how to use basic features of Chrome Developer Tools and the React Component Profiler to measure application performance. However, over the past year, two changes have come about which alter the tools we’ll now be using.

First, React 16 deprecated the React Component Profiler, recommending instead that native browser profiling tools be used. Second, Chrome 60 added the ‘Audits’ panel, which integrates Google Lighthouse into Chrome Developer Tools.

Capturing the Baseline

To create a baseline snapshot, first open Developer Tools (Option+Command+I on macOS) and then select the ‘Audits’ item from the ‘>>’ menu.

Figure 2 — Selecting the Audit feature (aka Lighthouse) in Chrome Dev Tools

The Audit feature displays its capabilities and options within the Dev Tools pane in the browser window. The ‘Device’ section lists the types of devices used for the audit, ‘Audits’ indicates which audits to perform, and the ‘Throttling’ section defines which network conditions will be simulated.

For this discussion, select ‘Desktop’, ‘Performance’, and ‘Simulated Fast 3G, 4x CPU slowdown’, and then click the ‘Run audits’ button at the bottom of the pane.

Figure 3 — Audit Feature Options
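Capturing the same audit from the command line can make baselining part of a build or deployment script. The sketch below is an illustration only, not something from the Meteorite Explorer repo: it assumes the lighthouse and chrome-launcher npm packages are installed, the option and field names follow the Lighthouse versions current at the time of writing (they may differ in yours), and http://localhost:3000 is a placeholder for wherever the application is being served.

```javascript
// baseline-audit.js -- a minimal sketch, not part of the Meteorite Explorer repo.
// Assumes the `lighthouse` and `chrome-launcher` packages are installed via npm.
const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function captureBaseline(url) {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

  // Run only the Performance category; Lighthouse's default simulated
  // network/CPU throttling is broadly similar to the DevTools settings above.
  const results = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['performance'],
    output: 'json',
  });

  // Save the raw JSON report so it can be reloaded later, just like a
  // report downloaded from the Audits pane.
  fs.writeFileSync('baseline.json', results.report);

  await chrome.kill();
  return results.lhr; // the parsed Lighthouse result object
}

captureBaseline('http://localhost:3000').then((lhr) => {
  console.log(`Performance score: ${lhr.categories.performance.score * 100}`);
});
```

Because the raw report string is saved as-is, the resulting file can later be dropped into the Lighthouse Viewer described at the end of this article, exactly like a report downloaded from the Audits pane.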

Running the Performance Audit

Figure 4 — Audit Performance Overview

The Performance Audit takes some time to run, so be patient. Once the audit has completed, the results are displayed in the Audit pane, divided into four categories:

Metrics

The first part of the Results section provides an indicator for the health of key performance measures, along with an overall score representing aggregate performance. Each metric consists of a standard red-yellow-green health indicator and its elapsed time. Hovering over an item will display an overview and a link to more information.

The ‘View Trace’ button at the end of this section displays a trace of the activity performed by, and for, the application. Helpfully, screenshots are included showing the state of the UI along the activity timeline.

Figure 5 — Audit Metrics Trace Output
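If a baseline was captured programmatically as in the earlier sketch, the same metric readings can be pulled straight off the result object rather than read from the pane. The audit IDs and fields below (first-contentful-paint, displayValue, and so on) are assumptions based on the Lighthouse report format and may vary between versions.

```javascript
// Continues the earlier sketch: `lhr` is the Lighthouse result object.
// Audit IDs and field names are assumed from the Lighthouse report format.
function summarizeMetrics(lhr) {
  const metricIds = ['first-contentful-paint', 'first-meaningful-paint', 'interactive'];

  for (const id of metricIds) {
    const audit = lhr.audits[id];
    if (audit) {
      // displayValue is the human-readable elapsed time shown in the Metrics section.
      console.log(`${audit.title}: ${audit.displayValue} (score ${audit.score})`);
    }
  }
}
```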

Opportunities

The Opportunities section of the results provides a list of “quick wins”: ways to improve the application by optimizing resource consumption. Each opportunity displays its anticipated savings, and clicking on it will reveal an overview, supplemental information, and a link to additional information.

Figure 6 — Audit Opportunity
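These opportunity entries also live in the report object, so a saved baseline can be scanned for quick wins without re-opening DevTools. Another minimal sketch, again assuming the Lighthouse report format (the details.type and overallSavingsMs fields may differ across versions):

```javascript
// Continues the earlier sketch: list audits flagged as opportunities,
// sorted by their estimated savings in milliseconds.
function listOpportunities(lhr) {
  return Object.values(lhr.audits)
    .filter((audit) => audit.details && audit.details.type === 'opportunity')
    .sort((a, b) => b.details.overallSavingsMs - a.details.overallSavingsMs)
    .map((audit) => `${audit.title}: ~${Math.round(audit.details.overallSavingsMs)} ms`);
}
```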

Diagnostics

The Diagnostics section lists additional areas impacting the application’s performance. Each diagnostic consists of a red or yellow severity indicator as well as an associated measure. Like other Audit components, clicking on a diagnostic will display an overview, supplemental metrics, and links to additional information.

Figure 7 — Audit Diagnostics

Passed Audits

The audit results not only provide an overview of what went wrong with the application but also of what went right. ‘Passed audits’ lists the successfully completed tests, and as before, clicking on an audit item provides more information.

Figure 8 — Passed Audits

Saving the Baseline

Keep in mind that the Performance Audit will produce slightly different results when run on different computers, and even when run on the same machine at different times. This is one reason why it’s essential to save the baseline.

At this point, we’re not going to worry about how to interpret the results of the performance audit. We’ll leave that for the next section. However, the audit results should be saved to establish a new baseline.

Figure 9 — Audit Download button

Click the Download button and select the file name and location for the Audit’s JSON file. Since performance tuning requires multiple baselines, use a descriptive file name that includes the creation date and time. In addition, it’s a good idea to store these files in a permanent location, like Google Drive, since at the very least the baseline for your current production release will need to be retained.
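One way to keep file names descriptive is to derive them from the report’s own timestamp, and two saved reports can then be compared without leaving the command line. The sketch below is another assumption-laden illustration: it treats both files as standard Lighthouse JSON reports and relies on the fetchTime and score fields from that format.

```javascript
const fs = require('fs');

// Build a descriptive file name such as "meteorite-explorer-2019-01-16T14-30.json"
// from the report's own timestamp (fetchTime is assumed from the report format).
function baselineFileName(lhr, appName = 'meteorite-explorer') {
  const stamp = lhr.fetchTime.replace(/:/g, '-').slice(0, 16);
  return `${appName}-${stamp}.json`;
}

// Compare the overall performance score of a saved baseline and a newer report.
function compareScores(baselinePath, currentPath) {
  const baseline = JSON.parse(fs.readFileSync(baselinePath, 'utf8'));
  const current = JSON.parse(fs.readFileSync(currentPath, 'utf8'));

  const before = baseline.categories.performance.score * 100;
  const after = current.categories.performance.score * 100;
  const delta = after - before;
  console.log(`Performance score: ${before} -> ${after} (${delta >= 0 ? '+' : ''}${delta})`);
}
```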

To view a saved Audit report open a browser tab to the URL https://googlechrome.github.io/lighthouse/viewer/ and then drag and drop the previously saved JSON file into the viewer window.

Wrapping It Up

Photo by NEW DATA SERVICES on Unsplash

Success isn’t the result of luck — it is the consequence of understanding the problem at hand, having a good set of tools and the knowledge of how to properly use them, and creating a plan for achieving the desired objective. Another vital component of the process is accepting that the plan may need to be adjusted as new information is discovered.

In this article, we’ve presented questions to help frame the application performance problem, defined a performance tuning workflow, and shown how to use the Performance Audit feature of Chrome Developer Tools to create a performance baseline.

In Part 2 — Tuning the Application, we’ll demonstrate how to use these tools to tune the Meteorite Explorer application and improve the user experience.

Disclosure: This article was based on an earlier article, “Using the Chrome DevTools Audit Feature to Measure and Optimize Performance,” which has been refactored into two parts.

