Test Optimization for Continuous Integration

DavidTz40 - Jan 20 '23 - Dev Community

“Test frequently and early.” If you’ve been following my testing agenda, you’re probably sick of hearing me repeat that. But it stands to reason that if your tests detect an issue soon after it is introduced, it will be easier to resolve. This is one of the guiding principles that makes continuous integration such an effective practice. I’ve encountered several teams who have a lot of automated tests but don’t use them as part of a continuous integration approach. There are usually reasons why the team believes these tests can’t be run under continuous integration: perhaps the tests take too long to run, or they aren’t dependable enough to produce trustworthy results on their own and need human interpretation.

I begin my evaluation of these suites with a simple exercise. I start by drawing two axes on a whiteboard: the vertical axis represents the value of a test suite, while the horizontal axis represents the time the suite takes to execute. The team and I then write the name of each test suite on a sticky note and place it at the appropriate spot on the board. The grid below shows an example of how the test suites might map out.

Here’s an example that you may adapt to your own situation:

| Importance \ Running time of the test suite | <15 minutes | <45 minutes | >45 minutes |
| --- | --- | --- | --- |
| High | TS 1 | TS 5 | TS 3 |
| Medium | N/A | TS 2 | N/A |
| Low | TS 6 | N/A | TS 4 |

We base the significance of the tests on the team’s own judgment, so we keep the options simple: Low Value, Medium Value, and High Value. This judgment reflects the tests’ dependability (their ability to produce correct results every time they run) as well as the amount of confidence the tests give the team in the system’s quality. For example, some test suites are required when making a release decision, but their results are inconsistent, and when they fail for no apparent reason someone must manually re-run the failing tests. We might label such a suite Medium Value; if it passed reliably every time, it would be High Value.

On the other hand, there may be a test suite that is run only because it is part of a checklist, and no one understands what the results mean. Perhaps the original creator has left the team and nobody has taken ownership of the suite. That suite falls into the Low-Value category. The horizontal axis is straightforward: it’s simply the time it takes to run the suite. Now that you’ve evaluated each suite, consider how to improve them, either by making them more valuable or by making them run faster. I prefer to divide the tests for continuous integration into these categories:

  1. High-value tests that run in 15 minutes or less — These tests can be executed on every build. They are used to accept the build for further testing; until these tests pass, the team should consider the build broken. Your developers will not be happy if they have to wait longer than that for build results.

  2. High-value tests that complete in 45 minutes or less — These tests can be run continuously. For example, you might schedule them to run every hour, starting again as soon as they finish. If no new build is available when a run completes, wait until the next build is finished before starting again.

  3. High-value tests that take longer than 45 minutes to run — These tests can be run on a daily or nightly basis, so that the results are ready when your team’s business day begins.

  4. Medium-value tests — These tests can be run once a week or once per release cycle.

You’ll notice that I excluded tests with no value at all. These should be dropped from execution or improved until they deliver value; keeping test suites that don’t provide value is pointless. I set the time limits of 15 and 45 minutes based on feedback from the development teams: they want fast feedback. Consider a developer who is waiting for the build to pass before leaving for lunch. Your timings may vary depending on your circumstances; this is only a framework to demonstrate the thinking behind choosing which tests run with the build versus hourly.
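To make these tiers concrete, here’s a minimal sketch of how you might tag suites by tier with pytest markers so that each scheduled CI job picks up only its own tier. The marker names and test bodies are illustrative, not from any specific project:

```python
# tests/test_tiers.py -- a minimal sketch, assuming pytest; names are illustrative
import pytest

@pytest.mark.smoke      # high value, <15 minutes: run on every build
def test_service_is_alive():
    # stands in for a quick build-acceptance check
    assert 2 + 2 == 4

@pytest.mark.hourly     # high value, <45 minutes: run continuously
def test_core_workflow():
    assert sorted([3, 1, 2]) == [1, 2, 3]

@pytest.mark.nightly    # high value, >45 minutes: run daily or nightly
def test_deep_regression():
    assert sum(range(1_000_000)) == 499_999_500_000

@pytest.mark.weekly     # medium value: run weekly or once per release cycle
def test_rare_edge_cases():
    assert "ci".upper() == "CI"
```

Each CI job then selects only its tier, for example `pytest -m smoke` on every build and `pytest -m nightly` on the overnight schedule. (Register the markers in pytest.ini to avoid unknown-marker warnings.)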

A significant advantage of running the tests this often is that you should have very few code changes between a successful test run and a failed one, making it easy to identify the change that caused the failure. Several approaches have proven useful in adapting existing tests for continuous integration suites. Here are seven proven and effective methods.


Automatically initiate tests

You may have numerous test suites that are normally triggered manually during the testing phase of a project. Including these tests in the continuous integration suite is often as simple as a little PowerShell scripting. Performance, load, and security tests are examples of tests that might be run by a specialist who is not a member of the traditional test squad and hence never get set up for automated execution. Another benefit of running these tests regularly is that the problems they detect are typically difficult to fix, so the sooner a problem is identified, the more time the team has to resolve it. These tests are generally high value, but because they take more than an hour to complete, they are typically run on a daily or nightly schedule.
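The scripting itself can be trivial. Here’s a sketch in Python (the PowerShell the article mentions works just as well) that kicks off a hypothetical security suite and surfaces its exit code so CI can fail the build; the paths and runner flags are assumptions, not from the article:

```python
# trigger_suite.py -- a sketch of kicking off a manually-run suite from CI
import subprocess
import sys

def run_security_suite() -> int:
    """Run the (hypothetical) security suite and return its exit code."""
    result = subprocess.run(
        ["pytest", "tests/security", "--junitxml=reports/security.xml"],
        check=False,  # let CI decide what to do with a nonzero exit code
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_security_suite())
```

A nightly CI job just runs this script; a nonzero exit code fails the job.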

Remove uncertainty

The entire purpose of automation is to get reliable, accurate test results. When a test fails, someone must figure out what went wrong, and as the number of false positives and inconsistent results grows, so does the time spent analyzing failures. To avoid this, remove unstable tests from regression packs. Older automated tests may also be missing critical verifications; prevent this with adequate test planning before any tests run. Keep an eye on whether each test is up to date, and verify the sanity and validity of automated tests across test cycles.
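One lightweight way to pull unstable tests out of the regression pack without losing them entirely is to quarantine them behind a marker. A sketch with pytest; the marker name and test are my own, not from the article:

```python
# a sketch of quarantining a flaky test, assuming pytest
import pytest

@pytest.mark.flaky_quarantine  # excluded from regression until stabilized
def test_inventory_sync():
    # placeholder for an unstable test awaiting a proper fix
    assert True
```

The regression job runs `pytest -m "not flaky_quarantine"`, while a separate low-priority job exercises the quarantined set so those tests aren’t forgotten.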

Be clever with your wait times

We’ve all done it: a problematic test consistently fails because the back end didn’t respond fast enough or because a resource is still processing, so we add a sleep statement. We meant it as a temporary fix, but that was almost a year ago. Hunt down those awful sleep statements and see whether you can replace them with a better wait that completes when the event occurs rather than after a fixed amount of time.
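In Selenium, for instance, that usually means replacing time.sleep with WebDriverWait and an expected condition. A sketch, where the page URL and locator are hypothetical:

```python
# a sketch of replacing a fixed sleep with an event-driven wait (Selenium)
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/orders")  # hypothetical page under test

    # Before: time.sleep(30) always burns the full 30 seconds.
    # After: return as soon as the element appears; fail only after 30 seconds.
    order_row = WebDriverWait(driver, timeout=30).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".order-row"))
    )
    assert order_row.is_displayed()
finally:
    driver.quit()
```

On a fast back end the wait returns in milliseconds instead of pinning every run to the worst case.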

Collective Ownership of Tests

Don’t delegate an entire automated testing initiative to a single tester or programmer. If the rest of the team isn’t kept up to date, they can’t contribute meaningfully. To properly incorporate automation into the testing infrastructure, the whole team must be on board at all times. This lets every team member understand the process, communicate more clearly, and make informed decisions about how to set up and run the appropriate tests.

Restructure the test configuration

Tests generally perform setup, then verification. For example, one team had a suite of UI-driven tests that took a long time to run and produced many false failures owing to timing issues and small UI adjustments. We refactored that suite to do test setup through API calls and verification through the UI. The improved suite had the same functional coverage, but it ran 85% faster and produced roughly half as many false failures from UI changes.
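Here’s a sketch of that pattern, assuming a hypothetical REST endpoint for test-data setup; only the final verification touches the browser:

```python
# a sketch of API-based setup with UI-based verification; endpoints are hypothetical
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE = "https://app.example.com"  # hypothetical application under test

# Setup via API: one fast HTTP call instead of clicking through forms.
resp = requests.post(f"{BASE}/api/orders", json={"sku": "ABC-123", "qty": 2}, timeout=10)
resp.raise_for_status()
order_id = resp.json()["id"]

# Verification via UI: only the behavior we care about runs in the browser.
driver = webdriver.Chrome()
try:
    driver.get(f"{BASE}/orders/{order_id}")
    assert driver.find_element(By.ID, "order-status").text == "Pending"
finally:
    driver.quit()
```

The slow, brittle UI steps shrink to a single page load and one assertion.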


Run tests in parallel to maximize the value of each minute of execution

Running tests in parallel is significantly more economical now, thanks to virtual servers, cloud technology, and services that automatically create environments and deploy your code. Look at the test suites that take a while to run and see whether any of their tests can run simultaneously. On one team, we had a highly important suite of 5,000 test cases. We didn’t run it very often because it took many hours to complete; it was a thorough suite that covered a wide range of components. We divided it into roughly a dozen parallel-capable suites, which let us run the tests more often (daily instead of weekly), and because the new suites were organized by component, we could also pinpoint issues more quickly.
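If your test runner doesn’t parallelize for you (pytest, for example, has the pytest-xdist plugin for this), a crude but effective approach is to launch the per-component suites as parallel processes. A sketch, with hypothetical suite paths:

```python
# run_parallel.py -- a sketch that fans per-component suites out across workers
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical component suites carved out of one large suite.
SUITES = ["tests/billing", "tests/search", "tests/checkout", "tests/reports"]

def run_suite(path: str) -> tuple[str, int]:
    # Each pytest process writes its own report file so results don't collide.
    report = f"reports/{path.replace('/', '_')}.xml"
    result = subprocess.run(["pytest", path, f"--junitxml={report}"], check=False)
    return path, result.returncode

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_suite, SUITES))

for path, code in results:
    print(f"{path}: {'PASS' if code == 0 else 'FAIL'}")
raise SystemExit(1 if any(code != 0 for _, code in results) else 0)
```

Threads are fine here because the real work happens in the child pytest processes.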

Make small yet effective test suites

Go for the most critical tests and combine them into a smaller, faster-running suite. These are often relatively basic tests, but they are required to validate the system before further testing; it makes no sense to proceed if they fail. We usually call these build acceptance tests or build verification tests. If you already have such suites, that’s fantastic; just make sure they run fast.
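Building on the tier markers sketched earlier, the build-acceptance job can select only the smoke tests and stop at the first failure, so the team hears about a broken build as quickly as possible. A sketch:

```python
# run_smoke.py -- a sketch of a fail-fast build-acceptance run, assuming pytest
import sys
import pytest

# -m smoke: only the build-acceptance tier; -x: stop at the first failure.
sys.exit(pytest.main(["-m", "smoke", "-x", "tests/"]))
```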

