It's Time For A TRIM(S): Richard Bradshaw [Testμ 2022]

LambdaTest Team - Sep 6 '22 - Dev Community

Automation, automation everywhere.

Why do you need to automate? Where do you need to set a boundary? When you dig deeper into the answers to these questions, that’s where things get interesting. Richard Bradshaw, BossBoss at the Ministry of Testing, joined Manoj Kumar, VP of Developer Relations at LambdaTest, as a guest speaker at the Testμ Conference 2022 to unveil the nitty-gritty of automation testing through the TRIM(S) model.

He spoke at length on why the need arises to trim your automated tests and how to get it done, detailing every step in his conversation with us.

Before he explained TRIM(S) in detail, he sounded a red siren for the testing community:

Thou shall not delete!

This means that when you run automated tests, you need to ask whether each test still adds value. When a test is green but the team strongly feels it no longer adds value, he recommends the team get into a discussion and share their feedback on why it has to be deleted.

These are the reasons he suggests for deleting or, in other words, trimming automated tests:

  • When the risk it was mitigating is no longer valid.

  • When you have moved the test to a different layer/seam.

  • When you come across unreliable or flaky tests.

  • When you have features with lower usage.

  • When the test is too expensive.

To make this process efficient, he offers the mnemonic TRIM(S).

It stands for Targeted, Reliable, Informative, Maintainable, Speedy.

With Targeted, he means targeting a specific risk and automating at the lowest layer that testability allows.

When you target, you need to identify a tiny risk and check it in a small unit, instead of looking at the larger picture. By testability, he means the ability to test, which is influenced by many factors. The API, for instance, sits at a lower layer than the UI, and he stresses the value of your testing team performing tests at the API level. He suggests you sit down with a piece of paper and think about what makes your system hard to test.

In short, he suggests the tester think about risk, seams & layers, and testability.
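To make the Targeted idea concrete, here is a minimal sketch (not from the talk) of checking a specific risk one layer below the UI, at the API. The endpoint, payload, and expected total are all hypothetical:

```python
# A hypothetical API-level check: the same pricing risk a UI test might
# cover through the browser, verified one layer down.
import requests

BASE_URL = "https://staging.example.com/api"  # assumed test environment

def test_basket_total_includes_discount():
    # Arrange: build a basket straight through the API, no browser needed.
    resp = requests.post(
        f"{BASE_URL}/baskets",
        json={"items": [{"sku": "TSHIRT-01", "qty": 3}], "voucher": "SAVE10"},
        timeout=5,
    )
    resp.raise_for_status()
    basket = resp.json()

    # Assert: target one specific risk, the discount calculation.
    assert basket["total"] == 27.00, (
        f"Expected discounted total 27.00, got {basket['total']}"
    )
```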

The next thing he emphasizes is Reliability.

To maximize the value of your tests, he suggests reducing flakiness in your automated checks.

“The reason why we tend to have automated checks is that we are looking for rapid feedback, we are looking for knowledge, we are looking for change detection,” he told the crowd.

He also adds that we are looking to be told: ‘When we built this, you asked to be told that it needs to look like this. It doesn’t look like this anymore.’ Your team can then look at the change and decide what to do about it.

By value, he means the information you get from the automated checks, which is useful and helps your team make the right decisions.

He insists that you cannot get rid of flakiness. It’s always going to be there, according to him.

“The systems are complex. The tools we use are complex. Our code is complex. There will always be issues, so you can’t eliminate flakiness. But you can reduce it by putting in a deliberate effort,” he says.

With deterministic, he means setting the right expectations when we build automated checks: the same check against the same build should give the same result. According to him, it’s also essential to look at how long your flaky tests take to get sorted out. And with rapid feedback loops, he means that the faster you get the information, the better things will be.
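As an illustration of reducing timing-related flakiness, here is a minimal sketch, assuming a Selenium-based UI check; the page URL and element id are hypothetical. It swaps a fixed sleep for an explicit wait on a deterministic condition:

```python
# Reducing a classic source of flakiness: waiting a fixed amount of time
# versus waiting for a deterministic condition.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://staging.example.com/orders")  # assumed test environment

# Flaky: time.sleep(5) passes or fails depending on how slow the page is today.
# Deterministic: wait until the condition is actually true, up to a bounded timeout.
status = WebDriverWait(driver, timeout=10).until(
    EC.visibility_of_element_located((By.ID, "order-status"))  # hypothetical id
)
assert status.text == "Confirmed"

driver.quit()
```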

Next comes Informative.

Passing and failing checks must provide as much information as possible to aid exploration.

When tests are passing, we still need to briefly examine why they’re passing.

“A failing test is an invitation to explore.” Those were his exact words when he spoke about exploration. It’s important to see why tests fail and learn our lessons, isn’t it?

With decision-making, he insisted on using our little army of robots, a.k.a. automation, and learning from the failed and passed tests to decide what to do next.
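A small sketch of what informative can look like in practice (the voucher logic here is hypothetical): a check whose failure message carries enough context to invite exploration, rather than a bare assertion error.

```python
# An informative check: when it fails, it hands the explorer context,
# not just "assert False".
def apply_voucher(total: float, voucher: str) -> float:
    """Hypothetical system under test: 10% off with SAVE10."""
    return round(total * 0.9, 2) if voucher == "SAVE10" else total

def test_voucher_is_applied():
    original, voucher = 30.00, "SAVE10"
    discounted = apply_voucher(original, voucher)
    expected = 27.00

    # The message states expectation, actual value, and the inputs involved,
    # so a failure points straight at where to start exploring.
    assert discounted == expected, (
        f"Voucher {voucher!r} on {original} should give {expected}, "
        f"got {discounted}"
    )
```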

He then delves into Maintainable. In his words, automated checks are subject to constant change, so we need to keep their maintainability high. When you depend on a lot of code and tools to test, you cannot ignore the importance of maintaining them, and writing better automation code is what keeps that process smooth.
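One common way to keep UI automation code maintainable, sketched below with hypothetical locators, is the page object pattern: page details live in one class, so a UI change means one edit instead of many.

```python
# A page object keeps locators and page interactions in one place,
# so tests survive UI changes with a single edit.
from selenium.webdriver.common.by import By

class LoginPage:
    USERNAME = (By.ID, "username")      # hypothetical locators: when the UI
    PASSWORD = (By.ID, "password")      # changes, only this class changes
    SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user: str, password: str) -> None:
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# Tests then read as intent, not as a pile of selectors:
#   LoginPage(driver).log_in("tester", "secret")
```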

The final part is Speedy: execution and maintenance must be as fast as testability allows in order to achieve rapid feedback loops. Improving and upgrading the system should make things faster, letting you enjoy quicker feedback, which translates to a successful automation testing experience.
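A sketch of one way teams chase speed along these lines (not Richard’s own example): do the slow setup through the API and keep only the behaviour under test in the browser. The endpoints are hypothetical, and the driver is assumed to come from a pytest fixture.

```python
# Speeding up a UI check: create the test data through the API in
# milliseconds instead of clicking through several screens.
import requests

BASE_URL = "https://staging.example.com"  # assumed test environment

def create_user_via_api(name: str) -> str:
    """Fast, API-level setup for a check that only cares about one page."""
    resp = requests.post(f"{BASE_URL}/api/users", json={"name": name}, timeout=5)
    resp.raise_for_status()
    return resp.json()["id"]

def test_profile_page_shows_name(driver):  # driver: assumed pytest fixture
    user_id = create_user_via_api("Ada")          # seconds saved per test
    driver.get(f"{BASE_URL}/users/{user_id}")     # only the risk under test
    assert "Ada" in driver.page_source            # stays in the browser
```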

He explained that buying an automation testing tool alone isn’t enough; the execution phase is just the beginning. It’s equally important to rely on a reporting tool that surfaces the necessary information on failures, and that gets you information quickly when tests pass. The next stage is maintenance, where you can apply different practices for tests across the many layers involved.

“The quicker you get the needed information, the quicker you can build automated checks,” he told the listeners.

He concludes that the main reason we automate tests is to achieve feedback loops.

He then answered the questions put forth by the audience, with Manoj Kumar asking them on the attendees’ behalf.

It was indeed an informative session with Richard! Here is the Q&A:

What will be the future of manual testers in upcoming years?

Richard: If the person is someone writing test cases and manually executing them, the future is dark, because they are missing out on a lot of knowledge. As a team, you need to look at how you are going to perform testing. If that’s done well, with a pure focus on quality work, then the future is bright. If you are depending on traditional testing methods, you need to think it over.

What is the future of Selenium? Will Playwright replace it?

Richard: I don’t think so since they serve different purposes. The objective of Selenium is to be as close to the user as possible. Playwright isn’t the same. If you are trying to test for the user, you must test as the user. Hence I would be happy to depend upon any tool if I have to test as a user.

What’s the best process for where to house removed tests to reference them without having to dig through source control?

Richard: When you have tested A, B, and C but not tested E and F, you might be missing out on a lot. I am not a fan of ignoring tests. It’s equally important to check every test and learn better.

How important is test orchestration when you do automation testing? What are some great tools you are using for test orchestration?

Richard: The first question we need to ask is “Is the system alive?” Potentially, many people would prefer quicker execution over slower execution. Hence test orchestration can come in handy. We depend on Bash scripts as far as tools are concerned.
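Richard mentions Bash scripts; the same “Is the system alive?” gate can be sketched in Python before a longer suite is kicked off. The health endpoint and suite command here are hypothetical:

```python
# A tiny orchestration gate: don't spend 20 minutes running a suite
# against a system that isn't even alive.
import subprocess
import sys

import requests

HEALTH_URL = "https://staging.example.com/health"  # hypothetical endpoint

def system_is_alive() -> bool:
    try:
        return requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if not system_is_alive():
        sys.exit("System is not alive; skipping the suite entirely.")
    # Hand off to the real suite only once the cheap check passes.
    sys.exit(subprocess.run(["pytest", "tests/"]).returncode)
```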

How can we conclude that we have to trim the tests since trimming out in the development stage can result in loss of test coverage and impact the results?

Richard: You need to look at how your tests fail and how to do better. When you see flakiness, have low confidence in your results, expect the test to fail anyway, or keep finding issues in production that the test should have caught, you can consider trimming the test.

How do we quantify the value of the test?

Richard: It will help you make a decision. The result should make you decide what’s best to do next.

How often should automated tests be audited?

Richard: I would advise you to do it regularly and keep it part of your daily work.

What would you consider to be an acceptable build time?

Richard: In general, and in industry-accepted terms, it’s 15–20 minutes. But it’s better to sit down with your team and decide on it.

Do you want to know his answers to the most vital questions put forth by the testing community? You can click the video link shared above to watch the full session.
