Editorial

Why test automation is easy, but testing is tough

Ajit Sawant, lead test engineer at DWP Digital, shares insight on the importance of test automation against the backdrop of service modernisation.

Posted 24 October 2022 by Matt Stanley


Test automation: why?

As a Test Lead for DWP Digital, I want to share some insights into the tools and techniques we are using within the department on our products and services, along with some thoughts on how test automation fits into our plans for service modernisation.

DWP Digital is evolving from building monolithic products to microservice-based ones. By this, I mean moving from a monolithic architecture – the traditional model, in which a software program is built as a unified, self-contained unit, independent from other applications – to microservices, which allow a large application to be separated into smaller independent parts, each with its own realm of responsibility.

This transformation needs to be supported by a robust test strategy. To match the speed and scale at which the wheels of transformation are in motion, test automation is the only answer. However, test automation alone will not deliver a quality product out of the delivery ‘sausage machine’. DWP Digital’s delivery squads are also on the journey of automating everything – from infrastructure to build, test and deploy – and then repeating this cycle. Exciting times!

It’s essential to have the right tools – and to use those tools in the right way

The DWP Digital Engineering practice has invested in GitLab Ultimate, which comes with all the bells and whistles required for Continuous Integration (CI) and Continuous Delivery (CD). GitLab Ultimate provides tools and capabilities which organisations have traditionally applied only at the end of the delivery lifecycle – for example, an IT health check for security, accessibility and performance testing.

Alongside this, test engineers in our department are also using accessibility tools like Pa11y, which allows user interfaces to be verified for accessibility compliance on every code merge, as and when code changes. GitLab also provides k6, an open-source load testing tool for engineering teams, as an inbuilt performance test tool.
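
To give a flavour of how lightweight this kind of performance check can be, here is a minimal k6 sketch. k6 scripts are plain JavaScript/TypeScript modules run by the k6 engine (not Node.js); the endpoint, user count and latency threshold below are illustrative assumptions, not values from any DWP service.

```typescript
import http from 'k6/http';
import { check, sleep } from 'k6';

// Illustrative load profile: 10 virtual users for 30 seconds, failing the
// run if the 95th-percentile response time reaches 500ms.
export const options = {
  vus: 10,
  duration: '30s',
  thresholds: {
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  // Hypothetical health endpoint; swap in the service under test.
  const res = http.get('https://service.example.test/health');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // pacing between iterations per virtual user
}
```

Because a script like this lives in the repository, it can run in the CI pipeline on every merge, in the same spirit as the Pa11y accessibility checks.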

In addition to the GitLab features, the DWP test engineering team also use numerous tools to automate their testing. The list is too long to include here, but I will say this: finding the right tool for the right job is so important.

New tools are exciting, but don’t forget the good old ways

Ajit Sawant and Ketna Tailor – DWP Digital

We are driven by a marketing world. It surrounds us in every aspect of life, and that includes engineering. Every new thing that comes onto the market arrives with the suggestion that this is the tool to solve all problems – but it never is!

New tools are important for the evolution of tooling. These new tools have better features than their older counterparts, and in general, if they are effectively used, they can and do add value.

However, while embracing new tools and new ways of working, we also need to remember some of the old tried and tested principles.

One such principle I would like to mention here is the testing pyramid. Irrespective of whether you’re working with monolithic or microservice applications, a waterfall or agile delivery methodology, manual or automated testing, the principles of the testing pyramid are like a north star.

The key principles of the test pyramid are:

  • More granular tests in lower environments and in the continuous integration pipeline
  • Shift left (in a test pyramid context – test more in lower environments)
  • Fast feedback in lower environments
  • As we move up to higher environments, the focus of testing needs to shift – first to interface behaviour, covered by integration tests, and then to business-driven scenarios, covered by end-to-end tests.
  • Fewer tests in higher environments, as testing there is expensive due to dependencies on other products. Test all the boundary cases, edge cases and failure scenarios in lower environments, so there is no need to repeat these tests at a higher level.

There are also other test techniques which are equally important, such as boundary value analysis and test coverage.
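
As a sketch of boundary value analysis – and of the kind of fast, granular test that belongs at the base of the pyramid – consider a hypothetical eligibility check. The function and its 16–66 age window are invented for illustration; the point is to test either side of each boundary, using Jest-style assertions.

```typescript
// Hypothetical rule: ages 16 to 66 inclusive are eligible.
function isEligibleAge(age: number): boolean {
  return age >= 16 && age <= 66;
}

describe('isEligibleAge boundaries', () => {
  // Exercise each boundary and its immediate neighbours,
  // not just comfortable mid-range values.
  it.each([
    [15, false], // just below the lower boundary
    [16, true],  // lower boundary
    [66, true],  // upper boundary
    [67, false], // just above the upper boundary
  ])('age %i -> %s', (age, expected) => {
    expect(isEligibleAge(age)).toBe(expected);
  });
});
```

Tests like these run in milliseconds, which is exactly why they belong in lower environments and the CI pipeline, where fast feedback matters most.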

Remember, test automation doesn’t make the quality of your software good. It is the testing which is carried out – the quality of the test cases and test scenarios – which drives the quality of the software. Test automation just makes it run fast, repeatably and without human intervention.

Remove the blinkers 

In microservice architecture-based products, it’s very easy to focus only on your own product’s behaviour. And, to a large extent, that is the right approach. However, as testers, we need to think from the business or end-user perspective. Ask questions: what does the business want? What is this data telling us? Is your product sending the right output based on the data? Is this really what the business wants?

It’s so important to speak to your business analyst, and if possible, the business analyst for the end product, to clarify the product behaviour based on the data.  

Remember, humans can apply common sense, but products cannot. So, this needs to be built into product behaviour.

Testing for microservices – what’s the key difference from a test perspective?

The objective of the contract testing phase is to test at the boundary of an external service, verifying that it meets the contract expected by a consuming service. In simple words, contract testing is testing against the ‘live contract’ published to a broker accessible by both the consumer and the provider. By doing so, any breaking change made by either party will immediately be caught in their continuous integration pipeline.
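
One common way to implement this is Pact. The sketch below shows a hypothetical consumer-side contract test: the service names, provider state, endpoint and payload are all invented for illustration, and in a real set-up the generated contract would be published to a Pact Broker for the provider to verify in its own pipeline.

```typescript
import { PactV3, MatchersV3 } from '@pact-foundation/pact';

const { like } = MatchersV3;

// Hypothetical consumer and provider names.
const provider = new PactV3({ consumer: 'claims-ui', provider: 'claims-api' });

describe('claims-api contract', () => {
  it('returns a claim by id', () => {
    provider
      .given('a claim with id 42 exists') // provider state the test relies on
      .uponReceiving('a request for claim 42')
      .withRequest({ method: 'GET', path: '/claims/42' })
      .willRespondWith({
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: like({ id: 42, status: 'OPEN' }), // shape matters, not exact values
      });

    // Pact spins up a mock provider; the consumer is exercised against it
    // and the interaction is recorded into the contract file.
    return provider.executeTest(async (mockServer) => {
      const res = await fetch(`${mockServer.url}/claims/42`); // Node 18+ global fetch
      expect(res.status).toBe(200);
    });
  });
});
```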

The topic of end-to-end testing has polarised the test community. Before we go into end-to-end testing, let me clarify a few testing terms which are used by different testing resources to mean different things. 

  • Component test – ensures that each function integrates within the microservice boundary (e.g. resources, service layer/domain, gateway, data mapper) to provide working software that satisfies the agreed acceptance criteria. Some teams call this integration testing, as it integrates components within the microservice (see the sketch after this list).
  • Integration test – in the microservice world, this is testing the interactions of the product under test with all its interfaces, for example interaction with other microservices or the backend. This is driven by the product under test. Some teams call this an end-to-end test. Maybe it’s an end-to-end test with the blinkers on!
  • End-to-end test – in the microservice world, this is testing the end-to-end business scenario to achieve business objectives. This may span multiple microservices or backends and is driven by the end customer or the business.
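
Here is a sketch of the first of those, a component test, under some stated assumptions: a hypothetical Express app exported from ./app, exercised through its HTTP boundary with supertest, while nock stubs the downstream service so the test never leaves the microservice.

```typescript
import request from 'supertest';
import nock from 'nock';
import { app } from './app'; // assumed: the service under test exports an Express app

describe('GET /claims/:id (component test)', () => {
  it('maps a downstream record to the agreed response shape', async () => {
    // Stub the downstream microservice so the test stays inside this
    // service's boundary (resources -> service layer -> gateway -> mapper).
    nock('https://claims-store.internal.test')
      .get('/records/42')
      .reply(200, { record_id: 42, state: 'open' });

    const res = await request(app).get('/claims/42');

    expect(res.status).toBe(200);
    expect(res.body).toEqual({ id: 42, status: 'OPEN' }); // the mapper's output
  });
});
```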

Rather than getting into a debate about what a specific testing phase is called, it is important to talk to interfacing teams, understand the objective of what they are trying to achieve, and work together.

End-to-end tests are expensive: data set-up can be complex, they can run slowly, and they can fail for unexpected and unforeseeable reasons.

To make end-to-end testing valuable, the following principles need to be followed:

  • Write as few end-to-end tests as possible – limit to happy path / critical business scenarios
  • Focus on personas and user journeys
  • Choose your ends wisely
  • Rely on infrastructure-as-code for repeatability
  • Make tests data-independent (see the sketch after this list)
  • The business needs to own end-to-end test scenarios
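
As a sketch of the data-independence principle, here is what it can look like with a browser-automation tool such as Playwright. Everything here – the environment URL, the claims endpoint, the payload and the on-screen text – is a hypothetical stand-in; the point is that the test creates the data it depends on rather than assuming pre-seeded records that another test or person might change.

```typescript
import { test, expect } from '@playwright/test';

test('citizen can view a newly submitted claim', async ({ request, page }) => {
  // Arrange: create the claim this test needs through the service's own API,
  // instead of relying on shared, pre-existing data.
  const created = await request.post('https://env.example.test/api/claims', {
    data: { type: 'NEW_CLAIM', amount: 100 },
  });
  expect(created.ok()).toBeTruthy();
  const { id } = await created.json();

  // Act + assert: walk the user journey against the data we just created.
  await page.goto(`https://env.example.test/claims/${id}`);
  await expect(page.getByText('Claim received')).toBeVisible();
});
```

A test written this way can run against any environment, in any order, which is what keeps a small end-to-end suite stable enough to trust.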

Value add – there is much more to it than test reports

Testing is much more than just assessing the acceptance criteria and focusing on a binary pass or fail. Checking for functional correctness is important, but it shouldn’t be limited to that.

Also, the testing process should not make testers alone responsible for reporting on the status of testing.

The whole delivery squad needs to actively listen to the testing heartbeat. What I mean by this is paying attention to test outcomes. Pain points can drive operational intelligence and feed into the operational team that supports your application. User researchers can learn from application journeys. Business analysts and product owners can test and learn the end-to-end process, and identify waste in the business process to bring efficiency. Project managers can learn about business risk, rather than only focusing on project risk.

Overall, we should focus on quality in our testing, rather than just correctness. Squash the perception that ‘passed test = quality’.

DWP Digital has a complex landscape, with multiple business lines, lots of third-party interfaces and a need for consistent data flow between these systems. The transformation journey to microservices and event-driven architecture will help to achieve improved self-service and increased trust, to provide a holistic customer and colleague experience. 

To achieve these objectives, I believe that we should try to automate as much as possible – infrastructure, build, test and deploy – and repeat this cycle. Use the right tools, and use their reporting features effectively to avoid creating technical debt. Do the right testing in the right environment. Think about your business objectives while developing your product. And make testing your project team’s heartbeat.

Ajit Sawant is a lead test engineer at DWP Digital.