Tool-supported test design: A plea
Process-supporting tools make our daily work easier. They help us develop and test software and manage our projects. But there is one blind spot: test design. Good tests seem to fall from the sky; you just have to write them down or program them.
In fact, there is enormous potential for improvement and savings that is left untapped in most organisations. Why should testers be satisfied with a text editor while their colleagues in development use various tools to support their work? Let’s take a closer look at this question.
Where do we come from? – The industrialisation of software development
The calls for an “IT factory” [1] or “industrial software development” have been around for a long time. Surely it must be possible to industrialise software development in the same way as, say, car manufacturing! In its 2010 guidelines, Bitkom defines four pillars of IT industrialisation [2]:
- automation,
- reuse,
- specialisation and
- continuous improvement.
We have made enormous progress over the last 20 years, especially in the area of automation. Developers have numerous tools at their disposal – not least thanks to the trend towards agility. Just think of Continuous Integration or Continuous Deployment (CI/CD).
But a lot has also happened in other areas. Architecture patterns support the modularity of software systems and thus their reuse. Experts in usability, IT security and, more recently, AI systems are in such demand that they are hard to find. Continuous improvement, which often falls victim to day-to-day business, has at least been recognised by those involved.
So is it all sunshine and roses?
Where do we stand? – A closer look at software testing
Unfortunately – or perhaps fortunately – software development is not just about programming. An important part is quality assurance. Viewed holistically, quality assurance begins with clearly communicating what is actually to be developed. Many companies have therefore introduced tools such as Jira/Confluence or another ALM (Application Lifecycle Management) tool, which in theory contains everything important and is accessible to everyone involved in the project. I can also record test cases in the ALM tool and, ideally, execute them automatically.
Test automation is, of course, tool-supported as well and relies on reusable libraries. So-called low-code or no-code tools allow subject-matter experts to concentrate on the essentials, namely the content.
Strangely enough, software testing has a blind spot at precisely this point: test design. Little noticed from the outside, subject-matter experts generally do their best to create “good” tests without any tool support. “Good” usually means “comprehensive, technically correct and not too expensive”.
Test design always takes place, regardless of the test process used. Here are a few examples:
Manual tests against requirements or user stories
For each requirement or user story, the tests should cover all acceptance criteria. Careful test design ensures that everything that was requested has also been tested. In addition, testers come up with error scenarios and special cases to push the system to its limits. Depending on the level of documentation required, these tests are either written down in advance (as a test specification) or only recorded as keywords in a test case description. What matters is that coverage of the specification is complete and traceable.
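To make “complete and traceable” tangible, here is a minimal sketch in Python of how a test design can state explicitly which acceptance criterion each test covers, so that completeness becomes checkable. The user story, criteria IDs and values are invented for illustration:

```python
# Sketch: acceptance-criteria coverage made explicit and checkable.
# The story and its criteria are hypothetical.
ACCEPTANCE_CRITERIA = {
    "AC-1": "valid credentials log the user in",
    "AC-2": "a wrong password is rejected",
    "AC-3": "the account locks after three failed attempts",
}

# Each designed test records the criterion it verifies:
# (criterion ID, input, expected outcome).
TEST_DESIGN = [
    ("AC-1", ("alice", "correct-pw"), "logged_in"),
    ("AC-2", ("alice", "wrong-pw"), "rejected"),
    ("AC-3", ("alice", "wrong-pw, three times"), "locked"),
]

def test_every_criterion_is_covered():
    """Completeness check: no acceptance criterion without a test."""
    covered = {criterion for criterion, _, _ in TEST_DESIGN}
    assert covered == set(ACCEPTANCE_CRITERIA)
```

Even a simple convention like this turns “we think we covered everything” into an assertion a tool can verify.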
Automated tests
Automated tests also require test design; in this respect they are no different from manual tests. After all, a test script is only as good as the tests automated in it. However, the documentation of the test design is often rudimentary at best, typically in the form of comments in the code – and these comments tend to focus on the design of the test script itself. Yet test design and code design are different things! One is about the test idea, the other about the way the script is programmed.
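The distinction can be made visible even without dedicated tooling. In the following pytest-style sketch (the Account class is a stub invented for illustration), the docstring records the test design – the idea, the equivalence class, the boundary value – while the function body is the code design:

```python
# Hypothetical system under test, stubbed so the sketch runs on its own.
class Account:
    DAILY_LIMIT = 1000.00

    def __init__(self, balance):
        self.balance = balance

    def transfer(self, amount):
        if amount > self.DAILY_LIMIT:
            return "rejected"
        self.balance -= amount
        return "executed"

def test_transfer_rejects_amount_just_above_daily_limit():
    """Test design: smallest invalid boundary value.

    Equivalence class: invalid amounts (above the 1000.00 daily limit).
    Boundary value:    1000.01, the smallest invalid amount.
    Expected result:   transfer rejected, balance unchanged.
    """
    account = Account(balance=5000.00)
    assert account.transfer(1000.01) == "rejected"
    assert account.balance == 5000.00
```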
Exploratory tests
Even exploratory tests are based on a kind of test design, because exploratory testers also think about equivalence classes, boundary values and error scenarios. This approach has become second nature to experienced testers, which makes them gifted “critics” of the product. Yet this expertise is rarely written down.
It is above all this lack of documentation that causes the intellectual achievement behind test design to go practically unrecognised. This even applies to test processes that still work classically according to the V-Model, with test specifications, because those specifications usually contain the details of test execution, in which the underlying considerations are lost.
The quality of the tests stands and falls with the test design. Why is this not reflected in the level of tool support? Test design is usually done manually, without any tools. We cannot speak of “industrialisation” here. This is at best a craft, if not an art!
Where do we want to go? – Tool support in test design
It’s time for managers to become aware of test design. After all, we are not only talking about a crucial activity in the test process, but also about an enormous potential for improvement.
Just how much productivity could be gained in test design becomes clear when we compare it to programming, i.e. the implementation of source code. Nobody would expect developers to write source code in Notepad++. How could they? Anyone forced to would have quit within a few months. Developers have their integrated development environment (IDE) with syntax highlighting, auto-completion, tools for static and dynamic code analysis and much more.
Testers seem to have a higher tolerance for pain. We are already happy if we have an ALM tool at all, because it allows us to link test cases with requirements or user stories, which makes it easier to prove completeness. Otherwise, we have fields for test steps and for expected and actually observed results. With a bit of luck, a spell checker helps us avoid the worst nonsense. All other fields, such as priority, serve test management. Real tool support looks different!
However, these tools do exist, as the following list – without any claim to completeness – shows:
Visualisation of complex relationships
All tools that can be used to visualise complex relationships contribute to the clarification of requirements and therefore also to test design. This includes simple flowchart editors (e.g. Miro, Draw.io, Yest for Jira…) as well as more complex modelling tools (e.g. Sparx Systems Enterprise Architect or Camunda).
Determination of rules / scenarios to be tested
Agile Coaches have various methods up their sleeves that help to get the conversation going about new functionality and define the test requirements in a simple and efficient way. Prominent examples are example mapping and storyboards. For both methods, I need little more than a pen and paper.
Systematic testing of equivalence classes and boundary values
Systematic coverage can be achieved in many different ways. Classification trees, decision tables, explicit modelling of values – anything that makes the reasoning behind the test design visible is helpful. Which method is supported depends on the tool selected. Examples are Tessy with its integrated classification tree editor, the decision tables in Yest and all tools for model-based testing (MBT).
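As a small illustration of how a decision table keeps the test design visible, here is a sketch using pytest’s parametrisation; the business rule and its values are invented:

```python
import pytest

# A decision table written out as data, so the design stays visible.
# Columns: age, has_licence -> expected decision (hypothetical rule).
DECISION_TABLE = [
    (17, True,  "reject"),  # under age             (boundary value: 17)
    (18, False, "reject"),  # of age, but no licence
    (18, True,  "accept"),  # of age and licensed   (boundary value: 18)
    (99, True,  "accept"),  # upper equivalence class
]

def may_rent_car(age, has_licence):
    """Stub of the business rule under test."""
    return "accept" if age >= 18 and has_licence else "reject"

@pytest.mark.parametrize("age,has_licence,expected", DECISION_TABLE)
def test_rental_decision(age, has_licence, expected):
    assert may_rent_car(age, has_licence) == expected
```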
Test case creation and revision
Pairwise testing tools support the generation of combinatorial tests [3].
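To give an idea of what such tools do, here is a deliberately naive greedy sketch of pairwise generation (real tools use far more sophisticated algorithms; the parameters are invented):

```python
from itertools import combinations, product

def pairwise(parameters):
    """Greedy sketch: repeatedly pick the candidate test that covers
    the most value pairs not covered yet (exponential, demo only)."""
    uncovered = {((i, a), (j, b))
                 for (i, pi), (j, pj) in combinations(enumerate(parameters), 2)
                 for a, b in product(pi, pj)}
    tests = []
    while uncovered:
        best = max(product(*parameters),
                   key=lambda t: sum(((i, t[i]), (j, t[j])) in uncovered
                                     for i, j in combinations(range(len(t)), 2)))
        tests.append(best)
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(best)), 2)}
    return tests

# 3 browsers x 2 operating systems x 2 locales = 12 full combinations;
# pairwise coverage gets by with roughly half of them.
print(pairwise([["Chrome", "Firefox", "Safari"],
                ["Windows", "macOS"],
                ["de", "en"]]))
```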
Model-based test case generators create test cases from models. The numerous MBT tools available on the market differ not only in the generation method, but also in the degree of support they offer for creating and revising the models and the generated test cases. Yest, for example, offers a comprehensive development environment for testers [4], while TestCompass restricts itself to the bare minimum [5].
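The underlying principle can be sketched in a few lines: describe the expected behaviour as a model and derive test sequences from it. The following toy example (states and actions invented; real MBT tools work on far richer models and coverage criteria) derives one test per transition:

```python
# A workflow modelled as a state machine: (state, action) -> next state.
MODEL = {
    ("logged_out", "login"):  "logged_in",
    ("logged_in",  "search"): "results",
    ("results",    "open"):   "details",
    ("details",    "back"):   "results",
    ("logged_in",  "logout"): "logged_out",
}

def generate_tests(model, start="logged_out"):
    """Derive one test (an action sequence from the start state) per
    transition, so that every transition is exercised at least once.
    The route finding is naive and assumes it terminates for the model."""
    tests = []
    for (state, action), _target in model.items():
        path, current = [], start
        while current != state:
            src, act = next((s, a) for (s, a) in model if s == current)
            path.append(act)
            current = model[(src, act)]
        tests.append(path + [action])
    return tests

for test in generate_tests(MODEL):
    print(" -> ".join(test))
```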
What do we need? – More appreciation for test design
Perhaps the picture painted here is too pessimistic. The ground-breaking development of generative AI will certainly contribute to better support for test design in the future. However, AI requires source data that is of good quality – and this is often where the problem lies.
In fact, during test design testers catch up on what was neglected during requirements elicitation or simply postponed to backlog refinement sessions. It is not expedient to separate these two activities. Test-first approaches such as Behaviour-Driven Development (BDD) or Visual ATDD (an agile form of model-based testing) have understood this: placing test design at the beginning of the process kills two birds with one stone. We should not see test design as an annoying cost factor, but as a value-adding form of requirements clarification that needs and deserves suitable tool support. What we really need is more appreciation for test design!
Notes (mostly in German):
Please get in touch with Anne Kramer if you would like to talk to her about test design or take a look at the visual test design tool Yest. Simply write her an email or connect with her on LinkedIn.
[1] Der Weg zur modernen IT-Fabrik
[2] Bitkom: Industrielle Softwareentwicklung. Leitfaden und Orientierungshilfe
[3] Software Pairwise Testing Tools
[4] Test generation with Yest
[5] TestCompass
Anne Kramer
Anne Kramer has been involved in testing ever since she left academia in 1996. For many years, she was a project manager, process consultant and trainer in quality assurance at a medium-sized service provider. In 2006, she discovered her enthusiasm for model-based testing (MBT). She was actively involved in the ISTQB curriculum and co-authored the English-language textbook “Model-Based Testing Essentials” with her current colleague, Bruno Legeard.
Since 2022, Anne has been Global Customer Success Manager at Smartesting, a French manufacturer of software testing tools. As such, she looks after the German-speaking customers of the visual test design tool Yest, a model-based development environment for tests.