#44 From My Diary: What Years of Testing Have Taught Me
Theory is one thing; reality is another. Seven testing patterns that I have seen again and again. No theory—just hard-earned insights.
We have all heard about how crucial testing is. Shift left, shift right, traditional testing. Pyramids, inverted pyramids, and diamonds. End-to-end, integration, and unit tests. Acceptance, regression, smoke tests, and who knows what else.
How do you avoid drowning in this sea of information? How should you approach a testing strategy for your application?
There is no single answer – it all depends on a multitude of factors. What I want to share with you is something of a personal diary – seven patterns I have noted over the years while building various applications. I hope it will help you decide which way to go.
The following fragment comes from the testing strategy step in my Master Software Architecture book:
Theory is one thing; reality is another. You can blindly follow concepts, but what really matters is what works well in your specific context. Over years of working with various systems, I have gathered key observations that I would like to share.
Every time we prioritized integration tests over unit tests, we observed increased endpoint reliability and stability. However, this approach inadvertently led to a decline in the structure and quality of the code. The focus on overall functionality came at the expense of well-designed code at the level of individual chunks.
When we focused too much on unit tests and too little on integration tests, we were able to keep our code well structured and easy to modify. The downside was that our endpoints often failed to perform as expected due to insufficient integration tests. Even though the unit tests passed, we had many issues when integrating multiple chunks.
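To make this trade-off concrete, here is a minimal pytest sketch (with a made-up discount rule and a hypothetical FastAPI endpoint; it assumes fastapi and httpx are available – the framework and the names are only illustrative). The unit test protects the chunk's logic in isolation, while the integration test is the one that tells you the endpoint actually behaves:

```python
from fastapi import FastAPI                      # assumption: fastapi + httpx installed
from fastapi.testclient import TestClient

# --- the "chunk": pure business logic, easy to unit test ---
def calculate_discount(total: float, is_returning_customer: bool) -> float:
    """Hypothetical rule: returning customers get 10% off orders over 100."""
    if is_returning_customer and total > 100:
        return round(total * 0.9, 2)
    return total

# --- the endpoint wiring the chunk into HTTP ---
app = FastAPI()

@app.get("/price")
def price(total: float, returning: bool = False):
    return {"total": calculate_discount(total, returning)}

# Unit test: fast, checks the chunk in isolation, says nothing about the HTTP wiring.
def test_discount_applies_for_returning_customers():
    assert calculate_discount(200, is_returning_customer=True) == 180.0

# Integration test: exercises routing, parameter parsing and the chunk together.
def test_price_endpoint_applies_discount():
    client = TestClient(app)
    response = client.get("/price", params={"total": 200, "returning": True})
    assert response.status_code == 200
    assert response.json() == {"total": 180.0}
```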
Focusing mainly on end-to-end tests in greenfield applications almost always led to huge technical debt and, eventually, an inverted pyramid of tests. The plan was to start with end-to-end tests and gradually add other types of tests. However, this approach often failed: with new features constantly being added, there was never enough time for the other tests.
End-to-end tests have always been the most helpful for us when working with legacy projects. When no tests were in place, the easiest approach before refactoring was to cover key business processes with E2E tests. And when such tests did exist, they were the only way to understand how the process worked, especially in the absence of domain experts.
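In practice this often meant a handful of characterization-style tests that drive a key business process against a running environment and assert only on externally visible outcomes. A rough sketch, with entirely hypothetical endpoints and a LEGACY_BASE_URL environment variable standing in for your test environment:

```python
# E2E sketch over plain HTTP (requests), written before touching the legacy code.
# All endpoints and field names below are made up - adapt them to your system.
import os
import requests

BASE_URL = os.environ.get("LEGACY_BASE_URL", "http://localhost:8080")

def test_placing_an_order_ends_in_confirmation():
    # Step 1: create a cart and add an item (hypothetical endpoints).
    cart = requests.post(f"{BASE_URL}/carts", timeout=10).json()
    requests.post(
        f"{BASE_URL}/carts/{cart['id']}/items",
        json={"sku": "ABC-123", "quantity": 1},
        timeout=10,
    ).raise_for_status()

    # Step 2: check out.
    order = requests.post(f"{BASE_URL}/carts/{cart['id']}/checkout", timeout=10).json()

    # Step 3: assert only on the externally visible outcome, not on internals,
    # so the test keeps passing while the insides are being refactored.
    status = requests.get(f"{BASE_URL}/orders/{order['id']}", timeout=10).json()
    assert status["state"] == "CONFIRMED"
```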
Flaky end-to-end tests often caused frustration and neglect. When there were hundreds or thousands (ouch!) of such tests, running them took several hours. Some tests would fail randomly, so we had to rerun that part of the suite. Then they would pass, but we couldn’t be sure whether other tests had affected the results. So we ran them again, and then other tests failed. This led to a repetitive cycle of testing and retesting.
The amount of stuff we had to mock was a good indicator of how well our test was designed. Each time we needed many mocks, the test was unstable and useless. It was a trigger for us to review and redesign the code we were testing.
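A rough illustration of that smell, with hypothetical names: when a test needs four mocks just to check one number, the calculation usually wants to be pulled out into a pure function that needs no mocks at all (and the remaining wiring gets covered by a few integration tests):

```python
from unittest.mock import Mock

# Hypothetical production code, shown only so the tests below actually run.
def invoice_total(items, tax_rate):
    subtotal = sum(item["price"] for item in items)
    return round(subtotal * (1 + tax_rate), 2)

class InvoiceService:
    def __init__(self, repo, tax_service, logger, clock):
        self.repo, self.tax_service = repo, tax_service
        self.logger, self.clock = logger, clock

    def total(self, order_id):
        items = self.repo.get_items(order_id)
        rate = self.tax_service.rate_for(order_id)
        self.logger.info("calculated at %s", self.clock.now())
        return invoice_total(items, rate)

# Before: four mocks just to check one number - a signal to review the design.
def test_invoice_total_needs_four_mocks():
    repo, tax_service, logger, clock = Mock(), Mock(), Mock(), Mock()
    repo.get_items.return_value = [{"price": 100}, {"price": 50}]
    tax_service.rate_for.return_value = 0.2
    clock.now.return_value = "2024-01-01"

    service = InvoiceService(repo, tax_service, logger, clock)
    assert service.total(order_id=1) == 180.0

# After pushing the calculation into a pure function: no mocks needed at all.
def test_invoice_total_as_pure_function():
    assert invoice_total([{"price": 100}, {"price": 50}], tax_rate=0.2) == 180.0
```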
Shifting our focus to behaviors rather than all possible combinations resulted in the creation of more effective tests. By concentrating on the expected outcomes and interactions, we were able to develop tests that were well-structured, reliable, and easier to maintain. This behavioral approach allowed us to better align our testing with customer requirements.
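For example, instead of enumerating every combination of inputs, we would name the behaviors the customer cares about and test one representative example of each. A small sketch with a made-up shipping-fee rule:

```python
import pytest

def shipping_fee(order_total: float, is_premium: bool) -> float:
    """Hypothetical rule: free shipping for premium customers and for orders of 50 or more."""
    if is_premium or order_total >= 50:
        return 0.0
    return 5.99

# Each case is a named behavior, not just another point in the input space.
@pytest.mark.parametrize(
    "behavior, total, premium, expected",
    [
        ("premium customers never pay shipping", 10, True, 0.0),
        ("large orders ship for free", 50, False, 0.0),
        ("small orders from regular customers pay the flat fee", 10, False, 5.99),
    ],
)
def test_shipping_behaviors(behavior, total, premium, expected):
    assert shipping_fee(total, premium) == expected
```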
As you can see, in some cases it is a game of trade-offs (similar to other topics related to software architecture). The key is finding the right balance for your specific context. Too much focus on integration tests might improve endpoint reliability but could impact code quality. Unit tests can help maintain clean, modifiable code but might miss critical integration issues. End-to-end tests could be invaluable for legacy systems yet potentially troublesome in other cases as they are slow and flaky.
Rather than dogmatically following a single testing approach or methodology, the most effective strategy involves carefully weighing these trade-offs against your project's unique needs, team dynamics, and business requirements. Remember that testing strategies should evolve as your application grows and changes – what works perfectly in the early stages of development might need adjustment as the system matures.
What are your observations when it comes to testing?