Test Driven Development is a commonly recognized term and probably the most acclaimed development practice out there. Nowadays, it is nearly impossible to find a developer, architect or project manager who is not entirely convinced this paradigm is worth implementing. Or is it?

First, let me remind you what the arguments for TDD are:

  • Having to implement tests first, you must deeply understand what your code is supposed to do in the first place. Do not underestimate this opportunity.
  • It will also force you to rethink your architecture and apply at least some of the trustworthy design patterns, e.g. Inversion of Control, before you start coding (see the sketch after this list).
  • You will be able to actually run and test your code without needing to integrate it with the rest of the application.
  • Testing edge cases will give you a picture of what potential security and performance issues should be mitigated.
  • Tests can themselves be the best user documentation you could ever write.
  • You can safely refactor your code anytime – thus you can also afford not to invest too much into its versatility until it’s really needed.
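
To make the Inversion of Control argument concrete, here is a minimal sketch in Python with pytest (the names PaymentService, Gateway and FakeGateway are hypothetical, invented for illustration). Injecting the collaborator through the constructor lets a unit test substitute a fake and exercise edge cases without integrating with the rest of the application:

    import pytest


    class Gateway:
        """Production dependency (talks to a real payment provider)."""
        def charge(self, amount_cents: int) -> bool:
            raise NotImplementedError


    class PaymentService:
        def __init__(self, gateway: Gateway):
            # Inversion of Control: the collaborator is injected,
            # not constructed inside the class.
            self.gateway = gateway

        def pay(self, amount_cents: int) -> bool:
            if amount_cents <= 0:  # edge case surfaced by writing tests first
                raise ValueError("amount must be positive")
            return self.gateway.charge(amount_cents)


    class FakeGateway(Gateway):
        """Test double standing in for the real gateway."""
        def charge(self, amount_cents: int) -> bool:
            return True


    def test_pay_charges_the_gateway():
        assert PaymentService(FakeGateway()).pay(100) is True


    def test_pay_rejects_non_positive_amounts():
        with pytest.raises(ValueError):
            PaymentService(FakeGateway()).pay(0)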

I have seen many projects where teams committed to implementing all kinds of automated tests before the project started, only to drop the idea shortly after development commenced. I can understand some of the reasoning behind this:

  • Implementing tests is time-consuming, even though it will spare even more time in the future. Thus, it is hard to “sell” the idea to some uneducated clients.
  • Under a tight schedule, with deadlines approaching, it is too easy to cut corners and give up on implementing tests (temporarily, of course).
  • In some cases, implementing meaningful automated tests is very difficult, e.g. for thin-client web applications without much business logic.
  • Implementing and then maintaining automated tests generally requires a lot of effort, usually much more than implementing and maintaining the code being tested. It sounds daunting.
  • Therefore, it is very tempting to disable failing tests or to implement “dummy” tests instead, ones that always pass without detecting real problems (see the sketch after this list).
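
To show what a “dummy” test looks like in practice, here is a small hypothetical pytest example (the function discounted_price is made up for illustration). The first test stays green no matter how broken the code is; the second one actually pins down behavior:

    import pytest


    def discounted_price(price: float, percent: float) -> float:
        """Hypothetical function under test."""
        return price * (1 - percent / 100)


    def test_discount_dummy():
        # A "dummy" test: it runs the code but asserts nothing useful,
        # so it stays green even if discounted_price is completely broken.
        discounted_price(100.0, 20.0)
        assert True


    def test_discount_real():
        # A meaningful test: pins the expected results, including an edge case.
        assert discounted_price(100.0, 20.0) == pytest.approx(80.0)
        assert discounted_price(100.0, 0.0) == pytest.approx(100.0)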

So, what can you do to help your teams keep up the good quality and adhere to the best practices? Obviously, just committing to implementing tests is not enough and will, in the worst case, lead to a slow-but-steady decay and, eventually, to dropping them altogether.

I have a couple of ideas on how to make this a sustainable process, to convert a chore into a habit. I hope you will implement them while working on your projects and then share what the outcomes are.

  • Make unit tests, integration tests and UI tests part of your Definition of Done. Consider them an integral part of your development process. Code is not finished – not working – until it is covered with a sufficient number of tests (dummy tests do not count!). Let team members review the tests written by their colleagues.
  • Add the time needed to implement and maintain tests to your estimates. Having them in the DoD will help you justify your estimates.
  • Maintain the tests whenever they require it. Do not ever disable tests just because they started to fail. Fix the code and extend the test cases instead.
  • Configure an automatic build environment and run all short-running tests after each commit. If some tests require more than 15 minutes to run, schedule them to run nightly (see the sketch after this list).
  • Create reports from automated test runs and send them to the management and clients on a regular basis. Advertise your concern for quality; it will help you defend your case.
  • Educate your clients and management at every opportunity. Make them want you to deliver the tests as an inseparable part of the code you produce, and implementing and maintaining the tests a necessary part of the production process.
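
As one possible way to implement the commit-time/nightly split mentioned above, here is a pytest sketch (the "slow" marker name is my own convention, not a standard one). The build server would run pytest -m "not slow" on every commit, and plain pytest, which includes the long-running tests, in the nightly job:

    # conftest.py – register the custom marker so pytest does not warn about it
    def pytest_configure(config):
        config.addinivalue_line(
            "markers", "slow: long-running test, excluded from per-commit runs"
        )


    # test_imports.py
    import pytest


    def test_parser_handles_empty_input():
        # Fast unit test: runs on every commit.
        assert "".split(",") == [""]


    @pytest.mark.slow
    def test_full_import_of_large_dataset():
        # Long-running test (think: more than 15 minutes in a real project);
        # deselected per commit, executed by the nightly build.
        assert sum(range(1000)) == 499500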