
Lessons from a month in hell: unit tests need love too!

September 29, 2010

My last month (a sprint and a half) has been spent “merging” and “refactoring” the codebase for a supposedly “new” system that is being developed to (eventually) replace a legacy platform. One of the things I’ve come to appreciate is that developers are perfectly capable of writing brand-new code with a lot of the characteristics of legacy systems. In his book, “Working Effectively with Legacy Code”, Michael Feathers defines legacy code as “code without tests”. I’d add that code with bad tests can be almost as bad as code without tests. In fact, because unit tests are code, poorly written tests – tests that are hard to read, hard to maintain, etc. – actually contribute to the technical debt.

If you’re not sure what bad tests look like (and conversely, what good tests look like), I highly recommend you check out Roy Osherove’s “The Art of Unit Testing”. I also recommend checking out Osherove’s video test reviews – educational and entertaining at the same time.

This post isn’t so much about what constitutes a good or bad test – it’s more to highlight the fact that unit tests need to be treated with the same care as production code. If you believe that your production code should be readable – you should expect the same of your unit test code. If you believe that your production code shouldn’t be thousands of lines long – your unit tests shouldn’t be either. Don’t like your production code to have too many responsibilities? Expect the same of your unit test code. You refactor your production code as your requirements change and as your design emerges – do the same for your unit tests.
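To make the “same care as production code” point concrete: a test with one responsibility and a name that reads as documentation is worth more than a long one asserting everything at once. The post’s context is .NET/NUnit, but here’s a small sketch using Python’s stdlib `unittest` – the `ShoppingCart` class and test names are hypothetical, purely for illustration.

```python
import unittest

# Hypothetical class under test -- not from the post, just an illustration.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# One behaviour per test, with names that document that behaviour --
# the same readability bar we'd hold production code to.
class ShoppingCartTests(unittest.TestCase):
    def test_new_cart_has_zero_total(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("tea", 3)
        cart.add("scone", 2)
        self.assertEqual(cart.total(), 5)
```

When a test like this fails, its name alone tells you which requirement broke – which is exactly the communication value a sprawling multi-assert test throws away.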

There is perhaps one exception – or at least something I might consider doing slightly differently. We all know about the “DRY” (don’t repeat yourself) principle. For unit tests, I am inclined to err on the side of readability over full-on adherence to the DRY principle – sometimes I prefer to be more explicit in the “arrange” part of my unit tests (“arrange”, “act”, “assert”). In particular, I tend to do this when there are fakes (mocks or stubs) involved – it’s important that somebody reading the test knows that there are fake objects involved in the test.
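Here’s what I mean by being explicit in the “arrange” step when fakes are involved. Again this is a Python `unittest` sketch with invented names (`StubRateProvider`, `PriceConverter`) standing in for whatever your real collaborators are – the point is that the stub is constructed in plain sight inside the test, not hidden behind a shared setup helper.

```python
import unittest

# Hypothetical collaborators, for illustration only.
class StubRateProvider:
    """A hand-rolled stub: always returns the rate it was configured with."""
    def __init__(self, rate):
        self.rate = rate

    def current_rate(self, currency):
        return self.rate

class PriceConverter:
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def to_local(self, amount, currency):
        return amount * self.rate_provider.current_rate(currency)

class PriceConverterTests(unittest.TestCase):
    def test_converts_using_current_rate(self):
        # Arrange: the stub is spelled out here, slightly repetitively,
        # so a reader can see at a glance that a fake is in play.
        stub_rates = StubRateProvider(rate=2.0)
        converter = PriceConverter(stub_rates)

        # Act
        local = converter.to_local(10, "USD")

        # Assert
        self.assertEqual(local, 20.0)
```

Extracting that arrange block into a factory method would be DRYer, but the next reader would have to chase the helper to learn that the rates are faked – a trade I’m usually not willing to make.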

Some code hygiene issues that are specific to unit test code (and might not necessarily apply to production code)…

  1. Separate integration tests from unit tests – this is something you don’t generally have to worry about with normal code, but when you have a test suite that you want people to actually run – especially when you have any reasonable number of tests – you want to allow people to run their test suites quickly (for example, on Friday half an hour before beer o’clock). Separating integration tests from true unit tests lets people identify which tests are likely to be quick and painless to run and which could potentially take a while. This is also useful if you find your CI build getting painfully slow.
  2. Keep the number of ignored tests to a minimum – ignored tests are like commented-out code. At minimum, add a comment (e.g. NUnit’s Ignore attribute allows you to add a reason) so that people know why the test is ignored. Generally, I don’t like to have too many ignored tests kicking around – far from being the great communication tool that well-written unit tests can be, ignored tests just add noise.
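Both points above have direct mechanical support in most test frameworks. In NUnit you’d reach for Category and Ignore attributes; the sketch below shows the same ideas in Python’s stdlib `unittest`, with hypothetical classes and an assumed `RUN_INTEGRATION` environment variable gating the slow tests so the default run stays fast.

```python
import os
import unittest

# A fast, isolated unit test -- always runs.
def parse_csv_line(line):
    return [field.strip() for field in line.split(",")]

class ParserUnitTests(unittest.TestCase):
    def test_splits_on_commas(self):
        self.assertEqual(parse_csv_line("a, b"), ["a", "b"])

# Slow integration tests live in their own class and are gated by an
# (assumed) environment variable, so people can opt in when they have time.
RUN_INTEGRATION = os.environ.get("RUN_INTEGRATION") == "1"

@unittest.skipUnless(RUN_INTEGRATION, "integration test: set RUN_INTEGRATION=1 to run")
class DatabaseIntegrationTests(unittest.TestCase):
    def test_round_trips_a_record(self):
        pass  # would talk to a real database here

# Like NUnit's Ignore("reason"), always say *why* a test is skipped,
# so it reads as a note rather than noise.
class LegacyReportTests(unittest.TestCase):
    @unittest.skip("pending redesign of the report module")
    def test_old_report_format(self):
        pass
```

A skip with a stated reason shows up in the test runner’s output, so the suite still communicates something; a bare ignored test communicates nothing at all.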