One thing that I see a lot of with JUnit tests are “cascade failures”. That is, one change causes lots of tests to break. This is often (not always) associated with tests that assert things they shouldn’t.
To quote the JUnit cookbook
“In the case of an unsuccessful test JUnit reports the failed tests in a list at the bottom. JUnit distinguishes between failures and errors. A failure is anticipated and checked for with assertions. Errors are unanticipated problems like an ArrayIndexOutOfBoundsException.”
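To make that distinction concrete, here’s a rough sketch of each case (the test names and asserted values are made up, and assume a JUnit 3-style TestCase so the assert methods are inherited):

public void testReportsAsFailure() {
    // An assertion that doesn't hold is reported as a failure.
    assertEquals(4, 2 + 1);
}

public void testReportsAsError() {
    // An unanticipated exception (here an ArrayIndexOutOfBoundsException)
    // is reported as an error.
    int[] values = new int[1];
    assertEquals(0, values[3]);
}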
You leverage this behaviour by not over-asserting. In particular, don’t assert the same things over and over across different tests. Here’s a typical example of the problem:
public void testSomething() {
    String someValue = new FunkyClass().doSomething();
    assertNotNull(someValue);
    assertTrue(someValue.length() > 10);
}
Here, it’s obvious that the test author has put the assertNotNull check in to ensure that they don’t get a (possibly-annoying-to-debug) NullPointerException later on. The problem is that, as written, this test will be marked as a ‘failure’ if someValue is null, whereas without the assertNotNull it would have been marked as an ‘error’.
By itself, this isn’t a big deal. However, when you’ve got lots of tests all doing the more-or-less redundant “assertNotNull”, you get a cascade effect: all of a sudden, FunkyClass.doSomething() starts returning null, and lots of tests are marked as “failed”.
By contrast, if the tests were split up as follows:
public void testSomethingIsNotNull() {
    String someValue = new FunkyClass().doSomething();
    assertNotNull(someValue);
}

public void testSomethingIsLongEnough() {
    String someValue = new FunkyClass().doSomething();
    assertTrue(someValue.length() > 10);
}
I would now have 1 test failed, and 1 test in error. If there were a lot more tests, I would have, say, 1 test failed and 100 tests in error. While I would still have 101 broken tests, I would start with the failed test and see if fixing it fixed all the other tests.
So, in general, don’t assert things you don’t need to. Some people take this to an extreme and say “One assertion per test” – that’s a little too far for me, but you certainly don’t want to repeat assertions.
Interesting. I too assumed that it was sensible to put the null check in to stop the annoying null pointers.
But as David Brent says, “assume puts an ass between ‘u’ and ‘me’”.
It is also an interesting point that if you look at your failures first, you will have a better chance of seeing at a glance what is wrong, rather than looking at the exceptions peppering your results.
It _can_ be sensible to put the assertNotNull() in. This, BTW, assumes that failures make more sense than null pointers. 🙂
The problem lies in the _repeated_ assertions. Assert it once, in one test, then don’t do it again.
Of course, the assertNotNull isn’t the only example of this particular anti-pattern; it’s merely the most prevalent.
I found myself slipping into a pattern where I would stuff a whole lot of assertions into one method, whereas I *should* be doing what you suggest above (breaking out important assertions to indicate a major failure separately from more functional tests).
This is because I am going through a slow-test-suite-hatred phase. And that leaks down into me thinking “that doesn’t really need a test all of its own…”…
(of course the real reason for slow unit tests is that they often aren’t unit tests).
Seems to me like it’s a variant of the “always check for null arguments” idiom.
Both are a good idea, IMO. You want to say explicitly in the code that nulls are out. And as a side effect, you don’t risk a code change silently removing the check in some obscure case.
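For anyone unfamiliar with that idiom, here is a minimal sketch (the class, field, and method names are made up purely for illustration):

// Hypothetical example of the "always check for null arguments" idiom.
public class Greeter {

    private final String name;

    public Greeter(String name) {
        // Fail fast at the point of misuse, rather than with a
        // harder-to-trace NullPointerException later on.
        if (name == null) {
            throw new IllegalArgumentException("name must not be null");
        }
        this.name = name;
    }

    public String greet() {
        return "Hello, " + name;
    }
}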