There are reasons why JUnit has separate test instances per test method. The major reason is independence; you can (or should be able to) run the tests in any order. Tests should set themselves up as needed, and tear themselves down as needed – you shouldn’t write tests assuming that previous tests have been called. People still do, of course, and you get odd bugs, but at least it’s _their_ fault, and not the fault of the framework they are using.
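The fresh-instance-per-method model can be sketched in a few lines. This is a toy runner, not JUnit's actual internals, and the names (`ToyRunner`, `CounterTest`) are hypothetical – the point is only that each test method gets its own object, so instance state can never leak between tests:

```java
import java.lang.reflect.Method;

// Toy sketch of JUnit's model: one fresh test-class instance per test method.
public class ToyRunner {
    public static class CounterTest {
        int counter = 0; // instance state; a new instance means a clean slate

        public void testIncrement() {
            counter++;
            if (counter != 1) throw new AssertionError("expected 1, got " + counter);
        }

        public void testStartsAtZero() {
            // passes no matter whether testIncrement already ran,
            // because this method runs on its own fresh instance
            if (counter != 0) throw new AssertionError("expected 0, got " + counter);
        }
    }

    public static void main(String[] args) throws Exception {
        for (Method m : CounterTest.class.getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;
            CounterTest fresh = new CounterTest(); // new instance every time
            m.invoke(fresh);
            System.out.println(m.getName() + " passed");
        }
    }
}
```

Run it and both tests pass in either order, which is exactly the independence property being described.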
In JUnit, at least, the order that tests are run is decided by reflection – the tests are executed in the order they appear in the class file. This, in turn, is dependent on the compiler: Sun’s javac lists methods in the order they are declared in the source, while IBM’s jikes does it in reverse order. I have a vague recollection that gcj does it alphabetically, though I’m probably remembering a stupid joke. The point here is that the order of methods in a class file is not under your control. Requiring your tests to rely on an ordering not under your control would be silly.
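You can see this for yourself with a few lines of reflection. The Javadoc for `Class.getDeclaredMethods` states outright that the returned array is "not sorted and not in any particular order" – whatever order you observe on one compiler/JVM combination is a coincidence, not a contract:

```java
import java.lang.reflect.Method;
import java.util.Arrays;

// Demonstrates that reflection hands you methods in whatever order the class
// file (and JVM) happens to supply; getDeclaredMethods makes no guarantee.
public class MethodOrder {
    public static class Tests {
        public void testAlpha() {}
        public void testBeta() {}
        public void testGamma() {}
    }

    public static void main(String[] args) {
        Method[] methods = Tests.class.getDeclaredMethods();
        // On one toolchain you may see source order; on another you may not.
        Arrays.stream(methods).map(Method::getName).forEach(System.out::println);
    }
}
```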
(TestNG, by contrast, runs tests within a test class in the order that the annotations are discovered – again, something not under your control.)
This has some interesting consequences. For example, in IntelliJ, I can run individual test methods. How annoying would it be to go into a test, run it, and see it fail when it passes when I run the entire test class? Or have it pass when it failed in the larger environment? These wouldn’t be good, right? In the default GUI for JUnit, I can re-run individual tests. How bad would it be if they started passing because a test that ran after the first unsuccessful run changed the world to allow it to pass? There are also test runners that run tests in different orders – for example, the Continuous Testing Eclipse Plugin can prioritise tests for earlier running (notably the most recently failing tests). Using that feature with dependent tests is just asking for randomised failures!
What really surprises me is the context that Cedric is asking the question in. He’s adding a “rerun failing tests” mode (and appears to be claiming credit for the concept… naughty Cedric!). He notes that he needs to have the dependent tests run first, and he has to include them in the specially generated
testng-failed.xml file. TestNG, of course, gives you a way to “mark tests as dependent”, which is fine and dandy for the ones you _know_ are dependent. Of course, because TestNG uses the same instance to rerun tests, every test method in a test class is potentially dependent on the others, through their common instance variables. So this new “failing-test” feature of TestNG will merely result in randomised failures due to dependencies and side effects that the developer was not aware of!
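The hazard is easy to sketch. The class below is hypothetical (not real TestNG code), but it models the shared-instance situation: one test quietly seeds state that a later test consumes. Run the whole class in order on one instance and everything passes; rerun only the second test – as a “rerun failing tests” mode would – and it fails:

```java
// Sketch of the hidden-dependency hazard when a runner reuses one test-class
// instance for every method (as a shared-instance model does). Names are
// hypothetical, for illustration only.
public class SharedInstanceDemo {
    public static class StatefulTest {
        java.util.List<String> items = new java.util.ArrayList<>();

        public void test1Populate() { items.add("x"); }

        public void test2Process() {
            // silently depends on test1Populate having run first
            // on the SAME instance
            if (items.isEmpty()) throw new AssertionError("nothing to process");
        }
    }

    public static void main(String[] args) throws Exception {
        StatefulTest shared = new StatefulTest(); // one instance, reused
        // Rerun only test2Process, as a "rerun failed tests" mode would:
        try {
            shared.getClass().getMethod("test2Process").invoke(shared);
            System.out.println("test2Process passed");
        } catch (Exception e) {
            System.out.println("test2Process failed: " + e.getCause().getMessage());
        }
    }
}
```

Nobody marked `test2Process` as depending on `test1Populate`, so no dependency-declaration feature can save you here – the coupling lives in the shared instance variables.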
So to sum up: dependencies that you know about aren’t that big a deal. Dependencies you don’t know about are. When you rely on implicit orderings, you often introduce unexpected dependencies.