No, it’s not an exercise craze (though I do want to get serious about losing weight soon). I’m talking about the FIT acceptance test harness. I looked into it recently, and this (rather extended) post describes what I found out.
In particular, I’m looking at JWebFit, an addon to JWebUnit. The reason is simple: I want to look into testing a web-based application. The application in question is one that we recently built at work, and which made me yearn for automated customer tests. Now that the project is more or less over, I can indulge myself a bit with a working system and a blank slate to learn from. 🙂 And a blank slate it is: I don’t know FIT, I don’t know JWebUnit, and I certainly don’t know this union of the two.
Installed jWebFit last night. Wasn’t too hard, all things considered, especially for a beta product that isn’t even distributed prebuilt. The documentation is pretty poor, but I got the example from the main FIT site going easily enough, and I found a couple of example tests in the jWebFit source. So, enough dithering. Here’s what I did.
Build instructions:
- Check it out from CVS, from the jWebUnit SourceForge project.
- Build jWebUnit first; this is essential.
- Build jWebFit. There’s an Ant task for this, which produces jwebfit.jar.
- For good measure, download the Java version of FIT so I’ve got the source for it (a pre-built version comes with jWebUnit).
Setup notes:
After installation, I decided to do a quick smoke test, creating a test page that goes to the About page for probres. It looked like this:
```
|net.sourceforge.jwebunit.fit.WebFixture|
|baseUrl|http://localhost:7001/foo/|
|begin|About.do|
|check|title|TitlePage|
```
Simple enough: it goes to http://localhost:7001/foo/About.do and makes sure that the title element says “TitlePage”.
So how does it work?
There is very little documentation on the internals; there’s a well-defined spec (FIT has implementations in several languages, all of which are meant to conform to it), but that’s about it. Lots of examples on writing tests, but not much on what goes on under the covers. jWebFit isn’t any better, either. 🙂 However, the source is there, and quite readable.
Reading raw source code, however, is a lot like reading raw HTML; rather annoying because you keep wanting to follow the links. I know I’m going to be doing a lot of digging, so I fire up Eclipse, and import the two projects (FIT and jWebUnit) in. Now I can easily jump around using Eclipse’s excellent support for browsing the code base. For good measure, I create another project for holding my own attempts at writing tests.
(FWIW, I highly recommend this sort of approach when coming to grips with a new code base; a good IDE turns source code into a hypertextual environment where you can easily flick back and forth between related files).
The FIT and jWebUnit code turns out to be very understandable, albeit written in a style that I’m not all that comfortable with. Probably due to extensive Smalltalk experience by the coders, I guess. Still, I can handle it. 🙂
So, all that said: here’s what I managed to glean from a quick (30-minute) scan of the source.
In general, FIT works by scanning a document for tables. The contents of the tables are a script in a mini command language. Pretty simple idea, really. There are a few different parsers for different ways of defining tables; the default is for HTML, but the example I’m using uses a Wiki-ish format. The result of a test run is an HTML document; the table (which ends up as HTML regardless of the input format) gets annotated with the results.
Each table is a separate test, I think; I’ll look into this more later. In any case, the first row in the table is the name of a test Fixture: a standard Java class, albeit one that inherits from the Fixture base class. The remaining rows are method calls; the first column is the method name, with spaces going to camel case (so ‘do stuff’ becomes doStuff()). The methods take no parameters; however, they have the remainder of the row available to them as a list of ‘cell’ objects. Each method does something a bit different, I guess.
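That name-mangling and dispatch step is easy to sketch in isolation. Here’s a standalone toy (my own class and method names, not FIT’s actual internals) showing the space-to-camel-case conversion and the reflective call:

```java
import java.lang.reflect.Method;

public class CamelDemo {
    // Convert "do stuff" -> "doStuff", as FIT does for fixture method names.
    static String camel(String name) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : name.toCharArray()) {
            if (c == ' ') {
                upperNext = true; // capitalise the letter after a space
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }

    // A fixture-like class with a method we can dispatch to by name.
    public void doStuff() {
        System.out.println("doStuff called");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(camel("do stuff")); // doStuff
        // Look the method up by its camel-cased name and invoke it.
        Method m = CamelDemo.class.getMethod(camel("do stuff"));
        m.invoke(new CamelDemo());
    }
}
```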
Looking at the About test above, we can see two calls: begin and check.

begin is easy; it starts a jWebUnit conversation and passes the second cell in as the URL. Not a problem.

check is a little different; not quite so obvious. Ah… I see; it maps the second cell to a corresponding ‘assert’ method in jWebUnit, and passes the remaining arguments in. So |check | title | Foo Bar | calls the assertTitleEquals method. Pretty simple so far.
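The name-resolution step can be sketched as a lookup table with a fall-through rule. This is my own toy reconstruction (the class and method names below are not jWebFit’s), showing the idea: try an explicit mapping first (e.g. “title” becomes assertTitleEquals), otherwise build “assert” plus the camel-cased words:

```java
import java.util.HashMap;
import java.util.Map;

public class CheckNameDemo {
    // Explicit mappings, as jWebFit defines some checks that don't follow
    // the plain naming rule.
    static final Map<String, String> SPECIAL = new HashMap<String, String>();
    static {
        SPECIAL.put("title", "assertTitleEquals");
        SPECIAL.put("form element", "assertFormElementEquals");
        SPECIAL.put("option", "assertOptionEquals");
    }

    // Resolve the second cell of a check row to an assertion method name.
    static String assertionName(String cell) {
        String special = SPECIAL.get(cell);
        if (special != null) return special;
        StringBuilder sb = new StringBuilder("assert");
        for (String word : cell.split(" ")) {
            sb.append(Character.toUpperCase(word.charAt(0)))
              .append(word.substring(1));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(assertionName("title"));             // assertTitleEquals
        System.out.println(assertionName("checkbox selected")); // assertCheckboxSelected
    }
}
```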
The available commands
Let’s look at some of the other commands I can invoke:
- base url
- Sets the base URL so that you don’t have to enter it all over the place.
- base window
- Makes the top-level window the active window. This will be handy if you’ve got various sorts of windows in your app (e.g. internal frames, dialogs, and so on).
- begin
- Starts a new web conversation, pointing at the supplied URL. Like opening a new browser window.
- bundle
- Supply a resource bundle for a test. It turns out that jWebUnit (and jWebFit) can set values in forms by a resource key, with the real value coming from a resource bundle. This could be very useful for running the same basic test over and over again, each time with different test data.
- check
- The all-powerful assertion method. As noted above, this calls a corresponding assertion method in jWebUnit. There are lots of those, and I’ll probably have to write them up if I want to point this tool at my customer reps, but for now, simply check out the javadoc and remember the rule: the second cell becomes the name of the assertion. So if the second cell is “checkbox selected”, then you’re going to be calling “assertCheckboxSelected”. Also, not all the assertions are available; only the ones that take Strings.
There are also some additional checks defined in jWebFit that are not directly mapped to jWebUnit. These are:
- title – maps to assertTitleEquals()
- form element – maps to assertFormElementEquals()
- title key – maps to assertTitleKeyEquals()
- option – maps to assertOptionEquals()
Finally, check cells also map (by preference, even) to check methods defined in the WebFixture itself. There are two defined by default:
- link – makes sure that there’s a link with the supplied label.
- link id – makes sure that there’s a link with the supplied id present.
- deselect
- Turns off a checkbox. The line would look like this:
| deselect | checkbox | <checkbox name> |
- dump response
- Writes the page out to the standard error stream. For debugging, essentially.
- enter
- Set a value for a field. The line would look like this:
| enter | <field name> | <value> |
- form
- Declare the specified form to be the “working form”. Not sure what impact this has, as it’s a jWebUnit thing and I still haven’t got around to reading the doco on that yet.
- frame
- Makes the specified frame the target frame for future interaction.
- ignore
- A comment marker (I think).
- select
- Like the enter command, but this selects a value in a drop-down (aka SELECT element) from the options available to it.
- submit
- Submit the working form.
- window
- Moves to the named window.
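To see how these commands hang together, here’s what a slightly larger test script might look like, in the same Wiki-ish format as the About test. This one is hypothetical: the page names and form fields are made up for illustration.

```
|net.sourceforge.jwebunit.fit.WebFixture|
|base url|http://localhost:7001/foo/|
|begin|Login.do|
|enter|username|testuser|
|enter|password|secret|
|submit|
|check|title|Welcome Page|
|check|link|Log out|
```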
Quite a long list, though I can see things I’d like to add. It should be easy enough to extend, so let’s do that:
Defining my own fixture.
I want to be able to create a new fixture, specific for my project, that will provide my own assertions and utilities. For starters, I’m going to add a command which will save the response to a named file (in the results directory); the dump response command is all well and good, but trawling through text is painful. So here goes.
First, I create my own fixture class, descending from WebFixture. I’m using Eclipse, so it will compile everything for me, and I’ll just stick the Eclipse build directory into the classpath of the Ant script I wrote to invoke jWebFit (I like Ant scripts for this sort of thing because they work at home on my Linux box and at work on my XP machine). Then I need to change my FIT tests to use it.
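For reference, the relevant bit of the Ant script looks roughly like this. It’s a hypothetical sketch: the jar locations, file names, the `bin` build directory, and the use of fit.FileRunner as the entry point are my assumptions based on the standard Java FIT distribution; jWebFit may well provide its own runner class.

```xml
<target name="run-fit">
  <java classname="fit.FileRunner" fork="true" failonerror="false">
    <classpath>
      <pathelement location="lib/fit.jar"/>
      <pathelement location="lib/jwebunit.jar"/>
      <pathelement location="lib/jwebfit.jar"/>
      <!-- Eclipse's build output, so my custom fixtures are picked up -->
      <pathelement location="bin"/>
    </classpath>
    <!-- FileRunner takes the input test file and the output report file -->
    <arg value="tests/testAbout.fit"/>
    <arg value="results/testAbout.fit.html"/>
  </java>
</target>
```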
How did that go… yep, worked fine. Not that I really expected it wouldn’t, but as Ron Jeffries points out, it’s always best to start with something really simple.
Okay, now for the real work. What I’m doing is an easy concept: take the response and write it out to a file. To do this, I need to:
- define the command. I’ll call that “snapshot”
- accept a parameter in. This would be the next cell over.
- copy the response to a file.
I’ll want proper tests around this later, but right now I want to find out what I don’t know, so I’ll spike it instead. Before I write any more extensions, I’ll need tests, but I want something I can show first. It will be sufficient, for a spike, to make sure that I’m echoing the details back out to the error stream; I’ll leave the actual saving until I get a test in place.
Here’s what my fixture looks like after I’ve put in the error echoing. Not too surprising, it’s a lot like the standard ‘dump response’ command; after all, it’s doing about the same thing.
```java
////////////////////////////////////////////////////////////////////////////////
// Copyright 2004, Robert Watkins
////////////////////////////////////////////////////////////////////////////////
package foo.fit;

import net.sourceforge.jwebunit.fit.WebFixture;

public class FooFixture extends WebFixture {
    public void snapshot() {
        System.err.println(">>>>>>>>>>>>>> " + cells.more.text());
        tester.dumpResponse(System.err);
    }
}
```
As you can see, the fixture itself is pretty easy. I’m not quite finished doing my spikes yet, however; I still need to learn how to send output back to the report.
It turns out that the Fixture class has some methods to help out here. There are three “status” methods: right(), wrong(), and exception(). You can also use addToTag(), which lets you add things to the TD tag for the cell in question, and addToBody(), which adds to the cell body. A little bit of experimentation lets me get the hang of it; I show off my newfound prowess by putting an error message into the test report if the file name isn’t there.
From here, I wrap up the fixture by completing the snapshot() method, complete with a unit test (which I now know how to do; that’s what the spike was about). I’ve attached the files down below if you’re interested, though it’s pretty simple. One annoying thing, though: you can’t find out the name of the test, nor the place the results file will be written to. I could extend a test runner to set these up for me, but for now I’m happy to encode it into the test itself.
Lessons Learned
What did I learn? Well, I learnt that FIT isn’t too hard to do. Writing tests is pretty simple, and running them is even easier. I can see some tests being tedious to write; lots of repetition and all. Of course, I now know how to write my own fixture to help solve that problem.
I’m going to play with this tool some more, mostly around other ways of getting tests written; other formats and so on. I really like the idea of using a format my customers can appreciate; this is simple enough that the list of commands can be summarised over a couple of pages. It’s definitely worth investigating.
I then want to see how we can integrate it into the existing tools we use, such as XPlanner. A test runner that I can use by pointing it at an iteration in XPlanner and say “Test that!” could be pretty handy. 🙂 I’ll see how that goes later.
In any case, that’s about where I’m leaving this. On a side note, this is the 100th entry I’ve made on this blog, not quite a year after I started. I’ve found blogging to be very interesting; journaling my thoughts is a new thing for me, and amazingly more useful than I would have anticipated. [Ed: don’t bother counting the entries; a number were purged after a site failure]
Anyway, it’s late, so I’m off to bed. Here are the links, and then that’s it from me:
- testAbout.fit – the test.
- testAbout.fit.html – the result.
- FooFixture.java – the custom fixture.
And that’s good night from me. Catch you all later.
You *might* want to look at Selenium, which is like FIT but runs inside the browser. As it is actually rendering, it will run slower; however, it is also using the browser itself rather than some abstract notion of the browser. We had problems with FIT at my current client where the JavaScript implementation in FIT itself (Rhino) differed from the JavaScript implementation in IE, such that we had to change our JavaScript code. Selenium would have resolved that issue for us – it also has the benefit that you can step through your test to see what’s happening at each step, which can help you find the problem more quickly. For more info see: http://selenium.thoughtworks.com/index.html, for online demos: http://selenium.thoughtworks.com/demos.html
Thanks for the comment, Sam.
I haven’t looked at Selenium, but it does sound interesting. One issue for me would be how I run the tests from my build scripts; firing up an IE browser, pointing it at a URL, and capturing the result wouldn’t be trivial. 🙂
Also, in my particular environment, our build servers are Win2K with IE6, but the client desktops are NT 4 with IE 5.5. Thus, tests that use the IE6 browser wouldn’t find bugs that show up in IE 5.5. Programming to the DOM model, which Rhino requires, helps reduce these bugs (at the cost of making the Javascript less readable). I’m actually in the process of such a conversion at the moment.
The documentation was terrible. We couldn’t get much help out of it. Please add some meaningful and informative documentation about JWebFit. We had fits because we couldn’t complete our project in time, due to lack of help from the documentation.
Sagar, it’s not _my_ project. If you’ve got feedback to give, please give it to the project maintainers.
However, I’m not sure why the lack of documentation cost you so much time. It’s not a large project, you can read the source code easily enough, and work out pretty much everything it does in a few hours. I actually spent more time writing this article than I did working out how it worked.
What I probably should do is restore the examples that got lost when my site had to be rebuilt.
Oh, and if you committed yourself to using a tool that you didn’t know on a time-sensitive project, then frankly that was your own mistake. You can’t learn under pressure – you need to have slack time to come to grips with new tools, so you can experiment and play with it. _Then_ you can apply the tool to a time-sensitive project.
Learn in the quiet times
I had a comment lodged on an older article recently. The poster was complaining about the poor quality of the JWebFit sub-project of JWebUnit. In particular, he was complaining about how it meant their project wasn’t delivered on time. There’s…
I have been using selenium for a lot of my projects and a couple of the drawbacks that I faced were:
1) Javascript popup windows: Selenium has some limitations with these and we are not able to “click” a specific button to test the different paths that the application might take based on different choices.
2) When the test fails at a particular table row (command), it is quite difficult to find out the exact line number where it failed – especially if the test has lots of commented out rows (which are typically just comments, but sometimes are also the results of changing functionality/bugs that were found). In the latter case, these test rows would ideally be uncommented at a later time (as soon as the developer fixes the bug).
So, my question: does using either of the above mentioned packages bypass these concerns in an easy way?
thanks,
Vijay
I haven’t used Selenium, so I can’t really compare and contrast. However:
# Javascript popups: JWebFit uses JWebUnit, which uses HttpUnit. HttpUnit lets you build custom “responders” to JavaScript dialogs, which should let you do what you want. It’s not exactly trivial, but of course, once you’ve done it once, you can re-use it.
# Test breakages: FIT, unlike JUnit, keeps running when it hits broken assertions. All broken assertions are highlighted, so you can easily see which one broke. Working out why it broke can be a little tricky, though. In the case of JWebFit, you do get error output, so it depends on how much effort you put into that.