Post-Release Testing – what’s that about, anyway?

One of the odder practices of “conventional” software development that I’ve ever come across is the post-release test cycle. It’s something that has baffled me ever since I first saw it; you go through the normal development cycle, including testing, and then you deploy to production – and then you test it again. Sometimes with test cases that weren’t done during development. Why?

This isn’t rocket science. There are simple steps you can take that remove the need for the post-deployment “shakedown test”.

(Caveat: this advice is for hosted or server-based applications – the sort that goes into one controlled environment, not the sort that gets onto a desktop or mobile device.)

Deploy the same application build through your pre-release test environments that you put into production.

If it’s the same build, then the chances you have bugs to expose during post-release testing are a lot smaller. You have the same application logic – the only thing that is different is environmental configuration.

Having a consistent build means that you need to externalise environmental configuration – you don’t have a UAT build vs a system test build vs a prod build – you just have the prod build (and maybe a developer build that is propped up with scaffolding).
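
As a rough sketch of what that externalisation can look like (the APP_CONFIG variable, file location and property names here are made up for illustration, not from any particular project), the application reads its environment-specific settings at startup from wherever the environment points it, so the same jar runs unchanged in system test, UAT and production:

    // ConfigLoader.java: a minimal sketch of externalised configuration.
    // The APP_CONFIG variable and property names are illustrative assumptions.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class ConfigLoader {

        // Load settings from a file named by an environment variable, so the
        // same build picks up different configuration in each environment.
        public static Properties load() throws IOException {
            String path = System.getenv("APP_CONFIG"); // e.g. /etc/myapp/uat.properties
            if (path == null) {
                throw new IllegalStateException("APP_CONFIG environment variable not set");
            }
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
            return props;
        }

        public static void main(String[] args) throws IOException {
            Properties config = load();
            System.out.println("Database URL: " + config.getProperty("db.url"));
        }
    }

Use whatever configuration mechanism you prefer; the point is that nothing environment-specific gets compiled into the artifact.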

Have pre-release test environments

Of course, if you’re going to put a consistent build through pre-release test environments, you need to have pre-release test environments. This is a step that no professional software development team should skip – but then, I keep hearing how large percentages of dev teams don’t use version control, either.

Pre-release test environments should be comparable to the production environment. The same OS, the same app server version, if possible the same sort of memory and hardware, and a similar network topology for interacting with other services. It doesn’t have to be exactly the same – especially for larger clustered environments. But it should be comparable.

Virtual servers help with this, at least if you’re willing to put up with the overhead of managing the virtual server environment. Also, if you’re using cloud-hosted servers for production, then using cloud-hosted servers for system testing and UAT is kind of a no-brainer.

Have a build server

In order to get a consistent build to put through the pre-release test environments into production, you need to have a build server. Without a build server, you’re going to get inconsistent builds. Remove a potential source of error through automation. (That’s going to be a common point, so I hope you paid attention.)
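
One cheap way to make “same build” verifiable, rather than taken on faith, is to have the build server stamp the artifact with its build number and have the application report it. A sketch, assuming the build writes a build-info.properties file onto the classpath (the file and property names are my own invention here):

    // BuildInfo.java: sketch of reading build metadata stamped by the build server.
    // Assumes the build writes build-info.properties (build.number, build.commit)
    // onto the classpath; the names are illustrative.
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class BuildInfo {

        public static String describe() {
            Properties props = new Properties();
            try (InputStream in = BuildInfo.class.getResourceAsStream("/build-info.properties")) {
                if (in != null) {
                    props.load(in);
                }
            } catch (IOException ignored) {
                // fall through and report "unknown"
            }
            return "build " + props.getProperty("build.number", "unknown")
                    + " (" + props.getProperty("build.commit", "unknown") + ")";
        }
    }

If UAT and production both report the same build number (or the same checksum of the artifact), you know you tested what you shipped.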

Use prod-like test data

And the best prod-like test data is production data (suitably sanitised of identifying or sensitive information, of course). You can’t beat real production data for uncovering unexpected problems.
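
The sanitising step is worth automating too. A sketch of what it might look like against a copied customer table (the table, columns and masking rules are made up for illustration; point it at the copy, never at the live database):

    // SanitiseCustomers.java: sketch of scrubbing identifying data out of a
    // production extract before it goes anywhere near a test environment.
    // Table and column names are illustrative assumptions; a JDBC driver for
    // your database needs to be on the classpath.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SanitiseCustomers {

        public static void main(String[] args) throws SQLException {
            String jdbcUrl = args[0]; // the copied database, not production
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 Statement select = conn.createStatement();
                 ResultSet rs = select.executeQuery("SELECT id FROM customer");
                 PreparedStatement update = conn.prepareStatement(
                         "UPDATE customer SET name = ?, email = ? WHERE id = ?")) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    update.setString(1, "Customer " + id);                 // mask the name
                    update.setString(2, "customer" + id + "@example.com"); // mask the email
                    update.setLong(3, id);
                    update.executeUpdate();
                }
            }
        }
    }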

Automate the deployment process

One argument I’ve heard to support post-release testing is that “you need to make sure everything is deployed right”. This turns out not to be the case. What you need to do is automate the deployment process. The automated deployment can be verified as much as needed – in the pre-release environment. But in production, you should expect it to work.

Use tools and test harnesses to verify the production environment

Another argument – one that I can respect a bit – is that if the application is going to interact with other services (and let’s face it, that’s increasingly common these days), you need to make sure you can talk to those services. But using the production application for that isn’t the right idea.

Want to see if your application has the right network connectivity to talk to the other application? Use a test harness – from the production server – to send requests to the other application. Want to make sure that your application is talking to the remote service using the correct protocol? Again, use a test harness.

These sorts of test harnesses should not be written from scratch. They should be spun out of the same codebase as your application; with good code design, you should be able to take any critical part of your software and wrap a test harness around it. This is especially true of facades to external systems. By doing it this way, if your test harness works, you should expect that the real application will.
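
As a sketch of what I mean (PaymentGatewayFacade and its ping() method are stand-ins for whatever facade your own application already has; ConfigLoader is the externalised-configuration sketch from earlier):

    // PaymentGatewayCheck.java: sketch of a test harness spun out of the same
    // codebase. PaymentGatewayFacade is a placeholder for your application's
    // real facade to the external service.
    public class PaymentGatewayCheck {

        public static void main(String[] args) throws Exception {
            // Uses the same facade and the same externalised configuration as
            // the real application, so a passing run from the production box
            // tells you the connectivity and the protocol are right.
            PaymentGatewayFacade gateway = new PaymentGatewayFacade(ConfigLoader.load());
            System.out.println("Gateway status: " + gateway.ping());
        }
    }

Because the harness exercises the same facade code the application uses, a green run here is strong evidence the application itself will talk to the service just fine.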

Automate any post-release shakedown tests

If you really think you must do a post-release test, then automate it. This means you’ll get a consistent test, you’ll get it done faster, and you can have more confidence that it is testing what you want. Manual post-release testing is error-prone – particularly if you do releases outside of normal business hours.

Also, if you do this, you should make it so you can run it any time – not just post-release. These automated smoke tests should be your first ‘go-to’ when you see unexpected issues in production.
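
A sketch of what an automated shakedown check can look like, assuming the application exposes a health endpoint (see the health monitoring section below; the base URL and path are illustrative):

    // SmokeTest.java: sketch of an automated post-release shakedown check.
    // The base URL and /health path are illustrative assumptions.
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SmokeTest {

        public static void main(String[] args) throws Exception {
            String base = args.length > 0 ? args[0] : "https://myapp.example.com";
            URL url = new URL(base + "/health");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            int status = conn.getResponseCode();
            System.out.println(url + " -> HTTP " + status);
            // Exit non-zero on failure so the release script (or a scheduled job) can alert.
            System.exit(status == 200 ? 0 : 1);
        }
    }

Because it takes the base URL as an argument, the same check runs against system test, UAT or production, whenever you like.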

For that matter, automate your pre-release regression tests

There is absolutely zero point in having a manual regression test suite. Automate the lot. If it takes you ten times as long to automate it as it does to test manually, you’ll pay that back by about the tenth release.

Manual testing should be reserved for exploratory testing.

Use dynamic health monitoring

A good robust system should have health monitoring. You should be able to get information about how the application is running – can it see the database? Can it connect to the external payment provider? Are there errors spewing into the logs? Is it meeting performance and response targets or SLAs?

These sorts of monitoring systems do take a bit of time to build. They pay for themselves by reducing the frequency and duration of unexpected downtime.
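
A sketch of the shape of such a thing (the individual checks here are stubs; real ones would probe your actual database, payment provider and so on):

    // HealthCheck.java: sketch of a health report the application could expose
    // over HTTP or JMX. The registered checks are illustrative stubs.
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.BooleanSupplier;

    public class HealthCheck {

        // Each check returns true if healthy; wire in real probes for your system.
        private final Map<String, BooleanSupplier> checks = new LinkedHashMap<>();

        public void register(String name, BooleanSupplier check) {
            checks.put(name, check);
        }

        public Map<String, Boolean> run() {
            Map<String, Boolean> results = new LinkedHashMap<>();
            for (Map.Entry<String, BooleanSupplier> entry : checks.entrySet()) {
                boolean healthy;
                try {
                    healthy = entry.getValue().getAsBoolean();
                } catch (RuntimeException e) {
                    healthy = false; // a check that blows up counts as unhealthy
                }
                results.put(entry.getKey(), healthy);
            }
            return results;
        }

        public static void main(String[] args) {
            HealthCheck health = new HealthCheck();
            health.register("database", () -> true);         // e.g. run SELECT 1 against the pool
            health.register("payment-provider", () -> true); // e.g. ping the provider's status endpoint
            health.run().forEach((name, ok) -> System.out.println(name + ": " + (ok ? "OK" : "FAIL")));
        }
    }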

Monitor the application post-release

When you get into production, you’ll see issues you hadn’t thought of before. You’ll get crazy input data, slow client network connections, I/O errors you hadn’t seen before, and so on. You want to monitor your application very closely post-release to make sure there are no issues.

Monitoring isn’t the same as testing. The whole point here is that you’re monitoring for the stuff you didn’t think of. If you can think of it in advance, test it in the pre-release environments.

Never run a test for the first time in production

If you must have a post-release test, and it has to be manual, please please please don’t leave it until after the release to run it for the first time. If you haven’t run that test case in the pre-release environment, don’t run it in prod.

If you aren’t confident it’s going to work, don’t release it

And if you are confident, why are you testing it?

Author: Robert Watkins

My name is Robert Watkins. I am a software developer and have been for over 20 years now. I currently work for people, but my opinions here are in no way endorsed by them (which is cool; their opinions aren’t endorsed by me either). My main professional interests are in Java development, using Agile methods, with a historical focus on building web based applications. I’m also a Mac-fan and love my iPhone, which I’m currently learning how to code for. I live and work in Brisbane, Australia, but I grew up in the Northern Territory, and still find Brisbane too cold (after 22 years here). I’m married, with two children and one cat. My politics are socialist in tendency, my religious affiliation is atheist (aka “none of the above”), my attitude is condescending and my moral standing is lying down.
