RAD Test Scripting

The other day, some developer coworkers and I were talking about ways to make it easier for our Test group to create and maintain test scripts. Test scripts are a very important piece of the software development puzzle, but they don’t find new defects. What test scripts are really good at is finding breakage. That’s why scripts are run as BVTs on new builds, or against release candidates as part of a larger regression suite. For me, few things are more likely to keep a project from hitting its deadlines than uncontrolled breakage.

To find new defects, you really can’t beat good test cases and exploratory testing. We wanted to find ways to keep our Test group working on finding new defects, not struggling to maintain scripts. Our current test scripting tool is just what it claims to be: a scripting tool. The problem with scripting is that it’s programming, and programming brings a whole set of issues to the party. We’d like to hide those issues from most of the testers and, if possible, have only a small team worry about them.

The genius idea we hit upon was a sort of RAD system for creating test scripts. Ah yes, well, I did say we are developers. The general idea is to componentize the creation of scripts: testers piece together various script components to create test cases and suites. Of course, there would be a nice Windows application to allow testers to build their projects. We would save each project in an intermediate format and “compile” it to native script that our scripting tool would execute.
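
To make this concrete, here is a rough sketch of what the intermediate format might look like. Everything in it is hypothetical; the component names and attributes are invented for this post, not a finished design.

    <testcase name="SaveDocument">
      <component ref="LaunchApp" exe="flagship.exe"/>
      <component ref="OpenFile" path="smoke\sample.doc"/>
      <component ref="PickMenu" menu="File" item="Save As..."/>
      <component ref="VerifyFileExists" path="smoke\copy.doc"/>
      <component ref="CloseApp"/>
    </testcase>

The “compiler” walks this tree and emits native script for whatever tool we happen to be using. Testers assemble components; only the small team writing the components and the compiler back ends has to think about programming.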

We even found an additional benefit of the system: by maintaining our test suites in an intermediate format, we avoid script tool vendor lock-in. If we switch to a new test scripting tool, we just need to write a new “compiler” to convert to the new script language. Vendor lock-in is a real problem. How many commercial test scripting tools do you know of that use a common, portable language? It can make switching vendors very costly.

We are currently prototyping a system. I’ll let you know how it goes.

Test Scripts == Source Code

How is it that a software development company that understands the value and importance of the code its developers write doesn’t place the same value on the code its testers write? Good development processes apply to testers, too. Regression test scripts are the health indicator of the development process. Build Verification Tests (BVTs), or smoke tests, are very important and typically rely on code written by testers.

Some mistakes I have seen in the past include:

  • Not storing scripts in a source control system
  • Not using a common/shared framework to build scripts
  • Not spending time to design or review scripts
  • Not building robust scripts that can be maintained across development cycles or even within the current cycle

There is a benefit to using test scripts, but there is a cost as well. Use the same good sense you apply to your development process in your testing process.

Defects Love New Code

People who know more about software development processes than I do, and write books on the subject, frequently talk about metrics like defect density (defects per thousand lines of code, or D/KLOC). Say your development team averages 10 D/KLOC. If you start working on the next version of a product with 1M LOC and eventually add 100K LOC, you can look into your crystal ball and expect 1,000 new defects. Holy crap! Honestly, it could be worse: in my experience, adding code to a 1M-line product is likely to break plenty of the existing code as well.

What if you start a product completely from scratch and end up with 1M LOC? 10K defects could give the faint of heart a reason to jump out a window. How will you ever know when to ship your product? I have written enough code in various products for various companies to know that those people writing those books know what they’re talking about. I have seen it work out just like they predicted it would. Bad things happen to new code. I have come to expect it.

Sometimes people, usually pointy-haired types, get really caught up in things like defect counts. It drives me nuts. Defects are a fact of life in software development. You need to document and prioritize the defects discovered in your products. When it gets close to ship time, do a cost/benefit analysis of each defect: compare the cost of fixing it to the cost of leaving it in the product. At some point, you will have to ship your product, and it will have defects in it. Knowing what they are is a Good Thing. Making sure the bad ones were fixed is a Good Thing.

Joel On Software has a good article on the cost of fixing defects. Steve McConnell also has some good articles about when a product is ready to ship. They apply some rational thinking and decision making, instead of simply the “fix all bugs!” or “zero defects!” mantras pointy-haired types like to spout.

Using XML as a File Format

In my last post, I talked about some persistence issues we are working through and how persisting to XML could make things easier for us. In this post, I want to talk about using XML as a file format. Assuming we switch to an XML-based file format for our products, there are questions I have regarding the proper use of XML.

Looking around, I can see at least 2 different categories:

  1. XML file is a dump of your in-memory state at the time of saving.
  2. XML file is a system unto itself and should be designed as a standalone deliverable of the product.

If you read my last post, you could guess that the system we built for persisting objects falls clearly into category #1. If you were to view the resultant XML, I doubt you’d be overwhelmed by the beauty of its structure. The XML is not being used to its full potential: no namespaces, no XPointer or XInclude, just a raw dump of state stored in XML elements, with very little use of attributes.
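
For illustration only (this is not our actual schema), a category #1 file tends to read like a memory dump with angle brackets, with element names mirroring member variables:

    <CDocument>
      <m_nVersion>3</m_nVersion>
      <m_Sections>
        <CSection>
          <m_strTitle>Untitled</m_strTitle>
          <m_nParaCount>2</m_nParaCount>
        </CSection>
      </m_Sections>
    </CDocument>

It works, but nobody outside the product would want to consume it.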

On the other hand, if you were to look at the XML created by an MS Office product, you would find a decent-looking XML document. Namespaces are used, and the XML elements don’t appear to map directly to the product’s underlying object model.

Want a better example? Look at a document file created using OpenOffice. These guys really cared about the XML file structure. In fact, there isn’t just a single XML file: they create a ZIP archive and place several XML files in it, including a manifest file that describes the other files contained in the archive. Whenever possible, they use or extend existing XML-based specs to implement their format. Clearly not a direct mapping to their object models.
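
If you unzip one of their documents, the layout looks roughly like this (from memory, so treat the details as approximate):

    mimetype
    content.xml             the document body
    styles.xml              named styles
    meta.xml                title, author, statistics
    settings.xml            view and print settings
    META-INF/manifest.xml   lists and describes the other entries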

How important is the XML structure when using XML as a native file format? We could always suggest that people use XSLT to transform our XML to suit their needs. That sounds reasonable, right?

Persisting Can be Painful

We are considering changing from an OLE Structured Storage based file format to an XML-based file format for our flagship product. OLE Structured Storage has served us well, but we are starting to hit walls, especially with versioning and backward/forward compatibility. Yes, forward compatibility!

The product, mainly C++, has objects that save and load their state using a storage system. That storage system is IStorage/IStream, and all of the objects know it. That’s a minor problem that can be fixed. The bigger problem is that the object hierarchy defines the structure of the file stream, which means any change to the object hierarchy changes the structure of the file stream. Any significant change in the object model (say, version 1.0 to version 2.0) makes it very painful for v2.0 to read v1.0 files. Why the headache? Since each object is responsible for saving/loading its own piece, an object in v2.0 may need to remember what it looked like in v1.0. Even worse, what if an object in v1.0 no longer exists in v2.0? It can get pretty ugly.
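
Here is a minimal sketch of the kind of coupling I mean. The class and member names are made up, but the shape is accurate: each object writes its members straight into the stream and then tells its children to do the same, so the stream layout is the object hierarchy.

    #include <objidl.h>  // IStream

    // Fragment only; CSection and CParagraph definitions omitted.
    HRESULT CSection::Save(IStream* pStm)
    {
        ULONG cb = 0;
        HRESULT hr = pStm->Write(&m_nStyle, sizeof(m_nStyle), &cb);
        if (FAILED(hr)) return hr;
        hr = pStm->Write(&m_nParaCount, sizeof(m_nParaCount), &cb);
        if (FAILED(hr)) return hr;
        // Children serialize themselves, in order, right after the parent.
        for (CParagraph* p = m_pFirstPara; p != 0; p = p->Next())
        {
            hr = p->Save(pStm);
            if (FAILED(hr)) return hr;
        }
        return S_OK;
    }

Reorder two members, change a type, or remove CParagraph entirely in v2.0, and every v1.0 file on disk becomes a puzzle the new code has to solve byte by byte.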

A different product, still in development, uses a system that abstracts away the “how” of the persistent storage. The objects that make up the business logic can save and load their state through an abstract storage interface, which has implementations for persisting to binary streams, SQL databases, and XML streams. It’s not revolutionary; COM’s IPersist family has allowed developers to do this kind of thing for years. However, we find ourselves using the XML storage more than the others. In fact, we use it for more than persisting to files: we use it for copying state to the clipboard and for maintaining our Undo/Redo system as well. But I am getting ahead of myself.
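
The interface is roughly along these lines. The names are invented for this post (the real thing has more to it), but it shows the idea: objects describe their state by name and never see the concrete target.

    #include <string>

    // Abstract archive; concrete implementations exist for binary
    // streams, SQL databases, and XML streams.
    class IStateArchive
    {
    public:
        virtual ~IStateArchive() {}
        virtual void BeginObject(const char* name) = 0;
        virtual void EndObject() = 0;
        virtual void Write(const char* name, int value) = 0;
        virtual void Write(const char* name, const std::string& value) = 0;
    };

    // Fragment; CSection's definition omitted. A business object
    // saves itself the same way no matter where the state is going.
    void CSection::Save(IStateArchive& ar) const
    {
        ar.BeginObject("Section");
        ar.Write("Style", m_nStyle);
        ar.Write("Title", m_strTitle);
        ar.EndObject();
    }

Because the object never knows the target, the same Save() can feed the file format, the clipboard, and the Undo/Redo snapshots.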

I was reading a post by Chris Pratley where he talks about Word, RTF and XML:

RTF is used for this purpose instead since it is easier to deal with than Word binary for apps other than Word (remember that is why we created it – it stands for Rich Text interchange Format). The new XML format is designed for exactly that purpose – and it is easier to work with than RTF. You can create the WordML doc (or even a minimal subset) on a server using XML tools, then send the XML to Word on the client and Word will load it up. If you’re missing a lot of the Word specific stuff, that’s OK – Word will fill in the missing bits with defaults. In fact, you can skip generating the doc on the server if you want – just generate an XML data file in your own schema and provide an XSLT for Word to use when opening the file. That pushes a lot of the processing onto the client.

An idea for how to get past some of our versioning issues started to form:

  1. What if our objects persisted to XML?
  2. What if each version only cared about the structure of its XML?
  3. What if we used XSLT or XQuery to convert one version’s XML to another?

In my mind, forward compatibility (an old version opening a newer version’s file) would be hard. Backward compatibility would be easier: just transform the old XML into the current version’s structure and load it. We could also allow newer versions to save files in an older version’s XML structure, which would be one way of handling forward compatibility. We are researching this concept to see where it can take us.
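
To make the transform idea concrete, here is a toy XSLT sketch. Our real schemas don’t exist yet, so the element names are invented: it copies a v1.0 document through unchanged, except for renaming one element that v2.0 calls something else.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Identity: copy anything we don't explicitly handle. -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- v1.0 called it Caption; v2.0 calls it Title. -->
      <xsl:template match="Caption">
        <Title>
          <xsl:apply-templates select="@*|node()"/>
        </Title>
      </xsl:template>

    </xsl:stylesheet>

Loading a v1.0 file would then mean: run the 1.0-to-2.0 transform, then load the result as if it were native. Chain transforms and you can walk a file forward across several versions.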