What is the definition of fully tested software? Fully tested software is code that has been validated and tested to such an extent that all concerned parties (QA, software engineers, project owners, etc.) have ‘high confidence’ that the code will perform exactly as intended once released to production.
The phrase ‘high confidence’ has a relative meaning depending on the threshold of the individual or organization doing the testing. At what percentage of your codebase being tested do you have high confidence that it is fully tested and will work exactly as intended when it reaches production? Is it 1%, 10%, 25%, 50%, 98%? Would you like it to be 100%, or do you think 100% is unattainable? Are you fully satisfied with anything less than 100%? What percentage of your codebase being tested would give you the high confidence to say to your Chief Architect, VP of Engineering and other major decision makers, “Yeah, the code is fully tested and ready to go to production”?
I recently sat at a conference table in a room filled with software engineers, QA engineers, the head of QA and the Chief Architect, and I asked them a very simple question: “What percentage of your codebase is being tested right now?” No one in the room could give me an answer.
I’ve also sat at a conference table with only a VP of Engineering and a QA lead in an organization where they were testing a very sophisticated application, and when asked the same question, their answer was an unequivocal 98%. Their fervent desire is to get to 100%.
Do you know what percentage of your codebase is being tested?
First off, good software testing stems from a codebase that is conducive to testing. For example, if the methods or functions in your codebase contain, on average, more than 30 lines of code, you don’t have code that is conducive to good testing; you need to refactor those large blocks of code into smaller units. You need to follow good design patterns so that your code is easier to understand and thus easier to test. Having a really good service tier, where most if not all of your business logic is defined in small methods and functions, will make it much easier to use code coverage tools that can calculate how much code has been tested and identify the code still to be tested. And how much of the code should be tested? All of it; 100% of it.
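To make the refactoring point concrete, here is a minimal sketch (in Python, with invented function names) of what “small units of code” looks like in practice: instead of one 30-plus-line order-processing routine, each step lives in its own function that a test can exercise in isolation.

```python
# Hypothetical example: an order-processing workflow split into small,
# independently testable units rather than one long function.

def validate_order(items):
    """Reject empty orders and non-positive quantities or prices."""
    if not items:
        raise ValueError("order has no items")
    for name, qty, price in items:
        if qty <= 0 or price < 0:
            raise ValueError(f"invalid line item: {name}")

def subtotal(items):
    """Sum price * quantity over all line items."""
    return sum(qty * price for _, qty, price in items)

def apply_discount(amount, rate):
    """Apply a fractional discount, e.g. 0.10 for 10% off."""
    return round(amount * (1 - rate), 2)

def process_order(items, discount_rate=0.0):
    """Compose the small units into the full workflow."""
    validate_order(items)
    return apply_discount(subtotal(items), discount_rate)
```

Each helper is short enough to cover exhaustively, and a coverage tool such as coverage.py can then report exactly which lines and branches remain untested.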
If you have a codebase whose business logic is written in well-defined methods and functions that incorporate good design patterns, then the only obstacle to writing hundreds of unit tests and integration tests, and to doing thorough functional testing and load testing, is having the necessary test data to accomplish your testing tasks.
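When the business logic is defined that way, the unit tests themselves become short and mechanical to write. As a sketch (the function under test here is hypothetical), a small, well-defined function needs only a handful of cases to cover completely:

```python
import unittest

# Hypothetical service-tier function under test.
def normalize_email(raw):
    """Trim whitespace and lowercase; reject strings without '@'."""
    email = raw.strip().lower()
    if "@" not in email:
        raise ValueError(f"not an email address: {raw!r}")
    return email

class NormalizeEmailTest(unittest.TestCase):
    def test_trims_and_lowercases(self):
        self.assertEqual(normalize_email("  Bob@Example.COM "),
                         "bob@example.com")

    def test_rejects_missing_at_sign(self):
        with self.assertRaises(ValueError):
            normalize_email("not-an-address")
```

Run with `python -m unittest` from the project directory; because the function is small, two tests cover every branch, and the coverage report can prove it.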
For the company that is testing 98% of their codebase, their challenge in getting to 100% is the ability to create huge amounts of complex test data, spread across multiple servers, fast enough to meet their testing lifecycle time constraints. For them, having huge amounts of test data isn’t just about increasing confidence that their codebase is fully tested; it gives them the ability to test code they could not otherwise test at all.
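As a minimal sketch of that kind of data generation (the record fields here are invented for illustration), a seeded generator can stream arbitrarily large volumes of synthetic records, and seeding keeps runs reproducible so a failure can be replayed against the exact same data:

```python
import random
import string

def random_name(rng, length=8):
    """Build a pseudo-random lowercase name from the given generator."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def generate_customers(count, seed=42):
    """Yield `count` deterministic pseudo-random customer records.

    A fixed seed makes every run produce identical data, so a test
    failure found under load can be reproduced exactly.
    """
    rng = random.Random(seed)
    for i in range(count):
        name = random_name(rng)
        yield {
            "id": i,
            "name": name,
            "email": f"{name}@example.com",
            "balance": round(rng.uniform(0, 10_000), 2),
        }

# Records stream lazily from the generator, so even millions of them
# never need to be held in memory at once.
batch = generate_customers(1_000_000)
```

In practice, generators like this are sharded across servers (e.g. by varying the seed per shard) so the data can be produced fast enough to fit the testing window.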
Here’s a question for you: if you had all of the test data you’ve ever wanted or needed, would you be satisfied with anything less than fully testing your codebase? Is fully testing 100% of your codebase a tall order? Yes, it is! Is fully testing 100% of your codebase an attainable goal? Absolutely!
Think about it this way: if you have a fully tested codebase, how much more confident would you be when the decision makers call you into the conference room, sit you down at the table and ask you the big question, “Are we ready to release the code to production?”