Test coverage — how about quality coverage?
I’ve had some focus time today and explored thoughts around reporting to stakeholders, particularly the topic of coverage, which I’ve found difficult. The problem with requirements coverage is gaps in the requirements themselves: on a recent project we noticed such a gap, so I feel basing test coverage on requirements is fallible. Another option could be product coverage, but what if the project you’re working on isn’t focused on the product per se, and is more of a technical go-live? These are the problems I have been facing.
So I’ve thought about what we are already reporting on. One of those things is quality criteria: security, compliance, performance and so on. As we’re reporting on these anyway, why don’t we add a metric to them? Here’s a formula we could use.
Say you have 8 quality criteria in scope for the project, and only 1 of them is covered at this time. Divide 1 by 8, multiply by 100, and there is your figure of 12.5%.
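The calculation above can be sketched in a few lines of Python. The criteria names here are purely illustrative; substitute whatever quality criteria are in scope for your own project.

```python
# Hypothetical quality criteria in scope for a project.
in_scope = [
    "security", "compliance", "performance", "usability",
    "accessibility", "reliability", "maintainability", "portability",
]

# Criteria considered covered so far (just one, as in the example).
covered = ["security"]

# Coverage metric: covered criteria as a percentage of those in scope.
coverage_pct = len(covered) / len(in_scope) * 100
print(f"Quality criteria coverage: {coverage_pct:.1f}%")  # prints 12.5%
```

Tracking which criteria count as “covered” is of course the subjective part; the arithmetic is the easy bit.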
I feel this might help push the holistic quality message. It moves reporting away from the percentage of testing completed and towards the risks associated with each quality criterion.
What do you think? Could I be onto something here? Or are there flaws waiting to be uncovered?