
Domain Range Ratio (DRR)


Definition

The Domain Range Ratio (DRR) of a logical function (or software component) is the ratio of the cardinality of the set of all possible test cases (inputs) to the cardinality of the set of all possible outputs.
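In symbols, writing D for the input domain and R for the output range (notation assumed here for illustration, not fixed by the definition above):

\[ \mathrm{DRR} = \frac{|D|}{|R|} \]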


Discussion

To understand this metric, consider a function (hardware or software) that outputs one of two states, '1' or '0', and assume that each output state occurs 50% of the time. Because the output space is so small, a fair coin toss also has a 50-50 chance of producing the correct output for any given input, and the coin is certainly cheaper than building and implementing the function. Worse, consider the scenario where '1' and '0' are not evenly distributed, e.g., the specification states that of 1,000,000 unique test cases only 10 should produce a '1' and the other 999,990 should produce a '0'.
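Using the numbers above, the ratio for this skewed function is easy to compute: the domain has 1,000,000 inputs and the range only two outputs, so

\[ \mathrm{DRR} = \frac{1{,}000{,}000}{2} = 500{,}000. \]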

Here, you could build this function, or you could write a piece of code that simply says: for all inputs, output '0'. This incorrect code is still 99.999% reliable, and you will almost certainly not discover the defect with a handful of random tests sampled from the 1,000,000 inputs (see the sketch below). In short, testing has a minimal probability of detecting this faulty logic, because the tiny output space and its skewed output distribution give each individual test case a very low probability of revealing the defect. If you take a testable system to be one in which test cases have a reasonable chance of detecting defects, then a high DRR suggests low observability of defects during test, and hence low testability*.
_____________________
* Testability here refers to the likelihood that defects can be discovered during testing.
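A minimal sketch of the arithmetic behind that claim, in Python. The constants mirror the hypothetical specification above; nothing here comes from a published implementation:

# Estimate how likely random testing is to expose the "always output '0'"
# defect described above. Assumes the specification's numbers: 1,000,000
# possible inputs, of which only 10 should produce a '1'.

DOMAIN_SIZE = 1_000_000   # cardinality of the input domain
ONES = 10                 # inputs whose correct output is '1'
RANGE_SIZE = 2            # possible outputs: '0' or '1'

drr = DOMAIN_SIZE / RANGE_SIZE
print(f"DRR = {DOMAIN_SIZE:,}:{RANGE_SIZE} = {drr:,.0f}")

# A single random test reveals the defect only if it hits one of the 10
# inputs that should yield '1'.
p_reveal = ONES / DOMAIN_SIZE

for n in (10, 100, 1_000, 10_000):
    # Probability that at least one of n random tests (sampled with
    # replacement) lands on a revealing input.
    p_detect = 1 - (1 - p_reveal) ** n
    print(f"{n:>6} random tests -> P(detect defect) ~ {p_detect:.4f}")

Even 10,000 random tests detect the defect less than 10% of the time, which is the low observability the high DRR predicts.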
