A Framework for Ranking Vacuity Results
Vacuity detection is a method for finding errors in the model-checking
process when the specification is found to hold in the model. Most
vacuity algorithms are based on checking the effect of applying
mutations to the specification. It has been recognized that vacuity
results differ in their significance. While in many cases vacuity
results are valued as highly informative, there are also cases in
which the results are viewed as meaningless by users. To date, there
has been no study of ranking vacuity results according to their level
of importance, and no formal framework or algorithms for defining and
finding such ranks. The lack of such a framework often
causes designers to ignore vacuity information altogether, potentially
causing real problems to be overlooked.
We suggest and study such a framework, based on the probability that
the mutated specification holds in a random computation. For example,
two natural mutations of the specification $G(req \rightarrow F\, ready)$
are $G(\neg req)$ and $GF\, ready$. It is agreed that vacuity information
about satisfying the first mutation is more alarming than information
about satisfying the second. Our methodology formally explains this,
as the probability that $G(\neg req)$ holds in a random computation is
0, whereas the probability that $GF\, ready$ holds is 1. From a theoretical
point of view, our contribution includes a study of the problem of
computing the probability that an LTL formula is satisfied in a random
computation, and of the existence and use of $0/1$-laws for fragments of
LTL. From a practical point of view, we propose an efficient algorithm
for estimating the probability of LTL formulas, and argue that ranking
vacuity results according to our probability-based criteria
corresponds to our intuition about their level of importance.
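The $0$ and $1$ probabilities above can be illustrated empirically. The
following minimal sketch (not from the paper) samples random bounded
computations in which each atomic proposition holds with probability
$1/2$ at every position, independently; the bound $n$ and the sampling
setup are assumptions made for illustration. As $n$ grows, the fraction
of sampled prefixes on which $req$ never holds (a bounded check of
$G(\neg req)$) tends to 0, while the fraction on which $ready$ holds at
least once (a bounded check of $F\, ready$) tends to 1.

```python
import random

def sample_word(n):
    # A random computation prefix: at each of the n positions, each
    # atomic proposition independently holds with probability 1/2.
    return [{"req": random.random() < 0.5, "ready": random.random() < 0.5}
            for _ in range(n)]

def estimate(n=20, samples=10000, seed=0):
    """Estimate, over random length-n prefixes, the probability that
    req never holds (bounded G(not req)) and that ready holds at least
    once (bounded F ready)."""
    random.seed(seed)
    g_not_req = 0
    f_ready = 0
    for _ in range(samples):
        w = sample_word(n)
        if all(not pos["req"] for pos in w):
            g_not_req += 1
        if any(pos["ready"] for pos in w):
            f_ready += 1
    return g_not_req / samples, f_ready / samples

if __name__ == "__main__":
    g, f = estimate()
    print(f"bounded G(not req): {g}, bounded F ready: {f}")
```

The exact probabilities over length-$n$ prefixes are $(1/2)^n$ and
$1-(1/2)^n$, so the estimates converge to the $0$ and $1$ values that
the framework assigns to $G(\neg req)$ and $GF\, ready$.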