Attention-based Coverage Metrics
================================

Over the last decade, extensive research has been conducted on coverage metrics for model checking. The most common coverage metrics are based on mutations, where one examines the effect of small modifications of the system on the satisfaction of the specification. While it is commonly accepted that mutation-based coverage provides adequate means for assessing the exhaustiveness of the model-checking procedure, the incorporation of coverage checks in industrial model-checking tools is still very partial. One reason for this is the typically overwhelming number of non-covered mutations, which requires the user to somehow filter out those that are most likely to point to real errors or overlooked behaviors.

We address this problem and propose to filter mutations according to the {\em attention\/} the designer has paid to the mutated components in the model. We formalize the attention intuition using a multi-valued setting, in which the truth values of the signals in the model describe their level of importance. Non-covered mutations of signals of high importance are then more alarming than non-covered mutations of signals of low importance. Since such ``importance information'' is usually not available in practice, we suggest two new coverage metrics that automatically approximate it. The idea behind both metrics is the observation that designers tend to modify the value of signals only when there is a reason to do so. Thus, the value of a signal that has just been assigned is ``more intentional'' than the value of a signal that merely maintains its previous value.

Our first coverage metric is \emph{stuttering coverage}, where mutations flip the value of a signal along a block of states in which the signal is fixed (rather than flipping the value in a single state). Our second metric is applied to netlist mutations.
Such mutations set a signal in the netlist to a constant value or make it ``free'' to change nondeterministically in every cycle. Here, we associate attention with signals that change often during the computation. We demonstrate the advantages of both metrics and describe algorithms for calculating them.
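As a rough illustration of the two ideas above (a sketch under our own assumptions, not the paper's formal definitions), a stuttering mutation flips a signal's value along an entire maximal block of states in which the signal is fixed, and the attention heuristic for netlist mutations scores a signal by how often it toggles. All function names here are hypothetical:

```python
# Hypothetical sketch: a Boolean signal is represented as a list of its
# values along a single computation (one entry per state/cycle).

def maximal_blocks(trace):
    """Yield (start, end) index pairs of the maximal constant-value blocks."""
    start = 0
    for i in range(1, len(trace) + 1):
        if i == len(trace) or trace[i] != trace[start]:
            yield (start, i)
            start = i

def stuttering_mutations(trace):
    """One mutant per maximal block: flip the value along the whole block,
    rather than in a single state."""
    mutants = []
    for start, end in maximal_blocks(trace):
        mutant = list(trace)
        for i in range(start, end):
            mutant[i] = not mutant[i]
        mutants.append(mutant)
    return mutants

def attention_score(trace):
    """Attention proxy for netlist mutations: the number of cycles in which
    the signal changes its value."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a != b)

# A trace with three maximal blocks yields three stuttering mutants;
# the signal toggles twice, so its attention score is 2.
trace = [False, False, True, True, True, False]
for mutant in stuttering_mutations(trace):
    print(mutant)
print(attention_score(trace))  # prints 2
```

Note that the number of stuttering mutants equals the number of maximal blocks, which is typically far smaller than the number of single-state mutants, matching the filtering goal described above.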