Code coverage is useful right up until you turn it into a target.

The appeal is obvious. Anything below 100% means some code never ran during the test suite, so there are parts of the system you have not even looked at. That matters. Untested code is a blind spot. Coverage is good at showing you where those blind spots are, and that alone makes it worth measuring.

The trouble starts when the number becomes the goal. Covered does not mean tested. A line can execute and still prove nothing. You can hit every branch in a method with vague assertions, weak fixtures, and tests that would stay green if the behavior broke tomorrow. A suite with 100% coverage can still leave you guessing whether the code actually works. Worse, it is easy to game. Once teams are judged on the percentage, they get better at feeding the metric instead of strengthening the tests.
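To make "covered does not mean tested" concrete, here is a minimal sketch. The function and test names are hypothetical, invented for illustration: the first test executes every line and both branches, so a coverage tool reports 100%, yet it would stay green if the discount logic broke.

```python
def apply_discount(price, is_member):
    """Return the price after a members-only 10% discount."""
    if is_member:
        return price * 0.9
    return price

def test_covered_but_meaningless():
    # Both branches run, so coverage is 100% -- but these assertions
    # would still pass if the discount were 90% off, or removed entirely.
    assert apply_discount(100, True) is not None
    assert apply_discount(100, False) is not None

def test_checks_actual_behavior():
    # Pinning exact values means a broken discount turns the test red.
    assert apply_discount(100, True) == 90.0
    assert apply_discount(100, False) == 100
```

Both tests produce the same coverage number; only the second one would notice a mistake.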

What I want is not 100% coverage, but 100% confidence. That is harder to reach and harder to measure. Coverage helps because confidence without execution is fantasy, but execution without meaningful checks is just movement. Coverage is a necessary precondition. It is not the finish line.

There is a better way to think about this, and it starts by asking whether your tests notice mistakes instead of whether they merely visit code. But that will have to wait until next time.