I love TDD. I practice it in my day-to-day developer life. I’ve written a book about it, and I speak about it at conferences everywhere I can. I even did a 30-post series on it for a previous employer. So I tend to get involved in a lot of discussions about TDD, BDD, ATDD and many of the concepts, practices and tools surrounding them. One topic that is paradoxically my most and least favorite is Code Coverage.
For those who may be unfamiliar with Code Coverage, allow me to explain. Developers write unit tests that exercise code. Those tests are run, and they either pass or fail. Code Coverage is the process of analyzing those test runs and determining which lines of non-test code are exercised (or run) by the tests. The data collected is generally viewed in two ways: which specific lines of code were exercised by a test and which were not (more on that later), and the percentage of the code base that was exercised.
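To make that concrete, here’s a minimal sketch (the class, method and value names are all hypothetical) of a method with a branch and a single test that exercises only one path. A line-based coverage tool would mark the lines the test ran as covered and the rest as uncovered:

```csharp
using System;

// Hypothetical class under test.
public class Discounter
{
    public decimal Apply(decimal price, bool isMember)
    {
        if (isMember)
            return price * 0.9m; // covered: the test below exercises this line
        return price;            // NOT covered: no test passes isMember = false
    }
}

public static class DiscounterTests
{
    // In NUnit or xUnit this would carry a [Test]/[Fact] attribute;
    // it's shown as a plain method to keep the sketch self-contained.
    public static void Apply_GivesMembersTenPercentOff()
    {
        var discounter = new Discounter();
        if (discounter.Apply(100m, isMember: true) != 90m)
            throw new Exception("expected 90");
    }
}
```

With only this test, the tool reports the `return price;` line as uncovered, and the percentage is simply covered lines divided by total executable lines.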
Death by Numbers
Of the two pieces of data generated by code coverage, management tends to latch onto the percentage of the code base covered as a means of evaluating the effectiveness of the development team’s testing efforts. This makes sense. Most managers are trained to work with numbers, manage several teams (which makes looking at the actual lines covered impractical) and in some cases aren’t technical. When analyzed correctly, the percentage can be a good metric to track.
The catch is the whole “when analyzed correctly” part. To the uneducated, 100% code coverage sounds like a great goal. The problem is that very little thought is put into how that number is arrived at by the code coverage tool.
When Is Your Code Not Your Code?
If you’re a .NET developer you’re probably using some feature of the .NET framework to support your presentation layer; either WPF or ASP.NET. And when you created your WPF or ASP.NET application, Visual Studio helped you out by creating a lot of… stuff. Mostly boilerplate code to handle “frameworkey” things that developers don’t want to deal with. No question; WPF and ASP.NET are HUGE timesavers for software developers.
And they are also the reason why most software development projects will never reach the “100% code coverage” panacea that many managers crave.
And this is OK. But we need to understand how the use of these frameworks skews the code coverage percentage.
When developers write a unit test, they expect to exercise the code in the method they are testing, as well as the code in any private methods on the same class that the public method under test uses. External resources, and ideally any third-party frameworks, should be mocked out and not covered by the unit test.
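As a sketch of what “mocked out” looks like (all of the names here are hypothetical), the external resource hides behind an interface, and the test substitutes a hand-rolled fake so the test run only exercises our own logic:

```csharp
using System;

// The external resource is abstracted behind an interface...
public interface ICustomerRepository
{
    string GetName(int customerId);
}

// ...so the class under test never touches the real database.
public class GreetingService
{
    private readonly ICustomerRepository _repository;

    public GreetingService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public string Greet(int customerId)
    {
        return "Hello, " + _repository.GetName(customerId) + "!";
    }
}

// A hand-rolled fake used only by the test; a mocking library
// (Moq, NSubstitute, etc.) would serve the same purpose.
public class FakeCustomerRepository : ICustomerRepository
{
    public string GetName(int customerId) { return "Ada"; }
}

public static class GreetingServiceTests
{
    public static void Greet_UsesCustomerName()
    {
        var service = new GreetingService(new FakeCustomerRepository());
        if (service.Greet(42) != "Hello, Ada!")
            throw new Exception("unexpected greeting");
    }
}
```

Only the lines in `GreetingService` count toward (and show up in) our coverage; the real repository implementation never runs during the test.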
This is logical and what most developers expect. And if we’re writing code that’s not in the presentation project, our code coverage makes sense; the only code in the class or library (generally) is the code we create. And, of course, we would never DREAM of writing code without having a test for it first, right? RIGHT?!
It’s when we get into the presentation project that things get a little complicated. Remember all that code that Visual Studio created for you to take care of the stuff you didn’t want to do? Well, just because you didn’t write it doesn’t mean it’s not there.
If your Test-fu is strong, you are no doubt creating tests for your Controllers/Models/ViewModels/etc. That’s good. But since you’re generally not running in the same application context during testing that you would be during execution, all that boilerplate helper code is never going to be tested. Which means that while that code counts toward your “total code base” (the number of lines of code in your application), it can’t be covered by a unit test.
Even if you could reach it with a test, your test context is not going to have the services and facilities the WPF/ASP.NET runtimes need to execute correctly. And even if you could create those services and facilities, you would be flirting pretty heavily with an integration test (still important!), but you wouldn’t really have a unit test anymore. In any case, trying to cover this code with a unit test is just not practical.
We Should Probably Just Give Up Now Then, Right?
The problem isn’t the boilerplate code, it’s the way we are measuring the code coverage percentage. Ideally we don’t even want the code coverage tool looking at this code when determining our “full code base.”
When determining what gets measured for code coverage metrics, we want to exclude the boilerplate code and only check the code that we (as the development team) actually write. We also want to make sure that we exclude any source code that ships as part of a framework or library we are using. Most .NET libraries ship as pre-compiled assemblies, but some still ship with a source component; those need to be excluded. Does your application have integration tests? Guess what: those are probably being analyzed by your code coverage tool as well.
The good news is that most code coverage tools enable you to exclude code at the project or class level. Some even let you exclude at the method level. As your project continues to grow, I recommend that you update your “exclude list” to accommodate the new boilerplate code, libraries and tests that will be added to your solution.
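The exact mechanism is tool-specific, but in .NET one common approach (assuming your coverage tool honors the attribute, as Visual Studio’s built-in analyzer does) is to decorate boilerplate or generated members so they drop out of the “total code base” entirely; the class below is a hypothetical example of that boilerplate:

```csharp
using System.Diagnostics.CodeAnalysis;

// Boilerplate the team didn't write and won't unit test;
// the attribute tells the coverage tool to skip this class entirely.
[ExcludeFromCodeCoverage]
public partial class App // e.g., the generated WPF application class
{
}
```

The same attribute can go on individual methods or properties when only part of a class is boilerplate, which keeps the exclusion as narrow as possible.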
By excluding the code you don’t write, your code coverage number should be more in line with the actual impact of your unit tests on your code. When you are only measuring your own code, 100% code coverage is very achievable. But what does a number that is less than 100% mean? In my next post on this topic I’ll discuss situations where your code coverage dips, what to do about it, and why it’s not always a bad thing.