Category Archives: TDD

Dependency Injection 101

As developers we are always trying to find ways to make our applications more flexible and tolerant of change. Ideally, we want our components to be modular and reusable. The ability to unit test our code is also becoming a requirement in more and more software development projects.

A technique that can help us in both of those areas is Dependency Injection, or DI. DI is a simple pattern where we take elements of a class that would normally be treated as static dependencies and instead provide an implementation of each dependency at runtime. Consider the following snippet of C# code:
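
A minimal sketch of the kind of code being described (only MyDomainClass and OracleRepository come from the post; the GetCustomerName member and everything else are assumptions). Laid out this way, the combined declaration and instantiation falls on line seven, which the next paragraph refers to:

```csharp
using System;

namespace DiDemo
{
    public class MyDomainClass
    {
        private OracleRepository _repository = new OracleRepository();

        public string GetCustomerName(int id)
        {
            return _repository.GetCustomerName(id);
        }
    }

    // A stand-in for a concrete Oracle-backed repository.
    public class OracleRepository
    {
        public string GetCustomerName(int id)
        {
            return "(customer " + id + " from Oracle)";
        }
    }
}
```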

In this case MyDomainClass has a dependency on some sort of data repository. As you can see on line seven, our class is expecting to use a specific implementation of this dependency. While this is something you’ve no doubt seen in many demos, training materials and even in production code, it’s actually not a good idea for several reasons:

  • By declaring our variable as the specific type OracleRepository, we are bound to that implementation and have limited the reusability of this code to situations where Oracle is being used to store data.
  • By combining the instantiation with the declaration in the class, we have no opportunity to vary what’s being provided as our repository at runtime.
  • Static dependencies make unit testing more difficult.

Refactoring this to use DI involves two steps. First, we want MyDomainClass to depend on an abstraction of a data repository instead of a specific data repository. This can be accomplished by creating an interface called IDataRepository that serves as an abstraction for a generic data repository and provides definitions of the methods needed to interact with such a repository:

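The interface might look something like this (a minimal sketch; the method set carries over the assumptions from the snippet above):

```csharp
// An abstraction over any data repository; the methods defined here
// are the only contract MyDomainClass will depend on.
public interface IDataRepository
{
    string GetCustomerName(int id);
}
```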

Next, we want OracleRepository to implement the IDataRepository interface:

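Again as a sketch under the same assumed names:

```csharp
// OracleRepository now supports the IDataRepository abstraction.
public class OracleRepository : IDataRepository
{
    public string GetCustomerName(int id)
    {
        // Imagine a real Oracle query here.
        return "(customer " + id + " from Oracle)";
    }
}
```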

By doing this we are saying that OracleRepository supports the abstraction represented by IDataRepository and will provide the methods defined in IDataRepository. This means that OracleRepository can be used anywhere our application expects an instance of IDataRepository, such as in the instance variable definition in MyDomainClass (line 7):
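
Sketched with the same layout as the first snippet (so the declaration again falls on line seven), the field is now typed as the abstraction, though it is still instantiated as the concrete Oracle type:

```csharp
using System;

namespace DiDemo
{
    public class MyDomainClass
    {
        private IDataRepository _repository = new OracleRepository();

        public string GetCustomerName(int id)
        {
            return _repository.GetCustomerName(id);
        }
    }
}
```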

Now MyDomainClass is a lot more flexible. It can use any data repository that implements the IDataRepository interface. This opens us up to using other types of databases like MS SQL Server or MySQL. It even enables us to use things like document databases, file systems or proxies representing web services. As long as the methods in IDataRepository are implemented, MyDomainClass can use it.

But we still have the problem of being statically bound to OracleRepository. This is due to the combination of declaration and instantiation on line seven. This is where DI comes into play. Instead of being statically bound to an implementation of OracleRepository, we are going to “inject” an implementation of IDataRepository at runtime. The easiest way to accomplish this is with “constructor injection”, a pattern where an implementation of IDataRepository is provided as a constructor parameter at runtime:
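
A minimal sketch of constructor injection, using the same assumed names:

```csharp
public class MyDomainClass
{
    private readonly IDataRepository _repository;

    // The dependency is "injected" by whoever constructs the object.
    public MyDomainClass(IDataRepository repository)
    {
        _repository = repository;
    }

    public string GetCustomerName(int id)
    {
        return _repository.GetCustomerName(id);
    }
}
```

Whoever constructs MyDomainClass now decides which implementation it gets, for example new MyDomainClass(new OracleRepository()).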

Now we no longer have a static dependency on one type of data store provider. Instead we are dependent on an abstraction, for which a concrete implementation will be provided at runtime. This provides several benefits:

  • We can leverage other types of data stores, so long as their repositories can support the IDataRepository interface.
  • MyDomainClass becomes more reusable, as it’s no longer dependent on OracleRepository.
  • Unit testing is easier, as we can provide a mocked instance of IDataRepository instead of a concrete implementation, as sketched below.
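
To sketch that last point (NUnit-style test shown; a mocking library such as Moq would work just as well, and all names are assumptions carried over from the snippets above), a hand-rolled fake can stand in for the real repository:

```csharp
using NUnit.Framework;

// A hand-rolled fake implementation; no database required.
public class FakeRepository : IDataRepository
{
    public string GetCustomerName(int id)
    {
        return "Test Customer";
    }
}

[TestFixture]
public class MyDomainClassTests
{
    [Test]
    public void GetCustomerName_ReturnsValueFromInjectedRepository()
    {
        var sut = new MyDomainClass(new FakeRepository());

        Assert.AreEqual("Test Customer", sut.GetCustomerName(1));
    }
}
```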

I hope this post has helped you understand DI a little more. It’s definitely a great pattern to adopt in your software development practices.

Why TDD – A Question

I love TDD. I speak a lot about TDD. During these talks I tend to field a lot of questions. Occasionally someone will say, “I’m just not sold on TDD,” which isn’t really a question, but begs a response.

For many TDD evangelists, the response to this is to launch into a monologue of all the reasons why TDD is great and why you should use it and all the scary stuff that will happen to you (… up to and including death!) if you don’t use TDD.

This is the wrong way to bring someone around to TDD. If they are at the point where they can say they aren’t “sold” on the idea, they have probably already heard the speech about TDD before. It didn’t move them then; why should it move them now?

That doesn’t mean that it shouldn’t be addressed. But the first thing to figure out is why someone isn’t sold on TDD. So that’s what I’m doing. Now.

I’ll be working on a series of posts that (I hope) will help people see the light and drink the TDD Kool-Aid. But first I want to know what is keeping you from loving TDD. Is it the process? Is it the learning curve? Is it the tooling? Help me help you.

Leave your reasons in the comments below, or tweet them to me. I’ll address as many as I can in future posts.

Code Coverage – You’re Doing it Wrong – Part 1

I love TDD. I practice it in my “day-to-day” developer life. I’ve written a book about it and speak about it at conferences everywhere I can. I even did a 30-post series on it for a previous employer. So I tend to get involved in a lot of discussions about TDD, BDD, ATDD and many of the concepts, practices and tools surrounding them. One topic that is paradoxically my most and least favorite is Code Coverage.

For those who may be unfamiliar with Code Coverage, allow me to explain. Developers write unit tests that exercise code. These tests are run and they either pass or fail. Code Coverage is the process of analyzing those test runs and determining which lines of non-test code are exercised (or run) by the tests. The data collected is generally viewed in two ways: which specific lines of code were exercised by a test and which were not (more on that later), and the percentage of the code base that was exercised.
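
As a tiny, invented illustration (exact line counts and percentages vary by tool):

```csharp
public static class Classifier
{
    public static string Classify(int n)
    {
        if (n < 0)
        {
            return "negative";     // exercised by the test below
        }

        return "non-negative";     // never exercised by any test
    }
}

// A single test such as:
//     Assert.AreEqual("negative", Classifier.Classify(-1));
// covers the "if" check and the first return but not the second,
// so a coverage tool flags that last return as uncovered and
// reports a line-coverage percentage below 100% for this class.
```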

Death by Numbers

Of the two pieces of data generated by code coverage, management tends to latch onto the percentage of the code base covered as a means of evaluating the effectiveness of the development team’s testing efforts. This makes sense. Most managers are trained to work with numbers, manage several teams (which makes looking at the actual lines covered impractical) and in some cases aren’t technical. In these cases, when analyzed correctly, the percentage can be a good metric to track.

The catch is the whole “when analyzed correctly” part. To the uneducated, 100% code coverage sounds like a great goal. The problem is that very little thought is put into how that number is arrived at by the code coverage tool.

When Is Your Code Not Your Code?

If you’re a .NET developer, you’re probably using some feature of the .NET Framework to support your presentation layer: either WPF or ASP.NET. And when you created your WPF or ASP.NET application, Visual Studio helped you out by creating a lot of… stuff. Mostly boilerplate code to handle “frameworkey” things that developers don’t want to deal with. No question: WPF and ASP.NET are HUGE timesavers for software developers.

And they are also the reason most software development projects will never reach the “100% code coverage” panacea that many managers crave.

And this is OK. But we need to understand how use of these libraries skews the code coverage percentage.

When a developer writes a unit test, they expect to exercise the code in the method they are testing, as well as the code in any private methods on the same class that the method under test uses. External resources, and ideally any third-party frameworks, should be mocked out and not covered by the unit test.

This is logical and what most developers expect. And if we’re writing code that’s not in the presentation project, our code coverage makes sense; the only code in the class or library (generally) is the code we create. And, of course, we would never DREAM of writing code without having a test for it first, right? RIGHT?!

It’s when we get into the presentation project that things get a little complicated. Remember all that code that Visual Studio created for you to take care of the stuff you didn’t want to do? Well, just because you didn’t write it doesn’t mean it’s not there.

If your Test-fu is strong, you are no doubt creating tests for your Controllers/Models/ViewModels/etc. That’s good. But, since you’re generally not running in the same application context during testing that you would be running in during execution, all that boilerplate helper code that was created is never going to be tested. Which means that while those lines count toward your “total code base” (the number of lines of code in your application), they can’t be covered by a unit test.

Even if you could reach it with a test, your test context is not going to have the services and facilities the WPF/ASP.NET runtimes need to execute correctly. And even if you could create those services and facilities, you would be flirting pretty heavily with having an integration test (still important!), but you wouldn’t really have a unit test anymore. In any case, trying to cover this code with a unit test is really just not practical.

We Should Probably Just Give Up Now Then, Right?

Nope.

The problem isn’t the boilerplate code; it’s the way we are measuring the code coverage percentage. Ideally we don’t even want the code coverage tool looking at this code for purposes of determining our “full code base.”

When determining what gets measured for code coverage metrics, we want to exclude the boilerplate code and only check the code that we (as the development team) actually write. We also want to make sure that we exclude any source code that ships as part of a framework or library we are using. Most .NET libraries ship as pre-compiled assemblies, but there are some that still ship with a source component; those need to be excluded. Does your application have integration tests? Guess what, those are probably being analyzed by your code coverage tool as well.

The good news is that most code coverage tools enable you to exclude code at the project or class level. Some even let you exclude at the method level. As your project continues to grow, I recommend that you update your “exclude list” to accommodate new boilerplate code, libraries and tests that will be added to your solution.
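
As one concrete option in .NET, the ExcludeFromCodeCoverage attribute (in System.Diagnostics.CodeAnalysis since .NET Framework 4.0) marks a class or member so that coverage tools that honor it, such as Visual Studio’s analyzer, leave those lines out of the totals. The class and members below are invented for illustration:

```csharp
using System.Diagnostics.CodeAnalysis;

public class StartupHelpers
{
    // Generated/boilerplate member: excluded from the coverage totals.
    [ExcludeFromCodeCoverage]
    public static void WireUpFrameworkPlumbing()
    {
        // ...plumbing we didn't write and won't unit test...
    }

    // Hand-written logic: stays in the measured code base.
    public static int Add(int a, int b)
    {
        return a + b;
    }
}
```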

By excluding the code you don’t write, your code coverage number should be more in line with the impact of your unit tests on your code. When you are only testing your code, 100% code coverage is very achievable. But what does a number that is less than 100% mean? In my next post on this topic I’ll discuss situations where your code coverage dips, what to do about it, and why it’s not always a bad thing.