Test Driven Development (TDD), and the ever-increasing trend towards treating unit testing as a religion among developers, is probably a good thing for software quality in general. But I've just spent a couple of weeks trying to employ the recommended unit testing techniques with Entity Framework 4, and the experience has reinforced my misgivings about the wisdom of some of the basic concepts behind how unit testing is supposed to be done.
I've always been somewhat lukewarm towards unit testing. I certainly don't do as much testing as I probably should, but I do agree that some unit testing is essential to developing a quality application. So I always do some unit testing, but only to the degree that I feel the testing is appropriate or necessary.
But unit testing has grown in popularity over the last 10 years, to a point where it appears that common sense is not a part of the process anymore.
The source of my misgivings about this blind commitment to unit testing deals with one of the fundamentals: that a unit test should only test logic within the specific block of code under test.
I don't disagree with this principle, but the sticking point is that you have to fake your code's dependencies, especially the external ones like real databases and external services. In most modern applications, external dependencies are pervasive and sometimes very hard to avoid.
To support testing without hitting external dependencies, you generally design your code in a way that allows for dependency injection. There are several ways to do dependency injection, and the techniques vary widely in complexity. On the simpler side, you can just use overloaded constructors or write methods so they take their dependencies as parameters. On the other end are horribly complex programming acrobatics such as the Inversion of Control (IoC) design pattern.
No matter which technique you use, the overall point is that your unit tests are able to supply their own versions of any dependencies instead of using the real ones. These mock (fake) dependencies mimic the behavior of external systems or other complex code routines.
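To make that concrete, here is a minimal sketch of the simpler constructor-injection style in C#. The `ITicketRepository`, `TicketService`, and `FakeTicketRepository` names are hypothetical, invented for illustration; they aren't from TicketDesk or any real library:

```csharp
// Hypothetical example: constructor injection lets a test
// supply a fake dependency in place of a real database.
public class Ticket
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public interface ITicketRepository
{
    Ticket GetTicket(int id);
}

public class TicketService
{
    private readonly ITicketRepository _repository;

    // The real app passes in a database-backed repository;
    // a unit test passes in a fake instead.
    public TicketService(ITicketRepository repository)
    {
        _repository = repository;
    }

    public bool IsOpen(int id)
    {
        return _repository.GetTicket(id).Status == "Open";
    }
}

// A hand-rolled fake that stands in for the database in tests.
public class FakeTicketRepository : ITicketRepository
{
    public Ticket GetTicket(int id)
    {
        return new Ticket { Id = id, Status = "Open" };
    }
}
```

A test can then exercise `TicketService.IsOpen` with `new TicketService(new FakeTicketRepository())`, without a database server anywhere in sight.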
I'm not opposed to making some concessions in my code in the name of enhanced testability. Indeed, many of the simpler dependency injection techniques lead to better code in general. But there are cases where simple techniques just don't work. This is where I start to have serious misgivings about the wisdom of unit testing.
I've always held the opinion that fancy OOP tricks, complex inheritance hierarchies in particular, should only be used if there are direct benefits to the application. Likewise, employing advanced design patterns is only worth the effort if the patterns solve issues within your application's actual problem domain.
To some degree, facilitating better unit testing can be considered a good reason to employ some of these more advanced techniques. Indeed, the TDD guys will usually tell you that unit testing is part of the application's problem domain, and that a full suite of unit tests must be part of the delivered product. I see their point, but I believe there are cases where this is misguided, and unwise.
In my opinion, you should never compromise your application's primary requirements just for the sake of unit testing. Testing is at best a secondary concern: an optional requirement to meet when it doesn't interfere with other priorities.
But that doesn't stop developer after developer from doing insanely stupid shit to their code for the sheer sake of testability. I've seen so many otherwise simple applications become bloated and convoluted beyond reason, often to the detriment of both the developers and the application's functionality.
The most recent case of this that I've run into was with the next version of TicketDesk. I have a class library that encompasses the application's core domain logic. Here I had planned to auto-generate my entity model from my database, add some data annotations and custom logic of my own, then just create a simple service API to expose the model to my web application and, potentially, to other clients such as Silverlight in the future.
All in all, this is a pretty standard design. My entire reason for choosing Entity Framework was to leverage its more advanced features, both within the library and within the client code consuming it.
After spending a solid two weeks exploring how to make Entity Framework 4 testable the "correct" way, I ended up with an insanely complicated, hand-hacked variation of a standard generated EF model. To support the necessary dependency injection, especially around the problematic EF ObjectContext, I had produced gobs of abstract interfaces and classes. Return types at the service API had become so abstract that consuming clients lost access to all of the special functionality of EF models: the very features that had made EF an appealing choice in the first place.
I got far enough into this to prove that a useful level of testability could be achieved, even with the dreaded ObjectContext. But well before I reached that point, it was apparent that doing so would make my code a complex and time-consuming maintenance nightmare.
I still needed to be able to unit test my custom logic and service layer, but I was done with doing it "the right way". Instead, for my tests I pointed the ObjectContext at a real database server, used the built-in CreateDatabase method to generate a temporary fake database schema from the EF model, and wrote my tests against that.
My tests are able to add any fake data needed directly to the temporary database via the strongly typed and auto-generated ObjectContext. With this approach, I can still test the relevant code blocks, but without jumping through hoops to dodge a dependency on a database server.
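Here's roughly what that setup looks like, sketched with MSTest. The `TicketDeskEntities` context, the `Tickets` entity set, and the `TestConnection` connection string are stand-ins for whatever your generated EF4 model actually exposes; only the `CreateDatabase`, `DatabaseExists`, and `DeleteDatabase` methods come from the real `ObjectContext` API:

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TicketServiceTests
{
    // Hypothetical generated ObjectContext for the TicketDesk model.
    private static TicketDeskEntities _context;

    [ClassInitialize]
    public static void SetUp(TestContext testContext)
    {
        // Point the context at a real test server...
        _context = new TicketDeskEntities("name=TestConnection");

        // ...and build a throwaway schema from the EF model.
        if (_context.DatabaseExists())
        {
            _context.DeleteDatabase();
        }
        _context.CreateDatabase();
    }

    [TestMethod]
    public void GetTicket_ReturnsSeededTicket()
    {
        // Arrange: seed fake data through the strongly typed context.
        _context.Tickets.AddObject(new Ticket { Title = "Test ticket" });
        _context.SaveChanges();

        // Act / Assert against the real (but temporary) database.
        var ticket = _context.Tickets.First(t => t.Title == "Test ticket");
        Assert.IsNotNull(ticket);
    }

    [ClassCleanup]
    public static void TearDown()
    {
        _context.DeleteDatabase();
        _context.Dispose();
    }
}
```

The tests run a little slower than pure in-memory tests, but the application code under test stays completely untouched.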
Sure, it isn't technically the "correct" way to do unit testing... and yes, many of my tests do spend a bit of time talking to the external database or managing the test data within it. But all of the extra complexity is encapsulated directly within the unit tests or helper classes within the test project, not junking up my actual application.
I'm sure the more fanatical unit testing advocates, especially the TDD guys, would foam at the mouth at this approach, but I have nearly 100% code coverage in my tests (excluding generated code). And for me, the biggest advantage is that my application code is simple, elegant, effective, and highly maintainable...
Isn't that the whole point of unit testing?