Dependency injection doesn’t strictly require frameworks – Code smells series

This post is part of a 10-week series by Dino Esposito (@despos) around a common theme: code smells and code structure.

Last week, we looked at using constructors vs. factory methods, and how sometimes factories can help make our code more readable. In this post, let’s look at the concept of dependency injection to help make services available in our code, as well as make it more extensible.

Dependency Injection (DI) is a widely used development pattern that serves the primary purpose of making services available to application code. Whenever a class needs the services of some external code (e.g., an instance of another class), you have two options. You can create a fresh new instance of the desired class directly in the calling code, or you can assume that someone will be passing you a valid instance of the service you need.

In other words, the contract between the class you’re writing and whoever happens to use it is such that users need to provide the class with the tools it needs to work – injecting its dependencies.

For many years, few people paid enough attention to writing loosely coupled code. Loosely coupled code is not code that works in a standalone context or on a disconnected software island. Rather, it is code where connections exist, but are strictly ruled and occur under the umbrella of clear contracts. Here’s a canonical example:
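
As a rough sketch of that example (the class names come from the article; the method bodies are assumptions for illustration):

    using System;
    using System.IO;
    using System.Text;

    public class BusinessTask
    {
        public void Perform()
        {
            // The dependency is created in place, right where it is needed.
            var logger = new Logger();
            logger.Log("Performing task...");
        }
    }

    public class Logger
    {
        // Logger, in turn, news up its own concrete TextWriter.
        private readonly TextWriter _writer = new DefaultWriter();

        public void Log(string message) => _writer.WriteLine(message);
    }

    public class DefaultWriter : TextWriter
    {
        public override Encoding Encoding => Encoding.UTF8;

        public override void WriteLine(string value) => Console.WriteLine(value);
    }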

The class BusinessTask makes internal use of a Logger class which, in turn, requires an instance of the TextWriter class. All necessary instances are created in place, right where they are needed. The code works beautifully!

But if you need to grow it (e.g., scale it up or out in some way), at some point it becomes problematic. It also becomes hard to test appropriately. The issue is that dependencies are neither encapsulated nor properly managed. If you ask ReSharper to draw the diagram of dependencies (from the ReSharper | Inspect menu), you get the following:

Dependency diagram generated with ReSharper

There are two main patterns to address the issue. One is Service Locator and the other is Dependency Injection. The Service Locator pattern works nicely with legacy code that you’re just trying to update to the next level. A service locator is a sort of centralized “instantiation-as-a-service” platform. A very common programming interface looks like this:
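
A hedged sketch of such an interface (the exact signatures are an assumption, as the article’s snippet isn’t reproduced here):

    using System;

    public interface IServiceLocator
    {
        // Resolve a valid instance of the requested type.
        object GetService(Type serviceType);
        T GetService<T>();
    }

    // Typical usage, assuming some ambient locator instance:
    // ILogger logger = locator.GetService<ILogger>();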

Internally, the service locator implementation will figure out whatever is necessary to create a valid instance of the specified type. To refactor your code to Service Locator, you just need to analyze known dependencies and abstract them out to a locator.

Even in small code bases, the help of an inspecting tool is crucial to doing a good job in a reasonable time. A tool like ReSharper will make it a breeze to find occurrences of types.

Dependency Injection (DI) is a different pattern that serves the same purpose. Compared to Service Locator, DI is more intrusive, as it requires changes to the public interface of the involved classes. For this reason, the rule of thumb is to go with Service Locator when working on legacy code, and to use DI for new code or code that can be significantly refactored and undergo radical changes.

The key to DI is to rely on interfaces rather than on specific implementations. To do that, you can extract an interface from each dependency in our example: Logger and DefaultWriter. Again, it’s a quick task for a tool like ReSharper. You can use the context menu (Refactor | Extract | Extract Interface) or the Refactor This action (Ctrl+Shift+R).

Extract interface from an existing class

When you complete the wizard, you will have a Logger class that implements an ILogger interface.
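
The extracted interface might look like this (the single Log method is an assumption based on the example):

    public interface ILogger
    {
        void Log(string message);
    }

    public class Logger : ILogger
    {
        // Implementation unchanged; the class now also implements ILogger.
        private readonly TextWriter _writer = new DefaultWriter();

        public void Log(string message) => _writer.WriteLine(message);
    }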

The next step is making BusinessTask depend on ILogger, rather than a specific implementation type such as Logger.

Programming BusinessTask to an interface rather than an implementation makes the class inherently more flexible from a customization and testability perspective. Ideally, with a DI approach you could refactor BusinessTask as:
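
A minimal sketch of the refactored class, using constructor injection:

    public class BusinessTask
    {
        private readonly ILogger _logger;

        // Whoever creates BusinessTask must now supply the logger;
        // the class no longer creates its own dependencies.
        public BusinessTask(ILogger logger)
        {
            _logger = logger;
        }

        public void Perform()
        {
            _logger.Log("Performing task...");
        }
    }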

An ILogger instance is now required to instantiate a BusinessTask object, and obtaining an instance of the logger is no longer BusinessTask’s concern. Any necessary setup code moves outside the class.

To complete the exercise, you should perform the same steps on the DefaultWriter class and inject that dependency into the Logger class.
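
Applied to the writer, the result might look like this (the IDefaultWriter name follows the article; the bodies are assumptions):

    public interface IDefaultWriter
    {
        void WriteLine(string message);
    }

    public class DefaultWriter : IDefaultWriter
    {
        public void WriteLine(string message) => Console.WriteLine(message);
    }

    public class Logger : ILogger
    {
        private readonly IDefaultWriter _writer;

        // The writer is now injected rather than created in place.
        public Logger(IDefaultWriter writer)
        {
            _writer = writer;
        }

        public void Log(string message) => _writer.WriteLine(message);
    }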

See the point? At the end of your work, in order to create an instance of BusinessTask, you first need to create an ILogger; and in order to create an ILogger, you need an IDefaultWriter. It may soon become a (long) chain. An Inversion-of-Control (IoC) framework just saves you from the burden of having to deal with all those instantiations manually.
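
For example, with a container such as Microsoft.Extensions.DependencyInjection (just one possible choice; the article doesn’t prescribe a framework), the wiring might look like this:

    using Microsoft.Extensions.DependencyInjection;

    var services = new ServiceCollection();
    services.AddTransient<IDefaultWriter, DefaultWriter>();
    services.AddTransient<ILogger, Logger>();
    services.AddTransient<BusinessTask>();

    var provider = services.BuildServiceProvider();

    // The container resolves the whole chain:
    // BusinessTask -> ILogger (Logger) -> IDefaultWriter (DefaultWriter).
    var task = provider.GetRequiredService<BusinessTask>();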

However, for relatively flat dependency diagrams, you can opt for “poor man’s DI”:
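
A minimal sketch, reusing the interfaces extracted above:

    public class BusinessTask
    {
        private readonly ILogger _logger;

        // Default constructor: picks concrete types on behalf of the caller.
        public BusinessTask()
            : this(new Logger(new DefaultWriter()))
        {
        }

        // Injection constructor: callers (and tests) can pass any ILogger.
        public BusinessTask(ILogger logger)
        {
            _logger = logger;
        }

        public void Perform()
        {
            _logger.Log("Performing task...");
        }
    }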

Poor man’s DI consists of defining an additional constructor that accepts the type(s) to inject. The default constructor calls into this additional constructor, providing an instance of a concrete type. The basic function of DI is fulfilled, but the more advanced features of an IoC container are clearly missing.

Refactored business logic to make use of interfaces

Refactored in this way, your code is inherently more extensible and, on a larger scale, easier to deploy.

Next week, we will look more closely at inheritance and composition.

Download ReSharper 2018.1.2 or Rider 2018.1.2 and give them a try. They can help spot and fix common code smells! Check out our code analysis series for more tips and tricks on automatic code inspection with ReSharper and Rider.

12 Responses to Dependency injection doesn’t strictly require frameworks – Code smells series

  1. Scott Hannen says:

    The title is about how DI doesn’t require a framework. But before it gets to that, this is one of the clearer explanations of DI that I’ve seen. Most explanations over-complicate this. I also appreciate that (as the title says) you left the frameworks out. That reduces the misconception that DI = using a framework, even though using a framework is likely the next logical step.

  2. Vasya says:

    Thank you for the article, it gets everything right, except the point that “The key to DI is to rely on interfaces”.

    It’s wrong: DI is about “consumers not managing their own instances” and only this. Relying solely on interfaces even runs contrary to OOP principles; interfaces are there to support polymorphism, and switching to interfaces everywhere defeats that purpose, since you have no different implementations, and having an interface there is unneeded boilerplate.

    You could say that interfaces are useful for testing — yes, it’s true, but an interface is only needed if, again, you have an additional (testing or mocked) implementation and, if testing is done right, the need for this is very rare. Having interfaces everywhere also encourages extensive use of mocking, which is known to make tests fragile, hard to write, and hard to support, and which violates abstraction/encapsulation (testing internals instead of inputs and outputs).

    I’d like to provide a link for further reading on more concise and modern DI approaches, but I don’t know where to find one. If somebody else has similar thoughts and experience with modern DI approaches — please share.

    • BobDontUseSoManyInterfaces says:

      I agree wholeheartedly. In the given example, even for testing purposes, it is only necessary to have an interface for the writer class.
      Advice like this is one of the things that encourages people to have an interface for every single class, which is one of the interface anti-patterns.

    • Dino Esposito says:

      Hi Vasya and BobDontUseSoManyInterfaces and everybody else!
      After a bit of rethinking about the actual meaning of the words, I admit your point here is a good one. The “key” to DI is more about managing instances, delegating their actual creation to the outside dark space, than about managing interfaces. Otherwise, as Bob pointed out, you soon recursively reach the point where every class has an interface. Which is silly.
      For what it’s worth, when I used the word “key” I was actually thinking one or maybe two steps beyond the plain management of instances, about what it may mean business-wise: managing the functionality a layer may depend upon and making sure everything injected exposes a compatible interface so things work smoothly. Hence the misconception. Thank you all for pointing it out!

  3. Vadym says:

    Thanks for the article! Small suggestion: “Poor man’s DI” has been retired by its author: http://blog.ploeh.dk/2014/06/10/pure-di/. I wasted a lot of time trying to explain what is explained in that article to my colleagues just because of it. Nobody wants to be a poor man :). It would be great if it were replaced with the newer, self-explanatory term “Pure DI”.

    • Dino Esposito says:

      It’s probably my attitude to (try to) keep a low profile and look much less attractive than I actually am. I really like the expression “poor man’s” :)

  4. mdeschaepmeester says:

    I too would like to see some more resources on DI like this, especially ones where the author understands that interfaces are not a requirement for DI!

    Our codebases are indeed full of interface-implementation pairs (so-called header interfaces). Mark Seemann drives that point home in a wonderful article: http://blog.ploeh.dk/2010/12/02/Interfacesarenotabstractions/ and a follow-up: http://blog.ploeh.dk/2010/12/03/Towardsbetterabstractions/.

    And indeed we notice that our tests are too brittle and stand in the way of refactoring. So we are looking for a different take on it… but it’s easier said than done.

    Thanks for the article Dino.

  5. stan4th says:

    Hi Vasya, one thing related to what you’ve said, which I often question in my mind, is the use of interfaces implying more than one possible implementation. But if you initially start with only one, you might not know what the future holds – so how do we decide? If we leave them as concrete types, then we are potentially creating future refactoring work. Any thoughts?

    • BobDontUseSoManyInterfaces says:

      Apply the YAGNI (You Ain’t Gonna Need It) principle.

      Add things to code only when you need them, not because of “what if one day…” scenarios. This is hard for many programmers, as our natural instinct is to keep asking those questions.

      You should write your code so that modifying it is easy. You should NOT try to write your code to anticipate all future needs, because you can’t.

      It is not actually a difficult change to add an interface reference when you have a second implementation of it, because you won’t have used the first implementation in many places. Right? :-)

    • Bleh says:

      Ah, the classic “But we might need it later” argument.

      Unfortunately, this excuse is very common, and very problematic.

      Either you know you need something now, you know you’ll need it soon, or you’re fooling yourself.

      If you know you need something now or in the next version, you work it into the design and do it. But don’t pretend you’ll know what the future holds. Odds are you won’t need what you think you need, and even if you do, you’ll almost certainly have factors that change things from what you’re assuming right now.

      Bottom line: Do what you need when you need it. Don’t waste time and resources, which leads to more maintenance and cognitive overhead, “just in case I need it later”.

      Imagine if you threw every possible design pattern at an app just in case. Imagine what a mess physical products would be if the designers and engineers tried to toss in extensibility and features to cover all sorts of hypothetical future use cases. It would be a mess and a failure.

  6. Scott Hannen says:

    It’s accurate to say that DI depends on interfaces if we’re referring to the publicly accessible members of the dependency. In C# terminology, classes and interfaces are distinct entities. More generally speaking, classes have interfaces.

    For someone learning DI, it’s perfectly fine to start by injecting literal interfaces. We can use interfaces as abstractions, and we should depend on abstractions, which means that we’re injecting abstractions, which often means that we’re injecting interfaces. (Note that I was careful not to equate abstractions and interfaces.)

    Defining interfaces for dependencies also works along with interface segregation.

    DI doesn’t strictly require that the injected dependency be an abstraction. It could be a sealed class. That’s still better than a class creating its own dependency. But if I’m explaining DI, I’d rather show (as the author did) how we can use it to apply Dependency Inversion.

  7. Bleh says:

    One potential downside to using DI this way, at the object instantiation level, is performance.

    If your classes have a lot of functions which require a lot of dependency classes, you could be instantiating dozens or even hundreds of objects used by functions you’re not even calling.

    This may not be an issue for most people but I wanted to mention it because if you have classes that are fairly resource hungry to instantiate or you need every drop of performance you can get, you’ll want to avoid creating lots of objects you’re not using, which then adds to garbage collection overhead.

    And that’s not even taking into account any overhead your IoC framework adds.

    In this case, I’d suggest a service locator and let the functions dictate which objects need to be created and only create those.

    And nobody wants to deal with a million interfaces either. I’ve yet to see any real-world benefits from DI beyond making it simpler to refactor code when constructor interfaces change or are added/removed, and making things somewhat easier to test if you truly do need to do unit-level testing with mocks.

    If you need to unit test functions that you really need to mock objects for then go with interfaces there and only there.

    It’s usually better to do higher level integration tests anyhow. Far less test code to write and maintain and your test code is far less brittle. Save unit tests for complex functions that need to be tested directly.

    If your error handling and logging is what it should be, integration testing should pinpoint any unit level issues for you.

    It’s horrendous to see these monolithic unit testing projects that start to get as complex as the app they’re testing.
