Dependency Injection is an Anti-Pattern

Why do we (need to) use a runtime pattern for things we already know at compile-time?

Dependency injection (DI) is a pretty great technique for extensibility. It lets lifetime management, reusability, decoupled implementations, and more be handled in a central location. We often also associate DI with inversion of control (IoC), which comes along as a natural side effect. Instead of writing code like

public class MyClass
{
  private readonly MyDependency _myDependency;

  public MyClass()
  {
    // The class hard-wires its own dependency
    _myDependency = new MyDependency();
  }
}

we can freely write

public class MyClass
{
  private readonly IMyDependency _myDependency;

  public MyClass(IMyDependency myDependency)
  {
    // The dependency is handed in from the outside (inversion of control)
    _myDependency = myDependency;
  }
}

provided we have properly registered an implementation for the respective interface. To fully use this pattern, MyClass itself obviously has to be created by the DI system as well. This is a natural follow-up, and as a result we want to put all (or as many as possible) of our service classes into DI (classes that represent records or simple DTOs obviously should not be part of DI; we do not want to use IoC there). With the standard .NET container, such a registration looks like the sketch below.
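Here is a minimal sketch using Microsoft.Extensions.DependencyInjection, assuming that MyDependency (from the first snippet) implements the IMyDependency interface:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Tell the container which concrete type satisfies the interface
services.AddTransient<IMyDependency, MyDependency>();
// Register MyClass itself so the container can construct it, too
services.AddTransient<MyClass>();

var provider = services.BuildServiceProvider();

// The container creates MyClass and injects IMyDependency - at runtime
var myClass = provider.GetRequiredService<MyClass>();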

Cool - so far, so good. DI gives us a lot of flexibility and is great for frameworks, libraries, and all kinds of systems that may want to switch or extend their dependencies (e.g., for testing). If DI gives us testability, flexibility, extensibility, and an easier-to-manage application (by decoupling dependencies), how can I describe it as an anti-pattern?

For me the reason is simple: it is totally unnecessary as a runtime mechanism. In 99% of cases (omitting all the 9s in the fraction) we are not building a system that can be extended at runtime. I have not yet seen a web application that supports loading new DLLs dynamically at runtime. Barely any desktop applications support this extension mechanism (and if they do, they use their own mechanism and will not just throw stuff from "external" DLLs into their DI). Everything else, e.g., swapping one or the other implementation for a test run or a mock service, is known at compile-time. Yet we still use runtime DI for this, which comes at a considerable runtime cost. The test setup sketched below makes this explicit.
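Consider what such a test-time swap actually looks like. This is a minimal sketch, assuming the IMyDependency interface and MyClass from above; MockDependency is a hypothetical hand-written stand-in, and the interface is treated as empty since its members never appear in this post:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// The "swap" is spelled out in source code: we know at compile-time
// that this run uses MockDependency, yet the container still resolves
// the mapping dynamically at runtime
services.AddTransient<IMyDependency, MockDependency>();
services.AddTransient<MyClass>();

var sut = services.BuildServiceProvider().GetRequiredService<MyClass>();

// Hypothetical hand-written stand-in used only for testing
public class MockDependency : IMyDependency { }

Nothing in this file depends on information that only becomes available at runtime; the entire resolution could have been decided by the compiler.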

Why are we wasting precious CPU cycles at runtime to build dynamic resolutions that impose more indirection than necessary, instead of resolving the DI-relevant interfaces at compile-time? C# (or .NET for that matter) could actually support DI via language constructs (or some kind of setup) that would fully replace the runtime DI we see today in most cases. Instead of performing all these registrations in some startup code, we would wire them up in a special file that is considered by the compiler (e.g., the csproj file).
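.NET source generators show that producing code at compile-time is possible today, so such a wiring step could emit plain constructor calls. The following is a purely hypothetical sketch - CompiledContainer and CreateMyClass are invented names for illustration, reusing the types from the earlier example - of what the compiler could generate:

// Usage: the whole object graph comes from plain constructor calls
var myClass = CompiledContainer.CreateMyClass();

// Hypothetical output of the compile-time wiring: the compiler has
// already decided that IMyDependency maps to MyDependency, so
// resolving MyClass requires no lookup tables and no reflection
public static class CompiledContainer
{
  public static MyClass CreateMyClass() =>
    new MyClass(new MyDependency());
}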

I feel that here - again - the JavaScript community has - despite the language's shortcomings and pitfalls - drastically overtaken .NET. Modules are much more flexible and can be fully resolved at compile-time (this is what a bundler does). For testing we can mock modules away (thus regaining the flexibility). If we want extensibility we refer to modules not via a constant string, but via variables or other dynamic resolutions. We get exactly what we want and can apply the optimizations we need.

Is it really that .NET's (or C#'s, for that matter) class-based / OOP approach clouds performance with unnecessary costs? Is compile-time / meta programming or post-compilation not a real thing in .NET? What do we need to do to bring to compile-time what belongs at compile-time? I am not against reflection (it's great), but quite often it is used not because we want to, but because we have to, i.e., instead of a compile-time mechanism we need to fall back to a runtime mechanism.

TL;DR: The real anti-pattern is using runtime for what should be done at compile-time.

