Doing it the Right Way

Should types be placed on the left or the right side of a declaration? I have a strong opinion and, I believe, valid arguments for the right side.

Even though C# has had type inference for much longer than C++, the community is still debating its purpose. Meanwhile the C++ community has fully embraced type inference in the form of the auto keyword. C++ now even allows auto to be used in lambdas and for return types.

Once in a while I run a quick survey among C# developers about their preference. The response is always about 50/50, although overall I would say the tendency is more to avoid var than to embrace it. The argument I usually hear is that explicit types "provide better code readability". In my opinion this is more than dubious, since most C# developers also avoid Hungarian notation for being redundant. One could argue that Hungarian notation, too, was an improvement for readability, yet it merely encoded type information the compiler already knows. If Hungarian notation is considered redundant for that reason, then spelling out a type more often than required should be considered redundant as well.

But let's see what we may gain and why I think var should be fully embraced.

Specifying Intention over Implementation

We may clutter our code with types everywhere, but that does not really help. What is a type? It is just a "memory map". In fact a type is nothing more than a combination of bytes, where certain bytes may be grouped to form something like integers, floating point numbers, or pointers. Do we really care what types we handle? Aren't objects and instances much more important? Don't get me wrong: types are important. I don't think the concept of a type (struct, class, ...) is out of fashion. But why should I care which specific type is returned by a function? I only care about the functionality and/or the data that is returned.

The whole interface movement is part of that. Instead of relying on specific types we should program against more general interfaces. That way the concrete implementation is less important than the intention. What do we want to do with the code? Functionality is key.
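
As a small sketch of this idea (the Scores class and its contents are made up for illustration), a method can expose only the general contract it wants to promise rather than the concrete collection it builds internally:

using System.Collections.Generic;

public static class Scores
{
    // The signature states the intention (a read-only sequence of values),
    // not the implementation (the List<int> that happens to be built)
    public static IReadOnlyList<int> ReadScores()
    {
        var scores = new List<int> { 90, 75, 42 };
        return scores;
    }
}

Callers depend only on IReadOnlyList<int>, so the method body is free to switch to an array or any other conforming collection later.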

Now let's go one step further. A classic pattern in the .NET Framework is a TryX method. A Boolean value is returned, stating whether the operation was a success or not. The actual result of the operation therefore cannot use the return slot and has to be delivered another way. Instead of tuples the original design was to supply parameters by reference. A great design choice was to distinguish between incoming (ref) and outgoing (out) references. The pattern relies on the latter.

A simple example for parsing an integer would be:

int result;

if (int.TryParse("42", out result) == false)
{
    // Print some error
}

Now several questions arise. Do we really need to name the type twice? As a matter of fact we do (even though C# 6 was supposed to improve that, the particular feature was removed from the final version). But should we leave out the initialization? In this special case that would be better, but in general I would always go for explicit initialization. Choosing a good initialization value seems to be easy: zero! But this would not be honest, since we do not actually care about the value. Therefore we should choose the following:

int result = default(int);

Since we do not care about a specific value, this expresses our intent very well. But doing so would spell out the type three times: in the declaration, in the default expression, and in the call to int.TryParse. So let's just use var:

var result = default(int);

It is longer than the initial version, but it expresses our intent and is completely honest.
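
Putting the snippets together, the parsing example from above then reads:

var result = default(int);

if (int.TryParse("42", out result) == false)
{
    // Print some error
}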

Initialization is Declaration

In C# we can write some seemingly inconsistent variable initializations without any compiler warning; the compiler will do the right thing without introducing any casts. In C the following line could possibly punish us at runtime (even though only with a very small performance hit).

uint num = 42;

What is happening here? We declare num to be an unsigned integer (32-bit), but we assign (initialize) it with a signed integer literal (32-bit). The problem becomes more obvious once we use var (type inference).

var num = 42;

Now that the type is determined on the right (implicitly), we have to be careful about which initialization we use. But this solves two problems: first, readers can no longer be confused by the ambiguity shown above, and second, the compiler now chooses the type for us, which is always the most specific one possible (no more unintended generalization).

var num = 42u;

Here suffixes, such as u for unsigned, f for float, or m for decimal, come in very handy.
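
For reference, here are some common literal suffixes and the types var infers from them:

var i = 42;   // int (no suffix)
var u = 42u;  // uint
var l = 42L;  // long
var f = 42f;  // float
var d = 42.0; // double (a decimal point suffices)
var m = 42m;  // decimal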

Prevent Inconsistencies

Using the default(T) expression has a lot of advantages. Most importantly it brings the type to the right side, which should be the preferred side of placement. Another benefit is that we can finally exclude inconsistencies in a block of variable declarations. In the past we may have had code similar to the following block:

var element = argument.GetElement();
var parent = element.Parent;
MyClass result = null;

Such blocks do not only look ugly, but also carry a subtle problem that will be explained later. A better solution would have been to use default. Here we have:

var element = argument.GetElement();
var parent = element.Parent;
var result = default(MyClass);

Much more appealing. Additionally the intention is very clear: We do not care about the value of result. It will be provided later.

There is yet another advantage of structuring our code in the previously described way: we do not have to worry about what kind of type MyClass is. Obviously for the initial code, with its explicit null assignment, the type has to be a class. But in the improved version we could easily refactor the type to a struct without any compilation error.
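
A minimal sketch of that refactoring (MyClass is the hypothetical type from the snippets above):

// MyClass refactored from a class to a struct
public struct MyClass
{
    public int Value;
}

// MyClass result = null;      // compile error: null cannot be assigned
//                             // to the non-nullable value type MyClass
var result = default(MyClass); // still compiles: a zero-initialized MyClass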

Refactoring is Key

Yes, refactoring is definitely important. Most of the crucial refactorings, such as rename, extract method, or extract interface, are very well integrated into Visual Studio / the compiler services. But there are other refactorings, such as changing the return or argument type, the kind of type, or the argument order. Sometimes tools exist to eliminate the need for applying changes to dependent code, but even if such a tool exists, we should ask ourselves why we depend on a tool if we could have solved the problem in code already.
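
Consider a hypothetical GetValues helper (made up for illustration). When its return type changes, say from List<int> to int[], every call site that spelled out List<int> has to be edited, while call sites using var keep compiling as long as they only use members both types share:

static int[] GetValues() // originally: static List<int> GetValues()
{
    return new[] { 1, 2, 3 };
}

// List<int> values = GetValues(); // breaks after the refactoring
var values = GetValues();          // still compiles

foreach (var value in values)      // iteration works for both return types
{
    Console.WriteLine(value);
}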

The idea that one change in the code automatically triggers all the required follow-up changes is not new. It is, in fact, one of the guiding principles of programming. We want our programs to be as agile as possible; they should be easy to extend and to maintain. Using type inference and placing the type on the right side is, in my view, the only true way to provide this.

So let's embrace var! It may have been introduced with LINQ, but it is much more than a handy helper for when spelling out the type would be tedious. It is an agile refactoring machine.
