Introduction
With the rise of sophisticated backend services living in the cloud, the need for efficient (i.e., developer-friendly) APIs has exploded as well. Luckily, we have great examples of wonderful APIs out there. Unfortunately, building our own APIs in a similar fashion is hard work.
It turns out that applying REST correctly is not only tedious, but quite a vast undertaking. Many things need to be taken into consideration, and every endpoint requires a whole set of audits to be even remotely considered stable or complete.
If we really want to think in terms of resources instead of actions / operations, we need to fully understand the HTTP verbs and status codes and their implications. Two really interesting verbs are PUT and PATCH. In theory, their distinction is straightforward: Both perform an update on an existing resource (and therefore require a proper identifier to be able to target a specific resource). However, while PUT can be considered a standard "replace", the PATCH operation performs a value-by-value update, only considering the provided properties.
We can also refer to the PATCH operation as a partial PUT.
In this article we want to explore how to efficiently create a partial PUT endpoint ourselves. It turns out that ASP.NET already delivers an object for such partial updates; however, that object comes with its own resource definition and does not purely reflect the RESTfulness that we desire. In this article we therefore start from scratch.
Background
In our own APIs we do not have any PATCH endpoints. The reason is simple: All our PUT endpoints are by definition considered partial. One may dislike this choice, but considering that an API can only be useful if it is used accordingly, we decided to go for it. All our developers associated a PUT with a partial update, while no one had heard of a PATCH operation. As a consequence, we simplified our design.
Why is a partial PUT so useful anyway? First of all, it reduces traffic. If we have a large resource, we only need to send the fields containing updated values back to the server. Implicitly, we do not overwrite values that we did not care about. Finally, if we know the id of a resource and the new values of some fields, we do not need to fetch the full resource to perform the update.
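As an illustration, a partial PUT request could be as small as the following (endpoint and payload are made up for this example):
PUT /api/articles/42
{ "title": "New title" }
All other properties of the stored resource remain untouched on the server.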
All in all, working on a partial update instead of a full update can have real benefits. The only major drawback is that we may want the consistency check / assurance of the full resource as provided by the client, to avoid surprises from the server. A sample scenario may be that the API checks that property B has a certain value depending on a property A. Let's say someone makes a valid change of B and A while we are also making a change of only B, which would be valid under the current A, but invalid with the upcoming A. We would be surprised to get back an error, as our change (thus our resource) looked fine while we checked it. Such race conditions are certainly not eliminated, but certainly reduced, with a full update. Note, however, that these are special cases only occurring under certain very special validation rules.
Problem Description
So what do we want to achieve? Let's say we start with the following controller:
public class SampleController : Controller
{
    public IActionResult Put(Model model)
    {
        // ...
    }
}
Obviously, this controller has only a single action, which leads to a full / normal PUT operation, i.e., a complete update of a resource. This action is also properly reflected in a generated Swagger documentation. Furthermore, the validation from the framework kicks in to prevent invalid values from going into our action.
Now we want to go over to a partial PUT. What do we want to obtain here?
- The generated Swagger should reflect the partial-ness of the action (i.e., every property should be optional)
- The validation should respect the partial-ness of the action (i.e., if a property is specified, it must fit the model)
- We need to know which property has been set and which one was omitted - just having null everywhere is not good enough
- Properties that have been specified but are not in the model should lead to an invalid input
- We can still work with / on the partial model
Optionally, we want to be able to exclude certain properties of our full model in a partial update. We want to be able to express this with attributes on the original (i.e., full) data model.
All in all the approach should feel like the following:
public class SampleController : Controller
{
    public IActionResult Get()
    {
        // returns a Model instance, e.g., via ObjectResult
    }

    public IActionResult Post(Model model)
    {
        // ...
    }

    public IActionResult Put(Part<Model> model)
    {
        // ...
    }
}
So we can reuse the same model over and over again - to truly reflect the resource-based approach. For GET we return an instance of the model, in a POST we expect a full model definition to be handed over, and a (partial) PUT uses a special version of this model that allows only a fragment to be passed in.
Approaches: Tackling the Problem in Newtonsoft.Json
What can be so hard about writing a simple JSON deserializer that is capable of indicating what keys have been used? After all, a simple JObject with a bit more code could already do the job, right? So let's try some approaches.
We could simply try to place a different converter on top of the model. Something along those lines:
[JsonConverter(typeof(MyJsonConverter))]
public class Model
{
    // ...
}
Now this converter would (always) be used to deserialize some JSON string into a Model instance. As we only want this for a partial endpoint, we could just do the following instead:
public class Model
{
    // ...
}

[JsonConverter(typeof(MyJsonConverter))]
public class PartialModel : Model {}
The problem with this approach starts once we want to implement the actual converter. What do we want? First, we need to obtain some information about what keys have been supplied in the JSON. The simple solution would be to convert to a JObject (essentially a dictionary) first, then use this as a basis to form the real model instance.
public class MyJsonConverter : JsonConverter
{
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        // not needed - we only deserialize (see CanWrite)
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        var data = serializer.Deserialize(reader, typeof(JObject));
        var raw = JsonConvert.SerializeObject(data);
        // unfortunately, there is no way to do another conversion directly without much trouble
        // ideally, we would just convert from JObject to [objectType]
        return JsonConvert.DeserializeObject(raw, objectType);
    }

    public override bool CanWrite => false;

    public override bool CanRead => true;

    public override bool CanConvert(Type objectType) => true;
}
However, as a result we will hit a wall in Newtonsoft.Json. The original converter cannot be removed, and Newtonsoft will always try to re-use the current converter, essentially resulting in a StackOverflowException (due to the recursion, which is never resolved).
Obviously, the problem cannot be tackled so easily. Apparently, there is no way to directly invoke the "base deserializer". One potential solution would be to perform some very nasty hacks on Newtonsoft's internals.
By modifying the "known" converters we can simply teach Newtonsoft to "forget" about our custom converter for a moment. The problem is that this is not a robust (i.e., future-proof) way of handling things, plus we will definitely run into cross-threading issues. Having race conditions or code that is non-deterministic and will crash / behave inappropriately depending on the machine's state is not what we desire.
serializer.Converters.Remove(this);
var result = serializer.Deserialize(reader);
serializer.Converters.Add(this);
return result;
Even though this may work in some versions of Newtonsoft, such an approach would easily be outdated once the internals change. There is no guarantee - even for patch releases - that the internal API stays stable. Furthermore, the shown approach is not thread-safe and thus not really well suited for any production system, especially web apps.
Now that we have failed with the obvious and the creative approach, it is time to tackle this problem in a more structured way.
Solution: Proper Wrapping is the Key
We have already seen that eventually we would need to implement the whole deserializer ourselves. This, of course, is way too much work and not what we want. How about just re-using some internals? Indeed, we tried that, but faced multiple other challenges. Nevertheless, there is a middle way.
The potential solution to the problem is a converter that works on an object that contains a reference to another object. The "outer" object would store the reference and all keys seen when deserializing the inner object. Think about it like this:
public class Part<T>
{
    public Part(T data, String[] keys)
    {
        Data = data;
        Keys = keys;
    }

    public T Data { get; }

    public String[] Keys { get; }
}
The advantage of this approach is that we could define a custom converter only for the outer object, which could make use of the standard converter for the inner type. In order to get the keys we would need a different mechanism though.
Since the converter also needs an instance of a JsonReader, we could just write a wrapper that is sensitive to the property (name) tokens. Presumably, we get all the name tokens, hence we would need to integrate a top-level check (we don't want any nested partials).
The following code snippet shows the Part class, which is the container for all relevant information. This type has the Data property representing the deserialized .NET object and the Keys property referring to the used / found keys in the original JSON. As mentioned, the keys only refer to the top-level keys.
[JsonConverter(typeof(PartialJsonConverter))]
public class Part<T>
{
    public Part(T data, IEnumerable<string> keys)
    {
        Data = data;
        Keys = keys.ToArray();
    }

    public T Data { get; }

    public string[] Keys { get; }

    public bool IsSet<TProperty>(Expression<Func<T, TProperty>> property, Action<TProperty> onAvailable = null)
    {
        var info = GetPropertyInfo(Data, property);
        var name = info.Name;
        var attr = info.GetCustomAttribute<JsonPropertyAttribute>();
        var available = Keys.Contains(attr?.PropertyName ?? name);

        if (available)
        {
            onAvailable?.Invoke((TProperty)info.GetValue(Data));
        }

        return available;
    }

    private static PropertyInfo GetPropertyInfo<TProperty>(T source, Expression<Func<T, TProperty>> propertyLambda)
    {
        var type = typeof(T);
        var member = propertyLambda.Body as MemberExpression ??
            throw new ArgumentException($"Expression '{propertyLambda}' refers to a method, not a property.");
        var propInfo = member.Member as PropertyInfo ??
            throw new ArgumentException($"Expression '{propertyLambda}' refers to a field, not a property.");

        if (type != propInfo.ReflectedType && !type.IsSubclassOf(propInfo.ReflectedType))
            throw new ArgumentException($"Expression '{propertyLambda}' refers to a property that is not from type {type}.");

        return propInfo;
    }
}
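For illustration, IsSet can then be used like this (part is a Part<Model>, existing is the stored Model; the Name and Age properties are assumed for this sketch):
// Only invoked when "name" was present in the payload
part.IsSet(m => m.Name, name => existing.Name = name);

// Without the callback we can still probe for presence
if (part.IsSet(m => m.Age))
{
    existing.Age = part.Data.Age;
}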
Next, we define the converter (PartialJsonConverter) as shown in the following code:
public class PartialJsonConverter : JsonConverter
{
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        // Should only be used for deserialization, not serialization
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        var innerType = objectType.GetGenericArguments()[0];
        var wrapper = new JsonReaderWrapper(reader);
        var obj = serializer.Deserialize(wrapper, innerType);
        return Activator.CreateInstance(objectType, new[] { obj, wrapper.Keys });
    }

    public override bool CanWrite => false;

    public override bool CanRead => true;

    public override bool CanConvert(Type objectType) =>
        objectType.IsGenericType && objectType.GetGenericTypeDefinition() == typeof(Part<>);
}
All the magic is now contained in the JsonReaderWrapper, which is just a wrapper around the standard JsonReader instance that Newtonsoft already gives us. The advantage is that we can use this reader to "track" what keys have been seen.
In practice this looks as follows:
public class JsonReaderWrapper : JsonReader
{
    private readonly JsonReader _reader;
    private int _level = 0;

    public JsonReaderWrapper(JsonReader reader)
    {
        _reader = reader;
    }

    public List<string> Keys { get; } = new List<string>();

    public override bool Read()
    {
        var result = _reader.Read();

        if (_reader.TokenType == JsonToken.StartObject)
        {
            _level++;
        }
        else if (_reader.TokenType == JsonToken.EndObject)
        {
            _level--;
        }
        else if (_level == 0 && _reader.TokenType == JsonToken.PropertyName)
        {
            Keys.Add(Value as string);
        }

        return result;
    }

    public override char QuoteChar => _reader.QuoteChar;
    public override JsonToken TokenType => _reader.TokenType;
    public override object Value => _reader.Value;
    public override Type ValueType => _reader.ValueType;
    public override int Depth => _reader.Depth;
    public override string Path => _reader.Path;
    public override int? ReadAsInt32() => _reader.ReadAsInt32();
    public override string ReadAsString() => _reader.ReadAsString();
    public override byte[] ReadAsBytes() => _reader.ReadAsBytes();
    public override double? ReadAsDouble() => _reader.ReadAsDouble();
    public override bool? ReadAsBoolean() => _reader.ReadAsBoolean();
    public override decimal? ReadAsDecimal() => _reader.ReadAsDecimal();
    public override DateTime? ReadAsDateTime() => _reader.ReadAsDateTime();
    public override DateTimeOffset? ReadAsDateTimeOffset() => _reader.ReadAsDateTimeOffset();
    public override void Close() => _reader.Close();
}
Since we are inheriting from the standard JsonReader, we need to redirect all calls to the wrapped JsonReader. Obviously, this is quite a list (and many of these members will not be needed during standard operations, but one never knows), but they all follow the same schema.
The only thing we need to take special care of is the Read method. This one gets the most attention of the whole solution. Let's see the code again and dissect it in detail.
// Read the next token
var result = _reader.Read();

if (_reader.TokenType == JsonToken.StartObject)
{
    // If we start (another) object increase the nesting level
    _level++;
}
else if (_reader.TokenType == JsonToken.EndObject)
{
    // If we end an existing object decrease the nesting level
    _level--;
}
else if (_level == 0 && _reader.TokenType == JsonToken.PropertyName)
{
    // If we encounter a property name at the "base" level
    // we should add its value (i.e., the name) to the keys
    Keys.Add(Value as string);
}

// Act as the "normal" Read - return the seen token
return result;
Essentially, we introduce the special logic of handling the (nested) objects with their properties. If we encounter a property of the "base" object (the partial one), we additionally store its name.
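For example, deserializing the following document into a Part<Model> (a Model with matching properties is assumed) records only the top-level keys:
var json = "{ \"name\": \"Foo\", \"address\": { \"city\": \"Bar\" } }";
var part = JsonConvert.DeserializeObject<Part<Model>>(json);
// part.Keys is now ["name", "address"];
// "city" belongs to a nested object and is not tracked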
Bonus 0: Useful Extension Methods
For working with the given keys we can introduce two extension methods. One gives us the JSON property name from the property and the other gives us the property from a JSON property name.
public static class JsonExtensions
{
    public static string GetJsonPropertyName(this PropertyInfo info)
    {
        var name = info.Name;
        var attr = info.GetCustomAttribute<JsonPropertyAttribute>();
        return attr?.PropertyName ?? name;
    }

    public static PropertyInfo GetPropertyFromJson(this Type type, string jsonPropertyName)
    {
        foreach (var property in type.GetProperties())
        {
            if (property.GetJsonPropertyName().Equals(jsonPropertyName, StringComparison.OrdinalIgnoreCase))
            {
                return property;
            }
        }

        return null;
    }
}
Since the used enumerable of keys refers to the JSON property names, we need such helpers to map from one name to the other (or from a JSON name to a POCO property).
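A quick illustration, assuming a Model with a FirstName property decorated with [JsonProperty("first_name")]:
var property = typeof(Model).GetPropertyFromJson("first_name");
// property.Name == "FirstName" (the POCO property)
var jsonName = property.GetJsonPropertyName();
// jsonName == "first_name" (the JSON name)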
Bonus 1: Ignoring and Naming Properties
So far so good. Potentially, we also want our (reused) DTOs to have special entries that are intentionally left out in partial PUTs. The following attribute should do well:
[AttributeUsage(AttributeTargets.Property, Inherited = true, AllowMultiple = false)]
public sealed class IgnoreInPartialPutAttribute : Attribute
{
}
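Decorating a model could then look like this (the Owner property is just made up for illustration):
public class Model
{
    public string Name { get; set; }

    // May be set on POST, but never via a partial PUT
    [IgnoreInPartialPut]
    public string Owner { get; set; }
}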
The attribute alone is, of course, not sufficient. We will use the attribute to decorate properties that should only be set during normal (e.g., POST) operations. However, right now there is no logic associated with the attribute. Therefore, we will need another useful extension method. We call it Validate and its responsibility is to perform validation on any Part object.
public static bool Validate<T>(this Part<T> partialInput)
{
    foreach (var key in partialInput.Keys)
    {
        var type = typeof(T);
        var info = type.GetPropertyFromJson(key);

        if (info == null || !partialInput.IsSet(info))
        {
            return false;
        }
    }

    return true;
}
This utility function goes over all provided keys and gets their corresponding .NET property info as defined earlier. Then we check if no such mapping exists, or if the property cannot be set via the partial input. The latter is directly associated with our attribute (or other attributes, such as the general JsonIgnore attribute from Newtonsoft.Json).
private static bool IsSet<T>(this Part<T> partialInput, PropertyInfo info)
{
    if (!info.IsJsonIgnored() && !info.IsJsonForbidden())
    {
        var key = info.GetJsonPropertyName();
        return partialInput.Keys.Contains(key);
    }

    return false;
}
The two extension methods (IsJsonIgnored and IsJsonForbidden) are pretty much self-explanatory. They only look for the occurrence of the respective attributes on the given property info.
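A minimal sketch of these two helpers, assuming that "forbidden" simply maps to our IgnoreInPartialPutAttribute, could look like this:
public static bool IsJsonIgnored(this PropertyInfo info) =>
    info.GetCustomAttribute<JsonIgnoreAttribute>() != null;

public static bool IsJsonForbidden(this PropertyInfo info) =>
    info.GetCustomAttribute<IgnoreInPartialPutAttribute>() != null;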
Bonus 2: Swagger Generation Using Swashbuckle
So far so good, but we are not done yet. In the end, our API should be nicely documented, and having a proper Swagger generation is an absolute must to get this done.
There are many options to achieve this; in our case we will pick Swashbuckle for no particular reason other than that we can.
To teach Swashbuckle something about how it should generate the Swagger documentation / JSON schema for our API we need to configure it. In our case the configuration looks similar to the following lines:
public static IServiceCollection AddSwaggerDoc(this IServiceCollection services)
{
    services.AddSwaggerGen(config =>
    {
        config.SwaggerDoc("v1", new OpenApiInfo
        {
            Title = "Awesome Service",
            Description = "Description of the awesome service",
        });

        foreach (var path in GetXmlDocPathsOfAssemblies())
        {
            config.IncludeXmlComments(path);
        }

        config.EnableAnnotations();
        config.DocumentFilter<PartFilter>();
        config.SchemaFilter<PartFilter>();
    });

    return services;
}
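The helper GetXmlDocPathsOfAssemblies is not spelled out here; a minimal sketch could simply enumerate the XML documentation files next to the application binaries:
private static IEnumerable<string> GetXmlDocPathsOfAssemblies() =>
    Directory.GetFiles(AppContext.BaseDirectory, "*.xml");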
The crucial part is the registration of the PartFilter. These filters are used by Swashbuckle to determine how certain types are converted. We register it twice - once as a filter for the whole Swagger document and once as a filter for the specific JSON schema.
public sealed class PartFilter : IDocumentFilter, ISchemaFilter
{
    private static readonly string PartOfTName = Regex.Replace(typeof(Part<>).Name, @"`.+", string.Empty);
    private static readonly string PartOfTSchemaKeyPattern = $@"{PartOfTName}\[(?<Model>(.+))\]";

    public void Apply(OpenApiDocument doc, DocumentFilterContext context)
    {
        foreach (var schemaPair in doc.Components.Schemas)
        {
            if (Regex.IsMatch(schemaPair.Key, PartOfTSchemaKeyPattern))
            {
                try
                {
                    ModifyPartOfTSchema(context, schemaPair);
                }
                catch
                {
                    // Don't crash if this fails for one schema.
                    // In the worst case, our Swagger doc contains some additional information.
                }
            }
        }
    }

    public void Apply(OpenApiSchema schema, SchemaFilterContext context)
    {
        if (context.SystemType.IsGenericType && context.SystemType.GetGenericTypeDefinition() == typeof(Part<>))
        {
            var wrappedType = context.SystemType.GetGenericArguments().First();
            var ignoredPropertyNames = wrappedType
                .GetProperties()
                .Where(prop => Attribute.IsDefined(prop, typeof(IgnoreInPartialPutAttribute)))
                .Select(GetJsonPropertyName)
                .ToList();

            schema.Extensions.Add(
                nameof(IgnoredInPartialPutExtension),
                new IgnoredInPartialPutExtension { PropertyNames = ignoredPropertyNames }
            );
        }
    }
}
Swagger names generic models like this: GenericClass[TypeParam]. This means that every model wrapped in a Part<T> gets a name such as Part[Model]. The used regular expression detects this.
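To make this concrete, the two static fields in the filter evaluate as follows (illustrative values):
// typeof(Part<>).Name == "Part`1", so PartOfTName == "Part"
// PartOfTSchemaKeyPattern == @"Part\[(?<Model>(.+))\]"
// => matches schema keys such as "Part[Model]"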
Because we don't want the Part wrapper to show up in the final Swagger document, we rewrite every schema whose name matches the regular expression so that it mirrors the wrapped model.
While doing so, we also remove any property that appears in the list of ignored properties.
private static void ModifyPartOfTSchema(DocumentFilterContext context, KeyValuePair<string, OpenApiSchema> schemaPair)
{
    var mySchema = schemaPair.Value;
    var partDataProperty = mySchema.Properties.First(p => p.Key == "data").Value;
    var referencedSchemaId = partDataProperty.Reference.Id;
    var referencedSchema = context.SchemaRegistry.Schemas[referencedSchemaId];
    var referencedSchemaProperties = referencedSchema.Properties;

    // Work on a deep clone, so that the original model schema stays untouched
    var propertiesClone = DeepClone(referencedSchemaProperties);
    mySchema.Properties = new Dictionary<string, OpenApiSchema>(propertiesClone);
    mySchema.Description = referencedSchema.Description;

    // In a partial PUT no property is required
    mySchema.Required.Clear();

    var ignoredPropertyNames = mySchema.Extensions.Values
        .OfType<IgnoredInPartialPutExtension>()
        .Select(ext => ext.PropertyNames)
        .FirstOrDefault();

    if (ignoredPropertyNames != null)
    {
        foreach (var ignoredPropertyName in ignoredPropertyNames)
        {
            var associatedKey = mySchema.Properties.Keys
                .FirstOrDefault(key => key.Equals(ignoredPropertyName, StringComparison.OrdinalIgnoreCase));

            if (associatedKey != null)
            {
                mySchema.Properties.Remove(associatedKey);
            }
        }
    }

    mySchema.Extensions.Remove(nameof(IgnoredInPartialPutExtension));
}
The algorithm in the code above is as follows:
First of all, clone and copy all properties from the wrapped model (the T of Part<T>) into our schema. We do a deep clone, so that the properties can be modified without changing the original properties.
We must ensure that:
- No property is required anymore (not necessary in a partial PUT).
- Our schema description (which comes from Part<T>) gets replaced by the model's description.
- Properties with the IgnoreInPartialPut attribute don't appear in the list.
The used helpers are defined as follows.
private static T DeepClone<T>(T original)
{
    var serialized = JsonConvert.SerializeObject(original);
    return JsonConvert.DeserializeObject<T>(serialized);
}

private static string GetJsonPropertyName(PropertyInfo property)
{
    var jsonPropertyAttr = property
        .GetCustomAttributes<JsonPropertyAttribute>()
        .FirstOrDefault();
    return jsonPropertyAttr?.PropertyName ?? property.Name;
}
The ignored property names are injected by the ISchemaFilter implementation shown above. They travel via the IgnoredInPartialPutExtension OpenAPI extension, which we can simply read out again in the document filter.
Some properties of the model may be annotated with the IgnoreInPartialPutAttribute. If that is the case, we grab the names of these properties and inject them as a custom OpenAPI extension, so that we can read them out again later on.
private class IgnoredInPartialPutExtension : IOpenApiExtension, IOpenApiElement
{
    public IEnumerable<string> PropertyNames { get; set; }

    public void Write(IOpenApiWriter writer, OpenApiSpecVersion specVersion)
    {
        writer.WriteStartArray();

        foreach (var propName in PropertyNames)
        {
            writer.WriteValue(propName);
        }

        writer.WriteEndArray();
    }
}
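For a model with an ignored Owner property (as in the hypothetical example above), the Part[Model] schema would temporarily carry an entry roughly like the following, before the document filter strips it again:
"IgnoredInPartialPutExtension": [ "Owner" ]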
Using the Code
The code can just be copied and modified easily. To simplify the whole process, I've published a very small library called Partial.Newtonsoft.Json, which brings all these little helpers and more. If you feel that something useful is missing, please provide a pull request (or open an issue) at the GitHub repository. You'll find the repository at: github.com/FlorianRappl/Partial.Newtonsoft.Json. The Swashbuckle helper is not part of this library, as it has nothing to do with Newtonsoft.Json, and Swashbuckle may not be the Swagger generator of your choice.
nuget install Partial.Newtonsoft.Json
Using the library is as simple as using the Part class from the Newtonsoft.Json.Partial namespace.
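A minimal sketch of a controller action using the pieces above (Model, its properties, and the _store data access are assumptions for illustration):
[HttpPut("{id}")]
public IActionResult Put(int id, [FromBody] Part<Model> model)
{
    // Reject unknown keys and keys targeting ignored / forbidden properties
    if (model == null || !model.Validate())
    {
        return BadRequest();
    }

    var entity = _store.Find(id); // hypothetical data access

    // Only apply the values that actually came in with the payload
    model.IsSet(m => m.Name, value => entity.Name = value);
    model.IsSet(m => m.Description, value => entity.Description = value);

    _store.Save(entity); // hypothetical
    return NoContent();
}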
Points of Interest
It's interesting that Microsoft (or someone else?) has not implemented a partial PUT yet. Other frameworks / communities either have this built in or feature existing libraries to handle these scenarios. The only thing we have in .NET is the JsonPatchDocument, which is not a real resource and very much in the "RPC parameter" camp instead of being RESTful.
The provided code only illustrates one particular way to handle partial PUT (or PATCH) scenarios. There are multiple others. The interesting part was to gain the ability to keep using the same DTO as for the POST. Ultimately, the limited type system in .NET is the reason we have to migrate to runtime mechanisms such as reflection / a custom deserializer to support these scenarios.
I hope that for the future of C# / .NET we will get a more powerful type system that allows compile-time enhancements and type manipulation. A good role model would be TypeScript, which really shines in that regard.
History
- v1.0.0 | Initial Release | 24.03.2019