28.5.20

Synchronous and Asynchronous Thread-Safe Blitzkrieg Caching

Update! You can download BlitzCache from NuGet now.
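If you just want to grab it, the quickest way is the dotnet CLI (assuming the package id is simply BlitzCache):

dotnet add package BlitzCache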


Caching is necessary
Over the years I have used a lot of caching. In fact, I think some things, like user permissions, should normally be cached for at least one minute.

Blitzkrieg Caching
Even when a method is cached, there are cases when it is called again before the first call has finished, and this results in a new request to the database, this time a much slower one. This is what I call the Blitzkrieg Scenario.

The slower the query, the more likely this is to happen and the worse the impact. I have seen SQL Server freeze too many times struggling to reply to the exact same query while that very query was already being executed...
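To picture it, here is a minimal sketch of the kind of naive caching that suffers from the scenario (Permissions and GetPermissionsFromDatabase are made-up names, and memoryCache is an injected IMemoryCache like in the service further down):

public Permissions GetPermissions(string userSid)
{
    if (memoryCache.TryGetValue(userSid, out Permissions permissions)) return permissions;

    // Two calls arriving here at the same time will BOTH run the slow query,
    // because neither of them has populated the cache yet.
    permissions = GetPermissionsFromDatabase(userSid);
    memoryCache.Set(userSid, permissions, DateTime.Now.AddMinutes(1));
    return permissions;
}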

Ideally, at least in my mind, a cached method should only calculate its value once per cache period. To achieve this we could use a lock... but if I am caching different calls I want more than one call to be executed at the same time: exactly once per cache key, in parallel. This is why I created the LockDictionary class.

The LockDictionary
Instead of having a single lock in my cache service that locks all the parallel calls indiscriminately, I have a dictionary of locks so I can lock by cache key.

public static class LockDictionary
{
    private static readonly object dictionaryLock = new object();
    private static readonly Dictionary<string, object> locks = new Dictionary<string, object>();

    // Returns the lock object for a cache key, creating it the first time the key is seen.
    // The lookup itself is guarded because Dictionary is not safe for concurrent reads and writes.
    public static object Get(string key)
    {
        lock (dictionaryLock)
        {
            if (!locks.ContainsKey(key)) locks.Add(key, new object());
            return locks[key];
        }
    }
}
With this I can very easily select exactly what I want to lock.
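For example, two different keys get two completely independent locks, so unrelated calls never block each other (the keys here are just made up):

lock (LockDictionary.Get("CompletionInfo-1")) { /* only calls using this key wait here */ }
lock (LockDictionary.Get("CompletionInfo-2")) { /* this one does not compete with the line above */ }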

GetBlitzkriegLocking
Now I can check whether something is already cached and return it, or lock that particular call and calculate the value of the function passed as a parameter.

public class CacheService : ICacheService
{
    private readonly IMemoryCache memoryCache;

    public CacheService(IMemoryCache memoryCache)
    {
        this.memoryCache = memoryCache;
    }

    public T GetBlitzkriegLocking<T>(string cacheKey, Func<T> function, double milliseconds)
    {
        // Fast path: no locking at all if the value is already cached.
        if (memoryCache.TryGetValue(cacheKey, out T result)) return result;

        // Only calls with this exact cache key compete for this lock.
        lock (LockDictionary.Get(cacheKey))
        {
            // Double check: another call may have populated the cache while we were waiting.
            if (memoryCache.TryGetValue(cacheKey, out result)) return result;

            result = function.Invoke();
            memoryCache.Set(cacheKey, result, DateTime.Now.AddMilliseconds(milliseconds));
        }

        return result;
    }
}
And how do I use it?
var completionInfo = cacheService.GetBlitzkriegLocking($"CompletionInfo-{legalEntityDto.Id}", () => GetCompletionInfoDictionary(legalEntityDto), 500);
//Look ma, I am caching this for just 500 milliseconds and it really makes a difference
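In case you are wondering where the IMemoryCache comes from, this is a minimal sketch of the wiring, assuming the ASP.NET Core built-in container and registering it in ConfigureServices:

services.AddMemoryCache();                            // provides IMemoryCache
services.AddSingleton<ICacheService, CacheService>(); // our blitzkrieg cache on top of it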
I find this method extremely useful, but sometimes the function I am calling needs to be awaited... and you cannot await in the body of a lock statement. What do I do?
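In other words, if I naively try to reuse the synchronous pattern with an awaited function, the compiler stops me:

lock (LockDictionary.Get(cacheKey))
{
    // error CS1996: Cannot await in the body of a lock statement
    result = await function.Invoke();
}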

The SemaphoreDictionary
Semaphores do allow you to await whatever you need; in fact, they themselves can be awaited. If we translate the LockDictionary class to use semaphores, it looks like this:

public static class SemaphoreDictionary
{
    private static readonly object dictionaryLock = new object();
    private static readonly Dictionary<string, SemaphoreSlim> locks = new Dictionary<string, SemaphoreSlim>();

    // Returns the semaphore for a cache key, creating it the first time the key is seen.
    // Each semaphore lets exactly one caller in at a time, just like the lock objects above.
    public static SemaphoreSlim Get(string key)
    {
        lock (dictionaryLock)
        {
            if (!locks.ContainsKey(key)) locks.Add(key, new SemaphoreSlim(1, 1));
            return locks[key];
        }
    }
}
And using this I can await calls while I am locking stuff.


The Awaitable GetBlitzkriegLocking

The main rule about semaphores is that you must make sure you release them, or they will stay locked forever; that is why the Release call lives in a finally block. Catching the exception itself is optional, though.
public async Task<T> GetBlitzkriegLocking<T>(string cacheKey, Func<Task<T>> function, double milliseconds)
{
    // Fast path: no waiting at all if the value is already cached.
    if (memoryCache.TryGetValue(cacheKey, out T result)) return result;

    var semaphore = SemaphoreDictionary.Get(cacheKey);

    // Wait outside the try so we only release a semaphore we actually acquired.
    await semaphore.WaitAsync();
    try
    {
        // Double check: another call may have populated the cache while we were waiting.
        if (!memoryCache.TryGetValue(cacheKey, out result))
        {
            result = await function.Invoke();
            memoryCache.Set(cacheKey, result, DateTime.Now.AddMilliseconds(milliseconds));
        }
    }
    finally
    {
        semaphore.Release();
    }

    return result;
}
And how do I use it?
await cache.GetBlitzkriegLocking($"RequestPermissions-{userSid}-{workplace}", () => RequestPermissionsAsync(userSid, workplace), 60 * 1000);
Please give these methods a try and let me know how you would improve them. I am using them every now and then and I really enjoy how they simplify the code. I hope you like them too.


22.4.19

Concurrency Errors and Value Objects in Entity Framework 2.x

We have been using value objects in our databases for a couple of months, and after we got past the first hurdles with Entity Framework we noticed improvements in speed and a significant decrease in the number of Includes, which is exactly what we wanted... but suddenly...

In one of the microservices we noticed we were getting concurrency exceptions every time we updated a value object. And it wasn't a complicated nested value object; it was a value object made of just two Guids and two strings.
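For context, the setup was nothing exotic; the shape was roughly this (all names here are made up, ours were different): an owned type with two Guids and two strings on an entity that has a rowversion concurrency token.

public class Approval // the value object: two guids and two strings
{
    public Guid ApproverId { get; private set; }
    public Guid ReviewerId { get; private set; }
    public string ApproverName { get; private set; }
    public string ReviewerName { get; private set; }
}

// In OnModelCreating:
modelBuilder.Entity<Contract>().OwnsOne(c => c.Approval);
modelBuilder.Entity<Contract>().Property(c => c.RowVersion).IsRowVersion();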

I spent a good afternoon trying to figure out why we were getting the error and could not see anything wrong in the code... and I thought... it's Entity Framework again!

I asked the rest of the team for help and they found this thread, Changes on Owned Entites Properties causes a concurrency conflict on same dbContext, where a workaround is proposed. We tried it and it didn't work, but it seemed to be going in the right direction, so we debugged it, changed it a bit and voilà: the rowversion column was being updated again, both in SQL and in my backend!

The modified code is this:
private void ConcurrencyFix()
{
    // Find entries EF considers Unchanged but that own a value object (owned entity) that
    // was modified or added and is mapped to the same table as its owner.
    var changedEntriesWithVos = ChangeTracker.Entries().Where(e =>
        e.State == EntityState.Unchanged
        && e.References.Any(r =>
            r.TargetEntry != null
            && (r.TargetEntry.State == EntityState.Modified || r.TargetEntry.State == EntityState.Added)
            && r.TargetEntry.Metadata.IsOwned()
            && e.Metadata.Relational().TableName == r.TargetEntry.Metadata.Relational().TableName)).ToArray();

    // Mark the owners as Modified so the rowversion is checked and updated as expected.
    foreach (var entry in changedEntriesWithVos)
        entry.State = EntityState.Modified;
}

We have placed it in the SaveChanges methods of our base context, from which all of our contexts inherit, so we are sure this code always runs when we save.
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    ConcurrencyFix();
    return base.SaveChanges(acceptAllChangesOnSuccess);
}

public override Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = default)
{
    ConcurrencyFix();
    return base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}

Have fun everyone!
