Sunday, October 21, 2018

Unit testing Kentico with xUnit

With Kentico EMS 12 almost ready I noticed that support for MSTest has been dropped from the CMS.Test assembly. I think that's a good thing given the (slow but steady) move towards .NET Core, where MSTest is no longer the default unit testing framework. I was sort of hoping for a bolder move: dropping the dependency on a specific test framework altogether, or factoring that dependency out into a separate NuGet package.
This is mostly because I really prefer xUnit over NUnit. Its low-ceremony approach to unit testing results in simple, clean code.
Luckily, there’s nothing stopping us from using xUnit with Kentico and at TrueLime we’ve been doing that for over 2 years now. I’ll get to the code shortly but first, it’s probably good to touch on the differences between NUnit and xUnit.

Differences between NUnit and xUnit

In a nutshell, xUnit lacks most of the ceremony of older frameworks like NUnit:
  • There’s no Setup or TearDown – use constructor and IDisposable instead
  • Testclasses are not fixtures, Fixtures are in seperate classes to promote reuse
  • Each test runs in it’s own instance of the test class to improve isolation
  • Tests either pass or fail, there is no intermediate state
The net result is that xUnit tests mostly look like the rest of your code. This encourages developers to treat the test code with the same hygiene as the application code (refactoring, cleaning up, etc.). It also makes it more natural to write clean tests. All the code involved in the test should go into the test, not into setup and teardown at any level.
If you do need to handle some sort of context around your test, like running Kentico, there are always plain C# constructors, and you can implement IDisposable to clean things up, as sketched below.
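A minimal sketch of that pattern (ExpensiveResource is a made-up stand-in for whatever context your tests need):

using System;
using Xunit;

public class ExpensiveResourceTests : IDisposable
{
    private readonly ExpensiveResource _resource; // hypothetical context/system under test

    public ExpensiveResourceTests()
    {
        // Runs before every test: xUnit creates a fresh instance
        // of the test class for each test method.
        _resource = new ExpensiveResource();
    }

    [Fact]
    public void Resource_starts_out_empty()
    {
        Assert.Empty(_resource.Items);
    }

    public void Dispose()
    {
        // Runs after every test, taking the place of TearDown.
        _resource.Dispose();
    }
}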

Enough talk, time for code

Kentico provides support for working with its data APIs in unit tests, which is pretty cool. There are some caveats (see the next section), but once you're past those it works well and fast.
Unfortunately, since Kentico is based on NUnit, we do need to handle some ceremony, but we can tuck that away in a base class and keep it out of our test code.
public abstract class KenticoUnitTest : CMS.Tests.UnitTests, IDisposable
{
    protected KenticoUnitTest()
    {
        // Initialize Kentico test infrastructure
        InitFixtureBase();
        InitBase();
        UnitTestsSetUp(); // enable Kentico object faking
    }

    void IDisposable.Dispose()
    {
        // Clean up the Kentico test infrastructure
        CleanUpTestClass();
        ResetAllFakes();
        try
        {
            CleanUpBase();
            CleanUpFixtureBase();
        }
        catch (System.IO.PathTooLongException)
        {
            // this fails under VS Live testing but that is not critical
        }
    }
}
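
With that base class in place, a test is plain xUnit code. A minimal sketch (the key name and value are made up for illustration):

using CMS.DataEngine;
using Xunit;

public class SettingsTests : KenticoUnitTest
{
    [Fact]
    public void Faked_settings_key_can_be_queried()
    {
        // Always pass data to the fake, even if it's just one object
        Fake<SettingsKeyInfo, SettingsKeyInfoProvider>().WithData(
            new SettingsKeyInfo { KeyName = "MyTestKey", KeyValue = "42" });

        var key = SettingsKeyInfoProvider.GetSettingsKeyInfo("MyTestKey");

        Assert.Equal("42", key.KeyValue);
    }
}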

Kentico Unit Testing Caveats

  • Always use .WithData with Fake if you're going to query that data. If you don't, you're in for some very nasty and hard-to-decipher stack traces. For example:
    Fake<SettingsKeyInfo,SettingsKeyInfoProvider>().WithData();
  • If you do run into nasty stack traces, especially the ones that end in a failing DB connection, carefully read the first calls in the stack trace and try to figure out what entity is being used so you can fake it.
  • Be careful with VS Live unit testing. We’ve seen some tests failing due to errors unrelated to the test itself.
  • When using nCrunch for continuous testing, make sure you configure the project to copy in referenced assemblies. This is due to Kentico dynamically loading lots of assemblies while scanning for extensions such as modules and custom data classes.
  • Custom data classes and other CMS extensions will only be available if the containing assembly is marked with the assembly discoverable attribute:
    [assembly: CMS.AssemblyDiscoverable]

Friday, December 23, 2016

Setup SQL Server session state for a web farm

It takes a bit of digging around to gather all the information needed to set up out-of-process session state for an ASP.NET web farm. There are a couple of decisions you need to make, and then you need to configure the database and the application. This post walks through all of this using a real-life project.

The situation

I'm currently working on a project for a large medical center. They have a strong obligation to the public to always be online, especially in the case of a large-scale emergency.
This leads to interesting choices in infrastructure for their website: everything is redundant and split across multiple locations across the campus.
The database is a SQL Server Availability Group.

Picking the right session state provider

  • In-Process
    Really only suitable for small applications that run in a single server instance.
  • Session-state server
    A TCP service that is hosted on a single server within the infrastructure.
    Since this introduces a single point of failure, it's no good for this project.
  • SQL Server session state
    Stores session state in the database, either in persistent or in temporary storage.
    This is a great pick for a web farm, but does incur additional load on your SQL Server installation.
  • Redis session state
    This is the new kid on the block for ASP.NET. Since the medical center is a Microsoft shop and has already invested a lot in top-notch SQL performance, this would only incur technical risk and additional costs for infra.

SQL Server session state, but what flavor?

Storing ASP.NET session state in SQL Server is well supported and there's tooling to set it up for you, but before we dive into that there's yet another consideration: where to put the session state.

  • Application database
    This would add a couple of tables to the application database and a bunch of stored procedures. It could be a nice fit when hosting at a shared provider and the additional cost of an extra database is not desired.
  • Session state in TempDB
    Session state data is transient by nature, so TempDB, which gets cleared on a server restart and is not replicated to other SQL Server instances, seems like a good idea. You can put TempDB on a different drive than the application database, which could help squeeze more IOPS out of your server. Not writing through to the rest of the cluster may also improve write performance, but it means session state is lost when the cluster needs to fail over, for example due to maintenance.
  • Session state in its own database
    This mode stores session state in permanent storage and replicates it across the cluster. That's a performance penalty, but it gives more guarantees for seamless failover when needed.
    The fact that this database is separate from the application database makes it easy to take different IT management decisions, for example about backups, or even about hosting the session state database on a different database instance.
    This is the best match for this project.

Configuring session state

Once we decided where to store the session state, we had to roll out the configuration in our environments. These are the steps to follow:

  1. Setup the session state database using the ASP.NET SQL Server Registration Tool
    %WINDIR%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regsql.exe -S MyCluster\Prod -U sa -P topsecret -ssadd -sstype p
  2. Configure the connection string in web.config
    I strongly recommend including the application name in the connection string and keeping the connection timeout low.
    <connectionStrings>
      <add name="SessionConnectionString"
           connectionString="Data Source=MyCluster\Prod,1234;user=sa;pwd=topsecret;Connect Timeout=10;Application Name=Kentico;Current Language=English" />
    </connectionStrings>
  3. Setup the machine key in web.config
    If you're running the site on multiple servers or in the cloud, this is a must.
    <system.web>
       <machineKey xdt:Transform="Insert"
         validationKey="..."
         decryptionKey="..."
         validation="SHA1" decryption="AES" />
    </system.web>
  4. Configure session state in web.config
    <system.web>
    <sessionState mode="SQLServer"
    sqlConnectionString="SessionConnectionString"
    compressionEnabled="true"
    cookieless="false"
    timeout="20"/>
    </system.web>
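
One gotcha worth remembering once session state moves out of process: everything you put in session is now serialized, so it must be serializable. A minimal sketch (CheckoutState and the page are made up for illustration):

using System;

[Serializable]
public class CheckoutState
{
    public string PatientNumber { get; set; }
    public int Step { get; set; }
}

public partial class Checkout : System.Web.UI.Page
{
    protected void SaveState()
    {
        // With mode="SQLServer" this object is serialized into the session
        // database at the end of the request; a type that is not marked
        // [Serializable] would make that serialization step fail.
        Session["Checkout"] = new CheckoutState { PatientNumber = "12345", Step = 2 };
    }
}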

Tuesday, April 21, 2015

AutoMapper Anti-patterns

I think AutoMapper is evil. It just makes it too easy to do things you should not be doing.

Over the past 4 years I've worked with AutoMapper on several projects and collected a bunch of anti-patterns.

 

Automapper 101

When you see repetitive code appear in a mapping profile, it's a sure sign you're not 'doing it right'. For example, every mapping from DateTime to string explicitly states that it should use the short date format.

AutoMapper supports powerful methods of cleaning up the way you map your data.

Solution

Learn how to use AutoMapper before using it in production code.

Define once how to convert one data type into another (e.g. DateTime to string) so you don't need to repeat that everywhere.
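
For example, a single type converter in a profile handles every DateTime to string mapping (a sketch against the pre-5.0 profile API used elsewhere in this post):

using System;
using AutoMapper;

public class ConversionProfile : Profile
{
    protected override void Configure()
    {
        // Defined once; every mapping that goes from DateTime to
        // string now uses the short date format automatically.
        CreateMap<DateTime, string>()
            .ConvertUsing(date => date.ToShortDateString());
    }
}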

 

Empty profile validation

Create a profile, but register every mapping in AutoMapper's global configuration instead of on the profile.

Then assert that the profile is valid. You are essentially validating an empty profile.

Solution

Don't ever call the static Mapper.CreateMap from an AutoMapper profile; use the profile's instance methods instead.

public class BadProfile: Profile
{
   public override string ProfileName { get { return "BadProfile"; } }
   
   protected override void Configure()
   {
      // !!! Do not do this! It registers the mapping globally !!!
      Mapper.CreateMap<DTO.SomeAggregate, Model.SomeAggregate>();
   }
}
public class GoodProfile: Profile
{
   public override string ProfileName { get { return "GoodProfile"; } }
   
   protected override void Configure()
   {
      // Always use the methods on the profile to register mappings
      CreateMap<DTO.SomeAggregate, Model.SomeAggregate>();
   }
}

 

Wait to validate

Credits go to Bastiaan de Rijber for this one.

Asserting profile validity is expensive. If you do it at application startup, you may see a serious delay before the application becomes available.

Solution

Assert profile validity in a unit test.
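
A minimal sketch of such a test (xUnit here, but any framework will do; GoodProfile is the profile from the previous section):

using AutoMapper;
using Xunit;

public class MappingProfileTests
{
    [Fact]
    public void All_profiles_are_valid()
    {
        Mapper.Initialize(cfg => cfg.AddProfile<GoodProfile>());

        // Fails the test if any destination member is unmapped,
        // without slowing down application startup.
        Mapper.AssertConfigurationIsValid();
    }
}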

 

Automap all the things

When using AutoMapper becomes a reflex, it will work against you.

Automapper was intended to reduce repetitive code for mapping types that are highly similar.

If you use it for everything you'll end up with HUGE profiles that are usually more complex than straight up hand-coded mappings.

Solution

Use AutoMapper to get rid of dumb 1-on-1 mappings and simple projections, for example from a primitive type to a localized string.

The rabbit hole

It's easy to build up a mapping profile for a complex data structure like an aggregate root. But keep in mind that when you change your model, you will need to untangle the mappings to see which are actually used and where. Since AutoMapper's loose coupling costs you the ability to trace how data is connected, you'll find yourself in a swamp, sinking fast.

Solution

Keep your mappings simple and your mapping profiles limited in size.

 

Resolvers galore

Building resolvers to do simple mappings is a bad idea. Resolvers add a layer of obscurity to the already loosely coupled mappings. It's all too easy to start hitting the database in these mappings, and before you know it you get spanked by N+1 issues.

Solution

If your mapping is more complex than a single expression but does not involve loading entities, create a method for it on the profile or add an extension method that you can reuse, as sketched below. This prevents the mapping code from being obscured in yet another class.
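For example, a plain extension method keeps the logic visible and reusable; the Person types and ToFullName are made up for illustration:

using AutoMapper;

public class Person { public string FirstName { get; set; } public string LastName { get; set; } }
public class PersonDto { public string FullName { get; set; } }

public static class PersonMappingExtensions
{
    // Plain, testable code; no resolver class needed
    public static string ToFullName(this Person person)
    {
        return (person.FirstName + " " + person.LastName).Trim();
    }
}

public class PersonProfile : Profile
{
    protected override void Configure()
    {
        // The mapping stays a single, traceable expression
        CreateMap<Person, PersonDto>()
            .ForMember(dto => dto.FullName, opt => opt.MapFrom(p => p.ToFullName()));
    }
}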

If you do need to load stuff from the database to perform the mapping, seriously reconsider whether AutoMapper is the right tool for the job.

You may be better off coding this by hand using a proper ORM (that's what they are for).

 

'Intelligent' mapping

Resolvers open up a doorway to making your mappings 'intelligent'. This is where resolvers grow more and more complex, and soon all sorts of complicated logic starts to creep into the mappings. This smell turns into an outright stink when entities are being created in the resolvers. The net result is that the mappings go from magical to dark voodoo, and you find yourself writing unit tests that need to initialize AutoMapper and stub out a bunch of services just to check the validity of a mapping. A world of hurt.

Solution

Just don't do it. Mapping should always be straight forward.

 

Let me reconfigure that for ya

As mentioned before, setting up mappings is costly. This means you need to think carefully about when to perform that configuration. Once you understand that, it's easy to see that reconfiguring AutoMapper on every single call to a service is a bad idea.

Solution

Initialize your mappings once, and once only.
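
In an ASP.NET application that one-time initialization typically lives in Application_Start. A sketch, assuming the profiles from earlier in this post:

using AutoMapper;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Configure AutoMapper exactly once, when the application starts
        Mapper.Initialize(cfg =>
        {
            cfg.AddProfile<ConversionProfile>();
            cfg.AddProfile<GoodProfile>();
        });
    }
}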

Monday, April 20, 2015

AutoMapper is evil

tl;dr AutoMapper kills traceability, making software maintenance increasingly difficult. It's also a gateway drug to a bunch of anti-patterns.

I've turned from an AutoMapper advocate to a strong opponent over the course of about a year.

At first it seems like a great way to reduce stupid repetitive code. It accelerates initial development.

All is well and good until you have to change the application, and all the nice tooling that helps us refactor with confidence breaks down because it cannot see the implicit coupling between mapped objects.

Compiler checks are gone.

Traceability is gone.

Debugging is next to impossible.

Profile validation to the rescue?

AutoMapper allows you to mitigate this to some extent by grouping mappings into profiles and then asserting that the profile is valid, meaning that all properties of the destination types are mapped.

This, however, requires that any unused properties in the destination type are explicitly ignored. If mappings address only a subset of the fields on your types, this leads to lots of custom configuration code in your mappings, essentially defeating the main benefit of AutoMapper: less mapping code.
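
Every destination property the mapping doesn't cover then needs an explicit opt-out inside the profile's Configure method; AuditTrail and RowVersion are made-up example properties:

CreateMap<Model.SomeAggregate, DTO.SomeAggregate>()
    // One Ignore() per untouched property, just to keep
    // AssertConfigurationIsValid happy
    .ForMember(dto => dto.AuditTrail, opt => opt.Ignore())
    .ForMember(dto => dto.RowVersion, opt => opt.Ignore());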

AutoMapper is not free

AutoMapper was hailed by some as the solution to ease the implementation of strictly layered architectures and later DDD. Decoupling layers or domains is impossible without mapping or transforming from one model to the next.

While AutoMapper can indeed help ease that burden, I'm inclined to agree with Greg Young that all this mapping is painful, and if you really want to do it, you should feel that pain. (See 8 Lines of Code by Greg Young; skip to 39:00 for the bit about AutoMapper.)

All that mapping magic does not come for free. Besides the maintenance issues mentioned earlier, AutoMapper also has a performance cost.

Setting up a lot of mappings is expensive, as expressions need to be compiled on startup. Once that is done, AutoMapper is still nowhere near as fast as straight-up hand-written assignments.

Have a look at the performance comparison offered by the AutoMapper alternative FastMapper (FastMapper on CodePlex; scroll all the way down for the performance comparison).

In an average application a bit of AutoMapper won't hurt, but once load increases it can definitely start to make a difference. A concrete case I saw was in a message driven application where large volumes of messages were being transferred during upgrades on deployment of new releases. Performance analysis showed that AutoMapper consumed between 10 and 20% of the CPU time. 

A final word

I have stopped using AutoMapper completely and have not regretted it once. Hand-authored mapping code is faster, easier to understand, and easier to maintain.