Sunday, October 21, 2018

Unit testing Kentico with xUnit

With Kentico EMS 12 almost ready, I noticed that support for MSTest has been dropped from the CMS.Tests assembly. I think that's a good thing given the (slow but steady) move towards .NET Core, where MSTest is no longer the default unit testing framework. I was sort of hoping for a bolder move: dropping the dependency on a specific test framework altogether, or factoring that dependency out into a separate NuGet package.
This is mostly because I really prefer xUnit over NUnit. Its low-ceremony approach to unit testing results in simple and clean code.
Luckily, there's nothing stopping us from using xUnit with Kentico, and at TrueLime we've been doing that for over two years now. I'll get to the code shortly, but first it's probably good to touch on the differences between NUnit and xUnit.

Differences between NUnit and xUnit

In a nutshell, xUnit lacks most of the ceremony of older frameworks like NUnit:
  • There’s no Setup or TearDown – use constructor and IDisposable instead
  • Testclasses are not fixtures, Fixtures are in seperate classes to promote reuse
  • Each test runs in it’s own instance of the test class to improve isolation
  • Tests either pass or fail, there is no intermediate state
The net result is that xUnit tests mostly look like the rest of your code. This encourages developers to treat the test code with the same hygene as the application code (refactor, clean up etc.). It also makes it more natural to write clean tests. All the code involved in the test should go into the test, not into setup and teardown at any level.
If you do need to handle some sort of context around your test, like running Kentico, there’s always standard C# constructors and you can implement IDisposable to clean stuff up.’
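To make the difference concrete, here is a minimal xUnit test class using the constructor and Dispose in place of NUnit's SetUp and TearDown. The ShoppingCart type is a hypothetical example, not part of Kentico:

```csharp
using System;
using Xunit;

public class ShoppingCartTests : IDisposable
{
    private readonly ShoppingCart _cart;

    // Runs before each test; xUnit creates a fresh instance of the class per test
    public ShoppingCartTests()
    {
        _cart = new ShoppingCart();
    }

    [Fact]
    public void Empty_cart_has_zero_total()
    {
        Assert.Equal(0m, _cart.Total);
    }

    // Runs after each test, taking the place of NUnit's [TearDown]
    public void Dispose()
    {
        _cart.Clear();
    }
}
```

Note that there are no attributes beyond [Fact]: the class needs no base class or fixture attribute, which is exactly the low-ceremony style described above.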

Enough talk, time for code

Kentico provides support for working with its data APIs in unit tests, which is pretty cool. There are some caveats (see the next section), but once you're past those it works quite well and is fast.
Unfortunately, since Kentico is based on NUnit we do need to handle some ceremony, but we can tuck that away into a base class and keep it out of our test code.
public abstract class KenticoUnitTest : CMS.Tests.UnitTests, IDisposable
{
    protected KenticoUnitTest()
    {
        // Initialize the Kentico test infrastructure (enables Kentico object faking)
        UnitTestsSetUp();
    }

    void IDisposable.Dispose()
    {
        try
        {
            // Clean up the Kentico test infrastructure
            UnitTestsTearDown();
        }
        catch (System.IO.PathTooLongException)
        {
            // This fails under VS Live Unit Testing, but that is not critical
        }
    }
}
Kentico Unit Testing Caveats

  • Always use .WithData when you Fake an object type whose data you're going to query. If you don't, you're in for some very nasty and hard-to-decipher stack traces.
  • If you do run into nasty stack traces, especially ones that end in a failing DB connection, carefully read the first calls in the stack trace and try to figure out what entity is being used so you can fake it.
  • Be careful with VS Live Unit Testing. We've seen some tests failing due to errors unrelated to the test itself.
  • When using NCrunch for continuous testing, make sure you configure the project to copy referenced assemblies. This is due to Kentico dynamically loading lots of assemblies while scanning for extensions like modules and custom data classes.
  • Custom data classes and other CMS extensions will only be available if the containing assembly is marked with the assembly discoverable attribute:
    [assembly: CMS.AssemblyDiscoverable]
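As a sketch of the first caveat, a test that queries user data could fake the provider and supply data up front. This builds on the KenticoUnitTest base class shown earlier; the entity, column name, and data are illustrative:

```csharp
using System.Linq;
using CMS.Membership;
using Xunit;

public class UserQueryTests : KenticoUnitTest
{
    [Fact]
    public void Finds_enabled_users()
    {
        // Fake the UserInfo provider AND supply data, since we will query it.
        // Faking without .WithData leads to the nasty stack traces described above.
        Fake<UserInfo, UserInfoProvider>().WithData(
            new UserInfo { UserName = "alice", Enabled = true },
            new UserInfo { UserName = "bob", Enabled = false });

        var enabled = UserInfoProvider.GetUsers()
            .WhereEquals("UserEnabled", true)
            .ToList();

        Assert.Single(enabled);
    }
}
```

The query runs entirely against the faked data, so no database connection is needed.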


Wednesday, April 18, 2018

Kentico EMS - Enable bulk delete of form data

With GDPR right around the corner many of our clients are reviewing what data they have in their Kentico installation. One of the primary areas of concern is data collected through online forms.

GDPR and Kentico Online Forms

Kentico has quite powerful online forms, making it easy to ask your visitors for input. Unfortunately, privacy regulations, including GDPR and its predecessors, state clearly that you cannot keep that data around after you're done processing it.

In addition to that, the best way to ensure data doesn't get stolen or leaked is not to have it in the first place.

Kentico does not make it particularly easy to manage high volumes of form input though. One of the glaring omissions is support for bulk deletion of form data.

The power of the UniGrid

Funny enough though, Kentico's data grids and data layer do support bulk delete out of the box. It's a matter of tuning the grid that shows the form data. The Kentico data grid is backed by the UniGrid control. In the case of form data, this control is defined in


All that is needed to enable the mass delete action:

<cms:unigrid runat="server" id="gridData" islivesite="false">
    <GridOptions DisplayFilter="true" />
    <%-- insert the following tag --%>
        <ug:MassAction Name="#delete" Caption="$General.Delete$" Behavior="openmodal" />
</cms:unigrid>

After this change, the form data grid will show a selection field as the first column and the delete action is available at the bottom of the grid.

Further reading

If you want to know more about the powers of the Kentico UniGrid, check the docs.

Thursday, March 15, 2018

Kentico EMS - Timeout in inactive contact cleanup

On high-traffic sites Kentico's EMS feature is a treasure trove of marketing information, but the large volume of data can also become your site's tombstone. As the Kentico guidance documentation rightfully points out, it makes no sense to keep older data around forever, so you should set up a strategy to clear out inactive contacts from the start.

En garde!

Even when you properly set up the inactive contact cleanup, though, the volume of data can get to a point where the contact cleanup tasks start to time out and nothing gets cleaned up any more. As it turns out, there is quite a bit you can do to prevent getting into this situation. One of the keys is to keep an eye on your event log and your database; long-running queries and timeouts especially should raise a red flag.

Tuning the database

One of the key queries used by Kentico inactive contact cleanup is this one:

SELECT COUNT(*) AS [Count]
FROM (
     SELECT [ContactID]
     FROM OM_Contact
     WHERE (([ContactEmail] = N'' OR [ContactEmail] IS NULL) AND (EXISTS (
          SELECT TOP 1 [ActivityContactID]
          FROM OM_Activity
          WHERE [ActivityContactID] = [ContactID]
          GROUP BY ActivityContactID
          HAVING MAX(ActivityCreated) <= '1/14/2018 2:00:20 AM')))
     OR ([ContactCreated] < '1/14/2018 2:00:20 AM' AND NOT EXISTS (
          SELECT TOP 1 [ActivityContactID]
          FROM OM_Activity
          WHERE [ActivityContactID] = [ContactID]))
) AS SubData

Given this query and the knowledge that there can be millions of rows in the OM_Activity table (17 million in this case) the middle part of the query really stands out as being risky:

SELECT TOP 1 [ActivityContactID]
FROM OM_Activity
WHERE [ActivityContactID] = @ContactId
GROUP BY ActivityContactID
HAVING MAX(ActivityCreated) <= '1/14/2018 2:00:20 AM'

It will effectively need to search through the entire OM_Activity table to figure out what contact has recent activities. Unfortunately, the default Kentico setup does not provide a covering index for this. This will force SQL Server to process the entire table, which is quite costly. In this particular case it took an Azure SQL S2 instance well over 40 minutes to execute this.

The covering index is pretty straightforward and looks like this (the index name is our own convention):

CREATE NONCLUSTERED INDEX [IX_OM_Activity_ActivityContactID]
ON [dbo].[OM_Activity] ([ActivityContactID])
INCLUDE ([ActivityCreated])

It takes SQL Server a bit of time to build this index, so make sure you do that during off-peak hours. After applying it, though, Kentico is able to determine the number of rows up for deletion in just over a minute, well within the configured timeouts.

Friday, December 23, 2016

Setup SQL Server session state for a web farm

It takes a bit of digging around to get all the information needed to set up out-of-process session state for an ASP.NET web farm. There are a couple of decisions you need to make, and then you need to configure the database and the application. This post explains all of this using a real-life project.

The situation

I'm currently working on a project for a large medical center. There is a strong obligation to the public to be always online, especially in case of a large scale emergency.
This leads to interesting choices in infrastructure for their website: everything is redundant and split across multiple locations across the campus.
The database is a SQL Server Availability Group.

Picking the right session state provider

  • In-Process
    Really only suitable for small applications that run in a single server instance.
  • Session-state server
    A TCP service that is hosted on a single server within the infrastructure.
    Since this introduces a single point of failure, it's no good for this project.
  • SQL Server session state
    Stores session state in the database, either persistent or in temporary storage.
    This is a great pick for a web farm but does incur additional load on your SQL Server installation.
  • Redis session state
    This is the new kid on the block for ASP.NET. Since the medical center is a Microsoft shop and has already invested a lot in top-notch SQL performance, this would only incur technical risk and additional costs for infra.

SQL Server session state, but what flavor?

Storing ASP.NET session state in SQL Server is supported very well and there's tooling to set it up for you, but before we dive into that there's yet another consideration.
Where to put the session state:

  • Application database
    This would add a couple of tables to the application database and a bunch of stored procedures. It could be a nice fit when hosting at a shared provider and the additional cost of an extra database is not desired.
  • Session state in TempDB
    Session state data is transient by nature so TempDB, which gets cleared on a server restart and will not be replicated to other SQL Server instances seems like a good idea. You can choose to put the TempDB on a different drive from the application db which could help squeeze more IOPS out of your server. Not writing through to the rest of the cluster may also help improve write performance, but this will also cause loss of session state when the cluster needs to fail over, for example due to maintenance.
  • Session state in its own database
    This mode will store session state in permanent storage and replicate it across the cluster. That's a performance penalty, but it gives more guarantees for seamless failover when needed.
    The fact that this database is separate from the application database makes it easy to take different decisions about IT management, for example about backups, or even to host the session state database on a different database instance.
    This is the best match for this project.

Configuring session state

Once we decided on where to store the session state we had to roll out the configuration in our environments. These are the steps to follow:

  1. Set up the session state database using the ASP.NET SQL Server Registration Tool
    %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regsql.exe -S MyCluster\Prod -U sa -P topsecret -ssadd -sstype p
  2. Configure the connection string in web.config
    I strongly recommend including the application name in the connection string and keeping the connection timeout low.
    <add name="SessionConnectionString"
    connectionString="Data Source=MyCluster\Prod,1234;user=sa;pwd=topsecret;Connect Timeout=10;Application Name=Kentico;Current Language=English" />
  3. Setup the machine key in web.config
    If you're running the site on multiple servers or in the cloud, this is a must.
       <machineKey xdt:Transform="Insert"
         validationKey="[generate a hex validation key and share it across all servers]"
         decryptionKey="[generate a hex decryption key and share it across all servers]"
         validation="SHA1" decryption="AES" />
  4. Configure session state in web.config, referencing the connection string from step 2
    <sessionState mode="SQLServer"
      sqlConnectionString="SessionConnectionString" />