Monday, September 29, 2014

The 10 commandments of software development

A couple of weeks ago fellow senior Bastiaan de Rijber and I were venting some frustration on Skype. We realized that we find ourselves repeating the same tidbits of wisdom over and over again to developers in conversations, code reviews and post-mortems on failures, big and small.

I’ll admit the title seems a bit ‘holier-than-thou’ so I’m going to stick to the theme and do this all biblical style:

  1. Thou shall not assume, for assumptions are the mother of all fuckups
  2. Thou shall not blame the tooling, for it's probably thou who has made the mistake
  3. Thou shall clean up thy shit, for there is no greater pleasure than deleting code
  4. Thou shall not optimize prematurely, for premature optimization is the root of all evil
  5. Thou must find the root cause before applying the 'fix'
  6. Thou shall not practice the black art of multi threading without Jedi assistance
  7. Thou shall not commit shortly before going home/on vacation/to sleep
  8. Thou shall review thy code before committing
  9. Thou shall not proclaim the obvious in comments
  10. Thou shall not claim 'It works on my machine'

Of course we have a bunch more of these…

Thursday, May 15, 2014

Migrating sites from IIS7 to IIS8

I was recently tasked with migrating a bunch of sites from a Windows 2008 server to a Windows 2012 server. I expected this to be a well-known path, but it's not. I tried to use WebDeploy; it was a complete disaster.

I ended up cherry-picking IIS configuration backups in a merge tool. Not pretty but it worked like a charm.

What’s wrong with WebDeploy

WebDeploy is mostly an automation tool for deploying websites, and it assumes a bunch of things. For example, that you have already created the site and the app pool. I was not looking to do that by hand twenty times over.
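Creating a site and its app pool by hand comes down to appcmd invocations like these (names and paths made up for illustration), and I was not going to repeat that for every site:

```bat
%windir%\system32\inetsrv\appcmd add apppool /name:"MySitePool"
%windir%\system32\inetsrv\appcmd add site /name:"MySite" /physicalPath:"C:\inetpub\MySite" /bindings:http/*:80:mysite.example.com
%windir%\system32\inetsrv\appcmd set app "MySite/" /applicationPool:"MySitePool"
```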

It also seems to be incapable of creating virtual directories and the like. In effect, a deployment package for a site seems to contain only files, and none of the IIS structural configuration that's so tedious to set up by hand.

Looking for ways to migrate the IIS configuration itself I found that WebDeploy supports server level packages too. This seems to create a package with all sites and a whole slew of configuration data. Great, or so I thought.
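From memory, the server-level package roughly corresponds to invocations like these (paths illustrative; check the Web Deploy documentation before trusting the exact flags):

```bat
msdeploy -verb:sync -source:webServer -dest:package=C:\WebServer.zip
msdeploy -verb:sync -source:package=C:\WebServer.zip -dest:webServer
```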

When I tried to deploy that on the server I found that IIS would no longer work. At all.

WebDeploy had messed up the .NET Framework configuration files, machine wide!

The alternative

I was not looking forward to manually configuring 15 sites with their app pools, virtual directories and all that. The chances of me messing that up in multiple places are 100%.

So I started looking into IIS’ built-in backup functionality:

%windir%\system32\inetsrv\appcmd add backup "migration" 

This creates a backup of the IIS configuration in %windir%\system32\inetsrv\backup\migration.
The backup consists of XML files and being a developer, I know how to deal with XML.

Next step is to create a backup on the destination server. I named that ‘Pre-Migration’ and copied it to ‘Migration’ so I would have a version to return to if things went wrong.

The file that holds the actual site configurations is applicationHost.config, which is mostly compatible between IIS7 and IIS8. The XML is pretty straightforward and I was able to merge the bits I wanted into the backup file on the new server.

The relevant sections are:

  • applicationPools
  • sites
  • locations
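For reference, the structure involved looks roughly like this (site and pool names invented for illustration):

```xml
<system.applicationHost>
  <applicationPools>
    <add name="MySitePool" managedRuntimeVersion="v4.0" />
  </applicationPools>
  <sites>
    <site name="MySite" id="2">
      <application path="/" applicationPool="MySitePool">
        <virtualDirectory path="/" physicalPath="C:\inetpub\MySite" />
      </application>
      <bindings>
        <binding protocol="http" bindingInformation="*:80:mysite.example.com" />
      </bindings>
    </site>
  </sites>
</system.applicationHost>
```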

Now restore the backup on the destination server:

%windir%\system32\inetsrv\appcmd restore backup "migration" 
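I did the actual merging by hand in a merge tool, but the idea can be sketched in a few lines of Python. This is an illustrative alternative, not what I ran; it naively appends sections without de-duplicating names or ids, and it ignores the location elements that also need merging:

```python
import xml.etree.ElementTree as ET

# Sections under <system.applicationHost> that carry the structural config.
SECTIONS = ("applicationPools", "sites")

def merge_config(source_file, dest_file, output_file):
    """Copy structural IIS sections from an old applicationHost.config
    into a new one and write the result to output_file."""
    src = ET.parse(source_file).getroot()
    dst_tree = ET.parse(dest_file)
    dst = dst_tree.getroot()
    for name in SECTIONS:
        src_section = src.find("./system.applicationHost/" + name)
        dst_section = dst.find("./system.applicationHost/" + name)
        if src_section is None or dst_section is None:
            continue
        for child in list(src_section):
            # Naive merge: blindly append every pool/site from the old server.
            dst_section.append(child)
    dst_tree.write(output_file)
```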

Tuesday, October 2, 2012

TallComponents discontinues PDFWebViewer.NET

Source code now available on CodePlex (here and here)

On the company blog, CEO Frank Rem states that the product is not part of their core technology and revenue is insufficient to justify the load on support that comes from a browser based component.

As the original developer of the product, I'm happy with TallComponents' decision to open-source the product rather than keep it locked down and let it die.

Looking at the source code brings back some good memories of building a nice product. I'm also curious to see what has become of it in the past two years.

Looking back

Technology-wise it hasn't changed much. The server side was built on TallComponents' excellent flagship products PdfKit.NET and PDFRasterizer.NET.

At the time PDFWebViewer.NET was created, ASP.NET MVC was an emerging technology and Microsoft hadn't embraced open source yet. So the client-side implementation was based on the now pretty much obsolete MS Ajax framework.

Though version 2.0 looks like it's been modernized, it hasn't really evolved. I'm sure this also accounts for much of the support load that TallComponents must have been experiencing for this product. Especially with HTML5 and mobile devices on the rise, as mentioned on the blog, a full rewrite would probably be in order.

The essence of the product was to display PDF using only common HTML constructs; a couple of years back that meant divs and images with plain vanilla JavaScript.

Looking ahead

Nowadays it would probably be safe to implement the control using a canvas, which would make a lot of the functionality much easier to implement. For example, rotating a page required a round trip to the server in the original implementation, but can be done in the browser when using a canvas.

That would also eliminate a lot of common issues with alignment, scrollbars and the like.

In addition to that, commonly used JavaScript frameworks (like jQuery) would probably also go a long way in solving cross-browser scripting issues. They should also help reduce maintenance overhead by reducing the amount of code involved in handling events and manipulating the DOM.

Life expectancy

An open source project that relies on commercial components (however good they are) is probably doomed. I’ve been supporting and using software components long enough to know that any component (or open source project for that matter) gets developer attention as long as the project that is using the component is in development. After that some occasional maintenance may trigger a bug report but there’s not going to be any developer love.

Since the product was not sold in large volumes there are probably not a lot of developers actively working with PDFWebViewer.NET. Therefore the most likely scenario is that there will be little, if any, activity on the CodePlex projects.

The life expectancy could be improved if somebody decides to replace the proprietary core components with freely available or open-source counterparts.

Final words

As with any software project I’ve worked on I do hope people will continue to use PDFWebViewer.NET. I’d love to see people forking and contributing but, as Frank has hinted in the blog post, the product wasn’t a big seller so I’m not expecting much.

Having said that – feel free to contact me if you need help with these projects.

Friday, May 25, 2012

Fixing Sitefinity 3.7 URL handling–Part 2

A while back I wrote about URL handling in Sitefinity 3.7. The default internal URL handling does not work well with the IIS 7 URL Rewrite module. Over the past weeks I've had more problems making Sitefinity 3.7 behave correctly; this time it's about SEO-friendly 404 handling.

How ASP.NET handles errors

Sitefinity is based on ASP.NET and by default any ASP.NET application will handle errors by either showing a generated error page (the dreaded Yellow Screen Of Death – YSOD) or by redirecting to a predefined error page. This is all configured in web.config:

<customErrors defaultRedirect="Error.aspx" mode="On">
   <error statusCode="404" redirect="404.aspx"/>
</customErrors>

This bit of XML instructs ASP.NET to redirect users to Error.aspx when an (unhandled) error occurs, unless it’s a 404 error in which case the user should be redirected to 404.aspx.

SEO-friendly 404 handling

Redirecting is an acceptable way to explain to your visitors what's going on. Your users get a decent explanation and can continue on their way. If you're using a CMS like Sitefinity, you can manage the 404 page within the CMS and even drop in a smart control that offers relevant suggestions based on the requested URL.

Search engines indexing the site will however have difficulty understanding what is going on. A conversation between a crawler like the Google bot and your site would look like this:

Bot: GET /some-page-that-no-longer-exists
Site: 302 Found, redirecting you to /404.aspx
Bot: GET /404.aspx
Site: 200 OK, here is your page
The conversation ends with a successful HTTP 200 status code, indicating to the search engine that the page was found… The crawler will even index the 404 page unless it’s explicitly told not to via meta tags or robots.txt.

In order for search engines to understand what's going on, the conversation should look like this:

Bot: GET /some-page-that-no-longer-exists
Site: 404 Not Found, but here is some helpful content anyway
Fortunately the customized Sitefinity CMS module from my previous post can provide us with the hooks needed to set this up.

Step 1 – intercept errors

Since the CMS module is an HttpModule, it can register for the ASP.NET Error event. If that event occurs, we can check the type of error and look up where to get the alternate content from the customErrors section in web.config.

var context = HttpContext.Current;

var error = context.Server.GetLastError() as HttpException;
if ( null != error && error.GetHttpCode() == 404 )
{
  // use the web.config custom errors information to 
  // decide whether to redirect
  var config = ( CustomErrorsSection )WebConfigurationManager
                  .GetSection( "system.web/customErrors" );
  if ( config.Mode == CustomErrorsMode.On ||
       ( config.Mode == CustomErrorsMode.RemoteOnly 
                        && !context.Request.IsLocal ) )
  {
    // redirect to the error page defined in web.config
    var redirectUrl = config.DefaultRedirect;
    if ( config.Errors["404"] != null )
       redirectUrl = config.Errors["404"].Redirect;
    // now render the content
  }
}

Step 2 – render alternate content

This is where things get interesting. In IIS7 with Integrated Pipeline mode there's a Server.TransferRequest method that makes it easy to do an internal redirect. It does a full run of the request pipeline: TransferRequest simulates an actual request, and you can specify any parameters you want to pass, which will be available in the transferred request through the HttpContext.Params collection.

If not using Integrated Pipeline mode, Server.Transfer can do an internal redirect. The redirected request will however not go through the full ASP.NET pipeline, so vital events will not fire; most notably, some events used by Sitefinity to resolve the page that needs to be rendered. The code below works around that by setting up the request the same way Sitefinity would before handing it off to the main entry point.

Both methods will however discard the HTTP status code from the original request. To work around that the status code is reset in the transferred request.

if ( HttpRuntime.UsingIntegratedPipeline )
{
  context.Server.TransferRequest(
                    redirectUrl, true, "GET",
                    new NameValueCollection { { "__sf__error", "404" } } );
}
else
{
  var context404 = 
    CmsSiteMap.Provider.FindSiteMapNode( redirectUrl ) 
      as CmsSiteMapNode;
  if ( null != context404 )
  {
    context.Response.StatusCode = 404;
    CmsUrlContext.Current = context404;
    context.Items["cmspageid"] = context404.PageID;
    context.Server.Transfer( "~/sitefinity/cmsentrypoint.aspx" );
  }
}

In integrated pipeline mode you have no control over the executing request, so to reset the HTTP status code it's passed to the transferred request using a custom header. In the PostRequestHandlerExecute event handler in the CMS module, the header is picked up and used to alter the status code:

private void PostRequestHandlerExecute( object sender, EventArgs e )
{
   var context = HttpContext.Current;
   // Set the error code passed in the headers when TransferRequest was invoked.
   var error = context.Request.Headers["__sf__error"];
   if ( null != error && context.Response.StatusCode == 200 )
   {
      int errorCode;
      if ( Int32.TryParse( error, out errorCode ) )
      {
         context.Response.StatusCode = errorCode;
         context.Response.TrySkipIisCustomErrors = true;
      }
   }
}

Full code and installation instructions available on GitHub.

This code has been tested with Sitefinity 3.7 SP4 and is in use on production systems.