Deserialized: The Ramblings of a Web Architect


Why I choose the Microsoft stack for my new startup

Posted by Bryan Migliorisi

One thing is for sure - Microsoft gets a bad rep these days.  Maybe it’s because of Windows Vista.  Maybe it’s because of classic ASP, or even ASP.NET’s WebForms.  Maybe it is because they are a large company whose focus has too long been on the enterprise and not enough on the consumer.  Most likely, though, it is because it’s simply cool to make fun of Microsoft.

I had plenty of options when I decided to start building a new web application.  My favorite language is C#.  I’m not talking about web app frameworks, or frameworks in general – no, just the language.  I love that C# does not do very much magic, like Ruby does.  I love that C# has a cleaner and more expressive syntax than Java does.  I won’t even compare the language syntax and API to that of PHP. Of course, there were other options too, such as Python and Scala.


I’ve been working for the past 3 years with Java and Spring 2.x.  Older versions of Spring are god-awful, what with their dependence on a metric ton of XML configuration files.  Spring 2.5 began to add support for annotations (attributes in C#) and Spring 3 continued to improve even further in this area.  Java has a good number of cool technologies, frameworks, and specifications too.  For me, though, the ecosystem feels very broken.  Having many options is good.  Having too many options is bad.

Ruby and RoR

I began toying around with Ruby and, consequently, Ruby on Rails too.  It is a nice and powerful language and framework but at the end of the day it didn’t feel right.  I like type safety and compile-time checks.  I like knowing what my code does – in fact, I like telling my code what to do!  Ruby loves to do magic for you.  Rails builds on top of this magic to provide even more magic.  Honestly, it is pretty impressive but for me, it just didn’t feel right.  I’ve spoken to fellow developers who feel the same way: It is very cool, but doesn’t feel natural.

With that said, I have already written a couple of supporting services in Ruby for my new app, because Ruby was the better choice for those services.  They are exposed as web services running on Sinatra.


I’ve used several PHP frameworks, including CakePHP, CodeIgniter, Kohana, Lithium and Yii.  My favorites are the last two, Lithium and Yii.  They both do a great job of making it easy to build web applications and both are extensible.  However, I despise PHP as a language.  It is arguably one of the ugliest languages ever “designed.”  (It really never was designed; it just grew organically, which is why function names, parameter order, etc. are so inconsistent from function to function.)  If I chose PHP, I know I would not be able to sleep at night.

As for the others, I just didn’t have enough experience with them to make a fair comparison, nor did I have several months to learn them well enough to make one.

Home, sweet home

This all brings me back to my personal favorite language, C#.  It has a very clean and expressive API (look at LINQ).  The ecosystem is smaller than Java’s, but it is tighter because one company runs the show – Microsoft.  With Microsoft’s latest MVC framework release, ASP.NET MVC3, writing a web application has become so simple it’s scary.

The ill-named ASP.NET MVC framework learned quite a bit from Rails and the open source community in general. Microsoft studied what people want to do with their frameworks and what they don’t – and it shows.  ASP.NET MVC is a fantastic framework and they’ve built in extensibility at every level.  Don’t like something? Write your own and plug it in.  It really is that easy. Seriously.

With version 3 comes the Razor view engine, which I think is absolutely beautiful.  I’ve used several view engines in different languages, but Razor stands out to me as a new approach (which is sometimes scary to some people) where you don’t need weird braces, brackets, or custom HTML tags to render content or run code in your views.  I think it’s very streamlined and easy to read.

It scales – they all do!

Scalability isn't a feature of any language or framework.  No framework’s feature list has a bullet point for scalability.  Instead, scalability is something the application designers must think about while they build their app, no matter which language it’s built in.

ASP.NET scales very well.  So do Java, Ruby, PHP and all the others – when done right.  Scalability is about engineers, not language.

The Community

People often complain that Microsoft is a large corporation that doesn’t care about the community. Those same people often ignore the fact that Microsoft has open sourced much of the .NET framework, including ASP.NET MVC. They ignore that most (though not all) .NET programs will run happily on any operating system, including Linux, OS X, and iOS. This is made possible through the open source Mono project, sponsored by Microsoft and Novell.

Microsoft and the community also collaborated on a cool new package manager called Nuget.  It is similar in nature to Ruby’s Gems and Java’s Maven, though it is not a copy of either of them.  What is amazing is that immediately upon its release, cool open source (and some commercial) libraries began showing up in the Nuget listing.  So many libraries that I, and presumably many other people, had never heard of were suddenly at our fingertips, ready to be installed as easily as a Ruby gem.

Since Nuget’s release, the number of packages has been growing steadily, and the number of package downloads has been growing even faster.

In addition to this, there is a healthy community of people willing to help you solve any issues you have.  You can head over to the ASP.NET Forums or StackOverflow if you need help with anything, and your question will usually be answered in minutes.

The Microsoft stack, minus Microsoft

For my newest web application, I’ve made the decision to go with MongoDB for my database.  It is a super fast, document-oriented database, as opposed to standard relational databases such as MySQL, Postgres, and Microsoft’s own SQL Server.  I made this decision because it best fits the type of data I will be storing, and I’ve been working with MongoDB since its alpha stage so I know what it is capable of.

MongoDB runs on Windows but runs better on Linux, and that is where I plan to run it.

If C# code and ASP.NET MVC applications can also run on Linux, then you have no need to run on Windows.  You have removed Microsoft from the Microsoft stack.  Where is your vendor lock-in now?

I will most likely run my application on Windows Server 2008 but the point is that I don’t have to if I don’t want to.  Even with the Microsoft stack, I still have my options.

Bonus: WebSite Spark and BizSpark

WebSite Spark and BizSpark are two initiatives by Microsoft to bring down the initial cost of getting up and running with the Microsoft platform so that you can get all the tools and resources you need to do it right.


With Microsoft's new commitment to building better tools for the web and working with the community, I feel that my decision to go with the Microsoft stack is the right one, for me.  You may not agree – and that is fine – but don’t knock it before you try it.  Version 3 of ASP.NET MVC brings some really great stuff to the table. Nuget makes finding packages just as easy as it has been for Ruby developers for years. Portability of the code means not needing to worry about vendor lock-in.

So far, I am more than happy with my decision.


ASP.NET MVC 3, Razor, & RenderAction causing server to hang and crash

Posted by Bryan Migliorisi

I am working on a new project and decided to use the awesome new ASP.NET MVC3 (RC2) framework with Razor templating.  Razor is awesome.  It’s clean and gets the job done nicely, but when I tried to render an action into my view, all hell broke loose.

This is what my controller looked like:

public class MyController : BaseController
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult ProductDetails(String id)
    {
        return View();
    }
}

And in my view I had this:

@{Html.RenderAction("ProductDetails", "My", new { id = ViewBag.Id});}

But whenever I would navigate to a URL that corresponded to a view containing this RenderAction snippet, the browser would hang and the server would stop responding completely.  I double checked my code, I restarted the server, I restarted the browser, I restarted the IDE.  I tried everything, but no luck.

Problem Solved

It has something to do with Razor’s layouts and what I believe may be an infinite loop.  I think what is happening is that the action being rendered via RenderAction is also trying to load the base template that is found in _ViewStart.cshtml which is rendering the body which is calling RenderAction… and this continues infinitely.

The solution is either to tell your view NOT to use a layout or to change your action’s return type from ActionResult to PartialViewResult.  I chose the latter, and once I restarted the server again, everything started working correctly.


ObjectID’s with MongoDB and the mongodb-csharp driver

Posted by Bryan Migliorisi

I must say, the latest release of mongodb-csharp is rather awesome.  Typed collections and LINQ support mean I can worry more about my application than about the data layer.

Here is an example of using typed collections:

public class Customer {
	public Oid Id { get; set; }
	public string Name { get; set; }
	public CustomerBillingInfo Billing { get; set; }
	public List Depts { get; set; }
}

public void AddCustomer(Customer customer) {
	...(code removed for simplicity)...
	IMongoCollection<Customer> collection = database.GetCollection<Customer>();
}

Gotcha: Beware of the ID property!

One thing that threw me off when I began using the typed collections was that I had defined my ID property as “_id” because that is what MongoDB uses internally.  While this made sense to me, the mongod process kept throwing errors whenever I tried the following:

...(code removed for simplicity)...
IMongoCollection<Customer> collection = database.GetCollection<Customer>();
Customer customer = collection.Linq().First(c => c.Name == "test customer");
customer.Name = "A new name!";

The error looked something like this:

Fri Jun 25 10:43:14 Exception 11000:E11000 duplicate key error index: test06.Customer.$_id_  dup key: { : ObjId(4c24c07a189cf31bd4000002) }
Fri Jun 25 10:43:14    Caught Assertion in insert , continuing
Fri Jun 25 10:43:14 insert test06.Customer exception userassert:E11000 duplicate key error index: test06.Customer.$_id_  dup key: { : ObjId(4c24c07a189cf31bd4000002) } 21ms

It was driving me crazy for days before I realized that the driver was doing some magic – the POCO object needed its ID property to be named “Id” instead of “_id”, and as soon as I changed that, it started working properly.


Convert C# classes to and from MongoDB Documents automatically using .NET reflection

Posted by Bryan Migliorisi

There are a number of C# based MongoDB projects being actively developed right now, but one thing that I needed was a way to convert a standard C# class to a MongoDB document for easy insertion.  It isn’t hard to set each property by hand, but it certainly is not the most efficient way, especially when you know you are going to be doing it a lot.


For example:

Let’s say I have a class called SomeClass that looks something like this:

class SomeClass {
	public string StringTest;
	public int IntTest;
}

And somewhere in my code, I have an instance of this class named someClassInstance.  If I want to create a MongoDB document from this class, I’d have to do something like this:

Document document = new Document();
document.Add("StringTest", someClassInstance.StringTest);
document.Add("IntTest", someClassInstance.IntTest);

So that isn’t such a big deal, right? But what about when I have a class with many more properties?  Then it starts to get messy and cumbersome.  I thought that there should be a straightforward way to easily convert any class to a mongodb-csharp compatible Document object.  (I am using Sam Corder’s mongodb-csharp driver, so I am targeting the Document object from that library.)

Default values

I also wanted to have a way to specify the default value for each class property so that when we did the conversion, we would (hopefully) not end up with any null values.  Plus, if for some reason there was a document in MongoDB that was missing a particular key-value pair, the DocumentConverter would automatically fill in that empty field with the default value, so our code should never see any nulls.

This is something that I would like for my own purposes and may not suit everyone’s needs.  If it doesn't, simply leave off the DefaultValueAttribute and you’ll never know the difference.

My proposed solution

I figured the easiest way to accomplish this was to create a class that would encapsulate all the functionality needed to convert to and from Document objects and have my other classes inherit from that one. I imagined that the above code would change to something like this:

class SomeClass : DocumentConverter {
	[Attributes.DefaultValue("Default StringTest value!")]
	public string StringTest;
	public int IntTest;
}

And doing the conversion is very simple.  To convert from someClassInstance to a Document:

Document document = someClassInstance.ToMongoDocument();

To convert from a Document object to someClass would be:

SomeClass someOtherClassInstance = new SomeClass();
someOtherClassInstance.FromMongoDocument(someDocumentObject);

The DefaultValueAttribute class

using System;

namespace MyApp.Attributes
{
    [AttributeUsage(AttributeTargets.Field | AttributeTargets.Property, AllowMultiple = false)]
    class DefaultValueAttribute : Attribute
    {
        private readonly object _value;

        public DefaultValueAttribute(object value)
        {
            _value = value;
        }

        public object GetDefaultValue()
        {
            return _value;
        }
    }
}

The DocumentConverter class

Reflection isn't something that I use too often so there may be better ways of accomplishing what I am trying to do, but this is what I’ve got for now.  If there are better ways, please let me know.  Without further ado…

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text;
using MyApp.Classes.Attributes;
using MongoDB.Driver;

namespace MyApp.Classes
{
    public class DocumentConverter
    {
        public void FromMongoDocument(Document document)
        {
            foreach (DictionaryEntry kvp in document)
            {
                object propertyValue;
                if (kvp.Value != null && (kvp.Value.GetType() == typeof(Document)))
                {
                    // We have a document object - now let's get a reference to the class property's type
                    var propertyType = GetType().GetProperty(kvp.Key.ToString()).PropertyType;

                    // create new instance of that class
                    var propertyInstance = Activator.CreateInstance(propertyType);

                    // call FromMongoDocument on that class and pass in the document
                    MethodInfo method = propertyInstance.GetType().GetMethod("FromMongoDocument");
                    method.Invoke(propertyInstance, new[] { kvp.Value });

                    propertyValue = propertyInstance;
                }
                else
                {
                    // This is not a Document so let's just assign the value
                    propertyValue = kvp.Value;
                }

                GetType().GetProperty(kvp.Key.ToString()).SetValue(this, propertyValue, null);
            }
        }

        public Document ToMongoDocument()
        {
            Document document = new Document();

            foreach (PropertyInfo property in GetType().GetProperties())
            {
                // Get the value of this property
                object propertyValue = property.GetValue(this, null);

                // If this value is null, then let's see if there is a default value attribute and assign that
                if (propertyValue == null)
                {
                    object[] attributes = property.GetCustomAttributes(typeof(DefaultValueAttribute), true);
                    foreach (DefaultValueAttribute defaultValue in attributes.Cast<DefaultValueAttribute>())
                    {
                        propertyValue = defaultValue.GetDefaultValue();
                    }
                    document.Add(property.Name, propertyValue);
                }
                else
                {
                    // We have a value, now let's see if its type has a ToMongoDocument method
                    MethodInfo method = propertyValue.GetType().GetMethod("ToMongoDocument");

                    if (method == null)
                    {
                        document.Add(property.Name, property.GetValue(this, null));
                    }
                    else
                    {
                        document.Add(property.Name, method.Invoke(propertyValue, null));
                    }
                }
            }
            return document;
        }
    }
}

That’s all for now

I hope this is useful for someone.  It is a rough draft of what I threw together last night at around 1AM while half asleep.  So far, it has passed all of my initial tests but if you have suggestions to make it better, please leave some comments here.


The Current State of MongoDB and C#

Posted by Bryan Migliorisi

As a C# developer, I am often disappointed with the lack of drivers and connectors to cool services like MongoDB.  All the cool languages (and Java) get all the love but C# is often an afterthought.

Luckily for me, there are some kickass developers in the C# community who also share my frustration and as such, they have begun building their own C# MongoDB drivers.

I keep stumbling across more and more C# related MongoDB projects, so I figured I would write up a list and some short descriptions of these projects.

List of C# MongoDB Projects

Each of these projects is still rather new, so expect some features to be missing or not fully functional.  A couple of them are usable in your projects today, while the rest are still under heavy development.


mongodb-csharp

Originally written by Sam Corder (@SamCorder) with help from a handful of contributors, this is the most complete driver of the bunch.  It has been evolving quickly, and Sam and team are very quick to resolve any bugs that arise.

I am using this driver in 2 projects that I am working on and so far things have been great.  It even includes GridFS support.

From the project description:

Current Features

  • Connect to a server.
  • Query
  • Insert
  • Update
  • Delete
  • All BSON types supported
  • DBRef support
  • Isolation and conversion between BSON types and native .net types.
  • Database, Collection and Cursor objects.
  • Index handling routines (List, Create, Drop)
  • Count
  • Roughly 80% unit test coverage. This can and will be improved on.
  • Paired connections
  • Authentication (Does not reauthorize on auto reconnect yet).
  • Database Commands
  • Basic Linq support
  • GridFS support
  • Map Reduce helpers.
  • hint, explain, $where

They are currently working on connection management features (auto reconnect, connection pooling, etc).

Get involved or check out the code at their mongodb-csharp project page on Github.


mongodb-net

Written by an unnamed developer, the mongodb-net project aims to be a C# port of the official 10gen MongoDB Java driver.  While a lot of work has been done and a load of code written, it is currently unusable.  Over the past couple of weeks a significant amount of progress has been made and functionality is beginning to work, but a functional state seems to be months away.

I would keep an eye on this project, though, as having an API compatible with the official Java driver has its benefits.

Get involved or check out the code at their mongodb-net project page on Google Code.


MongoDB.Emitter

Andrew Rondeau’s MongoDB.Emitter is a pretty cool project that provides a strongly-typed Document mapper for C#.  It works in conjunction with Sam Corder’s mongodb-csharp driver, allowing the programmer to define strongly typed interfaces and properties.

I have not tried this yet, but this will be on my list of things to check out.

Get involved or check out the code at their MongoDB.Emitter project page on bitbucket.


CSMongo

Hugo Bonacci (@hugoware) has been working on a driver of his own called CSMongo.  CSMongo doesn’t support everything that mongodb-csharp does, but it has some interesting features, such as its approach to creating Mongo Documents.  That approach has a more dynamic feel to it, which fits nicely in the unstructured MongoDB world.

I am looking forward to the next version which should have more features including support for Hugo’s own jLinq.

Code doesn't appear to be released yet but you can follow his progress at his blog, Hugoware.


simple-mongodb

Simple-mongodb is another project without public source, being worked on by Daniel Wertheim (@danielwertheim).  I am not sure whether it is being actively developed, but it too has some nice ideas.  The goal of the project is to keep the driver JSON-centric and compatible with Newtonsoft’s awesome Json.NET library.

They have a few examples of the proposed API but no code has been released to make said examples work.  This is another one to keep an eye on in the meantime.

Check out the simple-mongodb project page on Google Code.


DocumentConverter

This is a small class I wrote that works with mongodb-csharp.  Its name will likely change at some point if and when it gets packaged up and put on source control.  It exposes two functions that allow any C# class to be converted to and from a MongoDB Document object automatically, using System.Reflection.  It makes my life a lot easier.

Read more about DocumentConverter on this blog post.


Well, it looks like there is a significant amount of interest in MongoDB from the C# community, which is great news because it means MongoDB is going to continue to thrive and grow.  My bet is that Sam Corder’s driver will become the most common C# driver, simply because it is so far ahead of the rest, but time will tell.  Extensions of Sam’s project, such as MongoDB.Emitter, are as cool as the drivers they are built on.

Thanks to everyone who has contributed to these drivers.  Each of them has some great concepts, and I hope that one day we will have a driver that supports all of these great ideas and features.

If there are any more projects that I have missed – let me know in the comments!


Gotcha: 32-bit applications may not be able to see files on 64-bit Windows

Posted by Bryan Migliorisi

This one really threw me off tonight.  I am running Windows Server 2008 on one of my boxes and I was trying to set up some advanced URL Routing on IIS7.  IIS7 Manager has a very nice, easy-to-use GUI, but I prefer working directly in the configuration files.

I fire up Notepad++ and attempt to open a file through its file browser.  I navigate to the IIS config folder (c:\windows\system32\inetsrv\config\) and I see an empty directory.  Huh? How is that possible?!

Now I switch over to Windows Explorer and go to the same folder as above and to my disbelief… there are all the config files.  Ok now I am truly confused!


While at the time I could not find anything from Microsoft about this issue, my findings are that 32-bit applications do not see the same view of the file system that 64-bit applications do: on 64-bit Windows, WoW64’s file system redirection transparently maps a 32-bit process’s accesses to %windir%\System32 onto %windir%\SysWOW64, so the real System32 contents appear to be missing.  I installed a few applications that I know are only 32-bit to verify this, and sure enough they all suffered from the exact same issue.

Files simply do not show up, and if you attempt to open a file directly (because you do know the full path and filename), it simply tells you that the file does not exist.


Reverse Proxy Performance – Varnish vs. Squid (Part 2)

Posted by Bryan Migliorisi


In part one of this series I tested the raw throughput performance of Varnish and Squid.  My results are consistent with all the blogs and comments floating around the blogosphere – Varnish blows away Squid.

Unfortunately, the first series of tests was somewhat uninformative.  Because they only tested the raw performance of serving cached content from memory, they did not mimic the real world scenario of serving cached content while also fetching content from the backend and caching it.

While we would hope for a primed, full cache, that is unlikely in practice, and you will undoubtedly see a decent number of backend requests from your caching proxy.

A better test of the two proxies would involve a large set of random URLs, but not too random because we want to simulate both cache hits and cache misses.  To accomplish this, I wrote a small PHP script that would take two parameters: total number of URLs to generate and the hostname for those URLs.

Generating a usable URL list

Generating the list is simple.  The script looks like this:

<?php
// Arguments: total number of URLs to generate, then the hostname.
// $random doubles as the dummy file size in KB, and the "as" query
// parameter marks a URL as cacheable.  (The rand() bounds and the
// cacheable/uncacheable split here are assumptions.)
$total = (int) $argv[1];
$host = $argv[2];
for ($i = 0; $i < $total; $i++) {
	$random = rand(10, 100);
	$as = (rand(0, 2) > 1) ? "?as=" . rand() : "";
	echo "http://$host/varnish/gen/$random$as\n";
	flush(); ob_flush();
}
echo "http://$host/varnish/gen/$random$as";

All this does is create a long list of URLs.  I used PHPs output buffering mechanisms to flush the buffer which is necessary when creating large URL lists so that you don’t wait forever.  Maybe it could have been written better but I don't care – that wasn't the point of this test.

The URLs that are created are in the format of:

http://hostname/varnish/gen/50?as=123456
http://hostname/varnish/gen/50

This URL is mapped to another PHP file that simply generates dummy data of the size specified in the URL.  In the above cases, the files would be 50Kb large.  The query parameter “as” is just a useless piece of information that is meant to tell the proxy to cache it.  If the “as” query parameter does not exist, the proxy will forward the request to the backend and not cache it.  It’s a simple way to generate cacheable and non-cacheable URLs.

To generate the list and store it in a local file, I used a command like this (generate-urls.php is just a stand-in name for wherever you save the script):

php generate-urls.php 10000 hostname > urls-10k.txt

Verify the results of the script

For your own sanity, make sure that the script did in fact generate a list of URLs that suits your needs.

Count the number of URLs generated:

cat urls-10k.txt | wc -l

(Yes, I know it creates one extra URL… it’s fine by me.)

Count the number of cacheable URLs containing the “as” query parameter:

cat urls-10k.txt | grep as | wc -l

Count the number of unique cacheable URLs:

cat urls-10k.txt | grep as | sort | uniq | wc -l
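Those checks can also be rolled into one quick summary.  A minimal sketch, assuming the urls-10k.txt generated above; it writes a tiny two-line stand-in list first so it can be tried anywhere, and it matches on as= rather than the bare as used above:

```shell
# Sanity-check the URL list: total URLs vs. cacheable URLs (those
# carrying the "as" query parameter). The two-line stand-in file is
# only a placeholder for when the real list has not been generated.
if [ ! -f urls-10k.txt ]; then
	printf 'http://hostname/varnish/gen/50?as=123\nhttp://hostname/varnish/gen/50\n' > urls-10k.txt
fi

total=$(wc -l < urls-10k.txt)
cacheable=$(grep -c 'as=' urls-10k.txt)
echo "$cacheable of $total URLs are cacheable"
```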

Running the tests

In part one I used ApacheBench to load the servers but for these tests, I used Siege and http_load which both allowed me to load URLs from a file.

I started with Varnish using the following commands:

php generate-urls.php 100000 hostname > urls-100k.txt
http_load -parallel 10 -fetches 100000 urls-100k.txt
http_load -parallel 25 -fetches 100000 urls-100k.txt
http_load -parallel 50 -fetches 100000 urls-100k.txt
http_load -parallel 100 -fetches 100000 urls-100k.txt
http_load -parallel 200 -fetches 100000 urls-100k.txt
http_load -parallel 400 -fetches 100000 urls-100k.txt

In between each http_load command, I restarted the Varnish service so that each test ran with an empty cache.  When I was done with the Varnish tests, I ran the same tests against Squid using the same commands above.
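The whole procedure can be sketched as a loop.  The service varnish restart line and the http_load stub (used only when the real binary is absent, so the loop can be dry-run) are my assumptions:

```shell
# One cold-cache run per concurrency level.
if ! command -v http_load >/dev/null 2>&1; then
	http_load() { echo "would run: http_load $*"; }   # stub for a dry run
fi

for c in 10 25 50 100 200 400; do
	# service varnish restart   # empty the cache before each run
	http_load -parallel "$c" -fetches 100000 urls-100k.txt
done
```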

The results

The results of these tests represent the typical web application much better than the original tests did.

This first graph shows the average time for the proxy to accept a connection.  As concurrency goes up, it is expected that the time to connect would go up too.  Squid suffers more than Varnish does, but the difference is negligible.


The second graph is much more interesting.  As concurrency goes up, the Time-To-First-Byte for Squid rises very sharply, while Varnish holds its ground and remains very quick, at around 25ms.


This third graph shows another interesting behavior.  As concurrency goes up, Varnish begins to even itself out at just under 800 fetches per second, while Squid peaks at around 1100 fetches per second at around 50 concurrent connections and then drops off sharply as concurrency goes up.



Squid versus Varnish is just another holy war that may never end.  The tests that I have performed have been very helpful for me and my team but your results may vary.  Of course, there are many more things to consider and I plan to write about some of the major differences between Squid and Varnish.

My results show that in raw cache hit performance, Varnish puts Squid to shame.  In real world scenarios I found that Squid can hold its own when dealing with small amounts of traffic, but its performance drops off very sharply as it begins to handle more connections.  Varnish handles them without breaking a sweat, as it was designed to do.

My next blog post will detail the differences between Varnish and Squid’s architecture, features, and the reasons I am pushing for Varnish in our environment.


Some people have complained in comments on Reddit and Hacker News that I did not provide any information about the hardware or operating system used for my tests.  This information was posted in part one of this post.


Reverse Proxy Performance – Varnish vs. Squid (Part 1)

Posted by Bryan Migliorisi

Typical web applications require dozens of SQL queries to generate a single page.  When your application is serving over 1,000,000 pages per day, you quickly realize that the performance bottleneck is your database.  The typical answer to slow database queries is “just use memcached!”  Memcached and other data caches can only take you so far.  This is where reverse proxies come in.  There are a handful of them out there, including Nginx, Perlbal, Squid and Varnish.  Which to use is up to you.


Deciding what is best for you

Assuming that you have taken a step back and really analyzed your problem first, the next step is to analyze the possible solutions.  For us, Varnish seems like the best option with Squid close behind.  To be fair, I’ve set up a test server with both Varnish and Squid running.  I’ll use ApacheBench to generate load and requests.

I’ve analyzed our pages to see what the typical page size is and recorded the average page sizes for 5 different page types.  They range from around 10KB to 35KB (gzipped).  For my test, I’ll be benchmarking with 10KB, 15KB, 20KB, 25KB, 30KB, 40KB, and 50KB files to get a good range of different size requests.
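The static test files themselves are easy to fabricate.  A minimal sketch, assuming one dummy file per size named 10k.html through 50k.html, filled from /dev/zero (real pages would of course compress differently):

```shell
# Create one dummy file per test size for the backend to serve.
for size in 10 15 20 25 30 40 50; do
	dd if=/dev/zero of="${size}k.html" bs=1024 count="$size" 2>/dev/null
done
```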

To test under different load capacities, I’ll use ApacheBench to generate loads with different amounts of concurrent users ranging from 10 to 400.


The test

I’ll be using two identical machines on the same local class C network to eliminate (as much as possible) network latency. 

The machines look something like this:

  • Pentium 4 3GHz (8KB Level 1, 512KB Level 2)
  • 2GB (4x512 DDR 400MHz)
  • 120GB ATA Western Digital Caviar WD1200JB
  • CentOS 5

(I don't have more information than that.  Suffice to say that it is a few years old and not very powerful)

I am using Varnish 2.04 and Squid 2.6.STABLE21.  There are newer versions of Squid, but I am using this version because the 3.x branch is missing features found in the 2.x branch and I have read several reports of 2.7 crashing, etc.


The command to run the load test looks something like this:

ab -c concurrent_users -n total_requests “url”

This will let you specify how many concurrent users to run and how many requests to make.  I have the proxy servers running on ServerA and I run the benchmark from ServerB.
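Scripted, the full sweep looks something like this.  A sketch only: the ServerA host, the per-size file names, the request count, and the ab stub (used when ApacheBench is not installed, so the sweep can be dry-run) are all assumptions:

```shell
# Run every file size at every concurrency level.
if ! command -v ab >/dev/null 2>&1; then
	ab() { echo "would run: ab $*"; }   # stub for a dry run
fi

for size in 10 15 20 25 30 40 50; do
	for c in 10 25 50 100 200 400; do
		ab -c "$c" -n 100000 "http://ServerA/${size}k.html"
	done
done
```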


The results

In general, Varnish seems to perform twice as well as Squid does.  In every test, Varnish serves nearly 2x more requests per second and has half the average response time.

File Size  Concurrent Users  (V) Requests per second  (V) Avg across all requests (ms)  (V) Average Request (ms)  (S) Requests per second  (S) Avg across all requests (ms)  (S) Average Request (ms)
10k 10 6592 0.152 1 3078 0.325 3
10k 25 6915 0.145 3 3568 0.280 7
10k 50 7071 0.141 7 3539 0.283 14
10k 100 6860 0.146 13 3565 0.280 28
10k 200 7252 0.138 27 3506 0.285 57
10k 400 7181 0.139 56 3518 0.284 113
15k 10 4636 0.216 2 2949 0.339 3
15k 25 5954 0.168 4 3168 0.316 7
15k 50 6036 0.166 8 3118 0.321 16
15k 100 6060 0.165 16 3247 0.308 30
15k 200 6066 0.165 32 3226 0.310 61
15k 400 6048 0.165 66 3092 0.323 129
20k 10 4689 0.213 2 2553 0.392 3
20k 25 5342 0.187 4 2675 0.374 9
20k 50 5422 0.184 9 2799 0.357 17
20k 100 5446 0.184 18 2861 0.349 34
20k 200 5430 0.184 36 2795 0.358 71
20k 400 5400 0.185 74 2656 0.376 150
25k 10 4135 0.242 2 2331 0.429 4
25k 25 4485 0.223 5 2308 0.433 10
25k 50 4488 0.223 11 2221 0.450 22
25k 100 4446 0.225 22 2217 0.451 45
25k 200 4311 0.232 46 2180 0.459 91
25k 400 4160 0.240 96 2026 0.493 197
30k 10 3463 0.289 2 1936 0.516 5
30k 25 3689 0.271 6 2002 0.499 12
30k 50 3661 0.273 13 1887 0.530 26
30k 100 3627 0.276 27 1778 0.562 56
30k 200 3589 0.279 55 1746 0.573 114
30k 400 3541 0.282 112 1798 0.556 222
40k 10 2752 0.363 3 1602 0.624 6
40k 25 2824 0.354 8 1584 0.631 15
40k 50 2826 0.354 17 1492 0.670 33
40k 100 2827 0.354 35 1551 0.645 64
40k 200 2822 0.354 70 1538 0.650 130
40k 400 2794 0.358 143 1372 0.728 291
50k 10 2254 0.443 4 1401 0.713 7
50k 25 2265 0.441 11 1379 0.725 18
50k 50 2266 0.441 22 1368 0.731 36
50k 100 2268 0.441 44 1360 0.735 73
50k 200 2266 0.441 88 1230 0.813 162
50k 400 2267 0.441 176 1216 0.822 328

Here are the graphs of the above data for easier visualization:



Something is wrong here

These are simply benchmarks and are not meant to represent real-world scenarios, for a few reasons.  Most importantly, the test runs on a local network behind a single router, so it does not account for the network latency you would typically see across the internet.

Secondly, this test only illustrates the raw speed of serving cached content, which isn’t a typical real-world workload.  To really test the overall performance of these two proxies, we need to simulate the three major steps of a reverse proxy:

  1. Forwarding a request to a backend server
  2. Physically caching it (memory or disk)
  3. Serving the cached data

Testing any one of these three steps is useful and shows the raw performance of that function, but it doesn’t give us a picture of overall performance.
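To make those three steps concrete, here is a toy sketch (entirely my own illustration; it bears no relation to how Varnish or Squid are implemented internally).  A dictionary stands in for the cache and a local function stands in for the backend fetch:

```csharp
using System;
using System.Collections.Generic;

var cache = new Dictionary<string, string>();  // step 2's store (memory)
int backendHits = 0;

// Stand-in for step 1: forwarding the request to a backend server.
string FetchFromBackend(string url)
{
    backendHits++;
    return $"<body for {url}>";
}

// All three steps: serve from cache, or forward, cache, then serve.
string Serve(string url)
{
    if (cache.TryGetValue(url, out var cached))
        return cached;                 // step 3: serve the cached data
    var body = FetchFromBackend(url);  // step 1: forward to the backend
    cache[url] = body;                 // step 2: cache it
    return body;
}

Serve("/index.html");            // first request: backend is hit
Serve("/index.html");            // second request: served from cache
Console.WriteLine(backendHits);  // 1
```

A benchmark that only hammers pre-cached files is exercising step 3 alone, which is exactly the limitation described above.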


Next Steps

I need to come up with a way to generate load on the server such that it represents the typical flow of requests that we would normally see on a server.  I am running this on a test server, not against production data, so if anyone has an idea of how I can do this, please do let me know.  The results of this test will be Part 2 of this post.

Additionally, please let me know if you spot inefficiencies in my testing methodology. I don’t claim to be a load testing expert so any advice you can offer is appreciated.


Some thoughts on string concatenation in C#

Posted by Bryan Migliorisi

I recently stumbled onto a blog entry that stated some things about string concatenation in C# that went against what I thought to be true, and it got me thinking.

From the article:

string sentence = "The " + "dog " + "ate " + "the " + "cat " + "all " + "day " + "for " + "for " + "fun.";

“That innocent looking line of code actually takes up much more processing power and memory than it appears to. If strings were combined in the ideal way, you would expect that the sentence would be the only string created from this operation. However, since each string is combined to its neighbor in succession, it turns out that 7 other strings are also created (shown in gray in the diagram above). The total amount of unnecessary memory allocations created from this operation is equal to the following equation, where N is the number of strings you are combining…”

But I knew that couldn’t be true, so I fired up Visual Studio and wrote a few simple tests, compiled and then analyzed the generated CIL.

First test:

Original C#:

public string createStringOne()
{
    return "The " + "dog " + "ate " + "the " + "cat " + "all " + "day " + "for " + "for " + "fun.";
}

Generated CIL:

.method public hidebysig instance string createStringOne() cil managed
    .maxstack 8
    L_0000: ldstr "The dog ate the cat all day for for fun."
    L_0005: ret

Second test:

Original C#:

public string createStringTwo()
{
    return "The dog ate the cat all day for for fun.";
}

Generated CIL:

.method public hidebysig instance string createStringTwo() cil managed
    .maxstack 8
    L_0000: ldstr "The dog ate the cat all day for for fun."
    L_0005: ret

Third test:

Original C#:

public string createStringThree()
{
    var sb = new StringBuilder();
    sb.Append("The ");
    sb.Append("dog ");
    sb.Append("ate ");
    sb.Append("the ");
    sb.Append("cat ");
    sb.Append("all ");
    sb.Append("day ");
    sb.Append("for ");
    sb.Append("for ");
    sb.Append("fun.");
    return sb.ToString();
}

Generated CIL:

.method public hidebysig instance string createStringThree() cil managed
    .maxstack 2
    .locals init (
        [0] class [mscorlib]System.Text.StringBuilder sb)
    L_0000: newobj instance void [mscorlib]System.Text.StringBuilder::.ctor()
    L_0005: stloc.0
    L_0006: ldloc.0
    L_0007: ldstr "The "
    L_000c: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0011: pop
    L_0012: ldloc.0
    L_0013: ldstr "dog "
    L_0018: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_001d: pop
    L_001e: ldloc.0
    L_001f: ldstr "ate "
    L_0024: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0029: pop
    L_002a: ldloc.0
    L_002b: ldstr "the "
    L_0030: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0035: pop
    L_0036: ldloc.0
    L_0037: ldstr "cat "
    L_003c: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0041: pop
    L_0042: ldloc.0
    L_0043: ldstr "all "
    L_0048: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_004d: pop
    L_004e: ldloc.0
    L_004f: ldstr "day "
    L_0054: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0059: pop
    L_005a: ldloc.0
    L_005b: ldstr "for "
    L_0060: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0065: pop
    L_0066: ldloc.0
    L_0067: ldstr "for "
    L_006c: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0071: pop
    L_0072: ldloc.0
    L_0073: ldstr "fun."
    L_0078: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_007d: pop
    L_007e: ldloc.0
    L_007f: callvirt instance string [mscorlib]System.Object::ToString()
    L_0084: ret

Fourth test:

Original C#:

public string createStringFour()
{
    return new StringBuilder("The dog ate the cat all day for for fun.").ToString();
}

Generated CIL:

.method public hidebysig instance string createStringFour() cil managed
    .maxstack 8
    L_0000: ldstr "The dog ate the cat all day for for fun."
    L_0005: newobj instance void [mscorlib]System.Text.StringBuilder::.ctor(string)
    L_000a: callvirt instance string [mscorlib]System.Object::ToString()
    L_000f: ret

Fifth test:

Original C#:

public string createStringFive()
{
    string s = "The ";
    s += "dog ";
    s += "ate ";
    s += "the ";
    s += "cat ";
    s += "all ";
    s += "day ";
    s += "for ";
    s += "for ";
    s += "fun.";
    return s;
}

Generated CIL:

.method public hidebysig instance string createStringFive() cil managed
    .maxstack 2
    .locals init (
        [0] string s)
    L_0000: ldstr "The "
    L_0005: stloc.0
    L_0006: ldloc.0
    L_0007: ldstr "dog "
    L_000c: call string [mscorlib]System.String::Concat(string, string)
    L_0011: stloc.0
    L_0012: ldloc.0
    L_0013: ldstr "ate "
    L_0018: call string [mscorlib]System.String::Concat(string, string)
    L_001d: stloc.0
    L_001e: ldloc.0
    L_001f: ldstr "the "
    L_0024: call string [mscorlib]System.String::Concat(string, string)
    L_0029: stloc.0
    L_002a: ldloc.0
    L_002b: ldstr "cat "
    L_0030: call string [mscorlib]System.String::Concat(string, string)
    L_0035: stloc.0
    L_0036: ldloc.0
    L_0037: ldstr "all "
    L_003c: call string [mscorlib]System.String::Concat(string, string)
    L_0041: stloc.0
    L_0042: ldloc.0
    L_0043: ldstr "day "
    L_0048: call string [mscorlib]System.String::Concat(string, string)
    L_004d: stloc.0
    L_004e: ldloc.0
    L_004f: ldstr "for "
    L_0054: call string [mscorlib]System.String::Concat(string, string)
    L_0059: stloc.0
    L_005a: ldloc.0
    L_005b: ldstr "for "
    L_0060: call string [mscorlib]System.String::Concat(string, string)
    L_0065: stloc.0
    L_0066: ldloc.0
    L_0067: ldstr "fun."
    L_006c: call string [mscorlib]System.String::Concat(string, string)
    L_0071: stloc.0
    L_0072: ldloc.0
    L_0073: ret


So as you can see, the first two methods compile to exactly the same CIL!  Concatenating one large string from several smaller literals costs nothing if (and only if) it happens in a single expression the compiler can evaluate.  If the concatenation is spread across multiple runtime operations, it does in fact incur a performance and memory hit.

The use of a StringBuilder in this case (a small handful of small, constant strings) serves no purpose as far as performance is concerned; it actually forces work at runtime that the compiler would otherwise do at compile time.

But… how are they equal!?

Compiler optimizations!  The C# compiler is an impressive piece of software that can look at your code and “fix” it.  When working on micro-optimizations like the one in the article above, it is important to take into account the compiler optimizations that may already have rewritten the code you think is poorly written.
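You can observe that folding from your own code, without cracking open the CIL.  The snippet below is my own illustration: a concatenation of literals is folded into a single interned constant at compile time, so it is reference-equal to the equivalent literal, while the same concatenation through a variable happens at runtime and allocates a fresh string:

```csharp
using System;

// Literal + literal is folded by the compiler into one constant, and
// string constants are interned, so this is the very same object as "The dog".
bool folded = object.ReferenceEquals("The " + "dog", "The dog");

// Routing one operand through a variable forces the concat to runtime:
// same characters, but a freshly allocated string.
string the = "The ";
bool runtime = object.ReferenceEquals(the + "dog", "The dog");

Console.WriteLine(folded);   // True
Console.WriteLine(runtime);  // False
```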

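Where a StringBuilder does earn its keep is when the concatenation is spread across many runtime operations, such as a loop, because each += allocates a new, longer string.  A rough sketch of my own (timings will vary by machine, so I print them rather than claim numbers):

```csharp
using System;
using System.Diagnostics;
using System.Text;

const int N = 10000;

// Naive: every += copies the whole string built so far into a new one.
var sw = Stopwatch.StartNew();
string s = "";
for (int i = 0; i < N; i++)
    s += "x";
sw.Stop();
Console.WriteLine($"+= in a loop:  {sw.Elapsed.TotalMilliseconds:F1} ms");

// StringBuilder: appends into one growing buffer, one final allocation.
sw.Restart();
var sb = new StringBuilder();
for (int i = 0; i < N; i++)
    sb.Append("x");
string s2 = sb.ToString();
sw.Stop();
Console.WriteLine($"StringBuilder: {sw.Elapsed.TotalMilliseconds:F1} ms");
```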
Filed under: C# 2 Comments

Comparing while() loops with for() loops in C#

Posted by Bryan Migliorisi

There is always a lot of debate about the speed and performance of loops in any language.  I was curious to see what the differences were between a for() loop and a while() loop in C# and ultimately the .NET CLR.

What I found was no big surprise, as it simply confirms what I have been reading for years: the while() loop compiles to slightly smaller, simpler CIL than the equivalent for() loop.  I did a very simple test to verify this: I created two loops with empty bodies.  You can see them below:

public void doFor()
{
    int i = 0;
    int len = 100;

    for (i = 0; i < len; i++)
    {
    }
}

public void doWhile()
{
    int i = 100;
    while (i-- == 0)
    {
    }
}


As you can see, both of these loops are as basic as they can be and have no inner bodies to complicate the CIL.  (One caveat: as written, the while() loop’s condition i-- == 0 is false on the very first check, so its body executes zero times; an equivalent 100-iteration countdown would use i-- > 0.  Since we only care about the shape of the generated instructions here, the CIL below still matches the code above.)  Let’s take a look at the generated CIL for each of them.

The generated CIL for a for() loop

.method public hidebysig instance void doFor() cil managed
    .maxstack 2
    .locals init (
        [0] int32 i,
        [1] int32 len)
    L_0000: ldc.i4.0
    L_0001: stloc.0
    L_0002: ldc.i4.s 100
    L_0004: stloc.1
    L_0005: ldc.i4.0
    L_0006: stloc.0
    L_0007: br.s L_000d
    L_0009: ldloc.0
    L_000a: ldc.i4.1
    L_000b: add
    L_000c: stloc.0
    L_000d: ldloc.0
    L_000e: ldloc.1
    L_000f: blt.s L_0009
    L_0011: ret

The generated CIL for a while() loop

.method public hidebysig instance void doWhile() cil managed
    .maxstack 3
    .locals init (
        [0] int32 i)
    L_0000: ldc.i4.s 100
    L_0002: stloc.0
    L_0003: ldloc.0
    L_0004: dup
    L_0005: ldc.i4.1
    L_0006: sub
    L_0007: stloc.0
    L_0008: brfalse.s L_0003
    L_000a: ret

The results are in and there is no surprise.

As you can see, the generated CIL for a for() loop is larger by 6 instructions. Furthermore, in this code the for() loop initializes 2 local variables rather than 1 (but this may vary depending on your code).

What does it all mean?

Probably nothing.  Six extra instructions and one extra local variable will, in most cases, have no measurable impact on performance, even if you are measuring in microseconds.  I am curious to see what the resulting difference between these loops would be in other languages.
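For what it’s worth, here is a quick Stopwatch sanity check of my own (with the while() condition written as i-- > 0 so that both loops actually iterate 100 times); on my machine the difference is lost in the noise:

```csharp
using System;
using System.Diagnostics;

// Count iterations so we can confirm both loops do identical work.
long RunFor()
{
    long n = 0;
    for (int i = 0; i < 100; i++)
        n++;
    return n;
}

long RunWhile()
{
    long n = 0;
    int i = 100;
    while (i-- > 0)   // > 0 (not == 0) so the body actually runs
        n++;
    return n;
}

var sw = Stopwatch.StartNew();
long a = 0;
for (int r = 0; r < 100000; r++) a += RunFor();
sw.Stop();
Console.WriteLine($"for():   {sw.Elapsed.TotalMilliseconds:F1} ms");

sw.Restart();
long b = 0;
for (int r = 0; r < 100000; r++) b += RunWhile();
sw.Stop();
Console.WriteLine($"while(): {sw.Elapsed.TotalMilliseconds:F1} ms");
```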

Filed under: C# 1 Comment