Deserialized: The Ramblings of a Web Architect

31 Dec 2010

ASP.NET MVC 3, Razor, & RenderAction causing server to hang and crash

Posted by Bryan Migliorisi

I am working on a new project and decided to use the awesome new ASP.NET MVC 3 (RC2) framework with Razor templating.  Razor is awesome.  It's clean and gets the job done nicely, but when I tried to render an action into my view, all hell broke loose.

This is what my controller looked like:

public class MyController : BaseController
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult ProductDetails(String id)
    {
        return View();
    }
}

And in my view I had this:

@{Html.RenderAction("ProductDetails", "My", new { id = ViewBag.Id});}

But whenever I would navigate to a URL that corresponded to a view containing this RenderAction snippet, the browser would hang and the server would stop responding completely.  I double checked my code, I restarted the server, I restarted the browser, I restarted the IDE.  I tried everything, but no luck.

Problem Solved

It has something to do with Razor's layouts and what I believe is an infinite loop.  I think what is happening is that the action being rendered via RenderAction is also trying to load the base template specified in _ViewStart.cshtml, which renders the body, which calls RenderAction again… and this continues infinitely.

The solution is either to tell your view NOT to use a layout or to change your action's return type from ActionResult to PartialViewResult, as sketched below.  I chose the latter, and once I restarted the server again, everything started working correctly.
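Here is a minimal sketch of that fix applied to the controller above – a PartialViewResult does not pull in the layout from _ViewStart.cshtml, which breaks the cycle:

public PartialViewResult ProductDetails(String id)
{
    // PartialView() renders ProductDetails.cshtml without a layout,
    // so _ViewStart.cshtml never re-enters the page being rendered.
    return PartialView();
}

The other fix works the same way from the layout's point of view: putting @{ Layout = null; } at the top of the child view stops it from loading the layout again.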

12 Jul 2010

ObjectIds with MongoDB and the mongodb-csharp driver

Posted by Bryan Migliorisi

I must say, the latest release of mongodb-csharp is rather awesome.  Typed collections and LINQ support mean I can worry more about my application than about the data layer.

Here is an example of using typed collections:

public class Customer
{
	public Oid Id { get; set; }
	public string Name { get; set; }
	public CustomerBillingInfo Billing { get; set; }
	public List<string> Depts { get; set; } // the element type was eaten by the blog's markup; string is a guess
}

public void AddCustomer(Customer customer) {
	// ...(code removed for simplicity)...
	IMongoCollection<Customer> collection = database.GetCollection<Customer>();
	collection.Save(customer);
}

Gotcha: Beware of the ID property!

One thing that threw me off when I began using the typed collections was that I had defined my ID property as “_id” because that is what MongoDB uses internally.  While this made sense to me, the mongod process kept throwing errors whenever I tried the following:

// ...(code removed for simplicity)...
IMongoCollection<Customer> collection = database.GetCollection<Customer>();
Customer customer = collection.Linq().First(c => c.Name == "test customer");
customer.Name = "A new name!";
collection.Save(customer);

The error looked something like this:

Fri Jun 25 10:43:14 Exception 11000:E11000 duplicate key error index: test06.Customer.$_id_  dup key: { : ObjId(4c24c07a189cf31bd4000002) }
Fri Jun 25 10:43:14    Caught Assertion in insert , continuing
Fri Jun 25 10:43:14 insert test06.Customer exception userassert:E11000 duplicate key error index: test06.Customer.$_id_  dup key: { : ObjId(4c24c07a189cf31bd4000002) } 21ms

It was driving me crazy for days before I realized that the driver was doing some magic – the POCO object needed its ID property to be named “Id” instead of “_id”, and as soon as I changed that, it started working properly.
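Sketched as a before and after (my reading of what was going on; the property name is the only thing that changes):

// Before: the driver did not recognize "_id" as the document's identity,
// so Save() issued an insert instead of an update, and mongod rejected
// the duplicate _id with the E11000 error above.
// public Oid _id { get; set; }

// After: a property named "Id" is mapped to MongoDB's "_id" field,
// and Save() updates the existing document as expected.
public Oid Id { get; set; }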

25 Feb 2010

Convert C# classes to and from MongoDB Documents automatically using .NET reflection

Posted by Bryan Migliorisi

There are a number of C#-based MongoDB projects being actively developed right now, but one thing that I needed was a way to convert a standard C# class to a MongoDB document for easy insertion.  It isn't hard to type out and set each property by hand, but it certainly is not the most efficient way, especially when you know you are going to be doing it a lot.

 

For example:

Let's say I have a class called SomeClass that looks something like this:

class SomeClass {
	public string StringTest;
	public int IntTest;
}

And somewhere in my code, I have an instance of this class named someClassInstance.  If I want to create a MongoDB document from this class, I’d have to do something like this:

Document document = new Document();
document.Add("StringTest", someClassInstance.StringTest);
document.Add("IntTest", someClassInstance.IntTest);

So that isn't such a big deal, right?  But what about when I have a class with many more properties?  Then it starts to get messy and cumbersome.  I thought that there should be a straightforward way to easily convert any class to a mongodb-csharp compatible Document object.  (I am using Sam Corder's mongodb-csharp driver, so I am targeting the Document object from that library.)

Default values

I also wanted a way to specify the default value for each class property so that when we did the conversion, we would (hopefully) not end up with any null values.  Plus, if for some reason there was a document in MongoDB that was missing a particular key-value pair, the DocumentConverter would automatically fill in that empty field with the default value, so in the code we should never see any nulls.

This is something that I would like for my own purposes and may not suit everyone’s needs.  If it doesn't, simply leave off the DefaultValueAttribute and you’ll never know the difference.

My proposed solution

I figured the easiest way to accomplish this was to create a class that would encapsulate all the functionality needed to convert to and from Document objects, and have my other classes inherit from it.  I imagined that the above code would change to something like this:

class SomeClass : DocumentConverter {
	[Attributes.DefaultValue("Default StringTest value!")]
	public string StringTest;
	[Attributes.DefaultValue(16)]
	public int IntTest;
}

And the conversion itself would be very simple.  Converting from SomeClass to a Document would be:

Document document = someClassInstance.ToMongoDocument();

Converting from a Document object back to SomeClass would be:

SomeClass someOtherClassInstance = new SomeClass();
someOtherClassInstance.FromMongoDocument(someDocumentObject);

The DefaultValueAttribute class

using System;
namespace MyApp.Classes.Attributes
{
    [AttributeUsage(AttributeTargets.Field | AttributeTargets.Property, AllowMultiple = false)]
    class DefaultValueAttribute : Attribute
    {
        private readonly object _value;

        public DefaultValueAttribute(object value)
        {
            _value = value;
        }

        public object GetDefaultValue()
        {
            return _value;
        }
    }
}

The DocumentConverter class

Reflection isn't something that I use too often so there may be better ways of accomplishing what I am trying to do, but this is what I’ve got for now.  If there are better ways, please let me know.  Without further ado…

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text;
using MyApp.Classes.Attributes;
using MongoDB.Driver;

namespace MyApp.Classes
{
    public class DocumentConverter
    {
        public void FromMongoDocument(Document document)
        {
            foreach (DictionaryEntry kvp in document)
            {
                object propertyValue;
                if (kvp.Value != null && (kvp.Value.GetType() == typeof(Document)))
                {
                    // We have a document object - Now lets get a reference to the class property's type
                    var propertyType = GetType().GetProperty(kvp.Key.ToString()).PropertyType;

                    // create new instance of that class
                    var propertyInstance = Activator.CreateInstance(propertyType);

                    // call FromMongoDocument on that class and pass in the document
                    MethodInfo method = propertyInstance.GetType().GetMethod("FromMongoDocument");
                    method.Invoke(propertyInstance, new[] { kvp.Value });

                    propertyValue = propertyInstance;
                }
                else
                {
                    // This is not a Document so lets just assign the value
                    propertyValue = kvp.Value;
                }

                GetType().GetProperty(kvp.Key.ToString()).SetValue(this, propertyValue, null);
            }

        }

        public Document ToMongoDocument()
        {
            Document document = new Document();

            foreach (PropertyInfo property in GetType().GetProperties())
            {
                // Get the value of this property
                object propertyValue = property.GetValue(this, null);

                // If this value is null, then lets try to see if there is a default value attribute and assign that
                if (propertyValue == null)
                {
                    object[] attributes = property.GetCustomAttributes(typeof(DefaultValueAttribute), true);
                    foreach (DefaultValueAttribute defaultValue in attributes.Cast<DefaultValueAttribute>())
                    {
                        propertyValue = defaultValue.GetDefaultValue();
                    }
                    document.Add(property.Name, propertyValue);
                }
                else
                {
                    // We have a property, now lets see if this property has a ToMongoDocument method
                    MethodInfo method = propertyValue.GetType().GetMethod("ToMongoDocument");

                    if (method == null)
                    {
                        document.Add(property.Name, property.GetValue(this, null));
                    }
                    else
                    {
                        document.Add(property.Name, method.Invoke(propertyValue, null));
                    }
                }
            }
            return document;
        }
    }
}
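To round things out, here is a quick usage sketch.  This is my own example rather than code from the project, and it assumes the class exposes auto-properties, since the converter reflects over GetProperties():

using MongoDB.Driver;
using MyApp.Classes;
using MyApp.Classes.Attributes;

public class Widget : DocumentConverter
{
    [DefaultValue("unnamed")]
    public string Name { get; set; }
}

public static class WidgetDemo
{
    public static void Run()
    {
        var widget = new Widget();                // Name is null here

        // ToMongoDocument() sees the null, finds the DefaultValueAttribute,
        // and writes "Name" = "unnamed" into the Document.
        Document doc = widget.ToMongoDocument();

        // FromMongoDocument() copies each key-value pair back onto the
        // matching property, so the round-tripped Name is "unnamed".
        var roundTripped = new Widget();
        roundTripped.FromMongoDocument(doc);
    }
}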

That’s all for now

I hope this is useful for someone.  It is a rough draft of what I threw together last night at around 1AM while half asleep.  So far, it has passed all of my initial tests but if you have suggestions to make it better, please leave some comments here.

23 Feb 2010

The Current State of MongoDB and C#

Posted by Bryan Migliorisi

As a C# developer, I am often disappointed with the lack of drivers and connectors to cool services like MongoDB.  All the cool languages (and Java) get all the love but C# is often an afterthought.

Luckily for me, there are some kickass developers in the C# community who also share my frustration and as such, they have begun building their own C# MongoDB drivers.

I keep stumbling across more and more C# related MongoDB projects, so I figured I would write up a list and some short descriptions of these projects.

List of C# MongoDB Projects

Each of these projects is still rather new, so expect some features to be missing or not fully functional.  A couple of them are usable in your projects today, while the rest are still under heavy development.

mongodb-csharp

Originally written by Sam Corder (@SamCorder) with help from a handful of contributors, this is the most complete driver of the bunch.  It has been evolving quickly, and Sam & team are very quick to resolve any bugs that may arise.

I am using this driver in 2 projects that I am working on and so far things have been great.  It even includes GridFS support.

From the project description:

Current Features

  • Connect to a server.
  • Query
  • Insert
  • Update
  • Delete
  • All BSON types supported
  • DBRef support
  • Isolation and conversion between BSON types and native .net types.
  • Database, Collection and Cursor objects.
  • Index handling routines (List, Create, Drop)
  • Count
  • Roughly 80% unit test coverage. This can and will be improved on.
  • Paired connections
  • Authentication (Does not reauthorize on auto reconnect yet).
  • Database Commands
  • Basic Linq support
  • GridFS support
  • Map Reduce helpers.
  • hint, explain, $where

They are currently working on connection management features (auto reconnect, connection pooling, etc).

Get involved or check out the code at their mongodb-csharp project page on Github.

mongodb-net

Written by an unnamed developer at DevFuel.com, the mongodb-net project aims to be a C# port of the official 10gen MongoDB Java driver.  While a lot of work has been done and a load of code written, it is not yet usable.  A significant amount of progress has been made over the past couple of weeks and functionality is beginning to come together, but a fully functional state still seems months away.

I would keep an eye on this project, though, as having an API compatible with the official Java driver has its benefits.

Get involved or check out the code at their mongodb-net project page on Google Code.

MongoDB.Emitter

Andrew Rondeau’s MongoDB.Emitter is a pretty cool project that provides a strongly-typed Document mapper for C#.  It works in conjunction with Sam Corder’s mongodb-csharp driver allowing the programmer to define strongly typed interfaces and properties.

I have not tried this yet, but this will be on my list of things to check out.

Get involved or check out the code at their MongoDB.Emitter project page on bitbucket.

CSMongo

Hugo Bonacci (@hugoware) has been working on a driver of his own called CSMongo.  CSMongo doesn't support everything that mongodb-csharp does, but it does have some interesting features, such as its approach to creating Mongo Documents.  That approach definitely has a more dynamic feel to it, which fits nicely in the unstructured MongoDB world.

I am looking forward to the next version which should have more features including support for Hugo’s own jLinq.

Code doesn't appear to be released yet but you can follow his progress at his blog, Hugoware.

simple-mongodb

Simple-mongodb is another project without public source, being worked on by Daniel Wertheim (@danielwertheim).  I am not sure if it is even being actively developed, but it too has some nice ideas.  The goal of this project is to keep the driver JSON-centric, and it should be compatible with Newtonsoft's awesome JSON.NET library.

They have a few examples of the proposed API but no code has been released to make said examples work.  This is another one to keep an eye on in the meantime.

Check out the simple-mongodb project page on Google Code.

DocumentConverter

This is a small class I wrote that works with mongodb-csharp.  Its name will likely change at some point if and when it gets packaged up and put on source control.  It exposes two functions that will allow any C# class to convert to and from a MongoDB Document object automatically.  It makes my life a lot easier and it uses System.Reflection to do this.

Read more about DocumentConverter on this blog post.

Conclusion

Well, it looks like there is a significant amount of interest in MongoDB from the C# community, which is great news because it means MongoDB is going to continue to thrive and grow.  My bet is that Sam Corder's driver will become the most common C# driver, simply because it is so far ahead of the rest, but time will tell.  Extensions of Sam's project, such as MongoDB.Emitter, are just as cool as the drivers they are built on.

Thanks to everyone who has contributed to these drivers.  Each of them has some great concepts, and I hope that one day we will have a driver that supports all these great ideas and features.

If there are any more projects that I have missed – let me know in the comments!

11 Nov 2009

Gotcha: 32-bit applications may not be able to see files on 64-bit Windows

Posted by Bryan Migliorisi

This one really threw me off tonight.  I am running Windows Server 2008 on one of my boxes and I was trying to set up some advanced URL routing on IIS7.  IIS7 Manager has a very nice, easy-to-use GUI, but I prefer working directly in the configuration files.

I fire up my Notepad++ and attempt to open a file through the file browser.  I navigate to the IIS config folder (c:\windows\system32\inetsrv\config\) and I see an empty directory.  Huh? How is that possible?!

Now I switch over to Windows Explorer and go to the same folder as above and to my disbelief… there are all the config files.  Ok now I am truly confused!

Conclusion

While I could not find anything from Microsoft about this at the time, my findings are that 32-bit applications cannot see this part of the file system.  I installed a few applications that I know are only 32-bit to verify this, and sure enough they all suffered from the exact same issue.

Files simply do not show up, and if you attempt to open one directly (because you do know the full path and filename), the application is told that the file does not exist.  The culprit is WoW64 file system redirection: on 64-bit Windows, requests from 32-bit processes for %windir%\System32 are transparently redirected to %windir%\SysWOW64, so the 64-bit-only inetsrv\config folder is invisible to them.
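If you need to reach the real System32 from your own 32-bit code, here is a small sketch of the workaround, assuming .NET and Windows Vista/Server 2008 or later (the virtual Sysnative alias only exists for 32-bit processes on 64-bit Windows):

using System;
using System.IO;

class SysnativeDemo
{
    static void Main()
    {
        string windir = Environment.GetEnvironmentVariable("windir");

        // 32-bit processes asking for System32 get SysWOW64 instead;
        // the virtual "Sysnative" alias bypasses the redirector and
        // reaches the real 64-bit System32.
        string configDir = Path.Combine(windir, @"Sysnative\inetsrv\config");

        foreach (string file in Directory.GetFiles(configDir))
            Console.WriteLine(file);
    }
}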

8 Sep 2009

Reverse Proxy Performance – Varnish vs. Squid (Part 2)

Posted by Bryan Migliorisi


In part one of this series I tested the raw throughput performance of Varnish and Squid.  My results are consistent with all the blogs and comments floating around the blogosphere – Varnish blows away Squid.

Unfortunately, the first series of tests was somewhat uninformative.  Since it only tested the raw performance of serving cached content from memory, it did not mimic a real world scenario of serving cached content alongside fetching content from the backend and caching it.

While we would hope for a primed, full cache, it is unlikely to happen and you will undoubtedly see a decent amount of backend requests from your caching proxy.

A better test of the two proxies would involve a large set of random URLs, but not too random because we want to simulate both cache hits and cache misses.  To accomplish this, I wrote a small PHP script that would take two parameters: total number of URLs to generate and the hostname for those URLs.

Generating a usable URL list

Generating the list is simple.  This script looks like this:

<?php
	// NOTE: the opening lines of this listing were lost when the post was
	// archived; the setup below is a reconstruction from the description above.
	$total = $_GET['total'];
	$host  = $_GET['host'];

	for ($i = 0; $i < $total; $i++) {
		$random  = rand(10, 50);   // dummy response size, in KB
		$random2 = rand(1, 100);   // filler value for the "as" parameter
		if (rand(0, 2) > 1) {
			$as = "?as=$random2";
		} else {
			$as = "";
		}
		echo "http://$host/varnish/gen/$random$as\n";
		flush(); ob_flush();
	}
	echo "http://$host/varnish/gen/$random$as";
?>

All this does is create a long list of URLs.  I used PHP's output buffering mechanisms to flush the buffer, which is necessary when creating large URL lists so that you don't wait forever for output.  Maybe it could have been written better but I don't care – that wasn't the point of this test.

The URLs that are created are in the format of:

http://host/varnish/gen/50?as=100

http://host/varnish/gen/50

This URL is mapped to another PHP file that simply generates dummy data of the size specified in the URL.  In the above cases, the responses would be 50 KB.  The query parameter “as” is just a useless piece of information whose presence tells the proxy to cache the response.  If the “as” query parameter does not exist, the proxy will forward the request to the backend and not cache it.  It's a simple way to generate both cacheable and non-cacheable URLs.

To generate the list and store it in a local file, I used this command:

curl "http://192.168.165.101/varnish/makelist.php?total=10000&host=192.168.165.104:8080" > urls-10k.txt

Verify the results of the script

For your own sanity, make sure that the script did in fact generate a list of URLs that suits your needs.

Count the number of URLs generated:

cat urls-10k.txt | wc -l

(Yes, I know it creates one extra URL… it's fine by me.)

Count the number of cacheable URLs containing the “as” query parameter:

cat urls-10k.txt | grep as | wc -l

Count the number of unique cacheable URLs:

cat urls-10k.txt | grep as | sort | uniq | wc -l

Running the tests

In part one I used ApacheBench to load the servers, but for these tests I used Siege and http_load, both of which allowed me to load URLs from a file.

I started with Varnish using the following commands:

curl "http://192.168.165.101/varnish/makelist.php?total=100000&host=192.168.165.104:8080" > urls-100k.txt
http_load -parallel 10 -fetches 100000 urls-100k.txt
http_load -parallel 25 -fetches 100000 urls-100k.txt
http_load -parallel 50 -fetches 100000 urls-100k.txt
http_load -parallel 100 -fetches 100000 urls-100k.txt
http_load -parallel 200 -fetches 100000 urls-100k.txt
http_load -parallel 400 -fetches 100000 urls-100k.txt

In between each http_load command, I restarted the Varnish service so that each test ran with an empty cache.  When I was done with the Varnish tests, I ran the same tests against Squid using the same commands above.

The results

The results of these tests represent the typical web application much better than the original tests did.

This first graph shows the average time for the proxy to accept a connection.  As concurrency goes up, it is expected that the time to connect would go up too.  Squid suffers more than Varnish does, but the difference is negligible.

[Graph: average connect time vs. concurrent connections]

The second graph is much more interesting.  As concurrency goes up, the Time-To-First-Byte for Squid goes up very sharply while Varnish holds its ground and remains very quick around 25ms.

[Graph: time to first byte vs. concurrent connections]

This third graph shows another interesting behavior.  As concurrency goes up, Varnish evens itself out at just under 800 fetches per second, while Squid peaks at around 1100 fetches per second at around 50 concurrent connections and then drops off sharply as concurrency goes up.

[Graph: fetches per second vs. concurrent connections]

Conclusion

Squid versus Varnish is just another holy war that may never end.  The tests that I have performed have been very helpful for me and my team but your results may vary.  Of course, there are many more things to consider and I plan to write about some of the major differences between Squid and Varnish.

My results show that in raw cache hit performance, Varnish puts Squid to shame.  In real world scenarios I found that Squid can hold its own when dealing with small amounts of traffic, but its performance drops off very sharply as it begins to handle more connections.  Varnish handles them without breaking a sweat, as it was designed to do.

My next blog post will detail the differences between Varnish and Squid’s architecture, features, and the reasons I am pushing for Varnish in our environment.

Edit:

Some people have complained in comments on Reddit and Hacker News that I did not provide any information about the hardware or operating system for my tests.  This information was posted in part one of this series.

2 Sep 2009

Reverse Proxy Performance – Varnish vs. Squid (Part 1)

Posted by Bryan Migliorisi

Typical web applications require dozens of SQL queries to generate a single page.  When your application is serving over 1,000,000 pages per day, you quickly realize that the performance bottleneck is your database.  The typical answer to slow database queries is “just use memcached!”  But memcached and other data caches can only take you so far.  This is where reverse proxies come in.  There are a handful of them out there, including Nginx, Perlbal, Squid, and Varnish.  Which to use is up to you.

 

Deciding what is best for you

Assuming that you have taken a step back and really analyzed your problem first, the next step is to analyze the possible solutions.  For us, Varnish seems like the best option with Squid close behind.  To be fair, I’ve set up a test server with both Varnish and Squid running.  I’ll use ApacheBench to generate load and requests.

I’ve analyzed our pages to see what the typical page size is and recorded the average page sizes for 5 different page types.  They range from around 10 KB to 35 KB (gzipped).  For my test, I’ll be benchmarking with 10 KB, 15 KB, 20 KB, 25 KB, 30 KB, 40 KB, and 50 KB files to get a good range of request sizes.

To test under different load capacities, I’ll use ApacheBench to generate loads with different amounts of concurrent users ranging from 10 to 400.

 

The test

I’ll be using two identical machines on the same local class C network to eliminate (as much as possible) network latency. 

The machines look something like this:

  • Pentium 4 3GHz (8KB Level 1, 512KB Level 2)
  • 2GB (4x512 DDR 400MHz)
  • 120GB ATA Western Digital Caviar WD1200JB
  • CentOS 5

(I don't have more information than that.  Suffice it to say that it is a few years old and not very powerful.)

I am using Varnish 2.0.4 and Squid 2.6.STABLE21.  There are newer versions of Squid, but I am using this one because the 3.x branch is missing features found in the 2.x branch and I have read several reports of 2.7 crashing.

 

The command to run the load test looks something like this:

ab -c concurrent_users -n total_requests "url"

This will let you specify how many concurrent users to run and how many requests to make.  I have the proxy servers running on ServerA and I run the benchmark from ServerB.

 

The results

In general, Varnish seems to perform twice as well as Squid does.  In every test, Varnish serves nearly 2x more requests per second and has half the average response time.

Requests per second and average time per request in milliseconds (the “(all)” column is the mean across all concurrent requests), for Varnish (V) and Squid (S):

File size  Users  (V) Req/s  (V) ms/req (all)  (V) ms/req  (S) Req/s  (S) ms/req (all)  (S) ms/req
10k        10     6592       0.152             1           3078       0.325             3
10k        25     6915       0.145             3           3568       0.280             7
10k        50     7071       0.141             7           3539       0.283             14
10k        100    6860       0.146             13          3565       0.280             28
10k        200    7252       0.138             27          3506       0.285             57
10k        400    7181       0.139             56          3518       0.284             113

15k        10     4636       0.216             2           2949       0.339             3
15k        25     5954       0.168             4           3168       0.316             7
15k        50     6036       0.166             8           3118       0.321             16
15k        100    6060       0.165             16          3247       0.308             30
15k        200    6066       0.165             32          3226       0.310             61
15k        400    6048       0.165             66          3092       0.323             129

20k        10     4689       0.213             2           2553       0.392             3
20k        25     5342       0.187             4           2675       0.374             9
20k        50     5422       0.184             9           2799       0.357             17
20k        100    5446       0.184             18          2861       0.349             34
20k        200    5430       0.184             36          2795       0.358             71
20k        400    5400       0.185             74          2656       0.376             150

25k        10     4135       0.242             2           2331       0.429             4
25k        25     4485       0.223             5           2308       0.433             10
25k        50     4488       0.223             11          2221       0.450             22
25k        100    4446       0.225             22          2217       0.451             45
25k        200    4311       0.232             46          2180       0.459             91
25k        400    4160       0.240             96          2026       0.493             197

30k        10     3463       0.289             2           1936       0.516             5
30k        25     3689       0.271             6           2002       0.499             12
30k        50     3661       0.273             13          1887       0.530             26
30k        100    3627       0.276             27          1778       0.562             56
30k        200    3589       0.279             55          1746       0.573             114
30k        400    3541       0.282             112         1798       0.556             222

40k        10     2752       0.363             3           1602       0.624             6
40k        25     2824       0.354             8           1584       0.631             15
40k        50     2826       0.354             17          1492       0.670             33
40k        100    2827       0.354             35          1551       0.645             64
40k        200    2822       0.354             70          1538       0.650             130
40k        400    2794       0.358             143         1372       0.728             291

50k        10     2254       0.443             4           1401       0.713             7
50k        25     2265       0.441             11          1379       0.725             18
50k        50     2266       0.441             22          1368       0.731             36
50k        100    2268       0.441             44          1360       0.735             73
50k        200    2266       0.441             88          1230       0.813             162
50k        400    2267       0.441             176         1216       0.822             328

Here are the graphs of the above data for easier visualization:

[Graphs: requests per second and average request time for each file size, omitted]

Something is wrong here

These are simply benchmarks and are not meant to represent real world scenarios for a few reasons.  Most importantly, this test takes place on a local network that goes through one router.  Running this test on a local network does not take into consideration the typical network latency you would find across the internet.

Secondly, this test only illustrates the raw speed of serving up cached content which isn’t a typical real world scenario.  To really test the overall performance of both of these, we need to simulate the three major steps of a reverse proxy:

  1. Forwarding a request to a backend server
  2. Physically caching it (memory or disk)
  3. Serving the cached data

Testing any of these three steps is good, and shows the raw performance of that function but it doesn’t give us a general overview of the overall performance.

 

Next Steps

I need to come up with a way to generate load on the server such that it represents the typical flow of requests that we would normally see on a server.  I am running this on a test server, not against production data, so if anyone has an idea of how I can do this, please do let me know.  The results of this test will be Part 2 of this post.

Additionally, please let me know if you spot inefficiencies in my testing methodology. I don’t claim to be a load testing expert so any advice you can offer is appreciated.

20 Aug 2009

Some thoughts on string concatenation in C#

Posted by Bryan Migliorisi

I recently stumbled into a blog entry on CodeProject.com that stated some things about string concatenation in C# that went against what I thought to be true, and it got me thinking.

From the article:

string sentence = "The " + "dog " + "ate " + "the " + "cat " + "all " + "day " + "for " + "for " + "fun.";

“That innocent looking line of code actually takes up much more processing power and memory than it appears to. If strings were combined in the ideal way, you would expect that the sentence would be the only string created from this operation. However, since each string is combined to its neighbor in succession, it turns out that 7 other strings are also created (shown in gray in the diagram above). The total amount of unnecessary memory allocations created from this operation is equal to the following equation, where N is the number of strings you are combining…”

But I knew that couldn’t be true, so I fired up Visual Studio, wrote a few simple tests, compiled them, and then analyzed the generated CIL.

First test:

Original C#:

public string createStringOne()
{
    return "The " + "dog " + "ate " + "the " + "cat " + "all " + "day " + "for " + "for " + "fun.";
}

Generated CIL:

.method public hidebysig instance string createStringOne() cil managed
{
    .maxstack 8
    L_0000: ldstr "The dog ate the cat all day for for fun."
    L_0005: ret
}

Second test:

Original C#:

public string createStringTwo()
{
    return "The dog ate the cat all day for for fun.";
}

Generated CIL:

.method public hidebysig instance string createStringTwo() cil managed
{
    .maxstack 8
    L_0000: ldstr "The dog ate the cat all day for for fun."
    L_0005: ret
}

Third test:

Original C#:

public string createStringThree()
{
    var sb = new StringBuilder();
    sb.Append("The ");
    sb.Append("dog ");
    sb.Append("ate ");
    sb.Append("the ");
    sb.Append("cat ");
    sb.Append("all ");
    sb.Append("day ");
    sb.Append("for ");
    sb.Append("for ");
    sb.Append("fun.");
    return sb.ToString();
}

Generated CIL:

.method public hidebysig instance string createStringThree() cil managed
{
    .maxstack 2
    .locals init (
        [0] class [mscorlib]System.Text.StringBuilder sb)
    L_0000: newobj instance void [mscorlib]System.Text.StringBuilder::.ctor()
    L_0005: stloc.0
    L_0006: ldloc.0
    L_0007: ldstr "The "
    L_000c: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0011: pop
    L_0012: ldloc.0
    L_0013: ldstr "dog "
    L_0018: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_001d: pop
    L_001e: ldloc.0
    L_001f: ldstr "ate "
    L_0024: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0029: pop
    L_002a: ldloc.0
    L_002b: ldstr "the "
    L_0030: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0035: pop
    L_0036: ldloc.0
    L_0037: ldstr "cat "
    L_003c: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0041: pop
    L_0042: ldloc.0
    L_0043: ldstr "all "
    L_0048: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_004d: pop
    L_004e: ldloc.0
    L_004f: ldstr "day "
    L_0054: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0059: pop
    L_005a: ldloc.0
    L_005b: ldstr "for "
    L_0060: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0065: pop
    L_0066: ldloc.0
    L_0067: ldstr "for "
    L_006c: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_0071: pop
    L_0072: ldloc.0
    L_0073: ldstr "fun."
    L_0078: callvirt instance class [mscorlib]System.Text.StringBuilder [mscorlib]System.Text.StringBuilder::Append(string)
    L_007d: pop
    L_007e: ldloc.0
    L_007f: callvirt instance string [mscorlib]System.Object::ToString()
    L_0084: ret
}

Fourth test:

Original C#:

public string createStringFour()
{
    return new StringBuilder("The dog ate the cat all day for for fun.").ToString();
}

Generated CIL:

.method public hidebysig instance string createStringFour() cil managed
{
    .maxstack 8
    L_0000: ldstr "The dog ate the cat all day for for fun."
    L_0005: newobj instance void [mscorlib]System.Text.StringBuilder::.ctor(string)
    L_000a: callvirt instance string [mscorlib]System.Object::ToString()
    L_000f: ret
}

Fifth test:

Original C#:

public string createStringFive()
{
    string s = "The ";
    s += "dog ";
    s += "ate ";
    s += "the ";
    s += "cat ";
    s += "all ";
    s += "day ";
    s += "for ";
    s += "for ";
    s += "fun.";
    return s;
}

Generated CIL:

.method public hidebysig instance string createStringFive() cil managed
{
    .maxstack 2
    .locals init (
        [0] string s)
    L_0000: ldstr "The "
    L_0005: stloc.0
    L_0006: ldloc.0
    L_0007: ldstr "dog "
    L_000c: call string [mscorlib]System.String::Concat(string, string)
    L_0011: stloc.0
    L_0012: ldloc.0
    L_0013: ldstr "ate "
    L_0018: call string [mscorlib]System.String::Concat(string, string)
    L_001d: stloc.0
    L_001e: ldloc.0
    L_001f: ldstr "the "
    L_0024: call string [mscorlib]System.String::Concat(string, string)
    L_0029: stloc.0
    L_002a: ldloc.0
    L_002b: ldstr "cat "
    L_0030: call string [mscorlib]System.String::Concat(string, string)
    L_0035: stloc.0
    L_0036: ldloc.0
    L_0037: ldstr "all "
    L_003c: call string [mscorlib]System.String::Concat(string, string)
    L_0041: stloc.0
    L_0042: ldloc.0
    L_0043: ldstr "day "
    L_0048: call string [mscorlib]System.String::Concat(string, string)
    L_004d: stloc.0
    L_004e: ldloc.0
    L_004f: ldstr "for "
    L_0054: call string [mscorlib]System.String::Concat(string, string)
    L_0059: stloc.0
    L_005a: ldloc.0
    L_005b: ldstr "for "
    L_0060: call string [mscorlib]System.String::Concat(string, string)
    L_0065: stloc.0
    L_0066: ldloc.0
    L_0067: ldstr "fun."
    L_006c: call string [mscorlib]System.String::Concat(string, string)
    L_0071: stloc.0
    L_0072: ldloc.0
    L_0073: ret
}

Results:

So as you can see, the first two methods compile to exactly the same CIL!  Building one large string from several smaller literals costs nothing extra, as long as it happens in a single expression.  If the concatenation is done across multiple statements, as in the fifth test, then it does in fact incur a performance and memory hit: one String.Concat call (and one intermediate string) per statement.

The use of a StringBuilder in this case (where there is only a small handful of small strings) serves no purpose as far as performance is concerned.

But… how are they equal!?

Compiler optimizations!  The C# compiler is an incredible piece of software that can look at your code and “fix” it.  When working on code optimizations like the one in the article mentioned above, it is important that developers take into consideration the compiler optimizations that may alter the code they think is poorly written.
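One more case worth noting (my own sketch, in the same style as the tests above): the literal folding only applies to compile-time constants, but even with variables a single concatenation expression avoids the intermediate strings – the compiler emits one n-ary String.Concat call instead of a chain of pairwise calls.

public string createStringSix(string a, string b, string c, string d)
{
    // Compiles to a single call:
    //   call string [mscorlib]System.String::Concat(string, string, string, string)
    // rather than three pairwise Concat calls, so only the final
    // string is allocated.
    return a + b + c + d;
}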

Filed under: C#
19 Aug 2009

Comparing while() loops with for() loops in C#

Posted by Bryan Migliorisi

There is always a lot of debate about the speed and performance of loops in any language.  I was curious to see what the differences were between a for() loop and a while() loop in C# and ultimately the .NET CLR.

What I found was no big surprise to me, as it simply confirms what I have been reading for many years: while() loops compile to smaller, less complex CIL than for() loops.  I did a very simple test to verify this.  I created two loops with empty bodies.  You can see them below:

public void doFor()
{
    int i = 0;
    int len = 100;

    for (i = 0; i < len; i++)
    {

    }
}
public void doWhile()
{
    int i = 100;
    while (i-- == 0)
    {

    }
}

As you can see, both of these loops are as basic as they can be and have no inner bodies to complicate the CIL.  Let's take a look at the generated CIL for each of them.

The generated CIL for a for() loop

.method public hidebysig instance void doFor() cil managed
{
    .maxstack 2
    .locals init (
        [0] int32 i,
        [1] int32 len)
    L_0000: ldc.i4.0
    L_0001: stloc.0
    L_0002: ldc.i4.s 100
    L_0004: stloc.1
    L_0005: ldc.i4.0
    L_0006: stloc.0
    L_0007: br.s L_000d
    L_0009: ldloc.0
    L_000a: ldc.i4.1
    L_000b: add
    L_000c: stloc.0
    L_000d: ldloc.0
    L_000e: ldloc.1
    L_000f: blt.s L_0009
    L_0011: ret
}

The generated CIL for a while() loop

.method public hidebysig instance void doWhile() cil managed
{
    .maxstack 3
    .locals init (
        [0] int32 i)
    L_0000: ldc.i4.s 100
    L_0002: stloc.0
    L_0003: ldloc.0
    L_0004: dup
    L_0005: ldc.i4.1
    L_0006: sub
    L_0007: stloc.0
    L_0008: brfalse.s L_0003
    L_000a: ret
}

The results are in and there is no surprise.

As you can see, the generated CIL for a for() loop is larger by 6 instructions. Furthermore, in this code the for() loop initializes 2 local variables rather than 1 (but this may vary depending on your code).

What does it all mean?

Probably nothing.  6 more instructions and an additional variable declaration will in most cases have no measurable impact on performance, even if you are measuring in microseconds.  I am curious to see what the difference between the loops above would be in other languages.

Filed under: C#
18 Aug 2009

WordPress 2.8 Pretty URLs and IIS6

Posted by Bryan Migliorisi

[Image: WordPress logo]

For my first blog post, I thought it would be a good idea to write about the experience and process of setting up WordPress 2.8 on my Windows Server 2003 box with IIS6.  I wanted to enable pretty links (you know, without the ugly index.php prefix), but IIS6 does not support this sort of behavior out of the box.

Yes, I know - WordPress and PHP belong on Linux but I prefer to think of myself as a Windows developer (C#, ASP.NET), though I work full time developing with PHP, Java\JSP, and JavaScript.

Installing WordPress is pretty straightforward, so I won't bother getting into the easy stuff.  It is just a matter of extracting the files and creating a new web site in the IIS Manager with script execution permissions.  I am going to assume that you already have IIS and PHP running.  If you do not, read up on it at the IIS6 FastCGI forums on IIS.net.  You'll find that Windows handles PHP and WordPress just fine, but you'll be missing one major Linux\Apache feature - mod_rewrite.

mod_rewrite is the module that allows you to create those fancy "pretty URLs" that everyone loves so much (yes, I love them too).  IIS6 does not support this out of the box so we will need to install an ISAPI filter to accomplish this for us.  My favorite is the open-source Codeplex project IIRF.  I've used it in a number of projects and it always works out well for me.

For the remainder of this post I am going to assume that you already have a working WordPress installation.  On to the fun part...

Installing IIRF in IIS6

Download the latest build of IIRF from Codeplex and extract it to somewhere on your hard drive.  I usually have the following structure:

C:\inetpub\wwwroot\sitename.com
C:\inetpub\wwwroot\sitename.com\www
C:\inetpub\wwwroot\sitename.com\iirf

I do this to separate the web content from the ISAPI filter.  When you extract the archive, you'll find there are a ton of files in there.  Most of them are for testing your rules but you only need the following two files:

IsapiRewrite4.dll
IsapiRewrite4.ini

The DLL is the actual ISAPI filter and is the equivalent of mod_rewrite on Apache and the INI file is the configuration file which is the equivalent of an .htaccess file on Apache.

Once you have the files extracted in the correct location you must enable the filter in IIS6.  Start by opening up the IIS Manager.  (I'll assume you already know how to do this.)  On the left side, click on the [+] sign to expand the list.  You should see Application Pools, Web Sites, and Web Service Extensions.

Enabling the ISAPI filter is a two step process.  The first step is to add it as a Web Service Extension so click on that list item on the left.  The list of installed ISAPI filters will be shown on the right side.  Click on "Add a new Web service extension" and you'll see the screen below:

new-web-service-extension

You may enter any name, but I would recommend that you enter something that identifies this filter so that you can easily remember what it is for.  A good idea is something like "IIRF for MySite.com".  Once you have chosen a name, you must add the required files.  Click on Add and then click on Browse.  Navigate to the folder where you extracted the IsapiRewrite4.dll file and select it.  Press OK.  You should now be looking at the "New Web Service Extension" window again, with one file listed in the "Required files" list.  Check "Set extension status to Allowed" and press OK.

The ISAPI filter is now allowed to run on this web server.  For the second step, you'll need to tell IIS which sites will actually execute this filter.  Expand the "Web Sites" list by pressing the [+] button on the left.  Right click on the web site that you want to enable IIRF for and click Properties.  Select the "ISAPI Filters" tab and click Add.  For filter name you may enter anything you want.  I usually choose "IIRF" but you can enter whatever you want.  Next click on Browse and once again navigate to the same  IsapiRewrite4.dll file that you just selected in Step 1. Press OK to return to the web site properties window.  You should now see the filter listed in the filters list.  Assuming it all went according to plan, press OK again to return to the IIS Manager.

You may need to restart IIS at this point so the server loads the new ISAPI filters.

Setting up IIRF rewrite rules

This is the easy part.  IIRF supports many (but not all) of the same rewrite directives that mod_rewrite does.  It is worth noting that mod_rewrite is incredibly powerful and you can usually find rules for doing all sorts of cool things but they might not always work with IIRF due to lack of support with some mod_rewrite directives.  IIRF is still under development so the future may hold some good news, but your mileage may vary.

IsapiRewrite4.ini must reside in the same folder as IsapiRewrite4.dll.  If it does not exist, create the file now.  To enable pretty URLs for WordPress all we need to do is copy the contents of a valid .htaccess file from a typical Linux\Apache WordPress installation, which I have pasted here:

RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

These rules pass any request that does not match an existing file or directory through to index.php, which is exactly what WordPress's front controller expects.  Save IsapiRewrite4.ini and then head over to your WordPress Administration panel.

Enabling Pretty URLs in WordPress

Log in to your WordPress Administration Panel and click on Settings and then on Permalinks.  Click on "Custom Structure" and enter the following:

/%postname%/

Now press Save.  At this point, you should be ready to go.  Head over to your blog and refresh it.  Click on some articles and look at the URL.  You should now have pretty, clean URLs.

Filed under: IIS, wordpress