Running with Code: Like with scissors, only more dangerous

10 Aug 2010

C# 4.0 and Dynamic Programming @ AZDNUG

Posted by Rob Paveza

I’m officially posting my code samples and slides BEFORE the presentation tonight so that I can actually claim that they’re online.  They are downloadable at http://robpaveza.net/pub/dynamics-presentation.zip – includes the project folder, the “building-up-the-code” part (including the IronPython code), and the slides.

The project requires Visual C# 2010 to open.  Executing the Python code will require IronPython 2.6 for .NET 4.0 (available here).  You may need to modify the paths in the Python code to make it work correctly, as it relies on a few hard-coded paths; in fact, the Python code won’t run until you’ve built an executable for the Cs4Dynamics project.

27 May 2010

Launching OpenGraph.NET

Posted by Rob Paveza

Tonight I’m publishing to CodePlex a project that I’ve been working on for about a month, which I’ve called OpenGraph.NET.  It’s a C# client for Facebook’s still-new Graph API.  It currently supports regular desktop applications, web sites (using Web Forms and ASP.NET MVC), and to some extent, Silverlight.  All of the groundwork is there – it’s just going to take a bit more work to get it across the finish line.  I’m calling it version 0.9.1 “Beta”.  (Maybe I’ll come up with some clever name like “Froyo,” like the operating system on my phone.)


OpenGraph.NET’s documentation is available at http://robpaveza.net/opengraph.net/docs/ and the project can be downloaded from CodePlex at http://opengraph.codeplex.com/.  There are also a couple demos on the CodePlex site within the download.

OpenGraph.NET is licensed with the new BSD license – basically, you can use it for whatever you want, but if you hand out the project publicly, either compiled or as source code, you should include a copy of my copyright notice and license terms.  I’m not an advocate of copyleft, but I would certainly welcome patch submissions.  Over the weekend, I’ll be porting the source code repository from my web server onto CodePlex.

One more note – it IS indeed working out there.  We’re using it on a currently-undisclosed project at Terralever for an event being hosted by one of our clients, and I am using the Real Time Updates handler for it as well.

Over the coming weeks, I’ll be talking about the internals of how this works, including dynamic methods.

I’d like to mention a big thank-you to James Newton-King, for the awesome Json.NET library which is used extensively throughout OpenGraph.NET.

19 Jan 2010

Improving Performance with Dynamic Methods Part 1: The Problem Definition

Posted by Rob

One of the problems that a certain gaming community has understood over the years has been one of version checking.  A common, though now older, method of version checking in this community has been to execute a known algorithm based on a seeded value; however, the algorithm would change based on a formula sent over the wire.  For instance, suppose for every four bytes in a file, there are four state values: A, B, C, and S.  The S value is the current four bytes of the file.  The server might send the following formula as an initialization: A=A-S B=B-C C=C+A A=A+B.  In addition, it sends some startup values for A, B, and C.  This means that, for every four bytes of the file, we need to perform the math in the steps outlined in the initialization string above.

Now, one of the common ways to approach this problem has been, basically, to attack it by brute force.  We’d keep track of the state values in an array, keep track of the indices of the state values in another array offset by their letters, keep track of operators in yet another array, and finally do double-dereferencing (dereferencing the index of the state value, then actually dereferencing the state value).  So you might have code that looks like this:

foreach (step)
{
    states[Transform('S')] = ReadNext();
    foreach (string formula in list)
    {
        states[Transform(formula[0])] = DoMath(states[Transform(formula[2])], states[Transform(formula[4])], GetOperator(formula));
    }
}

Here, the “Transform” function translates a character to its index in the state-value array.  This is a pretty sub-optimal solution given all of the extra dereferencing, and this is really only a pseudo-implementation of the activity.  What would be best is if we could somehow unroll that inner loop and access the values directly (or through a single dereference, as a pointer would).  In other words, it could be rewritten better like so:

foreach (step)
{
    S = ReadNext();
    A = A - S;
    B = B - C;
    C = C + A;
    A = A + B;
}

The challenge is that the server provides the verification string, and it changes over time, so the client can’t reliably predict which combination of formulae will be used.  Although only a fixed set of combinations has ever been observed in the wild, many others could potentially be presented: there is no fixed number of formulas, and each formula has three potential writable state values, four readable state values, and eight possible binary operators (+, –, *, /, %, &, |, and ^).  So, either we keep going with the inner loop, or we figure out some way to get all the benefits of compilation without the headache of having to know exactly what we’re programming before we program it.  Fortunately, the .NET Framework provides a way for us to do exactly that: dynamic methods.

To simplify the code that we need to generate, we’ll rewrite the inner code to look like this:

foreach (step)
{
    S = ReadNext();
    ExecuteStep(ref A, ref B, ref C, ref S);
}

Now, all we need to do is dynamically emit the ExecuteStep method.  To do so we’ll need to get into the System.Reflection.Emit namespace – kind of a scary place to be!  Fortunately, Reflector is going to make this easier for us – and we’ll be glad we’re doing this in IL.
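As a preview of the shape this takes, here’s a minimal sketch that hard-codes a single formula, A = A + B.  The delegate and class names here (ExecuteStepCallback, StepCompiler) are placeholders of my own, not the final Part 2 code:

using System;
using System.Reflection.Emit;

delegate void ExecuteStepCallback(ref uint a, ref uint b, ref uint c, ref uint s);

static class StepCompiler
{
    public static ExecuteStepCallback CompileAddExample()
    {
        DynamicMethod method = new DynamicMethod("ExecuteStep", null,
            new[] { typeof(uint).MakeByRefType(), typeof(uint).MakeByRefType(),
                    typeof(uint).MakeByRefType(), typeof(uint).MakeByRefType() });
        ILGenerator il = method.GetILGenerator();

        il.Emit(OpCodes.Ldarg_0);  // push &A (target of the final store)
        il.Emit(OpCodes.Ldarg_0);  // push &A again
        il.Emit(OpCodes.Ldind_U4); // dereference: push A
        il.Emit(OpCodes.Ldarg_1);  // push &B
        il.Emit(OpCodes.Ldind_U4); // dereference: push B
        il.Emit(OpCodes.Add);      // push A + B
        il.Emit(OpCodes.Stind_I4); // A = A + B
        il.Emit(OpCodes.Ret);

        return (ExecuteStepCallback)method.CreateDelegate(typeof(ExecuteStepCallback));
    }
}

The real generator just loops over the parsed formulas, emitting one load/compute/store group like this per formula.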

In Part 2, we’ll look at how to actually emit the dynamic method by writing the equivalent code in C# and then looking at it in Reflector, then figuring out how to generate it at run-time.  Along the way, we’ll learn a little bit about the .NET evaluation stack.

Oh – one more thing – here’s why you should care about all of this.  In a simple testing framework, over 50 iterations, I observed the dynamic-method version taking a little less than 1/4 of the execution time of the original array-based implementation – a speedup by a factor of about four.

Now, if that’s not a marked improvement, I don’t know what is.  But remember, as with all performance optimizations, your mileage may vary.

Improving Performance with Dynamic Methods

  • Part 1: The Problem Definition
  • Part 2: Emit and Execute
8 Apr 2009

Unsung C# Hero: Closure

Posted by Rob

Today I’m going to talk about a feature of C# that has been around since 2.0 (with the introduction of anonymous delegates) but which gets nearly no lip service; most C# developers have probably used it without ever thinking about it.  This feature is called closure, and it refers to the ability of a nested function to reference the surrounding function’s variables.

This article relies heavily on the details of how delegates are implemented in C#; a review may be appropriate before diving in.  Also, we’ll be making a lot of use of The Tool Formerly Known as Lutz Roeder’s .NET Reflector, which is now owned by Red Gate Software.

Anonymous Methods without Closure

Supposing that I had a reason to do so, I could assign an event handler as an anonymous method.  I think this is generally bad practice (there is no way to explicitly dissociate the event handler, because it doesn’t have a name), but you can:

    public partial class SampleNoClosure : Form
    {
        public SampleNoClosure()
        {
            InitializeComponent();

            button1.Click += delegate
            {
                MessageBox.Show("I was clicked!  See?");
            };
        }
    }

This will work as expected; on click, a small alert dialog will appear.  Nothing terribly special about that, right?  We could have written that as a lambda expression as well, not that it buys us anything.  It looks like this in Reflector:

Anonymous method with no closure.

We see that the compiler auto-generates a method that matches the appropriate signature.  Nothing here should be completely surprising.
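Hand-written, the compiler’s output is approximately this (the real method gets a mangled, unspeakable name; and because nothing is captured, the compiler is free to emit a static method and cache the delegate):

    [CompilerGenerated]
    private static void ApproximateGeneratedHandler(object sender, EventArgs e)
    {
        // Same body as the anonymous method; no state object is needed.
        MessageBox.Show("I was clicked!  See?");
    }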

Simple Example of Closure

Here is a sample class that includes closure.  The enclosed variable is sum.  You’ll note that everything just makes sense internally, right? 

    public partial class SimpleClosureExample : Form
    {
        public SimpleClosureExample()
        {
            InitializeComponent();

            int sum = 1;
            for (int i = 1; i <= 10; i++)
                sum += sum * i;

            button1.Click += delegate
            {
                MessageBox.Show("The sum was " + sum.ToString());
            };
        }
    }

So, it only makes sense that sum can be part of that anonymous function, right?  But we need to bear in mind that all C# code must be statically compiled; the compiler can’t simply bake the computed value of sum into the anonymous method.  Besides, what happens if the value is a parameter to the function – something that couldn’t be precompiled?  In order to handle these scenarios, we need to think about how this will work.

In order to keep that method state alive, we need to create another object.  That’s how the state can be maintained regardless of threads and regardless of calls to the function.  We can see it as a nested class here, and the anonymous method looks just like it does in code:

Closure supporting class
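A hand-written approximation of that supporting class (the compiler’s real name is mangled, along the lines of <>c__DisplayClass1):

    [CompilerGenerated]
    private sealed class ApproximateDisplayClass
    {
        public int sum; // the enclosed local, hoisted to a field

        public void ClickHandler(object sender, EventArgs e)
        {
            MessageBox.Show("The sum was " + this.sum.ToString());
        }
    }

The constructor then roughly becomes: create an instance of this class, run the loop against its sum field, and subscribe its ClickHandler method to button1.Click.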

A More Advanced Example

Whenever you work with a LINQ expression, chances are you’re using closure and anonymous functions (and lambda expressions) and don’t realize it.  Consider this LINQ-to-SQL query:

            int boardID = 811;
            int perPage = 20;
            int pageIndex = 0;

            var topics = (from topic in dc.Topics
                          orderby topic.IsSticky descending, topic.LastMessage.TimePosted descending
                          where topic.BoardID == boardID
                          select new
                          {
                              topic.TopicID,
                              Subject = topic.FirstMessage.Subject,
                              LatestSubject = topic.LastMessage.Subject,
                              LatestChange = topic.LastMessage.ModifiedTime,
                              NameOfUpdater = topic.LastMessage.PosterName,
                              Updater = topic.LastMessage.User,
                              Starter = topic.FirstMessage.User,
                              NameOfStarter = topic.FirstMessage.PosterName,
                              topic.ReplyCount,
                              topic.ViewCount
                          })
                            .Skip(perPage * pageIndex)
                            .Take(perPage);
            foreach (var topic in topics)
            {
                Console.WriteLine("{0} - {1} {2} {3} {4} by {5}", topic.Subject, topic.NameOfStarter, topic.ReplyCount, topic.ViewCount, topic.LatestChange, topic.NameOfUpdater);
            }

The closure here is happening within the where clause; you may recall that the C# where clause compiles to a call to a Where extension method – Where(Func<TSource, bool> predicate) on IEnumerable<T>, or, as in this LINQ-to-SQL query, the IQueryable<T> overload that takes an expression tree instead of a delegate.

Here, it’s very easy to imagine a case where we wanted to write actual parameters.  This query is used to generate and display a topic list for a message board; all “stickied” posts should be at the top and the rest should be sorted by last time posted.  If I’m making that into a web server control, I’m going to need to not hard-code the board ID, the number of topics per page to display, and which page I’m looking at.
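Desugared, the where clause and its capture look roughly like this sketch:

// Approximate desugaring of the where clause: with IQueryable, the lambda
// compiles to an expression tree, and the captured boardID is hoisted into
// a compiler-generated class that the tree references.
var filtered = dc.Topics.Where(topic => topic.BoardID == boardID);
// When the query is translated, LINQ-to-SQL reads the closure field's
// current value to supply the @p0 parameter visible in the SQL below.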

Now, this is kind of a hard thing to conceptualize; when I was going through this project, I expected all three variables to be incorporated into the class.  It turns out that Skip() and Take() don’t evaluate a lambda expression – they take straight values – so we don’t ultimately have to store them for evaluation later.  However, as expected, boardID does have to be stored, and that raises an interesting question: why?  After all, LINQ-to-SQL translates this into SQL for us:

SELECT TOP (20) [t0].[TopicID], [t2].[Subject], [t1].[Subject] AS [LatestSubject], [t1].[ModifiedTime] AS [LatestChange], [t1].[PosterName] AS [NameOfUpdater], [t4].[test], [t4].[UserID], [t4].[Username], [t4].[Email], [t4].[PasswordHash], [t6].[test] AS [test2], [t6].[UserID] AS [UserID2], [t6].[Username] AS [Username2], [t6].[Email] AS [Email2], [t6].[PasswordHash] AS [PasswordHash2], [t2].[PosterName] AS [NameOfStarter], [t0].[ReplyCount], [t0].[ViewCount]
FROM [dbo].[Topics] AS [t0]
LEFT OUTER JOIN [dbo].[Messages] AS [t1] ON [t1].[MessageID] = [t0].[LastMessageID]
LEFT OUTER JOIN [dbo].[Messages] AS [t2] ON [t2].[MessageID] = [t0].[FirstMessageID]
LEFT OUTER JOIN (
    SELECT 1 AS [test], [t3].[UserID], [t3].[Username], [t3].[Email], [t3].[PasswordHash]
    FROM [dbo].[Users] AS [t3]
    ) AS [t4] ON [t4].[UserID] = [t1].[UserID]
LEFT OUTER JOIN (
    SELECT 1 AS [test], [t5].[UserID], [t5].[Username], [t5].[Email], [t5].[PasswordHash]
    FROM [dbo].[Users] AS [t5]
    ) AS [t6] ON [t6].[UserID] = [t2].[UserID]
WHERE [t0].[BoardID] = @p0
ORDER BY [t0].[IsSticky] DESC, [t1].[TimePosted] DESC

So why, if we already have the SQL generated, do we need the closure class at all?  Well, you may recall that LINQ-to-SQL doesn’t support all possible operators.  If we break LINQ-to-SQL’s ability to translate the query and have to pull the relevant items back for evaluation in memory, we’ll have to use that class.  At this point, though, it goes unused.
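For illustration, here’s one way translation can break – a sketch, forcing in-memory evaluation:

// Past AsEnumerable(), this is LINQ-to-Objects: Where runs as a compiled
// delegate, and the closure class holding boardID feeds the predicate.
var inMemoryTopics = dc.Topics
    .AsEnumerable()
    .Where(topic => topic.BoardID == boardID);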

Review

A closure is when you take the variables of a function and use them within a function declared inside of it – in C#, this happens through anonymous delegates and lambda expressions.  C# typically accomplishes closure by creating an implicit child class to contain the required state of the function as it executes, handing the actual method off to the contained class.


2 Apr 2009

Your Own Transactions with LINQ-to-SQL

Posted by Rob

I’m working on porting an existing forum-based community from SMF to a new .NET-based forum platform that I’m authoring.  I’m excited about it; I love SMF, but it doesn’t have what I want and frankly, it’s a scary beast to try to tackle.  I’d considered using some kind of bridge between it and my code, but I knew I wanted deep integration of the forums with the new community site, and I wanted the community site in .NET.  So I made the decision to write an importer to talk between MySQL and my SQL Server-based solution.  I chose LINQ-to-SQL as my O/R mapper because, quite frankly, I find it much easier and more elegant to work with; so far as I know, I’m not the only one who thinks so.

Because of the nature of the data that I’m importing, I needed to run several SubmitChanges() calls to get the data into the database.  But I wanted to make sure that these submissions only worked if they ALL worked.  So I needed a transaction external to the normal LINQ-to-SQL in-memory object mapper.  Unfortunately, when I began a transaction using the underlying Connection property of the DataContext, I was met with an error:

System.InvalidOperationException: SqlConnection does not support parallel transactions.
   at System.Data.SqlClient.SqlInternalConnection.BeginSqlTransaction(IsolationLevel iso, String transactionName)
   at System.Data.SqlClient.SqlInternalConnection.BeginTransaction(IsolationLevel iso)
   at System.Data.SqlClient.SqlConnection.BeginDbTransaction(IsolationLevel isolationLevel)
   at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)

The solution was simple: DataContext has a Transaction property!  By setting this to the transaction that I was beginning, I was able to run the complete import in a single transaction:

// dc is an open LINQ-to-SQL DataContext; DbTransaction is System.Data.Common.DbTransaction.
dc.Connection.Open();
using (DbTransaction transaction = dc.Connection.BeginTransaction(IsolationLevel.ReadCommitted))
{
    dc.Transaction = transaction;
    try
    {
        // do databasey things
        dc.SubmitChanges();

        transaction.Commit();
    }
    catch (Exception ex)
    {
        transaction.Rollback();
        Console.WriteLine("Exception caught; transaction rolled back.");
        Console.WriteLine(ex.ToString());
    }
}

It took about 2 minutes to import 37,000 or so messages, plus all users, categories, forums, private messages, and polls from SMF.  The app ends up taking something in the neighborhood of 120 MB of memory (I need to keep objects around to reference them for their children, since I assign new IDs), but it’s still a small one-off price to pay.

17 Mar 2009

Invoking a Partial Update from Flash

Posted by Rob

This morning I received an email that posed a question so interesting that I thought I would blog about the answer.  The question was, essentially, how can we invoke a partial update (using ASP.NET AJAX triggers), from an on(up) button handler in Flash?

There are a few different ways to approach this problem.  I believe the method I’m going to write out here is what I like to call the “path of least resistance” – it’ll get you there quickly, though it will create some interdependencies among your controls.  Using this technique as a baseline, you should be able to adapt it to fit the better techniques I’ll describe later.

Flash lives in a sandbox relative to your web page; it doesn’t interact with the rest of your page’s code (by default), and so you need to provide it with a way to do so.  In ActionScript 1 and 2 it’s relatively easy to invoke JavaScript using getURL():

Invoking a JavaScript call

I won’t get into the details of how to access script with ActionScript; I’ll simply leave it to the professionals (and use this as an example).  With this capability we should have everything we need to build an AJAX-enabled Flash button.  I’ve created a sample ASPX page that will host what we need:

<%@ Page Language="C#" AutoEventWireup="true"  CodeFile="Default.aspx.cs" Inherits="_Default" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0" width="150" height="60" id="blog" align="middle">
            <param name="allowScriptAccess" value="always" />
            <param name="allowFullScreen" value="false" />
            <param name="movie" value="blog.swf" /><param name="quality" value="high" /><param name="bgcolor" value="#ffffff" />    
            <embed src="blog.swf" quality="high" bgcolor="#ffffff" width="150" height="60" name="blog" align="middle" allowScriptAccess="always" allowFullScreen="false" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
        </object>
    <asp:ScriptManager ID="sm" runat="server" EnablePartialRendering="true">
        
    </asp:ScriptManager>
    
    <asp:UpdatePanel ID="update" runat="server">
        <Triggers>
            
        </Triggers>
        <ContentTemplate>
            <asp:Label ID="lblText" runat="server" Text="Click the Flash button to turn me blue." />
        </ContentTemplate>
    </asp:UpdatePanel>
    </div>
    </form>
</body>
</html>

We still need a way to invoke the partial update.  Unfortunately, what I termed the “path of least resistance” is going to involve a little voodoo: we’re going to create a Button, set its display to none (so that it’s on the page but hidden), and then treat it as a trigger for the UpdatePanel:

        <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0" width="150" height="60" id="blog" align="middle">
            <param name="allowScriptAccess" value="always" />
            <param name="allowFullScreen" value="false" />
            <param name="movie" value="blog.swf" /><param name="quality" value="high" /><param name="bgcolor" value="#ffffff" />    
            <embed src="blog.swf" quality="high" bgcolor="#ffffff" width="150" height="60" name="blog" align="middle" allowScriptAccess="always" allowFullScreen="false" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
        </object>
        <asp:Button runat="server" style="display: none;" ID="btnFlashGo" />
    <asp:ScriptManager ID="sm" runat="server" EnablePartialRendering="true">
        
    </asp:ScriptManager>
    
    <asp:UpdatePanel ID="update" runat="server">
        <Triggers>
            <asp:AsyncPostBackTrigger ControlID="btnFlashGo" EventName="Click" />
        </Triggers>
        <ContentTemplate>
            <asp:Label ID="lblText" runat="server" Text="Click the Flash button to turn me blue." />
        </ContentTemplate>
    </asp:UpdatePanel>

We’re still missing one piece: we need to generate the function call to actually make a postback.  There’s a relatively convenient way to do that, using the ClientScriptManager; drop this Literal onto the page:

        <asp:Button runat="server" style="display: none;" ID="btnFlashGo" OnClick="btnFlashGo_Click" />
        <asp:Literal ID="flashScript" runat="server" />

and wire it up in the backend:

// Requires using System.Drawing; for Color (used in btnFlashGo_Click below).
public partial class _Default : System.Web.UI.Page 
{
    protected void Page_Load(object sender, EventArgs e)
    {
        this.flashScript.Text = string.Format(@"<script type=""text/javascript"">
function flashClicked()
{{
    {0}
}}
</script>", ClientScript.GetPostBackEventReference(this.btnFlashGo, ""));
    }

    protected void btnFlashGo_Click(object sender, EventArgs e)
    {
        lblText.ForeColor = Color.Blue;
    }
}

This setup should give us everything we need to make the AJAX call – and sure enough, clicking the Flash button turns the label blue.


Better Approaches

I would be happier – much happier – with this solution if it didn’t depend on so much black magic.  There are a few ways to get it to work better, and while I don’t want to take the time to demonstrate them here, I can describe them a bit.

A FlashButton Control

In my mind, a FlashButton control is the best solution; it can derive from Control or WebControl (although to be honest simply Control is preferable), and can automatically do the heavy lifting.  You could incorporate SWFObject as a script resource and provide it automatically.  FlashButton could expose its own events, which means that you could eliminate that Button (the hidden one) and instead create the script callback reference pointing to the FlashButton’s event itself (and the UpdatePanel’s trigger could point there as well). 
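A sketch of that idea (every name here is hypothetical, not a shipping control):

public class FlashButton : Control, IPostBackEventHandler
{
    public event EventHandler Clicked;

    public string MovieUrl { get; set; }

    // An UpdatePanel's AsyncPostBackTrigger can point at this control and event.
    public void RaisePostBackEvent(string eventArgument)
    {
        EventHandler handler = Clicked;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }

    protected override void Render(HtmlTextWriter writer)
    {
        // Emit the <object>/<embed> markup (elided here), plus the script
        // callback that Flash invokes, pointing straight at this control:
        writer.Write(string.Format(
            "<script type=\"text/javascript\">function flashClicked() {{ {0} }}</script>",
            Page.ClientScript.GetPostBackEventReference(this, "")));
    }
}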

A FlashButtonManager Control

A FlashButtonManager could extend the support for FlashButton much like ScriptManager does for UpdatePanel.  While the approach with a single FlashButton works well when only one FlashButton is on the page, incorporating multiple FlashButton objects becomes tricky when you factor in things like handling multiple callbacks (for instance, naming the flashClicked() function).  A FlashButtonManager could be designed so that it handles each FlashButton on the page, perhaps setting up FlashButtons with a FlashVar to specify a parameter when calling flashClicked(), and then using that parameter to determine which one was clicked and firing the appropriate postback event.

Final Thoughts

You need to enable your Flash app to talk to JavaScript in order to make AJAX partial updates work correctly.  Fortunately, it’s not super-challenging to do so!  You should be careful, though – there is some voodoo going on in the back end of this kind of solution – but with creative architecture, you can avoid a lot of headaches.

13 Mar 2009

Speedy C#, Part 4: Using – and Understanding – CLR Profiler

Posted by Rob

CLR Profiler is a free and incredibly useful tool offered by Microsoft.  I'm fairly certain its primary use (at least from Microsoft's perspective) is to illustrate use of the CLR Profiling COM APIs, which aren't exceptionally clear-cut (in my opinion), particularly from a .NET programmer's point of view.  The really difficult part of using CLR Profiler is becoming accustomed to its interface and the data it presents; however, once you do so, I'm certain you'll find it incredibly helpful in addressing difficulties with memory usage.  This article aims to introduce you to the "important parts" of CLR Profiler - specifically, which graphs you should view, how you should interpret them, and how to address the problems you find.  This article will not review some of the more complicated parts of injecting CLR Profiler into something such as your ASP.NET application; there are other resources for that purpose.

For the purposes of this article, I've re-introduced a wasteful error into BN# that I found by using CLR Profiler.  We'll work through finding it in this article.

Getting Started

Once you have CLR Profiler "installed" - and I use the term loosely - you can start the application from the install path (don't look for a Start Menu item).  There are two sets of binaries, x86 and x64; you should know which edition you'd like to run.  If you're profiling a platform-neutral application (most .NET apps fall into this category) and you're on an x64 system, use the x64 version.  If you're running 32-bit Windows, or profiling a program specifically targeted at x86, run the x86 version of CLR Profiler.

As an important note, for Windows Vista users, if you're running with UAC enabled, make sure to run CLR Profiler as an administrator.  CLR Profiler works by injecting a COM DLL into the target, but it can't do that if you're not running the process as an administrator.

CLR Profiler while it's not running anything

When profiling memory, I turn off Calls tracking: it's located in the bottom-right of the UI window.

If your application requires access to the local application directory - for instance, by using the Application class in Windows Forms - you should go through the explicit Profile Application menu item within the File menu, and set the working directory option of that UI.  Otherwise, go ahead and click Start Application, browse to your application, and go.

During Operation

Other than the fact that your application will be measurably slower, you should be able to run the application as you otherwise would.  Your mileage will vary, but you'll get better results with more memory in your system.  But all developers have at least 4 GB powering their boxes now, right?

During the application, you can click on the Show Heap now button on the main CLR Profiler GUI, which will display a heap graph of the current application, displaying the path to all currently allocated memory:

Heap Graph of current profile

To be honest, I find the heap graph relatively confusing, but the good news is that you don't need to keep using it.  Once you've dumped that temporary log, you can view the current heap and other interesting information by closing that window and, in the main CLR Profiler window, going to the View menu and choosing Summary, which displays a cool window:

A result summary of a profile

This window helps you understand what's happening:

  • Allocated bytes is really interesting – it reports the total amount of memory that you’ve allocated within managed code over the life of the process.
  • Final Heap Bytes is the amount of managed memory that currently is in use on the heap.  This doesn't necessarily reflect unmanaged items.
  • Relocated Bytes is the amount of memory that has been moved by the garbage collector during compaction operations.
  • Gen X collections shows the number of garbage collections that have occurred for each generation.
  • Garbage Collector Generation Sizes shows the number of bytes being used by each heap.

What's Happening with BN#?

I had a suspicion based on memory usage (reported by Task Manager) that BN# wasn’t quite as efficient as I would have hoped.  I wanted to do some investigation, so I plugged in CLR Profiler.  After a 30-second (or so) connection to Battle.net, joining Clan Recruitment, this is what I saw:

Profile of BN# with intentional memory bug

That’s pretty heavy – about 31 MB of total allocated memory, but only about 3 MB ending up on the heap, and only 3.5 MB relocated throughout the lifetime of the app.  That told me that I was doing a lot of allocating and freeing very rapidly.  What’s the next step?

I clicked on the Allocation Graph button and took a look:

Allocation graph indicating 10mb of byte[] on the heap.

In this we can see that byte arrays are allocated frequently and account for about 35% of all memory allocations.  That’s a big problem – especially since I’m already pooling their creation!  CLR Profiler helps me track it down, though, as I follow the highlighted call chain back to its source:

The culprit

This image indicates that I have a problem with a method called DataReader::get_m_data().  Now, as I mentioned, I had to recreate this problem, and the path of least resistance for me was to change the identifier m_data (used frequently in DataReader) to be a property instead of a field, so originally this said get_Data.  I thought that was odd until I saw its implementation:

        protected virtual byte[] Data
        {
            get
            {
                // Every access allocates a new array and copies the whole buffer.
                byte[] dataCopy = new byte[_m_data.Length];
                Buffer.BlockCopy(_m_data, 0, dataCopy, 0, dataCopy.Length);
                return dataCopy;
            }
        }

So here, for every operation that accessed the Data property (and in the original implementation, that was every operation, because the Data property was virtual), I was duplicating the entire array – EVERY TIME.

I then changed the implementation so that operations defined within the base class wouldn’t needlessly go through a property, and derived classes had direct access to the buffer by reference (via the UnderlyingBuffer property), shaped roughly like the sketch below.
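// A sketch of the fix, with names matching the description above: no
// per-access allocation or copy; callers share the live buffer.
private byte[] _m_data;

protected byte[] UnderlyingBuffer
{
    get { return _m_data; }
}

So – what were my results?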

Final Results

I think that fairly well speaks to the effectiveness of using tools like this. :)  A 27% decrease in allocations, a 33% decrease in gen-0 collections, and a 53% decrease in byte[] allocations:

Updated allocation graph

Further Reading

The "Speedy C#" Series:

13 Aug 2008

Speedy C#, Part 3: Understanding Memory References, Pinned Objects, and Pointers

Posted by Rob

So often in the managed world we're able to get away with not worrying about memory management.  "But the GC takes care of cleaning my objects for me!"  That's true; but if you want your application to be performant, you should at least understand what's going on in all of those circuits and silicon.

In Part 2, I talked a bit about how creating object pools can help you to avoid garbage collections by keeping memory allocated for a long time.  Here, I'm going to talk a bit more extensively about how objects are stored in memory, what a "pinned object" is, and how pointers can be used quickly in C#.

NOTE: This article assumes you are familiar with pointers and pointer arithmetic.  If not, you may wish to brush up.

Objects in Memory - A Closer Look at the Heap

When you create an instance of a class (not a struct or an enum), your object is being stored on the "heap" - a large contiguous area of memory that is just there.  (For more information on the heap, read up on Part 2.)  This includes, interestingly enough, any Array objects you create (such as a byte[]) - they're reference objects, not value objects.  (The one exception is if you use the stackalloc operator in C#.)  So, suppose I make the following class:

class Sample
{
    public int A;
    public long B;
    public short C;
    public short D;
}

Here's how it would conceptually look in a memory block:

An instance of Sample in memory

As you can see, the class is laid out contiguously (although the CLR does not guarantee this behavior unless it is decorated with [StructLayout(LayoutKind.Sequential)]).  Still, you get the idea.

However, when we create an object and get a reference to it, we don't actually get a pointer to the object - we get a "reference".  This isn't a reference like you might expect in C or C++, either; rather, it's similar to a handle.  We can use it just like it's conceptually in memory like I laid out.  However, the CLR hides implementation details; for example, every object on the heap has at least a reference to its RuntimeTypeHandle so that casting can be checked at runtime.  To demonstrate, let's take a byte[].  When it's stored on the heap, it's pretty clear what we're looking at.  Arrays of any type are an interesting edge case in .NET; normally, C# does not allow you to obtain a pointer of a managed type (and in fact you can't do what I'm about to demonstrate with a reference type), but arrays themselves ARE managed types (don't worry about the last two lines of output just yet).

static unsafe void Main(string[] args)
{
    byte[] bytes = new byte[100];
    bytes[0] = 1;
    bytes[1] = 2;
    bytes[2] = 3;
    bytes[3] = 4;

    Type arrayType = null;
    fixed (byte* pMem = &bytes[0])
    {
        Console.WriteLine("{0:x16}", (long)pMem);
        int* pArrayBase = (int*) pMem;
        Console.WriteLine("{0:x8}", *pArrayBase);
        pArrayBase--;
        Console.WriteLine("{0:x8}", *pArrayBase);
        pArrayBase--;
        Console.WriteLine("{0:x8}", *pArrayBase);
        pArrayBase--;
        Console.WriteLine("{0:x8}", *pArrayBase);
        pArrayBase--;
        Console.WriteLine("{0:x8}", *pArrayBase);
        long rtth = *(long*) pArrayBase;
        RuntimeTypeHandle handle;
        // RTTH is a value-type whose only member is an IntPtr; can be set as a long on x64
        RuntimeTypeHandle* pH = &handle;
        *((long*) pH) = rtth;
        arrayType = Type.GetTypeFromHandle(handle);
    }

    if (arrayType != null)
    {
        Console.WriteLine(arrayType.Name);
    }

    Console.WriteLine("byte[] RTTH: {0:x16}", typeof (byte[]).TypeHandle.Value.ToInt64());
    int a = 1;
    int b = 2;
    int* pA = &a;
    int* pB = &b;
    Console.WriteLine(*pB);
    Console.WriteLine(*(pB - 1));

    Console.ReadLine();
}

Now, just to clarify: I run on x64.  The above code will not function as expected on x86.  There are a few items that will also produce slightly varying results for you; for instance, pMem shouldn't be cast to a long on x86, and to get to the instance's stored RTTH, you only need to decrement the pointer 3 times on x86 (whereas the RTTH on x64 is 8 bytes long).  Here's the output on my machine:

0000000002a31748                Console.WriteLine("{0:x16}", (long)pMem);
04030201                        Console.WriteLine("{0:x8}", *(pMem));
00000000                        Console.WriteLine("{0:x8}", *(pMem - 1));
00000064                        Console.WriteLine("{0:x8}", *(pMem - 2));
00000642                        Console.WriteLine("{0:x8}", *(pMem - 3));
7890a4a8                        Console.WriteLine("{0:x8}", *(pMem - 4));
Byte[]                          Console.WriteLine(arrayType.Name);
byte[] RTTH: 00000642789562c2   Console.WriteLine("{0:x16}", typeof(byte[]).TypeHandle.Value.ToInt64());
2                               Console.WriteLine(*pB);
1                               Console.WriteLine(*(pB - 1));

So, here we see that the runtime type identifier is stored as part of the object reference on the heap; so is the array length (that's the hex value 00000064 that you see on the fourth line of output - it's 100 in decimal).  That's how arrays are stored on the heap, and it's pretty much how objects are stored; when we have an object reference, we can treat it as if it's a pointer into memory.  But it's more than that; below our "pointer" exists additional information about the object.  We don't get to see that additional information because the CLR hides it from us.

What are reference variables then?  Ultimately, they're stack variables that contain our "pointer" that isn't really a pointer.  I said not to worry too much about the last two lines before, but they are intended to show you one thing: stack variables are allocated sequentially on the stack.  I declared a, then b; by obtaining a pointer to b, I was also able to obtain a pointer to a by decrementing the pointer by the size of the variable (in this case, 32 bits).  To show you that my handle is in fact legitimately pointing to a stack variable, take a look at the following code:

static unsafe void Main(string[] args)
{
    Sample s = new Sample {A = 0x01020304, B = 0x0f0e0d0c0b0a0908, C = 0x0706, D = 0x0504};
    long a = 1;
    long b = 2;
    long* pA = &a;
    long* pB = &b;
    Console.WriteLine("{0:x16}", *pB);
    Console.WriteLine("{0:x16}", *(pB - 1));
    Console.WriteLine("{0:x16}", *(pB - 2));

    long prS = (long)(pB - 2); // the location of s on the stack
    long* pS = *(long**)prS;
    Console.WriteLine("{0:x16}", *pS);
    Console.WriteLine("{0:x16}", *(pS + 1));
    Console.WriteLine("{0:x16}", *(pS + 2));

    Console.ReadLine();
}

Again, the above code will not function as expected on x86 (to make it do so, replace all long references with int).  The output of this code is fascinating:

0000000000000002      b
0000000000000001      a
0000000002be16c8      s
00000642801a4400      *pS
0f0e0d0c0b0a0908      *(pS + 1)
0504070601020304      *(pS + 2)

You might notice that s is a pointer to the heap, and that dereferencing it gives us a number that looks suspiciously similar to a RuntimeTypeHandle just like in the last example, and you'd be correct.  The other interesting thing is the variable order: the B variable in the Sample class was aligned so that it would be first (8-byte alignment on x64 appears to be the default).  Applying [StructLayout] to it as noted before makes it look right (although to the untrained eye it will look entirely backwards due to endianness).

In Part 2, I talked about how garbage collection allows us to not worry so much about external fragmentation of the heap, because the GC performs a process called "compaction," by which objects are moved around in memory so that there aren't small areas of free space.  The interesting question is: what happens if a GC compaction happens and we have a pointer to an object?

Accessing Memory Locations with Pinned Objects

The CLR allows us to "pin" an object so that it is not moved during garbage collection.  This can potentially have some big consequences for garbage collection, though; the heap is still fragmented if an object is pinned during a pass.  What's more is that if the object becomes eligible for compaction after the pass, it's still considered a gen-0 object even though it should have moved to gen-1.  C# enables us to pin an object via the fixed statement.

In truth, the only objects worth pinning are arrays.  You can't pin a regular reference object to get a pointer for the reason shown above (it's not guaranteed to follow any particular pattern), and single value-type objects can be accessed directly on the stack without pinning.  Pinning arrays has some good performance benefits (which I'll get to a bit later), but like I said, not without a cost.

The neatest part about pointers in C# is that a pointer can be cast to a pointer of any other value-type; this is exceptionally common in C code (reading a file into memory by reading the length of a struct, and then treating the memory as a pointer to that struct, for example).  Sometimes it's simply easier for us to do that in C# than it is to use a stream.  Consider the case of reading a PE file header; it's a nightmare!  So many lines of code when you could simply read in a buffer and call it a PE file header.  Strong typing imposes that limitation, but thankfully even on edge cases like this, we can work around it.
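As a sketch of that C-style pattern in C# (the struct here is truncated; a real DOS/PE header has many more fields):

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct DosHeaderStart
{
    public ushort Magic;           // 'MZ' == 0x5a4d for executables
    public ushort BytesOnLastPage;
}

static unsafe bool LooksLikeExecutable(byte[] file)
{
    if (file.Length < sizeof(DosHeaderStart))
        return false;
    fixed (byte* pFile = &file[0])
    {
        // Treat the raw buffer as a header - no field-by-field stream reads.
        DosHeaderStart* pHeader = (DosHeaderStart*)pFile;
        return pHeader->Magic == 0x5a4d;
    }
}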

I'm not going to discuss the performance characteristics of pinned objects during a garbage collection; for one, they're hard to measure, but more importantly, it's been well-documented to hurt the performance of the garbage collector.

Getting Pointers without the Pinning

There are other means by which to obtain, create, and manage pointers aside from the standard fixed statement.  As mentioned earlier, you can use the stackalloc statement to allocate a block of memory on the stack; it provides a pointer to the stack with the base of an array.  Alternatively, if you don't care about portability, you can use native Windows functions to allocate memory for you.  These functions might include LocalAlloc, HeapAlloc, VirtualAlloc, or VirtualAllocEx, depending on what your needs are.
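For instance, a stackalloc sketch - a small scratch buffer that lives and dies with the stack frame, never touching the GC heap (note that stackalloc'd memory is not guaranteed to be zeroed):

static unsafe int CountByteValues(byte[] data, byte target)
{
    int* histogram = stackalloc int[256];
    for (int i = 0; i < 256; i++)
        histogram[i] = 0;

    foreach (byte b in data)
        histogram[b]++;

    return histogram[target];
}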

An interesting prospect might be to allocate multiple heaps using the HeapCreate APIs; this would allow you to manage your memory per-area of responsibility; Noel Llopis suggests such a strategy in his book C++ for Game Programmers.  Although all of this memory management might seem like overkill, if you're really hunting for the next tweak to speed up your code, this might help you get over the line.

Performance Characteristics of Unsafe vs. Safe Code

Let's not kid ourselves; unsafe code is inherently unsafe because the runtime doesn't manage the code for us.  So before using code like this in your applications, be absolutely certain that you need it.

The CLR provides the means to access heap memory via the Marshal.AllocHGlobal method.  The documentation notes that it uses LocalAlloc, probably because LocalAlloc doesn't require a pointer to a heap.  Despite the admonition that you'll get better performance and more features out of the other functions, the use of LocalAlloc does not seem to be a hindrance in speed relative to using HeapCreate/HeapAlloc/HeapDestroy.  The execution times are shown here:

                                      Debug - 5 Iterations     Release - 5 Iterations   Debug - 25 Iterations      Release - 25 Iterations
Normal .NET Array [] notation         x86: 17ms; x64: 45ms     x86: 15ms; x64: 65ms     x86: 109ms; x64: 252ms     x86: 95ms; x64: 333ms
Marshal.AllocHGlobal with pointers    x86: 15ms; x64: 36ms     x86: 14ms; x64: 30ms     x86: 95ms; x64: 193ms      x86: 80ms; x64: 148ms
LocalAlloc P/Invoke with Pointers     x86: 16ms; x64: 37ms     x86: 14ms; x64: 31ms     x86: 96ms; x64: 193ms      x86: 78ms; x64: 161ms
HeapAlloc P/Invoke with Pointers      x86: 16ms; x64: 42ms     x86: 14ms; x64: 32ms     x86: 102ms; x64: 197ms     x86: 88ms; x64: 166ms

Surprisingly, the normal array bracket notation performed significantly worse in release builds than in debug builds on x64; I don't really have an answer for why that would be.  I did not perform extensive statistical regression or even compute averages; I ran each set three times, and if the results all looked mostly the same, I used the data.  These data are from x64 machines; the x86 results came from setting the compilation target to x86 and running the program under WOW64.  I was surprised how much slower x64 was, though it might be because we were using machine words on x86 but half-words on x64.  Perhaps memory access would be faster if we used longs on x64.  (Preliminary tests seem to confirm this; I will post a follow-up soon.)

Here are the P/Invoke declarations:

public enum LocalAllocFlags
{
    Fixed = 0,
    Moveable = 2,
    ZeroInit = 0x40,
}

public enum HeapCreateFlags
{
    None = 0,
    EnableExecute = 0x40000,
    GenerateExceptions = 4,
    NoSerialize = 1,
}

public enum HeapAllocFlags
{
    None = 0,
    GenerateExceptions = 4,
    NoSerialize = 1,
    ZeroMemory = 8,
}

static class UnsafeNativeMethods
{
    [DllImport("kernel32.dll")]
    public static extern IntPtr LocalAlloc(LocalAllocFlags flags, UIntPtr uBytes);

    [DllImport("kernel32.dll")]
    public static extern IntPtr LocalFree(IntPtr hMem);

    [DllImport("kernel32.dll")]
    public static extern IntPtr HeapCreate(HeapCreateFlags flOptions, UIntPtr dwInitialSize, UIntPtr dwMaxSize);

    [DllImport("kernel32.dll")]
    public static extern IntPtr HeapAlloc(IntPtr hHeap, HeapAllocFlags dwFlags, UIntPtr dwBytes);

    [DllImport("kernel32.dll")]
    public static extern IntPtr HeapFree(IntPtr hHeap, HeapAllocFlags dwFlags, IntPtr lpMem);

    [DllImport("kernel32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool HeapDestroy(IntPtr hHeap);
}

And finally, here's the benchmarking code:

class Program
{
    private const int ITERATIONS = 25;

    static unsafe void Main(string[] args)
    {
        Console.WriteLine("Press <enter> to start.");
        Console.ReadLine();

        Stopwatch arrayClock = Stopwatch.StartNew();
        for (int iter = 0; iter < ITERATIONS; iter++)
        {
            RunArrayTest();
        }
        arrayClock.Stop();
        Console.WriteLine("{0}ms elapsed for Array test, {1} iterations.  Press <enter> to continue.", arrayClock.ElapsedMilliseconds, ITERATIONS);
        Console.ReadLine();

        Stopwatch marshalClock = Stopwatch.StartNew();
        for (int iter = 0; iter < ITERATIONS; iter++)
        {
            RunMarshalAllocHGlobalTest();
        }
        marshalClock.Stop();
        Console.WriteLine("{0}ms elapsed for Marshal test, {1} iterations.  Press <enter> to continue.", marshalClock.ElapsedMilliseconds, ITERATIONS);
        Console.ReadLine();

        Stopwatch localClock = Stopwatch.StartNew();
        for (int iter = 0; iter < ITERATIONS; iter++)
        {
            RunLocalAllocTest();
        }
        localClock.Stop();
        Console.WriteLine("{0}ms elapsed for LocalAlloc P/Invoke test, {1} iterations.  Press <enter> to continue.", localClock.ElapsedMilliseconds, ITERATIONS);
        Console.ReadLine();

        Stopwatch heapClock = Stopwatch.StartNew();
        for (int iter = 0; iter < ITERATIONS; iter++)
        {
            RunHeapAllocTest();
        }
        heapClock.Stop();
        Console.WriteLine("{0}ms elapsed for HeapAlloc P/Invoke test, {1} iterations.  Press <enter> to continue.", heapClock.ElapsedMilliseconds, ITERATIONS);
        Console.ReadLine();
    }

    private unsafe static void RunHeapAllocTest()
    {
        UIntPtr pSize = new UIntPtr((uint)(1048576 * sizeof(int)));
        IntPtr pHeap = UnsafeNativeMethods.HeapCreate(HeapCreateFlags.None, pSize, UIntPtr.Zero);
        if (pHeap == IntPtr.Zero)
        {
            Console.WriteLine("Could not create heap.");
            return;
        }
        IntPtr pMem = UnsafeNativeMethods.HeapAlloc(pHeap, HeapAllocFlags.ZeroMemory, pSize);
        if (pMem == IntPtr.Zero)
        {
            Console.WriteLine("Could not allocate heap.");
            return;
        }

        int* pNumbers = (int*)pMem.ToPointer();
        for (int i = 0; i < 1048576; i++)
        {
            pNumbers[i] = i;
        }
        UnsafeNativeMethods.HeapFree(pHeap, HeapAllocFlags.None, pMem);
        UnsafeNativeMethods.HeapDestroy(pHeap);
    }

    private unsafe static void RunLocalAllocTest()
    {
        UIntPtr pSize = new UIntPtr((uint)(1048576 * sizeof(int)));
        IntPtr pMem = UnsafeNativeMethods.LocalAlloc(LocalAllocFlags.ZeroInit, pSize);
        if (pMem == IntPtr.Zero)
        {
            Console.WriteLine("Could not allocate heap memory.");
            return;
        }

        int* pNumbers = (int*)pMem.ToPointer();
        for (int i = 0; i < 1048576; i++)
        {
            pNumbers[i] = i;
        }
        UnsafeNativeMethods.LocalFree(pMem);
    }

    private unsafe static void RunMarshalAllocHGlobalTest()
    {
        IntPtr pMem = Marshal.AllocHGlobal(1048576 * sizeof (int));
        if (pMem == IntPtr.Zero)
        {
            Console.WriteLine("Could not allocate memory.");
            return;
        }

        int* pNumbers = (int*) pMem.ToPointer();
        for (int i = 0; i < 1048576; i++)
        {
            pNumbers[i] = i;
        }
        Marshal.FreeHGlobal(pMem);
    }

    private static void RunArrayTest()
    {
        int[] array = new int[1048576]; //4mb array
        for (int i = 0; i < 1048576; i++)
        {
            array[i] = i;
        }
    }
}

There isn't anything too complicated here; a 4 MB buffer is allocated using the selected method, and then each 32-bit element is populated with its array index.  Unsafe code outperforms safe code in each x64 test, though the difference is much more marginal on x86.  The explanation is simple: safe code checks the array index on every lookup.
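One mitigation for the safe version, as a sketch of the same workload: when the loop bound is array.Length itself, the JIT can typically prove the index is in range and hoist the bounds check out of the loop.

private static void RunArrayTestLengthBound()
{
    int[] array = new int[1048576];
    for (int i = 0; i < array.Length; i++) // bounds check can be eliminated
    {
        array[i] = i;
    }
}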

Summary

Using pointers and unsafe code can be a boost to your application's performance, but you should consider where, when, and how you do it.  Since you don't have control over when the GC is invoked, pinning objects like arrays can be costly.  You might instead consider using Windows API functions or direct memory access functions through the Marshal class to organize your memory if you absolutely need to chug that last piece of speed out of your code, but be warned - it's not safe out there.

The "Speedy C#" Series:

7 Aug 2008

Speedy C#, Part 2: Optimizing Memory Allocations – Pooling and Reusing Objects

Posted by Rob

In C#, Visual Basic .NET, C++/CLI, J# - the list goes on - we're freed from having to worry about our memory management.  Objects take care of themselves, and when the CLR garbage collector detects that an object is no longer in use, it frees the associated memory.  That doesn't mean that we should run around allocating and deallocating objects all willy-nilly; in fact, since we have less control over memory, we arguably have the opportunity to be more careful with the way we use high-frequency objects.

Memory Regions in .NET

In .NET, and generally in most programming, we can think of two places in which we can access memory: the stack and the heap.  We can think of stack memory as temporary workspace, or scratch space; when we leave a function, all of our stack goes away.  Way, way down in the machine architecture, the stack also stores the return addresses of functions, as well as function parameters.  It's generally very orderly and inexpensive, but its volatile nature makes it a poor candidate for long-term storage.  In .NET, local variables of types that derive from the ValueType class are stored on the stack unless they are boxed into an object reference (a value-type field of a class lives inline within that object on the heap); value types include those defined with the struct and enum keywords, as well as all of the primitive types except string (including int, double, and bool).
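Boxing is the escape hatch from stack to heap, as this fragment illustrates:

int x = 42;              // a local of a value type: stack
object boxed = x;        // boxing: the value is copied into a new heap object
int y = (int)boxed;      // unboxing: the value is copied back out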

Heap memory is another matter.  The heap is a region of memory reserved for the use of the program and is intended to store objects that aren't quite so transient.  This might be something like a database connection, a file or buffer, or a window.

The Enemy: Fragmentation

Over time, objects are allocated and eventually released, and because there's not really any rhyme or reason, the heap becomes chaotic.  Allocations grab the first free block that's large enough (sometimes larger than necessary) and hold onto it until they go into the abyss.  This leads to fragmentation - all the free memory must be tracked somehow, and here's the real killer: contiguous blocks of free memory may not always be recognized as such.  Check this out: let's say we have a heap allocated at memory location 0x4000 that is 32 bytes wide:

An un-used heap

(Yes, my awesome artwork was done with none other than Excel!)

Suppose we allocate an 8-byte object and another 8-byte object, then a 16-byte object.  The first is in red, the second in orange, and the third in gray:

The heap after it was filled

Now I'll free the first and the third objects; we'll have 24 bytes of total free memory:

A fragmented heap

Either we need to keep track of every little piece of memory, which might be the fastest algorithm for releasing but slow for allocating (not to mention potentially VERY wasteful), or try to come up with another solution.  This type of memory fragmentation is referred to as external fragmentation.

The Garbage Collector and Compaction

The garbage collector has two jobs: detecting dead objects and compacting memory.  Rather than reference counting, the .NET GC traces object references from the application's roots to determine which objects are still reachable; this frees programmers from having to explicitly destroy objects (as is the practice in C++ with the delete operator, or in C with the free function).  A lazy thread is then able to release and compact memory as needed, avoiding much of the overhead of external fragmentation and also allowing unused memory to be reclaimed.  The garbage collector in .NET is generational; it checks the newest objects first (what are called "gen-0"), and if the newest objects are still in use, they get moved to gen-1.  If the memory pressure requires, gen-1 objects are evaluated, and if they are still in use, they get moved to gen-2.  Gen-2 objects are considered long-lasting, and are only checked when memory pressure is severe.
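You can watch an object age through the generations (illustrative only; the generation numbers assume the object stays reachable):

object survivor = new object();
Console.WriteLine(GC.GetGeneration(survivor)); // 0 - freshly allocated
GC.Collect();
Console.WriteLine(GC.GetGeneration(survivor)); // 1 - survived one collection
GC.Collect();
Console.WriteLine(GC.GetGeneration(survivor)); // 2 - survived two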

Let's go back to our heap example; supposing I had an 8-byte, another 8-byte, and a 12-byte allocation, here's my heap graph:

A new heap

Object 1 (in red) has gone out of scope, but objects 2 and 3 are sticking around.  Using normal memory-freeing rules, the largest object that could be allocated would still be only 8 bytes, because that would be the largest contiguous free space.  However, using .NET's compacting garbage collector, we could expect something along these lines:

A compacted heap graph

Here we can see we've dealt with the problem of external fragmentation by compacting the heap.  This convenience doesn't come without a cost, though; while the garbage collector performs the compaction, all of your application threads are suspended.  The GC can't guarantee object integrity if memory is getting abused during a garbage collection!

Preventing Compaction: Stop Killing off Objects!

Object pooling is a pattern to use that allows objects to be reused rather than allocated and deallocated, which helps to prevent heap fragmentation as well as costly GC compactions.  A pool can be thought of as an object factory; in essence, the most rudimentary pool could look like this:

public class Pool<T> where T : new()
{
    private Stack<T> _items = new Stack<T>();
    private object _sync = new object();

    public T Get()
    {
        lock (_sync)
        {
            if (_items.Count == 0)
            {
                return new T();
            }
            else
            {
                return _items.Pop();
            }
        }
    }

    public void Free(T item)
    {
        lock (_sync)
        {
            _items.Push(item);
        }
    }
}

Here, objects are created entirely on-demand and, when freed, are stored in a stack.  The reason we want to use a Stack is the performance characteristic of adding and removing objects; operations are always performed at the end of the list, which makes it highly efficient to add or remove items.  If possible, it may be prudent to pre-create a number of objects for use throughout the lifetime of your application. 
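A usage sketch for the Pool<T> above (MemoryStream chosen arbitrarily as something with a parameterless constructor):

Pool<MemoryStream> streamPool = new Pool<MemoryStream>();

MemoryStream ms = streamPool.Get();
try
{
    // ... fill and consume the stream ...
}
finally
{
    ms.SetLength(0);     // scrub state before recycling
    streamPool.Free(ms);
}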

Here's an example: the project I've been discussing lately uses a pool of byte arrays to handle incoming network messages received and sent via a Socket.  When pooling is enabled, over the course of the application's lifetime, there were 17 Gen-0 collections, 5 Gen-1 collections, and 3 Gen-2 collections; a total of 270 byte[] instances were allocated, of which 44 were eligible for pooling and were pooled.  When pooling is disabled, there were 22 Gen-0 collections, 5 Gen-1 collections, and 3 Gen-2 collections; a total of 11,660 byte[] instances were allocated, of which approximately 10,900 were eligible for pooling.  That's a lot of memory!

Summary - When and Why

Object pooling is a powerful optimization technique, and if you're already using factory patterns it shouldn't be terribly foreign to you.  The .NET Framework includes the ThreadPool class as part of System.Threading.  Other objects you might consider pooling are database connections, any expensive links to unmanaged code, or anything that needs to be allocated frequently and can then be thrown away.  In my example, byte arrays are exceptionally good for this because they can be overwritten easily.

Further Reading

The "Speedy C#" Series:

5 Aug 2008

Speedy C#, Part 1: Optimizing Long if-else or switch Branches

Posted by Rob

Lately I've been doing some interesting work that I've alluded to elsewhere dealing with the binary communications protocol hosted by Blizzard Entertainment's Battle.net game service.  It's kind of what brought me into C# development in the first place; I walked away from it for a few years, and now I've been digging into it again.  And I've learned a few things between then and now; I've been particularly interested in looking at the under-the-hood workings of the CLR, and so I'm starting a new series on "Speedy C#".  Let me be the first to point out that optimizations have a unique way of obfuscating code; particularly in this example, if you don't explain why you're doing what you're doing, and exactly what result you expect, you could run into trouble, or worse, your colleagues may run into trouble.  So while going through this series, keep in mind that every one of these techniques needs to be justified and documented wherever you use it.

A little background: the binary protocol used for Battle.net has about 80 or so message IDs, each of which generally has a different structure.  The messages don't necessarily come as a result of sending a message first, so the general pattern is that a receive loop is in place that receives the data, parses it, and then sends events back to the client.  In fact, there are no synchronous requests defined by the protocol.

When I first started programming, I had handlers for every message ID in a switch/case branching construct:

switch (packetID)
{
    case BncsPacketId.Null:
        break;
    case BncsPacketId.EnterChat:
        string ecUniqueName = pck.ReadNTString();
        string ecStatstring = pck.ReadNTString();
        string ecAcctName = pck.ReadNTString();
        EnteredChatEventArgs ecArgs = new EnteredChatEventArgs(ecUniqueName, ecStatstring, ecAcctName);
        OnEnteredChat(ecArgs);
        break;
    // ... ad nauseum
}

When I looked at this in ildasm, I noticed that it declared a max stack size that was something ridiculously large (sorry I don't have a specific number - it was about 6 years ago).  I also noticed that there were a LOT of branches, but not necessarily in the order in which I had written them.  The compiler had intrinsically optimized my code to perform a binary search - fairly interesting, optimal speed at O(log N), and something that most of us wouldn't have thought of naturally!

When I last revisited this type of development, I broke all of my handlers out of the branching conditional, calling a separate method to handle each message.  This had a nice effect of making me not have to worry about variable name collisions like I had to in the above example, and it made the code slightly more maintainable.  It's difficult to gauge on paper whether that would have been better or worse performance; there was certainly far less stack allocation, but there was an additional (potentially virtual) method call.

The latest code incorporated into my library takes a different approach: I declare a Dictionary<BncsPacketId, ParseCallback>, populate it with default handlers, and allow existing handlers to be replaced and new ones to be added provided certain conditions are met (a sketch follows the list).  This has had several benefits:

  • According to MSDN, Dictionary<TKey, TValue> approaches O(1), which is (obviously) the fastest lookup we could hope for. 
  • Adding support for new or changed messages does not require change to the code, only that a handler be updated via a method call.
  • Handlers can be switched at runtime.
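Concretely, the dispatch map looks something like this (RegisterParser is a placeholder name; the field and delegate match the parsing loop below):

private delegate void ParseCallback(ParseData data);

private readonly Dictionary<BncsPacketId, ParseCallback> m_packetToParserMap =
    new Dictionary<BncsPacketId, ParseCallback>();

// Replacing or adding a handler at runtime is a single assignment:
public void RegisterParser(BncsPacketId id, ParseCallback parser)
{
    m_packetToParserMap[id] = parser;
}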

In this code, a ParseCallback is a delegate that accepts information provided by the message header and the message contents themselves.  This has modified the entire parsing thread to be:

private void Parse()
{
    try
    {
        while (IsConnected)
        {
            m_parseWait.Reset();

            while (m_packetQueue.Count == 0)
            {
                m_parseWait.WaitOne();
            }

            ParseData data = m_packetQueue.Dequeue();
            if (m_packetToParserMap.ContainsKey(data.PacketID))
            {
                m_packetToParserMap[data.PacketID](data);
            }
            else
            {
                switch (data.PacketID)
                {
                    #region SID_NULL
                    case BncsPacketId.Null:
                        break;
                    #endregion
                    default:
                        Trace.WriteLine(data.PacketID, "Unhandled packet");
                        if (!BattleNetClientResources.IncomingBufferPool.FreeBuffer(data.Data))
                        {
                            Debug.WriteLine(data.PacketID, "Incoming buffer was not freed for packet");
                        }
                        break;
                }
            }
        }
    }
    catch (ThreadAbortException)
    {
        // exit the thread gracefully.
    }
}

Now, obviously, this is a very domain-specific optimization that I wouldn't make unless it made sense in the problem domain.  For mine, it does; I'm writing the library so that others can integrate functionality without having to worry about modifying code that they may not be familiar with or are worried about breaking.  If you absolutely need to use this method, be sure to document why.

The "Speedy C#" Series:
