Running with Code: Like with scissors, only more dangerous


Windows Live Platform Airlift: Thoughts

Posted by Rob

So as I sit in the airport, hoping that one of the beyond-overbooked flights has an opening so I can get home early and not have to go through Vegas, I thought I'd share my thoughts about the Windows Live platform that we picked up during partner training here in Redmond.

Single Sign-On

Without a doubt, the coolest feature enabled by the platform is single sign-on.  And, while we're at it, the way it's implemented is really cool.

When a user goes to a site that uses the single sign-on feature (that is, Windows Live ID integration), the user is presented with a login screen that is fairly consistent across sites.  I've seen two styles of user experience: some sites present a themed version of the older, Passport-style login screen, while others (particularly those that host Live Controls) display a more Windows Vista-style login screen.

When a user needs to log in, they are redirected to provide their authentication credentials via HTTPS.  Once authentication is complete, they are redirected back to the original website with a login token that provides information about that user.  The cool thing is that the login token is the same for that user on that site across sessions, but not the same across sites, so you're able to identify the same user during multiple visits.  This makes integration with the profiling API fairly straightforward.

Unfortunately, I can see issues with it: the sign-up and sign-in flows can make it unclear to users that they're still logging into your site, and people who are not Live ID users might be hesitant to sign up for a service that might not be directly related to the site they're using.  The login pages aren't particularly customizable right now, either, so they might not reassure the user that they're still working with your site.

Contacts and Contacts Control

The Contacts control is an easy way to drop your Windows Live contacts onto a web page.  It's branded for Windows Live and uses an authentication mechanism similar to Live ID's, but it requires that users grant the site access to their information.  In that respect, it's similar to Facebook, except that Microsoft doesn't put restrictions on what you can do with that data once you have it (the onus is on the user to read your privacy policy).  I can see the pros and cons of that particular implementation, and I can't say which I like better.

The control may or may not be appropriate for a given site, but fortunately, you can also simply query the data via a REST interface.  Very slick, and definitely usable.  A good example of an appropriate use was Buxfer, a site that helps you track your money.  You might notice along the top of the page:

Buxfer login options

Clicking the Windows icon pops up a dialog asking you to enter your Windows Live ID credentials.  Very slick!  I believe that, using this information, you're able to invite other friends (or maybe that site was an example of single sign-on; it's starting to blur together).


Also cool (but in this case, probably more for the "cool to nerd developers" factor) is the ability to include a fully functional Windows Live Messenger client in your website.  Want to sign in?  No problem!  Just pop open a new window with your account information and new windows for your conversations, just like the Windows client.

I don't really see the value in this for most websites, but there are a couple great exceptions:

  • A site that wants to offer live chat support can use this service to present even anonymous guests with a chat window to a support person who is signed into Messenger.  There are scalability concerns, but all told, it's a pretty neat way to do it.
  • Message board or other community software might let users grant access to their Live Messenger accounts so that other members can contact them through the web.  The actual Live ID is shielded and never presented; an obfuscated, site-specific ID is used in its place.

Other Stuff

I was a bit unsure about using Live Spaces, but the way it's discussed for example sites makes it sound like an easy storage solution similar to SharePoint.  It might be worthwhile, especially for our clients who want to focus on virality.

Silverlight Streaming currently seems a little limited to us; the 22 MB maximum file size is quite restrictive, but as our trainers pointed out, the Terms of Use are not finalized.

We got some cool info that we can't talk about (nor can the Microsoft guys in charge), but it's really exciting nonetheless.

Final Thoughts

I'm not sure that Terralever can use the platform effectively, but it depends on seeing what kinds of technologies our clients can leverage, and how much Microsoft is willing to work with us.

Another one of the attendees voiced this concern, and it was something I'd thought about before we even got here: a lot of this technology is too late to market (my thought was "too little, too late").  People are sharing their thoughts on Digg, their photos and profiles on Facebook, photos on Flickr, documents on Google, and a lot of other stuff is already done.  Single sign-on is GREAT - but I'm not sure that it's enough to make people buy in to Live.  Still, maybe it's a moot point; with 400 million monthly Live ID users and 100 million monthly Live Spaces users, perhaps greater propagation doesn't matter.

For more information about the Windows Live development platform, check out their website!


Comment Responses: C# 3.0 Best Practices

Posted by Rob

I've received some comments and composed some responses, included below:

First Comment/Question:

I've got questions about performance in LINQ and LINQ to SQL. 
Is it more effective to create a query against the context or against a collection on another object?  For example, which is better:

     = from t in db.Things
        where t.Something = "MyVal"
        && t.ForeignKeyId = 28

or is it better to do this:

     = from t in db.ForeignKeys
        where t.Something = "MyVal"

?  In the former, I'm running against the data context, in the latter, the (kinda) array of foreign key matches

Is it more effective to select the object or a property in the object if I am using an aggregate function?  For example:

    = ( from t in db.Things select t ).Count()

or is it better to do this:

    = ( from t in db.Things select t.ThingId ).Count()

?  In the former, I select the entire t but do nothing.  Does it actually query the data, or just a place-holder?  In the latter, I've got a specific property, which I still don't need.

My answer:

I'm going to preface this by saying - as far as LINQ goes, there's SO MUCH in the toolset, and I can't claim to be much more than a novice.  What I can say, though, is that I can make some educated guesses about how LINQ-to-Objects will behave vs. LINQ-to-SQL.

In your first example, with the query against the database, looking at the "where t.Something == "MyVal" && t.ForeignKeyId == 28" (you're using double-equals-signs in those, right? ;-)), you're probably better off doing the complete query inside the where clause.  Especially in LINQ-to-SQL, where that where clause is going to end up in a SQL statement anyway, you're going to be filtering against database columns, which should be fairly fast.  The only performance tweak I could suggest in this instance is ordering the foreign key ID comparison first, to avoid the character-by-character comparison of a string compare when the foreign key IDs don't match.  This is kind of a difficult choice to make, as it might sacrifice a degree of code readability ("Why is Rob checking a group of values for their foreign key ID?").
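To illustrate that ordering point for the LINQ-to-Objects case, here's a minimal, self-contained sketch (the Thing class and its values are hypothetical, not from the original question): because && short-circuits left to right, placing the cheap integer comparison first skips the string comparison entirely for non-matching elements.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Thing
{
    public int ForeignKeyId;
    public string Something;
}

class PredicateOrderDemo
{
    static void Main()
    {
        var things = new List<Thing>
        {
            new Thing { ForeignKeyId = 28, Something = "MyVal" },
            new Thing { ForeignKeyId = 12, Something = "MyVal" },
            new Thing { ForeignKeyId = 28, Something = "Other" }
        };

        // The int comparison runs first; the string compare only runs
        // for elements where ForeignKeyId already matched.
        var matches = from t in things
                      where t.ForeignKeyId == 28 && t.Something == "MyVal"
                      select t;

        Console.WriteLine(matches.Count()); // prints 1
    }
}
```

In LINQ-to-SQL this reordering shouldn't matter, since the database's query optimizer decides evaluation order; it only pays off in LINQ-to-Objects.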

In your second question related to the Count() aggregate, your LINQ-to-SQL and LINQ-to-Objects queries will (I believe) have similar performance characteristics, but for different reasons.

If you're using LINQ-to-SQL, the entire query should be translated to SQL and should end up reading like:

SELECT COUNT(*) FROM db.Things;

The alternative (the second form you suggested) would be:

SELECT COUNT(ThingId) FROM db.Things;

COUNT(column) doesn't count NULL values in that column, so you have the potential for actually incorrect data if you use the latter, though if you're sure to go against something like the PK column you're probably all right.
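For what it's worth, SQL's COUNT(*) counts every row while COUNT(column) skips NULLs in that column.  The same distinction can be sketched in LINQ-to-Objects over a nullable array (this is an analogy to the SQL semantics, not the generated SQL):

```csharp
using System;
using System.Linq;

class CountNullDemo
{
    static void Main()
    {
        // Four "rows", two of which have a NULL in the counted column.
        int?[] column = { 1, null, 3, null };

        int countStar = column.Count();                      // like COUNT(*)
        int countColumn = column.Count(v => v.HasValue);     // like COUNT(column)

        Console.WriteLine(countStar);   // prints 4
        Console.WriteLine(countColumn); // prints 2
    }
}
```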

In any case, I don't believe it really matters.  When using LINQ-to-SQL, you'll pull from the DB.  I just pulled up a SQL Profiler, ran a query from LINQ, and I got the results I expected:

SELECT COUNT(*) AS [value] FROM [Events].[Events] AS [t0]

So it looks like we're good.  Unfortunately, this query can't be visualized in the debugger like non-aggregate queries can:

var result = (from ev in context.Events select ev).Count();

Hovering over "result" just results in a 0 being displayed.

As for LINQ-to-Objects, my personal opinion is to use the first syntax (select t) as opposed to projecting out a single property.  The reason for this is that the projection creates a new sequence of values, even though it contains just the one property you don't otherwise need.
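A quick LINQ-to-Objects sketch of the two forms (the anonymous ThingId elements here are hypothetical stand-ins for the entity class): both produce the same count, so the projection only adds a per-element selector call without changing the result.

```csharp
using System;
using System.Linq;

class SelectProjectionDemo
{
    static void Main()
    {
        // Five hypothetical "Thing" rows.
        var things = Enumerable.Range(1, 5)
                               .Select(i => new { ThingId = i })
                               .ToArray();

        int direct = things.Count();                             // select t
        int projected = things.Select(t => t.ThingId).Count();   // select t.ThingId

        Console.WriteLine(direct == projected); // prints True
    }
}
```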

One of the things we're doing in 3.5 is to extend our existing object model, which generally shields us from database-specific implementation code, to support objects like data contexts.  However, any code generated by LINQ-to-SQL will ONLY be in our provider-specific implementation libraries that are dynamically bound at runtime.  This allows us to program with LINQ-to-SQL when we want to, but it requires that we map .NET Entity objects to our custom entity objects.  Still, this is sometimes beneficial.

Second Comment/Question:

I don't understand this discussion on Object Initializers:

"Always validate input to properties; an exception raised while setting a property will cause a memory leak, even inside try/catch

The object will already be allocated if an exception is raised within a property"

I am not sure how to express what it is that I do not understand, so maybe you can just talk to this a little bit. Typically, I would not validate parameters in an object initializer scenario (unless they came from an external source, of course, such as user input). For example, if I pull data out of a database (my database) and build an object from it, I would not validate the data first - an exception seems correct in this case (corrupt database data). I also would not fear "memory leaks" because I assume that my "partially-constructed" object would be garbage collected like any properly-constructed object would.

In other words, I imagine that using object initializers is *equivalent* to (a) creating an object using the default constructor, followed by (b) setting properties. If an exception occurred setting properties - so what? [At least from a "memory leak" point of view.]

My answer:

If you're populating from a database, your assumption is that the input is going to be valid, and that's certainly a valid point.  That you're thinking about this is really what I was after.

The "memory leak" I'm referring to is, again, a potential non-issue, as you pointed out (the stranded object will indeed get garbage-collected).  The problem, for me anyway, is that we have no idea of and no control over *when* or how that object gets garbage-collected.  In fact, because it's implicitly compiler-created, we have no way to do anything at all with that temporary object.

Suppose we had this code:

    SomeObj blah;
    try {
        blah = new SomeObj() { ExceptionTest = "This text generates an exception." };
    } catch { }

Even here, the "<>blah_temp01" object (or whatever the compiler calls it) is still allocated on the heap, but we have no way to access it.  The benefit is that the code still behaves as if the initializer were part of a constructor, and the "blah" variable remains null (which is my guess at what the designers were gunning for).  But until the next garbage collection, that memory is still allocated, possibly contributing to heap pressure.
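A runnable sketch of roughly what the compiler generates for that initializer (the SomeObj class and its validating property are hypothetical; the real temporary's name is compiler-internal):

```csharp
using System;

class SomeObj
{
    private string m_text;
    public string ExceptionTest
    {
        get { return m_text; }
        set
        {
            // Hypothetical validation that rejects this particular text.
            if (value != null && value.Contains("exception"))
                throw new ArgumentException("Invalid value.");
            m_text = value;
        }
    }
}

class InitializerExpansionDemo
{
    static void Main()
    {
        SomeObj blah = null;
        try
        {
            // The initializer compiles to roughly:
            //   SomeObj temp = new SomeObj();
            //   temp.ExceptionTest = "This text generates an exception.";
            //   blah = temp;
            // If the setter throws, the assignment to blah never happens,
            // and the already-allocated temp is unreachable until collected.
            blah = new SomeObj { ExceptionTest = "This text generates an exception." };
        }
        catch (ArgumentException) { }

        Console.WriteLine(blah == null); // prints True
    }
}
```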

The other difficulty we run into (and this is perhaps a cheap excuse) is that when all of the properties are set on the same line, we can't tell (at least in Visual Studio) which property-set is causing the exception.  We could do this:

    blah = new SomeObj()
    {
        ExceptionTest = "This text generates an exception."
    };

But does that really gain you much over:

    blah = new SomeObj();
    blah.ExceptionTest = "This text generates an exception.";

Ultimately, my point is this: when you don't use initializers, you can handle cleaning up and disposing, and you otherwise know that the object exists.  When you do use initializers, you can't.  Suppose I have a smart card file stream that implements IDisposable:

Stream fs = new SmartCardFileStream() { FilePath = "003F/0040", Mode = FileMode.ReadWrite };

Suppose that, for whatever reason, the file opens when you set FilePath, but FileMode.ReadWrite isn't supported (only read or write).  You can't programmatically close that file; you have to wait for the finalizer to be invoked.  Arguably, you should wrap that declaration in a using {} block, but that's not always feasible (e.g., when you need to hold it across events in a WinForms app).
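As a sketch of the safer alternative, here's a hypothetical stand-in for that smart card stream (SmartCardFileStream is invented for illustration; the static flag just makes the open/closed state observable).  Setting the properties inside the using body, rather than in an initializer, guarantees Dispose still runs when the Mode setter throws:

```csharp
using System;
using System.IO;

// Hypothetical stand-in: the resource "opens" as soon as FilePath is set,
// before Mode has been validated.
class SmartCardFileStream : IDisposable
{
    public static bool FileOpen;
    private string m_path;
    private FileMode m_mode;

    public string FilePath
    {
        get { return m_path; }
        set { m_path = value; FileOpen = true; } // file opens on set
    }

    public FileMode Mode
    {
        get { return m_mode; }
        set
        {
            if (value == FileMode.ReadWrite) // hypothetical limitation
                throw new NotSupportedException("Only read or write.");
            m_mode = value;
        }
    }

    public void Dispose() { FileOpen = false; }
}

class UsingDemo
{
    static void Main()
    {
        try
        {
            using (var fs = new SmartCardFileStream())
            {
                // Property-sets happen inside the using body, so the
                // using's finally block disposes fs even when Mode throws.
                fs.FilePath = "003F/0040";
                fs.Mode = FileMode.ReadWrite; // throws
            }
        }
        catch (NotSupportedException) { }

        Console.WriteLine(SmartCardFileStream.FileOpen); // prints False
    }
}
```

Note that moving the property-sets into an initializer on the using's acquisition expression would defeat this: the exception would be thrown before the using's try block is entered, and Dispose would never run.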

The primary difference is that, when an exception is raised in a constructor, it is automatically marked for garbage collection by the CLR (except in cases of a static constructor raising a TypeInitializationException, making the whole class inaccessible for the duration).  A partially-"constructed" class that raised an exception during a property-set would be problematic in that it doesn't get that benefit - we have to rely on the garbage collector to determine that there are no outstanding references to it at the next GC pass.


I chopped off some of the thank yous/hellos/I appreciates from the e-mails, and just wanted to once again say that I appreciate the comments and feedback.  Hopefully, those of you who chose to comment weren't too thrown off by my e-mail replies; I'm not particularly a fan of starting threads and posting comments on my own blog post (although I may once I get a new blog).  Also - I'd just like to point out that these are my personal opinions and not necessarily the end-all, be-all of C#; there are certainly places to use each new feature, and my goal is to investigate how features are implemented so that we know when a situation calls for them.


C# 3.0 Best Practices: Downloads

Posted by Rob

I'm publishing the PowerPoint slides and sample code from the C# 3.0 Best Practices talk I gave tonight at the Arizona .NET Users Group meeting.  Once again I'd like to thank Scott Cate for extending the opportunity to me, Lorin Thwaits for pinch-hitting for Scott, and to Hudson IT Staffing for the event sponsorship.

Please note that the code sample isn't really annotated, and was designed to generate the code in the slides and the Reflector output shown.  The theme of the presentation wasn't so much "How can we do these things the best?" but more "How is this implemented, so I can choose the best way to do something?"  So the code isn't by any means a beacon of effectiveness - it's meant to go with the slides.

I'd welcome feedback and comments, as well as suggestions for additional content.  Enjoy!


Windows Live Platform Training

Posted by Rob

Tomorrow after I present at the Arizona .NET Users Group, I'm getting on a plane to head to Redmond, WA for some first-party training on the Windows Live platform.  I'm not particularly certain what to expect; I think by and large, most of us are a bit ignorant about what the platform has to offer, and while some of the components are incredibly cool (the Virtual Earth API, for example), others seem... not so cool (such as the web-based gadget hosting platform). 

One thing is certain - with the Windows Live platform, Microsoft is making a huge effort at hitting web-connected technologies.  Sometimes I wonder if it isn't too little, too late (Blogger, Facebook, Google Maps, and Flickr come to mind as functional precursors for the corresponding Windows Live services), but at the same time, the ability to have everything integrated into one service could be advantageous.  To that end, we'll need to see how things play out.

Stay tuned for new information about the Windows Live platform, direct from Redmond.


A Synchronized Reference Counter

Posted by Rob

In a recent project that I've been working on, I had the need to access a singleton object from multiple callers on multiple threads, but then restrict access to the singleton during a change operation, during which the singleton was changed out and replaced with a new one.  New requests must block until the singleton change is complete, but existing requests must complete before the singleton change occurs.

A WaitHandle - specifically, an EventWaitHandle with the reset mode set to Manual - is an effective way to allow an arbitrary number of threads to pass through the primitive until its reset condition is met.  In this case, however, because we needed custom behaviors around Set() and Reset() for domain-specific reasons, I contained the underlying EventWaitHandle within my class and derived the class from WaitHandle.  This class is called SynchronizedReferenceCounter:

    using System;
    using System.Threading;

    namespace Terralever.Threading
    {
        public class SynchronizedReferenceCounter : WaitHandle
        {
            private EventWaitHandle m_handle;
            private int m_refCount;

            public SynchronizedReferenceCounter()
            {
                m_handle = new EventWaitHandle(true, EventResetMode.ManualReset);
            }

            public void AddReference()
            {
                WaitOne();
                Interlocked.Increment(ref m_refCount);
            }

            public void RemoveReference()
            {
                if (Interlocked.Decrement(ref m_refCount) == 0)
                {
                    OnZeroReferencesInUse(EventArgs.Empty);
                }
            }

            public override bool WaitOne()
            {
                return m_handle.WaitOne();
            }

            public override bool WaitOne(int millisecondsTimeout, bool exitContext)
            {
                return m_handle.WaitOne(millisecondsTimeout, exitContext);
            }

            public override bool WaitOne(TimeSpan timeout, bool exitContext)
            {
                return m_handle.WaitOne(timeout, exitContext);
            }

            public void AllowNewReferences()
            {
                m_handle.Set();
            }

            public void StopNewReferences()
            {
                m_handle.Reset();
            }

            public event EventHandler ZeroReferencesInUse;

            protected virtual void OnZeroReferencesInUse(EventArgs e)
            {
                // Copy to a local to avoid a race with unsubscription.
                EventHandler handler = ZeroReferencesInUse;
                if (handler != null)
                    handler.BeginInvoke(this, e, null, null);
            }
        }
    }

In this class, I override the WaitOne() methods and marshal them to the contained EventWaitHandle's WaitOne() methods as appropriate.  Reference additions are blocked whenever the contained EventWaitHandle is in a Reset state, as set by a call to the StopNewReferences() method.  An object that contains this reference counter should call AddReference() whenever it is accessed, and RemoveReference() whenever it is released.  For example:

    public class ReferenceCountedSingleton
    {
        // ...
        public static ReferenceCountedSingleton GetCurrent()
        {
            s_counter.AddReference();
            return s_current;
        }

        public static void ReleaseReference()
        {
            s_counter.RemoveReference();
        }

        public static void InitiateChange()
        {
            s_counter.StopNewReferences();
        }

        private static void s_counter_ZeroReferencesInUse(object sender, EventArgs e)
        {
            // finish the change
            s_counter.AllowNewReferences();
        }
    }

This shell class assumes a few things, but should be fairly straightforward.

There are some caveats you should know about, though.  One thing that might strike you is that if ReleaseReference() isn't called, you can run into a deadlock.  This can be avoided by wrapping access in a try/finally block:

    ReferenceCountedSingleton instance = ReferenceCountedSingleton.GetCurrent();
    try
    {
        // perform operations that could be risky here.
    }
    finally
    {
        ReferenceCountedSingleton.ReleaseReference();
    }

As shown here, it is important to acquire the reference-counted instance exactly once per release, so capture it in a local variable rather than re-reading the singleton accessor.

The issue you can potentially run into here is that singletons tend to do their magic via the Instance property like so:

    ReferenceCountedSingleton.Instance.DoSomeAction();
    ReferenceCountedSingleton.Instance.DoAnotherAction();
    ReferenceCountedSingleton.Instance.DoLastAction();

If you write this perilous series of lines, you'll increment the counter three times.  If you then call ReleaseReference() only once, you'll run into a deadlock: you'll never get back to a zero reference count, and a pending change will never finish.

Hopefully this helps someone.  It took a lot of research (coming soon) to find the right synchronization primitive to use for this scenario.


Mass Effect: Promised the Galaxy, Delivered… a Star System

Posted by Rob

Mass Effect, the latest installment of BioWare's epic RPGs, promised us the world, nay, the galaxy.  It spoke of a universe of incredible depth, a future in which humanity has discovered the means to travel to distant stars, interact with aliens, and eke out our place in the galaxy.  It's got an incredibly compelling story, in which you are selected as humanity's first "Spectre," an agent of the galactic government empowered to do anything you see fit to preserve galactic stability.  The visual artwork is compelling; the game has a graininess filter to make it look like an old '80s sci-fi movie, and the music fits right into that genre (it makes the game feel nostalgic).  The use of HDR is gorgeous, and while at times it might feel a little overused, in general it is fantastic.


Although graphical quality is excellent, there is one exception: load times.  Please, take five more seconds to load the textures and the bump maps.

Load Times - a mass immersion breaker

As you can see in the photo to the left (I apologize, they were taken with my digital camera pointed at my TV), the bump map and texture map on the ground didn't load immediately.  Take a few extra seconds and load the bump maps before I get into the game.  In my opinion, it's incredibly immersion-breaking, and it should have been unacceptable to BioWare.  Unless fast load time was one of the highest-priority requirements, which I can't imagine (and I'll say why in a second), it amazes me that this made it to production.

One of the COOL things about loading in this game, though, is the elevator style.  Loading happens in a lot of areas by riding elevators - definitely one of the least immersion-breaking approaches I've seen.  That was a fantastic move; in fact, while riding them on the Citadel, you can pick up information on new missions and learn a little more about your squad members.  It was definitely one of the strongest parts of the game.


Where did it go?

This is without a doubt one of the best games I've ever played in terms of control.  It is incredibly easy to control the characters, the selection of dialogue is fantastic, and the menu interface is great (with the single exception of - surprise, surprise - inventory management).

One thing I'm curious about - the E3 2006 dialog depicted to the left - where did it go?  "A billion lives are hanging in the balance here.  I won't let some piss-ant bartender slow me down."  That seemed like such a great scene.  I'm disappointed that it's not in the game.

There is one downer: the vehicle is ridiculous in terms of control.  I'm on my third playthrough, and I haven't figured out how to control it steadily.  One thing I totally loved, though, was running over the Geth.  It was fun to run them into the lava on the planet shown, incidentally, in the image with the bad load times.

I also don't like the mission assignments.  Besides the main quests, it's annoying to hear from the Systems Alliance admiral whom you never meet, who just says "You're a Spectre, and you answer to the Council, but you're still a human."  Come on, give me a break.  Seth Green, give me a chance to say no - don't just say "Transmission comin' in - patchin' it through" every freaking time I look at the Galaxy Map.  PLEASE, SETH GREEN, LET ME DO MY OWN THING!


The music is absolutely fantastic.  It's one of the best soundtracks I've ever heard; it's absolutely distinctive.  And it was inexpensive to buy on iTunes!  Other than that, I can't say much.

Finally - the Universe

I think this is possibly Mass Effect's greatest weakness, as well as its greatest strength.  While places like the Citadel are built up very well, the Citadel feels like about 60% of the universe's civilization.  That's pretty sad considering that the game indicates only a few million (maybe seven million) people live on the Citadel, while there have to be hundreds of billions, if not trillions, across the galaxy.  The planets you can visit outside of the main quest feel forced and random, not to mention barren.  It's supposed to be the galaxy - why do I hardly ever run into people who aren't trying to kill me?

But on the other hand, the way the other races are introduced and described is incredibly thorough.  It's exactly how I'd like to see a game introduce the races it has.  I can't sing the praises of the story or the universe (in this sense) enough.

Go buy it, if you haven't yet.  It's worth the cash.  Get it new - support BioWare.


Looking at .NET: The Disposable Pattern

Posted by Rob

One of the more obscure things about the .NET Framework is the Disposable pattern used throughout the framework, supported via the IDisposable interface.  This pattern is so pervasive throughout .NET that C# supports it intrinsically via the using keyword.  There is also a standard pattern for implementing the interface that the interface itself just can't express (perhaps because interfaces can't specify protected methods; maybe that's a C# 4.0 Wishlist part 6 item?).

We can use an IDisposable object with the using keyword like so:

    using (SqlConnection con = new SqlConnection(WebConfigurationManager.ConnectionStrings["DbConnection"].ConnectionString))
    using (SqlCommand cmd = new SqlCommand("[dbo].[GetAllItems]", con))
    {
        cmd.CommandType = CommandType.StoredProcedure;

        con.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            return GetAllItemsFromReader(reader);
        }
    }

In this example, SqlConnection, SqlCommand, and SqlDataReader all implement IDisposable, because they interoperate with unmanaged resources.  The C# compiler actually transforms these using blocks into try/finally blocks:

    List<Item> _compilerGeneratedResult;
    SqlConnection con = null;
    SqlCommand cmd = null;
    try
    {
        con = new SqlConnection(WebConfigurationManager.ConnectionStrings["DbConnection"].ConnectionString);
        cmd = new SqlCommand("[dbo].[GetAllItems]", con);

        cmd.CommandType = CommandType.StoredProcedure;

        con.Open();

        SqlDataReader reader = null;
        try
        {
            reader = cmd.ExecuteReader();
            _compilerGeneratedResult = GetAllItemsFromReader(reader);
        }
        finally
        {
            if (reader != null)
                reader.Dispose();
        }
    }
    finally
    {
        if (cmd != null)
            cmd.Dispose();
        if (con != null)
            con.Dispose();
    }

    return _compilerGeneratedResult;

Yeah, if it were left up to C# users to write this pattern out by hand, we'd never do it (not without the using keyword, anyway).  But the question is: how do we implement it?

Traditionally, we implement it by creating a protected, virtual method (or a private method if the class is sealed) that accepts a boolean value indicating whether we're calling it via Dispose() or via the destructor.  The Dispose() method calls this with true, and we also implement a destructor that calls it with false.  The Dispose(bool) method then implements the cleanup logic, and if the parameter is true, also tells the garbage collector not to invoke the finalizer (the destructor) on this object.  Here's a sample:

    using System;
    using System.Runtime.InteropServices;

    namespace DisposableSample
    {
        public class HGlobalPtr : IDisposable
        {
            #region IDisposable Members

            ~HGlobalPtr()
            {
                Dispose(false);
            }

            public void Dispose()
            {
                Dispose(true);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (disposing)
                    GC.SuppressFinalize(this);
            }

            #endregion
        }
    }

This is the most basic implementation of the IDisposable pattern.  We're going to evolve it a bit, make it actually useful, and also look at Static Code Analysis (SCA) output for it.  First, the SCA results (using FxCop).

This basic example generates two warnings, incidentally, from the same rule.  Here they are:

warning : CA1816 : Microsoft.Usage : Change 'HGlobalPtr.Dispose()' to call 'GC.SuppressFinalize(object)'. This will prevent unnecessary finalization of the object once it has been disposed and it has fallen out of scope.

warning : CA1816 : Microsoft.Usage : 'HGlobalPtr.Dispose(bool)' calls 'GC.SuppressFinalize(object)', a method that is typically only called within an implementation of 'IDisposable.Dispose'. Refer to the IDisposable pattern for more information.

If you look at help for this rule, you'll see that to properly implement this change, you actually call GC.SuppressFinalize(this) within the Dispose() method (as opposed to Dispose(bool) method).  Doing so ensures that GC.SuppressFinalize is called via IDisposable.Dispose even if derived classes override the virtual Dispose(bool) method.

Here's a more complete implementation of HGlobalPtr, with IDisposable fully implemented:

    public class HGlobalPtr : IDisposable
    {
        private IntPtr m_ptr;

        public HGlobalPtr(int size)
        {
            m_ptr = Marshal.AllocHGlobal(size);
        }

        #region IDisposable Members

        ~HGlobalPtr()
        {
            Dispose(false);
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposing)
            {
                // free the state of any contained objects
                // we don't contain any other objects!
            }
            // free my own state
            if (m_ptr != IntPtr.Zero)
            {
                Marshal.FreeHGlobal(m_ptr);
                m_ptr = IntPtr.Zero;
            }
        }

        #endregion
    }

Now, there are other code analysis problems; I need to handle security warnings for calls to Marshal.AllocHGlobal and Marshal.FreeHGlobal, which have a LinkDemand permission set on them.  I should also consider replacing m_ptr with a SafeHandle.  But aside from these, IDisposable is correctly implemented.
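For completeness, here's a sketch of what that SafeHandle replacement might look like (the class name is my own; SafeHandleZeroOrMinusOneIsInvalid is the framework base type).  The runtime guarantees ReleaseHandle runs exactly once, so no manual finalizer or disposed-flag bookkeeping is needed:

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Wraps an HGlobal allocation in a SafeHandle so the runtime, not our
// own finalizer logic, guarantees the memory is freed exactly once.
internal sealed class HGlobalSafeHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public HGlobalSafeHandle(int size)
        : base(true) // true: this handle owns the native resource
    {
        SetHandle(Marshal.AllocHGlobal(size));
    }

    protected override bool ReleaseHandle()
    {
        Marshal.FreeHGlobal(handle);
        return true;
    }
}

class SafeHandleDemo
{
    static void Main()
    {
        // SafeHandle implements IDisposable, so using still applies.
        using (var ptr = new HGlobalSafeHandle(64))
        {
            Console.WriteLine(ptr.IsInvalid); // prints False
        }
    }
}
```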

Just remember - if your object is IDisposable, please use the using statement!

For more information on implementing IDisposable, refer to the Microsoft documentation article on Technet.