Category Archives: CodeProject

Don’t Make This Dumb Locking Mistake

What’s wrong with this code:

try
{
    if (!Monitor.TryEnter(this.lockObj))
    {
        return;
    }
    
    DoWork();
}
finally
{
    Monitor.Exit(this.lockObj);
}

This is a rookie mistake, but sadly I made it the other day when I was fixing a bug in haste. The problem is that the Monitor.Exit in the finally block will try to exit the lock even if the lock was never taken, which throws a SynchronizationLockException. Use a different TryEnter overload instead:

bool lockTaken = false;
try
{
    Monitor.TryEnter(this.lockObj, ref lockTaken);
    if (!lockTaken)
    {
        return;
    }
    
    DoWork();
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(this.lockObj);
    }
}

You might be tempted to use the same TryEnter overload we used before and rely on the return value to set lockTaken. That might be fine in this case, but it’s better to use the version shown here. It guarantees that lockTaken is always set, even in cases where an exception is thrown, in which case it will be set to false.

Get Your Thread Synchronization Right the First Time

I was recently debugging a problem that just didn’t make any sense at first.

The code looked like this:

class App
{
    public bool IsRunning {get; set;}
    private Thread houseKeepingThread;
    
    public void Start()
    {
        this.IsRunning = true;
        this.houseKeepingThread = new Thread(ThreadFunc);
        this.houseKeepingThread.Start();
    }
    
    private void ThreadFunc()
    {
        while (this.IsRunning)
        {
            DoWork();
            // wait for 30 seconds
        }
    }
}

The actual code was of course a bit more complicated, but this demonstrates the essence of the problem.

The outward manifestation of the bug was evidence that DoWork wasn’t being called over time as it should have been.

To debug this, I first concentrated on reasons that the thread could end early, and none of them made any sense.

To finally figure it out, I attached a debugger to a process known to be in a suspect state and discovered evidence in the state of the App object that DoWork had never run, not even once.

I stared at the code for five seconds and then said out loud: “IsRunning isn’t volatile!”

Despite setting IsRunning to true before starting the thread, it is completely possible that the thread starts running before the memory is coherent across all of the threads.

What’s amazing about this bug is that it existed from the very beginning of this application. As in checkin #1. It has probably been causing problems for a while at some low level, but it recently must have gotten worse—it could be for any reason, some change in the pattern of execution, the load on the system, who knows.

The fix was quite easy:

Make a volatile backing field for the IsRunning property.
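
Here is a minimal sketch of that fix (C# doesn’t allow volatile auto-implemented properties, so the property needs an explicit backing field):

private volatile bool isRunning;

public bool IsRunning
{
    get { return this.isRunning; }
    set { this.isRunning = value; }
}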

By using volatile, there will be a memory barrier that will enforce coherency across all threads that access this variable. Problem solved.

The lesson here isn’t about any particular debugging skills. The real lesson is to make sure you get this right in the first place, because debugging it takes far longer and is much trickier than just reasoning about it in the first place. These types of problems can stay hidden and extremely rare for a very long time until some other innocuous change causes them to start manifesting in strange, seemingly impossible ways.

If you want to learn more about memory barriers and when they’re required, you can check out Chapter 4 of Writing High-Performance .NET Code, and there is also this tutorial by Joseph Albahari.

Announcing Microsoft.IO.RecyclableMemoryStream

It is with great pleasure that I announce the latest open source release from Microsoft. This time it’s coming from Bing.

Before explaining what it is and how it works, I have to mention that nearly all of the work for actually getting this set up on GitHub and preparing it for public release was done by Chip Locke, one of our superstars.

What It Is

Microsoft.IO.RecyclableMemoryStream is a MemoryStream replacement that offers superior behavior for performance-critical systems. In particular it is optimized to do the following:

  • Eliminate Large Object Heap allocations by using pooled buffers
  • Incur far fewer gen 2 GCs, and spend far less time paused due to GC
  • Avoid memory leaks by having a bounded pool size
  • Avoid memory fragmentation
  • Provide excellent debuggability
  • Provide metrics for performance tracking

In my book Writing High-Performance .NET Code, I had this anecdote:

In one application that suffered from too many LOH allocations, we discovered that if we pooled a single type of object, we could eliminate 99% of all problems with the LOH. This was MemoryStream, which we used for serialization and transmitting bits over the network. The actual implementation is more complex than just keeping a queue of MemoryStream objects because of the need to avoid fragmentation, but conceptually, that is exactly what it is. Every time a MemoryStream object was disposed, it was put back in the pool for reuse.

-Writing High-Performance .NET Code, p. 65

The exact code that I’m talking about is what is being released.

How It Works

Here are some more details about the features:

  • A drop-in replacement for System.IO.MemoryStream. Its semantics are as close to MemoryStream’s as possible.
  • Rather than pooling the streams themselves, the underlying buffers are pooled. This allows you to use the simple Dispose pattern to release the buffers back to the pool, as well as detect invalid usage patterns (such as reusing a stream after it’s been disposed).
  • Completely thread-safe. That is, the MemoryManager is thread safe. Streams themselves are inherently NOT thread safe.
  • Each stream can be tagged with an identifying string that is used in logging. This can help you find bugs and memory leaks in your code relating to incorrect pool use.
  • Debug features like recording the call stack of the stream allocation to track down pool leaks
  • Maximum free pool size to handle spikes in usage without using too much memory.
  • Flexible and adjustable limits to the pooling algorithm.
  • Metrics tracking and events so that you can see the impact on the system.
  • Multiple internal pools: a default “small” buffer (default of 128 KB) and additional, “large” pools (default: in 1 MB chunks). The pools look kind of like this:

[Figure: the small block pool and the large buffer pools in RecyclableMemoryStream]

In normal operation, only the small pool is used. The stream abstracts away the use of multiple buffers for you. This makes the memory use extremely efficient (much better than MemoryStream’s default doubling of capacity).

The large pool is only used when you need a contiguous byte[] buffer, via a call to GetBuffer or (let’s hope not) ToArray. When this happens, the buffers belonging to the small pool are released and replaced with a single buffer at least as large as what was requested. The size of the buffers in the large pool is completely configurable, but if a buffer greater than the maximum size is requested, then one will be created (it just won’t be pooled upon Dispose).

Examples

You can jump right in with no fuss by just doing a simple replacement of MemoryStream with something like this:

var sourceBuffer = new byte[]{0,1,2,3,4,5,6,7}; 
var manager = new RecyclableMemoryStreamManager(); 
using (var stream = manager.GetStream()) 
{ 
    stream.Write(sourceBuffer, 0, sourceBuffer.Length); 
}

Note that RecyclableMemoryStreamManager should be declared once and it will live for the entire process–this is the pool. It is perfectly fine to use multiple pools if you desire.
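
A common way to get that single, process-long lifetime is a static field (a sketch; the wrapper class name is mine):

class Buffers
{
    // One pool for the entire process.
    public static readonly RecyclableMemoryStreamManager Manager =
        new RecyclableMemoryStreamManager();
}

Anything in the process can then call Buffers.Manager.GetStream().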

To facilitate easier debugging, you can optionally provide a string tag, which serves as a human-readable identifier for the stream. In practice, I’ve usually used something like “ClassName.MethodName” for this, but it can be whatever you want. Each stream also has a GUID to provide absolute identity if needed, but the tag is usually sufficient.

using (var stream = manager.GetStream("Program.Main"))
{
    stream.Write(sourceBuffer, 0, sourceBuffer.Length);
}

You can also provide an existing buffer. It’s important to note that this buffer will be copied into the pooled buffer:

var stream = manager.GetStream("Program.Main", sourceBuffer, 
                                    0, sourceBuffer.Length);
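
If a consumer later needs the contiguous contents, GetBuffer (discussed above) will switch the stream over to a single large-pool buffer. A small sketch, reusing the manager and sourceBuffer from the earlier examples:

using (var stream = manager.GetStream("Program.Main", sourceBuffer,
                                      0, sourceBuffer.Length))
{
    // Replaces the small-pool blocks with one contiguous buffer taken
    // from the large pool (or an unpooled one if it exceeds the maximum).
    byte[] contiguous = stream.GetBuffer();
}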

You can also change the parameters of the pool itself:

int blockSize = 1024;
int largeBufferMultiple = 1024 * 1024;
int maxBufferSize = 16 * largeBufferMultiple;

var manager = new RecyclableMemoryStreamManager(blockSize, 
                                                largeBufferMultiple, 
                                                maxBufferSize);

manager.GenerateCallStacks = true;
manager.AggressiveBufferReturn = true;
manager.MaximumFreeLargePoolBytes = maxBufferSize * 4;
manager.MaximumFreeSmallPoolBytes = 100 * blockSize;

Is this library for everybody? No, definitely not. This library was designed with some specific performance characteristics in mind. Most applications probably don’t need those. However, if they do, then this library can absolutely help reduce the impact of GC on your software.

Let us know what you think! If you find bugs or want to improve it in some way, then dive right into the code on GitHub.


Using MemoryStream to Wrap Existing Buffers: Gotchas and Tips

A MemoryStream is a useful thing to have around when you want to process an array of bytes as a stream, but there are a few gotchas you need to be aware of, and some alternatives that may be better in a few cases. In Writing High-Performance .NET Code, I mention some situations where you may want to use pooled buffers, but in this article I will talk specifically about using MemoryStream to wrap existing buffers to avoid additional memory allocations.

There are essentially two ways to use a MemoryStream:

  1. MemoryStream creates and manages a resizable buffer for you. You can write to and read from it however you want.
  2. MemoryStream wraps an existing buffer. You can choose how the underlying buffer is treated.

Let’s look at the constructors of MemoryStream and how they lead to one of those situations.

  • MemoryStream() – The default constructor. MemoryStream owns the buffer and resizes it as needed. The initial capacity (buffer size) is 0.
  • MemoryStream(int capacity) – Same as default, but initial capacity is what you pass in.
  • MemoryStream(byte[] buffer) – MemoryStream wraps the given buffer. You can write to it, but not change the size—basically, this buffer is all the space you will have. You cannot call GetBuffer() to retrieve the original array.
  • MemoryStream(byte[] buffer, bool writable) – MemoryStream wraps the given buffer, but you can choose whether to make the stream writable at all. You could make it a pure read-only stream. You cannot call GetBuffer() to retrieve the original array.
  • MemoryStream(byte[] buffer, int index, int count) – Wraps an existing buffer, allowing writes, but allows you to specify an offset (aka origin) into the buffer that the stream will consider position 0. It also allows you to specify how many bytes to use after that origin as part of the stream. The buffer is not resizable. You cannot call GetBuffer() to retrieve the original array.
  • MemoryStream(byte[] buffer, int index, int count, bool writable) – Same as previous, but you can choose whether the stream is read-only. The buffer is still not resizable, and you cannot call GetBuffer() to retrieve the original array.
  • MemoryStream(byte[] buffer, int index, int count, bool writable, bool exposable) – Same as previous, but now you can specify whether the buffer should be exposed via GetBuffer(). This is the ultimate control you’re given here, but using it comes with an unfortunate catch, which we’ll see later.

Stream-Managed Buffer

If MemoryStream is allowed to manage the buffer size itself (the first two constructors above), then understanding the growth algorithm is important. The algorithm as currently coded looks like this:

  1. If requested buffer size is less than the current size, do nothing.
  2. If requested buffer size is less than 256 bytes, set new size to 256 bytes.
  3. If requested buffer size is less than twice the current buffer size, set the new size to twice the current size.
  4. Otherwise set capacity to exactly what was requested.

Essentially, if you’re not careful, you will start doubling the amount of memory you’re using, which may be overkill for some situations.
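
If you know roughly how much data you will write, you can sidestep the doubling entirely by presizing the stream (a sketch; 64 KB is an arbitrary example value):

// Presizing avoids the repeated reallocate-and-copy as the stream grows.
const int expectedSize = 64 * 1024;
using (var ms = new MemoryStream(expectedSize))
{
    // Writes totaling up to expectedSize bytes trigger no resize.
}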

Wrapping an Existing Buffer

You would want to wrap an existing buffer in any situation where you have an existing array of bytes and don’t want to needlessly copy them, causing more memory allocations and thus a higher rate of garbage collections. For example: you’ve read a bunch of bytes from the wire via HTTP and now have an existing buffer, and you need to pass those bytes to a parser that expects a Stream. So far, so good.

However, there is a gotcha here. To illustrate, let’s see a hypothetical example.

You pull some bytes off the wire. The first few bytes are a standard header that all input has, and can be ignored. For the actual content, you have a parser that expects a Stream. Within that stream, suppose there are subsections of data that have their own parsers. In other words, you have a buffer that looks something like this:

[Figure: a buffer with a header, followed by content containing sub-sections]

To deal with this, you wrap a MemoryStream around the content section, like this:

// comes from network
byte[] buffer = …

// Content starts at byte 24
MemoryStream ms = new MemoryStream(buffer, 24, buffer.Length - 24,
                                   writable: false, publiclyVisible: true);

So far so good, but what if the parser for the sub-section really needs to operate on the raw bytes instead of as a Stream? This should be possible, right? After all, publiclyVisible was set to true, so you can call GetBuffer(), which returns the original buffer. There’s just one (major) problem: You don’t know where you are in that buffer.

This may sound like a contrived situation, but it’s completely possible and I’ve run into it multiple times.

See, when you wrapped that buffer and told MemoryStream to consider byte 24 as the start, it set a private field called origin to 24. Position 0 of the stream actually corresponds to index 24 of the underlying array. Unfortunately, MemoryStream will not surface the origin to you. You can’t even deduce it from other properties like Capacity, Length, and Position. The origin just disappears, which means that the buffer you get back from GetBuffer() is useless. I consider this a bug in the .NET Framework—why have the ability to retrieve the original buffer if you don’t surface the offset being used? It may be that it would be more confusing, with an additional property that many people won’t understand.

There are a few ways you can choose to handle this:

  1. Derive a child class from MemoryStream that does nothing except mimic the underlying constructors, store this offset itself, and make it available via a property (see the sketch after this list).
  2. Pass down the offset separately. Works, but feels very annoying.
  3. Instead of using MemoryStream, use an ArraySegment<byte> and wrap that (or parts of it) as needed in a MemoryStream.
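
Here is a minimal sketch of option #1 (the class name and Origin property are mine, not part of the framework):

public sealed class OffsetMemoryStream : MemoryStream
{
    // The index into the original buffer that the stream treats as position 0.
    public int Origin { get; private set; }

    public OffsetMemoryStream(byte[] buffer, int index, int count,
                              bool writable, bool publiclyVisible)
        : base(buffer, index, count, writable, publiclyVisible)
    {
        this.Origin = index;
    }
}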

Probably the cleanest option is #1. If you can’t do that, convert the deserialization methods to take ArraySegment<byte>, which wraps a buffer, an offset, and a length into a cheap struct, which you can then pass to all deserialization functions. If you need a Stream, you can easily construct it from the segment:

byte[] buffer = …

var segment = new ArraySegment<byte>(buffer, 24, buffer.Length - 24);

var stream = new MemoryStream(segment.Array, segment.Offset, segment.Count);

Essentially, make your master buffer source the ArraySegment, not the Stream.

If you find this kind of tip useful, you will love my book Writing High-Performance .NET Code, which will teach you hundreds of optimizations like this as well as the fundamentals of .NET performance engineering.

5 More Attributes of Highly Effective Programmers

Nearly 7 years ago, I wrote a little article called The Top 5 Attributes of Highly Effective Programmers that got some good feedback and has proven popular over time.

One matures as a developer, of course. I wrote that article quite close to the beginning of my career. Over the last few years, especially at Microsoft, I’ve had the opportunity to witness a much wider range of behaviors. I’ve been able to develop a much better sense of what differentiates the novice from the truly effective developer.

The difference in skills can be truly staggering if you’re not used to seeing it. A new programmer, or one who has not learned much from experience, can often be an order of magnitude (or more) less productive than a good, experienced developer. You don’t want to spend very long at the bottom of this kind of ranking. Some of this is just experience, but in many cases it’s just a mindset–there are plenty of “experienced” developers who haven’t actually learned to improve. It’s true in many professions, but especially so in programming–you can’t plateau. You have to keep learning. The world changes, programming changes, and what was true 10 years ago is laughably outdated.

The attributes I listed in the previous article are still applicable. They are still valuable, but there is more. Note that I am not claiming in this article that I’ve mastered these. I still aspire to meet higher standards in each of these areas. Remember that it is not hypocrisy to espouse good ideas, even while struggling to live up to them. These are standards to live up to, not descriptions of any one person I know (though I do know plenty of people who are solid in at least one of these areas).

Sense of Ownership

Ownership means a lot of things, but mainly that you don’t wait for problems to find you. It means that if you see a problem, you assume it’s your job to either fix it or find someone else who can, and then to make sure it happens. It means not ignoring emails because, hey, not my problem! It means taking issues seriously and making sure they are dealt with. Someone with a sense of ownership would never sweep a problem under the rug or blithely hope that someone else will deal with it.

You could equate ownership with responsibility, but I think it goes beyond that. “Responsibility” often takes on the hue of a burden or delegation of an unwelcome task, while “ownership” implies that you are invested in the outcome.

Ownership often means stepping outside of your comfort zone. You may think you’re not the best person to deal with something, but if no one else is doing it, then you absolutely are. Just step up, own the problem, and get it done.

Ownership does not mean that you do all the work–that would be draining, debilitating, and ultimately impossible. It does not mean that you specify bounds for your responsibility and forbid others to encroach. It especially does not mean code ownership in the sense that only you are allowed to change your code.

Ownership is a mentality that defies strict hierarchies of control in favor of a more egalitarian opportunism.

Closely related to the idea of ownership is taking responsibility for your mistakes. This means you don’t try to excuse yourself, shift blame, or minimize the issue unnecessarily. If there’s a problem you caused, be straight about it, explain what happened, what you’re going to do to prevent it, and move on.

Together, these ideas on ownership will gain you a reputation as someone who wants the best for the team or product. You want to be that person.

Remember, if you are ever having the thought, Someone ought to…–stop! That someone is you.

Data-Driven

A good developer does not make assumptions. Experience is good, yes, but data is better. Far, far better. Knowing how to measure things is far more important than being able to change them. If you make changes without measuring, then you’re a random-coding monkey, just guessing that you’re doing something useful. Especially when it comes to performance, building a system to automatically measure performance is more important than the performance changes themselves, because without that system you will spend far more time doing manual measurement than actual development. See the section on Automation below.

Measurement can be simple. For some bugs, the measurement is merely, does the bug repro or not? For performance tuning of data center server applications, it will likely be orders of magnitude more complicated and involve systems dedicated to measurement.

Determining the right amount of data to make a decision is not always easy. You do have to balance this with expediency, and you don’t want to hold good ideas hostage to more measurement than necessary. However, there is very little that you should do completely blind, with no data at all. As a developer, your every action should be independently justifiable.

The mantra of performance optimization is Measure, Measure, Measure. This should be the mantra of all software development. Are things improving or not? Faster or not? How much? Are customers happier or not? Can tasks be completed easier? Are we saving more money? Does it use less memory? Is our capacity larger? Is the UI more responsive? How much, exactly?

The degree to which you measure the answer to those questions is in large part dependent on how important it is to your bottom line.

My day job involves working on an application that runs on thousands of servers, powering a large part of Bing. With something like this, even seemingly small decisions can have a drastic effect in the end. If I make something a bit more inefficient, it could translate into us needing to buy more machines. Great, now my little coding change that I didn’t adequately measure is costing the company hundreds of thousands of extra dollars per year. Oops.

Even for smaller applications, this can be a big deal. For example, making a change that causes the UI to be 20% more sluggish in some cases may not be noticed if you don’t have adequate measurement in place, but if it leads to a bad review by someone who noticed it, and there are adequate competitors, that one decision could be a major loss of revenue.

Solid Tests

Notice that I don’t say “tests”, unqualified. Good tests, solid, repeatable tests. Those are the only ones worth having.

If you see a code change that doesn’t have accompanying test changes, don’t be afraid to ask the question, “Where are the tests?” The answer might be that existing tests cover the change, or that tests at a larger scope or in a different change will cover it, but the point is to ask the question and make sure there is a satisfactory answer. “Manual test” is a valid response sometimes, but this should be very rare, and justifiable.

I cannot say how many times I’ve been saved due to the hundreds of unit tests that exercise my code, especially when I’m attempting a big internal refactor, usually for performance reasons.

As important as good tests are, it’s also important to get rid of bad tests. Don’t waste resources on things that aren’t helpful. Insist on a clean, reliable test suite. I’m not sure which is worse: no tests, or tests you can’t rely on. Eventually, unreliable tests become the same as having no tests at all.

Automation

An effective developer is always trying to put themselves out of a job. Seriously. There is more work than you can possibly fit in the time allotted. Automate the heck out of the stuff that annoys you, trips you up, is repetitive, is frequent, is error-prone. Once you can break down a process into something so deterministic that you could write a script for someone else to follow and get the same result, then make sure that someone else is a program.

This is more than just simple maintenance scripts for server management. This is ANY part of your job. Collecting data? Get it automatically ingested into the systems that need it. Generating reports? If you’ve generated the same report more than twice, don’t do it a third time. Your build system requires more than a single step? What’s wrong with you?

You have to free yourself up for more interesting, more creative work. You’re a highly paid programmer. Act like it.

Example: One of my jobs in the last year has been to run regular performance profiling, analyze the results, and send them to my team, making suggestions for future focus. This involved a bunch of steps:

  1. Log onto a random machine in the datacenter.
  2. Start a 120-second CPU profile.
  3. Wait 120 seconds, plus a few minutes for processing, symbol resolution, etc.
  4. Compress the file and copy it to my machine.
  5. Analyze file–group, filter, and sort data according to various rules.
    1. Look for a bunch of standard things that I always report on
  6. Do the same thing for a 900-second memory/exception/thread/etc. profile.

This took about an hour each time, sometimes more.

I realized that every single part of this could happen automatically. I wrote a service that gets deployed to every datacenter machine. A couple of times per day, it checks to see whether we need a profile, whether the machine is in a good state to profile, etc. It then runs the profiler, collects the data, and even analyzes the data automatically (see Chapter 8 of Writing High-Performance .NET Code for a hint about how I did this). This all gets uploaded to a file server and the analysis gets displayed on a website. No intervention whatsoever. Not only do I not have to do this work myself anymore, but others are empowered to look at the data for themselves, and we can easily add more analysis components over time.

Unafraid of Communication

The final thing I want to talk about is communication. This has been a challenge for me. I definitely have the personality type that really likes to disappear into a cave and pound on a keyboard for a few days, to emerge at the end with some magical piece of code. I would delete Outlook from my computer if I could.

This kind of attitude might serve you well for a while, but it’s ultimately limiting.

As you get more senior, communication becomes key. Effective communication skills are one of the things you can use to distinguish yourself to advance your career.

Effective communication can begin with a simple acknowledgement of someone’s issue, or an explanation that you’re working on something, with a follow-up to everyone involved at regular points. Nobody likes to be kept in the dark, especially for burning issues. For time-critical issues, a “next update in XX hours” can be vital.

Effective communication also means being able to say what you’re working on and why it’s cool.

Eventually, it means a lot more–being able to present complicated ideas to many other people in a simple, understandable, logical way.

Good communication skills enable you to be able to move beyond implementing software all by yourself to helping teams as a whole do better software. You can have a much wider impact by helping and teaching others. This is good for your team, your company, and your career.

Do you have a good engineering culture?

I assume one big prerequisite to all of these attributes: You must have a solid engineering environment to operate in. If management gives short shrift to employee happiness, sound software engineering principles, or the workplace is otherwise toxic, then perhaps you need to focus on changing that first.

If your leaders are so short-sighted that they can’t stand the thought of you automating your work instead of just getting the job done, that’s a problem.

If bringing up problems or admitting fault to a mistake is a career-limiting move, then you need to get out soon. That’s a team that will eventually implode under the weight of cumulative failure that no one wants to address.

Don’t settle for this kind of workplace. Either work to change it or find some place better.

Practical uses of WeakReference

In Part 1, I discussed the basics of WeakReference and WeakReference<T>. Part 2 introduced short and long weak references as well as the concept of resurrection. I also covered how to use the debugger to inspect your memory for the presence of weak references. This article will complete this miniseries with a discussion of when to use weak references at all and a small, practical example.

When to Use

Short answer: rarely. Most applications won’t require this.

Long answer: If all of the following criteria are met, then you may want to consider it:

  1. Memory use needs to be tightly restricted – this probably means mobile devices these days. If you’re running on Windows RT or Windows Phone, then your memory is restricted.
  2. Object lifetime is highly variable – if you can predict the lifetime of your objects well, then using WeakReference doesn’t really make sense. In that case, you should just control their lifetime directly.
  3. Objects are relatively large, but easy to create – WeakReference is really ideal for that large object that would be nice to have around, but if not, you could easily regenerate it as needed (or just do without).
  4. The object’s size is significantly more than the overhead of using WeakReference<T> – Using WeakReference<T> adds an additional object, which means more memory pressure and an extra dereference step. It would be a complete waste of time and memory to use WeakReference<T> to store an object that’s barely larger than WeakReference<T> itself. However, there are some caveats to this, below.

There is another scenario in which WeakReference may make sense. I call this the “secondary index” feature. Suppose you have an in-memory cache of objects, all indexed by some key. This could be as simple as Dictionary<string, Person>, for example. This is the primary index, and represents the most common lookup pattern, the master table, if you will.

However, you also want to look up these objects with another key, say a last name. Maybe you want a dozen other indexes. Using standard strong references, you could have additional indexes, such as Dictionary<DateTime, Person> for birthdays, etc. When it comes time to update the cache, you then have to modify all of these indexes to ensure that the Person object gets garbage collected when no longer needed.

This might be a pretty big performance hit to do this every time there is an update. Instead, you could spread that cost around by having all of the secondary indexes use WeakReference instead: Dictionary<DateTime, WeakReference<Person>>, or, if the index has non-unique keys (likely), Dictionary<DateTime, List<WeakReference<Person>>>.

By doing this, the cleanup process becomes much easier: you just update the master cache, which removes the only strong reference to the object. The next time a garbage collection runs (of the appropriate generation), the Person object will be cleaned up. If you ever access a secondary index looking for those objects, you’ll discover the object has been cleaned up, and you can clean up those indexes right then. This spreads out the cost of cleanup of the index overhead, while allowing the expensive cached objects to be cleaned up earlier.
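
To illustrate the lazy cleanup, here is a sketch of a secondary-index lookup (Person and the index shape come from the example above; the method would live on a hypothetical cache class):

public List<Person> FindByBirthday(DateTime birthday)
{
    var results = new List<Person>();
    List<WeakReference<Person>> refs;
    if (this.byBirthday.TryGetValue(birthday, out refs))
    {
        // Walk backwards so dead entries can be removed in place.
        for (int i = refs.Count - 1; i >= 0; i--)
        {
            Person person;
            if (refs[i].TryGetTarget(out person))
            {
                results.Add(person);
            }
            else
            {
                // The object was collected after leaving the master cache;
                // clean up this index entry lazily, right now.
                refs.RemoveAt(i);
            }
        }
    }
    return results;
}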

Other Uses

This Stack Overflow thread has some additional thoughts, with some variations of the example below and other uses.

A rather famous and involved example is using WeakReferences to prevent the dangling event handler problem (where failure to unregister an event handler keeps objects in memory, despite them having no explicit references anywhere in your code).

Practical Example

I had mentioned in Chapter 2 (Garbage Collection) of Writing High-Performance .NET Code that WeakReference could be used in a multilevel caching system to allow objects to gracefully fall out of memory when pressure increases. You can start with strong references and then demote them to weak references according to some criteria you choose.

That is the example I’ll show here. Note that this is not production-quality code. It’s only about 5% of the code you would actually need, even assuming this algorithm makes sense in your scenario. At a minimum, you probably want to implement IDictionary<TKey, TValue> on it, perhaps tighten up some of the temporary memory allocations, and more.

This is a very simple implementation. When you add items to the cache, it adds them as strong references (removing any existing weak references for that key). When you attempt to read a value from the cache, it tries the strong references first, before attempting the weak references.

Objects are demoted from strong to weak references based simply on a maximum age. This is admittedly rather simplistic, but it gets the point across.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;

namespace WeakReferenceCache
{
    sealed class HybridCache<TKey, TValue>
        where TValue : class
    {
        class ValueContainer<T>
        {
            public T value;
            public long additionTime;
            public long demoteTime;
        }

        private readonly TimeSpan maxAgeBeforeDemotion;

        private readonly ConcurrentDictionary<TKey, ValueContainer<TValue>> strongReferences =
            new ConcurrentDictionary<TKey, ValueContainer<TValue>>();

        private readonly ConcurrentDictionary<TKey, WeakReference<ValueContainer<TValue>>> weakReferences =
            new ConcurrentDictionary<TKey, WeakReference<ValueContainer<TValue>>>();

        public int Count { get { return this.strongReferences.Count; } }
        public int WeakCount { get { return this.weakReferences.Count; } }

        public HybridCache(TimeSpan maxAgeBeforeDemotion)
        {
            this.maxAgeBeforeDemotion = maxAgeBeforeDemotion;
        }

        public void Add(TKey key, TValue value)
        {
            RemoveFromWeak(key);
            var container = new ValueContainer<TValue>();
            container.value = value;
            container.additionTime = Stopwatch.GetTimestamp();
            container.demoteTime = 0;
            this.strongReferences.AddOrUpdate(key, container, (k, existingValue) => container);
        }

        private void RemoveFromWeak(TKey key)
        {
            WeakReference<ValueContainer<TValue>> oldValue;
            weakReferences.TryRemove(key, out oldValue);
        }

        public bool TryGetValue(TKey key, out TValue value)
        {
            value = null;
            ValueContainer<TValue> container;
            if (this.strongReferences.TryGetValue(key, out container))
            {
                AttemptDemotion(key, container);
                value = container.value;
                return true;
            }

            WeakReference<ValueContainer<TValue>> weakRef;
            if (this.weakReferences.TryGetValue(key, out weakRef))
            {
                if (weakRef.TryGetTarget(out container))
                {
                    value = container.value;
                    return true;
                }
                else
                {
                    RemoveFromWeak(key);
                }
            }

            return false;
        }

        public void DemoteOldObjects()
        {
            var demotionList = new List<KeyValuePair<TKey, ValueContainer<TValue>>>();
            long now = Stopwatch.GetTimestamp();

            foreach (var kvp in this.strongReferences)
            {
                var age = CalculateTimeSpan(kvp.Value.additionTime, now);
                if (age > this.maxAgeBeforeDemotion)
                {
                    demotionList.Add(kvp);
                }
            }

            foreach (var kvp in demotionList)
            {
                Demote(kvp.Key, kvp.Value);
            }
        }

        private void AttemptDemotion(TKey key, ValueContainer<TValue> container)
        {
            long now = Stopwatch.GetTimestamp();
            var age = CalculateTimeSpan(container.additionTime, now);
            if (age > this.maxAgeBeforeDemotion)
            {
                Demote(key, container);
            }
        }

        private void Demote(TKey key, ValueContainer<TValue> container)
        {
            ValueContainer<TValue> oldContainer;
            this.strongReferences.TryRemove(key, out oldContainer);
            container.demoteTime = Stopwatch.GetTimestamp();
            var weakRef = new WeakReference<ValueContainer<TValue>>(container);
            this.weakReferences.AddOrUpdate(key, weakRef, (k, oldRef) => weakRef);
        }

        private TimeSpan CalculateTimeSpan(long offsetA, long offsetB)
        {
            long diff = offsetB - offsetA;
            double seconds = (double)diff / Stopwatch.Frequency;
            return TimeSpan.FromSeconds(seconds);
        }
    }
}

That’s it for the series on Weak References–I hope you enjoyed it! You may never need them, but when you do, you should understand how they work in detail to make the smartest decisions.

Short vs. Long Weak References and Object Resurrection

Last time, I talked about the basics of using WeakReference, what they meant and how the CLR treats them. Today in part 2, I’ll discuss some important subtleties. Part 3 of this series can be found here.

Short vs. Long Weak References

First, there are two types of weak references in the CLR:

  • Short – Once the object is reclaimed by garbage collection, the reference is set to null. All of the examples in the previous article, with WeakReference and WeakReference<T>, were examples of short weak references.
  • Long – If the object has a finalizer AND the reference is created with the correct options, then the reference will point to the object until the finalizer completes.

Short weak references are fairly easy to understand. Once the garbage collection happens and the object has been collected, the reference gets set to null, the end. A short weak reference can only be in one of two states: alive or collected.

Using long weak references is more complicated because the object can be in one of three states:

  1. Object is still fully alive (has not been promoted or garbage collected).
  2. Object has been promoted and the finalizer has been queued to run, but has not yet run.
  3. The object has been cleaned up fully and collected.

With long weak references, you can retrieve a reference to the object during stages 1 and 2. Stage 1 is the same as with short weak references, but stage 2 is tricky. Now the object is in a possibly undefined state. Garbage collection has started, and as soon as the finalizer thread starts running pending finalizers, the object will be cleaned up. This can happen at any time, so using the object is very tricky. The weak reference to the target object remains non-null until the target object’s finalizer completes.

To create a long weak reference, use this constructor:

WeakReference<MyObject> myRefWeakLong 
    = new WeakReference<MyObject>(new MyObject(), true);

The true argument specifies that you want to track resurrection. That’s a new term and it is the whole point of long weak references.

Aside: Resurrection

First, let me say this up front: Don’t do this. You don’t need it. Don’t try it. You’ll see why. I don’t know if there is a special reason why resurrection is allowed in .NET, or if it’s just a natural consequence of how garbage collection works, but there is no good reason to do something like this.

So here’s what not to do:

class Program
{
    class MyObject
    {
        ~MyObject()
        {
            myObj = this;
        }
    }

    static MyObject myObj = new MyObject();

    static void Main(string[] args)
    {
        myObj = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
}

By setting the myObj reference back to an object, you are resurrecting that object. This is bad for a number of reasons:

  • You can only resurrect an object once. Because the object has already been promoted to gen 1 by the garbage collector, it has a guaranteed limited lifetime.
  • The finalizer will not run again, unless you call GC.ReRegisterForFinalize() on the object.
  • The state of the object can be indeterminate. Objects with native resources will have released those resources and they will need to be reinitialized. It can be tricky picking this apart.
  • Any objects that the resurrected object refers to will also be resurrected. If those objects have finalizers they will also have run, leaving you in a questionable state.

So why is this even possible? Some languages consider this a bug, and you should too. Some people use this technique for object pooling, but this is a particularly complex way of doing it, and there are many better ways. You should probably consider object resurrection a bug as well. If you do happen upon a legitimate use case for this, you should be able to justify it well enough to override all of the objections here.

Weak vs. Strong vs. Finalizer Behavior

There are two dimensions for specifying a WeakReference<T>: the weak reference’s creation parameters and whether the object has a finalizer. The WeakReference’s behavior based on these is described in this table:

                            No finalizer    Has finalizer
trackResurrection = false   short           short
trackResurrection = true    short           long

An interesting case that isn’t explicitly specified in the documentation is when trackResurrection is false, but the object does have a finalizer. When does the WeakReference get set to null? Well, it follows the rules for short weak references and is set to null when the garbage collection happens. Yes, the object does get promoted to gen 1 and the finalizer gets put into the queue. The finalizer can even resurrect the object if it wants, but the point is that the WeakReference isn’t tracking it–because that’s what you said when you created it. WeakReference’s creation parameters do not affect how the garbage collector treats the target object, only what happens to the WeakReference.

You can see this in practice with the following code:

class MyObjectWithFinalizer 
{ 
    ~MyObjectWithFinalizer() 
    { 
        var target = myRefLong.Target as MyObjectWithFinalizer; 
        Console.WriteLine("In finalizer. target == {0}", 
            target == null ? "null" : "non-null"); 
        Console.WriteLine("~MyObjectWithFinalizer"); 
    } 
} 

static WeakReference myRefLong = 
    new WeakReference(new MyObjectWithFinalizer(), true); 

static void Main(string[] args) 
{ 
    GC.Collect(); 
    MyObjectWithFinalizer myObj2 = myRefLong.Target 
          as MyObjectWithFinalizer; 
    
    Console.WriteLine("myObj2 == {0}", 
          myObj2 == null ? "null" : "non-null"); 
    
    GC.Collect(); 
    GC.WaitForPendingFinalizers(); 
    
    myObj2 = myRefLong.Target as MyObjectWithFinalizer; 
    Console.WriteLine("myObj2 == {0}", 
         myObj2 == null ? "null" : "non-null"); 
}

The output is:

myObj2 == non-null 
In finalizer. target == non-null 
~MyObjectWithFinalizer 
myObj2 == null 

Finding Weak References in a Debugger

Windbg can show you where your weak references are, both short and long.

Here is some sample code to show you what’s going on:

using System; 
using System.Diagnostics; 

namespace WeakReferenceTest 
{ 
    class Program 
    { 
        class MyObject 
        { 
            ~MyObject() 
            { 
            } 
        } 

        static void Main(string[] args) 
        { 
            var strongRef = new MyObject(); 
            WeakReference<MyObject> weakRef = 
                new WeakReference<MyObject>(strongRef, trackResurrection: false); 
            strongRef = null; 

            Debugger.Break(); 

            GC.Collect(); 

            MyObject retrievedRef; 

            // Following exists to prevent the weak references themselves 
            // from being collected before the debugger breaks 
            if (weakRef.TryGetTarget(out retrievedRef)) 
            { 
                Console.WriteLine(retrievedRef); 
            } 
        } 
    } 
} 

Compile this program in Release mode.

In Windbg, do the following:

  1. Ctrl+E to open an executable. Browse to the compiled program and open it.
  2. Run command: sxe ld clrjit (this tells the debugger to break when the clrjit.dll file is loaded, which you need before you can execute .loadby)
  3. Run command: g
  4. Run command: .loadby sos clr
  5. Run command: g
  6. The program should now break at the Debugger.Break() method.
  7. Run command: !gchandles

You should see output similar to this:

0:000> !gchandles
  Handle Type          Object     Size     Data Type
011112f4 WeakShort   02d324b4       12          WeakReferenceTest.Program+MyObject
011111d4 Strong      02d31d70       36          System.Security.PermissionSet
011111d8 Strong      02d31238       28          System.SharedStatics
011111dc Strong      02d311c8       84          System.Threading.ThreadAbortException
011111e0 Strong      02d31174       84          System.Threading.ThreadAbortException
011111e4 Strong      02d31120       84          System.ExecutionEngineException
011111e8 Strong      02d310cc       84          System.StackOverflowException
011111ec Strong      02d31078       84          System.OutOfMemoryException
011111f0 Strong      02d31024       84          System.Exception
011111fc Strong      02d3142c      112          System.AppDomain
011113ec Pinned      03d333a8     8176          System.Object[]
011113f0 Pinned      03d32398     4096          System.Object[]
011113f4 Pinned      03d32178      528          System.Object[]
011113f8 Pinned      02d3121c       12          System.Object
011113fc Pinned      03d31020     4424          System.Object[]

Statistics:
      MT    Count    TotalSize Class Name
70e72554        1           12 System.Object
01143814        1           12 WeakReferenceTest.Program+MyObject
70e725a8        1           28 System.SharedStatics
70e72f0c        1           36 System.Security.PermissionSet
70e724d8        1           84 System.ExecutionEngineException
70e72494        1           84 System.StackOverflowException
70e72450        1           84 System.OutOfMemoryException
70e722fc        1           84 System.Exception
70e72624        1          112 System.AppDomain
70e7251c        2          168 System.Threading.ThreadAbortException
70e35738        4        17224 System.Object[]
Total 15 objects

Handles:
    Strong Handles:       9
    Pinned Handles:       5
    Weak Short Handles:   1

The weak short reference is called a “Weak Short Handle” in this output.

Next Time

The first article explained how WeakReference works, and this one explained a few of the subtleties, including some behavior you probably don’t want to use. Next time, I’ll go into why you would want to use WeakReference in the first place, and provide a sample application.

Prefer WeakReference<T> to WeakReference

In Writing High-Performance .NET Code, I mention the WeakReference type briefly in Chapter 2 (Garbage Collection), but I don’t go into it too much. However, for the blog, I want to start a small series of posts going into some more detail about WeakReference, how it works, and when to use it, with some example implementations. In this first post, I’ll just cover what it is, what options there are, and how to use it.

A WeakReference is a reference to an object, but one that still allows the garbage collector to destroy the object and reclaim its memory. This is in contrast to a strong (i.e., normal) reference that does prevent the GC from cleaning up the object.

There are two versions of WeakReference:

  • WeakReference – the original, non-generic version, around since .NET 1.1.
  • WeakReference<T> – the strongly typed version, introduced in .NET 4.5.

First, let’s take a look at WeakReference.

You allocate a weak reference like this:

var weakRef = new WeakReference(myObj);
myObj = null;

myObj is an existing object of your choice. Once you assign it to the weakRef variable, you should set the original strong reference to null (otherwise, what’s the point?). Now, whenever there is a garbage collection, the object weakRef is referring to may be collected. To access this object, you may be tempted to use WeakReference’s IsAlive property, as in this example:

WeakReference ref1 = new WeakReference(new MyObject());
if (ref1.IsAlive)
{
    // wrong!
    DoSomething(ref1.Target as MyObject);
}

IsAlive is a rather silly property. If it returns false, it’s fine–you know the object has been collected. However, if it returns true, then you don’t actually know anything useful because the object could still be garbage collected before you get a chance to make a strong reference to it!

The correct way to use this is to ignore the IsAlive property completely and just attempt to make a strong reference from Target:

MyObject obj = ref1.Target as MyObject;
if (obj != null)
{
    // correct
    DoSomething(obj);
}

Now there is no race. If obj ends up being non-null, then you’ve got a strong reference that is guaranteed to not be garbage collected (until your own strong reference goes out of scope).

Recognizing some of the silliness and umm…weakness of WeakReference, WeakReference<T> was introduced in .NET 4.5 and it formalizes the above procedure by removing both the Target and IsAlive properties from the class and providing you with these two methods:

  • SetTarget – Set a new target object
  • TryGetTarget – Attempt to retrieve the object and assign it to a strong reference

This example demonstrates the usage, which is essentially the same as the correct way to use WeakReference from above:

WeakReference<MyObject> ref2 = new WeakReference<MyObject>(new MyObject());
MyObject obj2;
if (ref2.TryGetTarget(out obj2))
{
    DoSomething(obj2);
}

You could also ask yourself: Why is there a SetTarget method at all? After all, you could just allocate a new WeakReference<T>.

If you are using WeakReference<T> at all, it likely means you are somewhat memory conscious. In this case, allocating new WeakReference<T> objects will contribute extra, unnecessary memory pressure, potentially worsening the overall performance. Why do this, when you can just reuse the WeakReference<T> object for new targets as needed?
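
For instance (a quick sketch; CreateExpensiveObject is a hypothetical factory method):

// Allocate the WeakReference<T> once...
var cachedRef = new WeakReference<MyObject>(CreateExpensiveObject());

// ...and later, when the entry is refreshed, reuse the same wrapper
// instead of allocating a new WeakReference<MyObject>.
cachedRef.SetTarget(CreateExpensiveObject());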

Next time, more details on weak references, particularly the differences between short and long weak references, and taking a peek at them in the debugger. We’ll also cover when you should actually use WeakReference<T> at all.

Part 2, Short vs. Long Weak References and Object Resurrection, is up.

Part 3, Practical Uses, is up.

Using Windbg to answer implementation questions for yourself (Can a Delegate Invocation be Inlined?)

The other day, a colleague of mine asked me: Can a generated delegate be inlined? Or something similar to this. My answer was that the generated code is going to be JITted and optimized like any other code, but later I started thinking…. “Wait a sec, can the actual call to the delegate be inlined?”

I’m going to give you the answer before I even start this article: no.

I cover the rules of method inlining that the JITter uses in my book, Writing High-Performance .NET Code, but I don’t discuss this specific situation. You could logically make the leap, however, that there are two other rules that imply this:

  • Virtual methods will not be inlined
  • Interface calls with multiple concrete implementations in a single call site will not be inlined.

While neither of those rules are delegate-specific, you can infer that a delegate call might have similar constraints. You could ask around on the Internet. Somebody on stackoverflow.com will surely answer you, but I want to show you how to find out the answer to this for yourself, which is an invaluable skill for harder questions, where you might not be able to find out the answer unless you know people on the CLR team (which I do, but I *still* try to find out answers before I bother them).

First, let’s see a test program that will exercise various types of function calls, starting with a simple method call that we would expect to be inlined.

using System;
using System.Runtime.CompilerServices;

namespace DelegateInlining
{
    class Program
    {
        static void Main(string[] args)
        {
            TestNormalFunction();
        }
        
        private static int Add(int x, int y) { return x + y; }

        [MethodImpl(MethodImplOptions.NoInlining)]
        private static void TestNormalFunction()
        {
            int z = Add(1, 2);
            Console.WriteLine(z);
        }
    }
}

The code we’re interested in inlining is the Add method. Don’t confuse that with the NoInlining option on TestNormalFunction, which is there to prevent the test method itself from being inlined. The test method is there to allow breakpoint setting and debugging.

Build this code in Release mode for x86. Then open Windbg.

If you’re not used to using Windbg, I highly encourage you to start. It is far more powerful than Visual Studio’s debugger, especially when it comes to debugging the details of .NET. It is not strictly necessary for this particular exercise, but it is what I recommend.

To get Windbg, install the Windows SDK—there is the option to install only the debugger if you wish.

In Windbg:

  1. Ctrl-E to open an executable program. Navigate to and open the Release build of the above program. It will start executing and immediately break.
  2. Type the command: sxe ld clrjit. What we want to do is set a breakpoint inside TestNormalFunction. To do that, we need to use the SOS debugger extension, which relies on the CLR, which hasn’t been loaded in the process yet. So the first thing to do is set a breakpoint on the load of clrjit.dll, by which time the CLR will be loaded.
  3. Enter the command g for “go” (or hit F5). The program will then break on the load of clrjit.dll.
  4. Enter the command .loadby sos clr – this will load the SOS debugging helper.
  5. Enter the command !bpmd DelegateInlining Program.TestNormalFunction – this will set a managed breakpoint on this method.
  6. Enter the command g to continue execution. Execution will break when it enters TestNormalFunction.
  7. Now you can see the disassembly for this method (menu View | Disassembly).
00b80068 55              push    ebp
00b80069 8bec            mov     ebp,esp
00b8006b e8e8011b70      call    mscorlib_ni+0x340258 (70d30258)
00b80070 8bc8            mov     ecx,eax
00b80072 ba03000000      mov     edx,3
00b80077 8b01            mov     eax,dword ptr [ecx]
00b80079 8b4038          mov     eax,dword ptr [eax+38h]
00b8007c ff5014          call    dword ptr [eax+14h]
00b8007f 5d              pop     ebp
00b80080 c3              ret

There are some calls there, but none of them are to Add—they are all functions inside of mscorlib. The call through the dword ptr is a virtual function call. These are all related to calling Console.WriteLine.

The key is the instruction at address 00b80072, which moves the value 3 directly into register edx. This is the inlined Add call. The compiler inlined not only the function call, but the trivial math as well (an easy optimization the compiler will do for constants).

So far so good. Now let’s look at the same type of thing through a delegate.

delegate int DoOp(int x, int y);

[MethodImpl(MethodImplOptions.NoInlining)]
private static void TestDelegate()
{
    DoOp op = Add;
    int z = op(1, 2);
    Console.WriteLine(z);
}

Change the Main method above to call TestDelegate instead. Follow the same steps given previously for Windbg, but this time set a breakpoint on TestDelegate.

00610077 42              inc     edx
00610078 00e8            add     al,ch
0061007a 8220d0          and     byte ptr [eax],0D0h
0061007d ff8bc88d5104    dec     dword ptr [ebx+4518DC8h]
00610083 e8481b5671      call    clr!JIT_WriteBarrierECX (71b71bd0)
00610088 c7410cc4053304  mov     dword ptr [ecx+0Ch],43305C4h
0061008f b870c04200      mov     eax,42C070h
00610094 894110          mov     dword ptr [ecx+10h],eax
00610097 6a02            push    2
00610099 ba01000000      mov     edx,1
0061009e 8b410c          mov     eax,dword ptr [ecx+0Ch]
006100a1 8b4904          mov     ecx,dword ptr [ecx+4]
006100a4 ffd0            call    eax
006100a6 8bf0            mov     esi,eax
006100a8 e8ab017270      call    mscorlib_ni+0x340258 (70d30258)
006100ad 8bc8            mov     ecx,eax
006100af 8bd6            mov     edx,esi
006100b1 8b01            mov     eax,dword ptr [ecx]
006100b3 8b4038          mov     eax,dword ptr [eax+38h]
006100b6 ff5014          call    dword ptr [eax+14h]
006100b9 5e              pop     esi
006100ba 5d              pop     ebp
006100bb c3              ret

Things got a bit more complicated. As you’ll read in Writing High-Performance .NET Code, assigning a method to a delegate actually results in a memory allocation. That’s fine as long as that operation is cached and reused. What we’re really interested in here starts at address 00610097, where you can see the value 2 being pushed onto the stack. The next line moves the value 1 to the edx register. There are our two function arguments. Finally, at address 006100a4, we’ve got another function call, which is the call to Add, and the key to this whole thing becomes clear. The address of that function had to be retrieved via a pointer, which means it’s essentially like a virtual method call for the purposes of inlining.

You can also do the same exercise with a lambda expression (it will look similar to the delegate disassembly above).

So there’s the simple answer.

There is one more interesting case: a delegate that calls into method A that calls method B. We already know that method A won’t be inlined, but can method B be inlined into method A?

[MethodImpl(MethodImplOptions.NoInlining)]
private static void TestDelegateWithFunctionCall()
{
    DoOp op = (x, y) => Add(x, y);
    int z = op(1, 2);
    Console.WriteLine(z);
} 

You can do the same analysis as above. You will see that the call into the delegate/lambda is not inlined, but inside it there is no further function call: Add was inlined into the lambda. So yes, method B can be inlined into method A.

There you have it. Even though the answer was pretty clear from the start, you at least have the tools to answer it for yourself. Don’t be afraid of the debugger, or of looking at assembly code, even for .NET programs.