Monthly Archives: September 2014

Appearance on .NET Rocks! Podcast

Carl and Richard put together a great podcast. .NET Rocks! has existed for years now and it’s amazing how many episodes they’ve published.

A couple of weeks ago, I had the privilege of recording their latest episode, #1041, with them. We talked about a ton of interesting things: the importance of memory management, precise measurement, using the correct tools, not being afraid of the debugger, a little bit about Microsoft culture, and even LEGO!

Here is their description:

Carl and Richard talk to Ben Watson about his work around writing high performance .NET code. Ben talks about how the Bing team decided to use .NET code internally, which seems like an obvious choice for a Microsoft group, but it isn’t really – when milliseconds count, does .NET make sense? Ben says it does, and he’s done the work to prove it. Ben’s book “Writing High Performance .NET Code” focuses not only on coding techniques, but also the larger practice of having a deep understanding of how .NET works, and the processes that take place to turn .NET code into machine code. The conversation also digs deeply into the need for performance measurement, especially Event Tracing for Windows. .NET can be fast when you do it right!

Give a listen. Subscribe in iTunes or listen on the web. Let me know what you think!

Digging Into .NET Object Allocation Fundamentals

[Note: this article also appeared on CodeProject]

Introduction

While understanding garbage collection fundamentals is vital to working with .NET, it is also important to understand how object allocation works. Doing so shows you just how simple and performant allocation is, especially compared to the potentially blocking nature of native heap allocations. In a large, native, multi-threaded application, heap allocations can be a major performance bottleneck, forcing you to perform all sorts of custom heap management. It’s also harder to measure when this is happening because many of those details are hidden behind the OS’s allocation APIs. More importantly, understanding allocation will give you clues to how you can mess it up and make it far less efficient.

In this article, I want to go through an example taken from Chapter 2 of Writing High-Performance .NET Code and then take it further with some additional examples that weren’t covered in the book.

Viewing Object Allocation in a Debugger

Let’s start with a simple object definition: completely empty.

class MyObject 
{
}

static void Main(string[] args)
{
    var x = new MyObject();
}

In order to examine what happens during allocation, we need to use a “real” debugger, like Windbg. Don’t be afraid of this. If you need a quick primer on how to get started, look at the free sample chapter on this page, which will get you up and running in no time. It’s not nearly as bad as you think.

Build the above program in Release mode for x86 (you can do x64 if you’d like, but the samples below are x86).

In Windbg, follow these steps to start and debug the program:

  1. Ctrl+E to execute a program. Navigate to and open the built executable file.
  2. Run command: sxe ld clrjit (this tells the debugger to break when any module with clrjit in the name loads, which you need before the next steps)
  3. Run command: g (continues execution)
  4. When it breaks, run command: .loadby sos clr (loads .NET debugging tools)
  5. Run command: !bpmd ObjectAllocationFundamentals Program.Main (Sets a breakpoint at the beginning of a method. The first argument is the name of the assembly. The second is the name of the method, including the class it is in.)
  6. Run command: g

Execution will break at the beginning of the Main method, right before new() is called. Open the Disassembly window to see the code.

Here is the Main method’s code, annotated for clarity:

; Copy method table pointer for the class into
; ecx as argument to new()
; You can use !dumpmt to examine this value.
mov ecx,006f3864h
; Call new
call 006e2100 
; Copy return value (address of object) into a register
mov edi,eax

Note that the actual addresses will be different each time you execute the program. Step over (F10, or toolbar) a few times until call 006e2100 (or your equivalent) is highlighted. Then Step Into that (F11). Now you will see the primary allocation mechanism in .NET. It’s extremely simple. Essentially, at the end of the current gen0 segment, there is a reserved bit of space which I will call the allocation buffer. If the allocation we’re attempting can fit in there, we can update a couple of values and return immediately without more complicated work.

If I were to outline this in pseudocode, it would look like this:

if (object fits in current allocation buffer)
{
   Increment a pointer, return address;
}
else
{
   call JIT_New to do more complicated work in CLR
}

The actual assembly looks like this:

; Set eax to value 0x0c, the size of the object to
; allocate, which comes from the method table
006e2100 8b4104          mov     eax,dword ptr [ecx+4] ds:002b:006f3868=0000000c
; Put allocation buffer information into edx
006e2103 648b15300e0000  mov     edx,dword ptr fs:[0E30h]
; edx+40 contains the address of the next available byte
; for allocation. Add that value to the desired size.
006e210a 034240          add     eax,dword ptr [edx+40h]
; Compare the intended allocation against the
; end of the allocation buffer.
006e210d 3b4244          cmp     eax,dword ptr [edx+44h]
; If we spill over the allocation buffer,
; jump to the slow path
006e2110 7709            ja      006e211b
; update the pointer to the next free
; byte (0x0c bytes past old value)
006e2112 894240          mov     dword ptr [edx+40h],eax
; Subtract the object size from the pointer to
; get to the start of the new obj
006e2115 2b4104          sub     eax,dword ptr [ecx+4]
; Put the method table pointer into the
; first 4 bytes of the object.
; eax now points to new object
006e2118 8908            mov     dword ptr [eax],ecx
; Return to caller
006e211a c3              ret
; Slow Path - call into CLR method
006e211b e914145f71      jmp     clr!JIT_New (71cd3534)

In the fast path, there are only 9 instructions, including the return. That’s incredibly efficient, especially compared to something like malloc. Yes, that complexity is traded for time at the end of object lifetime, but so far, this is looking pretty good!

What happens in the slow path? The short answer is a lot. The following could all happen:

  • A free slot somewhere in gen0 needs to be located
  • A gen0 GC is triggered
  • A full GC is triggered
  • A new memory segment needs to be allocated from the operating system and assigned to the GC heap
  • Objects with finalizers need extra bookkeeping
  • Possibly more…

Another thing to notice is the size of the object: 0x0c (12 decimal) bytes. As covered elsewhere, this is the minimum size for an object in a 32-bit process, even if there are no fields.
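You can get a rough feel for this overhead without a debugger, too. Here is a small sketch (my own, not from the book) that uses GC.GetTotalMemory to approximate the per-object cost. The result is approximate and will vary with bitness and runtime, but on x86 it should land near 12 bytes:

using System;

class MyObject { }

class Program
{
    static void Main()
    {
        const int Count = 1000000;
        var objects = new object[Count];

        long before = GC.GetTotalMemory(forceFullCollection: true);
        for (int i = 0; i < Count; i++)
        {
            objects[i] = new MyObject();
        }
        long after = GC.GetTotalMemory(forceFullCollection: true);

        // Expect roughly 12 bytes per object in a 32-bit process.
        Console.WriteLine("Approximate bytes per object: {0}",
                          (after - before) / (double)Count);

        GC.KeepAlive(objects);
    }
}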

Now let’s do the same experiment with an object that has a single int field.

class MyObjectWithInt { int x; }

Follow the same steps as above to get into the allocation code.

The first line of the allocator on my run is:

00882100 8b4104          mov     eax,dword ptr [ecx+4] ds:002b:00893874=0000000c

The only interesting thing is that the size of the object (0x0c) is exactly the same as before. The new int field fit into the minimum size. You can see this by examining the object with the !DumpObject command (or the abbreviated version: !do). To get the address of the object after it has been allocated, step over instructions until you get to the ret instruction. The address of the object is now in the eax register, so open up the Registers view and see the value. On my computer, it has a value of 2372770. Now execute the command: !do 2372770

You should see similar output to this:

0:000> !do 2372770
Name:        ConsoleApplication1.MyObjectWithInt
MethodTable: 00893870
EEClass:     008913dc
Size:        12(0xc) bytes
File:        D:\Ben\My Documents\Visual Studio 2013\Projects\ConsoleApplication1\ConsoleApplication1\bin\Release\ConsoleApplication1.exe
Fields:
      MT    Field   Offset                 Type VT     Attr    Value Name
70f63b04  4000001        4         System.Int32  1 instance        0 x

This is curious. The field is at offset 4 (and an int has a length of 4), so that only accounts for 8 bytes (range 0-7). Offset 0 (i.e., the object’s address) contains the method table pointer, so where are the other 4 bytes? They belong to the sync block, which actually sits at offset -4, just before the object’s address. Those three 4-byte pieces account for the full 12 bytes.
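To summarize the 32-bit layout of MyObjectWithInt (offsets are relative to the object’s address, which is what references point to):

offset -4:  sync block            (4 bytes)
offset  0:  method table pointer  (4 bytes)
offset +4:  int x                 (4 bytes)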

Try it with a long.

class MyObjectWithLong { long x; }

The first line of the allocator is now:

00f22100 8b4104          mov     eax,dword ptr [ecx+4] ds:002b:00f33874=00000010

This shows a size of 0x10 (16 decimal) bytes, which is what we would expect: the 12-byte minimum object size leaves only 4 bytes available for fields, so the 8-byte long requires 4 bytes beyond the minimum. An examination of the allocated object shows an object size of 16 bytes as well.

0:000> !do 2932770
Name:        ConsoleApplication1.MyObjectWithLong
MethodTable: 00f33870
EEClass:     00f313dc
Size:        16(0x10) bytes
File:        D:\Ben\My Documents\Visual Studio 2013\Projects\ConsoleApplication1\ConsoleApplication1\bin\Release\ConsoleApplication1.exe
Fields:
      MT    Field   Offset                 Type VT     Attr    Value Name
70f5b524  4000002        4         System.Int64  1 instance        0 x

If you put an object reference into the test class, you’ll see the same thing as you did with the int.

Finalizers

Now let’s make it more interesting. What happens if the object has a finalizer? You may have heard that objects with finalizers have more overhead during GC. This is true–they will survive longer, require more CPU cycles, and generally cause things to be less efficient. But do finalizers also affect object allocation?

Recall that our Main method above looked like this:

mov ecx,006f3864h
call 006e2100 
mov edi,eax

If the object has a finalizer, however, it looks like this:

mov     ecx,119386Ch
call    clr!JIT_New (71cd3534)
mov     esi,eax

We’ve lost our nifty allocation helper! We now have to call directly into JIT_New. Allocating an object that has a finalizer is a LOT slower than allocating a normal object, because more internal CLR structures need to be modified to track this object’s lifetime. In other words, the cost isn’t just at the end of the object’s lifetime.

How much slower is it? In my own testing, it appears to be about 8-10x worse than the fast path of allocating a normal object. If you allocate a lot of objects, this difference is considerable. For this and other reasons, just don’t add a finalizer unless it really is required.
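If you want to reproduce this measurement yourself, here is a quick-and-dirty benchmark sketch. The class names are mine, and the exact ratio will vary by machine and GC state; note that the finalizable loop also pays extra GC costs along the way, which is part of the point:

using System;
using System.Diagnostics;

class Plain { }
class Finalizable { ~Finalizable() { } }

class Program
{
    static void Main()
    {
        const int Count = 10000000;

        // Warm up both allocation paths so JIT time isn't measured.
        for (int i = 0; i < 1000; i++) { new Plain(); new Finalizable(); }
        GC.Collect();
        GC.WaitForPendingFinalizers();

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Count; i++) { new Plain(); }
        sw.Stop();
        Console.WriteLine("Plain:       {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < Count; i++) { new Finalizable(); }
        sw.Stop();
        Console.WriteLine("Finalizable: {0} ms", sw.ElapsedMilliseconds);
    }
}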

Calling the Constructor

If you are particularly eagle-eyed, you may have noticed that there was no call to a constructor to initialize the object once allocated. The allocator changes some pointers and returns you an object, and there is no further function call on that object. This is because memory for class fields is always pre-initialized to 0 for you, and these objects had no further initialization requirements. Let’s see what happens if we change to the following definition:

class MyObjectWithInt { int x = 13; }

Now the Main function looks like this:

mov     ecx,0A43834h
; Allocate memory
call    00a32100
; Copy object address to esi
mov     esi,eax
; Set object + 4 to value 0x0D (13 decimal)
mov     dword ptr [esi+4],0Dh

The field initialization was inlined into the caller!

Note that this code is exactly equivalent:

class MyObjectWithInt { int x; public MyObjectWithInt() { this.x = 13; } }

But what if we do this?

class MyObjectWithInt 
{ 
    int x; 

    [MethodImpl(MethodImplOptions.NoInlining)]  
    public MyObjectWithInt() 
    { 
        this.x = 13; 
    } 
}

This explicitly disables inlining for the object constructor. There are other ways of preventing inlining, but this is the most direct.

Now we can see the call to the constructor happening after the memory allocation:

mov     ecx,0F43834h
call    00f32100
mov     esi,eax
mov     ecx,esi
call    dword ptr ds:[0F43854h]

Exercise for the Reader

Can you get the allocator shown above to jump to the slow path? How big does the allocation request have to be to trigger this? (Hint: Try allocating arrays of various sizes.) Can you figure this out by examining the registers and other values from the running code?
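If you want a starting point for the experiment, something like this gives you a few allocations to step through (the sizes are arbitrary guesses to bracket the behavior, not the answer):

using System;

class Program
{
    static void Main(string[] args)
    {
        // Step into each allocation in Windbg and note which ones
        // take the fast inline path and which jump into the CLR.
        var tiny   = new byte[16];
        var medium = new byte[16 * 1024];
        var large  = new byte[512 * 1024];

        GC.KeepAlive(tiny);
        GC.KeepAlive(medium);
        GC.KeepAlive(large);
    }
}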

Summary

You can see that in most cases, allocation of objects in .NET is extremely fast and efficient, requiring no calls into the CLR and no complicated algorithms in the simple case. Avoid finalizers unless absolutely needed. Not only are they less efficient during cleanup in a garbage collection, but they are slower to allocate as well.

Play around with the sample code in the debugger to get a feel for this yourself. If you wish to learn more about .NET memory handling, especially garbage collection, take a look at the book Writing High-Performance .NET Code.

iTunes 11.4 not syncing/refreshing podcasts? How I resolved it

In general, I try to avoid Apple products when I can, but I do use and enjoy an iPod Nano for podcasts.

With the recent update to 11.4 I noticed that my podcasts were not refreshing, either on a schedule or on-demand. I tried restarting iTunes, unplugging the iPod, restarting the computer – nothing. Until I looked in the settings.

Look at this setting:

(Windows version: Edit | Preferences | Store)

[Screenshot of the iTunes Store preferences dialog]

Uncheck the highlighted setting. I’m not sure how this feature is supposed to work, but once disabled, podcasts started refreshing correctly again.

5 More Attributes of Highly Effective Programmers

Nearly 7 years ago, I wrote a little article called The Top 5 Attributes of Highly Effective Programmers that got some good feedback and has proven popular over time.

One matures as a developer, of course. I wrote that article much closer to the beginning of my career. Over the last few years, especially at Microsoft, I’ve had the opportunity to witness a much wider range of behaviors. I’ve been able to develop a much better sense of what differentiates the novice from the truly effective developer.

The difference in skills can be truly staggering if you’re not used to seeing it. A new programmer, or one who has not learned much from experience, can often be an order of magnitude less productive, or worse, than a good, experienced developer. You don’t want to spend very long at the bottom of this kind of ranking. Some of this is just experience, but in many cases it’s just a mindset–there are plenty of “experienced” developers who haven’t actually learned to improve. It’s true in many professions, but especially so in programming–you can’t plateau. You have to keep learning. The world changes, programming changes, and what was true 10 years ago is laughably outdated.

The attributes I listed in the previous article are still applicable. They are still valuable, but there is more. Note that I am not claiming in this article that I’ve mastered these. I still aspire to meet higher standards in each of these areas. Remember that it is not hypocrisy to espouse good ideas, even while struggling to live up to them. These are standards to live up to, not descriptions of any one person I know (though I do know plenty of people who are solid in at least one of these areas).

Sense of Ownership

Ownership means a lot of things, but mainly that you don’t wait for problems to find you. It means that if you see a problem, you assume it’s your job to either fix it or find someone else who can, and then to make sure it happens. It means not ignoring emails because, hey, not my problem! It means taking issues seriously and making sure they are dealt with. Someone with a sense of ownership would never sweep a problem under the rug or blithely hope that someone else will deal with it.

You could equate ownership with responsibility, but I think it goes beyond that. “Responsibility” often takes on the hue of a burden or delegation of an unwelcome task, while “ownership” implies that you are invested in the outcome.

Ownership often means stepping outside of your comfort zone. You may think you’re not the best person to deal with something, but if no one else is doing it, then you absolutely are. Just step up, own the problem, and get it done.

Ownership does not mean that you do all the work–that would be draining, debilitating, and ultimately impossible. It does not mean that you specify bounds for your responsibility and forbid others to encroach. It especially does not mean code ownership in the sense that only you are allowed to change your code.

Ownership is a mentality that defies strict hierarchies of control in favor of a more egalitarian opportunism.

Closely related to the idea of ownership is taking responsibility for your mistakes. This means you don’t try to excuse yourself, shift blame, or minimize the issue unnecessarily. If there’s a problem you caused, be straight about it, explain what happened, what you’re going to do to prevent it, and move on.

Together, these ideas on ownership will gain you a reputation as someone who wants the best for the team or product. You want to be that person.

Remember, if you are ever having the thought, Someone ought to…–stop! That someone is you.

Data-Driven

A good developer does not make assumptions. Experience is good, yes, but data is better. Far, far better. Knowing how to measure things is far more important than being able to change them. If you make changes without measuring, then you’re just a random-coding monkey, guessing that you’re doing something useful. Especially when it comes to performance, building a system that automatically measures performance is more important than the performance changes themselves, because without that system you will spend far more time doing manual measurement than actual development. See the section on Automation below.

Measurement can be simple. For some bugs, the measurement is merely, does the bug repro or not? For performance tuning of data center server applications, it will likely be orders of magnitude more complicated and involve systems dedicated to measurement.

Determining the right amount of data to make a decision is not always easy. You do have to balance this with expediency, and you don’t want to hold good ideas hostage to more measurement than necessary. However, there is very little you should do completely blind, with no data at all. As a developer, your every action should be independently justifiable.

The mantra of performance optimization is Measure, Measure, Measure. This should be the mantra of all software development. Are things improving or not? Faster or not? How much? Are customers happier or not? Can tasks be completed more easily? Are we saving more money? Does it use less memory? Is our capacity larger? Is the UI more responsive? How much, exactly?

The degree to which you measure the answer to those questions is in large part dependent on how important it is to your bottom line.

My day job involves working on an application that runs on thousands of servers, powering a large part of Bing. With something like this, even seemingly small decisions can have a drastic effect in the end. If I make something a bit more inefficient, it could translate into us needing to buy more machines. Great, now my little coding change that I didn’t adequately measure is costing the company hundreds of thousands of extra dollars per year. Oops.

Even for smaller applications, this can be a big deal. For example, a change that makes the UI 20% more sluggish in some cases may go unnoticed if you don’t have adequate measurement in place, but if it leads to a bad review by someone who did notice, and there are capable competitors, that one decision could mean a major loss of revenue.

Solid Tests

Notice that I don’t say “tests”, unqualified. Good tests, solid, repeatable tests. Those are the only ones worth having.

If you see a code change that doesn’t have accompanying test changes, don’t be afraid to ask the question, “Where are the tests?” The answer might be that existing tests cover the change, or that tests at a larger scope or in a different change will cover it, but the point is to ask the question and make sure there is a satisfactory answer. “Manual test” is a valid response sometimes, but this should be very rare, and justifiable.

I cannot say how many times I’ve been saved due to the hundreds of unit tests that exercise my code, especially when I’m attempting a big internal refactor, usually for performance reasons.

As important as good tests are, it’s also important to get rid of bad tests. Don’t waste resources on things that aren’t helpful. Insist on a clean, reliable test suite. I’m not sure which is worse: no tests, or tests you can’t rely on. Eventually, unreliable tests become the same as having no tests at all.

Automation

An effective developer is always trying to put themselves out of a job. Seriously. There is more work than you can possibly fit in the time allotted. Automate the heck out of the stuff that annoys you, trips you up, is repetitive, is frequent, is error-prone. Once you can break down a process into something so deterministic that you could write a script for someone else to follow and get the same result, then make sure that someone else is a program.

This is more than just simple maintenance scripts for server management. This is ANY part of your job. Collecting data? Get it automatically ingested into the systems that need it. Generating reports? If you’ve generated the same report more than twice, don’t do it a third time. Your build system requires more than a single step? What’s wrong with you?

You have to free yourself up for more interesting, more creative work. You’re a highly paid programmer. Act like it.

Example: One of my jobs in the last year has been to run regular performance profiling, analyze the results, and send them to my team, making suggestions for future focus. This involved a bunch of steps:

  1. Log onto a random machine in the datacenter.
  2. Start a 120-second CPU profile.
  3. Wait 120 seconds, plus a few minutes for processing, symbol resolution, etc.
  4. Compress the file and copy it to my machine.
  5. Analyze the file: group, filter, and sort the data according to various rules.
    1. Look for a bunch of standard things that I always report on.
  6. Do the same thing for a 900-second memory/exception/thread/etc. profile.

This took about an hour each time, sometimes more.

I realized that every single part of this could happen automatically. I wrote a service that gets deployed to every datacenter machine. A couple of times per day, it checks to see whether we need a profile, whether the machine is in a good state to profile, etc. It then runs the profiler, collects the data, and even analyzes the data automatically (see Chapter 8 of Writing High-Performance .NET Code for a hint about how I did this). This all gets uploaded to a file server and the analysis gets displayed on a website. No intervention whatsoever. Not only do I not have to do this work myself anymore, but others are empowered to look at the data for themselves, and we can easily add more analysis components over time.

Unafraid of Communication

The final thing I want to talk about is communication. This has been a challenge for me. I definitely have the personality type that really likes to disappear into a cave and pound on a keyboard for a few days, to emerge at the end with some magical piece of code. I would delete Outlook from my computer if I could.

This kind of attitude might serve you well for a while, but it’s ultimately limiting.

As you get more senior, communication becomes key. Effective communication skills are one of the things you can use to distinguish yourself to advance your career.

Effective communication can begin with a simple acknowledgement of someone’s issue, or an explanation that you’re working on something, with a follow-up to everyone involved at regular points. Nobody likes to be kept in the dark, especially for burning issues. For time-critical issues, a “next update in XX hours” can be vital.

Effective communication also means being able to say what you’re working on and why it’s cool.

Eventually, it means a lot more–being able to present complicated ideas to many other people in a simple, understandable, logical way.

Good communication skills enable you to be able to move beyond implementing software all by yourself to helping teams as a whole do better software. You can have a much wider impact by helping and teaching others. This is good for your team, your company, and your career.

Do you have a good engineering culture?

I assume one big prerequisite to all of these attributes: you must have a solid engineering environment to operate in. If management gives short shrift to employee happiness or sound software engineering principles, or the workplace is otherwise toxic, then perhaps you need to focus on changing that first.

If your leaders are so short-sighted that they can’t stand the thought of you automating your work instead of just getting the job done, that’s a problem.

If bringing up problems or admitting to a mistake is a career-limiting move, then you need to get out soon. That’s a team that will eventually implode under the weight of cumulative failure that no one wants to address.

Don’t settle for this kind of workplace. Either work to change it or find some place better.


Practical uses of WeakReference

In Part 1, I discussed the basics of WeakReference and WeakReference<T>. Part 2 introduced short and long weak references as well as the concept of resurrection. I also covered how to use the debugger to inspect your memory for the presence of weak references. This article will complete this miniseries with a discussion of when to use weak references at all and a small, practical example.

When to Use

Short answer: rarely. Most applications won’t require this.

Long answer: If all of the following criteria are met, then you may want to consider it:

  1. Memory use needs to be tightly restricted – this probably means mobile devices these days. If you’re running on Windows RT or Windows Phone, then your memory is restricted.
  2. Object lifetime is highly variable – if you can predict the lifetime of your objects well, then using WeakReference doesn’t really make sense. In that case, you should just control their lifetime directly.
  3. Objects are relatively large, but easy to create – WeakReference is ideal for a large object that is nice to have around, but that you could easily regenerate as needed (or just do without).
  4. The object’s size is significantly more than the overhead of using WeakReference<T> – Using WeakReference<T> adds an additional object, which means more memory pressure and an extra dereference step. It would be a complete waste of time and memory to use WeakReference<T> to store an object that’s barely larger than WeakReference<T> itself. However, there are some caveats to this, below.

There is another scenario in which WeakReference may make sense. I call this the “secondary index” feature. Suppose you have an in-memory cache of objects, all indexed by some key. This could be as simple as Dictionary<string, Person>, for example. This is the primary index, and represents the most common lookup pattern, the master table, if you will.

However, you also want to look up these objects with another key, say a last name. Maybe you want a dozen other indexes. Using standard strong references, you could have additional indexes, such as Dictionary<DateTime, Person> for birthdays, etc. When it comes time to update the cache, you then have to modify all of these indexes to ensure that the Person object gets garbage collected when no longer needed.

Updating every index on every cache change might be a pretty big performance hit. Instead, you could spread that cost around by having all of the secondary indexes use WeakReference instead: Dictionary<DateTime, WeakReference<Person>>, or, if the index has non-unique keys (likely), Dictionary<DateTime, List<WeakReference<Person>>>.

By doing this, the cleanup process becomes much easier: you just update the master cache, which removes the only strong reference to the object. The next time a garbage collection runs (of the appropriate generation), the Person object will be cleaned up. If you ever access a secondary index looking for those objects, you’ll discover the object has been cleaned up, and you can clean up those indexes right then. This spreads out the cost of cleanup of the index overhead, while allowing the expensive cached objects to be cleaned up earlier.
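Here is a minimal sketch of that secondary-index idea (the Person fields and method names are hypothetical, and this version is not thread-safe):

using System;
using System.Collections.Generic;

class Person
{
    public string Id;
    public DateTime Birthday;
}

class PersonCache
{
    // The primary index holds the only strong references.
    private readonly Dictionary<string, Person> primary =
        new Dictionary<string, Person>();

    // The secondary index holds weak references and may contain dead entries.
    private readonly Dictionary<DateTime, List<WeakReference<Person>>> byBirthday =
        new Dictionary<DateTime, List<WeakReference<Person>>>();

    public void Add(Person person)
    {
        primary[person.Id] = person;

        List<WeakReference<Person>> list;
        if (!byBirthday.TryGetValue(person.Birthday, out list))
        {
            list = new List<WeakReference<Person>>();
            byBirthday[person.Birthday] = list;
        }
        list.Add(new WeakReference<Person>(person));
    }

    public void Remove(string id)
    {
        // Only the primary index needs updating. Secondary entries die
        // with the object and get pruned on the next lookup.
        primary.Remove(id);
    }

    public List<Person> FindByBirthday(DateTime birthday)
    {
        var results = new List<Person>();
        List<WeakReference<Person>> list;
        if (byBirthday.TryGetValue(birthday, out list))
        {
            // Collect live targets and prune collected ones in one pass.
            list.RemoveAll(wr =>
            {
                Person person;
                if (wr.TryGetTarget(out person))
                {
                    results.Add(person);
                    return false;
                }
                return true;
            });
        }
        return results;
    }
}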

Other Uses

This Stack Overflow thread has some additional thoughts, with some variations of the example below and other uses.

A rather famous and involved example is using WeakReference to prevent the dangling event handler problem, where failure to unregister an event handler keeps objects in memory even though your code holds no other useful references to them.
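Just to give the flavor of it, here is a highly simplified sketch of the core idea (all names here are mine, not the linked implementation): wrap the real handler in a delegate that holds the subscriber only weakly and unhooks itself once the subscriber is gone.

using System;

static class WeakEvent
{
    // Returns a handler that references 'subscriber' only weakly.
    // Once the subscriber is collected, the wrapper unsubscribes itself.
    public static EventHandler MakeWeak<TSubscriber>(
        TSubscriber subscriber,
        Action<TSubscriber, object, EventArgs> onEvent,
        Action<EventHandler> unsubscribe)
        where TSubscriber : class
    {
        var weakSubscriber = new WeakReference<TSubscriber>(subscriber);
        EventHandler handler = null;
        handler = (sender, args) =>
        {
            TSubscriber target;
            if (weakSubscriber.TryGetTarget(out target))
            {
                // onEvent must not capture the subscriber directly,
                // or the weak reference is defeated.
                onEvent(target, sender, args);
            }
            else
            {
                unsubscribe(handler);
            }
        };
        return handler;
    }
}

Usage would look something like publisher.SomeEvent += WeakEvent.MakeWeak(this, (self, s, e) => self.OnSomething(s, e), h => publisher.SomeEvent -= h); — the real patterns handle more edge cases, but the weak-subscriber core is the same.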

Practical Example

I had mentioned in Chapter 2 (Garbage Collection) of Writing High-Performance .NET Code that WeakReference could be used in a multilevel caching system to allow objects to gracefully fall out of memory when pressure increases. You can start with strong references and then demote them to weak references according to some criteria you choose.

That is the example I’ll show here. Note that this is not production-quality code. It’s only about 5% of the code you would actually need, even assuming this algorithm makes sense in your scenario. At a minimum, you would probably want to implement IDictionary<TKey, TValue> on it, tighten up some of the temporary memory allocations, and more.

This is a very simple implementation. When you add items to the cache, it adds them as strong references (removing any existing weak references for that key). When you attempt to read a value from the cache, it tries the strong references first, before attempting the weak references.

Objects are demoted from strong to weak references based simply on a maximum age. This is admittedly rather simplistic, but it gets the point across.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;

namespace WeakReferenceCache
{
    sealed class HybridCache<TKey, TValue>
        where TValue : class
    {
        // Wraps each value with the timestamps needed for demotion decisions.
        class ValueContainer<T>
        {
            public T value;
            public long additionTime;
            public long demoteTime;
        }

        private readonly TimeSpan maxAgeBeforeDemotion;

        private readonly ConcurrentDictionary<TKey, ValueContainer<TValue>> strongReferences =
            new ConcurrentDictionary<TKey, ValueContainer<TValue>>();

        private readonly ConcurrentDictionary<TKey, WeakReference<ValueContainer<TValue>>> weakReferences =
            new ConcurrentDictionary<TKey, WeakReference<ValueContainer<TValue>>>();

        public int Count { get { return this.strongReferences.Count; } }

        public int WeakCount { get { return this.weakReferences.Count; } }

        public HybridCache(TimeSpan maxAgeBeforeDemotion)
        {
            this.maxAgeBeforeDemotion = maxAgeBeforeDemotion;
        }

        public void Add(TKey key, TValue value)
        {
            // A fresh add always starts as a strong reference.
            RemoveFromWeak(key);
            var container = new ValueContainer<TValue>();
            container.value = value;
            container.additionTime = Stopwatch.GetTimestamp();
            container.demoteTime = 0;
            this.strongReferences.AddOrUpdate(key, container, (k, existingValue) => container);
        }

        private void RemoveFromWeak(TKey key)
        {
            WeakReference<ValueContainer<TValue>> oldValue;
            weakReferences.TryRemove(key, out oldValue);
        }

        public bool TryGetValue(TKey key, out TValue value)
        {
            value = null;

            // Try the strong references first...
            ValueContainer<TValue> container;
            if (this.strongReferences.TryGetValue(key, out container))
            {
                AttemptDemotion(key, container);
                value = container.value;
                return true;
            }

            // ...then fall back to the weak references.
            WeakReference<ValueContainer<TValue>> weakRef;
            if (this.weakReferences.TryGetValue(key, out weakRef))
            {
                if (weakRef.TryGetTarget(out container))
                {
                    value = container.value;
                    return true;
                }
                else
                {
                    // The target was garbage collected; clean up the entry.
                    RemoveFromWeak(key);
                }
            }

            return false;
        }

        public void DemoteOldObjects()
        {
            var demotionList = new List<KeyValuePair<TKey, ValueContainer<TValue>>>();
            long now = Stopwatch.GetTimestamp();

            foreach (var kvp in this.strongReferences)
            {
                var age = CalculateTimeSpan(kvp.Value.additionTime, now);
                if (age > this.maxAgeBeforeDemotion)
                {
                    demotionList.Add(kvp);
                }
            }

            foreach (var kvp in demotionList)
            {
                Demote(kvp.Key, kvp.Value);
            }
        }

        private void AttemptDemotion(TKey key, ValueContainer<TValue> container)
        {
            long now = Stopwatch.GetTimestamp();
            var age = CalculateTimeSpan(container.additionTime, now);
            if (age > this.maxAgeBeforeDemotion)
            {
                Demote(key, container);
            }
        }

        private void Demote(TKey key, ValueContainer<TValue> container)
        {
            ValueContainer<TValue> oldContainer;
            this.strongReferences.TryRemove(key, out oldContainer);
            container.demoteTime = Stopwatch.GetTimestamp();
            var weakRef = new WeakReference<ValueContainer<TValue>>(container);
            this.weakReferences.AddOrUpdate(key, weakRef, (k, oldRef) => weakRef);
        }

        private TimeSpan CalculateTimeSpan(long offsetA, long offsetB)
        {
            long diff = offsetB - offsetA;
            double seconds = (double)diff / Stopwatch.Frequency;
            return TimeSpan.FromSeconds(seconds);
        }
    }
}
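To make the intended usage concrete, here is a hypothetical sketch (the key name and LoadImageBytes helper are made up for illustration):

var cache = new HybridCache<string, byte[]>(TimeSpan.FromMinutes(5));
cache.Add("image:logo", LoadImageBytes());

byte[] data;
if (!cache.TryGetValue("image:logo", out data))
{
    // Either demoted and collected, or never added: regenerate.
    data = LoadImageBytes();
    cache.Add("image:logo", data);
}

// Call periodically (e.g., from a timer) to demote aged entries
// from strong references to weak references.
cache.DemoteOldObjects();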

That’s it for the series on Weak References–I hope you enjoyed it! You may never need them, but when you do, you should understand how they work in detail to make the smartest decisions.