Author Archives: Ben

The Best Thing That Ever Happened To Me Was That John Williams Did Not Return My Letter

I have been a fan of the film composer John Williams since I was young. The first CD I ever bought was the soundtrack to Jurassic Park. I still have it. My collection now looks like this:

Photo of John Williams CD collection
My John Williams CD collection

And it’s still growing because I’m a sucker for all the reissues, expanded editions, remasterings, etc. that are being released these days.

On June 23, 2022 I was able to attend John Williams: The 90th Birthday Gala Concert at the Kennedy Center for the Performing Arts in Washington, D.C. Attending this concert, as well as the donor dinner afterwards, was my birthday present to myself. The National Symphony Orchestra, with Stéphane Denève conducting, presented an inspiring program of some of Williams’ best works, with guest stars Steven Spielberg, Yo-Yo Ma, Anne-Sophie Mutter, Daisy Ridley, and Jackie Joyner-Kersee.

The program was wonderful, and the full concert lineup is available to see, so I won’t go through the whole thing in this post. If you’re interested, here are a few resources to browse:

Some of the highlights for me included:

  • The U.S. Army Herald Trumpets during Olympic Fanfare and Theme – Just amazing power. You’ve never heard the piece like this before.
  • Screening of Kobe Bryant’s Dear Basketball, with Williams’ score performed live.
  • The Duel from Tintin, performed hilariously and heroically by Anne-Sophie Mutter and Yo-Yo Ma.
  • Selection from The Rise of Skywalker – I believe this might be the first time this has been performed in concert.
  • Superman March – One of my favorites, which I had never actually heard in concert (though they did a slightly abbreviated version as accompaniment to a photo montage of John Williams through the years).

Halfway through the concert, it was revealed that John Williams was seated in the concert hall (just a few rows behind me, in seat J-1 naturally). From then on, he received applause after every piece, as well as when he exited and entered the auditorium.

As usual, the concert was too short by half.

After the concert, those of us who were donors to the gala’s beneficiary, the NSO’s Music Education Endowment, headed to the Skylight Pavilion for dinner. I was at a table with seven other wonderful people: a family of five from Florida with twin sons studying film music, and a couple from Connecticut. We made small talk, enjoyed our dinner, and discussed what the music of John Williams meant to us.

Daisy Ridley and me

A few of the night’s VIPs attended the dinner as well. After eating, I was able to briefly talk to Daisy Ridley and grab a quick photo with her. At the very end of the night I chatted with Anne-Sophie Mutter about her wonderful violin playing both that night and on her recent albums with John Williams. I told her what an inspiration she was to my daughter, who has just started learning viola. And finally, I exchanged a few words with Stéphane Denève, the night’s conductor.

The one person I was not able to meet was John Williams. Kennedy Center security was extremely tight around him and did not let anyone but close friends near him. They told one of my table-friends that if you managed to get through them, John would be extremely kind and would talk to you for as long as you wanted. But their job was their job, and they were serious about it. I was able to tell John thank you as he walked right by me on his way out. I don’t envy his position: he is one of the most famous musicians in the world, and everybody there would have given their right arm to spend a few minutes with him. He is also 90 years old and deserves to have as much time to himself as he wants.

But yet. There was frustration there, maybe a little sadness. I couldn’t meet him. I couldn’t talk to him. I couldn’t express how much his music means to me. I know this instinct to do so comes from a selfish place. He certainly doesn’t need to hear all of that from me. He’s heard it all. He knows it. Despite his well-known humility and graciousness, he knows his music means a lot to millions of people. No, it’s me who needs to tell him. It’s for me, just like this blog entry is for me.

We need our heroes to know from our lips that they are our heroes. We need to cement their feats in our mind like trees planted for posterity. “I’ve planted this thought between us. It exists. It’s real.”


In September 2017, John Williams came to Seattle and performed with the Seattle Symphony. I attended with my young daughter and we were fairly close to the front, with a wonderful view. This concert remains to this day one of the most significant events in my life. The joy I felt at hearing him conduct his music live was full.

Paradoxically, however, I felt an overwhelming sense of sadness and regret at the way one facet of my life had gone.

To be clear, I have an exquisitely blessed life. I have a wonderful family with three children. My career is successful, and we are more than comfortable. But at the concert that night I pondered the time at the beginning of my university career when I consciously chose to study computer science instead of music. After university came life: work, family, and other interests took over. My musical focus faltered.

In the second half of that 2017 concert, that sadness and regret changed to firm resolution—I was going to make a change. Interest in my other hobbies evaporated. My life suddenly felt too cluttered. I was impatient with who I was as a human being, let alone as a lapsed musician.

A few months after the concert, I sent John Williams a letter expressing some of these thoughts and thanking him for his life of dedication to so finely honed a craft.

I have since restarted my music education. My wife and I bought a real acoustic grand piano for the first time in our lives (she also plays piano, wonderfully), and I’ve been playing it, diligently practicing on most days. I have been reading music theory and doing exercises. I have written a few compositions that I am willing to share publicly (and more that I’m not!). I decided I wanted to go back to my roots, and finally become a composer. I’d build up to it, but of course I wanted to write grand, sweeping scores for full orchestra–just like my musical inspiration, John Williams.


Years later, a few weeks before the 2022 gala concert, the letter was returned unopened with “Refused” stamped on it by the USPS. To be honest, I half-expected as much. He receives far too much mail from too many people, and I’m sure he doesn’t want to spend the rest of his years on this planet digging himself out of that mountain when he could be spending the time with his family and friends. I thank the anonymous personal assistant, employee, friend, or family member who sent the letter back rather than destroy it. I was able to get my Jurassic Park album cover back!

Between the returned letter and the inability to meet him at the gala dinner, something again broke in me. I realized that I had been seeking something that was never going to be true. I think subconsciously I wanted to be like him, to have his success, to have created all of that music. That attitude, subconscious though it was, was hindering me. I could never be like him. I can only be me. My past choices aren’t to be regretted, but built upon and mined. My path is unique and my own, and it’s the only one I can walk down. I don’t have the connections or formal education that a music degree would bring. I can’t spend 8 hours a day studying or writing music. I’m starting fresh and late and all alone and that’s OK.

These twin concert experiences frame a confusing jumble of years for me, in which I knew what I wanted to move toward, but not the right way to get there. It caused me to lurch from stolid determination in one moment to intolerable angst in the next. I didn’t know who I was anymore, and I suffered for it.

For the universe (and Mr. Williams) to ignore me was the necessary correction.

I still don’t know where I’ll go with music, but I’m a lot more at peace about it now than I have been in the last few years. I’m spending more time just trying to be a good person. I try not to get anxious about not spending time on music. I try to spend quality time with my kids. I meditate. I exercise. I try to cut out the things in my life that don’t bring fulfillment. And yes, I work on music. Slowly, deliberately, when I can.


Thank you, Mr. Williams, for all of your music, for being the soundtrack of my life. Thank you for the hours, weeks, years, and decades of intense, sacrificial, hard work that I know it takes to produce quality art. Thank you for a lifetime of providing music that creates so much happiness and inspiration.


I still hope in the depths of my heart to someday meet him or hear from him–how could I not?! But it’s not likely, and I’m ok with that. We each must forge our own path, and I needed to learn that in a very specific way.

Announcing: Microsoft.IO.RecyclableMemoryStream 1.3.0

It’s a long time coming, but I’ve finally released the next version of Microsoft.IO.RecyclableMemoryStream!

Links

What’s New

Bug Fixes

  • Removed a buggy and unnecessary boundary check in Write methods. (927e173, benmwatson)

Performance

  • Removed LINQ iteration in some properties (ffe469c, ndrwrbgs)
  • Overloads of Read, SafeRead, and Write that accept Span<byte> arguments (.NET Core and .NET Standard targets only) (86fc159, brantburnett)

Functionality

  • New buffer allocation strategy: exponential. Instead of linearly increasing large buffer sizes by a fixed multiple (say, 1MB), you can choose to have it increase exponentially in size, starting with smaller large buffers. This will allow you more efficient use of space, especially in smaller heap scenarios where you don’t have gobs of memory to keep in a pool. We use this in Bing in some data centers that are more resource constrained than others. (4755383, magicpanda0618)
  • New targets for .NET Framework 4.6, .NET Standard 2.1 (4e97a7c, brantburnett, 927e173, benmwatson)
  • Overload for TryGetBuffer, introduced in .NET Framework 4.6. (927e173, benmwatson)
  • Allow the Stream’s GUID to be set explicitly (c21b25b, mosdav)
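The exponential large-buffer strategy described above can be sketched in a few lines. This is purely an illustration of the sizing math, not the library’s API; the base size, multiple, and method names here are hypothetical:

```csharp
using System;

// Illustration only (not RecyclableMemoryStream's API): compare how many
// bytes a pool must hand out to satisfy a request under the two
// large-buffer growth strategies described above.
static class BufferGrowth
{
    // Linear: buffers come in fixed multiples (say, of 1 MB): 1, 2, 3, ... MB.
    public static long LinearBufferSize(long requestedBytes)
    {
        const long multiple = 1024 * 1024; // hypothetical 1 MB multiple
        long buffers = (requestedBytes + multiple - 1) / multiple;
        return buffers * multiple;
    }

    // Exponential: buffer sizes double from a small base (say, 256 KB),
    // so smaller "large" requests waste far less pooled memory.
    public static long ExponentialBufferSize(long requestedBytes)
    {
        long size = 256 * 1024; // hypothetical 256 KB base
        while (size < requestedBytes)
        {
            size *= 2;
        }
        return size;
    }

    public static void Main()
    {
        long request = 300 * 1024; // a 300 KB request
        Console.WriteLine($"Linear:      {LinearBufferSize(request)} bytes");
        Console.WriteLine($"Exponential: {ExponentialBufferSize(request)} bytes");
    }
}
```

For a 300 KB request, the linear strategy reserves a full 1 MB buffer, while the exponential strategy reserves only 512 KB, which is why the latter is attractive in memory-constrained environments.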

Meta

  • Removed CBT build support files. Using dotnet.exe to build now.
  • Added public key for delayed signing during build
  • Consolidated and updated all NuGet package settings in the .csproj file.
  • Added setting for generating a NuGet package for symbols (.snupkg)

How to Write and Publish a Technical Book and Not Lose Your Sanity

Earlier this year, I published my third book, Writing High-Performance .NET Code (2nd Edition), my second that was fully self-published. In this article I want to detail a topic that I found incredibly under-documented as I went through the process of writing this revision: the technical process of managing a book’s manuscript.

This won’t cover how to actually write content, or how to work with Amazon, CreateSpace, Smashwords, or the other publishers out there. It’s just about how you go from manuscript to PDF (or EPUB) in a way that is infinitely and quickly repeatable.

This can be more complicated than you think. To illustrate this, let me explain my (very broken) process for the first edition of this book.

First Edition

The first edition of Writing High-Performance .NET Code was written entirely in Microsoft Word. While easy to get started because of its familiarity (as well as the great reviewing tools when it came time for proofreading from my editor), it ended up being a curse in the long run.

  • Reorganization of content was awful. Trying to move entire chapters, or even just sections, can be very difficult once you have a lot of layout already done. I often had to rework page headings, page numbering, and various style issues after the move.
  • Adding formatting is very tedious. Since this is a programming book that will actually be printed, there are a huge number of stylistic factors that need to be considered. There are different fonts for code blocks, inline code, and tons of other features. Code samples need to be shaded and boxed. There needs to be correct spacing in lists, around lists, after paragraphs, before and after headings, and tons more. A point-and-click/WYSIWYG interface actually becomes a severe bottleneck in this case. Then, as mentioned in the first bullet–once you reorganize something, there’s a good chance you’ll need to double-check a lot of formatting. No matter how nit-picky you’re being, you need to be even more so. It’s excruciating doing this in Word.
  • Control over PDF creation is very limited. Word only gives you a few knobs. I didn’t feel like springing for the full version of Acrobat.
  • Exporting to EPUB/HTML creates extremely messy HTML. Hundreds of styles, very bloated HTML. The HTML actually needed to be hand-edited in many cases. And in the end, I actually needed three versions of the EPUB file, so making post-publishing edits became triply awful.
  • Every time I needed to make a correction, the editing process was tedious beyond belief. Re-exporting from Word was a non-starter because of how much manual cleanup I had to do. Instead, I made fixes in up to 6 individual files (Master copy, EPUB, EPUB for Smashwords, EPUB for Kindle conversion, Word file for online PDF, Word file for print PDF). Yuck.

2nd Edition

When I decided to update the book for the 2nd edition, I knew I wasn’t going to repeat this awful process. I needed a completely new way of working.

My main requirement was: Trivial to go from a single source document to all publishable formats. I wanted one source of truth that generated all destination formats. I wanted to type “build” and be able to upload the resulting file immediately.

Enter Scrivener. During the review process for the first edition, a friend’s wife mentioned Scrivener to me. So it was the first thing I tried. Scrivener has a lot of features for writers of all sorts, but the things that I liked were:

  • Trivial reorganization of sections and chapters. Documents are organized in a tree hierarchy which becomes your chapters and sections. Reordering content is merely a matter of dragging nodes around.
  • Separates formatting from content. There is no WYSIWYG editing here. (You can change the text formatting–but it’s just for your comfort in editing. It has nothing to do with the final product.) In the end, I’m fudging this line a bit because I’m using a Markdown syntax.
  • Integration with Multimarkdown. This formed the basis for both print and electronic editions. Since everything is just text and Multimarkdown supports inclusion of Latex commands, a single source could drive every output format.
  • Simple export process. I just tell it to compile my book and it combines all the nodes in a tree (you have a lot of control over how this happens) and produces a single file that can either be published directly or processed more.

Setting up the right process took quite a bit of trial and error (months), but I ended up with something that worked really well. The basic process was this:

  • Top-level nodes are chapters, with multiple sections underneath.
  • First-level formatting in Multimarkdown syntax.
  • Second-level formatting in Latex syntax, hidden inside Multimarkdown comments.
  • Using Scrivener, export book to Multimarkdown (MMD) file.
  • EPUB conversion happens directly from MMD file.
  • PDF conversion happens by converting MMD file to a TEX file, then from TEX to PDF.

Let’s look at a few of these pieces in more detail.

Multimarkdown

Multimarkdown is a specific variant of Markdown, a family of formats that represent simple formatting options in plain text.
For example, this line:

* **# Induced GC**: Number of times `GC.Collect` was called to explicitly start garbage collection.

indicates that this is a bullet point with some bolded text, and GC.Collect is in a different font, indicating a code-level name.

There is Multimarkdown syntax for headers, links, images, tables, and much more.

However, it doesn’t have everything. If you’re doing a print book, it likely won’t be enough if you use anything but the most basic of formatting.

Latex

For advanced formatting, the industry standard is Latex. It is its own programming language and can get quite complex. However, I found getting started fairly easy. One of the best advantages of using Latex is that it has sensible defaults for everything–it just makes a good book automatically. You just change the things you notice need changing. There are tons of examples and communities out there that can help. See the section at the end for useful links.

Latex was included by putting commands inside the manuscript, inside HTML comments (which Multimarkdown would ignore), like this:

<!-- \medskip -->

The most important part of using Latex is setting up the environment–this is basically where you set up how you want all the various elements in your book to look by default. Here’s an excerpt:

<!--
\documentclass[12pt,openright]{book}
\usepackage[paperwidth=7.5in, paperheight=9.25in, left=1.0in, right=0.75in, top=1.0in, 
                                bottom=1.0in,headheight=15pt, ]{geometry}
\usepackage{fancyhdr}
\usepackage[pdftex,
pdfauthor={Ben Watson},
pdftitle={Writing High-Performance .NET Code},
pdfsubject={Programming/Microsoft/.NET},
pdfkeywords={Microsoft, .NET, JIT, Garbage Collection},
pdfproducer={pdflatex},
pdfcreator={pdflatex}]{hyperref}

\usepackage[numbered]{bookmark}

% Setup Document
% Setup Headings

\pagestyle{fancy}
\fancyhead[LE]{\leftmark}
\fancyhead[RE]{}
\fancyhead[LO]{}
\fancyhead[RO]{\rightmark}

... hundreds more lines...added as needed over time...
-->

By doing all of that, I get so many things for free:

  • Default formatting for almost everything, including code listings, table of contents, index, headers
  • Page setup–size, header layout and content
  • Fonts
  • Colors for various things (EPUB and PDF editions)
  • Paragraph, image, code spacing
  • Tons more.

One problem that vexed me for a few days was dealing with equations. Latex has equations, and there is even a Latex equation translator for EPUB, but it doesn’t work in Kindle. Since I only had three equations in the whole book, I decided not to do something special for Kindle, but simplify the EPUB formatting significantly. The other option is to convert the equations to images, but this can be tricky, and it would not be automated. I still wanted a fancy-looking equation in the PDF and print editions. So I came up with something like this:

<!--\iffalse-->
0.01 * P * N
<!--\fi-->
<!--\[ \frac{P}{100}{N} \]-->

For anything where Latex is doing the document processing, the first statement will be ignored because it’s in an \iffalse, and the second statement will be used. If Latex isn’t being processed, then text in the first statement is used and the rest ignored.

I used similar tricks elsewhere when I needed the formatting in the printed edition to be different than the simpler electronic versions.

MMD → Latex Problems

There were still some constructs that didn’t automatically translate between MMD and Latex–fundamental incompatibilities between the EPUB and PDF translation processes–so I still had to write a custom preprocessor that replaced some MMD code with something more suitable for print. For example, when mmd.exe translates an MMD file to a TEX file, it uses certain Latex styles by default. I could never figure out how to customize these styles, so I just did it myself with a preprocessor.

Benefits

So what have I gained now?

  • All my content is just text–no fiddling with visual styles.
  • Formatting control is trivial with Multimarkdown syntax.
  • Additional specificity for PDFs is achieved with Latex inline in the document (or in the document header).
  • A single command exports the whole book to a single MMD file.

The MMD file is the output of the writing process. But it’s just the input to the build system.

Build System

Yep, I wrote a build system for a book. This allowed me to do things like “build pdf” and 30 seconds later out pops a PDF that is ready to go online.

Confession: the build system is just a Windows batch file. And a lot of cobbled-together tools.

The top of the file was filled with setting up variables like this:

@echo off

REM Setup variables

SET PRINTTRUEFALSE=false{print}
SET KINDLETRUEFALSE=false{kindle}
SET VALIDEDITION=false


IF /I "%1"=="Kindle" (
set ISBN13=978-0-990-58348-6
set ISBN10=0-990-58348-1
set EDITION=Kindle
set EXT=epub
SET KINDLETRUEFALSE=true{kindle}
set VALIDEDITION=true)
…

Then comes setting up the build output directories and locating my tools:

set outputdir=%bookroot%\output
SET intermediate=%outputdir%\intermediate
SET metadata=%bookroot%\metadata
SET imageSource=%bookroot%\image
IF /I "%EDITION%" == "Print" (SET imageSource=%bookroot%\image\hires)
SET bin=%bookroot%\bin
SET pandoc=%bin%\pandoc.exe
SET pp=%bin%\pp.exe
SET preprocessor=%bin%\preprocessor.exe
SET mmd=%bin%\mmd54\multimarkdown.exe
SET pdflatex=%bin%\texlive\bin\win32\pdflatex.exe
SET makeindex=%bin%\texlive\bin\win32\makeindex.exe
SET kindlegen=%bin%\kindlegen.exe
…

%bookroot% is defined in the shortcut that launches the console.

Next is preprocessing to put the right edition label and ISBN in there. It also controls some formatting by setting a variable if this is the print edition (I turn off colors, for example).

IF EXIST %intermediate%\%mmdfile% (%preprocessor% %intermediate%\%mmdfile% !EDITION=%EDITION% !ISBN13=%ISBN13% !ISBN10=%ISBN10% !EBOOKLICENSE="%EBOOKLICENSE%" !PRINTTRUEFALSE=%PRINTTRUEFALSE% !KINDLETRUEFALSE=%KINDLETRUEFALSE% > %intermediate%\%mmdfile_edition%)


Then comes the building. EPUB is simple:

IF /I "%EXT%" == "epub" (%pandoc% --smart -o %outputfilename% --epub-stylesheet styles.css --epub-cover-image=cover-epub.jpg -f %formatextensions% --highlight-style pygments --toc --toc-depth=2 --mathjax -t epub3 %intermediate%\%mmdfile_edition%)


Kindle also generates an EPUB, which it then passes to kindlegen.exe:

IF /I "%EDITION%" == "Kindle" (%pandoc% --smart -o %outputfilename% --epub-stylesheet styles.css --epub-cover-image=cover-epub.jpg -f %formatextensions% --highlight-style haddock --toc --toc-depth=2 -t epub3 %intermediate%\%mmdfile_edition%

%kindlegen% %outputfilename%)

PDF (whether for online or print) is significantly more complicated. PDFs need a table of contents and an index, which have to be generated from the TEX file. But first we need the TEX file. But before that, we have to change a couple of things in the MMD file before the TEX conversion, using the custom preprocessor tool. The steps are:

  • Preprocess MMD file to replace some formatting tokens for better Latex code.
  • Convert MMD file to TEX file using mmd.exe
  • Convert TEX file to PDF using pdflatex.
  • Extract index from the PDF using makeindex.
  • Re-run conversion, using the index.

IF /I "%EXT%" == "pdf" (copy %intermediate%\%mmdfile_edition% %intermediate%\temp.mmd
%preprocessor% %intermediate%\temp.mmd ```cs=```{[Sharp]C} ```fasm=```{[x86masm]Assembler} > %intermediate%\%mmdfile_edition%
%mmd% -t latex --smart --labels --output=%intermediate%\%latexfile_edition% %intermediate%\%mmdfile_edition%
%pdflatex% %intermediate%\%latexfile_edition% > %intermediate%\pdflatex1.log
%makeindex% %intermediate%\%indexinputfile% > %intermediate%\makeindex.log
%pdflatex% %intermediate%\%latexfile_edition% %intermediate%\%indextexfile% > %intermediate%\pdflatex2.log)

Notice the dumping of output to log files. I used those to diagnose when I got either the MMD or Latex wrong.

There is more in the batch file. The point is to hide the complexity–there are NO manual steps in creating the publishable files. Once I export the MMD from Scrivener, I just type “build.bat [edition]” and away I go.

Source Control

Everything is stored in a Subversion repository which I host on my own server. I would have used Git since that’s all the rage these days, but I already had this repo from the first edition. The top level of the 2nd edition directory structure looks like this:

  • BetaReaders – instructions and resources for my beta readers
  • Bin – Location of build.bat as well as all the tools used to build the book.
  • Business – Contracts and other documents around publishing.
  • Cover – Sources and Photoshop file for the cover.
  • CustomToolsSrc – Source code for my preprocessor and other tools that helped me build book.
  • Graphics – Visio and other source files for diagrams.
  • HighPerfDotNet2ndEd.scriv — where Scrivener keeps its files (in RTF format for the content, plus a project file)
  • Image – All images for the book, screenshots or exported from diagrams.
  • Marketing – Some miscellaneous images I used for blog posts, Facebook stuff.
  • Metadata – License text for eBook and styles.css for EPUB formatting.
  • mmd – output folder for MMD file (which I also check into source control–so I can do a build without having Scrivener installed, if necessary).
  • Output – Where all output of the build goes (working files go to output\intermediate)
  • Resources – Documents, help files, guides, etc. that I found that were helpful, e.g., a Latex cheat sheet.
  • SampleChapter – PDF of sample chapter.
  • SampleCode – Source code for all the code in the book.

Tools

  • Scrivener – What I manage the book’s manuscript in. Has a bit of a learning curve, but once you get it set up, it’s a breeze. The only thing I wish for is a command-line version that I could integrate into my build.bat.
  • PdfLatex.exe – You need to get a Latex distribution. I used texlive, but there are more–probably some a lot more minimal than the one I got. It’s big. This is what creates the PDF files.
  • Pp.exe – an open-source preprocessor I used for a while, but it ended up not being able to handle one situation.
  • Preprocessor.exe – Custom tool I wrote to use instead of pp.exe.
  • Multimarkdown – Scrivener has an old version built-in, but I ended up using the new version outside of it for some more control.
  • Pandoc – I use this for the conversion from MMD to EPUB (and Word for internal use).
  • Adobe Digital Editions 3.0 – This is useful for testing EPUB files for correct display. I couldn’t get the latest version to work correctly.
  • Calibre – In particular, the EPUB editor. For the first edition, I actually edited all the EPUB editions using this to clean up and correct the garbage Word output. For the second edition, I just used it for double-checking and debugging conversion issues.

Useful Web Sites

Announcing Writing High-Performance .NET Code, 2nd Edition!

The new book will be available for sale on or around May 1, 2018. You can pre-order right now for Kindle in many Amazon markets, and for print as well in the US. 

Update: The book is out! Get it in the US, or go here for more markets and options.

The new book is a mix of greatly expanded coverage of old topics, as well as new material throughout. The fundamentals of .NET and how the GC and JIT work have not changed, not even in the face of .NET Core, but there have been many improvements, new APIs, and new ways of doing things that necessitated some updating and coverage of new topics.

It was also a great opportunity to incorporate a lot of the feedback I’ve received on the first edition.

Here’s a visualization of the book’s structure. I wrote the book in Scrivener and marked each section with a label depending on whether it was unchanged (green), modified (yellow), or completely new (purple). Click to expand to full size.

This is a good overview, but it doesn’t really convey what is exactly “new”–the newness is spread throughout the book in many forms. Here are some highlights:

Meta-content:

  • 50% increase in content overall (from 68K words to over 100K words, or over 200 new pages).
  • Fixed all known errata (quite a bit more than on that list, honestly).
  • Incorporated feedback from hundreds of readers.
  • A new foreword by Vance Morrison, .NET Performance Architect at Microsoft.
  • Better-looking graphics.
  • Completely new typesetting system to give a more professional look. (You’ll only notice this in print and PDF formats.)
  • Automated build process that can produce new revisions of all formats much more easily than before, allowing me to publish updates as needed.
  • Cut the Windows Phone chapter. (No one will miss this, right?)
  • Updated cover.

Content:

  • List of CLR performance improvements over time.
  • Significant increase in examples of using Visual Studio to diagnose CPU usage, heap state, memory allocations, and more, throughout entire book.
  • Introduced a new tool for analyzing .NET processes: Microsoft.Diagnostics.Runtime (“CLR MD”). It’s kind of like having programmatic access to a debugger, and it’s very powerful. I have many examples of its usage throughout the book, tied to specific diagnostic scenarios.
  • Benchmarking! I include actual benchmark results in many sections. Many code samples use a benchmarking library to show the differences in various approaches.
  • More on garbage collection, including advanced configuration options; reworked and more detailed explanations of the GC process including less-known information; discussion of recent libraries to help with pooling; examples of using weak references; object resurrection; stackalloc; more on finalization; more ways to diagnose problems; and much, much more. This was already the biggest chapter, and it got bigger.
  • More details about JIT, how to do custom warmup, how to figure out which code is jitted, and many more improvements.
  • Greatly expanded coverage of TPL, including the TPL Dataflow library, how to use tasks effectively, avoiding lock convoys, and many other smaller improvements to existing topics.
  • More guidance on struct and class design, such as immutability, thread safety, and more. When to use tuples, the new ValueTuple type.
  • Ref-returns and ref locals, and when to use them.
  • Much more content on collections, including effectively taking advantage of initial capacity; sorting; key comparisons; jagged vs. multi-dimensional arrays; analyzing the performance of various collection types; and more.
  • Discussion of SIMD commands, including examples and benchmarks.
  • A detailed analysis of LINQ and why it’s more expensive than you think.
  • More ways to consume and process ETW events.
  • A detailed example of building a Roslyn-style Code Analyzer and automatic fixer (FxCop replacement).
  • A new appendix with high-level tips for ADO.NET, ASP.NET, and WPF.
  • And a lot more…

See the book’s web-site for up-to-date details on where to buy, and other news.

Don’t Make This Dumb Locking Mistake

What’s wrong with this code:

try
{
    if (!Monitor.TryEnter(this.lockObj))
    {
        return;
    }
    
    DoWork();
}
finally
{
    Monitor.Exit(this.lockObj);
}

This is a rookie mistake, but sadly I made it the other day when I was fixing a bug in haste. The problem is that the Monitor.Exit in the finally block will try to exit the lock, even if the lock was never taken. This will cause a SynchronizationLockException. Use a different TryEnter overload instead:

bool lockTaken = false;
try
{
    Monitor.TryEnter(this.lockObj, ref lockTaken);
    if (!lockTaken)
    {
        return;
    }
    
    DoWork();
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(this.lockObj);
    }
}

You might be tempted to use the same TryEnter overload we used before and rely on the return value to set lockTaken. That might be fine in this case, but it’s better to use the version shown here. It guarantees that lockTaken is always set, even in cases where an exception is thrown, in which case it will be set to false.
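To see the failure mode concretely, here is a minimal standalone repro (a hypothetical snippet, not from the original code) showing that exiting a lock that was never entered throws:

```csharp
using System;
using System.Threading;

// Minimal repro of the bug above: calling Monitor.Exit on a lock object
// that was never entered throws SynchronizationLockException.
static class UnbalancedExit
{
    public static bool ExitWithoutEnterThrows()
    {
        var lockObj = new object();
        try
        {
            Monitor.Exit(lockObj); // never entered: this throws
            return false;
        }
        catch (SynchronizationLockException)
        {
            return true;
        }
    }

    public static void Main()
    {
        Console.WriteLine(ExitWithoutEnterThrows()); // prints "True"
    }
}
```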

Get Your Thread Synchronization Right the First Time

I was recently debugging a problem that just didn’t make any sense at first.

The code looked like this:

class App
{
    public bool IsRunning {get; set;}
    private Thread houseKeepingThread;
    
    public void Start()
    {
        this.IsRunning = true;
        this.houseKeepingThread = new Thread(ThreadFunc);
        this.houseKeepingThread.Start();
    }
    
    private void ThreadFunc()
    {
        while (this.IsRunning)
        {
            DoWork();
            // wait for 30 seconds
        }
    }
};

The actual code was of course a bit more complicated, but this demonstrates the essence of the problem.

The outward manifestation of the bug was that there was evidence that DoWork wasn’t being called over time as it should have.

To debug this, I first concentrated on reasons that the thread could end early, and none of them made any sense.

To finally figure it out, I attached a debugger to a process known to be in a suspect state and discovered evidence in the state of the App object that DoWork had never run, not even once.

I stared at the code for five seconds and then said out loud: “IsRunning isn’t volatile!”

Despite setting IsRunning to true before starting the thread, it is completely possible that the thread starts running before the memory is coherent across all of the threads.

What’s amazing about this bug is that it existed from the very beginning of this application. As in checkin #1. It has probably been causing problems for a while at some low level, but it recently must have gotten worse—it could be for any reason, some change in the pattern of execution, the load on the system, who knows.

The fix was quite easy:

Make a volatile backing field for the IsRunning property.

By using volatile, there will be a memory barrier that will enforce coherency across all threads that access this variable. Problem solved.
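Sketched against the class above, the fix looks like this. Note that an auto-property cannot be marked volatile, which is why an explicit backing field is needed (the DoWork stub is mine, added so the sketch compiles):

```csharp
using System.Threading;

class App
{
    // volatile ensures a write on one thread is visible to reads on others.
    // Auto-properties can't be volatile, so use an explicit backing field.
    private volatile bool isRunning;

    public bool IsRunning
    {
        get { return this.isRunning; }
        set { this.isRunning = value; }
    }

    private Thread houseKeepingThread;

    public void Start()
    {
        this.IsRunning = true;
        this.houseKeepingThread = new Thread(ThreadFunc);
        this.houseKeepingThread.Start();
    }

    private void ThreadFunc()
    {
        while (this.IsRunning)
        {
            DoWork();
            // wait for 30 seconds
        }
    }

    private void DoWork() { /* housekeeping, stubbed for the sketch */ }
}
```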

The lesson here isn’t about any particular debugging skills. The real lesson is to make sure you get this right in the first place, because debugging it takes far longer and is much trickier than just reasoning about it in the first place. These types of problems can stay hidden and extremely rare for a very long time until some other innocuous change causes them to start manifesting in strange, seemingly impossible ways.

If you want to learn more about memory barriers and when they’re required, you can check out Chapter 4 of Writing High-Performance .NET Code, and there is also this tutorial by Joseph Albahari.