Monthly Archives: September 2006

Queue multiple GUI updates into a single update event (C# .Net)

I’m writing a simple utility that involves scanning a hard disk and updating a display with the latest status. There are potentially many, many, many, many, MANY changes that happen to the display in very quick succession and sending an update event which triggers a screen refresh for every single state change would be an enormous bottleneck.

I implemented a simple queueing scenario where all the updated objects are queued for up to 200ms before being batched into a single event. The GUI can then update the display for every changed object all at once.

First, the queue:

private DateTime lastUpdateSent = DateTime.MinValue;

//queue updated folders to limit the number of screen updates
List<FolderObject> updatedFoldersQueue = new List<FolderObject>();

A simple list. Then I have a function which is called to handle the actual event generation:

private void FireUpdate(FolderObject folder)
{
    if (this.OnFolderUpdated != null)
    {
        try
        {
            if (!folder.Dirty)
            {
                folder.Dirty = true;
                updatedFoldersQueue.Add(folder);
            }

            TimeSpan diff = DateTime.Now - lastUpdateSent;
            if (diff.TotalMilliseconds > 200)
            {
                FolderUpdatedEventArgs args = new FolderUpdatedEventArgs(updatedFoldersQueue);
                int numFolders = updatedFoldersQueue.Count;

                //reinitialize queue
                updatedFoldersQueue = new List<FolderObject>(numFolders);

                OnFolderUpdated(this, args);
                Thread.Sleep(0);
                lastUpdateSent = DateTime.Now;
            }
        }
        catch (Exception ex)
        {
            System.Diagnostics.Trace.WriteLine(ex.Message);
            System.Diagnostics.Trace.WriteLine(ex.StackTrace);
        }
    }
}

The folder.Dirty flag merely prevents a folder from being queued twice. The TimeSpan diff is the important bit. By checking the time difference between now and the last updated time, we can ensure that updates only get sent as often as the display–and our eyes–can handle them. The Thread.Sleep(0) ensures that the GUI thread gets a chance to draw–this is supposed to be interactive, after all.
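
On the GUI side, the handler can then apply the whole batch in one pass. Here is a rough sketch of what that might look like. It assumes FolderUpdatedEventArgs exposes the batched folders through a list property (UpdatedFolders is a made-up name), that the display is a TreeView, and that the event is raised from the scanning thread, so the call has to be marshaled back to the GUI thread:

private void HandleFolderUpdated(object sender, FolderUpdatedEventArgs e)
{
    if (this.InvokeRequired)
    {
        //the scan runs on a worker thread, so marshal the call to the GUI thread
        this.BeginInvoke(new MethodInvoker(delegate { HandleFolderUpdated(sender, e); }));
        return;
    }

    treeViewFolders.BeginUpdate();   //suspend painting while the batch is applied
    foreach (FolderObject folder in e.UpdatedFolders)
    {
        UpdateFolderNode(folder);    //hypothetical helper that refreshes a single node
    }
    treeViewFolders.EndUpdate();     //one repaint for the entire batch
}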

Code formatter for Windows Live Writer

I stumbled across a great code formatter for Windows Live Writer today. Here’s an example, using a C# function that converts a number into a formatted file size:

public static string SizeToString(long size)
{
    const long kilobyte = 1L << 10;
    const long megabyte = 1L << 20;
    const long gigabyte = 1L << 30;
    const long terabyte = 1L << 40;
    string kbSuffix = "KB";
    string mbSuffix = "MB";
    string gbSuffix = "GB";
    string tbSuffix = "TB";
    string suffix = kbSuffix;

    double divisor = kilobyte; //KB
    if (size > 0.9 * terabyte)
    {
        divisor = terabyte;
        suffix = tbSuffix;
    }
    else if (size > 0.9 * gigabyte)
    {
        divisor = gigabyte;
        suffix = gbSuffix;
    }
    else if (size > 0.9 * megabyte)
    {
        divisor = megabyte;
        suffix = mbSuffix;
    }

    double newSize = size / divisor;
    return string.Format("{0:F2}{1}", newSize, suffix);
}
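
For reference, here is what the output looks like for a couple of sample sizes (values worked out from the thresholds above):

SizeToString(1536);        // "1.50KB"
SizeToString(5L << 30);    // "5.00GB"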

The power of the blog to motivate corporate, societal, and government change

This is an issue that has been discussed many times previously–so many that I won’t even bother to link to those discussions. By now it’s well-understood that blogs carry a power stronger than most in the media initially assumed possible.

Not just blogs, but the entire “Web 2.0” phenomenon–MySpace, YouTube–the whole rotten bunch. 🙂 Would Patricia Dunn have stepped down as chair of HP were it not for the constant pounding brought on by the likes of Scoble? Maybe, maybe not. In some sectors, blogs are becoming as well regarded as, if not better regarded than, traditional publishing. Maybe this is limited to the computer industry. Maybe I just read too many blogs. 🙂

Still, it seems that the nature of debate and information dissemination has changed. No longer are we fed what mainstream publishers tell us–even if it’s of better quality. We are now free to choose what and how we read–for good or bad.

We’ve already seen the effects on the corporations. Companies simply can’t get away with anything anymore. Somebody, somewhere, will jump on it.

Areas where I think it will get more interesting:

  1. entertainment – RIAA, MPAA, I’m talking about you. You have ZERO friends among bloggers. All of the bad things you’ve done in the courts to innocent people, all of your extortion, are shouted from the rooftops by people like those at TechDirt. You can’t win this war. For now, the audience isn’t very general, but news spreads, and it’s spreading faster and further. Sooner or later, you will lose the PR battle completely–in the meantime, unless your companies drastically change how they do business, your business will be swept out from under you, relegated to the dustbin of irrelevance.
  2. corporations – Microsoft already can’t do anything without the blogosphere lighting up. In some ways, they’ve chosen to embrace this–witness the very high-quality set of developer blogs they host. On the other hand, they’re like any other large company–they have secrets and tactics they would rather not be public debate-fodder. Corporations will be forced to open the windows and let the light shine in on what they’re doing. 
  3. government – imagine if honest, whistle-blowing (or even dishonest whistle-blowing!) staffers ratted on all the corruption in Washington. Imagine if every backroom deal was publicized in embarrassing detail. I don’t think we’re anywhere close to that yet, but there are signs that things are beginning to emerge. Look at the hilarity on YouTube about Senator Ted Stevens’ gaffe about the Internet’s tubes. How long has C-SPAN been broadcasting, again? Our elected officials say dumb things about topics they don’t understand all the time–but now we can hear about it over and over again.

Overall, I think blogging will lead to more accountability of traditional structures of society. However, even with these possibilities, there are potential pitfalls:

  1. Overcrowded Medium – occurs when there are WAYYYYYYY too many people broadcasting and not enough people listening. If everybody in the world blogged, who would read them?
  2. Loss of accountability – if there is no accountability for things that are written online, then anything goes. The Internet is already the source of much bad information–it can become much worse if most of it is partisan, subjective, opinionated blather. Still, I’m not convinced it will really be worse than the status quo. The media now is far from infallible. Maybe part of me just wants to keep faith in people’s ability to reason. 🙂
  3. Undercrowded Debates – Broadcast media is a finite resource; it maintains its quality mostly by having to judge some things more worthy of discussion than others. Those topics are what people hear about. The Internet, on the other hand, is an unlimited resource. Anybody can have a blog on anything, and most do. 🙂 This means that people themselves must choose what they follow, leading to some topics having far fewer meaningful discussions than others. For example, blogs about software and computers comprise a fairly large and active community. Politics has a large community. But what about small-interest, high-importance communities and topics? Where are the scientist blogs about global warming? I’m sure there are some, but is that kind of community ever going to gain a large enough population to affect societal opinion?
  4. Lack of participation – related to Undercrowded Debates, this means people don’t participate in all the areas that are pertinent to their lives. For example, how many US Internet users follow blogs discussing network neutrality? This is certainly an issue that could affect all of us, but from what I can tell it’s mostly debated on tech blogs, while the rest of the country misrepresents the entire issue. It works the other way around, too–I don’t read any political blogs at the moment. What issues am I missing out on? It’s too easy to become part of a niche community on the Internet and ignore the community as a whole.

Some of these problems stem from the anonymity of the Internet, others from the exponential increases in information available to us. Perhaps there are technologies in the pipeline that will solve these issues for us someday. They certainly aren’t going away.


How to solve severe driver problems in Windows

A colleague at work recently got a second video card–a bottom of the barrel (or close to it) nVidia MX 4000 (PCI). He had an existing AGP nVidia Vanta. Well…the installation did not go well. It did something to Windows so that it consistently blue-screened during the driver load process (the progress bar moving in the startup splash screen).

Windows would start in safe mode, but removing the non-working drivers for the new card did not work. Removing both drivers did not work. Choosing last-known good configuration got us up and running in Windows (finally), but with only the bare VGA driver. Installing a driver from either CD or nVidia’s site ended in the strange error “Access Denied.”

Then I remembered what I had read in Windows Internals about the location of driver configuration information in the registry.  Driver info is stored with service configuration in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services.

First we removed all hints of nVidia apps and video drivers using Add/Remove Programs. Then we went into regedit, into the above key, and deleted the keys “nv”, “nv4”, and “nvsvc” (I think they were those, but looking on my own machine at home, they’re a bit different, so I’m half-guessing). I’m sure there are similar keys for ATI chips.

In the meantime, we had found an unused AGP version of the MX 4000 just lying around (no joke), and replaced the Vanta with this. We reinstalled the drivers and everything worked great.

Don’t use CArchive with native C++ type bool

I recently ran into an issue where I was trying to serialize a bool variable from a CArchive object.

Archiving it out, with this code, works fine:

//CArchive ar;
bool bFill;
ar << bFill;

But this code:

ar >> bFill;

has problems. It compiles fine under Visual Studio 2003, but errors out under Visual C++ 6 with this error:

C2679: binary ‘>>’ : no operator defined which takes a right-hand operand of type ‘bool’ (or there is no acceptable conversion)

Very weird. There is some explanation here.

My resolution? Use BOOL instead. Sure, it’s actually an int and takes 4 bytes, not 1, but that’s not a big deal in this case. And it’s clearer than using BYTE.

The hidden purpose behind private web browsers and history cleaners.

Does anybody else think that the real purpose behind the big movements in privacy, hiding web browsing habits, building anonymizers, and more… is really just a movement to allow everyone to view pornography without their SOs finding out?

Come on, perhaps there really are people in a public library who need to look up an embarrassing medical question and don’t want the next user to spy on them and confront them about it, but let’s be honest–is that REALLY the motivation?

On a personal computer, what excuse is there really? To hide your browsing habits from your children? your spouse? What web sites are you visiting that your family shouldn’t know about? Maybe you should work on your honesty with them…

At work? I’ll cut a little slack here, but not much. You REALLY shouldn’t be browsing bad sites at work–that creates liability problems. On the other hand, maybe you have some dishonest coworkers who would stoop to spying on your habits…

Simple Command Mapping Infrastructure in .Net

Unlike MFC, .Net does not offer a built-in framework for message handling in WinForms applications. MFC developers, for better or worse, have a rather large mechanism that automatically maps command messages from menus, toolbars, and windows into the classes that want them.

.Net has no such all-encompassing framework, though some 3rd-party frameworks exist (perhaps I’ll go over some in a future entry).

The lack of a built-in application framework gives developers more freedom by not locking them into a certain mindset, but it also means that you have to develop your own framework or do everything from scratch in every application.

Short of developing an entire framework, there are some practices you can follow in .Net (and any GUI application, for that matter) to ease management of commands.

I’ll talk about one possibility that I recently chose to implement in a small WinForms application. First of all, let’s look at the overall architecture I’m aiming for.

It’s roughly a Model-View-Controller (MVC) architecture. What I want to do is almost completely isolate the GUI layer from knowledge of how the program actually works. The breakdown of responsibility is as follows:

  1. GUI – contains menus, status bar, views, panels, toolbars, etc. and routes messages from them to the controller via command objects.
  2. Controller – coordinates views, reports, and GUI.
  3. ReportGenerator – the “model” in this architecture. Generates and stores the actual data that will be presented.
  4. Views – draw the report in different ways.

With this architecture, the GUI is restricted to doing only a few things, all of them related to the state of the GUI:

  1. Receiving events from toolbar buttons, etc.
  2. Setting menu/button states based on general program state (e.g., report running, idle)
  3. Changing the current view based on a request from the controller.

Command Objects

A popular way to handle commands from toolbars and menus is to use the Command design pattern. The advantages to this pattern are numerous, but as far as isolating the GUI from implementation details, they allow us to put all the work required to get things done into command objects and the controller.

Here’s a sample design for commands. As you can see, it’s exceedingly simple.

An interface defines the basic functionality: Execute(). The concrete command classes are created with parameters that tell them everything they need to know. When the user selects a command in the GUI, the GUI then runs the associated command.
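
A minimal version of the interface needs nothing more than that single method:

interface ICommand
{
    void Execute();
}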

For example, here’s a command that merely changes the current view:

class ChangeViewCommand : ICommand
{
    private ViewType viewType;

    public ChangeViewCommand(ViewType view)
    {
        this.viewType = view;
    }

    #region ICommand Members

    public void Execute()
    {
        DiskViewController.Controller.SetCurrentView(viewType);
    }

    #endregion
}

When the command is created, it is initialized with the view type. When it’s executed, it merely calls the controller to do the work. The amount of work that is done in the controller versus command objects really depends on the application and individual commands. In this specific case, it might not seem like it’s doing a lot, but it would be easy to add Undo support, logging, coordination among multiple classes, or any other requirement for a command. In addition, the benefit of using command objects even in simple cases like this will become clear shortly.
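
For example, Undo support could be layered on without the GUI ever knowing about it. A sketch of what that extension might look like:

interface IUndoableCommand : ICommand
{
    void Undo();
}

//ChangeViewCommand could then remember the previous view in Execute()
//and switch back to it in Undo()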

Associating GUI objects with Command objects

Many applications use menus as well as toolbars with duplicate functionality. There are also keyboard shortcuts that can invoke the same command. An easy way in .Net to associate the GUI with command objects is to use a hash table. The keys consist of the menu and toolbar objects, and the values are the command objects.

//declaration:
private Dictionary<object, ICommand> commandMap = new Dictionary<object, ICommand>();

//toolbar
commandMap[toolStripButtonPie] = new ChangeViewCommand(ViewType.Pie);
commandMap[toolStripButton3DPie] = new ChangeViewCommand(ViewType.Pie3D);
commandMap[toolStripButtonStart] = new StartCommand(controller);
commandMap[toolStripButtonStop] = new StopCommand(controller);

//menu
commandMap[pieToolStripMenuItem] = commandMap[toolStripButtonPie];
commandMap[Pie3DToolStripMenuItem] = commandMap[toolStripButton3DPie];
commandMap[startAnalysisDriveToolStripMenuItem] = commandMap[toolStripButtonStart];
commandMap[stopAnalysisToolStripMenuItem] = commandMap[toolStripButtonStop];

You can see that some commands take the controller as an argument–an alternative to discovering it through a static class variable, as ChangeViewCommand does. Also notice that commands from menu objects can reuse the same object that is associated with the toolbar button.
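
For instance, a command that takes the controller might look something like this sketch (the controller type and the StartAnalysis method are assumptions, stand-ins for whatever the real controller exposes):

class StartCommand : ICommand
{
    private DiskViewController controller;

    public StartCommand(DiskViewController controller)
    {
        this.controller = controller;
    }

    public void Execute()
    {
        //hypothetical method name--whatever kicks off the disk scan
        controller.StartAnalysis();
    }
}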

In order to run the commands, the GUI need only respond to Click events from menus and toolbars, and in many cases the same delegate can be assigned to many of the menu items:

private void OnCommand(object sender, EventArgs e)
{
    if (commandMap.ContainsKey(sender))
    {
        ICommand cmd = commandMap[sender];
        if (cmd != null)
        {
            cmd.Execute();
            //could also do controller.RunCommand(cmd);
        }
    }
    else
    {
        throw new NotImplementedException();
    }
}
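
Wiring this up is then just a matter of pointing each control’s Click event at that one handler, along these lines:

toolStripButtonPie.Click += OnCommand;
pieToolStripMenuItem.Click += OnCommand;
startAnalysisDriveToolStripMenuItem.Click += OnCommand;
//...and so on for the rest of the mapped buttons and menu items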

In conclusion, using the techniques presented here you can easily split your program into manageable chunks, with each layer doing only what it needs to.