Tag Archives: Code

Importance of disposing MailMessage objects

Always, always, always follow the suggested usage patterns of disposable objects. I had a strange problem recently where e-mail messages with attachments weren’t releasing the lock on the files for a long time. I was using the System.Net.Mail namespace, specifically the Attachment and MailMessage classes.

Disposable objects represent resources not directly under the control of the CLR–and this includes SMTP connections. By not calling Dispose, the mail message was just sitting around, not getting sent. Why this is the case when I explicitly call SmtpClient.Send is unclear. The main symptom of this problem was that files I attached to the e-mail message were still locked when I tried to delete them immediately after the send.

Something like this will do:

//the using block guarantees Dispose is called, releasing any file locks
using (MailMessage msg = new MailMessage())
{
    msg.Attachments.Add(...);
    client.Send(msg);
}

Changes in .Net 1.1 and 2.0 E-Mail classes

I recently (re?)learned a harsh lesson at work. We migrated a critical application’s e-mail client code to use System.Net.Mail’s classes instead of those in the deprecated System.Web.Mail.

After doing this, we immediately had three problems, mostly with the Attachment class:

  1. Our automated test software could no longer read the attachments
  2. Attachments started having names of the form Directory_Directory_FileName (they used to be just FileName)
  3. The software could no longer delete the attachment files from disk after they were e-mailed (the file was still in use).

With problem 1, it turned out that the default encoding in the old MailAttachment class was 7Bit, and the test program (which I didn’t write) was directly scraping the mail queue (on a Linux system) and reading the encoded text, looking for recognizable tokens. The new class’s default encoding was Base64–gibberish to this test program. It wouldn’t really matter to our end-users, but taking the “easy” way out in writing the test program locked us into an encoding. It just so happens, though, that 7Bit encoding is preferable in our circumstances, even for end-users (Base64 adds significant e-mail overhead–some of our users pay by the kilobyte).
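
The fix is a one-liner: pin the encoding explicitly instead of trusting the default. A minimal sketch (filePath stands in for the real attachment path):

//assume filePath holds the full path to the file being attached
Attachment attachment = new Attachment(filePath);
attachment.TransferEncoding = System.Net.Mime.TransferEncoding.SevenBit;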

The 2nd problem took me a little longer to figure out. At first I thought it was a bug in the attachment-naming code, but I couldn’t find anything in either the code or the file system. It turns out that the file name the Attachment class generates includes the directory structure in the name–this seems quite odd–I mean, how often do you want a file name like that? So I manually set the ContentDisposition.FileName property to be just the file name.
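
In code, that fix looks something like this (again a sketch; filePath is a placeholder):

Attachment attachment = new Attachment(filePath);
//keep just the file name, dropping the Directory_Directory_ prefix
attachment.ContentDisposition.FileName = System.IO.Path.GetFileName(filePath);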

I still haven’t completely figured out #3. I think it’s because the MailMessage class is locking the file during send, and not releasing it as early as the old class. In any case, I haven’t directly solved the problem, just come up with a semi-elegant workaround.

Lessons:

  1. Don’t assume default settings are the same across framework upgrades.
  2. Build unit tests to test the assumed stuff–even the stuff controlled by the framework. Don’t assume it’s doing exactly what you want with default settings.
  3. Most importantly, if your functionality depends on specific settings, explicitly specify them–don’t ever leave them to chance.

Thankfully, these were relatively easy to diagnose and caused minimal to moderate impact.

Why you shouldn’t catch System.Exception

There are probably many reasons, but my favorite is that, if you are resorting to catching anything at all, you probably don’t understand the code well.

For example, in complicated parsing of text, it is often easier to just put a huge try {…} catch (Exception ex) {…} around the entire thing, rather than take the time necessary to understand how your code works, and where exactly things could go wrong.

It is much better to go through the code line-by-line and prove to yourself that there are only a few places of possible unknown exception behavior. Obviously, exceptions are meant to catch things you can’t reliably predict, but in most code there are large areas of things like string manipulation, arithmetic, or other simple procedures that will not throw an exception. It makes sense to delineate these areas and surround only the trouble spots with try-catch.

For example, look at this fragment:

string strDelimiters = ":, ;"; 
try 
{ 
    string[] tempStrings = message.Split(strDelimiters.ToCharArray()); 
    string[] subStrings = new string[tempStrings.Length]; 
    int numSubStrings = 0; 
    //remove empty substrings 
    for (int i=0;i<tempStrings.Length;i++) 
    { 
        if (tempStrings[i] != null && tempStrings[i].Length > 0) 
        { 
            subStrings[numSubStrings] = tempStrings[i]; 
            numSubStrings++; 
        } 
    } 
...

There is no reason to surround this with a try {} catch {} block. Doing so lets you off the hook of digging into this and realizing how bad it is, especially with .Net 2.0. 🙂
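
For the record, .Net 2.0 collapses the whole fragment into a single call (assuming all you want is the non-empty substrings):

string strDelimiters = ":, ;";
//Split drops the empty entries itself--no try-catch, no manual copying
string[] subStrings = message.Split(strDelimiters.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);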

Queue multiple GUI updates into a single update event (C# .Net)

I’m writing a simple utility that involves scanning a hard disk and updating a display with the latest status. There are potentially many, many, many, many, MANY changes that happen to the display in very quick succession and sending an update event which triggers a screen refresh for every single state change would be an enormous bottleneck.

I implemented a simple queueing scenario where all the updated objects are queued for up to 200ms before being batched into a single event. The GUI can then update the display for every changed object all at once.

First, the queue:

private DateTime lastUpdateSent = DateTime.MinValue;
//queue updated folders to limit the number of screen updates
List<FolderObject> updatedFoldersQueue = new List<FolderObject>();

A simple list. Then I have a function which is called to handle the actual event generation:

private void FireUpdate(FolderObject folder)
{
    if (this.OnFolderUpdated != null)
    {
        try
        {
            if (!folder.Dirty)
            {
                folder.Dirty = true;
                updatedFoldersQueue.Add(folder);
            }
            TimeSpan diff = DateTime.Now - lastUpdateSent;
            if (diff.TotalMilliseconds > 200)
            {
                FolderUpdatedEventArgs args = new FolderUpdatedEventArgs(updatedFoldersQueue);
                int numFolders = updatedFoldersQueue.Count;
                //reinitialize queue
                updatedFoldersQueue = new List<FolderObject>(numFolders);
                OnFolderUpdated(this, args);
                Thread.Sleep(0);
                lastUpdateSent = DateTime.Now;
            }
        }
        catch (Exception ex)
        {
            System.Diagnostics.Trace.WriteLine(ex.Message);
            System.Diagnostics.Trace.WriteLine(ex.StackTrace);
        }
    }
}

The folder.Dirty flag merely prevents a folder from being queued twice. The TimeSpan diff is the important bit. By checking the time difference between now and the last updated time, we can ensure that updates only get sent as often as the display–and our eyes–can handle them. The Thread.Sleep(0) ensures that the GUI thread gets a chance to draw–this is supposed to be interactive, after all.
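
On the receiving side, the handler can then repaint once per batch. Here’s a hypothetical consumer–the UpdatedFolders property and the spot where Dirty gets cleared are my assumptions, since that code isn’t shown above:

private void HandleFolderUpdated(object sender, FolderUpdatedEventArgs args)
{
    foreach (FolderObject folder in args.UpdatedFolders) //hypothetical property
    {
        folder.Dirty = false; //clear the flag so the folder can queue again
        //...update this folder's display item...
    }
    Invalidate(); //one repaint for the entire batch
}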

Code formatter for Windows Live Writer

I stumbled across a great code formatter for Windows Live Writer today. Here’s an example, using a C# function that converts a number into a formatted file size:

public static string SizeToString(long size)
{
    const long kilobyte = 1L << 10;
    const long megabyte = 1L << 20;
    const long gigabyte = 1L << 30;
    const long terabyte = 1L << 40;
    string kbSuffix = "KB";
    string mbSuffix = "MB";
    string gbSuffix = "GB";
    string tbSuffix = "TB";
    string suffix = kbSuffix;

    double divisor = kilobyte;//KB
    if (size > 0.9 * terabyte)
    {
        divisor = terabyte;
        suffix = tbSuffix;
    }
    else if (size > 0.9 * gigabyte)
    {
        divisor = gigabyte;
        suffix = gbSuffix;
    }
    else if (size > 0.9 * megabyte)
    {
        divisor = megabyte;
        suffix = mbSuffix;
    }

    double newSize = size / divisor;
    return string.Format("{0:F2}{1}", newSize, suffix);
}
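
For example:

Console.WriteLine(SizeToString(123456789L)); //prints "117.74MB"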

Don’t use CArchive with native C++ type bool

I recently ran into an issue where I was trying to serialize a bool variable with a CArchive object.

Archiving it out, with this code, works fine:

//CArchive ar;
bool bFill;
ar << bFill;

But this code:

ar >> bFill;

has problems. It compiles fine under Visual Studio 2003, but errors out under Visual C++ 6 with this error:

C2679: binary ‘>>’ : no operator defined which takes a right-hand operand of type ‘bool’ (or there is no acceptable conversion)

Very weird. There is some explanation here.

My resolution? Use BOOL instead. Sure, it’s actually an int and takes 4 bytes, not 1, but that’s not a big deal in this case. And it’s clearer than using BYTE.

Simple Command Mapping Infrastructure in .Net

Unlike MFC, .Net does not offer a built-in framework for message handling in WinForms applications. MFC developers, for better or worse, have a rather large mechanism that automatically maps command messages from menus, toolbars, and windows into the classes that want them.

.Net has no such all-encompassing framework, though some 3rd-party frameworks exist (perhaps I’ll go over some in a future entry).

The lack of a built-in application framework gives developers more freedom by not locking them into a certain mindset, but it also means that you have to develop your own framework or do everything from scratch in every application.

Short of developing an entire framework, there are some practices you can follow in .Net (and any GUI application, for that matter) to ease management of commands.

I’ll talk about one possibility that I recently chose to implement in a small WinForms application. First of all, let’s look at the overall architecture I’m aiming for.

It’s roughly a Model-View-Controller (MVC) architecture. What I want to do is almost completely isolate the GUI layer from knowledge of how the program actually works. The breakdown of responsibility is as follows:

  1. GUI – contains menus, status bar, views, panels, toolbars, etc. and routes messages from them to the controller via command objects.
  2. Controller – coordinates views, reports, and GUI.
  3. ReportGenerator – the “model” in this architecture. Generates and stores the actual data that will be presented.
  4. Views – Does the drawing for the report in different ways.

With this architecture, the GUI is restricted to doing only a few things, all of them related to the state of the GUI:

  1. Receiving events from toolbar buttons, etc.
  2. Setting menu/button states based on general program state (e.g., report running, idle)
  3. Changing the current view based on a request from controller.

Command Objects

A popular way to handle commands from toolbars and menus is to use the Command design pattern. The advantages to this pattern are numerous, but as far as isolating the GUI from implementation details, they allow us to put all the work required to get things done into command objects and the controller.

Here’s a sample design for commands; it’s exceedingly simple.

An interface defines the basic functionality: Execute(). The concrete command classes are constructed with the parameters that tell them what they need to know. When the user selects a command in the GUI, the GUI must then run the associated command.
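
The interface itself can be as small as this:

interface ICommand
{
    void Execute();
}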

For example, here’s a command that merely changes the current view:

class ChangeViewCommand : ICommand
{
    private ViewType viewType;

    public ChangeViewCommand(ViewType view)
    {
        this.viewType = view;
    }

    #region ICommand Members

    public void Execute()
    {
        DiskViewController.Controller.SetCurrentView(viewType);
    }

    #endregion
}

When the command is created, it is initialized with the view type. When it’s executed, it merely calls the controller to do the work. The amount of work that is done in the controller versus command objects really depends on the application and individual commands. In this specific case, it might not seem like it’s doing a lot, but it would be easy to add Undo support, logging, coordination among multiple classes, or any other requirement for a command. In addition, the benefit of using command objects even in simple cases like this will become clear shortly.

Associating GUI objects with Command objects

Many applications use menus as well as toolbars with duplicate functionality. There are also keyboard shortcuts that can invoke the same command. An easy way in .Net to associate the GUI with command objects is to use a hash table. The keys consist of the menu and toolbar objects, and the values are the command objects.

//declaration:
private Dictionary<object, ICommand> commandMap = new Dictionary<object, ICommand>();

//toolbar
commandMap[toolStripButtonPie] = new ChangeViewCommand(ViewType.Pie);
commandMap[toolStripButton3DPie] = new ChangeViewCommand(ViewType.Pie3D);
commandMap[toolStripButtonStart] = new StartCommand(controller);
commandMap[toolStripButtonStop] = new StopCommand(controller);

//menu
commandMap[pieToolStripMenuItem] = commandMap[toolStripButtonPie];
commandMap[Pie3DToolStripMenuItem] = commandMap[toolStripButton3DPie];
commandMap[startAnalysisDriveToolStripMenuItem] = commandMap[toolStripButtonStart];
commandMap[stopAnalysisToolStripMenuItem] = commandMap[toolStripButtonStop];

You can see that some commands take the controller as an argument–an alternative to discovering it from a static class variable. Also notice that commands from menu objects can reuse the same object that is associated with the toolbar button.

In order to run the commands, the GUI need only respond to OnClick events from menus and toolbars, and in many cases the same delegate can be assigned to many of the menu items:

private void OnCommand(object sender, EventArgs e)
{
    if (commandMap.ContainsKey(sender))
    {
        ICommand cmd = commandMap[sender];
        if (cmd != null)
        {
            cmd.Execute();
            //could also do controller.RunCommand(cmd);
        }
    }
    else
    {
        throw new NotImplementedException();
    }
}
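
Wiring everything up is then one line per control, reusing the same handler (assuming the designer-generated names from the map above):

toolStripButtonPie.Click += OnCommand;
toolStripButton3DPie.Click += OnCommand;
toolStripButtonStart.Click += OnCommand;
toolStripButtonStop.Click += OnCommand;
pieToolStripMenuItem.Click += OnCommand;
//...and likewise for the remaining menu items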

In conclusion, using the techniques presented here, you can easily split your program into manageable chunks, with each layer doing only what it needs to.

Factory Design Pattern to the Rescue: Practical Example

Design patterns really are quite useful. I have a situation in the code I’m working on where I was obviously repeating a lot of the same patterns and code (functions that were 90% the same–the only thing different was the specific class being instantiated): a perfect candidate for factory techniques.

Let’s say we have the following set of classes representing a data access layer meant to abstract some database information from the client code. We have a BaseDBObject class that defines all of the common functionality. We derive from that for each table we want to access.

class BaseDBObject
{
    protected BaseDBObject(Database database) {...}
    public void SetProperty(string name, object value) {...}
    //...more common functionality

};

Derived from this base are lots of classes that implement table-specific database objects. To control object creation, constructors are declared protected and static member functions are used. To wit:

class MyTableObject : BaseDBObject
{
    protected MyTableObject(Database database) : base(database) { }
    public static MyTableObject Create(Database database, int param1, string param2)
    {
        string query = "INSERT INTO MyTable (param1, param2) VALUES (@PARAM1, @PARAM2)";
        SqlCommand cmd = new SqlCommand(query, database.GetConnection());
        //parameterize query
        try
        {
            //execute query
            //error check
            MyTableObject mto = new MyTableObject(database);
            //set object properties to match what's inserted
            return mto;
        }
        catch (SqlException ex)
        {
            //handle exception
            throw;
        }
        finally
        {
            //close connection
        }
    }
    //...
    public static IList<MyTableObject> LookupById(Database database, int id)
    {
        string query = "SELECT * FROM MyTable WHERE ID = @ID";
        SqlCommand cmd = new SqlCommand(query, database.GetConnection());
        //parameterize query
        try
        {
            //execute query
            SqlDataReader reader = cmd.ExecuteReader(...);
            List<MyTableObject> list = new List<MyTableObject>();
            while (reader.Read())
            {
                MyTableObject mto = new MyTableObject(database);
                //set properties in mto
                list.Add(mto);
            }
            return list;
        }
        catch (SqlException ex)
        {
            //handle exceptions
            throw;
        }
        finally
        {
            //close connections
        }
    }
};

There are two functions here that must be created for every single table object derivation. That can be a LOT of code, and most of it is doing the same thing. There are a number of simple ways to handle some of the repetition:

  1. There will be multiple LookupByXXXXX functions. They can all specify the query they will use and pass it to a common function that executes it and returns a list of the class’s objects.
  2. Parameterizing queries can be accomplished by a function that takes a query string and a list of parameters (say, in a struct that describes each parameter), and produces a parameterized SqlCommand, ready for execution.
  3. Other helper functions that do the actual execution and checking of errors.

In the end, however, you are still left with two things that can’t be relegated to helper functions: MyTableObject mto = new MyTableObject(database); and List<MyTableObject> list = new List<MyTableObject>(); One possible solution is to use reflection to dynamically generate the required objects. From a performance and understandability perspective, I don’t think this is a first choice.

Which leaves a factory method. My first attempt involved using generics to simplify this (you will see why shortly). Something like this:

class DatabaseObjectFactory<T> where T : BaseDBObject, new()
{
    public T Create(Database database) { return new T(database); } //error: can't pass arguments to new T()
    public IList<T> CreateList() { return new List<T>(); }
};

This way, I could simply define a function in the base class BaseDBObject, which I could call like this:

Lookup(database, query, arguments, new DatabaseObjectFactory<MyTableObject>());

and that would automagically return a list of the correct objects. The problem with this approach, however, lies in the Create function. .Net can’t pass arguments to a constructor of T. It can only return new T() with no parameters. Nor can you access properties of BaseDBObject through T after creation. Back to the drawing board…

Now I had to face the problem of creating a duplicate inheritance hierarchy of object factories. This is what I had hoped to avoid by using generics. I designed an interface like this:

interface IDatabaseObjectFactory
{
    BaseDBObject Create(Database database);
    IList<BaseDBObject> CreateList();
};

And in each table object derivation I declare a private class and static member like this:

private class MyTableObject : IDatabaseObjectFactory
{
    public BaseDBObject Create(Database database) { return new MyTableObject(database); }
    public IList<BaseDBObject> CreateList() { return new List<MyTableObject>(); }
};
private static IDatabaseObjectFactory s_factory = new MyTableObjectFactory();

Now, I can have a Lookup function in BaseDBObject that accepts an IDatabaseObjectFactory parameter. At the expense of creating a small, simple factory class for each table object that needs it, I can remove roughly 50 lines of code from each of those classes. Smaller code = fewer bugs and easier maintenance.

The base lookup function would look something like this:

protected static IList<BaseDBObject> Lookup(Database database, string query,
    ICollection<QueryArgument> arguments, IDatabaseObjectFactory factory)
{
    //parameterize query
    //execute query
    //ignoring error-handling for sake of brevity
    SqlDataReader reader = cmd.ExecuteReader(...);
    IList<BaseDBObject> list = factory.CreateList();
    while (reader.Read())
    {
        BaseDBObject obj = factory.Create(database);
        obj.Fill(reader);    //another common function that
                             //automatically fills in all properties
                             //of the object from the SqlDataReader
        list.Add(obj);
    }
    return list;
}
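
With that helper in place, each table class’s lookup shrinks to a few lines. A sketch–the QueryArgument constructor shown here is a guess at its shape:

public static IList<BaseDBObject> LookupById(Database database, int id)
{
    QueryArgument[] args = { new QueryArgument("@ID", id) }; //hypothetical constructor
    return Lookup(database, "SELECT * FROM MyTable WHERE ID = @ID", args, s_factory);
}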

But what about MyTableObject.Create()? It’s possible to handle that too, but in a different way. In order to handle inserting rows into a table that uses identity fields (whose values you don’t know until after insertion), I created a utility function that inserts the data using the database, a query string, and QueryArgument objects. Then, instead of creating the object directly, I do a Lookup based on values that I know are unique to the row I just inserted. This ensures I get the most complete object (at the expense of an extra database trip).

Getting the real view under a CPreviewView (MFC)

I had an interesting problem at work the other day. In this MFC application, there are a number of views that can be printed and we support the Print Preview function.

However, in one of these views we rely on getting the view from the frame window in order to handle command updates. This is accomplished with code like this:

pWnd = reinterpret_cast<CBaseView*>(GetActiveView());

However, when you’re in print preview mode, GetActiveView() returns a CPreviewView, not the underlying view (CBaseView). If you look in the source of CPreviewView, you’ll notice that it has a protected member m_pOrigView, which is indeed the one I want. However, there is no way of accessing that value. (I briefly toyed with the idea of directly accessing the memory via its offset from the beginning of the object, but as this software has to run in unpredictable environments, and it’s a horrible idea anyway, I let that go…)

If you try this:

pWnd = (CBaseView*)pWnd->GetDescendantWindow(MAP_WINDOW);

where MAP_WINDOW is the ID of the real view that I want, it won’t work either (it may work in the general case, but it doesn’t work in mine).

I had two options. The first was to just return NULL and have the higher-level functions skip those command updates when the returned view is NULL. This should be done anyway, so I implemented those checks.

However, it still bugged me that I couldn’t get access to the real view. At last I hit on the idea of going through the document (this uses the Doc/View framework).

I used this code and it did exactly what I needed:

CDocument* pDoc = GetActiveDocument();
if (pDoc != NULL)
{
    POSITION pos = pDoc->GetFirstViewPosition();
    CView* pView = NULL;
    do
    {
        pView = pDoc->GetNextView(pos);
        if (pView != NULL && pView->IsKindOf(RUNTIME_CLASS(CBaseView)))
            return (CBaseView*)pView;
    } while (pos != NULL);
}

What’s Wrong with this code? – 2 – Answer

In the last post, I showed some code that had a very simple problem.

The problem is that HIWORD converts the top 16 bits into an unsigned short, which then gets passed as an int padded with zeros–zero-extended, not sign-extended (it’s unsigned). It will NEVER be less than zero. The solution is to cast the value to a signed short before calling the function, or to change the function parameter to a signed short instead of an int.
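
HIWORD is Win32/C++, but the same zero-extension trap is easy to sketch in C# terms (an analogue for illustration only):

int packed = unchecked((int)0xFFFF0000); //high word holds -1 as a signed 16-bit value
int wrong = (ushort)(packed >> 16); //zero-extended: 65535, never negative
int right = (short)(packed >> 16);  //sign-extended: -1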
