Tag Archives: programming

Simple Command Mapping Infrastructure in .Net

Unlike MFC, .Net does not offer a built-in framework for message handling in WinForms applications. MFC developers, for better or worse, have a rather large mechanism that automatically maps command messages from menus, toolbars, and windows into the classes that want them.

.Net has no such all-encompassing framework, though some 3rd-party frameworks exist (perhaps I’ll go over some in a future entry).

The lack of a built-in application framework gives developers more freedom by not locking them into a certain mindset, but it also means that you have to develop your own framework or do everything from scratch in every application.

Short of developing an entire framework, there are some practices you can follow in .Net (and any GUI application, for that matter) to ease management of commands.

I’ll talk about one possibility that I recently chose to implement in a small WinForms application. First of all, let’s look at the overall architecture I’m aiming for:

It’s roughly a Model-View-Controller (MVC) architecture. What I want to do is almost completely isolate the GUI layer from knowledge of how the program actually works. The breakdown of responsibility is as follows:

  1. GUI – contains menus, status bar, views, panels, toolbars, etc. and routes messages from them to the controller via command objects.
  2. Controller – coordinates views, reports, and GUI.
  3. ReportGenerator – the “model” in this architecture. Generates and stores the actual data that will be presented.
  4. Views – Do the drawing for the report in different ways.

With this architecture, the GUI is restricted to doing only a few things, all of them related to the state of the GUI:

  1. Receiving events from toolbar buttons, etc.
  2. Setting menu/button states based on general program state (e.g., report running, idle)
  3. Changing the current view based on a request from the controller.

Command Objects

A popular way to handle commands from toolbars and menus is to use the Command design pattern. The advantages of this pattern are numerous, but as far as isolating the GUI from implementation details goes, it lets us put all the work required to get things done into command objects and the controller.

Here’s a sample design for commands; it’s exceedingly simple.

An interface defines the basic functionality: Execute(). The concrete command classes are constructed with the parameters that tell them what they need to know. When the user selects a command in the GUI, the GUI must then run the associated command.
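The interface itself is tiny; a minimal sketch of it (plus the ViewType enum the examples below assume, inferred from the toolbar mapping further down) could be:

//the single method every command implements
interface ICommand
{
    void Execute();
}

//assumed view types, based on the toolbar/menu mapping shown later
enum ViewType
{
    Pie,
    Pie3D
}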

For example, here’s a command that merely changes the current view:

class ChangeViewCommand : ICommand
{
    private ViewType viewType;

    public ChangeViewCommand(ViewType view)
    {
        this.viewType = view;
    }

    #region ICommand Members

    public void Execute()
    {
        DiskViewController.Controller.SetCurrentView(viewType);
    }

    #endregion
}

When the command is created, it is initialized with the view type. When it’s executed, it merely calls the controller to do the work. The amount of work that is done in the controller versus command objects really depends on the application and individual commands. In this specific case, it might not seem like it’s doing a lot, but it would be easy to add Undo support, logging, coordination among multiple classes, or any other requirement for a command. In addition, the benefit of using command objects even in simple cases like this will become clear shortly.

Associating GUI objects with Command objects

Many applications use menus as well as toolbars with duplicate functionality. There are also keyboard shortcuts that can invoke the same command. An easy way in .Net to associate the GUI with command objects is to use a hash table. The keys consist of the menu and toolbar objects, and the values are the command objects.

//declaration:
private Dictionary<object, ICommand> commandMap = new Dictionary<object, ICommand>();

//toolbar
commandMap[toolStripButtonPie] = new ChangeViewCommand(ViewType.Pie);
commandMap[toolStripButton3DPie] = new ChangeViewCommand(ViewType.Pie3D);
commandMap[toolStripButtonStart] = new StartCommand(controller);
commandMap[toolStripButtonStop] = new StopCommand(controller);

//menu
commandMap[pieToolStripMenuItem] = commandMap[toolStripButtonPie];
commandMap[Pie3DToolStripMenuItem] = commandMap[toolStripButton3DPie];
commandMap[startAnalysisDriveToolStripMenuItem] = commandMap[toolStripButtonStart];
commandMap[stopAnalysisToolStripMenuItem] = commandMap[toolStripButtonStop];

You can see that some commands take the controller as an argument, an alternative to discovering it through a static class variable. Also notice that the menu items reuse the same command objects that are associated with the toolbar buttons.
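For commands that need the controller, the constructor can simply capture the reference. A hypothetical StartCommand might look like the following (StartAnalysis is an assumed controller method, not something shown above):

class StartCommand : ICommand
{
    private DiskViewController controller;

    public StartCommand(DiskViewController controller)
    {
        this.controller = controller;
    }

    public void Execute()
    {
        //delegate the real work to the controller
        controller.StartAnalysis();
    }
}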

In order to run the commands, the GUI needs only to respond to Click events from menus and toolbars, and in many cases the same delegate can be assigned to many of the menu items:

private void OnCommand(object sender, EventArgs e)
{
    if (commandMap.ContainsKey(sender))
    {
        ICommand cmd = commandMap[sender];
        if (cmd != null)
        {
            cmd.Execute();
            //could also do controller.RunCommand(cmd);
        }
    }
    else
    {
        throw new NotImplementedException();
    }
}
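Hooking the controls up to that single handler is then one line per control, typically in the form’s constructor after InitializeComponent(). This is just a sketch using the control names from the map above; the handler can equally be assigned in the designer:

toolStripButtonPie.Click += OnCommand;
toolStripButton3DPie.Click += OnCommand;
toolStripButtonStart.Click += OnCommand;
toolStripButtonStop.Click += OnCommand;
pieToolStripMenuItem.Click += OnCommand;
Pie3DToolStripMenuItem.Click += OnCommand;
startAnalysisDriveToolStripMenuItem.Click += OnCommand;
stopAnalysisToolStripMenuItem.Click += OnCommand;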

In conclusion, using the techniques presented here you can easily split your program into manageable chunks, with each layer doing only what it needs to.

Worse than Y2K–what if gravity changes?

Though the danger to life, civilization, and the future of all that is good and beautiful was greatly oversold, Y2K was still a pretty big deal. It required the detailed analysis and updating of millions of lines of legacy code in all sectors, levels, nooks, and crannies of computer civilization.

We survived, somehow. Planes didn’t fall out of the air. Elevators did not plummet to the basement. Satellites did not launch lasers and nukes at random targets. Cats and Dogs did not start living together.

But what if something even more fundamental than our calendaring system changed?

What if a fundamental assumption about the way Earth functions changed?

Take, for example, gravity. The force of gravity is defined by the following equation:

F = GMm / r²

Constants are:

  • G – universal gravitational constant: 6.6742 × 10⁻¹¹ N·m²/kg²
  • M – mass of the first object. Earth = 5.9742 × 10²⁴ kg
  • m – mass of the second object.
  • r – distance from center to center of the objects. Earth’s radius = 6,378,100 m

This can be simplified for use on Earth to:

F = mg

where

  • m – mass of the object on Earth’s surface
  • g – Earth’s gravitational acceleration at the surface.

We can compute g by setting the two equations equal to each other and canceling the common factor m:

g = GM / r²

Substituting the values above gives g = 9.801585 m/s².

That’s the value that is hard-coded into all the missile launchers, satellite control software, airplane flight control logic, embedded physics math processors, and Scorched Earth games in the world.

So what if it changed? It’s not likely, but it could happen. If a significant amount of mass were added to or taken from the earth due to, say, a catastrophic asteroid hit, gravity could be affected.

But how much would it have to change?

Given the current values, F = mg for 50 kg yields 490.08 N of force on the earth. If earth’s mass increased by 1%, g would be equal to 9.899601, and F would be 494.98 N. Would we feel heavier?

It would certainly destroy precision instrumentation.

However, 1% is a LOT: 5.9742 × 10²² kg. By comparison, the moon’s mass is 7.36 × 10²² kg, and the mass of all known asteroids is less than that. On the other hand, if you think gravity can’t be affected by a reasonable event, read this.

So just to be safe for future modifications, make sure all your software takes as parameters G, M, m, and r, and calculates g as needed. You can never be too careful.
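In that spirit, here’s a tongue-in-cheek sketch (using the constants above) of computing g instead of hard-coding it; the class and method names are made up for the example:

static class Gravity
{
    //universal gravitational constant, N·m²/kg²
    const double G = 6.6742e-11;
    const double EarthMass = 5.9742e24;    //kg
    const double EarthRadius = 6378100.0;  //m

    static double SurfaceGravity(double planetMass, double planetRadius)
    {
        return G * planetMass / (planetRadius * planetRadius);
    }

    static void Main()
    {
        double g = SurfaceGravity(EarthMass, EarthRadius);
        System.Console.WriteLine(g);         //≈ 9.8016 m/s²
        System.Console.WriteLine(50.0 * g);  //≈ 490.08 N for a 50 kg object
    }
}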

😉

Factory Design Pattern to the Rescue: Practical Example

Design patterns really are quite useful. I have a situation in the code I’m working on where I was obviously repeating a lot of the same patterns and code (functions that were 90% the same–the only thing different was the specific class being instantiated): a perfect candidate for factory techniques.

Let’s say we have the following set of classes representing a data access layer meant to abstract some database information from the client code. We have a BaseDBObject class that defines all of the common functionality. We derive from it for each table we want to access.

class BaseDBObject
{
    protected BaseDBObject(Database database) {...}
    public void SetProperty(string name, object value) {...}
    //...more common functionality
}

Derived from this base are lots of classes that implement table-specific database objects. To control object creation, constructors are declared protected and static member functions are used. To wit:

class MyTableObject : BaseDBObject
{
    protected MyTableObject(Database database) : base(database) { }

    public static MyTableObject Create(Database database, int param1, string param2)
    {
        string query = "INSERT INTO MyTable (param1, param2) VALUES (@PARAM1, @PARAM2)";
        SqlCommand cmd = new SqlCommand(query, database.GetConnection());
        //parameterize query
        try
        {
            //execute query
            //error check
            MyTableObject mto = new MyTableObject(database);
            //set object properties to match what's inserted
            return mto;
        }
        catch (SqlException ex)
        {
            //handle exception
        }
        finally
        {
            //close connection
        }
    }
    //...
    public static IList<MyTableObject> LookupById(Database database, int id)
    {
        string query = "SELECT * FROM MyTable WHERE ID = @ID";
        SqlCommand cmd = new SqlCommand(query, database.GetConnection());
        //parameterize query
        try
        {
            //execute query
            SqlDataReader reader = cmd.ExecuteReader(...);
            List<MyTableObject> list = new List<MyTableObject>();
            while (reader.Read())
            {
                MyTableObject mto = new MyTableObject(database);
                //set properties in mto
                list.Add(mto);
            }
            return list;
        }
        catch (SqlException ex)
        {
            //handle exceptions
        }
        finally
        {
            //close connections
        }
    }
}

There are two functions here that must be created for every single table object derivation. That can be a LOT of code, and most of it is doing the same thing. There are a number of simple ways to handle some of the repetition:

  1. There will be multiple LookupByXXXXX functions. They can all specify the query they will use and pass it to a common function that executes it and returns a list of the class’s objects.
  2. Parameterizing queries can be accomplished by a function that takes a query string and a list of parameters (say, in a struct that describes each parameter) and produces a parameterized SqlCommand, ready for execution.
  3. Other helper functions can do the actual execution and error checking.

In the end, however, you are still left with two things that can’t be relegated to helper functions: MyTableObject mto = new MyTableObject(database); and List<MyTableObject> list = new List<MyTableObject>(); One possible solution is to use reflection to dynamically generate the required objects. From a performance and understandability perspective, I don’t think this is a first choice.

Which leaves a factory. My first attempt involved using generics to simplify this (you will see why shortly). Something like this:

class DatabaseObjectFactory<T> where T : BaseDBObject, new()
{
    public T Create(Database database) { return new T(database); }  //will not compile: generics only allow new T() with no arguments
    public IList<T> CreateList() { return new List<T>(); }
}

This way, I could simply define a function in the base class BaseDBObject, which I could call like this:

Lookup(database, query, arguments, new DatabaseObjectFactory<MyTableObject>());

and that would automagically return a list of the correct objects. The problem with this approach, however, lies in the Create function. .Net can’t pass arguments to a constructor of T. It can only return new T() with no parameters. Nor can you access properties of BaseDBObject through T after creation. Back to the drawing board…

Now I had to face the problem of creating a duplicate inheritance hierarchy of object factories. This is what I had hoped to avoid by using generics. I designed an interface like this:

interface IDatabaseObjectFactory
{
    BaseDBObject Create(Database database);
    IList<BaseDBObject> CreateList();
}

And in each table object derivation I declare a private class and static member like this:

private class MyTableObjectFactory : IDatabaseObjectFactory
{
    public BaseDBObject Create(Database database) { return new MyTableObject(database); }
    //IList<T> is not covariant, so the list itself is typed to the base class
    public IList<BaseDBObject> CreateList() { return new List<BaseDBObject>(); }
}
private static IDatabaseObjectFactory s_factory = new MyTableObjectFactory();

Now, I can have a Lookup function in BaseDBObject that accepts an IDatabaseObjectFactory parameter. At the expense of creating a small, simple factory class for each table object that needs it, I can remove roughly 50 lines of code from each of those classes. Smaller code = fewer bugs and easier maintenance.

The base lookup function would look something like this:

protected static IList<BaseDBObject> Lookup(Database database, string query,
    ICollection<QueryArgument> arguments, IDatabaseObjectFactory factory)
{
    SqlCommand cmd = new SqlCommand(query, database.GetConnection());
    //parameterize query
    //execute query
    //ignoring error handling for the sake of brevity
    SqlDataReader reader = cmd.ExecuteReader(...);
    IList<BaseDBObject> list = factory.CreateList();
    while (reader.Read())
    {
        BaseDBObject obj = factory.Create(database);
        obj.Fill(reader);    //another common function that
                             //automatically fills in all properties
                             //of the object from the SqlDataReader
        list.Add(obj);
    }
    return list;
}
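To see the payoff, here’s roughly what a table-specific lookup shrinks to once it can delegate to the base Lookup (the QueryArgument constructor shown here is an assumption):

public static IList<MyTableObject> LookupById(Database database, int id)
{
    List<QueryArgument> arguments = new List<QueryArgument>();
    arguments.Add(new QueryArgument("@ID", id));

    IList<BaseDBObject> rows = Lookup(database,
        "SELECT * FROM MyTable WHERE ID = @ID", arguments, s_factory);

    //convert back to the derived type for callers
    List<MyTableObject> list = new List<MyTableObject>(rows.Count);
    foreach (BaseDBObject row in rows)
    {
        list.Add((MyTableObject)row);
    }
    return list;
}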

But what about MyTableObject.Create()? It’s possible to handle it in a similar way, but I took a slightly different approach. In order to handle inserting rows into a table that uses identity fields (whose values you don’t know until after creation), I created a utility function that inserts the data given the database, a query string, and QueryArgument objects. Then, instead of creating the object directly, I do a Lookup based on values that I know are unique to the row I just inserted. This ensures I get the most complete object (at the expense of an extra database trip).
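A rough sketch of that approach, where InsertRow stands in for the insert utility just described and the QueryArgument constructor is again assumed, might be:

public static MyTableObject Create(Database database, int param1, string param2)
{
    //insert the row using the shared utility function
    InsertRow(database,
        "INSERT INTO MyTable (param1, param2) VALUES (@PARAM1, @PARAM2)",
        new QueryArgument[] { new QueryArgument("@PARAM1", param1),
                              new QueryArgument("@PARAM2", param2) });

    //then re-read the row using values known to be unique to it
    IList<BaseDBObject> rows = Lookup(database,
        "SELECT * FROM MyTable WHERE param1 = @PARAM1 AND param2 = @PARAM2",
        new QueryArgument[] { new QueryArgument("@PARAM1", param1),
                              new QueryArgument("@PARAM2", param2) },
        s_factory);
    return rows.Count > 0 ? (MyTableObject)rows[0] : null;
}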

Getting the real view under a CPreviewView (MFC)

I had an interesting problem at work the other day. In this MFC application, there are a number of views that can be printed and we support the Print Preview function.

However, in one of these views we rely on getting the view from the frame window in order to handle command updates. This is accomplished with code like this:

pWnd = reinterpret_cast<CBaseView*>(GetActiveView());

However, when you’re in print preview mode, GetActiveView() returns a CPreviewView, not the underlying view (CBaseView). If you look in the source of CPreviewView, you’ll notice that it has a protected member m_pOrigView, which is indeed the one I want. However, there is no way of accessing that value. (I briefly toyed with the idea of directly accessing the memory via its offset from the beginning of the object, but as this software has to run in unpredictable environments, and it’s a horrible idea anyway, I let that go…)

 If you try this:

pWnd = (CBaseView*)pWnd->GetDescendantWindow(MAP_WINDOW);

Where MAP_WINDOW is the ID of the real view that I want, it won’t work (it may work in the general case, but it doesn’t work in my case).

I had two options. The first was to simply return NULL and have the higher-level functions skip those command updates when the returned view is NULL. Those checks should be done anyway, so I implemented them.

However, it still bugged me that I couldn’t get access to the real view. At last I hit on the idea of going through the document (this uses the Doc/View framework).

 I used this code and it did exactly what I needed:

CDocument* pDoc = GetActiveDocument();
if (pDoc != NULL)
{
    POSITION pos = pDoc->GetFirstViewPosition();
    CView* pView = NULL;
    do
    {
        pView = pDoc->GetNextView(pos);
        if (pView != NULL && pView->IsKindOf(RUNTIME_CLASS(CBaseView)))
            return (CBaseView*)pView;
    } while (pos != NULL);
}

 

Rhythmic Programming

Has anyone else ever had the experience of typing code in such a way that you build up an actual rhythm, patterns, a definable velocity punctuated by occasional flourishes? 

I found that happening today. I’m coding up a well-understood pattern in this application and so I can type quite a bit in long spurts. I find that I’m almost typing in “sentences” as I go…it’s very interesting…kind of odd…

Unsupported Frameworks…

Programming with a framework that you’ve developed yourself can be annoying when you compare it to the ease of IDE-supported frameworks. MFC is a nice framework primarily because Visual C++ has so much built-in support for it.

My little framework has no such support (and I have no ambitions to build IDE support for it), and that can make it frustrating to do all the repetitive stuff (creating the initial window and message handling, for example).

Another thing that MFC does that I want to avoid is making extensive use of macros. Macros are evil in my book. I’ve been bitten. I know they can be useful in limited circumstances, but MFC accomplishes a lot of its magic with macros. And unfortunately, that’s why it can sometimes be a problem–it becomes magic instead of straightforward code.

Programming as a hobby

People are often amazed when I tell them that programming is not just a job–it’s also my hobby. I know that it’s one of the main reasons I was immediately considered for the job I have now. After looking at my CV, my now-manager headed to my website and saw that I had done a number of personal projects.

It’s the whole reason I think I have excelled beyond everything I learned in school in the last few years. It’s one of the reasons I’m picking up so much practical knowledge. Working on my own projects lets me do fun things at my own pace (though I still try to apply some pressure to get things done). I always try to do things I’ve never done before.

I learned a TON making BRayTracer–about program organization, unit testing, optimization, user interface design, architectural levels, and a whole lot about .Net. There are still so many things I want to add to it so I can learn more.

It’s not something you can do just in an attempt to prove to future employers that you’re hard-core. You have to love it. There are a lot of other fun things in life. I just happen to love writing code, and I try to spend a lot of time outside work doing just that.

All other things being equal, somebody who programs as a hobby will be a better programmer than one just in it for a job.

Finding time is always difficult, though. Work is stressful, and sometimes you need to get off the computer. Still, I’ve got some cool utilities planned, a Pocket PC game, and who knows what else. I’m keeping track of ideas, and I’ll just have to start small and work on them one at a time.

What’s Wrong with this code? – 2 – Answer

In the last post, I showed some code that had a very simple problem.

The problem is that HIWORD returns the 16 bits as an unsigned short, which then gets passed as an int padded with zeros–zero-extended, not sign-extended, because the value is unsigned. It will NEVER be less than zero. The solution is to cast the value to a signed short before calling the function, or to change the function’s parameter to a signed short instead of an int.
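The original is Win32/C++, but the same zero-extension behavior is easy to see in a quick C# analog (an illustration only, not the code from the post):

ushort hi = 0xFFFF;            //what HIWORD effectively yields: an unsigned 16-bit value
int zeroExtended = hi;         //65535: zero-extended, never negative
int signExtended = (short)hi;  //-1: cast to a signed short first to get sign extension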

 

The First Thing in an XML file needs to be…

I just discovered a fun tidbit of information. I was using NUnit to test a number of assemblies in a single project, and many functions need an app.config file. The instructions on how to do this can be found in various places, but I was having a problem that nobody else seemed to have.

My config file looked like this:

<!-- config values for nunit -->
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <appSettings>
       <add key="…" value="……" />
    </appSettings>
</configuration>

Every time NUnit started up that project, it would fail with an assembly load error:

System.IO.FileNotFoundException : File or assembly name nunit.core, or one of its dependencies, was not found.

For further information, use the Exception Details menu item.

My first thought to that was “Huh?!” It was a highly annoying problem, because I thought I was following the instructions exactly and it should have worked!

Do you see the problem? It’s this line:

<?xml version="1.0" encoding="utf-8" ?>

The XML declaration needs to be the very first thing in the file (or be removed completely); nothing can come before it, not even a comment. Voila, everything works.
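For reference, the working file simply has the declaration first (the placeholder key and value are as in the original):

<?xml version="1.0" encoding="utf-8" ?>
<!-- config values for nunit -->
<configuration>
    <appSettings>
       <add key="…" value="……" />
    </appSettings>
</configuration>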