Category Archives: Software Development

Factory Design Pattern to the Rescue: Practical Example

Design patterns really are quite useful. I have a situation in the code I’m working on where I was obviously repeating a lot of the same patterns and code (functions that were 90% the same–the only thing different was the specific class being instantiated): a perfect candidate for factory techniques.

Let’s say we have the following set of classes representing a data access layer meant to abstract some database information from the client code. We have a BaseDBObject class that defines all of the common functionality. We derive from it for each table we want to access.

class BaseDBObject
{
    protected BaseDBObject(Database database) {...}
    public void SetProperty(string name, object value) {...}
    //...more common functionality
}

Derived from this base are lots of classes that implement table-specific database objects. To control object creation, constructors are declared protected and static member functions are used. To wit:

class MyTableObject : BaseDBObject
{
    protected MyTableObject(Database database) : base(database) { }
    public static MyTableObject Create(Database database, int param1, string param2)
    {
        string query = "INSERT INTO MyTable (param1, param2) VALUES (@PARAM1, @PARAM2)";
        SqlCommand cmd = new SqlCommand(query, database.GetConnection());
        //parameterize query
        try
        {
            //execute query
            //error check
            MyTableObject mto = new MyTableObject(database);
            //set object properties to match what's inserted
            return mto;
        }
        catch (SqlException ex)
        {
            //handle exception
            throw;
        }
        finally
        {
            //close connection
        }
    }
    //...
    public static IList<MyTableObject> LookupById(Database database, int id)
    {
        string query = "SELECT * FROM MyTable WHERE ID = @ID";
        SqlCommand cmd = new SqlCommand(query, database.GetConnection());
        //parameterize query
        try
        {
            //execute query
            SqlDataReader reader = cmd.ExecuteReader();
            List<MyTableObject> list = new List<MyTableObject>();
            while (reader.Read())
            {
                MyTableObject mto = new MyTableObject(database);
                //set properties in mto
                list.Add(mto);
            }
            return list;
        }
        catch (SqlException ex)
        {
            //handle exceptions
            throw;
        }
        finally
        {
            //close connections
        }
    }
}

There are two functions here that must be created for every single table object derivation. That can be a LOT of code, and most of it is doing the same thing. There are a number of simple ways to handle some of the repetition:

  1. There will be multiple LookupByXXXXX functions. They can all specify the query they will use and pass it to a common function that executes it and returns a list of the class’s objects.
  2. Parameterizing queries can be accomplished by a function that takes a query string and a list of parameters (say, in a struct that describes each parameter) and produces a parameterized SqlCommand, ready for execution.
  3. Other helper functions that do the actual execution and checking of errors.

In the end, however, you are still left with two things that can’t be relegated to helper functions: MyTableObject mto = new MyTableObject(); and List<MyTableObject> list = new List<MyTableObject>(); One possible solution is to use reflection to dynamically generate the required objects. From a performance and understandability perspective, I don’t think this is a first choice.
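For illustration, here is roughly what the reflection route could look like (the Database and MyTableObject classes below are bare stubs, not the post's real ones): Activator.CreateInstance can invoke a non-public constructor with arguments, but every call pays reflection overhead and mistakes surface only at runtime.

```csharp
using System;
using System.Reflection;

class Database { }

class MyTableObject
{
    // Same protected-constructor convention as the table objects above.
    protected MyTableObject(Database database) { }
}

class Program
{
    static void Main()
    {
        // Reflection can call the protected constructor with arguments,
        // sidestepping the access restrictions--at a cost in performance
        // and with no compile-time checking of the argument list.
        var mto = (MyTableObject)Activator.CreateInstance(
            typeof(MyTableObject),
            BindingFlags.Instance | BindingFlags.NonPublic,
            null,
            new object[] { new Database() },
            null);
        Console.WriteLine(mto.GetType().Name); // prints "MyTableObject"
    }
}
```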

Which leaves a factory method. My first attempt involved using generics to simplify this (you will see why shortly). Something like this:

class DatabaseObjectFactory<T> where T : BaseDBObject, new()
{
    public T Create(Database database) { return new T(database); }    //won't compile--see below
    public IList<T> CreateList() { return new List<T>(); }
}

This way, I could simply define a function in the base class BaseDBObject, which I could call like this:

Lookup(database, query, arguments, new DatabaseObjectFactory<MyTableObject>());

and that would automagically return a list of the correct objects. The problem with this approach, however, lies in the Create function. .Net generics can’t pass arguments to a constructor of T: the new() constraint only permits a parameterless new T(). Back to the drawing board…

Now I had to face the problem of creating a duplicate inheritance hierarchy of object factories. This is what I had hoped to avoid by using generics. I designed an interface like this:

interface IDatabaseObjectFactory
{
    BaseDBObject Create(Database database);
    IList<BaseDBObject> CreateList();
}

And in each table object derivation I declare a private class and static member like this:

private class MyTableObjectFactory : IDatabaseObjectFactory
{
    public BaseDBObject Create(Database database) { return new MyTableObject(database); }
    public IList<BaseDBObject> CreateList() { return new List<BaseDBObject>(); }
}
private static IDatabaseObjectFactory s_factory = new MyTableObjectFactory();

Now, I can have a Lookup function in BaseDBObject that accepts an IDatabaseObjectFactory parameter. At the expense of creating a small, simple factory class for each table object that needs it, I can remove roughly 50 lines of code from each of those classes. Smaller code = fewer bugs and easier maintenance.
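To see all the moving parts together, here is a stripped-down, runnable sketch of the arrangement. This is a toy model, not the post's actual code: Database is an in-memory list of records standing in for real tables, and Fill reads from a dictionary rather than a SqlDataReader.

```csharp
using System;
using System.Collections.Generic;

// In-memory stand-in for the post's Database class, so the sketch runs without SQL Server.
class Database
{
    public List<Dictionary<string, object>> Rows = new List<Dictionary<string, object>>();
}

interface IDatabaseObjectFactory
{
    BaseDBObject Create(Database database);
    IList<BaseDBObject> CreateList();
}

abstract class BaseDBObject
{
    protected BaseDBObject(Database database) { }
    public int Id { get; set; }

    // Stand-in for the common Fill(SqlDataReader) helper mentioned in the post.
    public virtual void Fill(Dictionary<string, object> record) { Id = (int)record["ID"]; }

    // The shared lookup: only the factory knows which concrete type to build.
    protected static IList<BaseDBObject> Lookup(Database database, IDatabaseObjectFactory factory)
    {
        IList<BaseDBObject> list = factory.CreateList();
        foreach (var record in database.Rows)        // stands in for reader.Read()
        {
            BaseDBObject obj = factory.Create(database);
            obj.Fill(record);
            list.Add(obj);
        }
        return list;
    }
}

class MyTableObject : BaseDBObject
{
    protected MyTableObject(Database database) : base(database) { }

    // Small per-table factory; being nested, it can reach the protected constructor.
    private class MyTableObjectFactory : IDatabaseObjectFactory
    {
        public BaseDBObject Create(Database database) { return new MyTableObject(database); }
        public IList<BaseDBObject> CreateList() { return new List<BaseDBObject>(); }
    }
    private static readonly IDatabaseObjectFactory s_factory = new MyTableObjectFactory();

    public static IList<BaseDBObject> LookupAll(Database database)
    {
        return Lookup(database, s_factory);
    }
}

class Program
{
    static void Main()
    {
        var db = new Database();
        db.Rows.Add(new Dictionary<string, object> { { "ID", 7 } });
        IList<BaseDBObject> list = MyTableObject.LookupAll(db);
        Console.WriteLine(list.Count + " " + list[0].GetType().Name); // prints "1 MyTableObject"
    }
}
```

The derived class pays only a handful of lines for its factory, and every Lookup variant collapses into a call to the shared base function.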

The base lookup function would look something like this:

protected static IList<BaseDBObject> Lookup(Database database, string query, ICollection<QueryArgument> arguments, IDatabaseObjectFactory factory)
{
    //parameterize query
    //execute query
    //ignoring error-handling for sake of brevity
    SqlDataReader reader = cmd.ExecuteReader();
    IList<BaseDBObject> list = factory.CreateList();
    while (reader.Read())
    {
        BaseDBObject obj = factory.Create(database);
        obj.Fill(reader);    //another common function that
                             //automatically fills in all properties
                             //of the object from the SqlDataReader
        list.Add(obj);
    }
    return list;
}

But what about MyTableObject.Create()? It can be handled too, though in a different way. In order to handle inserting rows into a table that uses identity fields (whose values you don’t know until after insertion), I created a utility function that inserts the data using the database, a query string, and QueryArgument objects. Then, instead of creating the object directly, I do a Lookup based on values that I know are unique to the row I just inserted. This ensures I get the most complete object (at the expense of an extra database trip).
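The insert-then-lookup idea can be shown in miniature. The Table and Row classes below are hypothetical stand-ins: an in-memory list plays the role of a table with an identity column, so the sketch runs without a database.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Row { public int Id; public string Name; }

class Table
{
    private readonly List<Row> _rows = new List<Row>();
    private int _nextId = 1;   // stands in for an identity column

    // Insert returns only the generated key, just as a real INSERT would.
    public int Insert(string name)
    {
        var row = new Row { Id = _nextId++, Name = name };
        _rows.Add(row);
        return row.Id;
    }

    // The follow-up lookup re-reads the row, so the caller gets every
    // database-generated value, not just what it supplied.
    public Row LookupById(int id) { return _rows.First(r => r.Id == id); }
}

class Program
{
    static void Main()
    {
        var table = new Table();
        int id = table.Insert("first");
        Row row = table.LookupById(id);   // complete object, one extra "trip"
        Console.WriteLine(row.Id + " " + row.Name); // prints "1 first"
    }
}
```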

Rhythmic Programming

Has anyone else ever had the experience of typing code in such a way that you build up an actual rhythm, patterns, a definable velocity punctuated by occasional flourishes? 

I found that happening today. I’m coding up a well-understood pattern in this application and so I can type quite a bit in long spurts. I find that I’m almost typing in “sentences” as I go…it’s very interesting…kind of odd…

Unsupported Frameworks…

Programming with a framework that you’ve developed yourself can be annoying when you compare it to the ease of IDE-supported frameworks. MFC is a nice framework primarily because Visual C++ has so much built-in support for it.

My little framework has no such support (and I have no ambitions to build IDE support for it), and that can make all the repetitive stuff frustrating (creating the initial window and message handling, for example).

Another thing MFC does that I want to avoid is making extensive use of macros. Macros are evil in my book. I’ve been bitten. I know they can be useful in limited circumstances, but MFC accomplishes a lot of its magic with macros. And unfortunately, that’s why it can sometimes be a problem–it becomes magic instead of straightforward code.

Programming as a hobby

People are often amazed when I tell them that programming is not just a job–it’s also my hobby. I know it’s one of the main reasons I was immediately considered for the job I have now. After looking at my CV, my now-manager headed to my website and saw that I had done a number of personal projects.

It’s the whole reason I think I have excelled beyond everything I’ve learned in school in the last few years. It’s one of the reasons I’m learning so much practical knowledge. Working on my own projects lets me do fun things at my own pace (I still try to apply some pressure to get things done). I always try to do things I’ve never done before.

I learned a TON making BRayTracer–about program organization, unit testing, optimization, user interface design, architectural levels, and a whole lot about .Net. There are still so many things I want to add to it so I can learn more.

It’s not something you can do just in an attempt to prove to future employers that you’re hard-core. You have to love it. There are a lot of other fun things in life. I just happen to love writing code, and I try to spend a lot of time outside work doing just that.

All other things being equal, somebody who programs as a hobby will be a better programmer than one just in it for a job.

Finding time is always difficult, though. Work is stressful, and sometimes you need to get off the computer. Still, I’ve got some cool utilities planned, a Pocket PC game, and who knows what else. I’m keeping track of ideas, and I’ll just have to start small and work on one at a time.

Still No Silver Bullet…

Much is being made lately of vulnerabilities in Mac OS X; some people are haughtily denigrating the Mac, while others are pooh-poohing the findings with bad logic.

All of the ridiculous claims of “My OS is [better | more secure | safer] than your OS” are getting old. All these problems really do is show us, once again, that there is no silver bullet in software design.

Why Developer Certification Doesn’t Make Sense

Much has been said about the pros and cons of requiring software engineers to be certified, just like in the medical, law, and engineering fields.

I personally do not believe this should happen. First of all, those other certified fields have existed for thousands of years. Computer Science is not even a century old. The field is incredibly immature. Sure, we have fancier tools, and we can do some amazing things, but we’re still in infancy!

Look at the best software makers you can think of. I won’t name names. Think of the highest quality applications you have ever used. Now think of all the problems and bugs and limitations of that software. If that’s the best we can do, what meaning does certification have?

All the other certified fields have well-established standards that have withstood the tests of time. Computer Science hasn’t had the time. We can’t even agree on the best way to make software!

So go ahead and enforce certification now, but it’s not going to mean anything.

Golden Days

Yesterday was a golden day. Everything I touched turned to gold. I solved all the problems that came up, fixed bugs right and left, and even figured out the root cause of a bug that’s been plaguing us for a month or so.

Some days are like that. I like days like this, because I feel like I’m on top of the world: on the one hand, I’m not getting paid enough, but on the other it’s so fun I’d do it for free! (If any of my bosses are reading this, concentrate on the first part of that! 😉)

Today was merely a silver day. Thankfully, nothing went wrong, and I did quite a bit of good stuff. Not quite golden. Maybe I should have a calendar and put gold and silver stars on the days. That’s probably a bit much. But I could have rust-covered frown-faces for those unspeakable days.

Code Security and Typed Assembly Language

Over the summer I’m taking a class called Programming Languages and Security. This is the first time I’ve delved into security at this level. It’s a seminar-style class, which means lots of paper reading, and I am going to give two presentations.

My first presentation was this past Thursday. I spoke about typed assembly language and security automata. It was absolutely fascinating, ignoring the formality of proofs, and all the mathematical notations.

The two papers I discussed were:

The TALx86 paper begins by describing many shortcomings of the Java Virtual Machine Language (bytecode), including:

  • Semantic errors in the bytecode that could have been discovered if a formal model had been used in its design.
  • Difficulty in compiling languages other than Java into bytecode. For example, it’s literally impossible to correctly compile Scheme into bytecode. OK, Scheme is a pretty esoteric language, but…
  • Difficulty even in extending the Java language because of the bytecode limitations
  • Interpretation is slow, and even though JIT compilation is often used these days, it’s not built into the VM

My immediate thought on reading this was, “Hey! .Net addresses each and every single one of these points!”

  • The CLR defines a minimal subset of functionality that must be supported by every .Net language–allowing over 40 languages to be compiled to MSIL
  • As a bonus, MSIL is typed (as is Java bytecode)
  • Just-In-Time compilation was designed in from the beginning and generally has superior performance to Java (in my experience)

It also seems that many of the experimental features present in early research such as TALx86 have ended up in .Net and compilers these days. Type safety is being pushed lower and lower. Security policies are being implemented in frameworks, operating systems, and compilers, and there are other tools that analyze your code for adherence to security best practices.

On platforms such as .Net, type safety is more important because you can have modules written in VB.Net interacting with objects written in C++ or Python, for example. Those languages don’t know about each other’s types, but at the MSIL level you can ensure safety.

If you’d like, a copy of the presentation is available.

Managing Complexity – Part 2

Last time, I covered some generalities and anecdotes about minimizing complexity in my life. Today, I have a few more thoughts about complexity as it relates to software.

Software engineering has continually evolved since the inception of computer programming languages. One way of looking at that evolution is to see it in terms of improvements on complexity management. (This is a bit simplistic, since computers have simultaneously become much more complex.)

The first computers were simple by today’s standards, but the programming methodology was very complex: dials, levers, buttons, or physically connecting wires. Then machine language was developed: binary code could be entered on cards, later directly into memory, and interpreted.

These early languages required a perfect familiarity with the machine. If the machine changed, the code changed.

Over the years, the advances in languages have largely been a process of hiding the machine’s underlying complexity. ALGOL came along, hid the machine code, and provided a foundation for later languages. C built further by providing both structured programming tools and an abstraction of the machine’s language–one foot in each world.

Terminals began to have graphics capabilities, and Smalltalk was developed to further hide the complexities of both growing code modules and user interface elements. Java hid the complexities of lower-level languages like C and C++, and even took away the concept of a physical machine, substituting its own virtual machine, theoretically consistent across all physical platforms. C# has done much the same for Windows–hiding the complexity of thousands of APIs in a hierarchical, intuitive framework of managed code.

Modern processors are beasts of infinite complexity and power compared to the early hulking iron giants, but the languages we use hide nearly all of the complexity that our forebears had to deal with on a daily basis.

Now it looks like I’ve really been writing about abstraction. The two are strongly related, but I don’t think they’re exactly the same thing. Abstraction is thinking at a higher level; minimizing complexity is thinking less.

Modern languages both abstract away lower level concerns and provide tools to minimize the complexity of things at the highest level.

There is an increasing proliferation of visual tools, beginning with GUI editors but now including visual code designers. Aspect-oriented programming and attributes are allowing complexity to be minimized further.

In the future, tools such as these, and the increased use of commercial off-the-shelf (COTS) components, will become vital to accomplishing anything. Software complexity will only increase, but hopefully the trend of tools that minimize complexity will also continue.

Perhaps somebody (not me!) should investigate the theory of a total complexity quotient–a measure of the complexity of a system plus the complexity of the tools used to develop and manage that system. With this number we could measure whether complexity overall is increasing or decreasing, and where the crossover point lies.

Managing Complexity

Software engineers know that one of the keys to achieving development goals is effective complexity management. The single best way of managing complexity is to remove it from the system completely.

As a simple example, in an earlier version of my BRayTracer project (I really need to come up with a better name!), scene objects were edited by double-clicking on them in the tree view, which opened up a dialog window that displayed the properties of the scene object. The user could make changes and then click OK or Cancel.

This data flow introduced complexity by requiring:

  • An additional window to manage
  • Moving data between the main window and the dialog
  • Making a copy of the scene object
  • Allowing the user to cancel or undo previous actions

The functionality of being able to cancel changes necessitated most of that complexity.

While this example wasn’t overly complex, I still felt there was a better, simpler way. You can probably see a number of possible solutions:

  1. Implement a general undo system (more complexity)
  2. Don’t allow users to cancel changes–they’re committed! (possible, but wise?)
  3. Eliminate the dialog by having a child window on the main form (does not allow cancelling, but removes the additional dialog)
  4. Rethink how objects are edited from the ground up (expensive)

I went with option 3. Obviously, there’s a tradeoff here. I sacrificed some functionality for the sake of simplifying the interface and the code. In fact, in usability testing, many users wanted a way to cancel or undo changes. Someday, I’ll have to go back and add a way of doing it. This illustrates the principle that sometimes complexity is unavoidable for certain features (Undo support, for example, is nearly always very complex to implement effectively) and that often what is really going on is shifting the burden of complexity somewhere else.

Minimization of complexity is also tightly coupled to the principle of optimality (ah…but optimality of what?).


The tendency of developers (at least I assume it’s a tendency–I am, of course, generalizing from my own experience and those people I’ve observed) to minimize complexity is something that can carry over to our normal lives. I notice this myself in a number of ways, most of which probably aggravate Leticia to no end 🙂

  • When driving, I almost always get in the “optimum” lane as soon as possible, that is, the lane from which I’ll have to make the fewest changes. Changing lanes adds to complexity. Having to worry about changing lanes when it becomes crucial adds too much complexity. While there are exceptions for painfully slow people, I change lanes only about 6 times on my 35-mile commute (4 roads and 4 highways).
  • When I cook, everything is optimized for minimal disruption. All ingredients and equipment are pregathered and then prepared in a way which minimizes work, time, and coordination.
  • When I watch movies at home, I try to optimize the experience by first queuing up the movie, setting up the environment and volume, and then making popcorn. That way, once the popcorn is done we can begin watching immediately, while it’s hot. I almost can’t watch a movie without popcorn.