My Wife’s Logic (or Women’s Logic Explained?)

For all of you who learned boolean algebra in your CS courses in college, I am sorry to be the bearer of bad news: your education was incomplete. The list of boolean tautologies and truth tables that you may have memorized or learned over time was wrong, with some startling and glaring errors.

To rectify this, I present some new truth.

First, an example from real life, which really happened. For context: Leticia, my wife, has beautiful olive skin, dark brown eyes, and hair in various shades of brown, while the little girl in question was as white as can be.

Leticia: Look at that little girl–she’s so beautiful! Do you think we’ll ever have a girl who looks like her?

Me: No.

L: So you think our daughter will be ugly!

M: Uhhh……no. I don’t think she’ll be white.

Transforming this little conversation into boolean logic:

A: This little girl is beautiful

B: Our future daughter will look like this girl

C: Our future daughter will be beautiful

So my wife says that AB–>C, and that if I say !B, then !B–>!C. I always knew the first part, but what I didn’t know is that the other options I thought existed aren’t actually valid (i.e., that !B–>C can also be true–in other words, that C can be true regardless of the value of B). Who knew! So I present below corrected truth tables. Wikipedia, take note.

Standard Truth Table for Implication

X Y X–>Y
F F T
F T T
T F F
T T T

Improved Truth Table for Implication

X Y X–>Y
F F T
F T F
T F F
T T T

So you see that the correct form of implication when dealing with this logic is the same as equality.
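For the programmers in the audience, the two truth tables can be sketched in a few lines of C# (the class and method names are my own invention):

```csharp
using System;

class MaritalLogic
{
    // Standard material implication: only false when X is true and Y is false.
    public static bool Implies(bool x, bool y)
    {
        return !x || y;
    }

    // The "improved" implication from the conversation: identical to equality.
    public static bool ImprovedImplies(bool x, bool y)
    {
        return x == y;
    }

    static void Main()
    {
        Console.WriteLine("X\tY\tX->Y\tImproved");
        foreach (bool x in new bool[] { false, true })
        {
            foreach (bool y in new bool[] { false, true })
            {
                Console.WriteLine("{0}\t{1}\t{2}\t{3}",
                    x, y, Implies(x, y), ImprovedImplies(x, y));
            }
        }
    }
}
```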

Now you know, beware.

The users are in control

I really enjoyed and appreciated this essay from Raganwald about the user experience at work versus that of their home PC environments (among other topics).

I particularly liked the point:

And meanwhile, the very same users could walk across the street and buy themselves a much better PC for less money than we pay and take it home the same day.

Ain’t that the truth. I put together my Core 2 Duo system for the same price as my crappy Pentium 4 hyperthreaded number at work. The time frames were not that far apart. The Core 2 runs circles around this sick puppy.

A company’s philosophy should be to get users (especially developers like me!) whatever hardware/software they need immediately. Within minutes or hours, not days or weeks. Of course, then you have to trust your employees to make good requests. But if you don’t trust them to know what they need, why trust them to do their job at all?

The essay goes on to talk about writing applications that take advantage of modern PC horsepower. I think I’m doing an ok job of this at work now. For example, we have a database of assets that is continually growing. It used to be we could view all of the assets on a single page that took about 30 seconds to load off-site.

Now that list will take several minutes to bring up. Yeah, we’re growing. So we need tools to help manage all of that information. One thing I’m building right now (as soon as I’m done writing this, as a matter of fact) is a quick filtering functionality on a desktop app that talks to the database. The list of assets is filtered as you type, taking advantage of the fast PCs we have these days.
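The core of that filter-as-you-type idea is simple enough to sketch (the method and names here are my illustration, not the actual app):

```csharp
using System;
using System.Collections.Generic;

static class AssetFilter
{
    // Re-run on every keystroke. A case-insensitive substring match over a
    // few tens of thousands of names is easily fast enough on a modern PC.
    public static List<string> Filter(IList<string> assets, string typedSoFar)
    {
        List<string> matches = new List<string>();
        foreach (string asset in assets)
        {
            if (asset.IndexOf(typedSoFar, StringComparison.OrdinalIgnoreCase) >= 0)
                matches.Add(asset);
        }
        return matches;
    }
}
```

In a WinForms app you would call this from the search box’s TextChanged handler and rebind the results list.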

That’s just one example. I can think of others that are immediately useful in business apps:

  • better visualization – it takes time and thought to develop good data visualization, but the results are usually worth it
  • drag & drop support – does it make sense to drag assets from one customer to another? I don’t know, maybe.
  • dynamic views – use all that processing power to show something more interesting than fields on a scrolling form: graphical views that change in response to context
  • track history, undo/redo – might make sense in some contexts
  • attach more meaningful information – pictures, videos, documents, whatever; with stuff like WPF, it’s easier than ever to display varied content


Don’t ignore naive or "stupid" algorithms — hardware is cheap and fast

I just had a nice reality check. Sort of pleasant in that I realized I could save a LOT of memory (from 35 MB down to 9 MB), but also aggravating because I had spent probably 10-20 hours developing a clever algorithm designed for speed.

Lesson learned. I should have built the naive version first. Instead, I wrote two successively more “brilliant” versions that jumped through all sorts of hoops to get the most speed out of it. Of course, to do this, they took up all sorts of memory with indexes, and index creation was starting to take 10 seconds or longer.

I just wrote the naive version and realized I could have done that in about 5 minutes and saved many hours of tweaking. The component is a type of indexing component, so there were three metrics: index creation time, lookup time, index size. Here’s a rough comparison just to give an idea:

                      Clever Algorithm    Naive Algorithm
Index Creation Time   10s                 0.3s
Lookup Time           0.0001s             0.005s
Index Size            35MB                9MB
# items               ~27,000             ~27,000

Pretty impressive speed numbers, aren’t they? That clever algorithm really rocks. And it would be awesome if I were doing a lot of consecutive searching, but the searching in my app is tied to the UI, thus to the user, so in reality 0.005 seconds is not that much different from 0.0001 seconds. <sigh>

The numbers above are from my main machine, which is a Core 2 Duo. Just to be safe I tested the naive algorithm on my 4-year-old Pentium 4 laptop to validate that it still has acceptable performance on an older machine. The creation takes 0.05 seconds, but lookup time isn’t much slower, if at all.

And 9MB index is MUCH better than a 35MB index.
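For the curious, the naive side of that comparison is essentially just a linear scan–something like this sketch (my illustration, not the actual component):

```csharp
using System;
using System.Collections.Generic;

static class NaiveIndex
{
    // Naive lookup: there is no index to build or store; just scan the list.
    // Over ~27,000 items this takes a few milliseconds--plenty fast for a UI.
    public static string Find(IList<string> items, string prefix)
    {
        foreach (string item in items)
        {
            if (item.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
                return item;
        }
        return null;
    }
}
```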

In summary, lessons learned:

  1. Hardware is cheap and fast. Don’t waste time optimizing for speed if you don’t have to. While there are signs that raw per-core speed is plateauing as multiple cores become more important, in general, processing power is always increasing.
  2. If something runs in response to user input, raw speed isn’t critical (as long as it finishes faster than human response time).
  3. Every application is different, so measure and think critically. If my app needed to run the search 100 times per second, the clever algorithm would definitely be better.
  4. There is almost always a tradeoff between speed and size. Which is more important depends on the app.
  5. Write the dumb algorithm first. It might be good enough and you’ll save yourself hours of development and debugging time.


Farewell, Robert Jordan

According to his blog, Robert Jordan passed away yesterday. He fought a tough illness for quite a while. I became a big fan of The Wheel of Time a few years ago and forced myself to stop reading the books until the final one comes out.

I loved the books because they were immense, detailed, complex, and very engaging. He was in the middle of writing the final book, and he left notes and gave an oral narration of the end of the series, but it just won’t be the same.

The Fountain

We just got Darren Aronofsky’s The Fountain in the Netflix mail today, and we loved it. Definitely worth watching–a thinking movie and a feast for the eyes. The use of light was spectacular. It was in the same realm as What Dreams May Come (though I liked that one better), but it also made me think of Orson Scott Card’s Speaker for the Dead. I’m afraid that saying why would give too much of the movie away. It just needs to be seen and experienced.

See it–a wonderful movie.

How to measure memory use in .Net programs

In developing an important component (which I will discuss soon) for my current personal project, there were a number of different algorithms which I could use to attack the problem. I wasn’t sure which one would be better so I decided to implement each of them and measure their time/memory usage. I have a series of articles planned on those, but first I need to mention how I did the measurement.


For the timing, I simply wrapped the API functions for QueryPerformanceCounter and QueryPerformanceFrequency into a C# class. There are plenty of examples out there on the net to do this.
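A minimal version of such a wrapper might look like the following (Windows-only, since it P/Invokes kernel32.dll; the class and member names are mine). Note that System.Diagnostics.Stopwatch wraps these same APIs if you’d rather not write it yourself.

```csharp
using System;
using System.Runtime.InteropServices;

class HighResTimer
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long frequency);

    private long startCount;
    private long stopCount;
    private readonly long frequency;  // counts per second, fixed at boot

    public HighResTimer()
    {
        QueryPerformanceFrequency(out frequency);
    }

    public void Start() { QueryPerformanceCounter(out startCount); }
    public void Stop()  { QueryPerformanceCounter(out stopCount); }

    // Elapsed seconds between the last Start/Stop pair.
    public double Seconds
    {
        get { return (stopCount - startCount) / (double)frequency; }
    }
}
```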

The memory usage is even simpler. The function you need is GC.GetTotalMemory(bool forceFullCollection). It returns the number of bytes the runtime currently believes to be allocated. The little program below demonstrates how to use it.

using System;

namespace MemUsageTest
{
    class Program
    {
        static void Main(string[] args)
        {
            long memStart = GC.GetTotalMemory(true);
            Int32[] myInt = new Int32[4];
            long memEnd = GC.GetTotalMemory(true);

            long diff = memEnd - memStart;

            Console.WriteLine("{0:N0} bytes used", diff);
        }
    }
}

The output on my 32-bit system is “40 bytes used”–16 bytes for the four integers and 24 bytes of array overhead.

Passing true to GetTotalMemory is important–it gives the garbage collector an opportunity to reclaim all unused memory before the count is taken. There is a caveat, though. If you read the documentation, you’ll notice it says it attempts to discover the number of bytes in use, and that setting forceFullCollection to true only gives the collector a chance to run, waiting a short time before returning. It does NOT guarantee that all unused memory is reclaimed.

As far as my work goes, I’ve noticed that it does a pretty reliable and consistent job.


On the cover of Wired magazine

OK, it’s a bit old now, but I thought I’d show myself on the cover of Wired magazine. Cool, isn’t it? This was part of a promotion by Xerox, which printed 5,000 (more?) custom covers for the July issue. I mostly like how we’re obviously on the water in the picture, but not according to the map. Cool, anyway. Definitely a keepsake.

Ben and Leticia on the cover of Wired


What would the human race look like?

On my drive into work this morning, I heard an interesting story on WAMU (sorry, can’t find the specific story link) about a Korean-American adopted by white American parents. While initially struggling against her Korean heritage, she eventually came to appreciate and be proud of it. The commentator, himself an adopted Korean in the same situation, was very grateful for the chance to grow up in such a mixed household.

Along with these thoughts, there’s a set of novels I recently finished: Orson Scott Card’s Shadow series. In it, he describes an Earth that starts out a lot like the one we know today (maybe a few hundred years in the future) and is eventually unified under a single ruler. The novels are very good, despite the fact that they leave out quite a bit of detail about how this would happen, given the nature of humanity. On the other hand, maybe they’re merely evoking the philosophy that humans are inherently good, and given the right set of circumstances, they will choose to do good for all of us. I think I could believe that, despite what we see in the world today.

But these two stories together got me thinking: what if the entire world were open to us–no borders, easy transportation, peaceful coexistence, interdependency–leading to widespread intermixing, intermarriage, and so on? What would we end up looking like as a human race? I initially thought of this question in terms of physical features, but it’s interesting to think about language, culture, economics, technology–anything at all. It certainly requires a great deal of imagination to see this world, but I think it’s interesting in a futuristic, sci-fi sort of way.

Thoughts on Process: Automation (and examples)

If a process, or part of a process, can be automated, it should be. For example, in a project at work, part of our process is to make sure that every dialog present in the English resources of an MFC application is also present in all the other languages. We do this manually by loading each language into the app and triggering every dialog. This is tedious, time-consuming, and error-prone. Did I mention we have 12 languages?

A better way would be to have a utility that quickly parses a resource file, extracts the dialog names, and compares that list against each of the other languages. It could even be expanded to do a control-by-control comparison to make sure all of those were present.

At some point, of course, the dialogs have to be visually inspected for spacing issues in other languages, but for an automated check, it can find the most common errors pretty quickly.
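To give an idea of how little code such a utility needs, here’s a sketch (the regex, and the assumption that dialogs appear as `NAME DIALOG` or `NAME DIALOGEX` lines in the .rc file, are mine–this is not our actual tool):

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class ResourceCheck
{
    // In a typical .rc file, a dialog is declared like:
    //   IDD_OPTIONS DIALOGEX 0, 0, 320, 200
    static readonly Regex DialogDecl =
        new Regex(@"^\s*(\w+)\s+DIALOG(EX)?\b", RegexOptions.Multiline);

    public static List<string> DialogNames(string rcText)
    {
        List<string> names = new List<string>();
        foreach (Match m in DialogDecl.Matches(rcText))
            names.Add(m.Groups[1].Value);
        return names;
    }

    // Dialogs declared in the English resources but missing from a translation.
    public static List<string> MissingDialogs(string englishRc, string translatedRc)
    {
        Dictionary<string, bool> translated = new Dictionary<string, bool>();
        foreach (string name in DialogNames(translatedRc))
            translated[name] = true;

        List<string> missing = new List<string>();
        foreach (string name in DialogNames(englishRc))
            if (!translated.ContainsKey(name))
                missing.Add(name);
        return missing;
    }
}
```

Run it once per language; any non-empty result is a bug.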

I am a huge fan of automating processes like this. There’s no reason to waste brainpower on highly-repetitive tasks. Even if it takes me a whole day to write a tool, it’s usually worth it.

Examples of automated processes:

  • Build process – if you’re building and packaging your product manually, you’re doing it wrong. All modern environments include command-line, scriptable components. When we build our flagship product, I type “make” and ten minutes later there’s a setup.exe in a distribution folder for our team.
  • Unit testing – Unit testing suites are usually automated to some degree (you hit a button, all tests run), but it can be taken further by running them during builds or as part of a continuous integration server.
  • Documentation/Change-logs – whenever we produce a new build, we send out an e-mail to internal staff about the changes. These are usually culled from the check-in comments in Subversion. It’s a manual process. It might be nice to automatically dump them to a file, which could then be edited instead of written from scratch.
  • Code checks – this can encompass almost anything, but having static analysis tools is invaluable. Like the utility that I mentioned earlier to compare resource files across all the languages we support, it can save tons of manual, “stupid” labor.
  • Loading source onto a new machine – How long does it take to get up and running on a new machine? Admittedly, you shouldn’t be switching machines all THAT often, but when you do how easy is it to grab the source and start debugging? Are all the required libraries and tools in source control and automatically configured by build scripts?
  • E-mail – If you’re like me, you get scores if not hundreds of e-mails a day. How are you organizing, sorting, responding to, ignoring, and deleting them? Set up filters to put them into different folders, and highlight or tag them when certain keywords appear. Also, get Google Desktop Search or Windows Desktop Search. I like both of them, but I’m currently using Google’s version. I may switch back in a while.
  • Bug reporting – While not strictly about automation, I think it’s close enough. Reporting bugs and code changes in a text file is good for a while–if you’re the only one working, and you have a small number of them to deal with. Once you start involving more programmers, and perhaps a manager who wants to see some basic reports, the text file doesn’t cut it. Get a simple bug reporting tool. I use BugTracker.Net because it’s easy, simple and does exactly what we need with minimum fuss. How do I know what to work on? I open up a web page and it tells me. I’ve automated not only some manual labor, but also some needless thought processes.
  • Calendaring – Do you need to write a weekly report for your manager? Keep track of employees’ vacation schedules? Use Outlook’s (or whatever PIM you choose) task list and calendar for anything you need to remember about a specific date. Set reminders for when you need to think about them, and then forget about them.
  • Data production – if you’re in a production environment generating data that needs to be analyzed, create tools to do as much of it as possible. Of course, the tools need to be checked for correctness, but once you’re confident, do it and don’t look back.

There are many, many ways you can optimize, reduce, and automate the work you’re doing. Remember, the whole point is to get rid of the “dumb” work and let yourself concentrate on the important, creative things.
