Tag Archives: usability

Amazon Kindle + Audible = Killer-app?

My wife sent me a link to the Amazon Kindle the other day and asked, “Have you heard of this? What do you think?” I think she wants one.

I have to admit that the thought of such a device is appealing. I have tried reading e-books on my PDA and BlackBerry occasionally, but other than a quick read now and then, it was too painful–the screen was too small.

But the Kindle…this might work out. I’m seriously considering getting one.

With the news that Amazon is buying Audible, the story gets more interesting. Personally, I haven’t gotten much into audio books, but I know people who have, and they love them.

I have no idea if or how Amazon will integrate Audible into the Kindle experience, but I have a feature request–my candidate for the killer feature:

Sell the audio version of a book at a discount (or free, or for a dollar more) when someone buys the e-book format (or vice versa). Then, synchronize the bookmarks between the two formats. That way, I can plug the Kindle into my car’s stereo on the way home to listen to my current selection, and at night I can pull it out and continue reading from where the audio left off.
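To make the idea concrete, here’s a rough sketch of the kind of bookmark record that could drive this. Every name here is hypothetical–this is my guess at a design, not anything Amazon has announced:

    // Hypothetical design sketch -- not an actual Amazon/Audible API.
    public class BookPosition
    {
        public string BookId;         // shared ID across e-book and audio editions
        public int TextOffset;        // reading position as an offset into the text
        public TimeSpan AudioOffset;  // listening position in the audiobook
        public DateTime LastUpdated;  // set by whichever device saved last
    }

    public static class BookmarkSync
    {
        // Whichever position was saved most recently wins, and gets
        // pushed back to the other format the next time it syncs.
        public static BookPosition Reconcile(BookPosition fromReader, BookPosition fromAudio)
        {
            return fromReader.LastUpdated >= fromAudio.LastUpdated ? fromReader : fromAudio;
        }
    }

The hard part, of course, is the mapping between TextOffset and AudioOffset–somebody (the publisher, presumably) would have to ship an alignment table with each book. Given that, the rest is bookkeeping.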

That’s my prediction for a killer app. My wife and I do a LOT of reading (we JUST ordered our first TV, and it’s only for Netflix–we won’t be hooking it up for any broadcast or cable). I think someday soon we’ll both have our own Kindles–it would save a lot of bookshelf space.


Multiple-Item Clipboard a good idea?

Jeff Atwood laments the single-item nature of the Windows clipboard and points out utilities that expand it to hold multiple items. I think that’s a great power tool to have, but I’m not sure a multiple-item clipboard is really the best thing for most people.

I think one of the strengths of the clipboard is its single-mindedness. You always know that the thing you’ve copied is what you’ll paste (even if it is stored in multiple formats in the clipboard, it’s all conceptually the same). Once you expand the clipboard to contain multiple items, will the average user be able to handle a menu popping up asking which item to paste? No, probably not. That’s why the utility he mentions only pops up that menu when you hit its special shortcut. That’s the way it should be: a great thing for advanced users, but of little benefit to the average user.
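For the power users, here’s a minimal sketch of what such a clipboard history might look like, assuming the WinForms Clipboard class. The structure is my guess–the utility Jeff mentions surely does something more sophisticated:

    using System.Collections.Generic;
    using System.Windows.Forms;

    // Minimal clipboard-history sketch: normal copy/paste stays single-minded;
    // the history list only surfaces when the user explicitly asks for it.
    public class ClipboardHistory
    {
        private readonly List<string> items = new List<string>();
        private const int MaxItems = 10;

        // Call when a clipboard change is detected (e.g., from a
        // clipboard-viewer hook). Clipboard requires an STA thread.
        public void Record()
        {
            if (!Clipboard.ContainsText()) return;
            items.Insert(0, Clipboard.GetText());
            if (items.Count > MaxItems)
                items.RemoveAt(items.Count - 1);
        }

        // Invoked only from the special shortcut: put an older item back
        // on the clipboard so the very next ordinary paste uses it.
        public void Promote(int index)
        {
            if (index >= 0 && index < items.Count)
                Clipboard.SetText(items[index]);
        }
    }

Note that ordinary copy and paste never touch the history–only Promote does, and only when the user asks. That preserves the single-mindedness for everyone else.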

For me, the simplicity of the clipboard is a good feature. Even if I had the extended capabilities of a clipboard that held multiple items, I probably would still use the basic functionality most often.


A Visual Studio that’s easier on the eyes

After you’ve looked at Visual Studio all day for a few days in a row, the brightness of the white background can really start to bother you, especially as LCD monitors get brighter and brighter. That’s why I’ve become a big fan of Dave Reed’s Dark Side theme for Visual Studio 2005. It took me a few hours to get used to it, but now I’m hooked.

The only thing I changed was the font. I really like Consolas, size 13. Courier New is Courier-Old-and-No-Longer-Used.

At work, I still have to use VS2003 for a project, and keeping it in the older, white background really helps me distinguish which environment I’m in.

The users are in control

I really enjoyed and appreciated this essay from Raganwald about the user experience people get at work versus on their home PCs (among other topics).

I particularly liked the point:

And meanwhile, the very same users could walk across the street and buy themselves a much better PC for less money than we pay and take it home the same day.

Ain’t that the truth. I put together my Core 2 Duo system for the same price as the crappy hyperthreaded Pentium 4 I have at work, and the two purchases weren’t that far apart in time. The Core 2 runs circles around this sick puppy.

A company’s philosophy should be to get users (especially developers like me!) whatever hardware/software they need immediately. Within minutes or hours, not days or weeks. Of course, then you have to trust your employees to make good requests. But if you don’t trust them to know what they need, why trust them to do their job at all?

The essay goes on to talk about writing applications that take advantage of modern PC horsepower. I think I’m doing an OK job of this at work now. For example, we have a database of assets that is continually growing. It used to be that we could view all of the assets on a single page, which took about 30 seconds to load off-site.

Now that list takes several minutes to bring up. Yeah, we’re growing, so we need tools to help manage all of that information. One thing I’m building right now (as soon as I’m done writing this, as a matter of fact) is quick filtering in a desktop app that talks to the database: the list of assets is filtered as you type, taking advantage of the fast PCs we have these days.
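Here’s roughly the shape of it–a simplified sketch, not our actual code, with made-up names and the database load stubbed out as a plain list:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Windows.Forms;

    // Sketch of filter-as-you-type: the whole asset list lives in memory,
    // and a short timer coalesces keystrokes so we refilter a few times a
    // second at most, instead of once per character.
    public class AssetFilterForm : Form
    {
        private readonly TextBox filterBox = new TextBox { Dock = DockStyle.Top };
        private readonly ListBox assetList = new ListBox { Dock = DockStyle.Fill };
        private readonly Timer filterTimer = new Timer { Interval = 150 };
        private readonly List<string> allAssets;   // loaded from the database once

        public AssetFilterForm(IEnumerable<string> assets)
        {
            allAssets = assets.ToList();
            Controls.Add(assetList);
            Controls.Add(filterBox);
            filterBox.TextChanged += delegate { filterTimer.Stop(); filterTimer.Start(); };
            filterTimer.Tick += delegate { filterTimer.Stop(); ApplyFilter(); };
            ApplyFilter();   // show the full list on startup
        }

        private void ApplyFilter()
        {
            assetList.BeginUpdate();
            assetList.Items.Clear();
            foreach (string a in allAssets.Where(x =>
                x.IndexOf(filterBox.Text, StringComparison.OrdinalIgnoreCase) >= 0))
            {
                assetList.Items.Add(a);
            }
            assetList.EndUpdate();
        }
    }

For a few thousand rows, a linear scan like this is effectively instantaneous on modern hardware; only well past that would you need to precompute an index.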

That’s just one example. I can think of others that are immediately useful in business apps:

  • Better visualization – it takes time and thought to develop good data visualization, but the results are usually worth it.
  • Drag & drop support – does it make sense to drag assets from one customer to another? I don’t know; maybe.
  • Dynamic views – use all that processing power to show something more interesting than fields on a scrolling form: graphical views that change in response to context.
  • History tracking and undo/redo – might make sense in some contexts.
  • Richer attached information – pictures, videos, documents, whatever. With stuff like WPF, it’s easier than ever to display varied content.


Queue multiple GUI updates into a single update event (C# .Net)

I’m writing a simple utility that involves scanning a hard disk and updating a display with the latest status. There are potentially many, many, MANY changes to the display in very quick succession, and sending an update event–which triggers a screen refresh–for every single state change would be an enormous bottleneck.

I implemented a simple queueing scheme where updated objects accumulate for up to 200 ms before being batched into a single event. The GUI can then update the display for all the changed objects at once.

First, the queue:

    private DateTime lastUpdateSent = DateTime.MinValue;

    //queue updated folders to limit the number of screen updates
    List<FolderObject> updatedFoldersQueue = new List<FolderObject>();

A simple list. Then I have a function which is called to handle the actual event generation:

    private void FireUpdate(FolderObject folder)
    {
        if (this.OnFolderUpdated != null)
        {
            try
            {
                if (!folder.Dirty)
                {
                    folder.Dirty = true;
                    updatedFoldersQueue.Add(folder);
                }

                TimeSpan diff = DateTime.Now - lastUpdateSent;
                if (diff.TotalMilliseconds > 200)
                {
                    FolderUpdatedEventArgs args = new FolderUpdatedEventArgs(updatedFoldersQueue);
                    int numFolders = updatedFoldersQueue.Count;

                    //reinitialize queue
                    updatedFoldersQueue = new List<FolderObject>(numFolders);

                    OnFolderUpdated(this, args);
                    Thread.Sleep(0);
                    lastUpdateSent = DateTime.Now;
                }
            }
            catch (Exception ex)
            {
                System.Diagnostics.Trace.WriteLine(ex.Message);
                System.Diagnostics.Trace.WriteLine(ex.StackTrace);
            }
        }
    }

The folder.Dirty flag merely prevents a folder from being queued twice. The TimeSpan diff is the important bit. By checking the time difference between now and the last updated time, we can ensure that updates only get sent as often as the display–and our eyes–can handle them. The Thread.Sleep(0) ensures that the GUI thread gets a chance to draw–this is supposed to be interactive, after all.
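For completeness, here’s roughly what the consuming side looks like. This is my sketch with hypothetical names (UpdatedFolders, RefreshListItem, folderListView), but it shows the two things the GUI handler has to do: marshal the batch onto the UI thread (the scan runs on a worker thread), and clear the Dirty flags–the scanner code above never clears them, so presumably that happens here once the rows are redrawn:

    // Sketch of the GUI side, assuming FolderUpdatedEventArgs derives from
    // EventArgs and exposes the batched list as UpdatedFolders.
    void scanner_OnFolderUpdated(object sender, FolderUpdatedEventArgs args)
    {
        if (this.InvokeRequired)
        {
            // Re-invoke this same handler on the UI thread.
            this.BeginInvoke(new EventHandler<FolderUpdatedEventArgs>(scanner_OnFolderUpdated),
                             sender, args);
            return;
        }

        folderListView.BeginUpdate();
        foreach (FolderObject folder in args.UpdatedFolders)  // hypothetical property
        {
            RefreshListItem(folder);   // hypothetical helper: redraw one row
            folder.Dirty = false;      // allow this folder to be queued again
        }
        folderListView.EndUpdate();
    }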

ClearType is like a new pair of glasses

Many have said it already, but let me just add my voice: ClearType technology is the most wonderful thing to hit Windows in a long time. I recently received a new computer at work (3.4 GHz hyperthreaded, 1 GB RAM, 80/200 GB disks–a screaming machine, at least compared to what I used to have). During the setup process (three versions of Visual Studio, Office, dozens of developer tools), I remembered that I needed to turn ClearType on.

Wow. It’s like when you’ve needed glasses for a while and finally get them, and you realize that the world isn’t that blurry after all.*

By addressing an LCD’s red, green, and blue sub-pixels individually, it effectively multiplies the perceived horizontal resolution.

(* only a little ironic that ClearType works by deliberately “blurring” edges through antialiasing.)

Linux Reality Check part 2

Scoble has a great commentary on the state of Linux fonts. It’s something I never thought about much before, but now that he’s brought it up, I realize that poor font quality is something I’ve definitely suffered through when I did actively use Linux.

It’s just another example of one of the seemingly-minor-but-actually-major issues facing Linux. It’s amazing how much effort must be expended to implement so many things we take for granted.

Linux Reality Check

Over at Slashdot, Fedora Project Leader Max Spevack responds to some frank questions about the Fedora project.

He talks about a number of topics:

  1. Unified package managers across distros
  2. Proprietary drivers
  3. Differences in Linux over time
  4. Fedora’s biggest weakness
  5. Threat of Vista
  6. Inclusion of the NTFS driver in the kernel
  7. Wacky package dependencies
  8. A few others…

What his answers demonstrate to me is that Linux is going through some growing pains, and that the community is realizing the difficulties that Apple and Microsoft have already dealt with in their own ways.

For example,

I guess the “problem” with package managers is that they are so integral to the rest of a distro that it’s a major endeavor to switch them. One reason is that a switch of that kind would break the upgrade chain.

Welcome to the real world of computing. Upgrading, advancing, and improving are all important concerns for real users on real computers. The only reason we still use the x86 architecture is backward compatibility. The only reason Windows has such dominant market share is that it works with basically everything ever written.

Another fundamental issue:

In terms of getting people to use Linux instead of proprietary operating systems — I think that battle is best fought in the world of people who are new to computers. People will tend to be loyal to the first thing that *just works* and doesn’t cause them pain. Making that first experience for people a Linux one as opposed to a proprietary one — that’s the challenge.

How true. It’s been a while since I’ve installed Linux, and my memories of it are not all that pleasant. It worked well enough, I suppose, but it certainly wasn’t as polished or streamlined as it should have been. MS and Apple are still years ahead of Linux in this regard.

Windows Live Search Toolbar — not quite there yet

I forced myself to uninstall the Google toolbar and try out the new Windows Live Toolbar exclusively. I think I’m going to uninstall it today. To be fair, there is a lot I like about it:

  • Customizable buttons
  • Lots of great features
  • Search history with automatic drop down list that shows past/related searches
  • Desktop search works as well as always. I couldn’t live without it.

I also noticed that the search results for Windows Live were just about as good as Google’s. That is a great thing–we need more competition in this space to keep things going.

However, with all that good there are some pretty significant issues (at least for me).

  • It is sloooooooooooooooow. I mean, noticeably slow, painfully slow, distractingly slow. I want my search results nearly instantaneous. None of the pretty features matter if I have to wait 15 seconds for search results when your competitor can come up with the same results in 2 seconds. Is it the web site or the toolbar? I’m not sure yet.
  • Lack of Instant Answers. One of the things I love about MSN Search is the ability to track packages, get weather, and look up addresses. Why wasn’t this built into Windows Live from the beginning? I know they’ll be adding them, but still…
  • Sometimes, the toolbar refreshes or does something that erases what I’ve already typed. I think this is an issue when it first starts up–if I’m too quick to begin searching. Not a huge issue…

I’ll be getting a new computer at work in a few weeks. I’ll try again after that. Hopefully Microsoft will have made some improvements. I’ve submitted my list of issues to their feedback page, and hopefully they will make this product better.

Managing Complexity – Part 2

Last time, I covered some generalities and anecdotes about minimizing complexity in my life. Today, I have a few more thoughts about complexity as it relates to software.

Software engineering has continually evolved since the inception of computer programming languages. One way of looking at that evolution is to see it in terms of improvements on complexity management. (This is a bit simplistic, since computers have simultaneously become much more complex.)

The first computers were simple by today’s standards, but the programming methodology was very complex: dials, levers, buttons, or physically connecting wires. Then machine language was developed: binary code could be entered on punched cards–later, directly into memory–and interpreted.

These early languages required a perfect familiarity with the machine. If the machine changed, the code changed.

Over the years, the advances in languages have largely been a process of hiding the machine’s underlying complexity. Early high-level languages like FORTRAN and ALGOL hid the machine code, and ALGOL in particular provided the foundation for the family of languages that led to C. C built further by providing both structured programming tools and an abstraction of the machine’s language–one foot in each world.

Terminals began to have graphics capabilities, and Smalltalk was developed to further hide the complexities of both growing code modules and user interface elements. Java hid the complexities of lower-level code like C and C++, and even took away the concept of a physical machine, substituting its own virtual machine, theoretically consistent across all physical platforms. C# has done much the same for Windows–hiding the complexity of thousands of APIs in a hierarchical, intuitive framework of managed code.

Modern processors are beasts of enormous complexity and power compared to the early hulking iron giants, but the languages we use hide nearly all of the complexity that our forebears had to deal with on a daily basis.

Now it looks like I’ve really been writing about abstraction. The two ideas are strongly related, but I don’t think they’re exactly the same thing. Abstraction is thinking at a higher level; minimizing complexity is thinking less.

Modern languages both abstract away lower level concerns and provide tools to minimize the complexity of things at the highest level.

There is a growing proliferation of visual tools, which began with GUI editors and now includes visual code designers. Aspect-oriented programming and attributes allow complexity to be minimized even further.

In the future, tools such as these, along with increased use of COTS (commercial off-the-shelf) components, will become vital to accomplishing anything. Software complexity will only increase, but hopefully the trend of tools that minimize complexity will continue as well.

Perhaps somebody (not me!) should investigate the theory of a total complexity quotient–a measure combining the complexity of a system with the complexity of the tools used to develop and manage it. With such a number we could tell whether complexity overall is increasing or decreasing, and where the crossover point lies.
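If I had to scribble it on a napkin, it might look something like this–my own notation, nothing rigorous:

    % Back-of-the-envelope sketch, not an established metric:
    % C_system(t): complexity of the systems we build at time t
    % C_tools(t):  complexity our tools absorb for us at time t
    Q(t) = \frac{C_{\text{system}}(t)}{C_{\text{tools}}(t)}

By that reading, things are improving whenever Q(t) is falling, and the interesting crossover is wherever Q(t) crosses 1–the point where the tools absorb more complexity than the systems add.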