Monthly Archives: November 2008

NDepend: A short review

NDepend is a tool I’d heard about for years, but had never really dived into until recently. Thanks to the good folks developing it, I was able to try out a copy and have been analyzing my own projects with it.

Here’s a brief run-down of my initial experience with it.

Installation

There is no installation file—everything is packaged into a zip. After running the executable, I was greeted by a project selection screen, in which I created a new project and added some assemblies.

NDepend main screen

Analysis

Once you have selected all the assemblies you want to analyze, you can run the analysis, which generates both an HTML report with graphics and an interactive report that you can use to drill down into almost any detail of your code. Indeed, the amount of detail present in this tool is almost overwhelming.

One graph you see almost immediately is Abstractness vs. Instability.

Abstractness vs. Instability

This is a good high-level overview of your entire project at the assembly level. Basically, assemblies that are too abstract and unstable are potentially useless and should be culled, while assemblies that are concrete and stable can be hard to maintain. Instability is defined in the help docs in terms of coupling (internal and external), while abstractness is the ratio of abstract types to total types in an assembly.

This is followed by the dependency graph:

Dependency graph

After these graphics come lots of reports that dig into your code for all sorts of conditions.

For example, the first one in my report was “Quick summary of methods to refactor.” That seems pretty vague until you learn how it is determined. All the reports in NDepend are built on a SQL-like query language called CQL (Code Query Language). The syntax is extremely easy. The query and result for this report are:

NDepend_RefactorMethods
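
To give a flavor of the CQL involved, the built-in query is along these lines. This is a sketch from memory: the metric names (NbLinesOfCode, CyclomaticComplexity, NbParameters, NbVariables) are standard CQL, but the thresholds are illustrative rather than a verbatim copy of the report:

// Flag any method that trips one of several “too big / too complex” thresholds
WARN IF Count > 0 IN SELECT METHODS WHERE
    NbLinesOfCode > 30 OR
    CyclomaticComplexity > 20 OR
    NbParameters > 5 OR
    NbVariables > 8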

With very little work on my part, I instantly have a checklist of items I need to look at to improve code quality and maintainability.

There are tons of other reports: methods that are too complex, methods that are poorly commented, methods with too many parameters or too many local variables, classes with too many methods, and so on. And of course, you can create your own (which I demonstrate below).

Interactive Visualization

All of these reports are put into the HTML report. But as I said, you can use the interactive visualizer to drill down further into your code.

The first thing you’re likely to see is a group of boxes looking like this:

NDepend_Metrics

These boxes show the relative sizes of your code from the assembly level down to individual methods. Hovering the mouse over a box brings up more information about the method. You can also change the metric being displayed, say to cyclomatic complexity.
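
The treemap’s metrics are all queryable, too. As a quick sketch (CQL’s TOP and ORDER BY clauses, quoted here from memory), you can pull up the worst offenders directly:

// List the ten most complex methods in the codebase
SELECT TOP 10 METHODS ORDER BY CyclomaticComplexity DESC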

Another view, perhaps the most useful of all, is the CQL Queries view. In it, you can see the results from the hundreds of included code queries, as well as create your own. For instance, I can see all the types with poor cohesion in my codebase:

NDepend_Cohesion

In this view, the CQL queries are selected in the bottom-right, and the results show up on the left. The metrics view highlights the affected methods.
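
For reference, the cohesion rule is built on the LCOM (Lack Of Cohesion Of Methods) metric. A rough sketch of what such a query looks like (the thresholds here are my guesses, not necessarily NDepend’s exact defaults):

// Types whose methods and fields are only weakly related
WARN IF Count > 0 IN SELECT TYPES WHERE
    LCOM > 0.8 AND
    NbFields > 10 AND
    NbMethods > 10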

Creating a query

Early in the development of my project, I named quite a few classes with an LB prefix. I’ve changed some of them, but I think there are still a few lying around, and I want to change them as well. So I’ll create a CQL query to return all the types that begin with “LB.”

// <Name>Types beginning with LB</Name>
WARN IF Count > 0 IN SELECT TYPES WHERE
    NameLike "LB" AND
    !IsGeneratedByCompiler AND
    !IsInFrameworkAssembly

NDepend_LB

That’s it! You can see the results to the right. It’s ridiculously easy to create your own queries to examine nearly any aspect of your code—and that’s if the hundreds of included queries don’t already do it for you. In many ways, the queries are similar to the analysis FxCop does, but CQL seems generally more powerful (while lacking some of the cool things FxCop has).


VS and Reflector Add-ins

NDepend has a couple of extras that integrate it with Visual Studio (2005 and 2008) and with Reflector. When you right-click on an item in VS, you have some additional options available:

NDepend_VSPlugin1

Clicking on the submenu gives you options to directly run queries in NDepend. Very cool stuff.

Summary and where to get more info

If you are at all interested in code metrics—in how well your code behaves and how maintainable it is—you need this tool. It’s now going to be a standard part of my toolbox for evaluating the quality of my code and finding the parts that need attention.

If you’re using NDepend for personal and non-commercial reasons, you can download it for free. It doesn’t have all the features, but it has more than enough. Professional use does require a license.

One of the things I was particularly impressed with was the amount of help content available. There are tons of tutorials for every part of the program.

I’m going to keep playing with this, and I’m sure I’ll mention more things as I discover them. For now, NDepend is very cool—it’s actually fun to play with, and it gives you good information about what to work on.


Girl from Mars – Magneta Lane

I first saw this video at the Microsoft Company Meeting 2008, and looked for the song everywhere, but couldn’t find the Magneta Lane version. They recorded it just for Microsoft. Nevertheless, the original Ash version is great too, so get that in the meantime.

Magneta Lane’s MySpace page does mention the song, and maybe a release is on the way.

Update: I forgot the music video from Ash. I like it.

On the importance of ignoring your problems

I’ve been having a great time at Microsoft over the last couple of months, but the ramp to full productivity is very steep. Recently, I’ve been working on an important improvement to some monitoring software, which requires a fairly good understanding of part of the system. It can be a little overwhelming trying to design something to handle all the nuances involved. Add to that the time crunch, and things get a little interesting. Last night, I ran into a little wall where I was uncertain about how to proceed. This kind of situation is always a little precarious. I was worried because today and tomorrow I have full-day training and won’t be able to dedicate a lot of time to solving this problem.

Halfway through the training, while discussing something tangentially related to my problems, I realized the technical solution to the specific issue I was facing.

Sometimes you just need to ignore the problem for a while, and come back to it from a different angle.

Consequences of a Star Trek-like computer

Star Trek, along with other science fiction futures, has given us many things: not only a vision of humanity that is hopefully a little better than we prove to be, but also a taste of what technology can be like when it is integrated so fully into people’s lives that it’s nearly taken for granted.

The computer on the Enterprise is an interesting entity to think about. A crew member can ask it just about any question and it can give the desired answer. It doesn’t matter if the question is slightly vague, or depends on prior knowledge of the conversation. What phenomenal power! How does it work?

I can think of only two possibilities:

  1. It can read their minds.
  2. It has been paying attention to their conversations, and thus understands the context.

Not discounting the possibility of the first scenario, I want to think about the second.

Context

How much of our understanding of the immediate environment is due to context? When analyzing a situation, we have at our disposal our:

  1. Experience
  2. Book knowledge
  3. Logical analysis/intuition (I include them together, since intuition could theoretically be a subconscious logical process, colored by experience—I don’t know if I believe that, but it’s not important)

Now take away experience. How would you fare when confronted with new situations (which, by definition, are all situations)?

Most of us, I think, would understandably quail under the rigor of thought required to get through such an ordeal. If you believe otherwise, make the situation extreme—flying a plane, or leading a squad into war. No amount of book knowledge or rational thought will help here—you need the benefit of hard-core training: experience, context.

Do this exercise: describe to someone what salt tastes like.

On the other hand, saying “It’s too salty” immediately conveys exactly what you mean, based on shared context and mutual experience.

There is an enormous gap between where our computer systems are now and what is perhaps the holy grail of foreseeable technology: the computer on the Enterprise—an all-seeing, all-knowing, conversant entity. It’s like Wikipedia, but with a depth of knowledge unheard of on any web site today, all cross-referenced and searchable.

Wikipedia is a decent (I won’t say great) source of much knowledge, but it’s hardly definitive or all-encompassing. Also, it’s just facts: no calculation or interpretation. It does not advise or synthesize.

In Star Trek, when a crewmember asks questions, they can be fact-based, context-free questions that require simple look-ups to answer. But often there is a series of questions, with dialogue in between, all related to a certain topic. Each query does not contain the total information required to retrieve a response. Rather, the computer has tracked the context and maintained an accurate representation of the conversation thus far. In essence, the computer is participating fully in the conversation.

An idea of what context means is demonstrated by this simple list of questions. Just imagine giving these to a computer or search engine today. The first one is fine; after that, not so much.

  • What are the latest Hubble Telescope pictures?
  • When were these taken?
  • How much longer will it stay up?
  • How will the next space telescope be different?
  • Compare the efforts of all G7 nations to build orbiting observation platforms.

Each of those questions presupposes the previous one. The computer must keep track of this. That last one is a real doozy—it’s asking the computer to synthesize information from multiple sources into a coherent, original response. We can’t even dream of something this advanced right now, but I believe it’s coming.

On the other hand, let’s take a different direction, more personal:

  • Which of my friends are having a birthday in the next month?
  • What book should I read next?
  • What do I need to get at the store?
  • Where are my children?

Is this possible to do today? Yes, technologically speaking.

It’s not technology that will hold us back. It’s us.

Security, Privacy

Think of what it means to have a computer able to access full context to answer any query you throw at it. It has to know everything about you. To give you good food recommendations, it has to know where you’ve eaten and how you liked it. To be able to answer arbitrary questions in context, it needs to record your every conversation, parse it, cross-reference it, and store it for later access.

In our current culture, what this means is tying all systems together. There are intimations of this happening. Every time you hear of a company providing an API to access its data, that’s a little piece of this context being hooked up. It means that the Computer now has access to your Facebook and LinkedIn data, so that when you search for “tortoise” it can see you’re a developer, and a high proportion of software developers actually want to download “TortoiseSVN”, not see pictures of reptiles. In fact, it probably means there is no such thing as Facebook (or any other social network) anymore. There is just one network filled with data.

It becomes even more intertwined. If I really want the computer to have full context of me, it should monitor what I watch on TV, what my tastes in music are, where I go, where I work, my habits, who I call, what I talk about, etc., etc., etc. It never ends.

Now, here’s the million-dollar question: who would agree to such invasive procedures, even if the benefit was enormous?

In many ways, we are agreeing to it all the time. We allow places like Amazon, Netflix, and iTunes to track all our purchases in order to give us decent recommendations (in the hope we’ll purchase more). We give up our privacy a bit when we get grocery loyalty cards, or even credit cards. This is all tracked and correlated. In the case of recommendation systems, there is a tangible benefit for us, but the loyalty cards are less clearly valuable, offering little beyond lower prices (which are not an inherent benefit of those cards, just a marketing tactic). Indeed, studies have shown just how cheaply we humans will give up our privacy.

There are a few levels of security we need to worry about. At a low level, how much do you trust Google, or Microsoft, or Apple, or Amazon with your information? Right now, a lot of us trust them with a fair amount, but nowhere near our entire life’s story. We have it neatly segmented: part in Amazon, part in Facebook, part in Google, part in all the other companies we deal with. If mistakes are made and consequences aren’t thought through, we have problems like all our friends seeing what we’re purchasing.

At a high level, we need to consider all of this context ending up in the wrong hands. Not just scammers and other low-lifes, but government, foreign and domestic. The potential for abuse is massive—so much so that most of us wouldn’t voluntarily agree to any of the unifying ideas in this essay in our lifetimes. We just don’t trust anybody that much.

In essence, a Star Trek-like computer would require massive amounts of “spyware” on every system in the world, all tied together in a massive database. This is possible (maybe even desired?) in a closed system like a ship, where everything is easily monitored and hierarchies of security are well-understood. In the world at large, it’s just scary.

Economy and Altruism

I believe another obstacle to this is money. The way our society works, with limited resources, we are required (?) to have some system of trade, an economy. These days, the trade is often over information, the very thing this mythical Star Trek computer depends on. Think credit reports, buying history, demographics.

What is the specific danger of businesses finding out personal information about you? Can they force you to buy something? Not likely. But they can manipulate the environment in such a way as to make it more likely. They can present a lie designed to sell you something you don’t need. More maliciously, they can also sell your information to more consequential entities, like insurance companies or governments. If the government is too powerful, there is no way to prevent this. Think about what happens in China.

Is the only way to have such efficient and helpful systems to do away with our current capitalistic economy? Yes and no. Such far-reaching, life-changing technologies will undoubtedly continue to be developed and become more a part of our lives than they already are. Unfortunately, the potential for abuse is enormous and will grow as we become more and more dependent on them. We have no inherent trust in the system, nor should we. Just look at the ridiculous politicking taking place over voting machines. That’s just one system, and our society can’t get it right. We have a thousand such systems, many hanging onto usefulness and security by a thread. I’d bet such fragile systems are not the exception but the rule. Why should we trust such things to run our lives? We shouldn’t. There are so many reasons for this: corruption, economy, politics, and motivation.

Perhaps motivation is the key. We are often motivated now by money, comfort, or some other selfish reason—reasonable or not. In the Star Trek vision of the future, we see a population motivated by a quest for knowledge and understanding. That’s why they can have all-knowing computers. They trust who created them and what they do. They know there is no political or other ulterior motive. Yes, there’s adequate security and protection against attack, but the whole starting mindset is different.

Don’t think that I want to destroy capitalism in favor of more socialistic or idealistic systems. Imposing a system of “fairness” or “equality” does nothing to further those goals, and I’m not advocating any political or economic system—I’m merely stating what I think the reality must be in the future for us to make these advancements. People themselves must reform their motivations. Pushing any political system has no effect because the fundamentals of our world haven’t changed: resources are still scarce, thus an economy must still exist. If people’s intrinsic motivations are to change, I believe resources must become (practically) infinite.

When this happens, the nature of the Internet will change as well. If the economics change and we are no longer concerned with that, and we also have an altruistic frame of mind, the information posted on the Internet will similarly change. No longer will we have to care about our walled gardens—the information is just put “up there”, in the “cloud”, to use the popular term. A computer would be free to quote the contents to the user, or recombine them with other content. It’s all just content, with a single interface to access it all.

Understanding

There’s an important issue I glossed over in the paragraphs above: understanding. I talked a little about this in my previous blog entry about Software Creativity and Strange Loops.

I’m excited for this future. I doubt I’ll live to see advances fully along these lines. The problems are phenomenally difficult and they’re not all technical, but it’s still exciting to think about. Those of us who can just need to do our small part to contribute towards it.

Software Creativity and Strange Loops

I’ve been thinking a lot lately about the kind of technology and scientific understanding that would need to go into a computer like the one on the Enterprise in Star Trek, and specifically its interaction with people. It’s a computer that can respond to questions in context—that is, you don’t have to restate in every question everything needed to answer it. The computer has been monitoring the conversation and has built up a context it can use to understand and intelligently respond.

A computer that records and correlates conversations in real time must have a phenomenal ability (compared to our current technology) not just to syntactically parse the content, but also to construct semantic models of it. If a computer is going to respond intelligently to you, it has to understand you. This is far beyond our current technology, but we’re moving there. In 20 years, who knows where this will be. In 100, we can’t even imagine it. 400 years is nearly beyond contemplation.

The philosophy of computer understanding, and human-computer interaction specifically is incredibly interesting. I was led to think a lot about this while reading Robert Glass’s Software Creativity 2.0. This book is about the design and construction of software, but it has a deep philosophical undercurrent running throughout that kept me richly engaged. Much of the book is presented as conflicts between opposing forces:

  • Discipline versus flexibility
  • Formal methods versus heuristics
  • Optimizing versus satisficing
  • Quantitative versus qualitative reasoning
  • Process versus product
  • Intellectual versus clerical
  • Theory versus practice
  • Industry versus academe
  • Fun versus getting serious

Too often, neither one of these sides is “right”—each is just part of the problem (or the solution). While the book was written from the perspective of software construction, I think you can twist the intention just a little and consider these as attributes of software itself: not just how to write it, but how software must function. Most of those titles can be broken up into a dichotomy of Thinking versus Doing.

Thinking: Flexibility, Heuristics, Satisficing, Qualitative, Process, Intellectual, Theory, Academe

Doing: Discipline, Formal Methods, Optimizing, Quantitative, Product, Clerical, Practice, Industry

Computers are wonderful at the doing, not so much at the thinking. Much of thinking is synthesizing information, recognizing patterns, and highlighting the important points so that we can understand them. As humans, we have to do this or we are overwhelmed and comprehend nothing. A computer has no such requirement—all information is available to it, yet it has no capability to synthesize it, to apply experience or perhaps (seemingly) unrelated principles to the situation. In this respect, the computer’s advantage in quantity is far outweighed by its lack of understanding. It has all the context in the world, but no way to apply it.

A good benchmark for a reasonable AI on the level I’m dreaming about is a program that can synthesize a complex set of documents (be they text, audio, or video) and produce a comprehensible summary that is not just selected excerpts from each. This functionality implies an ability to understand and comprehend on many levels. To do this will mean a much deeper understanding of the problems facing us in computer science, as represented in the list above.

You can start to think of these attributes/actions as mutually beneficial and dependent, influencing one another recursively: distinct at first, then morphing into a spiral, each being input to the other. Quantitative reasoning leads to qualitative analysis, which leads back to quantitative measures, and so on.

It made me think of Douglas R. Hofstadter’s opus Gödel, Escher, Bach: An Eternal Golden Braid. This is a fascinating book that, if you can get through it (I admit I struggled through parts), wants you to think of consciousness as the attempted resolution of a very high-order strange loop.

The Strange Loop phenomenon occurs whenever, by moving upwards (or downwards) through the levels of some hierarchical system, we unexpectedly find ourselves right back where we started.

In the book, he discusses how this pattern appears in many areas, most notably music, the works of Escher, and in philosophy, as well as consciousness.

My belief is that the explanations of “emergent” phenomena in our brains—for instance, ideas, hopes, images, analogies, and finally consciousness and free will—are based on a kind of Strange Loop, an interaction between levels in which the top level reaches back down towards the bottom level and influences it, while at the same time being itself determined by the bottom level. In other words, a self-reinforcing “resonance” between different levels… The self comes into being at the moment it has the power to reflect itself.

I can’t help but think that this idea of a strange loop, combined with Glass’s attributes of software creativity, is what will lead to more intelligent computers.