Tag Archives: star trek

Consequences of a Star Trek-like computer

Star Trek, along with other science fiction futures, has given us many things: a vision of humanity that is hopefully a little better than we prove to be, and a taste of what technology can be like when it is so fully integrated into people’s lives that it’s nearly taken for granted.

The computer on the Enterprise is an interesting entity to think about. A crew member can ask it just about any question and it can give the desired answer. It doesn’t matter if the question is slightly vague, or depends on prior knowledge of the conversation. What phenomenal power! How does it work?

I can think of only two possibilities:

  1. It can read their minds.
  2. It has been paying attention to their conversation, and thus understands the context.

Not discounting the possibility of the first scenario, I want to think about the second.

Context

How much understanding of our immediate environment is due to context? When analyzing a situation we have at our disposal our

  1. Experience
  2. Book knowledge
  3. Logical analysis/intuition (I include them in the same since intuition could theoretically be a subconscious logical process, colored with experience—I don’t know if I believe that, but it’s not important)

Now take away experience. How would you fare when confronted with new situations (which, by definition, are all situations)?

Most of us, I think, would understandably quail under the rigor of thought required to get through such an ordeal. If you believe otherwise, make the situation extreme—flying a plane, or leading a squad into war. No amount of knowledge or rational thought will help here—you need the benefit of hard-core training: experience, which is to say context.

Do this exercise: describe to someone what salt tastes like.

On the other hand, saying “It’s too salty” immediately conveys exactly what you mean, based on shared context and mutual experience.

There is an enormous gap between where our computer systems are now and what is perhaps the holy grail of foreseeable technology: the computer on the Enterprise—an all-seeing, all-knowing, conversant entity. It’s like Wikipedia, but with a depth of knowledge unheard of on any website today, all cross-referenced and searchable.

Wikipedia is a decent (I won’t say great) source of much knowledge, but it’s hardly definitive, or all-encompassing. Also, it’s just facts. It’s not calculation or interpretation. It does not advise or synthesize.

In Star Trek, when a crew member asks questions, they can be fact-based, context-free questions that require simple look-ups to answer. But often there is a series of questions, with dialogue in between, all related to a certain topic. Each query does not contain the total information required to retrieve a response. Rather, the computer has tracked the context and maintained an accurate representation of the conversation thus far. In essence, the computer is participating fully in the conversation.

An idea of what context means is demonstrated by this simple list of questions. Imagine giving these to a computer or search engine today: the first one works fine, but after that, not so much.

  • What are the latest Hubble Telescope pictures?
  • When were these taken?
  • How much longer will it stay up?
  • How will the next space telescope be different?
  • Compare the efforts of all G7 nations to build orbiting observation platforms.

Each of those questions presupposes the previous one. The computer must keep track of this. That last one is a real doozy—it’s asking the computer to synthesize information from multiple sources into a coherent, original response. We can’t even dream of something this advanced right now, but I believe it’s coming.
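To make the idea of tracked context concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the class name, the pronoun list, the example data); it only illustrates the shape of the problem: each answer deposits a referent into stored context, and a follow-up question leaning on a word like “these” is resolved against the most recent referent.

```python
# A toy model of conversational context: the computer remembers what each
# exchange was "about" so that follow-up questions can omit the topic.
# All names and data here are hypothetical, for illustration only.

class ConversationContext:
    def __init__(self):
        self.referents = []  # most recent topic last

    def remember(self, entity):
        """Record the entity the latest answer was about."""
        self.referents.append(entity)

    def resolve(self, question):
        """If the question leans on a pronoun, substitute the latest referent."""
        for pronoun in ("these", "it", "this"):
            if pronoun in question.split():
                if not self.referents:
                    raise ValueError("no context to resolve against")
                return question.replace(pronoun, self.referents[-1])
        return question  # question was self-contained

ctx = ConversationContext()
ctx.remember("the latest Hubble pictures")
print(ctx.resolve("when were these taken?"))
# -> when were the latest Hubble pictures taken?
```

A real system would need genuine language understanding rather than word matching, which is exactly the gap the rest of this essay is about.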

On the other hand, let’s take a different direction, more personal:

  • Which of my friends are having a birthday in the next month?
  • What book should I read next?
  • What do I need to get at the store?
  • Where are my children?

Is this possible to do today? Yes, technologically speaking.
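As a small illustration that these personal queries really are technologically easy, here is a sketch of the first one in Python. The friend data and function name are made up; the point is that once the data is accessible, the query itself is trivial.

```python
# A sketch of the "personal context" query: which friends have a birthday
# in the next month? The contact data is hypothetical; a real system would
# pull it from calendars, social networks, and address books.
from datetime import date

friends = [
    {"name": "Alice", "birthday": (7, 14)},   # (month, day)
    {"name": "Bob",   "birthday": (12, 2)},
]

def birthdays_soon(friends, today, window_days=30):
    """Return names of friends whose next birthday is within window_days."""
    upcoming = []
    for f in friends:
        month, day = f["birthday"]          # (Feb 29 is not handled here)
        bday = date(today.year, month, day)
        if bday < today:                    # already passed this year
            bday = date(today.year + 1, month, day)
        if (bday - today).days <= window_days:
            upcoming.append(f["name"])
    return upcoming

print(birthdays_soon(friends, date(2024, 6, 20)))
# -> ['Alice']
```

The hard part is not the query but getting all of that data into one place, which is where the rest of this essay picks up.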

It’s not technology that will hold us back. It’s us.

Security, Privacy

Think of what it means to have a computer able to access full context to answer any query you throw at it. It has to know everything about you. To give you good food recommendations, it has to know where you’ve eaten and how you liked it. To be able to answer arbitrary questions in context, it needs to record your every conversation, parse it, cross-reference it, and store it for later access.

In our current culture, what this means is tying all systems together. There are intimations of this happening. Every time you hear of a company providing an API to access its data, that’s a little piece of this context being hooked up. It means that the Computer now has access to your Facebook and LinkedIn data, so that when you search for “tortoise” it can see you’re a developer, and a high proportion of software developers actually want to download TortoiseSVN, not see pictures of reptiles. In fact, it probably means there is no such thing as Facebook (or any other social network) anymore. There is just one network filled with data.

It becomes even more intertwined. If I really want the computer to have full context of me, it should monitor what I watch on TV, what my tastes in music are, where I go, where I work, my habits, who I call, what I talk about, etc., etc., etc. It never ends.

Now, here’s the million-dollar question: who would agree to such invasive procedures, even if the benefit was enormous?

In many ways, we are agreeing to it all the time. We allow places like Amazon, Netflix, and iTunes to track all our purchases in order to give us decent recommendations (in the hope we’ll purchase more). We give up our privacy a bit when we get grocery loyalty cards, or even credit cards. This is all tracked and correlated. In the case of recommendation systems, there is a tangible benefit for us, but the loyalty cards are less certainly valuable, offering little beyond lower prices (which are not an inherent benefit of those cards, just a marketing tactic). Indeed, studies have shown just how cheaply we humans give up our privacy.

There are a few levels of security we need to worry about. At a low level, how much do you trust Google, or Microsoft, or Apple, or Amazon with your information? Right now, a lot of us trust them with a fair amount, but nowhere near our entire life’s story. We have it neatly segmented: part in Amazon, part in Facebook, part in Google, part in all the other companies we deal with. If mistakes are made and consequences not thought through, we get problems like all our friends seeing what we’re purchasing.

At a high level, we need to consider all of this context ending up in the wrong hands. Not just scammers and other low-lifes, but government, foreign and domestic. The potential for abuse is massive—so much so that most of us wouldn’t voluntarily agree to any of the unifying ideas in this essay in our lifetimes. We just don’t trust anybody that much.

In essence, a Star Trek-like computer would require massive amounts of “spyware” on every system in the world, all tied together in a massive database. This is possible (maybe even desired?) in a closed system like a ship, where everything is easily monitored and hierarchies of security are well-understood. In the world at large, it’s just scary.

Economy and Altruism

I believe another obstacle to this is money. The way our society works, with limited resources, we are required (?) to have some system of trade, an economy. These days, the trade is often over information, the very thing this mythical Star Trek computer depends on. Think credit reports, buying history, demographics.

What is the specific danger of businesses finding out personal information about you? Can they force you to buy something? Not likely. But they can manipulate the environment to make it more likely. They present a lie designed to sell you something you don’t need. More maliciously, they can also sell your information to more consequential entities, like insurance companies or governments. If the government is too powerful, there is no way to prevent this. Think about what happens in China.

Is the only way to have such efficient and helpful systems to do away with our current capitalistic economy? Yes and no. Such far-reaching, life-changing technologies will undoubtedly continue to be developed and become more a part of our lives than they already are. Unfortunately, the potential for abuse is enormous and will grow as we become more and more dependent on them. We have no inherent trust in the system, nor should we. Just look at the ridiculous politicking taking place over voting machines. That’s just one system, and our society can’t get it right. We have a thousand such systems, many hanging onto usefulness and security by a thread. I’d bet such fragile systems are the rule, not the exception. Why should we trust such things to run our lives? We shouldn’t, and for many reasons: corruption, economics, politics, and motivation.

Perhaps motivation is the key. We are often motivated now by money, comfort, or some other selfish reason—reasonable or not. In the Star Trek vision of the future, we see a population motivated by a quest for knowledge and understanding. That’s why they can have all-knowing computers. They trust who created it and what it does. They know there is no political or other ulterior motive. Yes, there’s adequate security and protection against attack, but the whole starting mindset is different.

Don’t think that I’m in favor of destroying capitalism in favor of more socialistic or idealistic systems. Imposing a system of “fairness” or “equality” does nothing to further those goals and I’m not advocating any political or economic system—I’m merely stating what I think the reality must be in the future for us to make these advancements. People themselves must reform their motivations. Pushing any political system has no effect because the fundamentals of our world haven’t changed. Resources are still scarce, thus economy must still exist. If people’s intrinsic motivations are to be changed, I believe resources must be (practically) infinite.

When this happens, the nature of the Internet will change as well. If the economics change and scarcity is no longer a concern, and we also have an altruistic frame of mind, the information posted on the Internet will change similarly. No longer do we have to care about our walled gardens—the information is just put “up there,” in the “cloud,” to use the popular term. A computer would be free to quote the contents to the user, or recombine them with other content. It’s all just content, with a single interface to access it all.

Understanding

There’s an important issue I glossed over in the above paragraphs. That is understanding. I talked a little about this in my previous blog entry about Software Creativity and Strange Loops.

I’m excited for this future. I doubt I’ll live to see advances fully along these lines. The problems are phenomenally difficult and they’re not all technical, but it’s still exciting to think about. Those of us who can just need to do our small part to contribute towards it.


Check out my latest book, the essential, in-depth guide to performance for all .NET developers:

Writing High-Performance .NET Code, 2nd Edition by Ben Watson. Available for pre-order:

Software Creativity and Strange Loops

I’ve been thinking a lot lately about the kind of technology and scientific understanding that would need to go into a computer like the one on the Enterprise in Star Trek, and specifically its interaction with people. It’s a computer that can respond to questions in context—that is, you don’t have to restate in every question everything needed to answer it. The computer has been monitoring the conversation and has thus built up a context it can use to understand and respond intelligently.

A computer that records and correlates conversations in real time must have a phenomenal ability (compared to our current technology) not just to syntactically parse the content, but also to construct semantic models of it. If a computer is going to respond intelligently to you, it has to understand you. This is far beyond our current technology, but we’re moving there. In 20 years, who knows where this will be. In 100, we can’t even imagine it. 400 years is nearly beyond contemplation.

The philosophy of computer understanding, and human-computer interaction specifically is incredibly interesting. I was led to think a lot about this while reading Robert Glass’s Software Creativity 2.0. This book is about the design and construction of software, but it has a deep philosophical undercurrent running throughout that kept me richly engaged. Much of the book is presented as conflicts between opposing forces:

  • Discipline versus flexibility
  • Formal methods versus heuristics
  • Optimizing versus satisficing
  • Quantitative versus qualitative reasoning
  • Process versus product
  • Intellectual versus clerical
  • Theory versus practice
  • Industry versus academe
  • Fun versus getting serious

Often, neither side is simply “right”—each is just part of the problem (or the solution). While the book was written from the perspective of software construction, I think you can twist the intention just a little and consider these as attributes of software itself: not just how to write it, but how software must function. Most of those pairs can be arranged into a dichotomy of Thinking versus Doing.

Thinking: Flexibility, Heuristics, Satisficing, Qualitative, Process, Intellectual, Theory, Academe

Doing: Discipline, Formal Methods, Optimizing, Quantitative, Product, Clerical, Practice, Industry

Computers are wonderful at the doing, not so much at the thinking. Much of thinking is synthesizing information, recognizing patterns, and highlighting the important points so that we can understand them. As humans, we have to do this or we are overwhelmed and comprehend nothing. A computer has no such requirement—all information is available to it, yet it has no capability to synthesize it, to apply experience or perhaps (seemingly) unrelated principles to the situation. In this respect, the computer’s advantage in quantity is far outweighed by its lack of understanding. It has all the context in the world, but no way to apply it.

A good benchmark for a reasonable AI on the level I’m dreaming about is a program that can synthesize a complex set of documents (be they text, audio, or video) and produce a comprehensible summary that is not just selected excerpts from each. This functionality implies an ability to understand and comprehend on many levels. To do this will mean a much deeper understanding of the problems facing us in computer science, as represented in the list above.

You can start to think of these attributes/actions as mutually beneficial and dependent, influencing one another recursively: distinct at first, then morphing into a spiral, each an input to the other. Quantitative reasoning leads to qualitative analysis, which leads back to quantitative measures, and so on.

It made me think of Douglas R. Hofstadter’s opus Gödel, Escher, Bach: An Eternal Golden Braid. This is a fascinating book that, if you can get through it (I admit I struggled through parts), wants you to think of consciousness as the attempted resolution of a very high-order strange loop.

The Strange Loop phenomenon occurs whenever, by moving upwards (or downwards) through the levels of some hierarchical system, we unexpectedly find ourselves right back where we started.

In the book, he discusses how this pattern appears in many areas, most notably music, the works of Escher, and in philosophy, as well as consciousness.

My belief is that the explanations of “emergent” phenomena in our brains—for instance, ideas, hopes, images, analogies, and finally consciousness and free will—are based on a kind of Strange Loop, an interaction between levels in which the top level reaches back down towards the bottom level and influences it, while at the same time being itself determined by the bottom level. In other words, a self-reinforcing “resonance” between different levels… The self comes into being at the moment it has the power to reflect itself.

I can’t help but think that this idea of a strange loop, combined with Glass’s attributes of software creativity are what will lead to more intelligent computers.



Top 5 Attributes of Highly Effective Programmers

What attributes contribute to a highly successful software developer, versus the ordinary run-of-the-mill kind? I don’t believe the attributes listed here are the be-all, end-all list, nor do I believe you have to be born with them. Nearly all things in life can be learned, and these attributes are no exception.

Like this article? Check out my new book, Writing High-Performance .NET Code.

Humility

Humility is first because it implies all the other attributes, or at least enables them. There are a lot of misunderstandings of what humility is and sometimes it’s easier to explain it by describing what humility isn’t:

  • humility isn’t letting people walk all over you
  • humility isn’t suppressing your opinions
  • humility isn’t thinking you’re a crappy programmer

C.S. Lewis said it best, in the literary guise of a devil trying to subvert humanity:

Let him think of [humility] not as self-forgetfulness but as a certain kind of opinion (namely, a low opinion) of his own talents and character. . . Fix in his mind the idea that humility consists in trying to believe those talents to be less valuable than he believes them to be. . . By this method thousands of humans have been brought to think that humility means pretty women trying to believe they are ugly and clever men trying to believe they are fools.*

Ok, so we realize humility isn’t pretending to be worse than you are and it’s not timidity. So what is it?

Simply put, humility is an understanding that the world doesn’t begin and end with you. It’s accepting that you don’t know everything there is to know about WPF, or Perl, or Linux. It’s an acknowledgment of the fact that, even if you’re an expert in some particular area, there is still much to learn. In fact, there is far more to learn than you could possibly do in a lifetime, and that’s ok.

Once you start assuming you’re the expert and final word on something, you’ve stopped growing, stopped learning, and stopped progressing. Pride can make you obsolete faster than you can say “Java”.

The fact is that even if you’re humble, you’re probably pretty smart. If you work in a small organization with few programmers (or any organization with few good programmers), chances are you’re more intelligent than the majority of them when it comes to computers. (If you are smarter than all of them about everything, then either you failed the humility test or you need to get out of that company fast). Since you happen to know more about computers and how software works than most people, and since everybody’s life increasingly revolves around using computers, this will give you the illusion that you are smarter than others about everything. This is usually a mistake.

Take Sales and Marketing for example. I have about 50 Dilbert strips hung in my office. I would guess half of them make fun of Sales or Marketing in some way. It’s easy, it’s fun, and it’s often richly deserved!

But if they didn’t sell your software, you wouldn’t get paid. You need them as much as they need you. If someone asked you to go sell your software, how would you do it? Do you even like talking to people? As clueless as they are about the realities of software development, they have skills you don’t.

There are some industries where extreme ego will get you places. I do not believe this to be the usual case in software, at least in companies run by people who understand software. Ego is not enough–results matter. If you have a big ego, you’d better be able to back it up. Unfortunately, the problem with egos is that they grow–eventually you won’t be able to keep up with it, and people will see through you.

The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. – Dijkstra

Without humility, you will make mistakes. Actually, even with humility you’ll make mistakes, but you’ll realize it sooner. By assuming that you know how to solve a problem immediately, you may take steps to short-circuit the development process. You may think you understand the software so well that you can easily fix a bug with just a few tweaks…and yet, you didn’t realize that this other function over here is now broken. A humble programmer will first say “I don’t know the right way to solve this yet” and take the time to do the analysis.

Finally, humble people are a lot more pleasant to work with. They don’t make their superiority an issue. They don’t always have the “right answer” (meaning everybody else is wrong, of course). You can do pair programming with a humble person, you can do code reviews with a humble person, you can instruct a humble person.

Leave your ego out of programming.

It’s not at all important to get it right the first time. It’s vitally important to get it right the last time. – Andrew Hunt and David Thomas

Perhaps most importantly, a humble programmer can instruct him/herself…

Love of Learning

If you’re new to this whole programming thing, I hate to break it to you: school has just begun. Whatever you thought of your BS/MS in Computer Science you worked so hard at–it was just the beginning. It will get you your first job or two, but that’s it. If you can’t learn as you go, from now until you retire, you’re dead weight. Sure, you’ll be able to find a job working somewhere with your pet language for the rest of your life–COBOL and Fortran are still out there after all, but if you really want to progress you’re going to have to learn.

Learning is not compulsory. Neither is survival. – W. Edwards Deming

This means reading. A lot. If you don’t like reading, I suggest you start–get into Harry Potter, fantasy, science fiction, historical fiction, whatever. Something. Just read. Then get some technical books. Start with my list of essential developer books. They’re not as exciting as Harry Potter, but they’re not bad either.

A lot of material is online, but for high-quality, authoritative prose on fundamental subjects, a good book beats all.

Reading isn’t enough, though. You have to practice. You have to write your own test projects. You have to force yourself to push your boundaries. You can start by typing in code samples, but then you need to change them in ways unique to you. You should have personal projects and hobbies that expand your skills. Write your own tools, or “fun” programs. Write a game. Do what it takes to learn new things!

The type of programs you write has a big bearing on how well you learn new material. It has to be something that interests you, or you won’t keep it up. In my case, I’m developing software related to my LEGO hobby. In the past, I’ve written tools for word puzzles, system utilities, multimedia plug-ins, and more. They all started out of a need I had, or a desire to learn something new and useful to me personally.

Another aspect of this love of learning is related to debugging software. An effective developer is not satisfied with a problem until they understand how it works, why it happens, and the details of how to fix it. The details matter–understanding why a bug occurs is just as important as knowing what to do to fix it.

Learn from mistakes. I have seen programmers who make a mistake, have the correct solution pointed out to them, say, “huh, wonder why it didn’t work for me,” and go on their merry way, none the wiser. Once their code is working “the way it should,” they’re done. They don’t care why or how—it works and that’s enough. It’s not enough! Understand the mistake, what fixed it, and why.

Good judgement comes from experience, and experience comes from bad judgement. – Fred Brooks

Obviously, some balance has to be struck here. You cannot learn everything–it simply isn’t possible. Our profession is becoming increasingly specialized because there is simply too much out there. I also think that in some respects, you need to love learning just for the sake of learning.

Detail-orientedness

Developing software with today’s technologies is all about the details. Maybe in 100 years, software will progress to the point where it can write itself, be fully component-pluggable, self-documenting, self-testing, and then…there won’t be any programmers. But until that comes along (if ever), get used to paying attention to a lot of details.

To illustrate: pick a feature of any software product, and try to think of all the work that would have to be done to change it in some way. For anything non-trivial, you could probably come up with a list of a hundred discrete tasks: modifying the UI (which includes graphics, text, localization, events, customization, etc.), unit tests, algorithms, interaction with related components, and on and on, each discrete step being broken into sub-steps.

I have always found that plans are useless, but planning is indispensable. – Dwight Eisenhower

Here’s a problem, though: few humans can keep every single task in their head, especially over time. Thankfully, detail-orientedness does not necessarily mean being able to mentally track each and every detail. It means that you develop a mental pattern to deal with them. For example, the steps of changing a piece of software could be:

  1. Thoroughly understand what the code is doing and why.
  2. Look for any and all dependencies and interactions with this code.
  3. Have a well-thought-out mental picture of how it fits together.
  4. Examine the consequences of changing the feature.
  5. Update all related code that needs it (and repeat this cycle for those components).
  6. Update auxiliary pieces that might depend on this code (build system, installer, tests, documentation, etc.).
  7. Test and repeat.

An example: I find that as I’m working on a chunk of code, I realize there are several things I need to do after I’m finished with my immediate task. If I don’t do them, the software will break. If I try to remember all of them, one will surely slip by the wayside. I have a few choices here:

  1. Defer until later, while trying to remember them all
  2. Do them immediately
  3. Defer until later, after writing them down

Each of these might be useful in different circumstances. Well…maybe not #1. I think that’s doomed to failure from the start and creates bad habits. If the secondary tasks are short, easy, and well-understood, just do them immediately and get back to your primary task.

However, if you know they’ll require a lot of work, write them down. I prefer a sturdy engineering notebook in nearly all cases, but text files, Outlook tasks, notes, OneNote, bug-tracking systems, and other methods can all work together to enable this.

The more experience you have, the more easily you’ll track the details you need to worry about. You’ll also analyze them more quickly, but you will always need some way of keeping track of what to do next. There are simply too many details. Effective organization is a key ability of any good software engineer.

Another aspect of paying attention to details is critical thinking. Critical thinking implies a healthy skepticism about everything you do. It is particularly important as you examine the details of your implementation, designs, or plans. It’s the ability to pull out of those details what is important, what is correct, or on the other hand, what is garbage and should be thrown out. It also guides when you should use well-known methods of development, and when you need to come up with a novel solution to a hard problem.

Adaptability

“Enjoying success requires the ability to adapt. Only by being open to change will you have a true opportunity to get the most from your talent.” – Nolan Ryan

Change happens. Get used to it. This is a hard one for me, to tell the truth. I really, really like having a plan and following it, adapting it to my needs, not those of others.

The fact is, in software development, the project you end up writing will not be the one you started. This can be frustrating if you don’t know how to handle it.

To become adaptive first requires a change in mind set. This mind set says that change is inevitable, it’s ok, and you’re ready for it. How do you become ready for it? That’s a whole other topic in itself, and I will probably devote a separate essay to it.

Other than the shift in mind set, start using techniques and technologies that enable easy change. Things like unit testing, code coverage, and refactoring all enable easier modification of code.

In war as in life, it is often necessary when some cherished scheme has failed, to take up the best alternative open, and if so, it is folly not to work for it with all your might. – Winston Churchill

For me, the first step in changing my mind set is to not get frustrated every time things change (“But you specifically said we were NOT going to implement the feature to work this way!”).

Passion

I think passion is up there with humility in importance. It’s so fundamental that, without it, the others don’t matter.

Anyone can dabble, but once you’ve made that commitment, your blood has that particular thing in it, and it’s very hard for people to stop you. – Bill Cosby

It’s also the hardest to develop. I’m not sure if it’s innate or not. In my own case, I think my passion developed at a very early age. It’s been there as long as I remember, even if I had periods of not doing much with it.

I’ve interviewed dozens of prospective developers at my current job, and this is the one thing I see consistently lacking. So many of them are in it just for another job. If that’s all you want, just a job to pay the bills, so be it. (Of course, I have to ask, if that’s the case, why are you reading this article?)

One person with passion is better than forty people merely interested. – E. M. Forster

There’s a world of difference between someone who just programs and someone who loves to program. Someone who just programs will probably not be familiar with the latest tools, practices, techniques, or technologies making their way down the pipeline. They won’t think about programming outside of business hours. On the weekends, they do their best to forget about computers. They have no personal projects, no favorite technologies, no blogs they like to read, and no drive to excel. They have a hard time learning new things and can be a large burden on an effective development team.

Ok, that’s maybe a bit of exaggeration, but by listing the counterpoints, it’s easier to see symptoms of someone who does have passion:

  • Thinks and breathes technology
  • Reads blogs about programming
  • Reads books about programming
  • Writes a blog about programming
  • Has personal projects
  • These personal projects are more important than the boring stuff at work
  • Keeps up with latest technologies for their interests
  • Pushes for implementation of the latest technologies (not blindly, of course)
  • Goes deep in technical problems
  • Not content with merely coding to spec
  • Needs an outlet for creativity, whether professional (software design) or personal (music, model building, LEGO building, art, etc.)
  • Thinks of the world in terms of Star Trek

Just kidding on the last one…

…(maybe)

That’s my list. It’s taken a few months to write this, and I hope it’s genuinely useful to someone, especially new, young software engineers just getting started. This is a hard industry, but it should also be fun. Learning these attributes, changing your mindset, and consciously deciding to become the engineer and programmer you want to be are the first steps. And also part of every step thereafter.

Nobody is born with any of these–they are developed, practiced, and honed to perfection over a lifetime. There is no better time to start than now.

* The Screwtape Letters, C.S. Lewis


Check out my latest book, the essential, in-depth guide to performance for all .NET developers:

Writing High-Performance .NET Code, 2nd Edition by Ben Watson. Available for pre-order:

Infinity – Infinite Energy

Power. Electricity. The Holy Grail of modern technology.

I say this because the information revolution completely depends on electricity, whether it’s batteries, hybrid motors, or the grid. Everything we do depends on converting some naturally occurring resource into power to drive our lives.

I was thinking about power recently while watching an episode of Star Trek: The Next Generation. Everything they do depends on an infinite (or nearly so) source of energy. Their warp core powers the ship for a 20-year mission. Each device they have is self-powered. From what? Do they need recharging? I imagine not, but it’s been a while since I’ve read the technical manual.

In any case, much of that world (and other Sci-Fi worlds) depends on powerful, long-lasting, disconnected energy sources. For one example, think of the energy required to power a laser-based weapon. And it has to fire more than once.

The truth is that having such a power source is more than world-changing. It has the potential to completely rebuild society from the ground up. If you think about it, much of the world’s conflict is over sources of energy. Authority and power are derived from who controls the resources. If energy were infinitely available, it would be infinitely cheap (at least in some sense). I almost think it would change society from being so focused on worldly gain to being more focused on the pursuit of knowledge, enlightenment, and improvement. We wouldn’t have to worry about how to get from one place to another, or who has more oil, or what industries to invest energy resources in. So much would come free.

When I speak of “infinite” power, don’t take it literally. What I mean is “so much that it is practically unlimited.”

Of course there are different types of infinities:

  1. Infinite magnitude – Can produce any amount of power you desire. Not very likely. Something like this would be dangerous. “OK, now I want Death Star phasers. Go.” Boom.
  2. Infinite supply – There’s a maximum magnitude in the amount of power it can generate, but it can continue “forever” (or at least a reasonable approximation of forever). This is the useful one.

And there are a few other requirements we should consider:

  1. Non-destructive. To the environment, to mankind, etc.
  2. Highly-efficient.
  3. Contained and controlled. Obvious.
  4. Portable. Sometimes microscopically so.

It’s nice to dream about such things…

  • Cell phones and laptops that never need recharging
  • Tiny devices everywhere that never need an external power source (GPS, sensors, communications devices, robots, etc.)
  • Cars that never need fuel. Ever. We’d probably keep them a lot longer. They could do more: be larger, more efficient, faster, safer.
  • Vehicles that can expand the boundaries of their current form. How big can you make an airplane if you don’t have to worry about using up all its fuel? (not to mention the weight)
  • Easier access to orbit: the space program suddenly becomes much more interesting. Maybe we could develop engines that produce enough power to escape gravity without using propellant (a truly ancient technology).
  • Devices that can act more intelligently and simply do more than current devices. Think of your iPod, which turns itself off after a few minutes of inactivity. That scenario would be a thing of the past.

With such a power source, the energy economy of devices, which we have to pay such close attention to now, goes out the window. Who cares how much energy a device uses if there’s an endless amount to go around? (And since we’ve already established that the energy source is non-destructive and highly efficient, environmental factors don’t enter in.) There would be no need for efficiency until you started bumping up against the boundaries of how much power you needed.


