Tag Archives: future

Software Creativity and Strange Loops

I’ve been thinking a lot lately about the kind of technology and scientific understanding that would need to go into a computer like the one on the Enterprise in Star Trek, and specifically its interaction with people. It’s a computer that can respond to questions in context—that is, you don’t have to restate, in every question, everything needed to answer it. The computer has been monitoring the conversation and has thus built up a context that it can use to understand and respond intelligently.

A computer that records and correlates conversations in real time must have a phenomenal ability (compared to our current technology) not just to parse the content syntactically, but also to construct semantic models of it. If a computer is going to respond intelligently to you, it has to understand you. This is far beyond our current technology, but we’re moving there. In 20 years, who knows where this will be. In 100, we can’t even imagine it. 400 years is nearly beyond contemplation.

The philosophy of computer understanding, and of human-computer interaction specifically, is incredibly interesting. I was led to think a lot about this while reading Robert Glass’s Software Creativity 2.0. This book is about the design and construction of software, but it has a deep philosophical undercurrent running throughout that kept me richly engaged. Much of the book is presented as conflicts between opposing forces:

  • Discipline versus flexibility
  • Formal methods versus heuristics
  • Optimizing versus satisficing
  • Quantitative versus qualitative reasoning
  • Process versus product
  • Intellectual versus clerical
  • Theory versus practice
  • Industry versus academe
  • Fun versus getting serious

Too often, neither of these sides is entirely “right”—each is just part of the problem (or the solution). While the book was written from the perspective of software construction, I think you can twist the intention just a little and consider them as attributes of software itself: not just how to write it, but how software must function. Most of those titles can be grouped into a dichotomy of Thinking versus Doing.

Thinking: Flexibility, Heuristics, Satisficing, Qualitative, Process, Intellectual, Theory, Academe

Doing: Discipline, Formal Methods, Optimizing, Quantitative, Product, Clerical, Practice, Industry

Computers are wonderful at the doing, not so much at the thinking. Much of thinking is synthesizing information, recognizing patterns, and highlighting the important points so that we can understand them. As humans, we have to do this or we are overwhelmed and comprehend nothing. A computer has no such requirement—all information is available to it, yet it has no capability to synthesize it, or to apply experience and perhaps (seemingly) unrelated principles to the situation. In this respect, the computer’s advantage in quantity is far outweighed by its lack of understanding. It has all the context in the world, but no way to apply it.

A good benchmark for a reasonable AI on the level I’m dreaming about is a program that can synthesize a complex set of documents (be they text, audio, or video) and produce a comprehensible summary that is not just selected excerpts from each. This functionality implies an ability to understand and comprehend on many levels. To do this will mean a much deeper understanding of the problems facing us in computer science, as represented in the list above.

You can start to think of these attributes/actions as mutually beneficial and dependent, influencing one another recursively: distinct at first, then morphing into a spiral, each an input to the other. Quantitative reasoning leads to qualitative analysis, which leads back to quantitative measures, and so on.

It made me think of Douglas R. Hofstadter’s opus Gödel, Escher, Bach: An Eternal Golden Braid. This is a fascinating book that, if you can get through it (I admit I struggled through parts), asks you to think of consciousness as the attempted resolution of a very high-order strange loop.

The Strange Loop phenomenon occurs whenever, by moving upwards (or downwards) through the levels of some hierarchical system, we unexpectedly find ourselves right back where we started.

In the book, he discusses how this pattern appears in many areas, most notably in music, the works of Escher, and philosophy, as well as in consciousness.

My belief is that the explanations of “emergent” phenomena in our brains—for instance, ideas, hopes, images, analogies, and finally consciousness and free will—are based on a kind of Strange Loop, an interaction between levels in which the top level reaches back down towards the bottom level and influences it, while at the same time being itself determined by the bottom level. In other words, a self-reinforcing “resonance” between different levels… The self comes into being at the moment it has the power to reflect itself.

I can’t help but think that this idea of a strange loop, combined with Glass’s attributes of software creativity, is what will lead to more intelligent computers.

We Need More Growth of Nuclear Power

With this post, I’m beginning a new series or category of blog posts that I’m loosely terming “A Better Future.”


I’ve been thinking a lot about the grail of infinite power, spurred on by the enormous rise in gas prices this week.

While I am all in favor of reducing wasteful consumption, increasing efficiency, and generally being smarter about everything, I do not believe we will ever reduce our energy requirements in the long term. We are always inventing, always creating, and most things we create require power in some form. It’s a fool’s errand to try to reduce the total energy we’ll use overall. This doesn’t even take into account all of the peoples of the world who are just now beginning to participate in the global economy. There will always be something to eat up the energy we produce. Fighting against this trend seems to me, in a way, like trying to run evolution and progress backwards. Our race as a whole won’t do that. Given this, it makes much more sense to develop clean, efficient, abundant, cheap sources of energy.

Increasingly, I am convinced that the way forward is to build out a vast network of nuclear reactors powering our grid. We already have an enormous network of power distribution–we should be taking more advantage of it.

According to the US Department of Energy (DOE), our 103 active nuclear plants provide 20% of the nation’s electricity. You can even get the operational status of each one.

Worldwide, the IAEA predicts that electric power generation in 2015 will be roughly 20,000 billion kilowatt-hours. In that year, nuclear generation will provide roughly 2,972 billion kilowatt-hours, or just under 15%. That report has a lot of other information, and I highly encourage you to read it.

We need to increase that percentage drastically–to the point where it supplies power not just to homes, but to plug-in hybrid cars, and everything else.

Nuclear power has gotten a bad rap in the US and other parts of the world for a long time. I think the attitudes are changing, but not quickly enough. At what point will the benefits outweigh the risks in most minds? I think that point is almost upon us.

With the increasing development of pebble-bed reactors, nuclear technology is advancing. We need to increase this development to promote further advances in the safety and efficiency of these promising power sources. None of the operational reactors in the US are pebble-bed reactors (aka HTGRs–high-temperature gas-cooled reactors), nor are any planned; there is only a research reactor at Idaho National Laboratory. All of the commercial HTGR development is taking place for other countries. These reactors, while not universally acclaimed, seem to be safer and cheaper, and their spent fuel is less easily repurposed as weapons-grade material.

We can’t wait for others to do these things–we need to do them. Our country needs to get in on the act at a higher level of commitment than ever. We can’t wait for these technologies to become perfected, either–that will happen over time. As we use a technology more, we will learn new techniques, ways to improve efficiency, and how to lower costs further.

There is no excuse for the US not to be a leader in this area–we have one of the largest energy demands, the most capital, the most to gain by investing in it, and the most to lose by not doing it.

The next generation of nuclear technology may not be the ultimate energy savior we’re looking for, but it’s a huge step in the right direction–a step we’ve delayed taking for too long.

Nuclear certainly has some down sides, but I’ll discuss those in a future entry.

Relevant Links:

  1. Pebble-bed reactors at Wikipedia
  2. Energy Information Administration / Department of Energy
  3. International Atomic Energy Agency (IAEA)
  4. Inconvenient Truths: Get Ready to Rethink What It Means to Be Green (Wired Magazine)
  5. Idaho National Laboratory
  6. Modular Pebble Bed Reactor (MIT)

Infinity – Infinite Energy

Power. Electricity. The Holy Grail of modern technology.

I say this because the information revolution completely depends on electricity, whether it’s batteries, hybrid motors, or the grid. Everything we do depends on converting some naturally occurring resource into power to drive our lives.

I was thinking about power recently while watching an episode of Star Trek: The Next Generation. Everything they do depends on an infinite (or nearly so) source of energy. Their warp core powers the ship for a 20-year mission. Each device they have is self-powered. From what? Do they need recharging? I imagine not, but it’s been a while since I’ve read the technical manual.

In any case, much of that world (and other Sci-Fi worlds) depends on powerful, long-lasting, disconnected energy sources. For one example, think of the energy required to power a laser-based weapon. And it has to fire more than once.

The truth is that having such a power source is more than world-changing. It has the potential to completely rebuild society from the ground up. If you think about it, much of the world’s conflict is over sources of energy. Authority and power are derived from who controls the resources. If energy were infinitely available, it would be infinitely cheap (at least in some sense). I almost think it would shift society from being so focused on worldly gain to being more focused on the pursuit of knowledge, enlightenment, and improvement. We wouldn’t have to worry about how to get from one place to another, or who has more oil, or which industries to invest energy resources in. So much would come free.

When I speak of “infinite” power, don’t take it literally. What I mean is “so much as to be practically unlimited.”

Of course there are different types of infinities:

  1. Infinite magnitude – Can produce any amount of power you desire. Not very likely. Something like this would be dangerous. “OK, now I want Death Star phasers. OK. Go.” Boom.
  2. Infinite supply – There’s a maximum magnitude to the amount of power it can generate, but it can continue “forever” (or at least a reasonable approximation of forever). This is the useful one.

And there are a few other requirements we should consider:

  1. Non-destructive: to the environment, to mankind, etc.
  2. Highly-efficient.
  3. Contained and controlled. Obvious.
  4. Portable. Sometimes microscopically so.

It’s nice to dream about such things…

  • Cell phones and laptops that never need recharging
  • Tiny devices everywhere that never need an external power source (GPS, sensors, communications devices, robots, etc.)
  • Cars that never need fuel. Ever. We’d probably keep them a lot longer. They could do more: be larger, more efficient, faster, safer.
  • Vehicles that can expand the boundaries of their current form. How big can you make an airplane if you don’t have to worry about using up all its fuel? (not to mention the weight)
  • Easier to get things into orbit–space program suddenly becomes much more interesting. Maybe we can develop engines that produce enough power to escape gravity, without using propellant (a truly ancient technology).
  • Devices that can act more intelligently, and simply do more than current devices. Think of your iPod turning itself off after a few minutes of inactivity. That scenario would be a thing of the past.

With such a power source, the energy economy of devices, which we have to pay such close attention to now, goes out the window. Who cares how much energy a device uses if there’s an endless amount to go around? (And since we’ve already established that the energy source is non-destructive and highly efficient, environmental factors don’t enter in.) There would be no need for efficiency until you started bumping up against the boundaries of how much power you could draw.


Infinity – Infinite Storage

Anybody who’s taken high school or college mathematics knows how phenomenal exponential growth is. Even if the growth rate is very, very small, it eventually adds up. With that in mind, look at this quick-and-dirty chart I made in Excel, plotting the growth in hard drive capacity over the years. [source: http://www.pcguide.com/ref/hdd/hist-c.html]

[Chart: hard drive capacity by year, plotted on a logarithmic scale]

OK, it’s ugly, but notice a few things:

  1. The pink points are the data from the source, plus one I added myself (1,000 GB in 2007).
  2. The scale is logarithmic, not linear. Each y-axis gridline represents a ten-fold increase in capacity.
  3. At the current rate of growth, by 2020, we’ll have 1,000,000 GB hard drives. That’s 1 petabyte (1PB). (by the way, petabyte is not in Live Writer’s spelling dictionary–get with the times Microsoft!)
  4. The formula, as calculated by Excel, says that the drive capacity should double roughly every 2 years.

Also, this doesn’t really take into account multiple-hard-drive storage schemes like NAS, RAID, etc. Right now, it’s quite easy to lash individual storage units together into such packages for more space, redundancy, etc. I’ll ignore that ability for now.

So 2020: that’s 12 years from now. We can expect to have a petabyte in our computers. That’s a LOT of space. Imagine the amount of data that can be stored. How about every book ever written? How about all your music, high-def DVDs, ripped with no lossy compression?

Tools such as Live Desktop and Google Desktop take on a whole new level of importance when faced with the task of cataloging petabytes of information on your home PC. Because, let’s face it, you’ll never delete anything. You’ll take thousands of pictures with your digital camera and never delete any of them. You’ll take hours of high-def footage and never watch or edit it, but you’ll want to find something in it (with automated voice recognition and image analysis, of course). Every e-mail you get over your entire lifetime can be permanently archived.

What if you could get a catalog of every song ever recorded? That would probably require more than a few petabytes, even compressed, but we’re heading that way. I don’t think the amount of music in the world is increasing exponentially, is it? Applications like iTunes and Windows Media Player, not to mention devices like iPods, would have to have a carefully designed interface to handle organizing and searching that much music. I think Windows Media Player 11 is incredible, but I don’t think it could handle more than about 100,000 songs without choking–has anyone approached any practical limits with it?

What about the total information in the world–that probably is increasing exponentially. Will we eventually have enough storage so that everyone can have their own local, easily searchable copy of the vast sum of human knowledge and experience? (Ignoring the question of why we would want to)

Let’s extrapolate this growth out 100 years, to the year 2100. I won’t show the graph, but capacity approaches 10²⁰ GB by then.
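
If you want to play with the extrapolation yourself, here’s a rough sketch in Python. It assumes plain exponential growth with a fixed doubling time (my simplification, not the actual Excel trend line), starting from the 1,000 GB drive of 2007 mentioned above.

    # Rough sketch: project hard drive capacity assuming a fixed doubling time.
    # This is a simplification of the Excel fit above, so treat the output as
    # an order-of-magnitude estimate only.

    def projected_capacity_gb(start_year, start_gb, doubling_years, target_year):
        """Capacity in GB at target_year if capacity doubles every doubling_years."""
        doublings = (target_year - start_year) / doubling_years
        return start_gb * 2 ** doublings

    # Starting point from the chart: roughly 1,000 GB in 2007.
    for year in (2020, 2100):
        print(year, projected_capacity_gb(2007, 1000, 2, year))

Note how sensitive the answer is to the doubling time: at two years per doubling you get only about 10⁵ GB by 2020, while hitting 1 PB by 2020 implies a doubling time closer to 1.3 years (and correspondingly larger numbers by 2100).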

How do the economics of digital goods change when you can have an infinite number of them? It’s the opposite of real estate, an ever-diminishing good.

On my home PC, for the first time, I have a lot of storage that isn’t being used. I have about 1 TB of storage, and about 300 GB free. I suppose I could rip all my DVDs, and re-rip all my music with lossless compression (it’s currently all WMA at 192 Kbps).

The rules of the game can change quickly when that much storage is available. It will be interesting to see what happens in the coming decades. Of course, all this discussion is completely ignoring the increasingly connected, networked world we live in.


Getting Green Off the Grid

Going green is something I am slowly becoming more interested in. I’m not really sure exactly what steps we need to take–I don’t think we have an inordinate impact on the environment, and to be honest, right now my pocketbook is far more important. That said, I do drive a Honda Civic that I’ve been able to get more than 42 mpg out of. We try to use everything we buy, and dispose of, give away, recycle, or sell everything we don’t need. We try to walk places when we can.

Thank you to Eric for his contribution to BuyMeALego. He has a genuinely interesting site: Getting Green Off the Grid is a blog about living both more sustainably and more independently.

About the site:

This is a journal of my research into becoming more independent, away from the power grid. My goal one day is to live out in the middle of nowhere, dependent upon none but myself and my family. That dream is a long way away, but every little step counts.

I think that is a very enviable position to be in–completely independent. Generating your own power in particular fascinates me. Or better yet, being able to sell your excess power back to the power company.

I don’t think it’s possible to turn off our dirty technologies or habits all at once, but having people like this who do the research, who advocate, who publicize the next big clean technology is absolutely vital. We need to start down the path and have smart people working on it hard. We’ll get there, eventually.

Anyway, I think I will subscribe to his blog for a while and check it out–the posts I’ve read are interesting and he links to some good stuff.


What would the human race look like?

On my drive into work this morning, I heard an interesting story on WAMU (sorry, I can’t find the specific story link) about a Korean-American adopted by white American parents. While initially struggling against her Korean heritage, she eventually came to appreciate and be proud of it. The commentator, himself an adopted Korean in the same situation, was very grateful for the chance to grow up in such a mixed household.

Along with these thoughts, there’s a set of novels I recently finished: Orson Scott Card’s Shadow series. In it, he describes an Earth that starts out a lot like the one we know today (maybe a few hundred years in the future) and is eventually unified under a single ruler. The novels are very good, despite the fact that they leave out quite a few details about how this would happen, given the nature of humanity. On the other hand, maybe the series is merely evoking the philosophy that humans are inherently good, and that given the right set of circumstances, they will choose to do good for all of us. I think I could believe that, despite what we see in the world today.

But these two stories together got me thinking: what if the entire world were open to us–no borders, easy transportation, peaceful coexistence, interdependency, leading to high intermixing, intermarriage, etc.? What would we end up looking like as a human race? I initially thought of this question in terms of physical features, but it’s interesting to think about language, culture, economics, technology–anything at all. It certainly requires a great deal of imagination and taking things for granted to see this world, but I think it’s interesting in a futuristic, sci-fi sort of way.

Worse than Y2K–what if gravity changes?

Though the danger to life, civilization, and the future of all that is good and beautiful was greatly oversold, Y2K was still a pretty big deal. It required the detailed analysis and updating of millions of lines of legacy code in all sectors, levels, nooks, and crannies of computer civilization.

We survived, somehow. Planes didn’t fall out of the air. Elevators did not plummet to the basement. Satellites did not launch lasers and nukes at random targets. Cats and Dogs did not start living together.

But what if something even more fundamental than our calendaring system changed?

What if a fundamental assumption about the way Earth functions changed?

Take, for example, gravity. The force of gravity is defined by the following equation:

F = GMm / r²

Constants are:

  • G – universal gravitational constant: 6.6742 × 10⁻¹¹ N·m²/kg²
  • M – mass of the first object. Earth = 5.9724 × 10²⁴ kg
  • m – mass of the second object.
  • r – distance from center to center of the objects. Earth’s radius = 6,378,100 m

This can be simplified for use on earth to:

F = mg

where

  • m – mass of the object on earth’s surface
  • g – gravitational acceleration at earth’s surface

We can compute g by setting the two equations equal to each other and canceling the common term m:

g = GM / r²

If we substitute the values above, we get g = 9.801585 m/s².

That’s the value that is hard-coded into all the missile launchers, satellite control software, airplane flight control logic, embedded physics math processors, and Scorched Earth games in the world.

So what if it changed? It’s not likely, but it could happen. If a significant amount of mass were added to or taken from the earth due to, say, a catastrophic asteroid hit, gravity could be affected.

But how much would it have to change?

Given the current values, F = mg for 50 kg yields 490.08 N of force on the earth. If earth’s mass increased by 1%, g would be equal to 9.899601, and F would be 494.98 N. Would we feel heavier?

It would certainly destroy precision instrumentation.

However, 1% is a LOT: 5.9724 × 10²² kg. By comparison, the moon is 7.36 × 10²² kg, and the mass of all known asteroids is less than that. On the other hand, if you think gravity can’t be affected by a reasonable event, read this.

So just to be safe against future modifications, make sure all your software takes G, M, m, and r as parameters and calculates g as needed. You can never be too careful.
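
A minimal sketch of what that might look like, in Python (the names and the 1%-heavier-earth scenario below are just my illustration, not anyone’s actual flight software):

    # Minimal sketch: derive g from G, M, and r instead of hard-coding 9.8.
    # Constants match the values listed above; the "+1% mass" case is illustrative.

    G = 6.6742e-11         # universal gravitational constant, N·m²/kg²
    M_EARTH = 5.9724e24    # mass of the earth, kg
    R_EARTH = 6_378_100.0  # radius of the earth, m

    def surface_gravity(mass_kg=M_EARTH, radius_m=R_EARTH):
        """g = GM / r², in m/s²."""
        return G * mass_kg / radius_m ** 2

    def surface_weight(m_kg, mass_kg=M_EARTH, radius_m=R_EARTH):
        """F = mg for an object of mass m_kg at the surface."""
        return m_kg * surface_gravity(mass_kg, radius_m)

    print("g today:      ", surface_gravity())               # ≈ 9.80 m/s²
    print("50 kg weighs: ", surface_weight(50))               # ≈ 490 N
    print("g, earth +1%: ", surface_gravity(M_EARTH * 1.01))  # ≈ 9.90 m/s²
    print("50 kg, +1%:   ", surface_weight(50, M_EARTH * 1.01))

Depending on the exact radius and mass you plug in, you’ll get a value a hair different from the 9.801585 above; the point is simply that g is derived at runtime rather than baked in.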

😉