Welcome to Asimov's Science Fiction

Thought Experiments: Adventures in Gnarly Computation
by Rudy Rucker

Everything Is a Computation

What is reality? One type of answer to this age-old question has the following format: “Everything is ________.” Over the years I’ve tried out lots of different ways to fill in the blank: particles, bumps in spacetime, thoughts, mathematical sets, and more. I once had a friend who liked to say, “The universe is made of jokes.”

Now there may very well be no correct way to fill in the “Everything is” blank. It could be that reality is fundamentally pluralistic, that is, made up of all kinds of mutually incompatible things. Maybe there really isn’t any single underlying substance. But it’s interesting to think that perhaps there is.

Lately I’ve been working to convince myself that everything is a computation. I call this belief universal automatism. Computations are everywhere, once you begin to look at things in a certain way. The weather, plants and animals, your personal thoughts and shifts of mood, society’s history and politics—all computations.

 One handy aspect of computations is that they occur at all levels and in all sizes. When you say that everything’s made of elementary particles, then you need to think of large-scale objects as being made of a zillion tiny things. But computations come in all scales, and an ordinary natural process can be thought of as a single high-level computation.

 If I want to say that all sorts of processes are like computations, it’s to be expected that my definition of computation must be fairly simple. I go with the following: A computation is a process that obeys finitely describable rules.

People often suppose that a computation has to “find an answer” and then stop. But our general notion of computation allows for computations that run indefinitely. If you think of your life as a kind of computation, it’s abundantly clear that there’s not going to be a final answer, and there won’t be anything particularly wonderful about having the computation halt! In other words, we often prefer a computation to yield an ongoing sequence of outputs rather than to attain one final output and turn itself off.
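
Here is a toy sketch of that idea in Python (my illustration; the logistic map stands in for any finitely describable rule). The computation never halts, it just keeps emitting outputs:

```python
def computation(state, rule):
    """A computation in this essay's sense: apply a finitely
    describable rule over and over, yielding an ongoing stream
    of outputs rather than one final answer."""
    while True:                # no halting required
        yield state
        state = rule(state)

# One finite rule: the logistic map, a single line of arithmetic.
rule = lambda x: 3.9 * x * (1.0 - x)

stream = computation(0.5, rule)
first_five = [next(stream) for _ in range(5)]
print(first_five)
```

The rule is finitely describable (one line), yet the stream of outputs goes on as long as you care to draw from it.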

Everything Is a Gnarly Computation

If we suppose that many natural phenomena are in effect computations, the study of computer science can tell us about the kinds of natural phenomena that can occur. Starting in the 1980s, the scientist-entrepreneur Stephen Wolfram did a king-hell job of combing through vast seas of possible computations, getting a handle on the kinds of phenomena that can occur, exploring the computational universe.

Simplifying just a bit, we can say that Wolfram found three kinds of processes: the predictable, the random-looking, and what I term the gnarly. These three fall into a Goldilocks pattern.

• Too cold (predictable). Processes that produce no real surprises. This may be because they die out and become constant, or because they’re repetitive in some way. The repetitions can be spatial, temporal, or scaled so as to make fractally nested patterns that are nevertheless predictable.

• Too hot (random-looking). Processes that are completely scuzzy and messy and dull, like white noise or video snow. The programmer William Gosper used to refer to computational rules of this kind as “seething dog barf.”

• Just right (gnarly). Processes that are structured in interesting ways but nonetheless unpredictable. In computations of this kind we see coherent patterns moving around like gliders; these patterns produce large-scale information transport across the space of the computation. Gnarly processes often display patterns at several scales. We find them fun to watch because they tend to appear as if they’re alive.
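
Wolfram’s elementary cellular automata make handy specimens of all three classes. The sketch below is a minimal implementation (rules 250, 30, and 110 are the exemplars usually cited for the predictable, random-looking, and gnarly classes); it prints a little space-time diagram for each:

```python
def eca_step(cells, rule):
    """One step of an elementary cellular automaton: each cell's
    next value depends only on itself and its two neighbors,
    looked up in the 8-entry table encoded by the rule number."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, steps=8, width=16):
    cells = [0] * width
    cells[width // 2] = 1            # single seed cell
    rows = [cells]
    for _ in range(steps):
        cells = eca_step(cells, rule)
        rows.append(cells)
    return rows

# Rule 250: predictable (an expanding checkerboard triangle).
# Rule 30:  random-looking (its center column serves as a
#           pseudo-random generator).
# Rule 110: gnarly, with glider-like structures; it has in fact
#           been proven computation-universal.
for r in (250, 30, 110):
    print(f"rule {r}:")
    for row in run(r):
        print(''.join('#' if c else '.' for c in row))
```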

Gnarliness lies between predictability and randomness. It’s an interface phenomenon like organic life, poised between crystalline order and messy deliquescence.

Why do I use the word gnarly? Well, the original meaning of “gnarl” was simply “a knot in the wood of a tree.” In California surfer slang, “gnarly” came to be used to describe complicated, rapidly changing surf conditions. And then, by extension, something gnarly came to be anything with surprisingly intricate detail. As a late-arriving and perhaps over-assimilated Californian, I get a kick out of the word.

Clouds, fire, and water are gnarly in the sense of being beautifully intricate, with purposeful-looking but not quite comprehensible patterns. Although the motion of a projectile through empty space would seem to be predictable, if we add in the effects of mutually interacting planets and suns, the calculation may become gnarly. And earthly objects moving through water or air tend to leave turbulent wakes—which very definitely involve gnarly computations.

All living things are gnarly, in that they inevitably do things that are much more complex than one might have expected. The shapes of tree branches are of course the standard example of gnarl. The life cycle of a jellyfish is way gnarly. The wild three-dimensional paths that a hummingbird sweeps out are kind of gnarly too, and, if the truth be told, your ears are gnarly as well.

Needless to say, the human mind is gnarly. I’ve noticed, for instance, that my moods continue to vary even if I manage to behave optimally and think nice correct thoughts about everything. I might suppose that this is because my moods are affected by other factors—such as diet, sleep, exercise, and biochemical processes I’m not even aware of. But a more computationally realistic explanation is simply that my emotional state is the result of a gnarly unpredictable computation, and any hope of full control is a dream.

Still on the topic of psychology, consider trains of thought, the free-flowing and somewhat unpredictable chains of association that the mind produces when left on its own. Note that trains of thoughts need not be formulated in words. When I watch, for instance, a tree branch bobbing in the breeze, my mind plays with the positions of the leaves, following them and automatically making little predictions about their motions. And then the image of the branch might be replaced by a mental image of a tiny man tossed up high into the air. His parachute pops open and he floats down toward a city of lights. I recall the first time I flew into San Jose, and how it reminded me of a great circuit board. I remind myself that I need to see about getting a new computer soon, and then in reaction, I think about going for a bicycle ride. And so on.

Society, too, is carrying out gnarly computations. The flow of opinion, the gyrations of the stock markets, the ebb and flow of success, the accumulation of craft and invention—gnarly, dude.

So What?

If you were to believe all the ads you see, you might imagine that the latest personal computers have access to new, improved methods that lie wholly beyond the abilities of older machines. But computer science tells us that if I’m allowed to equip my old machine with additional memory chips, I can always get it to behave like any new computer.

This carries over to the natural world. Many naturally occurring processes are not only gnarly, they’re capable of behaving like any other kind of computation. Wolfram feels that this behavior is very common, and he formulates this notion in the claim that he calls the Principle of Computational Equivalence (PCE): Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication.

If the PCE is true, then, for instance, a leaf fluttering in the breeze outside my window is as computationally rich a system as my brain. I seem to be a fluttering leaf? Some scientists find this notion an affront. Personally, I find serenity in accepting that the flow of my thoughts and moods is a gnarly computation that’s fundamentally the same as a cloud, a flame, or a fluttering leaf. It’s soothing to realize that my mind’s processes are inherently uncontrollable. Looking at the waving branches of trees calms me down.

But rather than arguing for the full PCE, I think it’s worthwhile to formulate a slightly weaker claim, which I call the Principle of Computational Unpredictability (PCU): Most naturally occurring complex computations are unpredictable.

In the PCU, I’m using “unpredictable” in a specific computer-science sense; I’m saying that a computation is unpredictable if there’s no fast shortcut way to predict its outcomes. If a computation is unpredictable and you want to know what state it’ll be in after, say, a million steps, you pretty much have to crunch out those million steps to find out what’s going to happen.
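
To see the difference, compare a process that has a shortcut with one that doesn’t. In the sketch below, the arithmetic progression has a closed form, so step one million costs no more than step one; the Collatz rule is my stand-in for an unpredictable computation (its irreducibility is conjectural, not proven), and the only known way to learn where it lands is to crunch out every step:

```python
def predictable(n):
    """A repetitive process x -> x + 3 has a shortcut formula:
    jump straight to step n without simulating anything."""
    return 5 + 3 * n

def collatz_steps(x):
    """The Collatz rule: halve if even, else triple and add one.
    No shortcut is known; to learn a trajectory's length you
    pretty much have to crunch out every step."""
    steps = 0
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        steps += 1
    return steps

print(predictable(1_000_000))   # instant: 3000005
print(collatz_steps(27))        # 111 steps, found only by iterating
```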

Traditional science is all about finding shortcuts. Physics 101 teaches students to use Newton’s laws to predict how far a cannonball will travel when shot into the air at a certain angle and with a certain muzzle velocity. But, as I mentioned above, in the case of a real object moving through the air, if we want full accuracy in describing the object’s motions, we need to take the turbulent flow of air into account. At least at certain velocities, flowing fluids are known to produce computationally complex patterns—think of the bumps and ripples that move back and forth along the lip of a waterfall, or of eddies of milk stirred into coffee. So an earthly object’s motion will often be carrying out a gnarly computation, and these computations are unpredictable—meaning that the only certain way to get a really detailed prediction of an artillery shell’s trajectory through the air is to simulate the motion every step of the way. The computation performed by the physical motion is unpredictable in the sense of not being reducible to a quick shortcut method. (By the way, simulating trajectories was the very purpose for which the U.S. built the first electronic computer, ENIAC, unveiled in 1946, the same year in which I was born.)
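
To get a feel for the gap, compare the Physics 101 shortcut with a step-by-step simulation. The drag coefficient below is an illustrative guess, not measured physics; the point is that once drag enters, no one-line formula applies and we must integrate the motion step by step:

```python
import math

g = 9.81                               # gravity, m/s^2

def range_vacuum(v, theta):
    """Physics 101 shortcut: closed-form range, no simulation."""
    return v * v * math.sin(2 * theta) / g

def range_with_drag(v, theta, k=0.002, dt=0.001):
    """With quadratic air drag (k is an illustrative coefficient)
    there is no simple formula; Euler-integrate step by step."""
    x, y = 0.0, 0.0
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= k * speed * vx * dt
        vy -= (g + k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

v, theta = 100.0, math.radians(45)
print(range_vacuum(v, theta))     # ~1019 m, one line of algebra
print(range_with_drag(v, theta))  # shorter; only simulation says how much
```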

Physical laws provide, at best, a recipe for how the world might be computed in parallel particle by particle and region by region. But—unless you have access to some so-far-unavailable ultra-super computer that simulates reality faster than the world does itself—the only way to actually learn the results is to wait for the actual physical process to work itself out. There is a fundamental gap between T-shirt physics equations and the unpredictable gnarl of daily life.

Some SF Thought Experiments

One of the nice things about science fiction is that it lets us carry out thought experiments. Mathematicians adopt axioms and deduce the consequences. Computer scientists write programs and observe the results of letting the programs run. Science fiction writers put characters into a world with arbitrary rules and work out what happens.

Science fiction is a powerful futurological tool because, in practice, there are no quick shortcuts for predicting the effects of new technological developments. Only if you place the new tech into a fleshed-out fictional world and simulate the effects in your novelistic reality can you get a clear image of what might happen.

This relates to the ideas I’ve been talking about. We can’t predict in advance the outcomes of naturally occurring gnarly systems; we can only simulate (with great effort) their evolution step by step. In other words, when it comes to futurology, only the most trivial changes to reality have easily predictable consequences. If I want to imagine what our world will be like one year after the arrival of, say, soft plastic robots, the only way to get a realistic vision is to fictionally simulate society’s reactions during the intervening year.

These days I’ve been working on a fictional thought experiment about using natural systems to replace conventional computers. My starting point is the observed fact that gnarly natural systems compute much faster than our supercomputers. Although in principle a supercomputer can simulate a given natural process, such simulations are at present very much slower than what nature does. It’s a simple matter of resources: a natural system is inherently parallel, with all its parts being updated at once. And an ordinary-sized object is made up of something on the order of an octillion atoms (10^27) <http://education.jlab.org/qa/mathatom_04.html>. Naturally occurring systems update their states much faster than our digital machines can model them. That’s why existing computer simulations of reality are still rather crude.

(Let me insert a deflationary side-remark on the Singularity that’s supposed to occur when intelligent computers begin designing even more intelligent computers and so on. Perhaps the end result of this kind of process won’t be a god. Perhaps it’ll be something more like a wind-riffled pond, a campfire, or a fly buzzing around your backyard. Nature is, after all, already computing at the maximum possible flop.)

Now let’s get into my own thought experiment. If we could harness a natural system to act as a computer for us, we’d have what you might call a paracomputer that totally outstrips anything that our man-made beige buzzing desktop machines can do. I say “paracomputer” not “computer” to point out the fact that this is a natural object which behaves like a computer, as opposed to being a high-tech totem that we clever monkeys made. Wolfram’s PCE suggests that essentially any gnarly natural process could be used as a paracomputer.

A natural paracomputer would be powerful enough to be in striking range of predicting other natural systems in real time or perhaps even a bit faster than real time. The problem with our naturally occurring paracomputers is that they’re not set up for the kinds of tasks we like to use computers for—like predicting the stock market, rendering Homer Simpson, or simulating nuclear explosions.

To make practical use of paracomputers we need a solution to what you might call the codec or coding-decoding problem. If you want to learn something specific from a simulation, you have to know how to code your data into the simulation and how to decode it back out. Like suppose you’re going to make predictions about the weather by reading tea-leaves. To get concrete answers, you code today’s weather into a cup of tea, which you’re using as a paracomputer. You swirl the cup around, drink the tea, look at the leaves, and decode the leaf pattern into tomorrow’s weather. Codec.

This is a subtle point, so let me state it again. Suppose that you want to simulate the market price of a certain stock, and that you have all the data and equations to do it, but the simulation is so complicated that it requires much more time than the real-time period you want to simulate. And you’d like to turn this computation into, say, the motions of some wine when you pour it back and forth between two glasses. You know the computational power is there in the moving wine. But where’s the codec? How do you feed the market trends into the wine? How do you get the prediction numbers out? Do you drink the paracomputer?
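
The shape of the codec problem can be written down as an interface, even though the interface is the easy part. Everything in this sketch is hypothetical scaffolding; finding a genuinely useful encode/decode pair for a real natural process is the hard problem the sketch leaves out:

```python
def paracompute(data, encode, natural_process, decode):
    """Using a natural system as a computer: the codec, the
    encode/decode pair, is what turns raw churning into an
    answer about your own problem."""
    return decode(natural_process(encode(data)))

# Toy illustration: a doubling "process" stands in for the
# swirling wine, and the codec is trivial. Every name here is
# hypothetical scaffolding, not a real algorithm.
answer = paracompute(21,
                     encode=lambda q: float(q),
                     natural_process=lambda s: s * 2.0,
                     decode=lambda s: int(s))
print(answer)  # 42
```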

Finding the codec that makes a given paracomputer useful for a particular task is a hard problem, but once you have the codec, your paracomputer can solve things very fast. But how to find the codec? Well, let’s use a science fiction cheat, let’s suppose that one of the characters in our thought experiment is, oh, a mathematical genius who creates a really clever algorithm for rapidly finding codecs that are, if not perfect, at least robust enough for practical use.

So now suppose that we’re able, for instance, to program the wind in the trees and use it as a paracomputer. Then what? For the next stage of my thought experiment, I’m thinking about a curious real-world limitative result that could come into play. This is the Margolus-Levitin theorem, which says that there’s a maximum computational rate that any limited region of spacetime can sustain at any given energy level. (See for instance Seth Lloyd’s paper on the “Computational Capacity of the Universe,” <http://arxiv.org/PS_cache/quant-ph/pdf/0110/0110141.pdf>.) The limit is pretty high—some ten-to-the-fiftieth bit-flips per second on a one-kilogram laptop—but science fiction writers love breaking limits.
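
Here is the back-of-envelope arithmetic behind that figure, following Lloyd’s one-kilogram “ultimate laptop” estimate, which takes the rest energy E = mc^2 as the energy budget:

```python
import math

hbar = 1.0546e-34        # reduced Planck constant, J*s
c = 2.998e8              # speed of light, m/s

E = 1.0 * c ** 2         # rest energy of a 1 kg laptop, E = m * c^2
max_ops_per_sec = 2 * E / (math.pi * hbar)   # Margolus-Levitin bound
print(f"{max_ops_per_sec:.1e} ops/sec")      # on the order of 10^50
```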

In the situation I’m visualizing, a couple of crazy mathematicians (some things never change!) make a paracomputer from a vibrating membrane, use clever logic to find desired codecs, and set the paracomputer to predicting its own outputs. I expect the feedback process to produce an ever-increasing amount of computation within the little paracomputer. The result is that the device is on the point of violating the Margolus-Levitin limit, and perhaps the way the universe copes with this is by bulging out a big extra hump of spacetime in the vicinity of the paracomputer. And this hump acts as—a tunnel to a higher universe inhabited by, of course, super-intelligent humanoid cockroaches and carnivorous flying cone shell mollusks!

Now let’s turn the hard-SF knob up to eleven. Even if we had natural paracomputers, we’d still be limited by the PCU, the principle that most naturally occurring computations are unpredictable. Your paracomputers can speed things up by a linear factor because they’re so massively parallel. Nevertheless, by the PCU, most problems would resist being absolutely crushed by clever shortcuts. The power of the paracomputer may indeed let you predict tomorrow’s weather, but eventually the PCU catches up with you. You still can’t predict, say, next week’s weather. Even with a paracomputer you might be able to approximately predict a person’s activities for half an hour, but not to a huge degree of accuracy, and certainly not out to a time several months away. The PCU makes prediction impossible for extended periods of time.

Now, being a science fiction writer, when I see a natural principle, I wonder if it could fail. Even if it’s a principle such as the PCU that I think is true. (An inspiration here is a story by Robert Coates, “The Law,” in which the law of averages fails. The story first appeared in the New Yorker of November 29, 1947, and can also be found in Clifton Fadiman’s The Mathematical Magpie.)

So now let’s suppose that, for their own veiled reasons, the alien cockroaches and cone shells teach our mathematician heroes some amazing new technique that voids the PCU! This notion isn’t utterly inconceivable. Consider, for instance, how drastically the use of language speeds up the human thought process. Or the way that using digital notation speeds up arithmetic. Maybe there’s some thought tool we’ve never even dreamed of that can in fact crush any possible computation into a few quick chicken-scratches on the back of a business card. So our heroes learn this trick and they come back to spread the word.

And then we’ve got a world where the PCU fails. This is a reality where we can rapidly predict all kinds of things arbitrarily far into the future: weather, moods, stocks, health. A world where people have oracles. SF is all about making things immediate and tactile, so let’s suppose that an oracle is like a magic mirror. You look into it and ask it a question about the future, and it always gives you the right answer. Nice simple interface. What would it be like to live in a world with oracles?

I’m not sure yet. I’m still computing the outcome of this sequence of thought experiments—the computation consists of writing an SF novel called Mathematicians in Love.

How Gnarly Computation Ate My Brain

I got my inspiration for universal automatism from two computer scientists: Edward Fredkin and Stephen Wolfram. In the 1980s, Fredkin <http://www.digitalphilosophy.org/> began saying that the universe is a particular kind of computation called a cellular automaton (CA for short). The best-known CA is John Conway’s Game of Life, but there are lots of others. I myself have done research involving CAs, and have perpetrated two separate free software packages for viewing them. <http://www.rudyrucker.com/lifebox/downloads/>
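
For readers who haven’t met it, the Game of Life fits in a dozen lines of Python. The glider below is exactly the kind of coherent, information-transporting pattern discussed above:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life on an unbounded
    grid; `live` is a set of (x, y) coordinates of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 steps the same five-cell shape reappears,
# shifted one cell diagonally. Coherent patterns like this carry
# information across the space of the computation.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g4 = glider
for _ in range(4):
    g4 = life_step(g4)
print(g4 == {(x + 1, y + 1) for (x, y) in glider})  # True
```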

Wolfram is subtler than Fredkin; he doesn’t say that the universe is a cellular automaton. Wolfram feels that the most fundamental secret-of-life type computation should instead be something like a set of rules for building up a network of lines and dots. He’s optimistic about finding the ultimate rule; recently I was talking to him on the phone and he said he had a couple of candidates, and was trying to grasp what it might mean to say that the secret of the universe might be some particular rule with some particular rule number. Did someone say 42?

I first met Wolfram at the Princeton Institute for Advanced Study in 1984; I was a freelancer writing an article about cellular automata destined for, as chance would have it, Asimov’s Science Fiction (April 1987). You might say that Wolfram converted me on the spot. I moved to Silicon Valley, retooled, and became a computer science professor at San Jose State University (SJSU), also doing some work as a programmer for the computer graphics company Autodesk. I spent the last twenty years in the dark Satanic mills of Silicon Valley. Originally I thought I was coming here as a kind of literary lark—like an overbold William Blake manning a loom in Manchester. But eventually I went native on the story. It changed the way I think.

For many years, Wolfram promised to publish a book on his ideas, and finally in 2002 he published his monumental A New Kind of Science, now readable in its entirety online <http://www.wolframscience.com/nksonline/toc.html>. I like this book exceedingly; I think it’s the most important science book of our generation. My SJSU grad students and I even created a website for it. <http://sjsu.rudyrucker.com/>

I’d been kind of waiting for Wolfram to write his book before I wrote my own book about the meaning of computation. So once he was done, I was ready to brush the lint of bytes and computer code off myself, step into the light, and tell the world what I learned among the machines. The result: The Lifebox, the Seashell, and the Soul: What Gnarly Computation Taught Me About Ultimate Reality, the Meaning Of Life, and How To Be Happy (Thunder’s Mouth Press, 2005) <http://www.rudyrucker.com/lifebox>.

Where did I get my book’s title? I invented the word “lifebox” some years ago to describe a hypothetical technological gizmo for preserving a human personality. In my book title, I’m using “Lifebox” as shorthand for the universal automatist thesis that everything, even human consciousness, is a computation.

The antithesis is the fact that nobody is really going to think that a wised-up cell-phone is alive. We all feel we have something that’s not captured by any mechanical model—it’s what we commonly call the soul.

My synthesis is that gnarly computation can breathe life and soul into a lifebox. The living mind has a churning quality, like the eddies in the wake of a rock in a stream—or like the turbulent patterns found in cellular automata. Unpredictable yet deterministic CAs can be found in nature, most famously in the patterns of the Wolfram-popularized South Pacific sea snail known as the textile cone. Thus the “seashell” of my book title. (See <http://www.rudyrucker.com/blog/search.php?q=cone+shell> for information about these venomous mollusks.)

Coming back to Wolfram’s A New Kind of Science, a lot of people seem to have copped an attitude about this book. Although it sold a couple of hundred thousand copies, many of the reviews were negative <http://www.math.usf.edu/~eclark/ANKOS_reviews.html>, and it’s my impression that people are not enthusiastically taking up his ideas. Given that I think these ideas are among the most important new intellectual breakthroughs of our time, I have to wonder about the resistance.

I see three classes of reasons why scientists haven’t embraced universal automatism. (1) Dislike the messenger. Thanks to the success of his Mathematica software, Wolfram is a millionaire entrepreneur rather than a professor. Perhaps as a result, he has a hard-sell writing style, an iconoclastic attitude toward current scientific practice, and a sometimes cavalier attitude toward the niceties of sharing credit. (2) Dislike the form of the message. Some older scientists resent the expansion of computer science and the spread of computational technology. If you hate and fear computers, you don’t want to hear the world is made of computations! (3) Dislike the content of the message. Wolfram’s arguments lead to the conclusion that many real-world scientific questions are impossible to solve. Being something of a perennial enfant terrible, Wolfram is prone to putting this as bluntly as possible, in effect saying that traditional science is a blind alley, a waste of time. Even though he’s to some extent right, it’s hardly surprising that the mandarins of science aren’t welcoming him with open arms.

One thing that sets my book off from Wolfram’s is the goal. At this point in my life, I don’t worry very much about convincing anyone of anything. To me the real purpose of writing a science book is to achieve personal enlightenment. And to get new ideas for science fiction novels.

On the enlightenment front, The Lifebox, the Seashell, and the Soul ends with a discussion of six keys to happiness, drawn from considerations involving six successively higher levels of gnarly computation. And these will make a nice note upon which to end this article.

• Computer science. Turn off the machine. Nature computes better than any buzzing box.

• Physics. See the gnarl. The world is doing interesting things all the time. Keep an eye on the clouds, on water, and on the motions of plants in the wind.

• Biology. Pay attention to your body. It’s at least as smart as your brain. Listen to it, savor its complexities.

• Psychology. Release your thoughts from obsessive loops. Avoid repetition and go for the gnarl.

• Sociology. Open your heart. Others are as complex as you. Each of us is performing much the same kind of computation. You’re not the center.

• Philosophy. Be amazed. The universe is an inexplicable miracle.


Copyright

"Thought Experiments: Adventures in Gnarly Computation" by Rudy Rucker, copyright © 2005, with permission of the author.

Copyright © 2014 Dell Magazines, A Division of Penny Publications, LLC