Invitation to an Unfamiliar Sort of Experimental Science

September 6, 2002

Book Review
James Case

Figure 1. The top row in each box gives one of the possible combinations of colors for a cell and its immediate neighbors. The bottom row then specifies the color of the center cell on the next step. All figures from A New Kind of Science.

A New Kind of Science. By Stephen Wolfram, Wolfram Media Inc., Champaign, Illinois, 2002, xiv + 1197 pages + unpaginated index, $44.95.

Stephen Wolfram's third book describes the truly ambitious program of research on which he embarked some twenty years ago. He has conducted novel computer experiments to explore mysteries that "ordinary science" has largely failed to penetrate. His targets range from "the origins of apparent randomness in physical systems" to "the character of intelligence in the universe," and include the interplay between free will and determinism, the development of complexity in biology and economics, the ultimate scope and limitations of mathematics, and the possibility of "a truly fundamental theory of physics." It all began with a show-stopping discovery in what had seemed a routine search.

During the early 1980s, Wolfram had begun to experiment with what might be called "discrete cell-coloring systems." The simplest of them color the cells in an infinite grid---not necessarily rectangular---starting in the top row and working down, according to specific rules. More complicated systems color higher-dimensional grids. Wishing to begin at the beginning, Wolfram originally restricted his attention to the 256 rules requiring just two colors---say black and white---in which the color of each individual cell of a planar grid is dictated by the colors of the three cells immediately above it. Such rules have representations of the sort shown in Figure 1.

Each experiment was begun by blackening a single square in the top row of the grid, and allowing a rule of the type illustrated in Figure 1 to dictate the contents of all subsequent (lower) rows. Wolfram seems to have begun with the expectation---although he never quite says so---that he would eventually learn to predict (calculate) the two-color pattern that a given rule would produce. Indeed, as Figure 2 suggests, the early results probably served to fuel his optimism.
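The experiment just described is easy to reproduce. The sketch below (my own minimal illustration, not Wolfram's code) decodes a rule number into its eight-entry lookup table of the kind shown in Figure 1, then runs the automaton from a single black cell:

```python
# A minimal sketch (not Wolfram's code) of the two-color, nearest-neighbor
# cellular automata described above. The binary digits of the rule number
# give the new color for each of the 8 possible (left, center, right)
# neighborhoods.

def ca_rows(rule, width, steps):
    """Yield successive rows of a width-cell grid (wrapped at the edges),
    starting from a single black (1) cell in the middle of the top row."""
    # Lookup table: bit i of the rule number is the output for neighborhood i,
    # where i = 4*left + 2*center + 1*right.
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        yield row
        row = [table[4 * row[(i - 1) % width] + 2 * row[i] + row[(i + 1) % width]]
               for i in range(width)]

# Rule 250 produces the repetitive pattern of Figure 2; rule 90 the nested one.
for row in ca_rows(90, 31, 8):
    print("".join(".#"[c] for c in row))
```

Running this with rules 250, 90, and 30 reproduces in miniature the progression the review describes.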

Figure 2. Basic examples of behavior produced by cellular automata with simple underlying rules: repetition (rule 250, left) and nesting (rule 90, right).

The pattern on the left in Figure 2 is repetitive, while that on the right exhibits nesting. It was not until he tested his "rule 30" that Wolfram encountered anything more complex. Figure 3 displays the first 50 rows of the output generated by rule 30. Like Ed Lorenz on discovering the chaotic system of ODEs that now bears his name, Wolfram realized immediately that the output before him was too intricate and baffling to permit any realistic hope of prediction.

Figure 3. The first 50 rows of the output generated by rule 30 illustrate the complexity of the output that can be obtained from very simple computer programs.

Wolfram examined more than a million rows of output from rule 30, and has reproduced numerous blocks of it in the book in an effort to build up a valid new intuition about the complexity of output obtainable from very simple computer programs. Having examined in minute detail the middle column of output from rule 30, among other things, he concluded that it passes every known test for a finite random sequence.

As a result, each release of Mathematica, starting with the very first, has used exactly this process to generate (pseudo-) random binary sequences. Carefully programmed, such subroutines run quickly and appear to perform as randomly as any rival.
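The scheme can be sketched as follows; this is an illustration of the idea only, not Mathematica's actual implementation:

```python
# Sketch of using rule 30's center column as a pseudo-random bit source.
# Illustrative only; this is not Mathematica's actual implementation.

def rule30_bits(n, width=None):
    """Return the first n bits of the center column of rule 30, run from a
    single black cell. The grid is made wide enough that boundary effects
    can never reach the center column within n steps."""
    width = width or (2 * n + 3)
    row = [0] * width
    row[width // 2] = 1
    bits = []
    for _ in range(n):
        bits.append(row[width // 2])
        # Rule 30 in Boolean form: new cell = left XOR (center OR right)
        row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
    return bits

print("".join(map(str, rule30_bits(32))))
```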

Wolfram's next task was to convince himself that the complexity inherent in rule 30 is not a unique occurrence. He devotes several chapters to the project, searching one class of discrete cell-coloring system after another for complexity and apparent randomness. Among the most important classes considered are Turing machines, which operate on doubly infinite "tapes," broken up into squares, with a mobile "head" that may or may not (i) alter the color of the currently occupied square before (ii) altering its own internal state and (iii) moving to an adjacent square.

Figure 4 depicts a Turing machine with only two colors---gray and white---and three internal states---call them noon, four o'clock, and eight o'clock---along with the results for a launch of the machine from a white square at noon. The clock "hand" in each row of the figure identifies the square over which the head comes to rest after each three-phase operation. The successive rows in the grid depict the state of the tape before and after each of the first 30 such operations. With no more than three internal states, only repetitive or nested patterns (like the ones in Figure 2) are ever produced. But with four such states, randomness and complexity can emerge.

Figure 4. An example of a Turing machine. Like a mobile automaton, the Turing machine has one active cell, or "head," but now the head has several possible states, indicated by the direction of the arrows in the figure.
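A machine of this kind is simple to simulate. The sketch below uses a hypothetical two-state, two-color transition table for illustration; it is not the specific three-state machine of Figure 4:

```python
# A minimal sketch of the kind of Turing machine described above: a head
# with a few internal states moving over a two-color tape. The transition
# table further down is a hypothetical example, not the machine of Figure 4.

from collections import defaultdict

def run_tm(rules, steps):
    """rules maps (state, color) -> (new_color, new_state, move), with
    move = +1 (right) or -1 (left). Returns the tape as a dict."""
    tape = defaultdict(int)          # doubly infinite tape, all white (0)
    state, pos = 0, 0
    for _ in range(steps):
        color = tape[pos]
        new_color, state, move = rules[(state, color)]  # (ii) new state
        tape[pos] = new_color        # (i) recolor the current square
        pos += move                  # (iii) move to an adjacent square
    return dict(tape)

# Hypothetical two-state, two-color machine, for illustration only:
rules = {
    (0, 0): (1, 1, +1), (0, 1): (1, 1, -1),
    (1, 0): (1, 0, -1), (1, 1): (0, 0, +1),
}
tape = run_tm(rules, 30)
print(sum(tape.values()), "black squares after 30 steps")
```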

Both Turing machines and cellular automata can operate on multiple tapes or on higher-dimensional grids. On two-dimensional grids, output is recorded on successive pages in a book, rather than in successive rows on a grid. Instructions for coloring a specific square on page n + 1 depend on the colors (and states, if any) of the corresponding square on page n, along with its "official" neighbors. Squares with a common edge are always neighbors, while those with only a common corner may or may not be regarded as such. That ambiguity disappears on hexagonal grids, which have much to recommend them.
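The two square-grid conventions just mentioned are commonly called the von Neumann neighborhood (edge-sharing cells only) and the Moore neighborhood (edge- and corner-sharing cells); a two-line sketch makes the distinction concrete:

```python
# The two common square-grid neighborhoods: edge-sharing cells only
# ("von Neumann"), or edge- and corner-sharing cells ("Moore").

VON_NEUMANN = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if abs(dx) + abs(dy) == 1]
MOORE = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
         if (dx, dy) != (0, 0)]
print(len(VON_NEUMANN), len(MOORE))   # 4 and 8 neighbors respectively
```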

To avoid jumping to conclusions prematurely, Wolfram was careful to examine a wide variety of discrete cell-coloring systems, including what he describes as tag systems, substitution systems, symbolic systems, and register systems, along with first-generation hybrids. He observes repeatedly that he would never have examined half as many had Mathematica not made it so easy to do. He finds that complexity in overall system behavior requires the coloring rules to exceed a definite threshold of complexity, but that the threshold is surprisingly low: Every class of systems investigated was found to harbor at least a few instances of complex and apparently random behavior.

It would not be surprising to obtain a column of random output from rule 30 if the entire top row of Figure 3 had been filled in at random. It is only surprising that randomness should occur in the output without having been included in the input. Wolfram has devoted considerable thought to the nature of randomness, with special attention to the randomness found in natural processes. In his opinion, most of the randomness studied to date is exogenous randomness, in the sense that it is fed into an otherwise deterministic system from without, either in the form of random initial data or as a series of random errors made by the system in applying its (hard-wired) coloring rules. Too little thought, he believes, has been given to the sources of randomness in nature.

He points out that the apparent randomness in Figure 3, and in the output of his other discrete cell-coloring systems, can be of neither sort: Those systems were invariably supplied with the simplest possible initial data, and functioned flawlessly throughout each experiment. He believes the carefully documented randomness in the output to be of a third and different kind, endogenous to the discrete cell-coloring system that generated it.

Trained in the natural sciences---in 1979, at the tender age of 20, he earned a PhD in theoretical physics from Caltech---Wolfram never intended to study simple computer programs for their own sake. On the contrary, his interest in them was sparked by a suspicion that such systems are responsible for important natural phenomena. One such phenomenon is the formation of snowflakes in moist atmospheres at or near 0° Celsius. He begins his investigation of that process with a display of actual snowflakes, asking whether and how a simple mechanism could be responsible for so large a variety of shapes. He observes, in passing, that some snowflakes are characterized by finger-like extremities protruding from a more or less convex core, while others look as if they had once possessed such extremities, before the intervening spaces were (at least partially) filled in.

To model the snowflake-formation process, he observes first that the molecules of a snowflake lie on or near a planar hexagonal grid in which significant numbers of cells remain ice-free. That's the way real snowflakes develop---each time new ice is added to a snowflake, a quantity of heat is released near the site of the addition, inhibiting nearby growth. Wolfram therefore compares the shapes of real snowflakes to the stages through which a two-dimensional cellular automaton evolves when, starting with a small cluster of black cells, it blackens a white cell if and only if exactly one of that cell's (hexagonal) neighbors was black during the previous period. While none of the resulting figures exactly replicate any of the real snowflakes on display, most exhibit features found in nature. Whereas anyone attempting to build a more complete model of snowflake formation must confront an intimidating variety of additional effects, with no way of knowing in advance which will turn out to be of practical importance, Wolfram's extremely simple model appears to furnish an appropriate starting point for more detailed explorations.
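The growth rule described above can be sketched directly, using axial coordinates to represent the hexagonal grid (each cell then has exactly six neighbors); this is my own minimal rendering of the rule, not Wolfram's code:

```python
# A sketch of the snowflake rule described above, on a hexagonal grid
# represented with axial coordinates (each cell has six neighbors).

HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def grow(black, steps):
    """black is a set of occupied (q, r) cells. At each step, a white cell
    turns black if and only if exactly one of its six neighbors is black."""
    black = set(black)
    for _ in range(steps):
        counts = {}
        for (q, r) in black:
            for (dq, dr) in HEX_NEIGHBORS:
                cell = (q + dq, r + dr)
                if cell not in black:
                    counts[cell] = counts.get(cell, 0) + 1
        black |= {cell for cell, n in counts.items() if n == 1}
    return black

flake = grow({(0, 0)}, 6)
print(len(flake), "cells after 6 steps")
```

Plotting the successive sets of black cells produces the alternation of dendritic growth and partial filling-in that the review describes.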

He subsequently devotes several pages to cellular automata intended to simulate the fracture of solids, before moving on to the most elusive physical process he attempts to explain: the flow of a viscous fluid about a solid obstacle. He begins his study of fluids by imagining a two-dimensional liquid or gas to consist of minute particles constrained to move along the edges of an infinite hexagonal grid. He then displays---as usual, diagrammatically---his assumptions concerning collisions. Those involving the fixed obstacle are presumed to be elastic. Finally, he exhibits the results. At resolutions that render individual fluid particles visible, no patterns appear. Yet when average velocities are computed over windows containing a few hundred hexagonal cells, the results include the images shown in Figure 5, which strikingly resemble photos from wind tunnel experiments.

Figure 5. A large example of a cellular automaton system. In each picture there are a total of 30 million underlying cells. The individual velocity vectors drawn correspond to averages over 20 x 20 blocks of cells. Particles are inserted in a regular way at the left-hand end so as to maintain an overall flow speed equal to about 0.4 of the maximum possible. To make the patterns of flow easier to see, the velocities shown are transformed so that the fluid is on average at rest and the plate is moving. The underlying density of the particles is approximately one per cell, or 1/6 the maximum possible---a density that more or less minimizes the viscosity of the fluid. The Reynolds number of the flow shown is then approximately 100. The agreement with experimental results on actual fluid flows is striking.
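A single update step of a lattice gas in this spirit can be sketched compactly. The version below uses the simpler HPP model on a square grid (four velocity channels per site) rather than the hexagonal grid Wolfram uses; head-on pairs of particles scatter at right angles, and everything else just streams:

```python
# A sketch of one lattice-gas step, using the simpler HPP model on a square
# grid (four velocity channels per site) instead of the hexagonal grid
# described in the text. Head-on pairs collide and scatter at right angles.

import numpy as np

def hpp_step(ch):
    """ch has shape (4, H, W); channel order: E, W, N, S (booleans)."""
    E, W, N, S = ch
    # Collision: exactly a head-on E-W pair turns into N-S, and vice versa.
    ew = E & W & ~N & ~S
    ns = N & S & ~E & ~W
    E, W = (E & ~ew) | ns, (W & ~ew) | ns
    N, S = (N & ~ns) | ew, (S & ~ns) | ew
    # Streaming: each particle moves one cell in its direction (periodic).
    E = np.roll(E, 1, axis=1)
    W = np.roll(W, -1, axis=1)
    N = np.roll(N, -1, axis=0)
    S = np.roll(S, 1, axis=0)
    return np.stack([E, W, N, S])

rng = np.random.default_rng(0)
ch = rng.random((4, 32, 32)) < 0.2      # random initial gas, density 0.2
for _ in range(10):
    ch = hpp_step(ch)
print("particles:", int(ch.sum()))      # particle number is conserved
```

As in Wolfram's computation, the update is exact integer (here Boolean) arithmetic, so particle number is conserved to the last bit and no round-off error can accumulate.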

At low speeds, the pattern of eddies is completely regular. But above a critical speed, the pattern begins to exhibit random irregularities---like those in Figure 5---which closely resemble those associated with turbulence in real viscous fluids. These random irregularities cannot have been inserted into the problem through the details of the initial data---a possibility that can never be ruled out in wind tunnel experiments---since the data were chosen to be entirely regular and the collision rules remain inviolate. On the contrary, the apparent randomness in Figure 5 can have been generated only by the mechanism itself. It seems likely, therefore, that the same can be said for turbulence in the atmosphere, the sea, and other naturally turbulent fluids. The turbulence found in nature is more likely to have arisen spontaneously than to have been imposed from without.

Wolfram goes on to observe that, since all his computations were made in integers, they contain no propagating round-off error. The results are therefore completely reliable as they apply to the imaginary fluids in question. If a close connection could be established between those fluids and the sort that obey the Navier-Stokes equations, it should be possible to deduce theorems of a sort currently in short supply---theorems pertaining to conditions far downstream from an obstacle. Wolfram displays 21 photographs of exceedingly intricate fluid flows, including clouds, flames from an oil fire, billowing smoke, and a drop of ink fallen into a glass of water, and predicts that remarkably simple programs will one day "reproduce the main features of even the most intricate and apparently random forms of fluid flow." If and when that happens, it will strengthen his argument that programs are more fundamental than equations in the study of natural phenomena.

Wolfram comes to regard any process that evolves over time according to definite rules as a "computation" in which the (hard-wired) rules of evolution define a programmable computer, while the (user-supplied) initial data furnish whatever information may be needed to run that program, beginning with the program itself. Accordingly, natural processes---which evolve according to the laws of nature---are computations too. Some are ridiculously simple, while others are obviously complex. But surprisingly simple rules can lead to complex evolutionary behavior, as his experiments abundantly demonstrate.

His main conclusion is contained in what he calls the "principle of computational equivalence," his preferred statement of which asserts that "almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication." In other words, there exists a notion of computational equivalence that separates all possible computations into two and only two classes. These cannot be P and NP, since P is contained in NP and problems are known that belong to neither. Yet the first must contain all computations that achieve "the highest level of computational sophistication," while the second contains everything else.

He gives the name "universality" to the property possessed by every member of the former class, and demonstrates that cellular automata, Turing machines, register machines, and "practically every kind of system that I have considered in this book" can possess this property. Indeed, among the 256 simplest cellular automata, rule 110 has been shown by Matthew Cook, a former assistant of Wolfram's, to be universal. At least three other rules like 110 are known to be universal, while others are demonstrably not. The jury is still out on rule 30, which seems to be universal but has yet to be proven so. Generally speaking, Wolfram finds it easier to prove that a given rule is universal than to demonstrate that another one is not.

It may seem strange to describe a twelve-hundred-page book as "compact." Yet if this one is read as an invitation and guide to an unfamiliar sort of experimental science, it merits that description. Despite Wolfram's conversational style and the myriad pictures adorning the book's pages, almost every paragraph seems to contain helpful hints for the legion of emulators Wolfram clearly hopes to spawn. Though many of his findings are unoriginal---pioneers in the field of cellular automata and complexity already knew much of what he has learned---the book under review helps unify seemingly unrelated fields of research and expand scattered frontiers of knowledge, and just might someday live up to its ambitious title.

James Case writes from Baltimore, Maryland.
