A Rational Approach to "Earth Management"

July 10, 2001

Increased interaction between mathematical and geophysical scientists, the goal of a recent NSF-sponsored workshop, is seen as a key to progress in solving the huge, often multiscale problems that arise in the geosciences.

From plate tectonics to large-scale flows in the oceans and complex tropospheric circulation, geoscientists identify a host of scientific problems to which mathematics has much to contribute.

Scientific investigations of geophysical phenomena have two goals: an understanding of the states and typical behavior modes of the system, and an ability to make forecasts. No one would contest the pressing need for improved weather and climate forecasts, and the case is just as compelling for earthquake predictions and advance knowledge of changes in ocean conditions.

All geophysical systems are characterized by activity on a vast range of interacting scales. Moreover, the inherent nonlinearities in the systems complicate the separation of the scales. The notorious advection term of hydrodynamics creates much of the nonlinearity, but other effects share in the responsibility, including temperature- (or property-) dependent diffusion coefficients (viscosity can change by ten orders of magnitude in the earth's crust). Turbulent flows in the atmosphere and in ocean basins have huge Reynolds numbers, with excited length scales ranging from centimeters to thousands of kilometers; the time scales are coupled as well, with the rapidly changing atmosphere forcing the slowly evolving ocean.

Laboratory experiments can represent only very simplified situations, and the verification of geophysical theory thus depends largely on unrepeatable observations. Recent advances in observation technologies have resulted in an abundance of data in all areas of the geosciences. Ironically, however, the data are still very sparse: Most atmospheric data are obtained from weather balloons located over land, and subsurface data for the oceans come from comparatively small numbers of float experiments. This situation presents enormous challenges for computational modelers, who must not only assimilate vast quantities of different types of data, but also effectively fill in spatial and temporal regions not covered by the data.

In March, the National Science Foundation brought together a broad group of scientists to consider these issues. The charge of the group, which met for three days at the Institute for Mathematics and Its Applications of the University of Minnesota, was to formulate a vision for increasing interaction between mathematical scientists and geophysical researchers. A pilot program that will fund new projects incorporating mathematical advances in the geosciences is to be announced soon by NSF. Advance notice of these upcoming interdisciplinary funding opportunities, along with an overview of the workshop, was given in June by Thomas Fogwell at the SIAM Conference on Mathematical and Computational Issues in the Geosciences.

As discussions at the NSF/IMA workshop progressed and interactions between the disciplines increased, a number of issues came into focus. The tremendously enthusiastic group discussed specific research issues, as mentioned below, but also emphasized the need for effective interdisciplinary education, training, and support if people are to work effectively in this area. One striking observation was the extraordinary overlap, across the different areas of the geosciences, in both the basic issues being confronted and the mathematical needs.

An Array of Mathematical Problems
Multiscale phenomena arise in all areas of the geosciences. The atmosphere is in motion on planetary, synoptic, meso, and convective scales, with mixing and transfer of energy, mass, and other state properties (e.g., temperature, humidity) among these scales. In the ocean, large-scale currents like the Gulf Stream are influenced by small-scale turbulence-like activity. Parametrizations of effects at different scales are often achieved by crude approximations, such as eddy-viscosity techniques for the ocean. Hierarchical models incorporating activity at different scales are a critical need. A related demand is for models that describe different levels of evolving activity; biological populations in the ocean are affected by local fluid conditions, for example, and models that mix biophysical and fluid mechanical effects are now emerging. While some multiscale models have been developed, there was a strong sense at the meeting that computational and mathematical techniques are really just now poised to contribute to significant advances.
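
The flavor of such a parametrization can be conveyed with a deliberately simple sketch: a Smagorinsky-type eddy viscosity computed on a coarse two-dimensional grid, in which unresolved mixing appears as an enhanced, flow-dependent viscosity proportional to the resolved strain rate. The constant, the grid, and the synthetic "resolved" flow below are illustrative assumptions, not settings from any operational model.

    import numpy as np

    # Smagorinsky-type eddy viscosity on a coarse 2-D grid: unresolved mixing is
    # represented by nu_e = (C_s*dx)^2 * |S|, with |S| the magnitude of the
    # resolved strain-rate tensor.  C_s and the synthetic flow are assumptions.

    def eddy_viscosity(u, v, dx, C_s=0.17):
        """Eddy-viscosity field for resolved velocity components u, v."""
        dudx, dudy = np.gradient(u, dx)
        dvdx, dvdy = np.gradient(v, dx)
        S11, S22 = dudx, dvdy                  # normal strain rates
        S12 = 0.5 * (dudy + dvdx)              # shear strain rate
        S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
        return (C_s * dx) ** 2 * S_mag

    # Synthetic resolved flow: a single basin-scale gyre in a 100-km box.
    N, L = 64, 1.0e5                           # grid points, domain size [m]
    dx = L / N
    coord = np.linspace(0.0, L, N)
    X, Y = np.meshgrid(coord, coord, indexing="ij")
    u = -np.sin(np.pi * X / L) * np.cos(np.pi * Y / L)    # [m/s]
    v = np.cos(np.pi * X / L) * np.sin(np.pi * Y / L)

    nu_e = eddy_viscosity(u, v, dx)
    print("eddy viscosity [m^2/s]: min %.2f, max %.2f" % (nu_e.min(), nu_e.max()))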

Given the sparseness of data and the complexity of motion in the atmosphere, the oceans, and on land, there is a striking need for quantification of the uncertainty in predictions. This is a natural consequence of the increasing use of stochastic modeling; many issues remain unresolved, however, and a deeper understanding of the relation of the sources of uncertainty to the basic physics and error growth is needed. Model-validation techniques that can distinguish between models of varying complexity, in reference to collected data, need to be developed. Recently formulated theories based on statistical mechanics have been successful in capturing macroscopic features (e.g., the organization of a ring of high vorticity surrounding a region of low vorticity into a hurricane), but much work remains in the application of these ideas.

The use of stochastic modeling does not preclude the characterization of evolving geophysical systems as complex dynamical systems. Indeed, it is a partnership between these two complementary perspectives that promises the greatest advances. The view of the atmosphere as a chaotic dynamical system has been popular since the work of Lorenz in 1963. A full characterization of the "atmospheric attractor," however, has eluded researchers; the number of degrees of freedom in such a dynamical system is large enough to defy such a reduction. Nevertheless, the dynamical perspective has been very influential, and the identification of transient phenomena as particular examples of dynamical paradigms seems to be a fruitful direction. Particularly illuminating has been the dynamical systems analysis of Lagrangian phenomena, such as fluid exchange and transport.

[Illustration credit: National Oceanic and Atmospheric Administration, Pacific Marine Environmental Laboratory, Tropical Atmosphere Ocean Project; http://www.pmel.noaa.gov/tao/elnino/nino-home.html.]

A critical question concerns the meshing of comprehensive and simplified models. General circulation models attempt to include all the relevant physics, chemistry, and biology. When describing a phenomenon like El Niño (see illustration) or the input of fresh water in the Atlantic that leads to a millennial oscillation (which affects the thermohaline circulation), these models must cope with very large ranges of time and length scales. As the number of computed degrees of freedom increases, it becomes more and more difficult to understand the connection between cause and effect in such a calculation. On the other hand are incomplete models that preserve only the properties deemed to be essential; although offering the advantage of simplicity, these models may miss a key factor or exaggerate an effect.

One possible resolution is to consider a hierarchy of models, establishing systematic ways for passing from one model to the next in the hierarchy. This would involve an analysis of singular perturbations (taking limits of characteristic dimensionless parameters) of the dynamical equations. Stochastic terms would naturally enter into reduced-model equations as the contributions of the ignored variables. Such terms can be important, as when flux corrections to a simple model alter its whole bifurcation picture.
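
A toy fast-slow system conveys the idea: a slow variable forced by a rapidly decorrelating Ornstein-Uhlenbeck process is replaced, in the reduced model, by a single equation in which the eliminated fast variable reappears as white noise. All parameter values in the sketch below are hypothetical, chosen only so that the two stationary variances can be compared.

    import numpy as np

    rng = np.random.default_rng(0)

    # A slow variable x forced by a fast Ornstein-Uhlenbeck variable y with short
    # correlation time tau.  In the reduced model the fast variable is eliminated
    # and reappears as white-noise forcing of strength sigma: dX = -X dt + sigma dW.
    # All parameter values here are hypothetical.
    sigma, tau = 0.5, 0.01
    dt, n_steps = 1.0e-3, 200_000

    x = y = X_red = 0.0
    xs, Xs = [], []
    for _ in range(n_steps):
        dW_fast = rng.normal(0.0, np.sqrt(dt))
        dW_slow = rng.normal(0.0, np.sqrt(dt))
        x += (-x + y) * dt                                 # full system, slow part
        y += (-y / tau) * dt + (sigma / tau) * dW_fast     # full system, fast part
        X_red += -X_red * dt + sigma * dW_slow             # reduced model
        xs.append(x)
        Xs.append(X_red)

    burn = n_steps // 2                                    # discard the transient
    print("stationary variance, full model   : %.3f" % np.var(xs[burn:]))
    print("stationary variance, reduced model: %.3f" % np.var(Xs[burn:]))
    # Both approach sigma**2 / 2 = 0.125 as tau -> 0.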

Research throughout the geosciences is increasingly computational. As computational techniques develop, the adaptability and accuracy of the techniques are more important than ever. Researchers are looking for proofs of convergence, as well as demonstrated confidence in the robustness of the schemes. A critical characteristic of effective techniques is flexibility in assimilating data. Given the wealth of available data, efficient assimilation into computational models is essential for building the best possible models. While data assimilation is a well-developed technique with a proper claim as part of control theory, it has not received much mathematical attention.
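
The forecast-analysis cycle at the heart of sequential data assimilation can be shown in a deliberately artificial setting. The sketch below runs a scalar Kalman filter for a damped-persistence "model" observed only every fifth step; the dynamics, error variances, and observation schedule are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Sequential data assimilation with a scalar Kalman filter.  The "model" is a
    # damped persistence forecast x_{k+1} = a*x_k with model-error variance Q;
    # noisy observations (variance R) arrive only every fifth step.  All of these
    # choices are artificial, made purely for illustration.
    a, Q, R = 0.95, 0.05, 0.10
    n = 60

    # Synthetic truth and sparse observations.
    truth = np.empty(n)
    truth[0] = 1.0
    for k in range(1, n):
        truth[k] = a * truth[k - 1] + rng.normal(0.0, np.sqrt(Q))
    obs = {k: truth[k] + rng.normal(0.0, np.sqrt(R)) for k in range(0, n, 5)}

    # Forecast-analysis cycle.
    x, P = 0.0, 1.0                    # initial state estimate and error variance
    for k in range(1, n):
        x, P = a * x, a * a * P + Q    # forecast step: propagate state and error
        if k in obs:
            K = P / (P + R)            # analysis step: the Kalman gain weights
            x = x + K * (obs[k] - x)   # forecast and observation by accuracy
            P = (1.0 - K) * P
        if k % 5 == 0:
            print(f"step {k:2d}  estimate {x:+.3f}  truth {truth[k]:+.3f}")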

A related topic is the development of techniques for improving the quality of the data available through targeted observations. A mathematical theory providing methods for the optimal design of observations, with a goal of improving the accuracy of the resulting data-assimilated models, could have far-reaching consequences. Control theorists have well-developed methods for assessing the robustness of linear systems, but further development is needed if such methods are to be useful in the context of particular nonlinear geophysical situations. For the Lagrangian observations common to ocean investigations, dynamical systems methods could be used to design optimal observation strategies. For the atmosphere, singular vectors have been used to target observations by weather balloons but remain the topic of much debate.
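
The singular-vector idea can be stated compactly: if M is the tangent-linear propagator of a forecast model over the targeting interval, the leading right singular vector of M is the initial perturbation that amplifies most, and therefore indicates where additional observations would do the most good. In the sketch below, a small arbitrary matrix stands in for the linearization of a real forecast model.

    import numpy as np

    # If M is the tangent-linear propagator of a forecast model over the targeting
    # interval, the right singular vector with the largest singular value is the
    # initial perturbation that amplifies most over that interval; observing (and
    # so constraining) that structure should reduce forecast error the most.  The
    # 3x3 matrix below is an arbitrary stand-in for a real linearized model.
    M = np.array([[1.2, 0.8, 0.0],
                  [0.0, 0.9, 0.5],
                  [0.3, 0.0, 0.7]])

    U, svals, Vt = np.linalg.svd(M)
    print("amplification factors over the interval:", svals)
    print("fastest-growing initial perturbation   :", Vt[0])
    print("its evolved structure                  :", U[:, 0])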

Each of these issues arises in investigations of the atmosphere, the ocean, and the earth. Perhaps most challenging of all, however, are the problems addressing the coupling of these great geophysical systems: at the ocean-atmosphere interface as well as at the surface of the earth and in coastal regions.

Atmosphere
Driving much current research in atmospheric dynamics is the need for accurate forecasts. Propagation of errors in initial conditions produces uncertainty in meteorological forecasts, especially when the atmosphere is convectively unstable. Also, large-scale structures in the atmosphere that vary on time scales longer than those associated with weather events are partially sustained by transfer processes from small scales. A multifaceted approach to modeling the complex, multiscale phenomena of atmospheric dynamics is an obvious need. Turbulence in the presence of rotation and stratification is a fundamental issue in atmospheric research. Dynamics-based closure schemes for developed turbulent flows, improving on simple parametrizations like eddy-viscosity schemes, are a significant challenge. The use of ideas from statistical mechanics for coping with unresolved scales is seen as a promising direction. Using ideas from nonequilibrium statistical mechanics and stochastic processes, researchers may be able to design closure schemes that preserve basic properties of the underlying equations (conservation, realizability, fluctuation-dissipation, and H-theorems).

Finally, the sensitivity to initial conditions (and hence the limited predictability) of meteorological flows, first exemplified by the famous Lorenz attractor, is a crucial issue in all predictions. Better characterizations of the uncertainty, linking the physics of the atmosphere to the mathematics and statistics, are in demand. The theory of linear disturbance amplification, one recent approach to modeling fluctuations, has links to both dynamical systems and control theory.
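
A minimal numerical illustration, using the textbook Lorenz (1963) equations rather than a forecast model: two trajectories started a distance of 1e-8 apart separate at a roughly exponential rate until the error saturates at the size of the attractor.

    import numpy as np

    # Two trajectories of the Lorenz (1963) system started 1e-8 apart: their
    # separation grows roughly exponentially until it saturates at the size of
    # the attractor.  Parameters and the Runge-Kutta step are textbook choices.

    def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_step(s, dt):
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    dt = 0.01
    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1.0e-8, 0.0, 0.0])
    for step in range(1, 2501):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
        if step % 500 == 0:
            print(f"t = {step * dt:5.1f}   separation = {np.linalg.norm(a - b):.3e}")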

Oceans
The parametrization of small-scale turbulent effects for meso- and large-scale ocean circulation will be critical to the development of better models of ocean dynamics. Because strongly nonlinear waves may turn out to play an important role, theories that go beyond existing weakly nonlinear wave theory are needed. Unresolved scales will lead to stochastic terms, and various ideas from statistical mechanics may prove helpful to model-reduction efforts.

Given the Lagrangian nature of much of the available ocean data, and the importance of Lagrangian transport in the assessment of ocean mass and energy transport "budgets," methods that focus on the organization of fluid particle motion have gained currency; these methods are based on the use of dynamical systems templates for establishing special material surfaces in the flow fields (stable and unstable manifolds) that orchestrate the flow. An understanding of the connections between dynamics at this level and the Eulerian formulation of the fluid fields could lead to improved float-placement strategies and predictions of instabilities. An important issue to be addressed is the sensitivity of these methods to environmental uncertainty, including both randomness in the data and resolution of the observations.
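
One widely used diagnostic of this type is the finite-time Lyapunov exponent (FTLE), whose ridges approximate the stable and unstable manifolds that act as transport barriers. The sketch below computes an FTLE field for the standard "double gyre" test flow, a common benchmark rather than an ocean data set.

    import numpy as np

    # Finite-time Lyapunov exponent (FTLE) field of the standard "double gyre"
    # test flow.  Ridges of the FTLE field approximate the stable/unstable
    # manifolds (transport barriers) that organize fluid exchange.  The flow and
    # its parameters are a common benchmark, not an ocean model.
    A, eps, omega = 0.1, 0.25, 2.0 * np.pi / 10.0

    def velocity(x, y, t):
        a = eps * np.sin(omega * t)
        b = 1.0 - 2.0 * eps * np.sin(omega * t)
        f = a * x**2 + b * x
        dfdx = 2.0 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return u, v

    def advect(x, y, t0=0.0, T=15.0, dt=0.05):
        """Advect a grid of particles with a midpoint (RK2) scheme."""
        t = t0
        for _ in range(int(T / dt)):
            u1, v1 = velocity(x, y, t)
            u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1, t + 0.5 * dt)
            x, y, t = x + dt * u2, y + dt * v2, t + dt
        return x, y

    # Grid of tracer particles covering the [0,2] x [0,1] domain.
    nx, ny, T = 201, 101, 15.0
    x0, y0 = np.meshgrid(np.linspace(0.0, 2.0, nx), np.linspace(0.0, 1.0, ny),
                         indexing="ij")
    xT, yT = advect(x0, y0, T=T)

    # Flow-map gradient by centered differences; the largest eigenvalue of the
    # Cauchy-Green tensor C = F^T F gives the FTLE.
    dx, dy = x0[1, 0] - x0[0, 0], y0[0, 1] - y0[0, 0]
    F11, F12 = np.gradient(xT, dx, dy)
    F21, F22 = np.gradient(yT, dx, dy)
    C11 = F11**2 + F21**2
    C12 = F11 * F12 + F21 * F22
    C22 = F12**2 + F22**2
    lam_max = 0.5 * (C11 + C22 + np.sqrt((C11 - C22) ** 2 + 4.0 * C12**2))
    ftle = np.log(np.sqrt(lam_max)) / T
    print("FTLE field computed: min %.3f, max %.3f" % (ftle.min(), ftle.max()))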

The ocean, of course, does not exist without the life within it. Computational and mathematical modeling has progressed to the point that we can hope to describe the dynamics of diverse, many-species biological processes. Not only do these processes occur at scales not explicitly represented in numerical models, but their governing equations (typically of the reaction-diffusion type) are not always fully known. Moreover, these equations may not fit neatly into the continuum formulation of the transport equations; it is difficult to say when a continuum limit has been reached in a probabilistic biochemical model.
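
The structure of such a coupled model can be conveyed with a single-species sketch: a plankton concentration carried by a prescribed one-dimensional current, mixed by an eddy diffusivity, and growing logistically toward a carrying capacity. The current, diffusivity, and growth parameters below are illustrative assumptions.

    import numpy as np

    # A single plankton concentration P advected by a prescribed current u, mixed
    # by an eddy diffusivity kappa, and growing logistically toward a carrying
    # capacity K:  dP/dt + u dP/dx = kappa d2P/dx2 + r P (1 - P/K).
    # One species and a fixed 1-D current are drastic simplifications; all values
    # are illustrative.
    L, N = 1.0e5, 200                  # domain length [m], grid points
    dx = L / N
    u, kappa = 0.1, 50.0               # current [m/s], eddy diffusivity [m^2/s]
    r, K = 1.0 / 86400.0, 1.0          # growth rate [1/s], carrying capacity
    dt = 0.2 * dx / abs(u)             # advection-limited (CFL) time step

    x = np.arange(N) * dx
    P = np.where(np.abs(x - 0.25 * L) < 0.05 * L, 0.5, 0.01)   # an initial bloom

    for step in range(2000):
        # Upwind advection, centered diffusion, explicit reaction; periodic domain.
        adv = -u * (P - np.roll(P, 1)) / dx
        dif = kappa * (np.roll(P, -1) - 2.0 * P + np.roll(P, 1)) / dx**2
        rea = r * P * (1.0 - P / K)
        P = P + dt * (adv + dif + rea)

    print("after %.1f days: mean %.3f, max %.3f"
          % (2000 * dt / 86400.0, P.mean(), P.max()))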

Earth
Seismic activity and media properties are determined from the detection of three-dimensional waves. This inverse problem is both nonlinear and multiscale, even though the governing wave equations are linear. Indeed, seismic waves propagate through media with complex structures, in which the wave speed can be multiscale, fractal, or even singular. Numerical computations of propagation often require adaptive grid refinements, especially at thin boundary layers, and are further complicated by coupling with other processes, as when liquefaction occurs near the surface. Analytic and computational tools that apply to realistic geometries and boundaries would serve a vital need.

In geodynamics, the problem of estimating the material properties of the earth's interior is crucial to any modeling effort, as in studies of postglacial rebound or of the deformation velocities measured in glaciers and rocks. Strongly varying properties (viscosity and yield strength), complex rheology, mixed phases (solid/fluid), strain localization, and vastly disparate time scales are typical in these problems.

Using ideas and techniques from statistical mechanics, researchers are developing novel earthquake models. Simple slider-block models with velocity-dependent friction exhibit chaotic dynamics. A lattice of such blocks, representing faults interacting via induced stress fields, can be used to construct a statistical equilibrium model of the self-organization of seismicity. Simulations show that models of this kind are capable of mimicking universal scaling laws, such as the Gutenberg-Richter law (which relates the frequency of seismic events to their magnitude). The next challenge is to elaborate non-equilibrium models that incorporate more realistic physics in the hope of providing a fundamental understanding of the temporal and spatial behavior of major earthquakes. As in other areas of the geosciences, this ambitious goal requires that a hierarchy of models be examined and interrelated, from sand pile or cellular automata models to mean-field models with long-range interactions and beyond.
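
A cellular-automaton caricature of such a lattice of interacting blocks, in the spirit of the Olami-Feder-Christensen model, is sketched below; collecting its avalanche (event) sizes allows a crude comparison with Gutenberg-Richter-like scaling. The lattice size, dissipation parameter, and run length are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)

    # Each site carries a "stress" that is driven up uniformly; when it reaches a
    # threshold the site relaxes, passing a fraction alpha of its stress to its
    # neighbors, which may topple in turn.  Event sizes (numbers of topplings)
    # are collected so their frequency-size distribution can be examined.
    L, alpha, n_events = 48, 0.2, 20_000
    stress = rng.uniform(0.0, 1.0, size=(L, L))
    sizes = []

    for _ in range(n_events):
        # Drive: raise every site just enough to bring the most loaded one to failure.
        imax = np.unravel_index(np.argmax(stress), stress.shape)
        stress += 1.0 - stress[imax]
        stress[imax] = 1.0                       # guard against round-off
        unstable = [imax]
        size = 0
        while unstable:
            i, j = unstable.pop()
            if stress[i, j] < 1.0:               # already relaxed this pass
                continue
            s, stress[i, j] = stress[i, j], 0.0
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:  # open (dissipative) boundaries
                    stress[ni, nj] += alpha * s
                    if stress[ni, nj] >= 1.0:
                        unstable.append((ni, nj))
        sizes.append(size)

    sizes = np.array(sizes)
    for s in (1, 2, 4, 8, 16, 32):
        print(f"events of size >= {s:2d}: {np.sum(sizes >= s)}")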

Summary
Mathematical techniques play a huge role in developments in the geosciences and, at the same time, benefit from interesting new research directions arising from the geophysics. A number of research directions in dynamical systems and statistical approaches to multiple-scale processes are ripe for broad research efforts, with potentially huge payoffs in terms of our ability to understand and predict geophysical processes. Given the trend of globalization, we can also expect a large increase in the importance of such predictive ability in the international arena. It is inevitable that these ideas will need to cross the boundaries of the academic community to form the basis of a rational approach to "earth management."

This article was written by a subset of the workshop committee: Roberto Camassa (UNC-Chapel Hill and Los Alamos), Chris Jones (Brown), Louise Kellogg (UC Davis), Igor Mezic (Harvard and UC Santa Barbara), Annick Pouquet (Observatory of the Côte d'Azur), and Bruce Turkington (UMass Amherst). The other members of the committee were Hal Caswell (Woods Hole), Kelvin Droegemeier (Oklahoma), Darryl Holm (Los Alamos), Doug Nychka (NCAR), Jim Rosenberger (Penn State), Dan Rothman (MIT), Roger Samelson (Oregon State), and Don Turcotte (Cornell).

