CSE 2009: Preprocessing for Industrial CFD: More Important Than You Might Think

June 15, 2009


Figure 1. Navier–Stokes analysis around a complex geometry, circa 2009. Clock time (left) and human labor (right), in hours, for preprocessing (geometry preparation, blue, and grid generation, red); CFD solution (green); and postprocessing (yellow).

Todd Michal

Rapid increases in computing speeds over the past three decades have significantly increased the utility of computational analysis to the aerospace industry. Reduced need for costly flight and wind tunnel testing, shorter design cycles, and improved product performance are among the benefits routinely derived from computational analysis. Paralleling the rapid increases in computing power have been corresponding increases in the complexity of the problems being solved. Analysis of very complex geometry models has become commonplace, and the work involved in preprocessing these complex models, in particular, has grown exponentially. The additional difficulty associated with the preprocessing of complex models became a concern at Boeing in the late 1980s, when, for the first time, computational fluid dynamics methods were used to model complete aircraft. With the preprocessing tools then available, preparation of a computational grid for a single model took five or more weeks, thus severely limiting the utility of the eventual CFD analysis.

Today, in terms of clock time, preprocessing is the single most expensive portion of an analysis, with delays having a significant impact on overall design cycle times and, ultimately, on product cost and program risk. Even so, the preprocessing phase is often overlooked in considerations of analysis throughput. In the past several decades, solution computation times have been reduced dramatically by the advent of highly scalable parallel computing platforms and solution algorithms; over the same period, by contrast, preprocessing methods have continued to evolve within a serial computing paradigm.

A computational model is usually prepared for analysis in two steps: preparation of a reduced-geometry model appropriate for the analysis, and creation of the computational mesh. The starting point for most analyses is a geometry model developed in a computer-aided design program. This model typically contains information necessary to manufacture the product and, thus, a high level of detail (about fasteners, internal bulkheads, brackets, and other components) that is superfluous to the computational analysis. Preparation of a reduced-geometry model involves removing unwanted geometric detail from the model, repairing geometric flaws, such as gaps, overlaps, or poorly formed surfaces, and filling in the resulting holes to form a closed, watertight domain. The time needed to create a reduced model, which can vary from a few hours to a few weeks, is highly dependent on the quality of the starting geometry.
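To make the defeaturing step concrete, the simplest rule a preparation tool applies can be sketched as a size filter over CAD features. The sketch below is hypothetical: the Feature record and the defeature function stand in for the far richer boundary-representation data a real CAD kernel exposes, and a real tool must also repair flaws and close the model, which no size filter can do.

    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str             # e.g., "fastener", "bracket", "wing_panel"
        bounding_size: float  # characteristic dimension, in meters

    def defeature(features, min_size):
        # Drop features too small for the analysis to resolve; repairing
        # gaps and overlaps and closing the resulting holes into a
        # watertight domain is the harder part, omitted here.
        return [f for f in features if f.bounding_size >= min_size]

    model = [Feature("fastener", 0.005),
             Feature("bracket", 0.03),
             Feature("wing_panel", 4.0)]
    reduced = defeature(model, min_size=0.05)  # keeps only the wing panel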

The second preprocessing step, creation of the computational mesh, is commonly performed interactively on a graphical workstation. This labor-intensive step typically involves specification of mesh properties (such as grid element size and stretching rates). An automated method is used to create a mesh on the geometry surface and to fill the volume of the solution domain with mesh elements.
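A stretching rate is typically a geometric growth factor applied to successive layers of elements away from a surface. A minimal sketch of that rule follows, with illustrative values for the first-cell height and growth rate:

    def layer_heights(first_height, growth_rate, n_layers):
        # Geometric stretching: the i-th layer has height h0 * r**i,
        # so the total thickness is h0 * (r**n - 1) / (r - 1).
        return [first_height * growth_rate**i for i in range(n_layers)]

    # A 1e-5 m first cell growing 20% per layer for 25 layers.
    heights = layer_heights(1e-5, 1.2, 25)
    thickness = sum(heights)  # about 4.7e-3 m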

In the past two decades, new preprocessing technologies have substantially reduced the human effort required to preprocess complex geometry models. Automated methods have decreased the time required to prepare a reduced-geometry model---to a few hours for most problems or, in the worst cases, to a few days. Once a reduced-geometry model has been obtained, several new technologies are available for quickly generating a computational grid. With generalized grid topologies (unstructured meshes) and tighter integration with topological information defining the connectivity between neighboring geometry surfaces, much of the grid-generation process has been automated. In addition, specification of grid-resolution requirements, previously a very labor-intensive process, has been simplified by advanced geometry feature-detection algorithms. As a result, generation of a grid on a complex geometry model requires an investment of only a few hours of labor.
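One common feature-detection rule ties surface element size to local curvature: each element spans at most a fixed angle of the local arc of the surface, so the element size is the chord length h = 2R sin(theta/2). A minimal sketch, with the 15-degree default chosen purely for illustration:

    import math

    def size_from_curvature(radius, max_angle_deg=15.0):
        # Chord of an arc of radius R subtending max_angle_deg:
        # h = 2 * R * sin(theta / 2). Tight curvature -> small elements.
        theta = math.radians(max_angle_deg)
        return 2.0 * radius * math.sin(theta / 2.0)

    size_from_curvature(0.01)  # 1 cm leading-edge radius -> ~2.6 mm elements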

Unfortunately, the reduction in hours of human labor has come at a cost: increased computer run times. This is in part because of the complexity of the new automated grid-generation algorithms and in part because of the sheer size of the grids necessary to resolve the high level of geometric fidelity. Tens or even hundreds of millions of elements are not uncommon in these grids, which require upward of 12 computational hours to generate. The breakdown in hours of human labor and wall clock time for a CFD analysis around a complex fighter aircraft is shown in Figure 1.

Preprocessing of this CFD model required only a dozen hours of labor, a large improvement over the multiple months that would have been required in the late 1980s. As to computer run time, the overall preprocessing clock time of 24 hours reflects an additional 12 CPU hours required to generate the grid. Computation of the flow solution, by contrast, involved more than 5000 CPU hours and was completed in only 8 hours of clock time. The high throughput of the flow solution is attributed to the evolution of highly scalable parallel computing algorithms and computer hardware. Preprocessing, which does not lend itself to parallel computing hardware, was performed serially.
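The arithmetic behind this gap is an Amdahl's-law-style bound: if the solver scales ideally but preprocessing runs serially, adding processors stops helping once the serial phase dominates the wall time. A sketch using the numbers above (5,000 CPU hours of solution, 24 serial hours of preprocessing; the ideal-scaling assumption is the optimistic case):

    def wall_time(solve_cpu_hours, cores, preprocess_hours=24.0):
        # Ideal parallel solve plus a fixed serial preprocessing phase.
        return preprocess_hours + solve_cpu_hours / cores

    wall_time(5000.0, 625)   # 24 + 8 = 32 hours, as in Figure 1
    wall_time(5000.0, 5000)  # 24 + 1 = 25 hours; more cores barely help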

Although the ability to preprocess a computational grid in 24 clock hours is clearly a significant improvement, the large gap in throughput (and particularly in scalability) between the preprocessing and flow solution phases is problematic. For applications in which large numbers of solutions are generated on a single computational grid, discrepancies between the preprocessing and solution times of this order can be tolerated. In the context of a design optimization study or a simulation with dynamically varying geometry, with hundreds or thousands of new geometries and associated grids to be developed within the course of a single design cycle or solution, such discrepancies become unacceptable. Techniques like grid adaptation and morphing of grids to a modified geometry reduce the need for a new mesh with each geometry modification, but large numbers of meshes must still be developed over the course of a design. Closing the gap between preprocessing and solution efficiency remains a priority.
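Grid morphing propagates known boundary displacements into the volume mesh so an existing grid can follow a modified geometry. A minimal sketch using inverse-distance weighting; production morphers typically rely on radial basis functions or elasticity analogies, and this NumPy formulation is purely illustrative:

    import numpy as np

    def morph(volume_pts, boundary_pts, boundary_disp, power=2.0):
        # Displace each interior point by the inverse-distance-weighted
        # average of the known boundary displacements.
        moved = volume_pts.copy()
        for i, p in enumerate(volume_pts):
            d = np.linalg.norm(boundary_pts - p, axis=1)
            w = 1.0 / np.maximum(d, 1e-12) ** power
            moved[i] += (w[:, None] * boundary_disp).sum(axis=0) / w.sum()
        return moved

    vol = np.array([[0.5, 0.0, 0.0]])                   # one interior point
    bnd = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # two boundary points
    dsp = np.array([[0.0, 0.1, 0.0], [0.0, 0.0, 0.0]])  # one boundary moves
    morph(vol, bnd, dsp)  # midpoint moves up by the average, 0.05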

A clear need is emerging for scalable preprocessing methods that can capitalize on the massively parallel computing architectures being exploited by today's solvers. Unfortunately, switching to a scalable preprocessing paradigm will require a fundamental shift from the serial algorithms and approaches in wide use today, and as a result will require significant effort. A few researchers have already begun to explore new concepts for parallel preprocessing algorithms. Successful development of these next-generation methods will be a key element in realizing the full potential of computational analysis.

Todd Michal is a technical fellow with the Boeing Research and Technology Organization in St. Louis, Missouri.

