INTRODUCTION TO THE OPTIMIZATION PROCEDURE

Why is this mechanism different from all other mechanisms?

The systematic optimization procedure used to produce the GRI-Mech mechanisms involves the following steps:

1. Assemble a reaction model consisting of a complete set of elementary chemical reactions.

2. Assign values to the rate constants of these reactions from the literature or by judicious estimation, treating temperature and pressure dependences in a proper and consistent manner. Also evaluate error limits and the thermodynamic data used to compute reverse rate constants from equilibrium.
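For concreteness, a rate constant's temperature dependence is commonly expressed in the modified Arrhenius form k(T) = A T^b exp(-E/RT); pressure-dependent reactions require an additional falloff treatment not shown here. A minimal Python sketch, with placeholder parameter values that are illustrative only, not an evaluated GRI-Mech entry:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def rate_constant(A, b, E, T):
    """Modified Arrhenius expression k(T) = A * T**b * exp(-E / (R*T))."""
    return A * T**b * math.exp(-E / (R * T))

# Placeholder parameters for illustration only:
# A in cm3/(mol*s), b dimensionless, E in kcal/mol
k_1000 = rate_constant(A=1.0e13, b=0.5, E=10.0, T=1000.0)
```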

3. Search the literature for reliable experiments that relate to natural gas combustion and NO formation and reburn: shock tube experiments, flame measurements, flame speeds, ignition studies, flow reactor studies, etc., that depend on some or all of the rate and transport parameters in the model. These experiments should include the key combustion properties the mechanism is intended to predict. Evaluation is again required, although the optimization process itself will typically reveal inconsistencies with the other data. A final selection criterion is that the experimental results be readily modelable: uncertainties in other model parameters, or in the model itself, should not approach those of the kinetics we anticipate optimizing.

4. Use a computer model to solve the reaction mechanism kinetics and any necessary transport equations, computing values for the observables of these "target" experiments. Also apply sensitivity analysis to determine how the model input rate constants affect the result; the sensitivity coefficient is S = (dX/X)/(dk/k) = d ln X / d ln k. Compare computed results with data.
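A minimal sketch of estimating this logarithmic sensitivity by central finite differences; here `model` is a hypothetical stand-in for the full kinetics/transport simulation, mapping a rate constant to the computed target observable:

```python
import math

def log_sensitivity(model, k, delta=0.05):
    """Finite-difference estimate of S = d ln X / d ln k.

    `model` is a stand-in for the full kinetics/transport simulation:
    it maps a rate constant k to the computed target observable X.
    """
    x_up = model(k * (1.0 + delta))
    x_dn = model(k * (1.0 - delta))
    return ((math.log(x_up) - math.log(x_dn))
            / (math.log(1.0 + delta) - math.log(1.0 - delta)))
```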

Since the first solution often yields computed values that do not match observation, the usual next step is to adjust input parameters (within error limits), individually or in combination, to bring the computed value into agreement with the observation. The problem with this approach is that the change in input parameters used to match one observation takes no account of other data sensitive to the same parameters. Those other data will then not be matched by the model, or may simply be ignored. One must either iterate or sacrifice reliable predictability for the mechanism. A process that is systematic, simultaneous, and inclusive is required instead.

5. Choose experimental targets sensitive to a representative cross-section of the rate parameters, under a representative set of conditions. Many parameters will apply to more than one target. Also select, according to sensitivities and uncertainties, those parameters making the largest impacts on a given target. These are the potential optimization parameters.
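One simple screening heuristic for this selection, illustrative only and not the exact GRI-Mech criterion, ranks parameters by the product of sensitivity magnitude and uncertainty span:

```python
import math

def rank_candidates(sensitivities, uncertainty_factors):
    """Rank rate parameters by the impact measure |S| * ln(f): S is the
    logarithmic sensitivity of a target to a parameter and f is that
    parameter's multiplicative uncertainty factor.
    (An illustrative screening heuristic, not the GRI-Mech formula.)
    """
    impact = {name: abs(S) * math.log(uncertainty_factors[name])
              for name, S in sensitivities.items()}
    return sorted(impact, key=impact.get, reverse=True)
```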

6. Map the model response by repeating computations of the target observables for a minimum subset of combinations of these variables, within their appropriate error limits, according to a central composite factorial design. We typically try to include all "active" (i.e., significant) variables.
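A minimal sketch of such a design for coded variables on [-1, +1]; a full 2^n factorial is shown for simplicity, whereas optimizations with many variables use fractional factorials to keep the run count manageable:

```python
from itertools import product

def central_composite_design(n_vars, alpha=1.0):
    """Generate a central composite design in n_vars coded variables:
    2**n_vars factorial corners, 2*n_vars axial points at +/- alpha,
    and a center point. alpha = 1.0 gives a face-centered design."""
    corners = list(product((-1.0, 1.0), repeat=n_vars))
    axial = []
    for i in range(n_vars):
        for a in (-alpha, alpha):
            pt = [0.0] * n_vars
            pt[i] = a
            axial.append(tuple(pt))
    center = [tuple([0.0] * n_vars)]
    return corners + axial + center
```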

7. Create, using the results of these factorial-design-directed calculations, the polynomial functions (the response surface) that mimic the results of the computer simulation for each target. This solution mapping technique in essence creates a representation of the predicted target values for the set of possible mechanisms within the stated error estimates. The rate constants are normalized on a logarithmic scale, and a second-order polynomial expression is typically created by regression analysis of the appropriate factorial design calculation set. Details of the factorial design and response surface fitting are given in Frenklach et al. (1992).
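A minimal regression sketch, assuming the common solution-mapping normalization x_i = ln(k_i/k_i0)/ln(f_i), with f_i the uncertainty factor, so each variable spans roughly [-1, +1]:

```python
import numpy as np

def fit_response_surface(X, y):
    """Least-squares fit of a full second-order polynomial response surface.

    X : (n_runs, n_vars) array of normalized rate-constant variables,
        e.g. x_i = ln(k_i / k_i0) / ln(f_i).
    y : (n_runs,) array of the computed target observable for each run.
    Returns the coefficient vector for the basis [1, x_i, x_i*x_j (i<=j)].
    """
    n_runs, n_vars = X.shape
    cols = [np.ones(n_runs)]
    cols += [X[:, i] for i in range(n_vars)]
    cols += [X[:, i] * X[:, j]
             for i in range(n_vars) for j in range(i, n_vars)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs
```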

8. Use the response surfaces to calculate target values that are then compared to measured values in an error function known as the objective function (F), which is then minimized. Here F = Σ_i w_i [1 − calc_i/exper_i]^2, where the sum runs over the targets. The values of the variables thus obtained are statistically the best values resulting from the universe of data considered as targets.
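A minimal sketch of this minimization, with hypothetical stand-in response surfaces; in practice the surfaces would be the fitted polynomials of step 7, and the variables stay within their normalized bounds:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x, surfaces, measured, weights):
    """F(x) = sum over targets of w * (1 - calc(x)/exper)**2, with each
    calc supplied by a cheap response-surface evaluation."""
    return sum(w * (1.0 - surf(x) / exper) ** 2
               for surf, exper, w in zip(surfaces, measured, weights))

# Toy demo with two hypothetical linear "response surfaces" in one variable:
surfaces = [lambda x: 10.0 + 2.0 * x[0], lambda x: 5.0 - 1.0 * x[0]]
measured = [11.0, 4.8]
weights = [1.0, 2.0]
result = minimize(objective, x0=np.zeros(1),
                  args=(surfaces, measured, weights),
                  bounds=[(-1.0, 1.0)])
```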

9. The result is a model faithful to both the fundamental kinetics and system data, one that can be reliably employed for modeling purposes.

Not all data or targets are created equal. Some experiments are less trustworthy than others, which can be taken into account by weighting the targets differently (the weights w_i) in the objective function. The solution mapping paradigm allows data of disparate types to be used to refine rate parameters, thermochemistry, or other parameters. It also identifies gaps or inconsistencies in the data set, as well as parameters that contribute large collective uncertainty and deserve further investigation.

A heuristic process and an exercise of judgment are also involved in arriving at the final optimized mechanism in step 8 above. Several factors are at work. Foremost perhaps is the phenomenon of a shallow response surface, whereby the modeled target value changes only slowly with the rate constant(s) in the neighborhood of the suggested optimized solution. This effect occurs when the target sensitivities to a reaction are relatively low or parallel those of another reaction. The optimization process will attempt to squeeze out the last 0.001% decrease in F, but it makes no qualitative sense to require the quantity and range of rate constant alterations needed to do so. Hence a significant part of the mechanism optimization process consists of locating these shallow or insignificant variables and freezing their values at the baseline evaluated kinetic expressions. This involves some subjective judgment about how large an error to permit in the predicted target values. Sometimes variables optimize near their starting values, and thus need not be altered. We have typically removed the majority of the variables from the final optimization: version 1.2 altered 4 of the 30 rate constants considered, but of the 15 nitrogen kinetics reactions added to version 2.11, 10 additional ones were optimized; version 3.0 narrowed 98 variables down to 28.

Additional factors to consider include the lack of a unique optimized mechanism, and the realization that errors in the data require some subjective accommodation in the procedure given above. It is therefore also necessary to explore other possible combinations of rate parameters as optimization variables for groups of targets. A good example of multiple solution possibilities with similar quantitative agreement comes from our experience with the rate constants for OH + CO and H + O2 -> O + OH. Almost all targets are sensitive to one or both of these very important combustion reactions. Yet although the version 3.0 optimization calculations suggest lowering these rates when they are included, the quality of the target fits is not appreciably diminished by freezing them at their initial values (F increases by only 2%).

Reference: M. Frenklach, H. Wang, and M.J. Rabinowitz, Prog. Energy Combust. Sci. 18, 47 (1992).