Most of us have heard the advice, “Change only one variable at a time to understand how that variable affects your system.”

Sometimes this advice is correct, but only in a very local sense. For example, if we want to estimate how sensitive a system is to a change in variable A, then we can hold all other variables constant and change variable A very slightly. The change in the system response divided by the change in variable A is an estimate of the sensitivity derivative at the original design point.
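This finite-difference estimate can be sketched in a few lines. The response function, the baseline design point, and the step size below are illustrative assumptions, not anything from the original discussion:

```python
# Estimate a sensitivity derivative by a forward finite difference:
# perturb one variable slightly, hold the rest constant, and divide
# the change in response by the change in the variable.

def sensitivity(f, x, i, h=1e-6):
    """Estimate df/dx_i at design point x by forward difference."""
    x_step = list(x)
    x_step[i] += h
    return (f(x_step) - f(x)) / h

# Hypothetical system response in two design variables A and B.
def response(x):
    a, b = x
    return a**2 + 3*a*b + b

baseline = [1.0, 2.0]
dA = sensitivity(response, baseline, 0)
print(dA)  # close to the analytic value 2*a + 3*b = 8 at this point
```

Note that the result is only an estimate of the derivative at `baseline`; repeating the call at a different design point gives a different number, which is exactly the point made below.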

But the key word in the previous sentence is “point,” because *derivatives are defined at a point*. If we selected a different starting design point and repeated the above exercise, we would expect to get a different value for the sensitivity derivative.

Let’s examine this idea further. If we hold variable B constant at value B1, changing variable A will have a certain influence on the system response. But if we hold variable B constant at value B2, the effect of variable A might be very different from before. If so, then the effect of variable A depends on the value of variable B. When this occurs, we say that there is an interaction between variables A and B. We can easily generalize this argument to many variables.
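A tiny numerical sketch makes the interaction concrete. The response function here is a made-up example chosen so that the sensitivity to A changes with the level of B:

```python
# Interaction between A and B: the derivative with respect to A
# depends on the value at which B is held constant.

def response(a, b):
    return a * b + a  # analytically, df/da = b + 1

def d_da(f, a, b, h=1e-6):
    """Forward-difference estimate of df/da with b held fixed."""
    return (f(a + h, b) - f(a, b)) / h

s1 = d_da(response, 1.0, 0.0)   # with B held at 0, A has a mild effect
s2 = d_da(response, 1.0, 10.0)  # with B held at 10, A matters far more
print(s1, s2)
```

The two estimates differ by an order of magnitude, so no single “sensitivity of A” describes this system; the effect of A depends on where B sits.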

This concept has several implications for an optimization study. First, consider the fairly common practice of calculating sensitivity derivatives at the baseline design point prior to performing optimization. Typically, the goal of doing this is to filter out those variables that seem to have little effect on the design, so only the most important variables are considered in the optimization study.

This may seem like a good idea, but in fact it is very risky. While a certain group of variables may have a dominant influence within a small neighborhood around the baseline design, other design variables may be needed to guide the search to a truly optimal design outside of that neighborhood. Ignoring these other variables will lead to suboptimal solutions, which can be very costly in terms of unattained design improvement. Unfortunately, there is no way to know this ahead of time.
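A toy function illustrates the risk. The response below is a made-up example in which variable y looks completely unimportant at the baseline (its derivative there is essentially zero), yet the true optimum cannot be reached without it:

```python
# Filtering pitfall: a variable whose baseline sensitivity is zero
# can still be essential to finding the true optimum.

def response(x, y):
    return x**2 + (y**2 - 1)**2

h = 1e-6
# Sensitivity to y at the baseline design (0, 0) is essentially zero,
# so a baseline screening study would filter y out.
dy = (response(0.0, h) - response(0.0, 0.0)) / h

# With y frozen at 0, optimizing x alone cannot beat the baseline:
best_x_only = response(0.0, 0.0)  # x = 0 is already optimal; f = 1.0

# Allowing y to vary reaches a genuinely better design:
best_full = response(0.0, 1.0)    # f = 0.0
print(dy, best_x_only, best_full)
```

The filtered study would report the baseline as optimal and silently leave the entire improvement on the table.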

Unless we really are seeking only incremental changes to a design, the practice of filtering design variables prior to optimization seems both ill-advised and wasteful. After all, we really don’t care about the sensitivity derivatives of the baseline design, and the significant effort required to calculate them is probably better spent on the optimization search.

Second, once we have arrived at an optimized solution, we do need to recalculate the sensitivity derivatives for that new design point. The results from a previous design may be completely unrelated to those of the new optimized design.

Finally, it is important to consider the interaction terms (mixed derivatives) in addition to the main effects (derivatives with respect to a single variable) when calculating sensitivity derivatives. It is possible for the main effect of a variable to be relatively small, while its interaction with another variable can be large. In this case, ignoring the interaction effect could lead to an incorrect conclusion about the robustness of a design.

So while it may not be prudent to include every imaginable design variable in every optimization study, we should also be careful not to filter out important design variables based on sensitivities at the baseline design. And, it is certainly unwise to limit the number of design variables so as to miss out on important interactions.

As for the advice to change “one thing at a time,” it was probably relevant when computer resources were more limited and optimization studies were very local in nature. Today, we can add this advice to the growing list of restrictions that have been made obsolete by the power and speed of modern computers.