Forgotten Reasons

Grandma Cooking

You may have heard the story about the woman who always sliced about an inch off the end of a large roast before placing it in the pan to be cooked. When asked why she did this, she did not know the reason. But she was sure it was important, because her mother had always done exactly the same thing.

Now curious, the woman called her mother to ask why it was important to cut off the end of a roast before cooking it. Her mother did not know the reason either, but she was confident it was important, because her mother had always done exactly the same thing.

A phone call to the woman’s grandmother finally revealed the true reason why two successive generations of cooks always cut off the end of a roast before cooking it. The grandmother explained, “A long time ago, the only roast available at the local store was too large to fit in my pan. So I had to cut a bit off the end in order to cook it. I haven’t had to do that in years!”

I’ll bet the entire family had a good laugh about this situation. Many years ago there was a good reason to cut off the end of the roast, but that reason no longer existed. Yet that step was passed down through the generations as though it were crucial to the success of the meal.

There are probably many situations in which a temporary limitation gives rise to a process that continues to be used long after the limitation is gone. Often the original facts become blurred and the philosophical justification grows stronger, so no one questions whether the process is still valid. A paradigm is created that is not easily broken.

For example, it is still common today to perform a sensitivity study prior to numerical optimization. This process has existed for several decades and has been passed down through generations of engineers. The goal of a sensitivity study is to identify the variables that have the most influence on the performance of the system being optimized. The standard reasoning is that these variables are the ones that should be used during the optimization study, while the others can be neglected. Let’s explore the soundness of this process.
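To make the practice concrete, here is a minimal sketch of a traditional one-at-a-time sensitivity study using forward differences. The `performance` function, the baseline values, and the step size are all hypothetical stand-ins for a real simulation model and design point.

```python
import numpy as np

def performance(x):
    # Hypothetical system response; a stand-in for an expensive simulation.
    return x[0]**2 + 0.1 * x[1] + 5.0 * np.sin(x[2])

baseline = np.array([1.0, 2.0, 0.5])
h = 1e-6  # forward-difference step size

# Perturb one variable at a time and estimate the derivative of the
# response with respect to each design variable at the baseline.
f0 = performance(baseline)
sensitivities = np.zeros(baseline.size)
for i in range(baseline.size):
    x = baseline.copy()
    x[i] += h
    sensitivities[i] = (performance(x) - f0) / h

# The traditional practice: rank variables by the magnitude of their
# baseline derivative, keep the top few, and drop the rest.
ranking = np.argsort(-np.abs(sensitivities))
print("variables, most to least influential:", ranking)
```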

Twenty years ago, computing resources were somewhat limited, so often only a small number of design evaluations could be performed in the time available for a design study. Simulation models were not very robust, so only minor changes to a model could be considered without significant manual intervention. Consistent with these restrictions, most design optimization algorithms relied on sensitivity derivatives and were capable of finding incremental improvements within a small neighborhood around the initial (baseline) design. This small neighborhood represented the design space to be searched.

For many problems, the sensitivity derivatives are not expected to change very much within this small neighborhood, so the gradients associated with the baseline design are a reasonable approximation of the gradients within the entire design space. In this scenario, selecting important variables based on sensitivity gradients is a clever, and mathematically consistent, thing to do.
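The classic workflow under those restrictions might look something like the following sketch, which confines a gradient-based optimizer to a small box around the baseline. The objective, baseline, and neighborhood radius are illustrative, and SciPy’s SLSQP merely stands in for whatever gradient-based code a given shop actually used.

```python
import numpy as np
from scipy.optimize import minimize

def performance(x):
    # Hypothetical objective to be minimized; a stand-in for a simulation.
    return (x[0] - 1.2)**2 + 0.1 * x[1] + 5.0 * np.sin(x[2])

baseline = np.array([1.0, 2.0, 0.5])
radius = 0.2  # search only a small neighborhood around the baseline

# Bound each variable to +/- radius of its baseline value, mirroring the
# old assumption that only incremental changes are feasible.
bounds = [(b - radius, b + radius) for b in baseline]

result = minimize(performance, baseline, method="SLSQP", bounds=bounds)
print("locally improved design:", result.x)
```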

Today, however, our computing technology and optimization software have improved dramatically, allowing us to significantly broaden the scope of design optimization studies. The designs we seek may not necessarily be near the baseline solution. In fact, we often hope to find designs that are innovative, with properties that can be quite different from those of our baseline design. To accomplish this, we use global optimization methods that explore larger design spaces than before, with more variables and larger variable ranges.
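As a rough illustration of the modern approach, the sketch below runs a global search over wide variable ranges, using SciPy’s differential evolution on the Rastrigin function, a standard multi-modal test problem that stands in here for a real design space.

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    # A standard multi-modal test function with many local minima;
    # a stand-in for a broad, highly nonlinear design space.
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

# Search a wide range on every variable instead of a small
# neighborhood around a baseline design.
bounds = [(-5.12, 5.12)] * 5
result = differential_evolution(rastrigin, bounds, seed=1)
print("best design found:", result.x, "objective:", result.fun)
```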

In a large design space that is highly nonlinear and perhaps even multi-modal (i.e., with many peaks and valleys), the derivatives at any single point, such as the baseline design, tell us little about the rest of the design space. In other words, both the values and the ranking of the sensitivity derivatives might change dramatically from one design point to another.
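A tiny worked example shows how abruptly the ranking can flip. The function below is contrived for illustration, chosen so that each variable dominates in a different region of the space.

```python
import numpy as np

def performance(x):
    # Contrived multi-modal function: each variable dominates in a
    # different region of the design space.
    return np.sin(3.0 * x[0]) + x[1]**2

def grad(x):
    # Analytic gradient of the function above.
    return np.array([3.0 * np.cos(3.0 * x[0]), 2.0 * x[1]])

# Near this baseline, x[0] looks far more influential than x[1]...
print(np.abs(grad(np.array([0.0, 0.1]))))   # -> [3.0, 0.2]

# ...but elsewhere in the space the ranking flips entirely.
print(np.abs(grad(np.array([0.52, 2.0]))))  # -> [~0.03, 4.0]
```

Near the first point, a screening study would keep x[0] and discard x[1]; near the second, it would do exactly the opposite.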

In this case, it is mathematically inconsistent to eliminate variables based on the sensitivity derivatives of the baseline design. That’s a nice way of saying “it’s wrong, so don’t do it.”

Some of the eliminated variables might be needed to get to the region where the truly optimal design lies, so if you screen out variables based on sensitivity derivatives, you’ll often get sub-optimal designs. You have essentially forbidden the search algorithm from looking in some of the most fruitful regions of the design space.
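The sketch below illustrates that failure mode with a contrived objective: the second variable looks negligible at the baseline, so a sensitivity screen would freeze it there, yet the best designs require moving it far from its baseline value.

```python
import numpy as np
from scipy.optimize import differential_evolution

def performance(x):
    # Contrived objective (minimize). Near the baseline x = [1, 0],
    # the derivative with respect to x[1] is nearly zero, so a
    # baseline sensitivity study would screen x[1] out -- yet the
    # best designs live at x[1] near 3.
    return (x[0] - 1.0)**2 + 1.0 - np.exp(-(x[1] - 3.0)**2)

# Full search: both variables free.
full = differential_evolution(performance, bounds=[(-5, 5), (-5, 5)], seed=0)

# Screened search: x[1] frozen at its baseline value of 0.
screened = differential_evolution(lambda x: performance([x[0], 0.0]),
                                  bounds=[(-5, 5)], seed=0)

print("full search objective:    ", full.fun)      # close to 0
print("screened search objective:", screened.fun)  # stuck near 1
```

On this toy problem, the full search drives the objective to essentially zero, while the screened search can never do better than about 1.0, no matter how long it runs.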

In many cases, calculating sensitivity derivatives up front is a whole lot of wasted work that leads to inferior designs. Worst of all, it is often done for reasons that no longer exist.
