One Thing at a Time

Most of us have heard the advice, “Change only one variable at a time to understand how that variable affects your system.”

Sometimes this advice is correct, but only in a very local sense. For example, if we want to estimate how sensitive a system is to a change in variable A, then we can hold all other variables constant and change variable A very slightly. The change in the system response divided by the change in variable A is an estimate of the sensitivity derivative at the original design point.

But the key word in the previous sentence is “point,” because derivatives are defined at a point. If we select a different starting design point and then repeat the above exercise, we would expect to get a different value for the sensitivity derivative.
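
To make this concrete, here is a minimal sketch in Python of the one-variable-at-a-time estimate, using a forward finite difference. The response function, design points, and perturbation size are all hypothetical, chosen only to show that the estimated derivative changes when the design point changes.

```python
# Minimal sketch: estimate the sensitivity of a system response to variable A
# by perturbing A slightly while holding B constant. The response function
# below is purely illustrative (it stands in for a real simulation).

def response(a, b):
    return a**2 * b + 3.0 * a

def sensitivity_wrt_a(a0, b0, delta=1e-6):
    """Forward-difference estimate of d(response)/dA at the point (a0, b0)."""
    return (response(a0 + delta, b0) - response(a0, b0)) / delta

# The same procedure at two different design points gives two different values:
print(sensitivity_wrt_a(1.0, 2.0))   # ~7.0
print(sensitivity_wrt_a(3.0, 2.0))   # ~15.0
```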

Let’s examine this idea further. If we hold variable B constant at value B1, changing variable A will have a certain influence on the system response. But if we hold variable B constant at value B2, the effect of variable A might be very different than before. If so, then the effect of variable A depends on the value of variable B. When this occurs, we say that there is an interaction between variables A and B. We can easily generalize this argument to many variables.
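
A similar toy sketch shows an interaction directly: below, the same change in variable A produces a very different change in the response depending on the value at which B is held. Again, the response function is hypothetical.

```python
# Minimal sketch of an interaction: the effect of changing A depends on the
# value at which B is held constant. The response function is hypothetical.

def response(a, b):
    return 2.0 * a * b + b   # the A*B term is what creates the interaction

def effect_of_a(b_fixed, a_low=0.0, a_high=1.0):
    """Change in response when A moves from a_low to a_high, with B held at b_fixed."""
    return response(a_high, b_fixed) - response(a_low, b_fixed)

print(effect_of_a(b_fixed=1.0))   # 2.0  -> effect of A when B is held at 1
print(effect_of_a(b_fixed=5.0))   # 10.0 -> same change in A, five times the effect
```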

All In

In poker, a player declares “all in” when he decides to bet all of his remaining chips on the cards in his hand. He then waits nervously while the remaining cards are dealt, knowing that he will soon either win big or lose all of his chips (“go bust”).

A similar gamble occurs when you apply some optimization approaches based on Design of Experiments (DOE) concepts. In this case, the actual objective function being minimized is evaluated at a predetermined set of design points. Then, a simple approximation of the objective function is developed by fitting an analytical function to these points. This approximate function is often called a response surface (or surrogate function). The optimization search is then performed on the response surface, because evaluations of this simpler function are usually much quicker than evaluations of the actual objective function.
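
As a rough illustration of this workflow, here is a minimal sketch, assuming a single design variable, a made-up “expensive” objective, and a plain quadratic polynomial as the response surface.

```python
import numpy as np

# Minimal sketch of a DOE-plus-response-surface approach. The "expensive"
# objective below is a stand-in for a real simulation or test.

def expensive_objective(x):
    return (x - 2.3)**2 + 0.5 * np.sin(5.0 * x)

# 1. Choose all of the design evaluations up front (the "all in" bet).
x_samples = np.linspace(0.0, 5.0, 7)
y_samples = expensive_objective(x_samples)

# 2. Fit a cheap analytical approximation -- here, a quadratic polynomial.
coeffs = np.polyfit(x_samples, y_samples, deg=2)

# 3. Search the response surface instead of the true objective.
x_grid = np.linspace(0.0, 5.0, 1001)
x_best = x_grid[np.argmin(np.polyval(coeffs, x_grid))]

print(f"surrogate minimum near x = {x_best:.3f}")
print(f"true objective there: {expensive_objective(x_best):.3f}")
```

If the true objective has features that the quadratic cannot capture, the minimum of the fitted surface can sit far from the true optimum, which is exactly the risk discussed next.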

However, by defining all of your design evaluations ahead of time (going “all in”), you are risking that the corresponding response surface may not accurately represent the true objective function. If the surface fit is not accurate enough, then searching the response surface may not really give you the optimal design. In fact, it is common for an inaccurate response surface to completely mislead the optimization search, resulting in a very poor solution. So, while an accurate response surface could yield an optimized solution at lower cost than some other optimization approaches, a poorly fit surface may yield no useful results at all (you’ll “go bust”).

The “Multi” in Multidisciplinary

Multi means “many” or “multiple.” Multidisciplinary design optimization (MDO) has become popular largely because it allows engineers to optimize over many different disciplines at the same time.

For example, you can use MDO to simultaneously optimize a vehicle body for structural, aerodynamic, thermal and acoustic behaviors. In addition, you can directly include non-performance measures, such as cost and manufacturability, in the optimization statement.

Black Box Optimization

Engineers and scientists like to know how things work. They seem to be born with an inner drive to understand the fundamental nature of things. So, naturally, they may have some reservations about using an algorithm if the way it functions is not clear.

When we can’t see the details about how something works, we often refer to it as a black box. Input goes in and output comes out, without any knowledge of its internal workings.

Black box sometimes has a negative connotation, because knowing how something works is usually a good thing. But if we evaluate the idea of a black box, we find that many common processes and tools – including the human brain – actually fall into this category.

For example, most users of the finite element method have some basic knowledge of its underlying mathematical theory. But many of the element types available in commercial software packages are based on advanced formulations that few users completely understand. These advanced formulations are necessary to overcome deficiencies in the element behavior, and users can apply them accurately without knowing all the mathematical formalities. There are many similar examples in computational mechanics.

The Limits of Intuition

The human brain is capable of making quick and effortless judgments about people, objects or ideas that it has not previously encountered. This sort of unreasoned insight is often called intuition. In his article, “The Powers and Perils of Intuition” (Scientific American MIND, June 2007, pp 24–31), David Myers describes two types of influence that shape our intuition.

The first is the development of mental shortcuts, or heuristics, which allow us to make snap judgments, often correctly. For example, our intuition tells us that blurry objects are farther away than clear ones. This is often a helpful assumption, except that on foggy mornings, a car in front of you may be much closer than intuition tells you it is.

The second influence on intuition is “learned associations” or life experiences that guide our actions. This explains why we may be suspicious of a stranger who resembles someone who once threatened us, even if we do not consciously make the association. Similarly, an experienced engineer can often quickly solve a problem that resembles one he worked on many years ago, even if the details of that project are mostly forgotten.

Race to the Bottom

I have a great idea for a new reality adventure television series.

The basic premise is simple. Contestants are blindfolded and driven to a starting location on the side of a mountain. When the race begins, each contestant must find a path to the base of the mountain as quickly as possible. The blindfolds prevent contestants from seeing the contours and obstacles in the landscape.

When contestants are working alone, the strategies they can use are limited. If the terrain is smooth, like a rolling pasture, then contestants might find a successful path by taking small steps in several different directions, and then choosing the direction that leads downward. When a contestant feels the path starting to flatten out or trend upward, she knows it’s time to stop and choose a new downward direction. Repeating this process many times should lead each contestant to the bottom of the nearest valley, which depends on the starting location. The first one to the bottom wins!
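
The strategy described above is essentially a simple local descent (a compass or pattern search, in optimization terms). Here is a minimal sketch, with a made-up terrain function standing in for the real mountainside.

```python
import math

# Minimal sketch of the blindfolded-descent strategy: probe a few directions
# with a small step, move in whichever direction goes downhill the most, and
# stop when every direction flattens out or trends upward. The terrain
# function is made up for illustration.

def altitude(x, y):
    return math.sin(x) * math.cos(y) + 0.05 * (x**2 + y**2)

def descend(x, y, step=0.1, max_steps=10_000):
    directions = [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]
    for _ in range(max_steps):
        here = altitude(x, y)
        # Take a trial step in each direction and keep the lowest one.
        trials = [(altitude(x + dx, y + dy), x + dx, y + dy)
                  for dx, dy in directions]
        lowest, bx, by = min(trials)
        if lowest >= here:          # nothing leads downward from here
            return x, y, here       # the bottom of *some* valley
        x, y = bx, by
    return x, y, altitude(x, y)

# Different starting locations can end up in different valleys.
print(descend(2.0, 1.0))
print(descend(-2.0, -1.0))
```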

Six Stages of Optimization Maturity

There are common stages that most companies pass through when improving their product design process. Each new stage promotes greater efficiency and predictability of the process, as well as higher performance and innovation in the products. It is possible to skip one or more stages to reap faster rewards, but the most important thing is to keep moving higher. Which stage represents the optimization maturity of your organization?

Stage 1. Physical prototyping: build and test
A trial-and-error approach to building and testing a myriad of hardware prototypes makes it too expensive to consider many design alternatives.

Optimization Doesn’t Work

Yes, this is an odd title for a blog post that is meant to promote optimization. But this opinion is expressed more often than you might think, especially among engineers who have tried to apply classical optimization technology to their challenging design problems.

So, what is causing smart people to form this opinion? I believe there are four types of experiences that cause people to lose faith in optimization:

1. The optimized solution was not as good as expected.

Those big improvements you hoped for were not realized. But did you include all the key design variables and allow them to vary broadly enough to really improve the design? Often we are taught to reduce the number and range of the design variables to allow for the limitations of classical optimization algorithms. Modern search strategies don’t have these limitations; they can efficiently explore broader and more complex design spaces, with a higher chance of finding superior solutions.

Churn, Baby, Churn

Conducting more design iterations can lead to higher-quality designs and increased innovation. So, when faced with a tight design schedule, the goal of many organizations is to iterate faster. But in most cases, performing faster manual design iterations doesn’t make the design process more productive.

Consider the consequences of maximizing iteration throughput for a typical manual design process. Let’s assume a simple, but familiar, scenario in which each iteration involves the following steps:

  1. Create a CAD model of the geometry,
  2. Build a math model to predict performance,
  3. Execute the math model, and
  4. Interpret its results.

No Soup for You!

Made popular by the Seinfeld television series, the Soup Man restaurant in New York City demands that customers know what kind of soup they want before arriving at the counter. Signs are prominently displayed, stating the rules in several languages:

FOR THE MOST EFFICIENT AND FASTEST SERVICE
THE LINE MUST BE KEPT MOVING
Pick the soup you want!
Have your money ready!
Move to the extreme left after ordering!

Failure to follow these rules may result in the harshest of penalties — no soup for you!

Like the Soup Man restaurant, the optimization process is not well suited to those who don’t have a clear set of goals in mind.