In poker, a player declares “all in” when he decides to bet all of his remaining chips on the cards in his hand. He then waits nervously while the remaining cards are dealt, knowing that he will soon either win big or lose all of his chips (“go bust”).
A similar gamble occurs when you apply certain optimization approaches based on Design of Experiments (DOE) concepts. In this case, the actual objective function being minimized is evaluated at a predetermined set of design points. Then a simple approximation of the objective function is constructed by fitting an analytical function to these points. This approximate function is often called a response surface (or a surrogate model). The optimization search is then performed on the response surface, because evaluating this simpler function is usually much quicker than evaluating the actual objective function.
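To make the workflow concrete, here is a minimal sketch in Python of the "all in" approach under some simple assumptions: a toy two-variable problem where f() stands in for an expensive objective (for example, a simulation run), a fixed 5-by-5 grid plays the role of the predetermined design points, and the response surface is a full quadratic fitted by least squares.

```python
# Hedged sketch of a DOE / response-surface approach on a hypothetical problem.
import numpy as np
from scipy.optimize import minimize


def f(x):
    """Placeholder for the expensive objective (e.g., a simulation run)."""
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.3 * np.sin(5 * x[0])


# 1. Predetermined design points ("all in"): a fixed grid over the design space.
grid = np.linspace(-2.0, 2.0, 5)
X = np.array([[a, b] for a in grid for b in grid])   # 25 design points
y = np.array([f(x) for x in X])                      # all expensive evaluations, spent up front

# 2. Fit a full quadratic response surface by least squares.
def basis(x):
    return np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]])

A = np.array([basis(x) for x in X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x):
    """Cheap approximation of f(), valid only as far as the fit is accurate."""
    return basis(x) @ coeffs

# 3. Search the cheap surrogate instead of the true objective.
result = minimize(surrogate, x0=np.zeros(2))
print("surrogate optimum:", result.x, "predicted value:", result.fun)
```

Note that every call to f() happens before the fit: once the grid is chosen, the quality of the final answer rests entirely on how well the quadratic happens to match the true function.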
However, by defining all of your design evaluations ahead of time (going “all in”), you run the risk that the resulting response surface will not accurately represent the true objective function. If the surface fit is not accurate enough, then searching the response surface may not actually lead you to the optimal design. In fact, it is common for an inaccurate response surface to completely mislead the optimization search, resulting in a very poor solution. So, while an accurate response surface can yield an optimized solution at lower cost than some other optimization approaches, a poorly fit surface may yield no useful results at all (you’ll “go bust”).
To determine the actual quality of your solution, you must evaluate the objective function at the point suggested by the response surface search. And, to find out if that point is really optimal, you may have to search more thoroughly by evaluating some additional neighboring designs.
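A short sketch of that validation step follows, under the same toy assumptions as above; x_star and predicted are hypothetical stand-ins for the point and value returned by the response-surface search.

```python
# Hedged sketch of validating a surrogate-suggested design point.
import numpy as np

def f(x):
    """Same placeholder for the expensive objective as in the earlier sketch."""
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.3 * np.sin(5 * x[0])

x_star = np.array([0.94, -0.5])   # hypothetical point suggested by the surrogate search
predicted = 0.02                  # hypothetical value the surrogate predicted there

actual = f(x_star)                # one real evaluation to check the prediction
print("predicted:", predicted, "actual:", actual)

# Probe a few neighboring designs to see whether the point is even locally optimal.
rng = np.random.default_rng(0)
for xn in x_star + 0.1 * rng.standard_normal((5, 2)):
    if f(xn) < actual:
        print("better neighbor found at", xn)
```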
Here is the source of the risk. For a given objective function, you cannot predict whether fitting an assumed approximation function to a particular set of design points will produce an accurate response surface. This is because there is no reliable way to know ahead of time the shape and characteristics of the true (implicit) objective function.
Moreover, because you don’t know where the optimal solution lies in your design space, you must spread the predetermined design evaluation points fairly evenly throughout the space. This maximizes the chances of representing each part of the space equally well in the response surface. At the same time, it practically guarantees that many design evaluations are wasted in regions of the design space that you don’t care about, where the solutions are suboptimal. The fraction of wasted evaluations is much greater in complex design spaces, for which higher-order response surface functions must be used and more design evaluations are needed.
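One common way to spread points evenly is a space-filling design such as a Latin hypercube sample; a brief sketch using SciPy's qmc module is below, where the bounds and sample count are arbitrary assumptions for illustration.

```python
# Hedged sketch of a space-filling set of design points via Latin hypercube sampling.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
unit_sample = sampler.random(n=25)                                    # points in [0, 1]^2
X = qmc.scale(unit_sample, l_bounds=[-2.0, -2.0], u_bounds=[2.0, 2.0])
# Every region of the space gets roughly equal coverage -- including the
# regions that will turn out to contain only suboptimal designs.
```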
Within a sub-region of the design space, it takes many more design evaluations to fit an accurate surface than it does to figure out if that region should be explored further or ignored altogether.
This is one reason iterative optimization methods are preferred for most problems. These methods “learn” as they go, focusing future design point evaluations in regions of the design space that have a higher chance of yielding better designs. Generally, this is a smarter and more efficient use of overall resources, especially when the design space is complex.
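To show the contrast, here is a deliberately simple, hypothetical iterative loop: each round it spends its evaluations near the best design found so far and then shrinks the search region, rather than committing every evaluation up front. It is only an illustration of the idea, not any particular commercial or published algorithm; f() is again a stand-in for an expensive objective.

```python
# Hedged sketch of an iterative scheme that "learns" as it goes.
import numpy as np

def f(x):
    """Placeholder for the expensive objective."""
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.3 * np.sin(5 * x[0])

rng = np.random.default_rng(1)
center, radius = np.zeros(2), 2.0            # start by covering the whole space
best_x, best_y = center, f(center)

for _ in range(6):
    # Spend this round's evaluations near the current best design only.
    candidates = center + radius * rng.uniform(-1.0, 1.0, size=(8, 2))
    values = [f(x) for x in candidates]
    i = int(np.argmin(values))
    if values[i] < best_y:
        best_x, best_y = candidates[i], values[i]
    center, radius = best_x, 0.5 * radius    # concentrate future effort near good designs

print("iterative best:", best_x, best_y)
```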
The “all in” scheme described above is often recommended when there are time and resource constraints on a design process. Yet these are precisely the conditions under which this scheme carries the highest risk.
In most cases, an efficient iterative approach can significantly increase your chances of finding optimized solutions within a given time constraint, while minimizing your risk of “going bust.”