Yes, this is an odd title for a blog post that is meant to promote optimization. But this opinion is expressed more often than you might think, especially among engineers who have tried to apply classical optimization technology to their challenging design problems.
So, what causes smart people to form this opinion? I believe there are four types of experiences that lead people to lose faith in optimization:
1. The optimized solution was not as good as expected.
Those big improvements you hoped for were not realized. But did you include all the key design variables and allow them to vary broadly enough to really improve the design? Often we are taught to reduce the number and range of the design variables to accommodate the limitations of classical optimization algorithms. Modern search strategies don’t have these limitations; they can efficiently explore broader and more complex design spaces, with a higher chance of finding superior solutions.
Sometimes the optimized solution is not even as good as the baseline design you started with! This may happen, for example, when the search is performed using a poorly fit response surface. Response surface methods can work well in certain applications, but a lot of expertise and experimentation is often needed to use them properly. Direct search methods do not rely on response surfaces, so this type of error is eliminated.
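As a concrete illustration of the difference, here is a minimal sketch in Python with SciPy (tools this post does not mention; they are used purely for illustration). The Nelder-Mead method is a classic direct search: it evaluates the true model at every step, so no response surface is ever fit, and therefore none can be fit poorly.

```python
from scipy.optimize import minimize

def true_model(x):
    # Stand-in for the real analysis. A direct search calls this
    # actual model at every step, never a fitted approximation of it.
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2

# Nelder-Mead is a classic direct search: it uses only the function
# values returned by the true model.
result = minimize(true_model, x0=[5.0, 5.0], method="Nelder-Mead")
print(result.x, result.fun)
```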
2. The optimization study could not be completed because some of the function evaluations failed.
A failed function evaluation can be caused by an inability to generate a new math model for a particular design, by non-convergence of the math model, or by any other error during the analysis. Such failures occur in a large percentage of real-world problems.
It is true that many DOE-based studies, and some types of search algorithms, cannot overcome even a single failed function evaluation. But robust optimization algorithms are not adversely affected even when numerous evaluations fail, as long as there are enough successful evaluations to conduct a meaningful search.
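A minimal sketch of this idea, again using SciPy for illustration and a hypothetical run_analysis function standing in for an external solver: failed evaluations are simply scored as very poor designs, so the search continues instead of aborting.

```python
from scipy.optimize import differential_evolution

FAILURE_PENALTY = 1.0e10  # a failed evaluation counts as a very bad design

def run_analysis(x):
    # Hypothetical stand-in for an external solver that sometimes fails
    # (meshing error, non-convergence, solver crash, ...).
    if x[0] * x[1] < -20.0:  # pretend this region crashes the solver
        raise RuntimeError("analysis failed")
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def robust_objective(x):
    # Score the failure and move on, rather than aborting the study.
    try:
        return run_analysis(x)
    except RuntimeError:
        return FAILURE_PENALTY

result = differential_evolution(robust_objective,
                                bounds=[(-10, 10), (-10, 10)],
                                seed=1)
print(result.x, result.fun)
```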
3. A different solution was obtained in every run, depending on the starting design.
Like a ball rolling down a hill, local optimization algorithms converge to the nearest local minimum (the lowest point in the valley). If two separate local searches start with designs that are in different valleys, then the final solutions from these two studies will be different. Unfortunately, it is often impossible to know how many valleys there are in a design space, and which valley a design lies in, prior to exploring the entire space.
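This behavior is easy to demonstrate. In the SciPy sketch below (illustrative only), two local searches launched from different valleys of the same multi-valley function converge to different solutions.

```python
import numpy as np
from scipy.optimize import minimize

def multimodal(x):
    # A one-dimensional landscape with several valleys (local minima).
    return np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2

# Two local searches started in different valleys roll downhill to
# different final designs.
for x0 in (-2.0, 2.0):
    res = minimize(multimodal, x0=[x0], method="Nelder-Mead")
    print(f"start at {x0:+.1f} -> solution {res.x[0]:+.3f}, value {res.fun:+.3f}")
```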
Another possible explanation is that the multiple optimization runs had not yet converged: the solutions differed because each run took a different path through the design space. An optimization algorithm that performs broad exploration, but does not fine-tune its solutions locally, may make no noticeable progress over many evaluations, so it can appear to have converged when it has not.
Using an algorithm that performs global exploration and local optimization at the same time dramatically increases both the solution efficiency and the chances of finding the global optimal solution in each run.
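One common embodiment of this idea is sketched below with SciPy's basinhopping (an illustrative choice, not necessarily the technology this post has in mind): the algorithm alternates random global steps with local minimization, so it keeps exploring other valleys while still fine-tuning each candidate.

```python
import numpy as np
from scipy.optimize import basinhopping

def multimodal(x):
    # Same multi-valley landscape as in the previous sketch.
    return np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2

# Random global steps plus local minimization at every iteration.
res = basinhopping(multimodal, x0=[2.0], niter=50, seed=1)
print(res.x, res.fun)
```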
4. The total time required to perform a sufficient number of evaluations was too large.
When each evaluation takes hours or days to complete, the total amount of CPU time for an optimization study can be quite large. In this case, the two most important features of an optimization algorithm are its efficiency and its ability to perform parallel evaluations.
An algorithm’s efficiency is measured by the total number of evaluations required to converge, or to locate a solution of a certain quality. By this measure, the efficiency of various algorithms on a given problem typically varies by a factor of 5 or 10, and sometimes by a factor as high as 100. Aside from its ability to consistently find a good solution, an algorithm’s efficiency is its most important characteristic.
When multiple CPUs and analysis software licenses are available, the speed of an optimization study can be increased by a factor of up to the number of CPUs available. For example, if ten machines and licenses are used, an optimization study can run up to ten times faster. Of course, the selected algorithm and the optimization software infrastructure must be capable of managing, and taking advantage of, multiple simultaneous evaluations. While some algorithms are still not capable of this, most modern optimization algorithms are designed to meet this requirement.
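As a final illustrative sketch (again with SciPy, not any particular commercial tool), differential_evolution can farm out each generation's function evaluations to all available CPU cores; the objective here is a trivial stand-in for an expensive simulation.

```python
from scipy.optimize import differential_evolution

def expensive_objective(x):
    # Stand-in for an analysis that takes hours on one CPU.
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

if __name__ == "__main__":
    # workers=-1 spreads each generation's evaluations across all
    # available CPU cores (SciPy >= 1.2); updating="deferred" lets a
    # full population be evaluated simultaneously.
    result = differential_evolution(expensive_objective,
                                    bounds=[(-5, 5), (-5, 5)],
                                    workers=-1,
                                    updating="deferred")
    print(result.x, result.fun)
```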
Conclusion: The search algorithm is the key
When well-defined optimization studies based on reasonable math models are not successful, the root cause is almost always the search algorithm. Selecting an algorithm that is inefficient, not robust, or inappropriate for your problem can lead to disappointing results. While this is no reason to broadly condemn optimization, the frustration caused by applying optimization technology unsuccessfully is certainly understandable.
Does optimization really work? Of course it does, provided that you use a suitable search algorithm on a well-defined problem. Industry-leading companies around the world demonstrate this every day on a myriad of challenging problems.
With today’s hybrid and adaptive search technology, unsuccessful optimization studies should soon be a distant memory.