Second, and of greater interest here, the process of optimization must be efficient, economical, and thrifty. That is, finding an optimized solution to a problem should take as little of your time and resources as possible. Unfortunately, in many real-world applications, time is the largest barrier to realizing the true value of automated optimization.
In theory, you should be able to find an optimized solution whenever you have a good system analysis model and an appropriate search algorithm. But if the model requires hours or even days of CPU time for each design evaluation, and the algorithm requires a large number of evaluations, then the total time required to reach that optimized solution may turn out to be completely impractical.
Let’s consider the cost factors involved. The total time needed to find an optimized solution can be expressed as:

Total Solution Time = NSOL × [ (NEVAL × TEVAL) / NPARALLEL ]

where NSOL is the number of optimization solutions performed, NEVAL is the number of design evaluations required per solution, TEVAL is the time required for each evaluation, NPARALLEL is the number of evaluations that can be run simultaneously, and the expression inside the brackets represents the total search time per iteration of the optimization solution.
Based on this formula, there are only four ways you can reduce the solution time for an optimization study. You can
- Perform multiple evaluations at the same time (in parallel),
- Perform shorter evaluations,
- Perform fewer evaluations, or
- Perform fewer solution iterations.
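To see how these levers multiply, here is a small sketch of the formula above in Python. Every number in it (evaluation time, evaluation counts, parallel CPUs, solution attempts) is hypothetical, chosen only to illustrate the scale of the problem.

```python
def total_solution_time(n_solutions, n_evals, hours_per_eval, n_parallel=1):
    """Hours to complete an optimization study, per the formula above.

    n_solutions    -- NSOL: number of optimization solutions performed
    n_evals        -- NEVAL: design evaluations required per solution
    hours_per_eval -- TEVAL: hours of CPU time per design evaluation
    n_parallel     -- NPARALLEL: evaluations that can run at the same time
    """
    return n_solutions * (n_evals * hours_per_eval) / n_parallel

# Hypothetical baseline: 3 solution attempts, 500 evaluations each,
# 4 hours per evaluation, everything run serially.
print(total_solution_time(3, 500, 4))                 # 6000 hours (~250 days)

# Run 10 evaluations in parallel (needs extra CPUs and licenses).
print(total_solution_time(3, 500, 4, n_parallel=10))  # 600 hours (~25 days)

# Use a search algorithm that needs 5x fewer evaluations.
print(total_solution_time(3, 100, 4, n_parallel=10))  # 120 hours (~5 days)

# Get it right in a single, well-chosen solution attempt.
print(total_solution_time(1, 100, 4, n_parallel=10))  # 40 hours
```

Even with these made-up numbers, the gap between the first case and the last is the difference between a study that misses its deadline and one that finishes within a work week.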
It is easy to parallelize the search or the evaluations if you have extra CPUs and analysis software licenses. We recommend this when possible. But adding extra resources is not considered parsimonious (which is the topic of this post!), so we’ll discuss parallel evaluations another time.
Next, you could use simplified models to shorten the time per evaluation, but this can be risky. An inaccurate model can mislead the optimization search and give you disappointing results. So, we will assume for now that it is not possible to reduce model complexity or evaluation time.
The two remaining parameters, the number of evaluations and the number of solution iterations, will have the greatest impact on your solution efficiency. Yet you have the least amount of control over these, because they are determined mainly by the available optimization technology.
The number of evaluations needed to find an optimal design, or a design of a specified performance level, is entirely determined by the efficiency of the search algorithm. To solve a given problem, the number of evaluations required by different algorithms can vary by a factor of 2, 10, or even 100. So the time to solution can differ by days, weeks, or months of CPU time. This can greatly impact your ability to meet deadlines.
Clearly, it is important to choose the most efficient algorithm for a problem, but making that choice is itself one of the biggest inefficiencies of the entire process. It is very difficult, and sometimes impossible, to know in advance which type of algorithm is best for a particular problem. Moreover, most algorithms have a set of tuning parameters that you must define to control the algorithm’s performance. Sure, you could use the default settings, but these are seldom optimal.
So, it often becomes necessary to solve the problem multiple times using a variety of algorithms, tuning parameters, and starting conditions. The number of solution iterations might range from one to five or more, depending on the complexity of the problem. For problems with expensive models, this approach is intractable.
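To illustrate what those repeated solution iterations cost, here is a rough sketch using SciPy’s general-purpose optimizers as stand-ins for whatever algorithms you might actually try, and a toy Rosenbrock objective as a stand-in for an expensive simulation model. The point is not the particular algorithms; it is how quickly the evaluation count multiplies across algorithms and starting points.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for an expensive simulation. In a real study, each call
# to this function might take hours of CPU time.
def objective(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

algorithms = ["Nelder-Mead", "Powell", "COBYLA"]          # candidate search algorithms
starts = [np.array([-1.5, 2.0]), np.array([0.0, 0.0])]    # candidate starting points

total_evals = 0
for method in algorithms:
    for x0 in starts:
        result = minimize(objective, x0, method=method)
        total_evals += result.nfev   # evaluations consumed by this attempt
        print(f"{method:12s} from {x0}: best f = {result.fun:.3g}, "
              f"{result.nfev} evaluations")

print(f"Total design evaluations across all attempts: {total_evals}")
# If each evaluation were a multi-hour simulation, the total above would
# translate into weeks or months of CPU time.
```

With a cheap test function this loop finishes in seconds, which is exactly the point: the trial-and-error strategy only looks harmless because the evaluations cost nothing here.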
The irony is that optimization is meant to remove inefficiencies from engineering designs and manufacturing processes, and yet many of the existing optimization tools promote a process that is too inefficient to be used for some of the most important applications.
In order to realize the true potential of automated optimization, engineers and scientists must embrace a new generation of optimization technology that is inherently more efficient…and perhaps even parsimonious.