When More Is Not Better

Suppose that your favorite finite element software boasted the following claims:

“Over a dozen equation solvers are available to approximate the solution of your problem, and each solver contains a rich set of parameters that you can define to tune the solver’s performance. To maximize the accuracy of your solution and the efficiency of the solution process, simply choose the solver that is intended for your problem type, and then tune it properly. Though it is often not possible to classify your problem type beforehand, usually the right solver can be identified within 3-5 attempts. Then, you can use an iterative tuning process to make the solution even more accurate and efficient.”

If the above statements were true, then each finite element solution would require a full-blown research project to find the right equation solver. The added time and cost of numerous solution iterations would offset many of the benefits of the finite element method within the design process. 

Users would grow frustrated by their inability to systematically determine which solver to use, and many would eventually avoid the technology altogether. This state of affairs would be completely unacceptable to modern users of the finite element method. Even the idea seems absurd.

But what if we modified the above hypothetical scenario by simply replacing “equation solvers” with “optimization algorithms” and “solution process” with “search process”? We would then arrive at the actual claims made about most modern optimization software:

“Over a dozen optimization algorithms are available to approximate the solution of your problem, and each algorithm contains a rich set of parameters that you can define to tune the algorithm’s performance. To maximize the accuracy of your solution and the efficiency of the search process, simply choose the algorithm that is intended for your problem type, and then tune it properly. Though it is often not possible to classify your problem type beforehand, usually the right algorithm can be identified within 3-5 attempts. Then, you can use an iterative tuning process to make the solution even more accurate and efficient.”

Somehow the absurdity of this statement is not quite as obvious. Most developers of optimization software are still racing to see who can boast the longest list of algorithms on their product data sheets.

The losers in this race are clearly the users of such optimization tools.

Rather than adding value or clarity to the design process, a longer list of optimization algorithms contributes to inefficiency and confusion. The use of optimization technology will not become mainstream until users rely on a single algorithm to solve many broad classes of problems in an efficient and robust manner. This will require a major paradigm shift in the way search algorithms are developed and how they behave.

What is needed is a search process that does more than follow a fixed set of instructions based on a predefined strategy. One that can lead and organize a team of explorers, each with a diverse and complementary set of skills. One that is agile in adapting the search process to the unforeseen local conditions in various parts of the design space, while also carrying out a broad and efficient exploration of the entire domain. A highly skilled and multilingual trekker. A veritable Sherpa.
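
To make that vision concrete, here is a minimal sketch of such a coordinated, adaptive search, assuming a team of two simple “explorers” (one global, one local) and a credit rule that shifts the evaluation budget toward whichever explorer has recently succeeded. The objective function, explorer designs, and credit scheme are all illustrative assumptions, not a description of any particular product or algorithm.

```python
# A toy sketch of a coordinated, adaptive search: several simple
# "explorers" work on the same problem, and a coordinator shifts the
# evaluation budget toward whichever explorer has recently produced
# better designs. Everything here is illustrative, not a real product.
import random

def objective(x):
    # Hypothetical multimodal test problem to minimize.
    return sum(xi * xi - 10 * (0.5 - abs(xi % 1 - 0.5)) for xi in x)

def random_explorer(best_x, dim, bounds):
    # Global exploration: sample anywhere in the domain.
    lo, hi = bounds
    return [random.uniform(lo, hi) for _ in range(dim)]

def local_explorer(best_x, dim, bounds):
    # Local refinement: small perturbation around the current best.
    lo, hi = bounds
    return [min(hi, max(lo, xi + random.gauss(0, 0.1))) for xi in best_x]

def adaptive_search(objective, dim=5, bounds=(-5.0, 5.0), budget=2000):
    explorers = [random_explorer, local_explorer]
    credit = [1.0] * len(explorers)  # recent success of each explorer
    best_x = [random.uniform(*bounds) for _ in range(dim)]
    best_f = objective(best_x)
    for _ in range(budget):
        # Pick an explorer with probability proportional to its credit.
        i = random.choices(range(len(explorers)), weights=credit)[0]
        x = explorers[i](best_x, dim, bounds)
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
            credit[i] += 1.0  # reward the successful strategy
        credit[i] = max(0.1, credit[i] * 0.999)  # slowly forget old wins
    return best_x, best_f

if __name__ == "__main__":
    x, f = adaptive_search(objective)
    print(f"best objective: {f:.4f}")
```

In practice the team of explorers would be larger and the coordination far more sophisticated, but even this toy version shows how a single entry point can hide a diverse, self-adapting set of strategies from the user.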
