“The world is changing very fast. Big will not beat small anymore. It will be the fast beating the slow.” – Rupert Murdoch
When computer-aided engineering (CAE) analysis techniques, like the finite element method, were first introduced, their primary role was to investigate why a design failed. Surely, this understanding would help designers avoid such failures in the future.
But soon, manufacturing companies realized that it was smarter to use CAE tools to predict whether a design would fail, before manufacturing. This gave designers the chance to make changes to designs and avoid most failures in the first place. This pass/fail test is still in place at many companies, in the form of scheduled iterations of computer-aided design (CAD) drawings followed by CAE simulations.
Often, companies decide on a fixed number of manual CAD/CAE design iterations ahead of time. I’ve often wondered how project managers know exactly how many iterations it will take to arrive at the best design. Naturally, they haven’t factored the last-minute redesign “fire drills” and disorganized patchwork of final design changes into that preset number of design iterations.
However, we often don’t have enough time or computing resources to carry out the number of design evaluations needed to find the optimal solution. In these cases, we have no choice but to relax our goal and seek the greatest possible design improvement within the available time.
Sometimes this advice is correct, but only in a very local sense. For example, if we want to estimate how sensitive a system is to a change in variable A, then we can hold all other variables constant and change variable A very slightly. The change in the system response divided by the change in variable A is an estimate of the sensitivity derivative at the original design point.
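As a minimal sketch of that one-variable-at-a-time estimate (the toy response function, design point, and step size below are illustrative assumptions, not taken from the text):

```python
import numpy as np

def sensitivity(f, x0, i, h=1e-6):
    """Estimate df/dx_i at the design point x0 with a forward difference."""
    x = np.array(x0, dtype=float)
    x[i] += h                                  # perturb only variable i, hold the rest constant
    return (f(x) - f(np.array(x0, dtype=float))) / h

# Toy response: f(x) = x0^2 + 3*x0*x1; the exact df/dx0 at (1, 2) is 2*1 + 3*2 = 8.
f = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]
print(sensitivity(f, [1.0, 2.0], i=0))         # ~8.0
```

Note that this estimate is only valid near the original design point; move far away and the derivative, and the conclusions drawn from it, can change completely.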
A similar gamble occurs when you apply some optimization approaches based on Design of Experiments (DOE) concepts. In this case, the actual objective function being minimized is evaluated at a predetermined set of design points. Then, a simple approximation of the objective function is developed by fitting an analytical function to these points. This approximate function is often called a response surface (also known as a surrogate function). The optimization search is then performed on the response surface, because evaluations of this simpler function are usually much quicker than evaluations of the actual objective function.
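A minimal sketch of this idea, assuming a one-variable objective, a predetermined set of sample points, and a quadratic response surface (all illustrative choices, not a specific method from the text):

```python
import numpy as np

def expensive_objective(x):
    # Stand-in for a costly CAE evaluation of the actual objective.
    return (x - 2.0) ** 2 + 0.5 * np.sin(3.0 * x)

# 1. Evaluate the objective at a predetermined set of design points (the DOE).
X = np.linspace(0.0, 4.0, 7)
Y = expensive_objective(X)

# 2. Fit a simple analytical approximation (here a quadratic response surface).
a, b, c = np.polyfit(X, Y, deg=2)

# 3. Search the cheap surrogate instead of the true objective
#    (a quadratic has a closed-form minimizer).
x_star = -b / (2.0 * a)
print(f"surrogate minimum near x = {x_star:.3f}")
```

The gamble is that the surrogate only resembles the true objective as well as the chosen sample points and fitted function allow; its minimum may sit some distance from the real one.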
Multi means “many” or “multiple.” Multidisciplinary design optimization (MDO) has become popular largely because it allows engineers to optimize a design across many different disciplines at the same time.
The first is the development of mental shortcuts, or heuristics, which allow us to make snap judgments, often correctly. For example, our intuition tells us that blurry objects are farther away than clear ones. This is often a helpful assumption, except that on foggy mornings, a car in front of you may be much closer than intuition tells you it is.
I have a great idea for a new reality adventure television series.
A trial-and-error approach to building and testing a myriad of hardware prototypes makes it too expensive to consider many design alternatives.
So, what is causing smart people to form this opinion? I believe there are four types of experiences that cause people to lose faith in optimization: