A Brief History of Optimization

When Thomas Edison developed the first long-lasting, high-quality light bulb in 1879, his successful design was the result of a lengthy and laborious trial-and-error search for the best filament material, a process we now call the Edisonian approach.

While Edison had no fundamental knowledge of how various materials resist electrical current, today's engineers are often armed with far greater technical knowledge of and experience in their domains. This allows them to create initial designs based on intuition before testing those designs to failure. Design flaws observed during a test can then be incrementally corrected through what we call the make-it-and-break-it method.

Advances in computing power and in computer-aided engineering (CAE) software now make it possible to create virtual prototypes of potential designs prior to building and testing expensive physical prototypes.  This reduces the cost and time required to perform each design iteration and provides greater understanding of how a design performs. 

However, the potential of the virtual prototyping approach is still limited by two main factors. First, each iteration requires an engineer to manually create or modify a computer model of the system. Despite ever-improving software, this is still a cumbersome, error-prone, and time-consuming process.

Second, the success of this approach is still constrained by the limits of human intuition and experience. No matter how brilliant the design team is, the human mind often cannot predict or comprehend the effects of changing multiple variables at the same time in a very complex system. This profound barrier, coupled with time constraints, severely limits the number and types of design iterations that get performed. The result is too often a sub-optimal solution that is an artifact of the team's collective experience.

The desire to increase productivity led naturally to automation of the design iteration process. Process automation software was introduced that could capture the typical manual process for building and testing a virtual prototype and then execute it automatically. Each new design iteration could thus be performed more quickly, without the usual drudgery and risk of manual error.

It soon became obvious that exploration of the design space could also be automated by adding a smart “do-loop” around a design evaluation process, and an instant market was created for all of the classical methods of optimization and design of experiments.

With the promise of reducing design time and cost while improving product quality, automated design optimization held tremendous potential. Starting with a sub-optimal design, a numerical optimization algorithm could be used to iteratively adjust a set of pre-selected design parameters in an attempt to achieve a set of design targets.
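To make the "do-loop" concrete, here is a minimal sketch of such an automated iteration, not drawn from any particular tool: the design variables, the target values, and the toy evaluate_design() function are invented for illustration, and in a real workflow the evaluation step would call out to a CAE solver rather than a closed-form formula.

```python
# Illustrative sketch only: parameters, targets, and the stand-in "simulation"
# below are invented; a real workflow would replace evaluate_design() with a
# call into a CAE model of the product.
from scipy.optimize import minimize

TARGET_DEFLECTION = 2.0   # assumed design target (mm)
TARGET_MASS = 5.0         # assumed design target (kg)

def evaluate_design(x):
    """Stand-in for a virtual prototype: maps design parameters to responses."""
    thickness, width = x
    deflection = 50.0 / (thickness * width**2)   # toy beam-like response
    mass = 2.7 * thickness * width               # toy mass model
    return deflection, mass

def objective(x):
    """Penalize deviation from the design targets."""
    deflection, mass = evaluate_design(x)
    return (deflection - TARGET_DEFLECTION)**2 + (mass - TARGET_MASS)**2

# The automated "do-loop": the optimizer repeatedly adjusts the pre-selected
# parameters and re-runs the evaluation, steering toward the design targets.
result = minimize(objective, x0=[1.0, 1.0], bounds=[(0.5, 5.0), (0.5, 5.0)])
print("best parameters:", result.x, "objective:", result.fun)
```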

But the promise of this approach was not easily realized. It was quickly discovered that classical numerical optimization algorithms have significant limitations when applied to a wide range of commercial design problems. Many algorithms are applicable only to certain types of design variables, or when the number of design variables is small. They may produce smaller improvements than could be attained, or the end result may depend on the starting design. And those methods that have broader search capability are often very inefficient. Moreover, selecting the right algorithm for a problem and setting its tuning parameters turned out to be a complex research problem in itself, usually requiring an iterative process not too dissimilar from the Edisonian approach described above.
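The dependence on the starting design is easy to demonstrate. The short sketch below, again purely illustrative, applies a gradient-based local optimizer to a toy objective with several local minima; two different starting points settle into two different answers.

```python
# Minimal illustration of one limitation named above: a gradient-based local
# search converges to whichever local optimum is nearest its starting design,
# so the end result depends on where you start.
import numpy as np
from scipy.optimize import minimize

def multimodal(x):
    # Toy objective with several local minima.
    return np.sin(3 * x[0]) + 0.1 * x[0]**2

for x0 in (-2.0, 2.0):
    res = minimize(multimodal, x0=[x0])
    print(f"start {x0:+.1f} -> x* = {res.x[0]:+.3f}, f = {res.fun:+.3f}")
```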

As a result, the practical application of design automation tools based on classical optimization technology has most often resulted in only incremental improvements at best, a small benefit compared to the promise of the technology. Many optimization tools in use today still fall into this category.

Fortunately, with advanced hybrid and adaptive design search algorithms available, the capability now exists to efficiently explore a much larger and more complex design space. These methods take full advantage of powerful, inexpensive computers and networks to modify virtual system models, while intelligently searching for optimal values of design parameters that affect product performance and cost.
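As a rough, openly documented stand-in for this class of methods, the sketch below uses differential evolution, a population-based global search available in SciPy. The hybrid and adaptive algorithms in commercial tools are proprietary and considerably more sophisticated; the point here is only that a global search over a wide variable range does not hinge on a single starting design.

```python
# Hedged sketch: differential evolution serves here only as a familiar stand-in
# for the broader class of global, adaptive search methods described above.
import numpy as np
from scipy.optimize import differential_evolution

def multimodal(x):
    # Same toy objective with several local minima as in the earlier sketch.
    return np.sin(3 * x[0]) + 0.1 * x[0]**2

# A population-based global search samples the whole (wide) variable range,
# so the outcome does not depend on a single starting design.
result = differential_evolution(multimodal, bounds=[(-10.0, 10.0)], seed=0)
print("global search found x* =", result.x, "with f =", result.fun)
```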

This means that designers can consider a larger number of design variables, each with a wider range, providing two key benefits: First, designers don't have to waste time simplifying the definition of a problem into a form that a traditional optimization algorithm can handle. Second, and most importantly, a wider initial definition of the design problem dramatically increases the chance of discovering a significantly better design, perhaps even a new design concept that is outside the initial intuition of the engineering team.

This new class of optimization technology enables broader, more comprehensive, and faster searches for innovative designs than was possible using previous generations of tools. Moreover, it requires no expertise in optimization theory, so it is easier for non-experts and experts alike to use. By leveraging an engineer's potential to discover new design concepts, it overcomes the limits of human intuition and extends the designer's professional capability to achieve breakthrough designs and accelerated innovation.
