Intuition plays a critical role in all stages of a design exploration study, from defining the problem statement to building the simulation model to interpreting the results. But what about the search process itself? Should we make design improvements based on intuition, or should we allow a mathematical search engine to explore the design space for better designs? The answer is both. We call this shared process collaborative design exploration.
The SHERPA search strategy allows you to inject your design ideas before and during an exploration study. Before you start a study, you can seed it with multiple ideas (in the form of actual designs) that might help SHERPA to locate productive regions of the design space more quickly, thus speeding up the overall search. For example, in addition to the baseline design, you might consider seeding the study with other potentially good designs that:
- you have investigated or produced in the past
- your competitors have used
- are feasible, but perhaps not optimal
- are high performing relative to one or more criteria, but not all of them
- have some desirable features, but don’t necessarily perform well
- you have a hunch may work well
- are from a previous HEEDS MDO study
One or more of these injected ideas might contribute to a more efficient search, and the only cost is the time required to enter the variable values that define each design. SHERPA evaluates the injected designs when the search process is launched, so there is no need to simulate them before injection.
Design sensitivities are a measure of how much an objective or constraint response varies due to a small change in a design variable. Based on this definition, they are sometimes referred to as sensitivity derivatives. Let’s discuss how to use them properly, as well as how not to use them.
First, note that the design sensitivities we refer to here are calculated for a particular design, not for a design space. Statistical methods of sensitivity analysis can provide useful information about a design space, but not the type of information we seek here.
Since a design represents a point in the design space, it is clear that sensitivities are defined at a point, as are mathematical derivatives. Two distinct designs within a design space will probably have different sensitivities unless the design space is linear, which is seldom the case for engineering problems.
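To make the point-wise nature of sensitivities concrete, here is a small sketch using central finite differences. The response function and variable values are invented for illustration; in practice HEEDS obtains responses by running your simulation model.

```python
# Hypothetical illustration: estimating design sensitivities at a single
# design point with central finite differences. The response function is
# nonlinear on purpose, so the sensitivities differ from point to point.

def response(x):
    # Stand-in for a simulation response (e.g., mass or stress).
    return x[0] ** 2 + 3.0 * x[0] * x[1] + x[1] ** 3

def sensitivities(f, x, h=1e-6):
    """Central-difference derivative of f with respect to each variable at x."""
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grads.append((f(xp) - f(xm)) / (2.0 * h))
    return grads

# Two distinct designs in the same (nonlinear) design space
# have different sensitivities:
g_a = sensitivities(response, [1.0, 1.0])  # approx. [5.0, 6.0]
g_b = sensitivities(response, [2.0, 3.0])  # approx. [13.0, 33.0]
```

Because the response is nonlinear, the gradient at one design tells you little about the gradient at another, which is exactly why sensitivities computed for one design should not be extrapolated across the design space.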
If a valid license is not available for any of your modeling and simulation software during a HEEDS study, the default response is to classify that design as an error design. But this behavior can be modified using a pre-analysis command that checks for license availability before HEEDS launches each analysis. This simple step can help to avoid many such error designs, making your exploration studies more effective. Let’s review how to do this for FLEXlm-based licenses.
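As one possible shape for such a pre-analysis check, the sketch below queries a FLEXlm server with `lmutil lmstat` and parses its standard usage summary line. The server address (`27000@license-server`) and feature name (`MYSOLVER`) are placeholders, and this is an assumption about how you might script the check, not a HEEDS-supplied utility.

```python
# Hypothetical pre-analysis license check for a FLEXlm-served feature.
# Parses the "Users of <feature>: (Total of N licenses issued; Total of
# M licenses in use)" summary line printed by "lmutil lmstat".
import re
import subprocess
import sys

USAGE_RE = re.compile(
    r"Users of (?P<feature>\S+):\s+\(Total of (?P<issued>\d+) licenses? issued;"
    r"\s+Total of (?P<used>\d+) licenses? in use\)"
)

def free_licenses(lmstat_output, feature):
    """Return the number of free seats for `feature`, or None if not listed."""
    for m in USAGE_RE.finditer(lmstat_output):
        if m.group("feature") == feature:
            return int(m.group("issued")) - int(m.group("used"))
    return None

def main():
    # Placeholder server and feature; adjust for your installation.
    out = subprocess.run(
        ["lmutil", "lmstat", "-a", "-c", "27000@license-server"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Exit nonzero when no seat is free, so the wrapper can hold the analysis.
    sys.exit(0 if (free_licenses(out, "MYSOLVER") or 0) > 0 else 1)

# main() would be invoked as the pre-analysis command before each analysis.

# Example of the summary line being parsed:
sample = (
    "Users of MYSOLVER:  (Total of 10 licenses issued;  "
    "Total of 3 licenses in use)\n"
)
```

The exit-code convention (zero means a license is free) is also an assumption; adapt it to however your pre-analysis command is expected to signal success or failure.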
We all feel the “need for speed” when trying to find better design solutions through simulation. What can we do to speed up our design exploration studies? Let’s discuss all the options here, including one that may not be obvious.
In a typical study, the total CPU time needed for a design exploration is estimated with this simple formula:
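A common form of this estimate, written in our own notation as a plausible reconstruction rather than a quotation of any official formula, is:

```latex
T_{\text{total}} \approx \frac{N_{\text{designs}} \times T_{\text{eval}}}{N_{\text{parallel}}}
```

where N_designs is the number of designs evaluated during the study, T_eval is the average CPU time to evaluate one design, and N_parallel is the number of designs evaluated concurrently. Any speed-up must come from shrinking a numerator term or growing the denominator: faster models, fewer required evaluations, or more parallel evaluations.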
Each release of HEEDS adds new capabilities that simplify capturing and automating process workflows, improve exploration of the design space for more innovative solutions, and provide new ways of visualizing results to gain deeper insight into product operation and performance.
Comparing design exploration results across projects and models, or against experimental data, is a valuable way of gaining insight. This is often used to evaluate performance against legacy designs or even competitive products. Results comparison is even easier to undertake in version 2016.10: results from any external source can now be imported directly into HEEDS Post. We will use the industrial exhaust duct example in Figure 1 to highlight how this works.
Figure 1. Industrial exhaust duct system
The first step in any design exploration study is to define the way in which the virtual prototype simulation model is to be constructed and modified. This typically involves identifying the various modeling and simulation tools that are involved, specifying where they are executed, choosing the simulation models that are being modified, selecting the parameters being driven and monitored, and documenting what outputs are being stored for each design point.
While the actual simulation models may change from project to project, the workflow and the way the models are tested often remain the same. For example, the workflow for finding the best lower control arm configuration for a vehicle front suspension is identical (or very similar) across vehicle platforms. Only the inputs change: the geometry ranges, loads, and required performance targets.
Figure 1. Example HEEDS workflow
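The separation described above can be sketched as data: a fixed workflow definition combined with platform-specific inputs. Every name below is invented for illustration and does not correspond to any HEEDS API; the point is only that the reusable workflow and the per-platform data live in different places.

```python
# Hypothetical illustration of workflow reuse: the structure (tools,
# driven and monitored parameters) is shared, while ranges and targets
# vary per vehicle platform. All names here are made up.

CONTROL_ARM_WORKFLOW = {
    "tools": ["CAD", "meshing", "MBD solver"],
    "driven": ["bushing_x", "bushing_y", "arm_thickness"],
    "monitored": ["camber_gain", "mass"],
}

def make_study(workflow, platform_inputs):
    """Combine the shared workflow with platform-specific ranges and targets."""
    return {**workflow, **platform_inputs}

sedan = make_study(CONTROL_ARM_WORKFLOW, {
    "ranges": {"arm_thickness": (2.0, 6.0)},
    "targets": {"camber_gain": -0.5},
})
```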
Sometimes we have more than one output response that needs to be either minimized or maximized, so we need some way to encourage multiple responses to be as small as possible or as large as possible at the same time. These are called multi-objective design exploration problems.
One of the most common reasons for using multiple objectives is to assess the trade-off between two or more competing responses. In other words, what is the cost to improve one response in terms of making another response worse?
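The designs that expose this trade-off are the non-dominated (Pareto-optimal) ones: no other design is at least as good in every objective. A minimal sketch, with invented objective values for five hypothetical designs:

```python
# Extracting the Pareto front from candidate designs scored on two
# objectives that are both minimized. The (mass, cost) pairs are invented.

def pareto_front(designs):
    """Return designs not dominated by any other (all objectives minimized)."""
    front = []
    for a in designs:
        dominated = any(
            b != a and all(b[i] <= a[i] for i in range(len(a)))
            for b in designs
        )
        if not dominated:
            front.append(a)
    return front

# (mass, cost) pairs for five hypothetical designs:
candidates = [(2.0, 9.0), (3.0, 7.0), (5.0, 4.0), (6.0, 6.0), (8.0, 3.0)]
best = pareto_front(candidates)  # (6.0, 6.0) is dominated by (5.0, 4.0)
```

Along the resulting front, improving one objective necessarily worsens the other, which is exactly the trade-off question posed above.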
There are many design exploration applications where it is important for performance results to match a certain range of values, whether from experimental sources or ideal goals: for example, curves of engine torque vs. rpm, bushing deflection vs. load, or wing lift vs. angle of attack. Quite often, though, the baseline curve data include fluctuations that make curve fitting more challenging. There can also be portions of the curve where a close fit is far more important.
Figure 1. Sample Baseline Curves
To tackle these challenges and to streamline curve creation, HEEDS 2016.04 contains additional curve tools that ensure better results alignment. You can now:
- Weight curve ranges
- Normalize RMS values
- Simplify imported curve data selection
Let’s review these capabilities in detail to show how they can help.
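The first two ideas, weighting curve ranges and normalizing the RMS error, can be sketched in a few lines. This is a generic formulation under the assumption that both curves are sampled at the same points; the sample values and weights are invented and do not reflect how HEEDS computes its internal metric.

```python
# Weighted, normalized RMS error between a target curve and an actual
# curve sampled at the same points. Larger weights emphasize ranges
# where a close fit matters more; normalizing by the target magnitude
# makes errors comparable across curves of different scales.
import math

def weighted_rms(target, actual, weights):
    """Weighted RMS error, normalized by the weighted RMS of the target."""
    num = sum(w * (t - a) ** 2 for t, a, w in zip(target, actual, weights))
    den = sum(w * t ** 2 for t, w in zip(target, weights))
    return math.sqrt(num / den)

target = [0.0, 1.0, 2.0, 3.0, 4.0]
actual = [0.1, 1.1, 2.0, 2.8, 4.2]
# Emphasize the last two points (e.g., the peak region of a torque curve):
weights = [1.0, 1.0, 1.0, 4.0, 4.0]
err = weighted_rms(target, actual, weights)
```

Minimizing such a weighted, normalized error as an objective drives the search toward designs whose curves match the target most closely where it matters most.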
Highlighting a Few New Features that Help You Discover Better Designs, Faster
Often, improvements to the simplest things can have a big impact on your daily tasks. There are many tasks we perform repeatedly when working with HEEDS, and streamlining those saves time and reduces effort. HEEDS 2015.11 contains many enhancements focused on simplifying workflows and I want to highlight a few that help in exploring design performance relationships.
To explore relationships between variables and responses in detail, you typically need multiple plots of the same type, each showing different variables, to gain a clearer understanding of dependency or influence. However, many plot settings are tailored to the particular way you want to view the results, such as axis scales, data symbols, curve styles, title fonts, and so on.
To avoid having to create a new plot from scratch and redefine all these settings, you can now right-click and select the Copy Plot option. This makes an exact copy of the existing plot, with all its customization. You then just need to change the variables or responses being displayed, saving a lot of setup time.
Figure 1. Make a copy of an existing plot with a single right click option
During a design exploration study, HEEDS makes many calls to your simulation model to evaluate potential designs. This means that your model needs to accurately predict design performance values (objectives and constraints) over a wide range of inputs (design variables). Most modern simulation models satisfy this requirement without difficulty.
But in some cases, it is too much to ask that a model be perfect for all combinations of variable values. For example:
- In a shape optimization problem, some combinations of shape parameter values might produce invalid geometries, making it impossible to generate a CAD model for those designs. Ideally, shape parameters should be defined in a way that ensures all geometries are valid, but that is not always a realistic expectation.
- Nonlinear or dynamic CAE models occasionally experience problems with convergence or other kinds of numerical errors. Hopefully your models are robust, but it is more difficult to predict the behavior of some designs than others, so numerical errors will occur now and then.
Of course, there are many other reasons why a simulation model might terminate prematurely or predict incorrect results. Because many of these failures are unavoidable, HEEDS has been designed to be robust against them. We refer to such failed evaluations as error designs.
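The general pattern, sketched below in invented code rather than HEEDS's actual internal mechanism, is to wrap each model evaluation so that a failure is recorded as an error design and the search simply moves on instead of aborting the study.

```python
# Illustrative handling of error designs: evaluate() is a stand-in for
# launching the simulation model, and raises on failure (e.g., an invalid
# geometry or a non-converged nonlinear solve). All names are invented.

def evaluate(design):
    if design["thickness"] <= 0.0:
        raise ValueError("invalid geometry")
    return {"mass": 7.8 * design["thickness"]}

def run_study(designs):
    results = []
    for d in designs:
        try:
            results.append(
                {"design": d, "responses": evaluate(d), "error": False}
            )
        except Exception:
            # Classify as an error design and keep searching.
            results.append({"design": d, "responses": None, "error": True})
    return results

outcome = run_study([{"thickness": 2.0}, {"thickness": -1.0}])
```

The key design choice is that an error design still produces a record: the search engine can learn to avoid the failing region of the design space rather than losing the whole study to one bad evaluation.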