Welcome to the HEEDS Design Exploration Blog, your trusted source for education and conversation about Design Exploration and HEEDS. From classical algorithms to modern techniques, structural methods to multidisciplinary strategies, and simple tutorials to advanced commercial applications, we provide the information you need to successfully apply design exploration to virtually any problem. Discover Better Designs, Faster!
Design sensitivities are a measure of how much an objective or constraint response varies due to a small change in a design variable. Based on this definition, they are sometimes referred to as sensitivity derivatives. Let’s discuss how to use them properly, as well as how not to use them.
First, note that the design sensitivities we refer to here are calculated for a particular design, not for a design space. Statistical methods of sensitivity analysis can provide useful information about a design space, but not the type of information we seek here.
Since a design represents a point in the design space, it is clear that sensitivities are defined at a point, as are mathematical derivatives. Two distinct designs within a design space will probably have different sensitivities unless the design space is linear, which is seldom the case for engineering problems. Continue reading
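The point-wise nature of sensitivities can be sketched with a simple forward finite difference. The response function below is a hypothetical stand-in for a simulation model, not anything from HEEDS; note how the sensitivities differ at two different design points because the response is nonlinear.

```python
# Hypothetical response function standing in for a simulation model.
def response(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1]

def sensitivities(f, x, h=1e-6):
    """Forward-difference sensitivities of f at the design point x."""
    base = f(x)
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grads.append((f(xp) - base) / h)
    return grads

# Sensitivities are defined at a point: two designs, two different answers.
print(sensitivities(response, [1.0, 2.0]))  # approximately [8.0, 3.0]
print(sensitivities(response, [2.0, 1.0]))  # approximately [7.0, 6.0]
```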
I received an email today from a marketing organization that began with the question: “What if the next time a customer came through your door, they could interact with a hologram speaking their own language? Your company would look pretty innovative and well, just straight up cool.”

If you’re anything like me, you’ve had your fill of speaking to robotic phone operators or pushing fifteen buttons on your phone to get routed to someone who has any chance of actually helping you with your question or challenge. Don’t get me wrong, I love new and innovative technology. But, only if it really helps me do things faster, easier, or better. Continue reading
If a valid license is not available for any of your modeling and simulation software during a HEEDS study, the default response is for that design to be classified as an error design. But this behavior can be modified using a pre-analysis command to check for the availability of a license before HEEDS launches each analysis. This simple process can help to avoid many such error designs, making your exploration studies more effective. Let’s review how to do this for FLEXlm-based licenses.
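As a sketch of such a pre-analysis check, the script below queries a FLEXlm server with `lmutil lmstat` and exits nonzero when no seat is free. The feature name and server address are placeholders, and the parsing is illustrative (based on the typical "Total of N licenses issued; Total of M licenses in use" line), not taken from HEEDS documentation.

```python
import re
import subprocess
import sys

def seats_free(lmstat_output):
    """Parse `lmutil lmstat` output; True if a feature line shows
    more licenses issued than in use."""
    for line in lmstat_output.splitlines():
        nums = re.findall(r"Total of (\d+)", line)
        if len(nums) == 2 and int(nums[0]) > int(nums[1]):
            return True
    return False

def license_available(feature, server):
    """Query a FLEXlm server (assumes lmutil is on PATH;
    feature/server names are placeholders)."""
    try:
        out = subprocess.run(
            ["lmutil", "lmstat", "-f", feature, "-c", server],
            capture_output=True, text=True, timeout=30,
        ).stdout
    except (OSError, subprocess.TimeoutExpired):
        return False
    return seats_free(out)

if __name__ == "__main__":
    # A nonzero exit code lets the pre-analysis step flag the design
    # instead of launching a solver that will fail on license checkout.
    sys.exit(0 if license_available("MY_FEATURE", "27000@licserver") else 1)
```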
We all feel the “need for speed” when trying to find better design solutions through simulation. What can we do to speed up our design exploration studies? Let’s discuss all the options here, including one that may not be obvious.
For a typical study, the total CPU time needed for design exploration is estimated with this simple formula:
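The formula itself appears to have been lost from this excerpt. A common back-of-the-envelope estimate (an assumption here, not quoted from HEEDS documentation) is: total wall time ≈ (number of design evaluations ÷ number of concurrent evaluations) × average time per evaluation. A minimal sketch:

```python
import math

def study_wall_time(n_designs, avg_eval_minutes, n_parallel=1):
    """Rough wall-clock estimate for a design exploration study.
    Illustrative only: ignores queueing delays and per-design variation."""
    return math.ceil(n_designs / n_parallel) * avg_eval_minutes

# 200 evaluations at 30 minutes each, 4 running concurrently:
print(study_wall_time(200, 30, 4))  # 1500 minutes
```

The formula makes the speed-up levers obvious: fewer evaluations (a more efficient search), faster evaluations (simpler or cheaper models), or more evaluations in parallel.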
Each release of HEEDS adds new capabilities to simplify the capture and automation of process workflows, to explore the design space for more innovative solutions, and to visualize results in new ways that give deeper insight into product operation and performance.
Comparing design exploration results between different projects or models, or against experimental data, is a valuable way of gaining insight. This is often done when evaluating performance against legacy designs or even competitive products. Results comparison is now even easier in version 2016.10: results from any external source can be imported directly into HEEDS Post. We will use the industrial exhaust duct example in Figure 1 to highlight how this works.
The first step in any design exploration study is to define the way in which the virtual prototype simulation model is to be constructed and modified. This typically involves identifying the various modeling and simulation tools that are involved, specifying where they are executed, choosing the simulation models that are being modified, selecting the parameters being driven and monitored, and documenting what outputs are being stored for each design point.
While the actual simulation models may change from project to project, the workflow and the way the models are tested often remain the same. For example, the workflow for finding the best lower control arm configuration for a vehicle front suspension is identical (or very similar) across vehicle platforms. Only the inputs change: geometry ranges, loads, and required performance.
Sometimes we have more than one output response that needs to be either minimized or maximized, so we need some way to encourage multiple responses to be as small as possible or as large as possible at the same time. These are called multi-objective design exploration problems.
One of the most common reasons for using multiple objectives is to assess the trade-off between two or more competing responses. In other words, what is the cost to improve one response in terms of making another response worse? Continue reading
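The trade-off idea can be made concrete with a Pareto-dominance check: a design is on the trade-off front when no other design is at least as good in every objective and strictly better in one. The sketch below (illustrative, not the HEEDS implementation) filters a set of hypothetical (mass, cost) designs, both to be minimized.

```python
def dominates(a, b):
    """True if design a is at least as good as b in every objective
    (minimization assumed) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the non-dominated designs: the trade-off curve."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Hypothetical (mass, cost) pairs for four candidate designs:
candidates = [(10, 5), (8, 7), (12, 4), (9, 9)]
print(pareto_front(candidates))  # [(10, 5), (8, 7), (12, 4)]
```

The design (9, 9) is dominated by (8, 7) and drops out; the three survivors each represent a different compromise between mass and cost.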
There are many design exploration applications where it is important for performance results to match a certain range of values, whether from experimental data or ideal targets. Examples include curves for engine torque vs. RPM, bushing deflection vs. load, or wing lift vs. angle of attack. Quite often, though, the baseline curve data includes fluctuations that make curve fitting more challenging. There may also be portions of the curve where a close fit is far more important.
To tackle these challenges and streamline curve creation, HEEDS 2016.04 contains additional curve tools to ensure better results alignment. You can now:
- Weight curve ranges
- Normalize RMS values
- Simplify imported curve data selection
Let’s review these capabilities in detail to show how they can help. Continue reading
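To illustrate what weighting and normalization buy you, here is a minimal sketch of a weighted, normalized RMS curve error. This is an assumption-laden illustration of the general technique, not the HEEDS implementation: weights emphasize the regions where a close fit matters most, and normalization scales the error by the target curve's magnitude so curves of different units can be compared.

```python
import math

def weighted_rms_error(target, simulated, weights=None, normalize=True):
    """Weighted RMS deviation between two curves sampled at the same points.
    Illustrative sketch only, not the HEEDS implementation."""
    if weights is None:
        weights = [1.0] * len(target)
    wsum = sum(weights)
    # Weighted mean-square error: heavily weighted points dominate the fit.
    mse = sum(w * (t - s) ** 2 for w, t, s in zip(weights, target, simulated)) / wsum
    rms = math.sqrt(mse)
    if normalize:
        # Scale by the target's RMS magnitude so the error is dimensionless.
        scale = math.sqrt(sum(t * t for t in target) / len(target))
        rms /= scale if scale else 1.0
    return rms

# Torque-vs-RPM style example: weight the mid-range points more heavily.
target    = [100.0, 150.0, 180.0, 160.0]
simulated = [ 98.0, 155.0, 178.0, 150.0]
print(weighted_rms_error(target, simulated, weights=[1, 4, 4, 1]))
```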
Highlighting a Few New Features that Help You Discover Better Designs, Faster
Often, improvements to the simplest things can have a big impact on your daily tasks. There are many tasks we perform repeatedly when working with HEEDS, and streamlining those saves time and reduces effort. HEEDS 2015.11 contains many enhancements focused on simplifying workflows and I want to highlight a few that help in exploring design performance relationships.
To explore relationships between variables and responses in detail, you typically need multiple plots of the same type, each with different variables, to gain a clearer understanding of dependency or influence. However, many plot features are tailored to the particular way you want to view the results, such as axis scales, data symbols, curve styles, title fonts, and so on.
To avoid creating a new plot from scratch and redefining all these settings, you can now right-click and select the Copy Plot option. This makes an exact copy of the existing plot, with all its customization. You then just need to change the variables or responses being displayed, saving a lot of setup time.
During a design exploration study, HEEDS makes many calls to your simulation model to evaluate potential designs. This means that your model needs to accurately predict design performance values (objectives and constraints) over a wide range of inputs (design variables). Most modern simulation models satisfy this requirement without difficulty.
But in some cases, it is too much to ask that a model be perfect for all combinations of variable values. For example:
- In a shape optimization problem, some combinations of shape parameter values might produce invalid geometries, making it impossible to generate a CAD model for those designs. Ideally, shape parameters should be defined in a way that ensures all geometries are valid, but that is not always a realistic expectation.
- Nonlinear or dynamic CAE models occasionally experience problems with convergence or other kinds of numerical errors. Hopefully your models are robust, but it is more difficult to predict the behavior of some designs than others, so numerical errors will occur now and then.
Of course, there are many other reasons why a simulation model might terminate prematurely or predict incorrect results. Because many of these cases are unavoidable, HEEDS has been designed to be robust against these model failures. We refer to these as error designs. Continue reading
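Conceptually, robustness against model failures means the study driver classifies a failed evaluation instead of crashing. The wrapper below is a hypothetical sketch of that idea (function names and the return convention are illustrative, not the HEEDS API): a raised exception or a NaN response marks the design as an error design, and the search simply moves on.

```python
import math

def evaluate_design(sim, variables):
    """Run one analysis and classify the outcome instead of crashing the study."""
    try:
        responses = sim(variables)
    except Exception:
        # Solver crash, invalid geometry, missing license, etc.
        return {"status": "error", "responses": None}
    if any(r is None or math.isnan(r) for r in responses):
        # Numerical failure: e.g. a non-converged nonlinear solve.
        return {"status": "error", "responses": None}
    return {"status": "ok", "responses": responses}

def fragile_sim(x):
    """Toy model that fails for some inputs, like a shape study with
    parameter combinations that produce invalid geometry."""
    if x[0] < 0:
        raise RuntimeError("geometry generation failed")
    return [x[0] ** 2]

print(evaluate_design(fragile_sim, [-1.0]))  # {'status': 'error', 'responses': None}
print(evaluate_design(fragile_sim, [2.0]))   # {'status': 'ok', 'responses': [4.0]}
```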