Welcome to the HEEDS Design Exploration Blog, your trusted source for education and conversation about Design Exploration and HEEDS. From classical algorithms to modern techniques, structural methods to multidisciplinary strategies, and simple tutorials to advanced commercial applications, we provide the information you need to successfully apply design exploration to virtually any problem. Discover Better Designs, Faster!
- New Portals – Easily include AVL DVI (Cruise), FloEFD, Fluent, MADYMO, and System Synthesis models within your workflow
- Parameter Groups – Organize parameter data into custom groups to easily manage large projects
- Analysis Environment Data – Customize the environment needed for each analysis without the need for scripting
- Vector Response Plot – Create 2D/3D curve plots from vector responses with the User Plot
- Non-Dominated Sorting Tool – Evaluate trade-offs between responses in HEEDS Post for all study types
- Surrogate Sensitivities Plot – Interrogate local sensitivities to better understand trends
Intuition plays a critical role in all stages of a design exploration study, from defining the problem statement to building the simulation model to interpreting the results. But what about the search process itself? Should we make design improvements based on intuition, or should we allow a mathematical search engine to explore the design space for better designs? The answer is both. We call this shared process collaborative design exploration.
The SHERPA search strategy allows you to inject your design ideas before and during an exploration study. Before you start a study, you can seed it with multiple ideas (in the form of actual designs) that might help SHERPA to locate productive regions of the design space more quickly, thus speeding up the overall search. For example, in addition to the baseline design, you might consider seeding the study with other potentially good designs that:
- you have investigated or produced in the past
- your competitors have used
- are feasible, but perhaps not optimal
- are high performing relative to one or more criteria, but not all of them
- have some desirable features, but don’t necessarily perform well
- you have a hunch may work well
- are from a previous HEEDS MDO study
One or more of these injected ideas might contribute to a more efficient search, and the only cost is the time required to enter the variable values that define each design. SHERPA evaluates the injected designs when the search process is launched, so there is no need to simulate them before injection.
Design sensitivities are a measure of how much an objective or constraint response varies due to a small change in a design variable. Based on this definition, they are sometimes referred to as sensitivity derivatives. Let’s discuss how to use them properly, as well as how not to use them.
First, note that the design sensitivities we refer to here are calculated for a particular design, not for a design space. Statistical methods of sensitivity analysis can provide useful information about a design space, but not the type of information we seek here.
Since a design represents a point in the design space, it is clear that sensitivities are defined at a point, as are mathematical derivatives. Two distinct designs within a design space will probably have different sensitivities unless the design space is linear, which is seldom the case for engineering problems.
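The point made above can be illustrated with a small finite-difference sketch. The response function below is a hypothetical stand-in for a simulation output; it is not from the post, and HEEDS does not necessarily compute sensitivities this way.

```python
# Central-difference estimate of a design sensitivity dR/dx at a design point.
# `response` is a hypothetical nonlinear response (e.g., stress vs. a thickness x).

def response(x):
    """Hypothetical nonlinear response used for illustration only."""
    return 3.0 * x**2 + 2.0 * x + 1.0

def sensitivity(f, x, h=1e-5):
    """Approximate df/dx at the design point x with a central difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Because the response is nonlinear, two distinct designs have different
# sensitivities (here the exact derivative is 6x + 2):
s1 = sensitivity(response, 1.0)   # near 8.0
s2 = sensitivity(response, 2.0)   # near 14.0
```

The two values differ precisely because the design space is nonlinear, which is the behavior described above.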
I received an email today from a marketing organization that began with the question: “What if the next time a customer came through your door, they could interact with a hologram speaking their own language? Your company would look pretty innovative and, well, just straight up cool.”
If you’re anything like me, you’ve had your fill of speaking to robotic phone operators or pushing fifteen buttons on your phone to get routed to someone who has any chance of actually helping you with your question or challenge. Don’t get me wrong, I love new and innovative technology. But, only if it really helps me do things faster, easier, or better.
If a valid license is not available for any of your modeling and simulation software during a HEEDS study, that design is classified by default as an error design. But this behavior can be modified using a pre-analysis command to check for the availability of a license before HEEDS launches each analysis. This simple process can help to avoid many such error designs, making your exploration studies more effective. Let’s review how to do this for FLEXlm-based licenses.
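One way such a pre-analysis check might work is to query the FLEXlm license server (e.g., via `lmutil lmstat`) and parse the seat counts before launching the solver. The sketch below parses a sample string in the typical `lmstat` output format; the feature name `MYSOLVER` and the counts are illustrative assumptions, not details from the post.

```python
import re

# Sketch of a pre-analysis license check for a FLEXlm-served feature.
# In practice you would capture the output of
#   lmutil lmstat -a -c <port>@<server>
# and parse it; here we parse a sample string in the typical format.

def licenses_available(lmstat_output, feature):
    """Return True if at least one seat of `feature` is free."""
    pattern = (rf"Users of {re.escape(feature)}:\s+\(Total of (\d+) licenses? issued;"
               rf"\s+Total of (\d+) licenses? in use\)")
    m = re.search(pattern, lmstat_output)
    if not m:
        return False  # feature not served: treat as unavailable
    issued, in_use = int(m.group(1)), int(m.group(2))
    return in_use < issued

# Illustrative lmstat-style line (all seats taken):
sample = "Users of MYSOLVER: (Total of 10 licenses issued; Total of 10 licenses in use)"
```

A pre-analysis script built on a check like this could exit with a nonzero status when no seat is free, so the analysis is not launched and the design is not wasted as an error design.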
We all feel the “need for speed” when trying to find better design solutions through simulation. What can we do to speed up our design exploration studies? Let’s discuss all the options here, including one that may not be obvious.
In a typical study, the total CPU time needed to complete a design exploration is determined by this simple formula:
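The formula itself is not reproduced in this excerpt. A commonly used estimate, which is an assumption here rather than the formula from the post, is that total time scales with the number of design evaluations times the time per evaluation, divided by the number of evaluations run in parallel:

```python
import math

def total_time_hours(n_designs, hours_per_design, concurrent=1):
    """Plausible estimate (assumed form, not the post's formula):
    wall-clock time for a study evaluating n_designs, each taking
    hours_per_design, with `concurrent` evaluations in parallel."""
    return math.ceil(n_designs / concurrent) * hours_per_design

# e.g., 200 designs at 0.5 h each on 4 parallel workers takes ~25 h,
# which hints at the non-obvious speed-up option: run designs concurrently.
```

Under this assumed form, the levers for speed are clear: fewer evaluations (a more efficient search), faster evaluations (simpler models), or more concurrency.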
In each release of HEEDS, new capabilities are added to simplify the capture and automation of process workflows, to explore the design space for more innovative solutions, and to visualize results in new ways that provide deeper insight into product operation and performance.
Comparing design exploration results between different projects or models, or against experimental data, is a valuable way of gaining insight. This is often used to evaluate performance against legacy designs or even competitive products. Results comparison is even easier in version 2016.10: results from any external source can now be imported directly into HEEDS Post. We will use the industrial exhaust duct example shown in Figure 1 to highlight how this works.
The first step in any design exploration study is to define the way in which the virtual prototype simulation model is to be constructed and modified. This typically involves identifying the various modeling and simulation tools that are involved, specifying where they are executed, choosing the simulation models that are being modified, selecting the parameters being driven and monitored, and documenting what outputs are being stored for each design point.
While the actual simulation models may change from project to project, the workflow and the way the models are tested often remain the same. For example, the workflow for finding the best lower control arm configuration for a vehicle front suspension is identical (or very similar) across vehicle platforms; only the inputs, such as geometry ranges, loads, and required performance targets, change.
Sometimes we have more than one output response that needs to be either minimized or maximized, so we need some way to encourage multiple responses to be as small as possible or as large as possible at the same time. These are called multi-objective design exploration problems.
One of the most common reasons for using multiple objectives is to assess the trade-off between two or more competing responses. In other words, what is the cost to improve one response in terms of making another response worse?
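The trade-off idea can be made concrete with a minimal non-dominance check for minimization objectives. This is a generic textbook sketch, not the implementation behind the Non-Dominated Sorting Tool in HEEDS Post, and the objective values below are illustrative.

```python
def dominates(a, b):
    """True if design a dominates b: no worse in every (minimized)
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Illustrative (cost, mass) pairs; (3.0, 4.0) is dominated by (2.0, 3.0),
# while the remaining designs trade one objective off against the other.
designs = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0)]
```

The designs that survive this filter form the Pareto front; moving along it improves one response only at the expense of another, which is exactly the cost question posed above.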
There are many design exploration applications where it is important for performance results to match a certain range of values, whether from experimental sources or ideal goals: for example, curves of engine torque vs. rpm, bushing deflection vs. load, or wing lift vs. angle of attack. Quite often, though, the baseline curve data include fluctuations that make curve fitting more challenging. There can also be portions of the curve where a close fit is far more important.
To tackle these challenges and streamline curve creation, HEEDS 2016.04 contains additional curve tools to ensure better alignment of results. You can now:
- Weight curve ranges
- Normalize RMS values
- Simplify imported curve data selection
Let’s review these capabilities in detail to show how they can help.
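To make the weighting and normalization ideas concrete, here is one plausible formulation of a weighted RMS error between a candidate curve and a target curve, normalized by the target's value range. This is an assumed formulation for illustration, not necessarily the exact metric HEEDS uses, and the torque values and weights are made up.

```python
def weighted_rms_error(target, candidate, weights=None):
    """Weighted RMS difference between two sampled curves, with each
    point difference normalized by the target's value range.
    Assumed formulation for illustration only."""
    if weights is None:
        weights = [1.0] * len(target)
    span = (max(target) - min(target)) or 1.0  # avoid division by zero
    num = sum(w * ((c - t) / span) ** 2
              for w, t, c in zip(weights, target, candidate))
    return (num / sum(weights)) ** 0.5

# Illustrative torque-vs-rpm samples (four points along the curve):
torque_target    = [100.0, 150.0, 180.0, 170.0]
torque_candidate = [102.0, 148.0, 175.0, 168.0]
# Emphasize the peak-torque region, where a close fit matters most:
weights = [1.0, 1.0, 3.0, 1.0]
```

Raising a point's weight penalizes misfit there more heavily, which is the idea behind weighting curve ranges; dividing by the span is one way to normalize RMS values so curves with different magnitudes are comparable.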