OptiStruct SPMD includes another MPI-based parallelization approach, Failsafe Topology Optimization (FSO), for topology optimization of structures. Regular topology optimization runs may not account for the feasibility of a design when a section of the structure fails. FSO divides the structure into damage zones and generates multiple models (one per damage zone), where each model is identical to the original model minus one damage zone. Topology optimization is then run simultaneously for all generated models, and a final design is output that is optimized to account for all of them.
Typically, the number of damage zones is large, which means the number of SPMD domains is also large. Such a job should be run on multiple nodes in a cluster setup.
Activation
1. The FAILSAFE continuation line on the DTPL Bulk Data entry, the failsafe topology script run option (-fso), and the number of processors (-np) can be used to activate Failsafe Topology Optimization. For example, /Altair/hwsolvers/script/optistruct filename.fem -np 20 -fso.
Note: The executable option equivalent to the script option -fso is -fsomode.
2. The number of processors should be set equal to the number of damage zones in the original model + 1. To determine the number of damage zones, run the Topology Optimization model (with the FAILSAFE continuation line) in serial mode (or as a check run) and look at the .out file. The number of MPI processes for failsafe optimization is displayed in the .out file; this determines the number of processors (-np) for the subsequent MPI (-fso) run, as in the sketch below.
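For example, a minimal sketch of this two-step workflow (the input deck name and zone count are hypothetical):
[optistruct@host1~]$ $ALTAIR_HOME/scripts/optistruct filename.fem
(the .out file of this serial run reports the required number of MPI processes, for example 21 for a model with 20 damage zones)
[optistruct@host1~]$ $ALTAIR_HOME/scripts/optistruct filename.fem -fso -np 21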
Refer to Launching FSO for information on launching Failsafe Topology Optimization in OptiStruct.
Output
1. Separate <filename>_FSOi folders are created for each damage zone. Each folder contains the full topology optimization results for the corresponding model. For example, the folder <filename>_FSO1 contains the topology results (.out, .stat, .h3d, _des.h3d files, and so on) for the first damaged model (the original topology model minus the first damage zone), and so on.
2. In the main working directory, the Damage Zones are output for both the first layer and the overlap layer (if it is not deactivated) to the <filename>_fso.h3d file. The Damage Zones can then be visualized in HyperView.
3. Additionally, in the main working directory, the final Failsafe Topology Optimization results are output to the <filename>_des.h3d file. It is recommended to compare these results with the initial non-Failsafe Topology results to get a sense of the modified design.
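For illustration, a hypothetical working directory after an FSO run of filename.fem with two damage zones (the zone count is illustrative) would contain:
filename_FSO1/ - full topology results (.out, .stat, .h3d, _des.h3d, and so on) for the original model minus damage zone 1
filename_FSO2/ - full topology results for the original model minus damage zone 2
filename_fso.h3d - Damage Zones (first layer and overlap layer) for visualization in HyperView
filename_des.h3d - final Failsafe Topology Optimization results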
Supported Solution Sequences
1. Both Shell and Solid elements are currently supported.
2. Multi-body Dynamics (OS-MBD) and Geometric Nonlinear Analysis (RADIOSS Integration) are currently not supported.
3. FSO currently cannot be used in conjunction with the Domain Decomposition Method (DDM).
There are several ways to launch parallel programs with OptiStruct SPMD. Remember to propagate environment variables when launching OptiStruct SPMD, if needed; refer to the respective MPI vendor's manual for details. Starting with OptiStruct 14.0, commonly used MPI runtimes are automatically included as part of the HyperWorks installation. The various MPI installations are located at $ALTAIR_HOME/mpi.
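For instance, a quick way to see which MPI runtimes are bundled with your installation (a hypothetical session; the directory contents depend on your installation and platform):
[optistruct@host1~]$ ls $ALTAIR_HOME/mpi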
Using Solver Script
On a single host (Linux)
[optistruct@host1~]$ $ALTAIR_HOME/scripts/optistruct [INPUTDECK] [OS_ARGS] -mpi [MPI_TYPE] -fso -np [n]
Where,
[MPI_TYPE] is the MPI implementation used:
pl for IBM Platform-MPI (formerly HP-MPI)
i for Intel MPI
([MPI_TYPE] is optional; the default MPI implementation on Linux machines is i. Refer to the Run Options page for further information.)
[n] is the number of processors
[INPUTDECK] is the input deck file name
[OS_ARGS] lists the arguments to OptiStruct (Optional, refer to Run Options for further information).
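For example, a hypothetical single-host launch on Linux with Intel MPI for a model with 20 damage zones (the deck name and counts are illustrative):
[optistruct@host1~]$ $ALTAIR_HOME/scripts/optistruct filename.fem -mpi i -fso -np 21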
Note:
1. Adding the command line option -testmpi runs a small program that verifies whether your MPI installation, setup, library paths, and so on are accurate (see the example after these notes).
2. It is also possible to launch OptiStruct SPMD without the GUI/Solver Scripts (refer to the Appendix).
3. Adding the optional command line option -mpipath PATH helps locate the MPI installation if it is not included in the current search path or when multiple MPIs are installed.
4. If an MPI run option (such as -fso) is not specified, LDM is run by default (refer to OptiStruct SPMD).
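For instance, a hedged sketch of using these options (the deck name and process count are illustrative, [MPI_BIN_PATH] is a placeholder, and exact option ordering may vary):
[optistruct@host1~]$ $ALTAIR_HOME/scripts/optistruct filename.fem -mpi i -testmpi
[optistruct@host1~]$ $ALTAIR_HOME/scripts/optistruct filename.fem -mpi i -fso -np 21 -mpipath [MPI_BIN_PATH]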
Using Solver Script
On a single host (Windows)
$ALTAIR_HOME/hwsolvers/scripts/optistruct.bat [INPUTDECK] [OS_ARGS] -mpi [MPI_TYPE] -fso -np [n]
Where,
[MPI_TYPE] is the MPI implementation used:
pl for IBM Platform-MPI (formerly HP-MPI)
i for Intel MPI
ms for MS-MPI
([MPI_TYPE] is optional; the default MPI implementation on Windows machines is i. Refer to the Run Options page for further information.)
[n] is the number of MPI Processes
[INPUTDECK] is the input deck file name
[OS_ARGS] lists the arguments to OptiStruct (Optional, refer to Run Options for further information).
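For example, a hypothetical single-host launch on Windows with MS-MPI for the same illustrative model (the deck name and counts are illustrative):
$ALTAIR_HOME/hwsolvers/scripts/optistruct.bat filename.fem -mpi ms -fso -np 21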
Note:
1. Adding the command line option -testmpi runs a small program that verifies whether your MPI installation, setup, library paths, and so on are accurate.
2. It is also possible to launch OptiStruct FSO without the GUI/Solver Scripts (refer to the Appendix).
3. Adding the optional command line option -mpipath PATH helps locate the MPI installation if it is not included in the current search path or when multiple MPIs are installed.
4. If an MPI run option (such as -fso) is not specified, LDM is run by default (refer to OptiStruct SPMD).
See Also:
Design Optimization