HyperStudy

Fundamentals

Definitions in Approximations

Fitting functions (Approximations) are metamodels that represent the actual output responses.

Some simulations are computationally expensive, which makes it impractical to rely on them exclusively for design studies. In these cases, using approximations leads to substantial savings in computational resources.

Some output responses are nonlinear and noisy functions. With such output responses in the problem formulation, optimization algorithms may not be able to find the global trend of the performance functions and may instead converge into a local minimum or maximum created by the surrounding noise. Output response approximations filter out this noise and capture the general trend of the functions. As a result, optimization algorithms become more effective.
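As a simple illustration of this smoothing effect (not HyperStudy code; the response function, noise level, and sample count below are assumptions made purely for demonstration), fitting a low-order polynomial by least squares recovers the underlying trend of a noisy response rather than its local oscillations:

import numpy as np

# Hypothetical noisy output response: a smooth quadratic trend plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 25)                       # sampled design points
y = (x - 0.5) ** 2 + 0.2 * rng.normal(size=x.size)   # noisy response values

# Fit a 2nd-order polynomial by least squares; the fit smooths the noise
# and follows the global trend, whose true minimum lies near x = 0.5.
coeffs = np.polynomial.polynomial.polyfit(x, y, deg=2)

x_min = -coeffs[1] / (2.0 * coeffs[2])               # vertex of the fitted parabola
print(f"estimated minimum at x = {x_min:.2f}")       # close to 0.5 despite the noise

An optimizer working on the fitted polynomial sees a single smooth minimum instead of the many small local minima introduced by the noise.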

When using approximations, there is always a tradeoff between accuracy and efficiency. The challenge is to determine how coarse the representation of the design space can be while still remaining accurate enough. The answer depends on the nature of the problem as well as on the available resources: the type of output responses, the number of design variables, and how many runs can be afforded. Many methods have been developed to create good-quality approximations for different kinds of problems. This chapter covers the approximation methods implemented in HyperStudy, namely Least Squares Regression (LSR), the Moving Least Squares Method (MLSM), and HyperKriging. It also discusses the appropriate applications for each method and how to diagnose the quality of the approximations.

The regression equation is the polynomial expression that relates the output response of interest to the factors that were varied. Selecting the proper model is required to create an accurate approximation. However, this requires a priori knowledge of the behavior of the output responses (linear, nonlinear, noisy, etc.) and enough runs to feed the selected model.

Linear Regression model:

$\hat{y} = a_0 + \sum_{i=1}^{n} a_i x_i$

Interaction Regression model:

$\hat{y} = a_0 + \sum_{i=1}^{n} a_i x_i + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} a_{ij} x_i x_j$

Quadratic Regression model (2nd order):

$\hat{y} = a_0 + \sum_{i=1}^{n} a_i x_i + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} a_{ij} x_i x_j + \sum_{i=1}^{n} a_{ii} x_i^2$

where $\hat{y}$ is the approximated output response, $x_i$ are the n input variables, and $a_0$, $a_i$, $a_{ij}$, $a_{ii}$ are the regression coefficients.
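As a minimal sketch of how such regression coefficients can be obtained outside of HyperStudy (the function name, the two-variable sample data, and the response used below are illustrative assumptions, not product code), each model reduces to a linear least squares problem on the corresponding design matrix:

import numpy as np

def design_matrix(X, model="quadratic"):
    """Build the regression design matrix for n input variables.

    model: "linear"      -> constant plus linear terms
           "interaction" -> linear terms plus cross products x_i * x_j (i < j)
           "quadratic"   -> interaction terms plus squared terms x_i^2
    """
    m, n = X.shape
    cols = [np.ones(m)]                         # constant term a_0
    cols += [X[:, i] for i in range(n)]         # linear terms a_i * x_i
    if model in ("interaction", "quadratic"):
        cols += [X[:, i] * X[:, j]              # cross terms a_ij * x_i * x_j
                 for i in range(n) for j in range(i + 1, n)]
    if model == "quadratic":
        cols += [X[:, i] ** 2 for i in range(n)]  # squared terms a_ii * x_i^2
    return np.column_stack(cols)

# Illustrative two-variable example with a made-up response.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(20, 2))        # 20 sampled designs
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 0.01 * rng.normal(size=20)

A = design_matrix(X, model="interaction")
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # least squares fit
print(coeffs)                                   # approximately [1.0, 2.0, -1.0, 0.5]

The fitted coefficients are the terms $a_0$, $a_i$, and $a_{ij}$ of the interaction model above.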

The question of how many runs (points) are needed to build an approximation comes up frequently. An approximation is only as good as the uniformity of the design sampling; for example, a variable studied at only two levels can only capture a linear relationship in the regression. Higher-order polynomial terms can be introduced by using more levels for the factors, but using more levels results in more runs.

If n is the number of input variables:

A linear model requires n + 1 runs.
An interaction model requires 1 + n + n(n - 1)/2 runs.
A quadratic model requires (n + 1)(n + 2)/2 runs.
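These counts equal the number of unknown coefficients in each polynomial. A small helper (the function name is arbitrary, chosen only for this sketch) makes the growth with n explicit:

def minimum_runs(n, model="quadratic"):
    """Minimum number of runs (i.e. unknown coefficients) for a regression
    model in n input variables."""
    linear = n + 1                       # a_0 plus n linear coefficients
    cross = n * (n - 1) // 2             # one a_ij per pair of variables
    if model == "linear":
        return linear
    if model == "interaction":
        return linear + cross
    if model == "quadratic":
        return linear + cross + n        # plus n squared-term coefficients
    raise ValueError(f"unknown model: {model}")

# For example, with n = 5 input variables:
print(minimum_runs(5, "linear"))         # 6
print(minimum_runs(5, "interaction"))    # 16
print(minimum_runs(5, "quadratic"))      # 21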


See Also

Post-Processing for Fits