Special tools are required for examining and solving optimization problems. The main tools in the study of local optimization are classical calculus and its modern generalizations, which form nonsmooth analysis. The gradient and various kinds of generalized derivatives allow us to accomplish a local approximation of a given function in a neighbourhood of a given point. This kind of approximation is very useful in the study of local extrema. However, local approximation alone cannot help to solve many problems of global optimization, so there is a clear need to develop special global tools for solving these problems.

The simplest and most well-known area of global and simultaneously local optimization is convex programming. The fundamental tool in the study of convex optimization problems is the subgradient, which actually plays both a local and a global role. First, a subgradient of a convex function f at a point x carries out a local approximation of f in a neighbourhood of x. Second, the subgradient permits the construction of an affine function, which does not exceed f over the entire space and coincides with f at x. This affine function h is called a support function. Since f(y) ≥ h(y) for all y, the second role is global. In contrast to a local approximation, the function h will be called a global affine support.
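To make the global role concrete, here is the standard subgradient inequality from convex analysis (a well-known fact, not spelled out in this excerpt): if g is a subgradient of a convex function f at x, then

\[
  f(y) \;\ge\; f(x) + \langle g,\, y - x \rangle
  \quad \text{for all } y,
\]

so the affine function

\[
  h(y) \;=\; f(x) + \langle g,\, y - x \rangle
\]

satisfies h(y) ≤ f(y) over the entire space and h(x) = f(x). This h is precisely the global affine support described above: it approximates f locally near x and bounds f from below globally.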
Thus the question arises of how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones, a scheme suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints.
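For reference, the classical constructions being generalized are, in standard notation (assumed here for illustration, not taken from this excerpt): for the problem of minimizing f(x) subject to g_i(x) ≤ 0, i = 1, ..., m, the Lagrange function

\[
  L(x, \lambda) \;=\; f(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x),
  \qquad \lambda_i \ge 0,
\]

and the penalty function

\[
  P_c(x) \;=\; f(x) + c \sum_{i=1}^{m} \bigl(\max\{0,\, g_i(x)\}\bigr)^2,
  \qquad c > 0,
\]

each of which replaces the constrained problem by an unconstrained minimization whose solutions, under suitable conditions, recover or approximate solutions of the original problem.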