This book may be regarded as consisting of two parts. In Chapters I-IV we present what we regard as essential topics in an introduction to deterministic optimal control theory. This material has been used by the authors for one-semester graduate-level courses at Brown University and the University of Kentucky. The simplest problem in the calculus of variations is taken as the point of departure, in Chapter I. Chapters II, III, and IV deal with necessary conditions for an optimum, existence and regularity theorems for optimal controls, and the method of dynamic programming. The beginning reader may find it useful first to learn the main results, corollaries, and examples. These tend to be found in the earlier parts of each chapter. We have deliberately postponed some difficult technical proofs to later parts of these chapters. In the second part of the book we give an introduction to stochastic optimal control for Markov diffusion processes. Our treatment follows the dynamic programming method, and depends on the intimate relationship between second-order partial differential equations of parabolic type and stochastic differential equations. This relationship is reviewed in Chapter V, which may be read independently of Chapters I-IV. Chapter VI is based to a considerable extent on the authors' work in stochastic control since 1961. It also includes two other topics important for applications, namely, the solution to the stochastic linear regulator and the separation principle.
This book is based on a seminar given at the University of California at Los Angeles in the Spring of 1975. The choice of topics reflects my interests at the time and the needs of the students taking the course. Initially the lectures were written up for publication in the Lecture Notes series. However, when I accepted Professor A. V. Balakrishnan's invitation to publish them in the Springer series on Applications of Mathematics, it became necessary to alter the informal and often abridged style of the notes and to rewrite or expand much of the original manuscript so as to make the book as self-contained as possible. Even so, no attempt has been made to write a comprehensive treatise on filtering theory, and the book still follows the original plan of the lectures. While this book was in preparation, the two-volume English translation of the work by R. S. Liptser and A. N. Shiryaev appeared in this series. The first volume and the present book have the same approach to the subject, viz. that of martingale theory. Liptser and Shiryaev go into greater detail in the discussion of statistical applications and also consider interpolation and extrapolation as well as filtering.
The problem of controlling or stabilizing a system of differential equations in the presence of random disturbances is intuitively appealing and has been a motivating force behind a wide variety of results grouped loosely together under the heading of "Stochastic Control." This book is concerned with a special instance of this general problem, the "Adaptive LQ Regulator," which is a stochastic control problem of partially observed type that can, in certain cases, be solved explicitly. We first describe this problem, as it is the focal point for the entire book, and then describe the contents of the book. The problem revolves around an uncertain linear system with initial state x(0) = x_θ in R^n, where θ ∈ {1, ..., N} is a random variable representing this uncertainty and (A_j, B_j, C_j) and x_j are the coefficient matrices and initial state, respectively, of a linear control system, for each j = 1, ..., N. A common assumption is that the mechanism causing this uncertainty is additive noise, and that consequently the "controller" has access only to the observation process y(·), where y = C_θ x + noise.
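The partially observed setup described above can be sketched in a few lines of code. The following is a minimal discrete-time illustration, not the book's continuous-time formulation: the matrices, noise level, and zero control input are all hypothetical choices made here for concreteness. The hidden random index θ selects one of N candidate models (A_j, B_j, C_j) with initial state x_j, and the controller would see only the noisy observations y.

```python
import numpy as np

rng = np.random.default_rng(0)

# N = 2 hypothetical candidate models (A_j, B_j, C_j) with initial states x_j.
models = [
    (np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.0], [1.0]]),
     np.array([[1.0, 0.0]]), np.array([1.0, 0.0])),
    (np.array([[1.1, 0.0], [0.2, 0.7]]), np.array([[1.0], [0.0]]),
     np.array([[0.0, 1.0]]), np.array([0.0, 1.0])),
]

# The true model index theta is random and hidden from the controller.
theta = int(rng.integers(len(models)))
A, B, C, x0 = models[theta]

def simulate(A, B, C, x0, steps=20, noise_std=0.1):
    """Run the hidden linear system and return the observation record y_0..y_{steps-1}."""
    x = x0.copy()
    ys = []
    for _ in range(steps):
        # Placeholder zero control; an adaptive controller would compute u
        # from the past observations y alone, without knowing theta.
        u = np.zeros(1)
        y = C @ x + noise_std * rng.standard_normal(1)   # y = C_theta x + noise
        ys.append(float(y[0]))
        x = A @ x + B @ u                                # state update under model theta
    return ys

ys = simulate(A, B, C, x0)
```

The point of the sketch is only the information structure: `theta` and `x` are never exposed, so any adaptive regulator must infer them from `ys` while simultaneously controlling the system.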
The stimulus for the present work is the growing need for more accurate numerical methods. The rapid advances in computer technology have not provided the resources needed for computations which make use of low-accuracy methods. The computational speed of computers is continually increasing, while memory still remains a problem when one handles large arrays. More accurate numerical methods allow us to reduce the overall computation time by several orders of magnitude. The problem of finding the most efficient methods for the numerical solution of equations, under the assumption of fixed array size, is therefore of paramount importance. Advances in the applied sciences, such as aerodynamics, hydrodynamics, particle transport, and scattering, have increased the demands placed on numerical mathematics. New mathematical models, describing various physical phenomena in greater detail than ever before, create new demands on applied mathematics, and have acted as a major impetus to the development of computer science. For example, when investigating the stability of a fluid flowing around an object, one needs to solve the low-viscosity form of certain hydrodynamic equations describing the fluid flow. The usual numerical methods for doing so require the introduction of a "computational viscosity," which usually exceeds the physical value; the results obtained thus present a distorted picture of the phenomena under study. A similar situation arises in the study of the behavior of the oceans, assuming weak turbulence. Many additional examples of this type can be given.