Delivery: 1-2 business days
Extended return period until 31 January 2025

Applications of Linear and Nonlinear Models - Erik W. Grafarend - Book


  • Language: English
  • ISBN: 9783030946005
  • Binding: Paperback
  • Pages: 1140
  • Published: 3 October 2023
  • Edition: 2nd edition
  • Dimensions: 155x44x235 mm
  • Weight: 1891 g

Description of Applications of Linear and Nonlinear Models

This book provides numerous examples of linear and nonlinear model applications. Here, we present a nearly complete treatment of the Grand Universe of linear and weakly nonlinear regression models within the first 8 chapters. Our point of view is both an algebraic and a stochastic one. For example, there is an equivalence lemma relating the best linear uniformly unbiased estimation (BLUUE) in a Gauss-Markov model to the least squares solution (LESS) of a system of linear equations. While BLUUE is a stochastic regression model, LESS is an algebraic solution. In the first six chapters, we concentrate on underdetermined and overdetermined linear systems as well as systems with a datum defect. We review estimators/algebraic solutions of type MINOLESS, BLIMBE, BLUMBE, BLUUE, BIQUE, BLE, BIQUE and total least squares. The highlight is the simultaneous determination of the first moment and the second central moment of a probability distribution in an inhomogeneous multilinear estimation by the so-called E-D correspondence as well as its Bayes design. In addition, we discuss continuous networks versus discrete networks, the use of Grassmann-Plücker coordinates, criterion matrices of type Taylor-Karman, as well as fuzzy sets. Chapter seven is a speciality: the treatment of an overdetermined system of nonlinear equations on curved manifolds. This second edition adds three new chapters:
(1) A chapter on integer least squares, which covers: (i) the model for positioning as a mixed integer linear model that includes integer parameters; (ii) the formulation of the general integer least squares problem and the optimality of the least squares solution; (iii) the relation to the closest vector problem and the notion of a reduced lattice basis; (iv) the famous LLL algorithm for generating a Lovász-reduced basis.
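As a minimal illustration of the integer least squares idea (not the lattice-reduction or LLL machinery the chapter develops), the Python/NumPy sketch below solves the float least squares problem first and then searches a small neighborhood of the rounded solution for the integer minimizer; the toy data are hypothetical.

    import itertools
    import numpy as np

    # Toy overdetermined model y = A z + noise with integer parameters z (hypothetical data).
    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 2))
    z_true = np.array([3, -2])
    y = A @ z_true + 0.05 * rng.normal(size=8)

    # Step 1: ordinary (float) least squares solution, i.e. the LESS of the description above.
    z_float, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Step 2: brute-force search over integer vectors near the rounded float solution.
    # Practical solvers avoid this enumeration via lattice basis reduction.
    best, best_cost = None, np.inf
    for offset in itertools.product(range(-2, 3), repeat=2):
        z_int = np.round(z_float).astype(int) + np.array(offset)
        cost = np.sum((y - A @ z_int) ** 2)
        if cost < best_cost:
            best, best_cost = z_int, cost

    print("float LS:", z_float, "integer LS:", best)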
(2) A chapter on Bayes methods, which covers: (i) the general principle of Bayesian modeling, explaining the notions of prior and posterior distribution and taking a pragmatic approach to exploit the advantages of iterative Bayesian calculations and hierarchical modeling; (ii) Bayes methods for linear models with normally distributed errors, including noninformative priors, conjugate priors and normal-gamma distributions; and (iii) a short outlook on modern applications of Bayesian modeling, useful for nonlinear models or linear models with non-normal errors: Monte Carlo (MC), Markov chain Monte Carlo (MCMC) and approximate Bayesian computation (ABC) methods.
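For the conjugate-prior case mentioned in (ii), here is a minimal sketch of Bayesian linear regression with normally distributed errors and a known noise variance; the prior parameters and data are hypothetical, and the normal-gamma treatment of an unknown variance is omitted.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(20), rng.uniform(0, 1, 20)])   # design matrix
    beta_true = np.array([1.0, 2.5])
    sigma2 = 0.1 ** 2                                            # known noise variance
    y = X @ beta_true + np.sqrt(sigma2) * rng.normal(size=20)

    # Conjugate normal prior beta ~ N(m0, S0).
    m0 = np.zeros(2)
    S0 = np.eye(2) * 10.0

    # The posterior is again normal: precisions add, the mean is a precision-weighted blend.
    S0_inv = np.linalg.inv(S0)
    Sn = np.linalg.inv(S0_inv + X.T @ X / sigma2)                # posterior covariance
    mn = Sn @ (S0_inv @ m0 + X.T @ y / sigma2)                   # posterior mean

    print("posterior mean:", mn)
    print("posterior std:", np.sqrt(np.diag(Sn)))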
(3) A chapter on errors-in-variables models, which covers: (i) an introduction to the errors-in-variables (EIV) model and its difference from least squares estimators (LSE); (ii) the calculation of the total least squares (TLS) estimator and a summary of its properties; (iii) the idea of simulation extrapolation (SIMEX) estimators; (iv) the symmetrized SIMEX (SYMEX) estimator and its relation to TLS; and (v) a short outlook on nonlinear EIV models.
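A minimal sketch of the TLS estimator from (ii) for a single-regressor EIV model, using the standard SVD construction (smallest right singular vector of the augmented data matrix); the data are hypothetical and the SIMEX/SYMEX estimators are not shown.

    import numpy as np

    rng = np.random.default_rng(2)
    x_true = rng.uniform(0, 10, 50)
    beta = 0.8
    x = x_true + rng.normal(scale=0.3, size=50)      # regressor observed with error
    y = beta * x_true + rng.normal(scale=0.3, size=50)

    # Ordinary least squares ignores the error in x and is attenuated toward zero.
    b_ols = (x @ y) / (x @ x)

    # Total least squares: smallest right singular vector of the augmented matrix [x | y].
    A = np.column_stack([x, y])
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    v = Vt[-1]                                        # right singular vector of the smallest singular value
    b_tls = -v[0] / v[1]

    print("OLS:", b_ols, "TLS:", b_tls)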
The chapter on the algebraic solution of nonlinear systems of equations has also been updated in line with the newly emerging field of hybrid numeric-symbolic solutions to systems of nonlinear equations. The von Mises-Fisher distribution is characteristic for circular or (hyper)spherical data. Our last chapter is devoted to probabilistic regression, the special Gauss-Markov model with random effects leading to estimators of type BLIP and VIP, including Bayesian estimation.
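As a small illustration of the circular-data point (on the circle the von Mises-Fisher distribution reduces to the von Mises distribution), the sketch below simulates directional data and estimates the mean direction from the resultant vector; the concentration value is hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)
    mu, kappa = np.pi / 4, 5.0                 # mean direction and concentration (hypothetical)
    theta = rng.vonmises(mu, kappa, size=500)  # angles on the circle

    # Mean direction from the resultant vector of the unit observations.
    C, S = np.cos(theta).sum(), np.sin(theta).sum()
    mu_hat = np.arctan2(S, C)
    R_bar = np.hypot(C, S) / theta.size        # mean resultant length, close to 1 for concentrated data

    print("estimated mean direction:", mu_hat, "mean resultant length:", R_bar)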
A great part of the work is presented in four appendices. Appendix A is a treatment of tensor algebra, namely linear algebra, matrix algebra and multilinear algebra. Appendix B is devoted to sampling distributions and their use in terms of confidence intervals and confidence regions. Appendix C reviews the elementary notions of statistics, namely random events and stochastic processes. Appendix D introduces the basics of Groebner basis algebra, its careful definition, the Buchberger algorithm and, especially, the C. F. Gauss combinatorial algorithm.
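A minimal sketch of the Groebner basis idea from Appendix D, using SymPy's groebner routine (a Buchberger-style implementation) to triangularize a small polynomial system; the system itself is hypothetical.

    from sympy import groebner, symbols

    x, y = symbols('x y')
    # A small polynomial system; a lexicographic Groebner basis eliminates variables
    # in turn, so the last basis element involves only y and can be solved first.
    polys = [x**2 + y**2 - 1, x - y**2]
    G = groebner(polys, x, y, order='lex')
    print(G)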
