This book, written in honor of Arno Tausch, presents cutting-edge research on globalization, development, and global values. Internationally renowned authors cover topics such as global economic and political cycles, global values, and support for terrorism. Over the last five decades, the Austrian political scientist Arno Tausch has been a pioneer in studies on globalization, development and global values. This collection of essays takes up the issues dealt with by Tausch and presents perspectives for the 21st century. Throughout his work, Tausch applied quantitative methods to study the fundamental issues of the global political economy and the global political system, such as dependency, economic and political cycles, and global values, based on a rigorous study of available social scientific data, such as the World Values Survey and the Arab Barometer.
The interaction between mathematicians and statisticians has been shown to be an effective approach for dealing with actuarial, insurance and financial problems, both from an academic perspective and from an operational one. The collection of original papers presented in this volume pursues precisely this purpose. It covers a wide variety of subjects in the actuarial, insurance and finance fields, all treated in the light of the successful cooperation between the above two quantitative approaches. The papers published in this volume present theoretical and methodological contributions and their applications to real contexts. With respect to the theoretical and methodological contributions, some of the considered areas of investigation are: actuarial models; alternative testing approaches; behavioral finance; clustering techniques; coherent and non-coherent risk measures; credit scoring approaches; data envelopment analysis; dynamic stochastic programming; financial contagion models; financial ratios; intelligent financial trading systems; mixture normality approaches; Monte Carlo-based methods; multicriteria methods; nonlinear parameter estimation techniques; nonlinear threshold models; particle swarm optimization; performance measures; portfolio optimization; pricing methods for structured and non-structured derivatives; risk management; skewed distribution analysis; solvency analysis; stochastic actuarial valuation methods; variable selection models; time series analysis tools. As regards the applications, they are related to real problems associated, among others, with: banks; collateralized fund obligations; credit portfolios; defined benefit pension plans; double-indexed pension annuities; the efficient-market hypothesis; exchange markets; financial time series; firms; hedge funds; non-life insurance companies; returns distributions; socially responsible mutual funds; unit-linked contracts. This book is aimed at academics, Ph.D. students, practitioners, professionals and researchers. But it will also be of interest to readers with some quantitative background knowledge.
This text describes a comprehensive adjoint sensitivity analysis methodology (C-ASAM), developed by the author, enabling the efficient and exact computation of arbitrarily high-order functional derivatives of model responses to model parameters in large-scale systems. The model's responses can be either scalar-valued functionals of the model's parameters and state variables (as customarily encountered, e.g., in optimization problems) or general function-valued responses, which are often of interest but are currently not amenable to efficient sensitivity analysis. The C-ASAM framework is set in linearly increasing Hilbert spaces, each of state-function dimensionality, as opposed to exponentially increasing parameter-dimensional spaces, thereby breaking the so-called "curse of dimensionality" in sensitivity and uncertainty analysis. The C-ASAM applies to any model; the larger the number of model parameters, the more efficient the C-ASAM becomes for computing arbitrarily high-order response sensitivities. The text includes illustrative paradigm problems which are fully worked out to enable a thorough understanding of the C-ASAM's principles and their practical application. The book will be helpful to those working in the fields of sensitivity analysis, uncertainty quantification, model validation, optimization, data assimilation, model calibration, sensor fusion, reduced-order modelling, inverse problems and predictive modelling. It serves as a textbook or as supplementary reading for graduate courses on these topics in academic departments in the natural, biological, and physical sciences and engineering. This Volume Three, the third of three, covers systems that are nonlinear in the state variables, model parameters and associated responses. The selected illustrative paradigm problems share these general characteristics. A separate Volume One covers systems that are linear in the state variables.
The purpose of this book is to thoroughly prepare researchers from diverse areas for quantification theory. As is well known, quantification theory has attracted the attention of countless researchers, some mathematically oriented and others not, but all experts in their own disciplines. Quantifying non-quantitative (qualitative) data requires a variety of mathematical and statistical strategies, some of which are quite complicated. Unlike many books on quantification theory, this one places more emphasis on the mathematical prerequisites than on the details of quantification theory itself. As such, the book is primarily intended for readers whose specialty lies outside the mathematical sciences. It was designed to offer non-mathematicians, in simple terms, the variety of mathematical tools used in quantification theory. Once all the preliminaries are fully discussed, quantification theory is introduced in the last section as a simple application of the mathematical procedures discussed up to that point. The book thus opens up further frontiers of quantification theory as simple applications of basic mathematics.
This edited book is the first written in English that deals comprehensively with behaviormetrics. The term "behaviormetrics" encompasses research including all sorts of quantitative approaches to disclose human behavior. Researchers in behaviormetrics have developed, extended, and improved methods such as multivariate statistical analysis, survey methods, cluster analysis, machine learning, multidimensional scaling, correspondence analysis or quantification theory, network analysis, clustering, factor analysis, test theory, and related methods. In the spirit of behaviormetrics, researchers have applied these methods to data obtained from surveys, experiments, or websites in a diverse range of fields. The purpose of this book is twofold. One is to present studies that show how the basic elements of behaviormetrics have developed into present-day behaviormetrics. The other is to present studies performed mainly by those who would like to pioneer new fields of behaviormetrics, and studies that display elements of future behaviormetrics. These studies have various characteristics, dealing with theoretical or conceptual subjects, algorithms, models, methods, and applications to a wide variety of fields. This book helps readers to understand the present and future of behaviormetrics.
In a VUCA world that is proving ever more volatile, uncertain and complex, companies, organizations and states must respond promptly and appropriately to each situation. Basing decisions on past experience is less successful in such times than an accurate understanding of current conditions. The importance of the empirical sciences is growing: permanently observing the environment, promptly analyzing cause-and-effect relationships, and deriving new insights from them. From this one can determine which measures are suitable, with a predictable probability, for reaching one's own goals, e.g. which price for an offer generates the desired demand, or which marketing measure reaches a desired target group. Where classical statistics was once used for calculations and predictions, free (open source) tools such as R today make it possible to read in data in the most diverse formats and from any number of sources, prepare it, and analyze it with methods of artificial intelligence and machine learning. The results can then be presented visually, so that decision-makers can benefit from them quickly and effectively. The age of data science has arrived. Digitalization is more than a buzzword or a promise; it can be implemented and used by anyone. Based on the most recent version of R at the time of publication, this book shows you how to use artificial intelligence and machine learning in Industry 4.0.
This textbook offers a comprehensive theory of medical decision making under uncertainty, combining informative test theory with the expected utility hypothesis. The book shows how the parameters of Bayes' theorem can be combined with a value function of health states to arrive at informed test and treatment decisions. The authors distinguish between risk neutral, risk averse and prudent decision makers and demonstrate the effects of risk preferences on physicians' decisions. They analyze individual tests, multiple tests and endogenous tests where the test result is determined by the decision maker. Finally, the topic is examined in the context of health economics by introducing a trade-off between enjoying health and consuming other goods, so that the extent of treatment and thus the potential improvement in the patient's health become endogenous.
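The test-and-treat logic described above can be sketched numerically: a test result updates the disease probability via Bayes' theorem, and utilities attached to health states turn that probability into a decision. This is only an illustration, not the authors' model; all numbers below are hypothetical.

```python
# Hypothetical sketch: Bayes' theorem plus a utility function of health
# states yields an informed treatment decision after a test result.

def post_test_probability(prior, sensitivity, specificity, positive):
    """Bayes' theorem for a binary diagnostic test."""
    if positive:
        num = sensitivity * prior
        den = num + (1 - specificity) * (1 - prior)
    else:
        num = (1 - sensitivity) * prior
        den = num + specificity * (1 - prior)
    return num / den

def expected_utility(p, u_treat_sick, u_treat_healthy, u_no_sick, u_no_healthy):
    """Expected utilities of treating vs. not treating at disease probability p."""
    treat = p * u_treat_sick + (1 - p) * u_treat_healthy
    no_treat = p * u_no_sick + (1 - p) * u_no_healthy
    return treat, no_treat

# Made-up numbers: 10% prior, a test that is 90% sensitive and specific.
p = post_test_probability(prior=0.10, sensitivity=0.9, specificity=0.9, positive=True)
treat, no_treat = expected_utility(p, u_treat_sick=0.8, u_treat_healthy=0.9,
                                   u_no_sick=0.3, u_no_healthy=1.0)
print(f"post-test probability: {p:.2f}")
print("decision:", "treat" if treat > no_treat else "do not treat")
```

With these hypothetical utilities, a positive result raises the disease probability from 10% to 50% and tips the expected-utility comparison toward treating.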
This book presents a selection of peer-reviewed contributions on the latest developments in time series analysis and forecasting, presented at the 7th International Conference on Time Series and Forecasting, ITISE 2021, held in Gran Canaria, Spain, July 19-21, 2021. It is divided into four parts. The first part addresses general modern methods and theoretical aspects of time series analysis and forecasting, while the remaining three parts focus on forecasting methods in econometrics, time series forecasting and prediction, and numerous other real-world applications. Covering a broad range of topics, the book will give readers a modern perspective on the subject. The ITISE conference series provides a forum for scientists, engineers, educators and students to discuss the latest advances and implementations in the foundations, theory, models and applications of time series analysis and forecasting. It focuses on interdisciplinary research encompassing computer science, mathematics, statistics and econometrics.
This book provides comprehensive guidance on the use of sound statistical methods for the evaluation of fatigue data of welded components and structures obtained under constant amplitude loading and used to produce S-N curves. Recommendations for analyzing fatigue data are available, but they do not cover all the statistical treatments that may be required to utilize fatigue test results, and none offers specific guidelines for analyzing fatigue data obtained from tests on welded specimens. For ease of use, working sheets are provided that assist in the proper statistical assessment of experimental fatigue data for practical problems, giving the procedure and a numerical application as an illustration.
This book presents a study of statistical inferences based on kernel-type estimators of distribution functions. The inferences involve matters such as quantile estimation, nonparametric tests, and mean residual life expectation, to name just a few. Convergence rates for kernel estimators of density functions are slower than those of ordinary parametric estimators, which have root-n consistency. If an appropriate kernel function is used, kernel estimators of distribution functions recover root-n consistency, and the inferences based on kernel distribution estimators have root-n consistency as well. Further, the kernel-type estimator produces smooth estimates. Estimators based on the empirical distribution function have a discrete distribution, and the normal approximation cannot be improved; that is, the validity of the Edgeworth expansion cannot be proved. If the support of the population density function is bounded, there is a boundary problem, namely the estimator is not consistent near the boundary. The book also contains a study of the mean squared errors of the estimators and the Edgeworth expansion for quantile estimators.
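The idea behind a kernel-type distribution function estimator can be sketched briefly: instead of the step-function empirical CDF, one averages smooth kernel CDFs centred at the data points. The sketch below uses a Gaussian kernel and an ad hoc bandwidth; it is an illustration of the general idea, not the book's own notation or bandwidth theory.

```python
# Sketch: smooth kernel estimator of a distribution function vs. the
# step-function empirical CDF. Gaussian kernel; bandwidth choice is ad hoc.
import math
import random

def kernel_cdf(data, x, h):
    """Average of Gaussian kernel CDFs centred at the data points."""
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sum(Phi((x - xi) / h) for xi in data) / len(data)

def empirical_cdf(data, x):
    """Step-function empirical distribution function."""
    return sum(xi <= x for xi in data) / len(data)

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(500)]
h = len(sample) ** (-1 / 5)  # a simple rate-based bandwidth, for illustration only

# For a standard normal sample, both estimates at x = 0 should be near 0.5,
# but the kernel estimate varies smoothly in x while the empirical CDF jumps.
print(kernel_cdf(sample, 0.0, h), empirical_cdf(sample, 0.0))
```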
Sergio Albeverio made important contributions to many fields, ranging from physics to mathematics, while creating new research areas from their interplay. Some of them are presented in this volume, which grew out of the Random Transformations and Invariance in Stochastic Dynamics Workshop held in Verona in 2019. To understand the theory of thermo- and fluid dynamics, statistical mechanics, quantum mechanics and quantum field theory, Albeverio and his collaborators developed stochastic theories with strong interplays with operator theory and functional analysis. His contributions to the theory of (non-Gaussian) SPDEs, the related theory of (pseudo-)differential operators, and ergodic theory helped solve problems related, among other topics, to thermo- and fluid dynamics. His scientific work on the theory of interacting particles and its extension to configuration spaces led, e.g., to the solution of open problems in statistical mechanics and quantum field theory. Together with Raphael Hoegh Krohn he introduced the theory of infinite-dimensional Dirichlet forms, which is nowadays used in many different contexts, as well as new methods in the theory of Feynman path integration. He did not shy away from further developing diverse methods in mathematics, such as the theory of non-standard analysis and p-adic numbers.
Statistics: the word evokes unpleasant memories of tables, unmanageable columns of figures and tiresome questionnaires. It also recalls the story of the escalating forms of the lie, according to which there are three kinds of lie: the simple lie, the common lie, and statistics. And yet one cannot escape the force of an argument when numbers, carrying the appearance of incontrovertible fact, are thrown into a discussion. It is generally acknowledged that exact knowledge of economic and social life rests largely on statistical surveys, which serve as descriptive stocktaking, for clarifying causal relationships, and as decision aids. The growing importance of statistics in the broadest sense means that basic knowledge of statistical methodology is necessary in order to recognize and present social as well as business relationships. Although business statistics is generally organized by functional area within the firm, this book emphasizes statistical methodology and shows how it can be applied in the firm. In particular, the examples and exercises, each with a detailed solution and interspersed control and comprehension questions, are intended to make the business context clear. Where it seems necessary, and where internal company data connect with external data, the relationship to official statistics is shown.
Engaging and accessible to students from a wide variety of mathematical backgrounds, Statistics Using Stata combines the teaching of statistical concepts with the acquisition of the popular Stata software package. It closely aligns Stata commands with numerous examples based on real data, enabling students to develop a deep understanding of statistics in a way that reflects statistical practice. Capitalizing on the fact that Stata has both a menu-driven 'point and click' and program syntax interface, the text guides students effectively from the comfortable 'point and click' environment to the beginnings of statistical programming. Its comprehensive coverage of essential topics gives instructors flexibility in curriculum planning and provides students with more advanced material to prepare them for future work. Online resources - including complete solutions to exercises, PowerPoint slides, and Stata syntax (do-files) for each chapter - allow students to review independently and adapt codes to solve new problems, reinforcing their programming skills.
Scientific essay in the field of mathematics - statistics, language: German, abstract: Nonparametric statistical methods have an advantage over parametric methods particularly when only nominally or ordinally scaled data are available or when the distributional assumptions of parametric procedures are violated. Nevertheless, nonparametric procedures are less often implemented in statistical software packages. The following paper is therefore a student's attempt to provide free statistical programming alternatives for smaller tasks and to develop them individually for practice purposes. The programming language used is Oberon, developed by Niklaus Wirth and released in 1988. Covered are nonparametric procedures for the one-sample case (binomial, sign, and Wilcoxon signed-rank tests) and the two-sample case (sign, Siegel-Tukey, Mood, and Wilcoxon rank-sum tests). A short theoretical introduction to each procedure (based mainly on Büning and Trenkler [1978] 1994) is followed by the respective source code, which can be used directly with the freely available Emacs text editor and an Oberon compiler.
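The one-sample sign test listed above is simple enough to sketch. Below is a Python version (the paper itself uses Oberon) of the two-sided sign test for the hypothesis that the median of a sample of differences is zero; the data values are made up.

```python
# Illustration of the two-sided one-sample sign test, in Python rather than
# the paper's Oberon. Under H0 (median difference = 0) the number of positive
# differences is Binomial(n, 0.5), after dropping zeros.
from math import comb

def sign_test_p(diffs):
    """Two-sided sign-test p-value for H0: median difference = 0."""
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(d > 0 for d in nonzero)
    tail = min(k, n - k)
    # double the smaller binomial tail probability
    p = sum(comb(n, i) for i in range(tail + 1)) * 2 / 2 ** n
    return min(p, 1.0)

# Hypothetical paired differences: 7 positive, 1 negative.
print(sign_test_p([1.2, 0.8, 0.5, 2.0, -0.3, 1.1, 0.9, 1.5]))  # 0.0703125
```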
The main subject of the book is stochastic analysis and its various applications to mathematical finance and the statistics of random processes. Its purpose is to present, in a short and largely self-contained form, the methods and results of the contemporary theory of stochastic analysis and to show how these methods and results work in mathematical finance and the statistics of random processes. The book can serve as a textbook for both senior undergraduate and graduate courses on the subject, and will be helpful to undergraduate and graduate students, instructors and specialists in stochastic analysis and its applications.
The adoption of multilayer analysis techniques is rapidly expanding across all areas of knowledge, from the social sciences (the first to face the complexity of such structures, decades ago) to computer science, from biology to engineering. Until now, however, no book has dealt exclusively with the analysis and visualization of multilayer networks. Multilayer Networks: Analysis and Visualization provides a guided introduction to one of the most complete computational frameworks, muxViz, with introductory information about the underlying theoretical aspects and a focus on the analytical side. Dozens of analytical scripts and examples show how to use the muxViz library in practice, by means of the graphical user interface or the R scripting language. In addition to researchers in the field of network science and practitioners interested in network visualization and analysis, this book will appeal to researchers without a strong technical or computer science background who want to learn how to use the muxViz software, such as researchers from the humanities, social sciences and biology: audiences targeted by the case studies included in the book. Other interdisciplinary audiences include computer science, physics, neuroscience, genetics, urban transport and engineering, digital humanities, and social and computational social science. Readers will learn how to use, in a very practical way (i.e., without focusing on theoretical aspects), the algorithms developed by the community and implemented in the free and open-source software muxViz. The data used in the book are available on a dedicated (open and free) site.
This book is a convenient and comprehensive guide to statistics. A resource for quality technicians and engineers in any industry, this second edition provides even more equations and examples for the reader, with a continued focus on algebra-based math. Those preparing for ASQ certification examinations, such as the Certified Quality Technician (CQT), Certified Six Sigma Green Belt (CSSGB), Certified Quality Engineer (CQE), Certified Six Sigma Black Belt (CSSBB), Certified Reliability Engineer (CRE), and Certified Supplier Quality Professional (CSQP), will find this book helpful as well. Inside you'll find:
- Complete calculations for determining confidence intervals, tolerances, sample size, outliers, process capability, and system reliability
- Newly added equations for hypothesis tests (such as the Kruskal-Wallis test and Levene's test for equality of variances), the Taguchi method, and Weibull and log-normal distributions
- Hundreds of completed examples to demonstrate practical use of each equation
- 20+ appendices, including distribution tables, critical values tables, control charts, sampling plans, and a beta table
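As a small taste of the kind of calculation such a handbook covers, here is a two-sided confidence interval for a process mean with known process standard deviation. The data and sigma below are made up for illustration; this is not an example from the book.

```python
# Illustrative z-based confidence interval for a process mean, assuming the
# process standard deviation is known. All numbers are hypothetical.
from statistics import NormalDist, mean

def z_confidence_interval(data, sigma, conf=0.95):
    """Two-sided CI for the mean with known sigma: m +/- z * sigma / sqrt(n)."""
    n = len(data)
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # e.g. 1.96 for 95%
    m = mean(data)
    half = z * sigma / n ** 0.5
    return m - half, m + half

sample = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]
lo, hi = z_confidence_interval(sample, sigma=0.25)
print(f"95% CI: ({lo:.3f}, {hi:.3f})")
```

When sigma must be estimated from the sample, the handbook's t-distribution interval applies instead; the structure of the calculation is the same with the z critical value replaced by a t value.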
in die Statistik, 5th, revised edition. Bibliographic information of the Deutsche Bibliothek: the Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet. Prof. Dr. rer. nat. Jürgen Lehn: born 1941 in Karlsruhe. Studied mathematics at the universities of Karlsruhe and Regensburg. 1968 Diplom in Karlsruhe, 1972 doctorate in Regensburg, 1978 Habilitation in Karlsruhe. 1978 professor at the Technische Hochschule Darmstadt. Prof. Dr. rer. nat. Helmut Wegmann: born 1938 in Worms. Studied mathematics and physics at the universities of Mainz and Tübingen. Research assistant at the universities of Mainz and Stuttgart. 1962 state examination in Mainz, 1964 doctorate in Mainz, 1969 Habilitation in Stuttgart. 1970 professor of mathematics at the Technische Hochschule Darmstadt. 1st edition 1985, 2nd edition 1992, 3rd edition 2000, 4th edition 2004, 5th, revised edition June 2006. All rights reserved (c) B.G. Teubner Verlag / GWV Fachverlage GmbH, Wiesbaden 2006. Editors: Ulrich Sandten / Kerstin Hoffmann. B. G. Teubner Verlag is a company of Springer Science+Business Media. www.teubner.de. The work, including all of its parts, is protected by copyright. Any use outside the narrow limits of copyright law without the publisher's consent is prohibited and punishable. This applies in particular to reproduction, translation, microfilming, and storage and processing in electronic systems.
Waste management is a topical issue worldwide. In recent years, citizens and associations have repeatedly urged political decision-makers to significantly improve waste management methods, particularly in view of the growing awareness of the social and environmental impacts and the economic consequences of poor waste management. There is growing attention on the role of legislation and regulation in the waste sector. Regulation can help companies and citizens achieve a faster, more effective, and more efficient transition from a linear economy, based on the take-make-dispose paradigm, to a circular economy, in which the potential of waste as a resource and secondary raw material is exploited. This book is set in the wake of the economic literature that tackles the transition from the linear to the circular economy. It focuses on the downstream stages of the waste management process (i.e. the waste treatment phase). In this regard, it proposes a journey through the history of European waste legislation to study the waste sector's transition from a selfish and no longer sustainable economic model based on rampant consumerism to a far-sighted sustainable model addressing the well-being of future generations. Studying the changes in European waste regulations leads us to ask the following questions: how has waste collection changed in recent years? What new regulatory challenges must be addressed to achieve the objectives of a circular economy? How successful has EU legislation been in fostering the transition from a linear to a circular economy? Finally, has European environmental legislation sparked a convergence process among European countries towards the circular economy, or has the definition of targets fuelled the already marked differences between EU countries?
Your government warns that 10% of your neighbors have a deadly contagious virus. The producer of a diagnostic test advertises that 90% of its tests are correct for any population. The test indicates that you have the virus. This book's author claims your test has a 50% chance of being false, given your test's result. Who do you believe? This book gives you insights necessary to interpret metrics that make a difference in life's decisions. This book gives methods and software that are essential to analyze change and error. Change describes a phenomenon across time points. Error compares diagnoses with the truth. Other texts give insufficient attention to these topics. This book's novel ideas dispel popular misconceptions and replace previous methods. The author uses carefully designed graphics and high school mathematics to communicate easily with college students and advanced scientists. Applications include but are not limited to Remote Sensing, Land Change Science, and Geographic Information Science.
"A wide range of tools to aid understanding of land cover and its change has been used but scientific progress has sometimes been limited through misuse and misunderstanding. Professor Pontius seeks to rectify this situation by providing a book to accompany the researcher's toolbox. Metrics That Make a Difference addresses basic issues of relevance to a broad community in a mathematically friendly way and should greatly enhance the ability to elicit correct information. I wish this book existed while I was a grad student." - Giles Foody, Professor of Geographical Information Science, The University of Nottingham
"Metrics That Make a Difference provides a comprehensive synthesis of over two decades of work during which Dr. Pontius researched, developed, and applied these metrics. The book meticulously and successfully guides the reader through the conceptual basis, computations, and proper interpretation of the many metrics derived for different types of variables. The book is not just a mathematical treatise but includes practical guidance to good data analysis and good science. Data scientists from many fields of endeavor will benefit substantially from Dr. Pontius' articulate review of traditionally used metrics and his presentation of the innovative and novel metrics he has developed. While reading this book, I had multiple 'aha' moments about metrics that I shouldn't be using and metrics that I should be using instead." - Stephen Stehman, Distinguished Teaching Professor, State University of New York
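The blurb's opening claim follows directly from Bayes' theorem. A minimal check, assuming the advertised 90% accuracy holds as both sensitivity and specificity:

```python
# Bayes' theorem check of the blurb's claim: with 10% prevalence and a test
# that is 90% correct for both infected and uninfected people, a positive
# result is false with probability 0.5.
prevalence = 0.10
accuracy = 0.90  # assumed to apply as both sensitivity and specificity

true_positives = accuracy * prevalence              # 0.09 of the population
false_positives = (1 - accuracy) * (1 - prevalence)  # also 0.09 of the population
p_false_given_positive = false_positives / (true_positives + false_positives)
print(p_false_given_positive)  # ≈ 0.5
```

True and false positives occur in equal proportion, so half of all positive results are wrong even though the test is "90% correct".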