This book presents novel statistical methods and reproducible software that help solve challenging problems in biomedicine. Specifically, it is a collection of 11 chapters contributed by some of the leading experts in mathematics and statistics, addressing new challenges in very disparate biomedical areas such as genomics, cancer, circadian biology, the microbiome, mental disorders, and more. The mathematical content is presented rigorously yet in a user-friendly way, so as to serve a general biomedical audience ranging from trainees and students to doctors, as well as scientific researchers, university departments, and PhD students.
The main goal of this comprehensive textbook is to cover the core techniques required to understand some of the basic and most popular model learning algorithms available to engineers, and then to illustrate their applicability directly on stationary time series. A multi-step approach to modeling time series is introduced, which differs from the mainstream in the literature. Singular spectrum analysis of univariate time series, trend and seasonality modeling with least squares and residual analysis, and modeling with ARMA models are discussed in detail. As applications of data-driven model learning become widespread in society, engineers need to understand its underlying principles and acquire the skills to develop and use the resulting data-driven model learning solutions. After reading this book, readers will have acquired the background, knowledge, and confidence to (i) read other model learning textbooks more easily, (ii) use linear algebra and statistics for data analysis and modeling, and (iii) explore other fields of application where model learning from data plays a central role. Thanks to numerous illustrations and simulations, this textbook will appeal to undergraduate and graduate students who need a first course in data-driven model learning. It will also be useful for practitioners, thanks to the introduction of easy-to-implement recipes dedicated to stationary time series model learning. Only a basic familiarity with advanced calculus, linear algebra, and statistics is assumed, making the material accessible to students at the advanced undergraduate level.
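As a rough illustration of the kind of multi-step recipe described above (least-squares trend removal followed by modeling of the residuals), here is a minimal Python sketch; the synthetic data, the linear trend, and the AR(1) residual model are illustrative assumptions, not examples taken from the book.

```python
import numpy as np

# Minimal sketch: fit a linear trend by least squares, then model the
# residuals with an AR(1) process estimated by ordinary least squares.
rng = np.random.default_rng(0)
t = np.arange(200)

# Simulated series: linear trend plus AR(1) noise (illustrative only)
noise = np.zeros(200)
for i in range(1, 200):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(scale=0.5)
y = 2.0 + 0.05 * t + noise

# Step 1: least-squares trend and residuals
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

# Step 2: AR(1) coefficient from regressing resid[t] on resid[t-1]
phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
print(f"trend slope: {slope:.3f}, residual AR(1) coefficient: {phi:.3f}")
```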
This book develops alternative methods to estimate the unknown parameters in stochastic volatility models, offering a new approach to testing model accuracy. While there is ample research documenting stochastic differential equation models driven by Brownian motion, estimated from discrete observations of the underlying diffusion process, these traditional methods often fail to estimate the unknown parameters of the unobserved volatility process. This text studies the second-order rate of weak convergence to normality to obtain refined inference results such as confidence intervals, as well as nontraditional continuous-time stochastic volatility models driven by fractional Lévy processes. By incorporating jumps and long memory into the volatility process, these new methods help better predict option prices and stock market crash risk. Some simulation algorithms for numerical experiments are provided.
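For orientation, the following minimal Python sketch simulates a standard Brownian-motion-driven stochastic volatility model by Euler-Maruyama discretization; it is only a baseline illustration, since the book's models are driven by fractional Lévy processes with jumps and long memory, and all parameter values below are assumptions.

```python
import numpy as np

# Euler-Maruyama sketch of a simple mean-reverting stochastic volatility
# model driven by two independent Brownian motions (illustrative only).
rng = np.random.default_rng(1)
T, n = 1.0, 1_000
dt = T / n
kappa, theta, xi = 2.0, 0.04, 0.3   # mean reversion, long-run variance, vol-of-vol
mu, s0, v0 = 0.05, 100.0, 0.04      # drift, initial price, initial variance

s, v = np.empty(n + 1), np.empty(n + 1)
s[0], v[0] = s0, v0
for i in range(n):
    dw1, dw2 = rng.normal(scale=np.sqrt(dt), size=2)
    # variance process (truncated at zero to keep the square root defined)
    v[i + 1] = max(v[i] + kappa * (theta - v[i]) * dt + xi * np.sqrt(v[i]) * dw2, 0.0)
    # log-Euler step for the price given the current variance
    s[i + 1] = s[i] * np.exp((mu - 0.5 * v[i]) * dt + np.sqrt(v[i]) * dw1)

print(f"terminal price: {s[-1]:.2f}, terminal variance: {v[-1]:.4f}")
```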
Special Topics in Structural Dynamics & Experimental Techniques, Volume 5: Proceedings of the 40th IMAC, A Conference and Exposition on Structural Dynamics, 2022, the fifth volume of nine from the Conference, brings together contributions to this important area of research and engineering. The collection presents early findings and case studies on fundamental and applied aspects of structural dynamics, including papers on: Analytical Methods; Emerging Technologies for Structural Dynamics; Engineering Extremes; Experimental Techniques; and Finite Element Techniques.
This volume presents extensive research devoted to a broad spectrum of mathematics, with emphasis on interdisciplinary aspects of Optimization and Probability. Chapters also emphasize applications to Data Science, a timely field with high impact in our modern society. The discussion presents modern, state-of-the-art research results and advances in areas including non-convex optimization, decentralized distributed convex optimization, surrogate-based reduced-dimension global optimization in process systems engineering, the projection of a point onto a convex set, optimal sampling for learning sparse approximations in high dimensions, the split feasibility problem, higher order embeddings, codifferentials and quasidifferentials of the expectation of nonsmooth random integrands, adjoint circuit chains associated with a random walk, analysis of the trade-off between sample size and precision in truncated ordinary least squares, spatial deep learning, efficient location-based tracking for IoT devices using compressive sensing and machine learning techniques, and nonsmooth mathematical programs with vanishing constraints in Banach spaces. The book is a valuable source for graduate students as well as researchers working on Optimization, Probability, and their various interconnections with a variety of other areas. Chapter 12 is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This is the first technical book that considers tests as public tools and examines how to engineer and process test data, extract the structure within the data so that it can be visualized, and thereby make test results useful for students, teachers, and society. The author does not differentiate test data analysis from data engineering and information visualization. This monograph introduces the following methods of engineering or processing test data, including the latest machine learning techniques: classical test theory (CTT), item response theory (IRT), latent class analysis (LCA), latent rank analysis (LRA), biclustering (co-clustering), and the Bayesian network model (BNM). CTT and IRT are methods for analyzing test data and evaluating students' abilities on a continuous scale. LCA and LRA assess examinees by classifying them into nominal and ordinal clusters, respectively, where the adequate number of clusters is estimated from the data. Biclustering classifies examinees into groups (latent clusters) while classifying items into fields (factors). In particular, the infinite relational model discussed in this book is a biclustering method that remains feasible when neither the number of groups nor the number of fields is known beforehand. Additionally, the local dependence LRA, local dependence biclustering, and the bicluster network model are methods that search for and visualize the inter-item (or inter-field) network structure using the mechanism of the BNM. As this book offers a new perspective on test data analysis methods, it is certain to widen readers' perspective on test data analysis.
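As a small illustration of the IRT approach mentioned above, the sketch below evaluates the two-parameter logistic item response function, which maps an examinee's ability on a continuous scale to a probability of answering an item correctly; the item parameters and abilities are invented for illustration and are not taken from the book.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic item response function:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

abilities = np.array([-2.0, 0.0, 2.0])       # examinees on a continuous ability scale
# discrimination a = 1.2, difficulty b = 0.5 (illustrative item parameters)
print(irt_2pl(abilities, a=1.2, b=0.5))       # success probability per examinee
```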
"This largely self-contained text introduces discrete probability and its applications, at a level suitable for beginning graduate students in mathematics, computer science, statistics and engineering. Each chapter includes exercises and pointers to the wider literature, covering a wide spectrum of essential techniques and key examples"--
Bayesian analysis is one of the important tools for statistical modelling and inference. Bayesian frameworks and methods have been successfully applied to solve practical problems in reliability and survival analysis, which have a wide range of real-world applications in the medical and biological sciences, the social and economic sciences, and engineering. In the past few decades, significant developments in Bayesian inference have been made by many researchers, and advancements in computational technology and computer performance have laid the groundwork for new opportunities in Bayesian computation for practitioners. Because these theoretical and technological developments introduce new questions and challenges, and increase the complexity of the Bayesian framework, this book brings together experts engaged in groundbreaking research on Bayesian inference and computation to discuss important issues, with emphasis on applications to reliability and survival analysis. Topics covered are timely and have the potential to influence the interacting worlds of biostatistics, engineering, medical sciences, statistics, and more. The included chapters present current methods, theories, and applications in the diverse area of biostatistical analysis. The volume as a whole serves as a reference for driving quality global health research.
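As a minimal illustration of Bayesian computation in a reliability setting, the sketch below performs a conjugate update for exponential lifetimes with a Gamma prior on the failure rate; the prior hyperparameters and the data are illustrative assumptions, not examples from the book.

```python
import numpy as np

# Conjugate Bayesian update: exponential lifetimes with a Gamma(alpha, beta)
# prior on the failure rate lambda. The posterior is
# Gamma(alpha + n, beta + sum of observed lifetimes).
lifetimes = np.array([12.3, 7.8, 20.1, 15.4, 9.9])   # observed failure times (hours)
alpha_prior, beta_prior = 1.0, 10.0                  # Gamma prior (shape, rate)

alpha_post = alpha_prior + len(lifetimes)
beta_post = beta_prior + lifetimes.sum()
print(f"posterior mean failure rate: {alpha_post / beta_post:.4f} per hour")
```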
This book extends the theory and applications of random evolutions to semi-Markov random media in discrete time, essentially focusing on semi-Markov chains as switching or driving processes. After giving the definitions of discrete-time semi-Markov chains and random evolutions, it presents the asymptotic theory in a functional setting, including weak convergence results in the series scheme, and their extensions in some additional directions, including reduced random media, controlled processes, and optimal stopping. Finally, applications of discrete-time semi-Markov random evolutions in epidemiology and financial mathematics are discussed. This book will be of interest to researchers and graduate students in applied mathematics and statistics, and other disciplines, including engineering, epidemiology, finance and economics, who are concerned with stochastic models of systems.
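For intuition, here is a minimal Python sketch of a discrete-time semi-Markov chain, in which an embedded Markov chain selects successive states and state-dependent sojourn times govern how long each state is occupied; the transition matrix and the geometric sojourn distributions are illustrative assumptions, not a model from the book.

```python
import numpy as np

# Simulate a discrete-time semi-Markov chain: the embedded chain picks the
# next state, and the time spent in the current state is drawn from a
# state-dependent geometric distribution.
rng = np.random.default_rng(2)
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])        # embedded chain (no self-transitions)
p_sojourn = np.array([0.5, 0.3, 0.2])  # geometric sojourn parameter per state

state, path = 0, []
for _ in range(10):
    stay = rng.geometric(p_sojourn[state])   # sojourn time in the current state
    path.extend([state] * stay)
    state = rng.choice(3, p=P[state])        # jump via the embedded chain
print(path)
```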
This book focuses on methods and models in classification and data analysis and presents real-world applications at the interface with data science. Numerous topics are covered, ranging from statistical inference and modelling to clustering and factorial methods, and from directional data analysis to time series analysis and small area estimation. The applications deal with new developments in a variety of fields, including medicine, finance, engineering, marketing, and cyber risk. The contents comprise selected and peer-reviewed contributions presented at the 13th Scientific Meeting of the Classification and Data Analysis Group of the Italian Statistical Society, CLADAG 2021, held (online) in Florence, Italy, on September 9–11, 2021. CLADAG promotes advanced methodological research in multivariate statistics with a special focus on data analysis and classification, and supports the exchange and dissemination of ideas, methodological concepts, numerical methods, algorithms, and computational and applied results at the interface between classification and data science.
"The first systematic treatment of model risk, this book provides the tools needed to quantify and assess the impact of model uncertainty. It will be essential for all those working in portfolio theory and the theory of financial and engineering risk, for practitioners in these areas, and for graduate courses on risk bounds and model uncertainty"--
This book connects predictive analytics and simulation analytics, with the end goal of providing Rich Information to stakeholders in complex systems to direct data-driven decisions. Readers will explore methods for extracting information from data, work with simple and complex systems, and meld multiple forms of analytics for a more nuanced understanding of data science. The methods can be readily applied to business problems such as demand measurement and forecasting, predictive modeling, pricing analytics including elasticity estimation, customer satisfaction assessment, market research, new product development, and more. The book includes Python examples in Jupyter notebooks, available at the book's affiliated GitHub repository. This volume is intended for current and aspiring business data analysts, data scientists, and market research professionals, in both the private and public sectors.
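As a small taste of how predictive and simulation analytics can be combined for a pricing question, the sketch below estimates a price elasticity by log-log regression and then simulates demand scenarios at a candidate price; the data, the candidate price, and the noise model are illustrative assumptions, not material from the book's notebooks.

```python
import numpy as np

# Predictive step: elasticity as the slope of log(demand) on log(price).
# Simulation step: Monte Carlo demand scenarios at a new candidate price.
rng = np.random.default_rng(3)
price = rng.uniform(5, 15, size=500)
demand = 200 * price ** (-1.3) * rng.lognormal(sigma=0.1, size=500)

elasticity, log_a = np.polyfit(np.log(price), np.log(demand), 1)

# Monte Carlo demand at a candidate price of 12, reusing the fitted model
sims = np.exp(log_a + elasticity * np.log(12.0) + rng.normal(0, 0.1, size=10_000))
print(f"elasticity: {elasticity:.2f}, expected demand at price 12: {sims.mean():.1f}")
```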
This book contains contributions from the participants of the international conference "Foundations of Modern Statistics", which took place at the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Berlin, during November 6–8, 2019, and at the Higher School of Economics (HSE University), Moscow, on November 30, 2019. The events were organized in honor of Professor Vladimir Spokoiny on the occasion of his 60th birthday. Vladimir Spokoiny has pioneered the field of adaptive statistical inference and contributed to a variety of its applications. His more than 30 years of research in mathematical statistics have had a great influence on the development of the mathematical theory of statistics to its present state and have inspired many young researchers to start their work in this exciting field of mathematics. The papers contained in this book reflect the broad field of interests of Vladimir Spokoiny: optimal rates and non-asymptotic bounds in nonparametrics, Bayes approaches from a frequentist point of view, optimization, signal processing, and statistical theory motivated by models in applied fields. The contributions, prepared by renowned scientists, contain original scientific results, which makes the publication valuable for researchers working in these fields. The book concludes with a conversation between Vladimir Spokoiny, Markus Reiß, and Enno Mammen. This interview gives some background on the life of Vladimir Spokoiny and his many scientific interests and motivations.
This book presents the latest results related to one- and two-way models for time series data. Analysis of variance (ANOVA) is a classical statistical method for IID data proposed by R.A. Fisher to investigate factors and interactions of phenomena. In contrast, the methods developed in this book apply to time series data. Testing theory for the homogeneity of groups is presented under a wide variety of situations, including uncorrelated and correlated groups, fixed and random effects, multi- and high-dimensional settings, and parametric and nonparametric spectral densities. These methods have applications in several scientific fields. A test for the existence of interactions is also proposed. The book deals with asymptotics when the number of groups is fixed and the sample size diverges. This framework distinguishes the approach of the book from panel data and longitudinal analyses, which mostly deal with cases in which the number of groups is large. The usefulness of the theory in this book is illustrated by numerical simulation and real data analysis. This book is suitable for theoretical statisticians and economists as well as psychologists and data analysts.
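For readers who want the classical baseline that the book generalizes, the sketch below runs a standard one-way ANOVA F-test for homogeneity of group means on simulated IID data; the time-series extensions developed in the book go well beyond this, and the groups below are invented for illustration.

```python
import numpy as np
from scipy.stats import f_oneway

# Classical one-way ANOVA for IID data: test whether three group means are equal.
rng = np.random.default_rng(4)
g1 = rng.normal(0.0, 1.0, size=30)
g2 = rng.normal(0.5, 1.0, size=30)
g3 = rng.normal(0.0, 1.0, size=30)

stat, pval = f_oneway(g1, g2, g3)   # F statistic and p-value for homogeneity
print(f"F = {stat:.2f}, p-value = {pval:.3f}")
```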
This book introduces the concept of "bespoke learning", a new mechanistic approach that makes it possible to generate values of an output variable at each designated value of an associated input variable. Here the output variable generally provides information about the system's behaviour/structure, and the aim is to learn the input-output relationship, even though little to no information on the output is available, as in multiple real-world problems. Once the output values have been bespoke-learnt, the originally-absent training set of input-output pairs becomes available, so that (supervised) learning of the sought inter-variable relation is then possible. Three ways of undertaking such bespoke learning are offered: by tapping into system dynamics in generic dynamical systems, to learn the function that causes the system's evolution; by comparing realisations of a random graph variable, given multivariate time series datasets of disparate temporal coverage; and by designing maximally information-availing likelihoods in static systems. These methodologies are applied to four different real-world problems: forecasting daily COVID-19 infection numbers; learning the gravitational mass density in a real galaxy; learning a sub-surface material density function; and predicting the risk of onset of a disease following bone marrow transplants. Primarily aimed at graduate and postgraduate students studying a field which includes facets of statistical learning, the book will also benefit experts working in a wide range of applications. The prerequisites are undergraduate level probability and stochastic processes, and preliminary ideas on Bayesian statistics.
This book constitutes the refereed proceedings of the 8th International Conference on Business Intelligence, CBI 2023, which was held in Istanbul, Turkey, during July 19–21, 2023. The 15 full papers included in this book were carefully reviewed and selected from 50 submissions. They are organized in the following topical sections: artificial intelligence and business intelligence; and optimization and decision support.
This book provides a friendly introduction to the paradigm and offers a broad panorama of killer applications of the Infinity Computer in optimization: radically new numerical algorithms, theoretical insights, efficient software implementations, and interesting practical case studies. It is the first book to present, to readers interested in optimization, the advantages of a recently introduced supercomputing paradigm that makes it possible to work numerically with different infinities and infinitesimals on the Infinity Computer, patented in several countries. One of the editors of the book is the creator of the Infinity Computer, and another editor was the first to use it in optimization. Their results have been recognized with numerous scientific prizes. This engaging book opens new horizons for researchers, engineers, professors, and students with interests in supercomputing paradigms, optimization, decision making, game theory, and the foundations of mathematics and computer science.

"Mathematicians have never been comfortable handling infinities... But an entirely new type of mathematics looks set to by-pass the problem... Today, Yaroslav Sergeyev, a mathematician at the University of Calabria in Italy solves this problem..." MIT Technology Review

"These ideas and future hardware prototypes may be productive in all fields of science where infinite and infinitesimal numbers (derivatives, integrals, series, fractals) are used." A. Adamatzky, Editor-in-Chief of the International Journal of Unconventional Computing

"I am sure that the new approach ... will have a very deep impact both on Mathematics and Computer Science." D. Trigiante, Computational Management Science

"Within the grossone framework, it becomes feasible to deal computationally with infinite quantities, in a way that is both new (in the sense that previously intractable problems become amenable to computation) and natural." R. Gangle, G. Caterina, F. Tohme, Soft Computing

"The computational features offered by the Infinity Computer allow us to dynamically change the accuracy of representation and floating-point operations during the flow of a computation. When suitably implemented, this possibility turns out to be particularly advantageous when solving ill-conditioned problems. In fact, compared with a standard multi-precision arithmetic, here the accuracy is improved only when needed, thus not affecting that much the overall computational effort." P. Amodio, L. Brugnano, F. Iavernaro & F. Mazzia, Soft Computing
This book offers an introduction to the field of stochastic analysis of Hermite processes. These self-similar stochastic processes with stationary increments live in a Wiener chaos and include the fractional Brownian motion, the only Gaussian process in this class. Using the Wiener chaos theory and multiple stochastic integrals, the book covers the main properties of Hermite processes and their multiparameter counterparts, the Hermite sheets. It delves into the probability distribution of these stochastic processes and their sample paths, while also presenting the basics of stochastic integration theory with respect to Hermite processes and sheets. The book goes beyond theory and provides a thorough analysis of physical models driven by Hermite noise, including the Hermite Ornstein-Uhlenbeck process and the solution to the stochastic heat equation driven by such a random perturbation. Moreover, it explores up-to-date topics central to current research in statistical inference for Hermite-driven models.
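As a concrete point of contact, the sketch below simulates fractional Brownian motion, the Gaussian member of the Hermite class, on a time grid via the Cholesky factor of its covariance function r(s, t) = 0.5 (s^{2H} + t^{2H} - |t - s|^{2H}); the grid size and Hurst parameter are illustrative assumptions.

```python
import numpy as np

# Exact simulation of fractional Brownian motion on a grid using the
# Cholesky factor of its covariance matrix (illustrative parameters).
rng = np.random.default_rng(5)
H, n = 0.7, 200                      # Hurst parameter and grid size
t = np.linspace(1.0 / n, 1.0, n)

cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
path = np.linalg.cholesky(cov) @ rng.normal(size=n)   # one fBm sample path
print(path[:5])
```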
This book provides an analytical and computational approach to solving and simulating the Mahalanobis model and the papers surrounding it. It offers, perhaps for the first time, a holistic examination of an important growth model that emerged out of India in the 1950s. It contains detailed derivations of the Mahalanobis model and the several critiques and extensions surrounding it, with an organized synthesis of the main results. Computationally, the book simulates the model and its many variants, thus making it accessible to a wider audience. Advanced undergraduates and beginning graduate students in the fields of Economics, Mathematics, and Statistics will gain immensely from understanding both the mathematical and the computational aspects of the Mahalanobis model. In the absence of a single 'go-to' source on all aspects of the model, analytical and computational, this book is a definitive volume on the Mahalanobis model, containing the derivations of all the papers surrounding the model, its dissents and critiques, and extensions such as the wage goods model suggested by Vakil and Brahmananda.
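To convey the flavor of such simulations, here is a minimal Python sketch of the two-sector recursions commonly associated with the Mahalanobis model, in which a fixed share of investment is allocated to the capital-goods sector and the rest to the consumer-goods sector; the allocation shares, output-capital ratios, and initial values are illustrative assumptions, not the book's calibration.

```python
# Two-sector Mahalanobis-style recursions (illustrative parameter values):
# investment grows with the capital-goods sector, consumption with the
# consumer-goods sector, both fed by current investment.
lam_k, lam_c = 0.3, 0.7          # investment allocation shares (sum to 1)
beta_k, beta_c = 0.20, 0.35      # sectoral output-capital ratios
I, C = 10.0, 90.0                # initial investment and consumption

for year in range(1, 6):
    C = C + lam_c * beta_c * I   # consumption gain from consumer-goods investment
    I = I * (1 + lam_k * beta_k) # investment gain from capital-goods investment
    print(f"year {year}: income = {C + I:.1f}")
```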
An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance, marketing, and astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, deep learning, survival analysis, multiple testing, and more. Color graphics and real-world examples are used to illustrate the methods presented. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. Four of the authors co-wrote An Introduction to Statistical Learning, With Applications in R (ISLR), which has become a mainstay of undergraduate and graduate classrooms worldwide, as well as an important reference book for data scientists. One of the keys to its success was that each chapter contains a tutorial on implementing the analyses and methods presented in the R scientific computing environment. However, in recent years Python has become a popular language for data science, and there has been increasing demand for a Python-based alternative to ISLR. Hence, this book (ISLP) covers the same materials as ISLR but with labs implemented in Python. These labs will be useful both for Python novices and for experienced users.
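In the spirit of the Python labs described above, the short sketch below fits a linear regression, one of the book's first topics, to simulated data with scikit-learn; it is not the book's own lab code, and the data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit an ordinary least squares regression with two predictors on simulated data.
rng = np.random.default_rng(6)
X = rng.normal(size=(100, 2))                                  # two predictors
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)

model = LinearRegression().fit(X, y)
print("intercept:", round(model.intercept_, 2), "coefficients:", model.coef_.round(2))
```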
This book considers a broad range of decision making methods applied in the contexts of Risk, Reliability and Maintenance (RRM). Intended primarily as an update of the 2015 book Multicriteria and Multiobjective Models for Risk, Reliability and Maintenance Decision Analysis, this edited work integrates applied probability and decision making. Within applied probability, it primarily includes decision analysis and reliability theory, amongst other topics closely related to risk analysis and maintenance. In decision making, it includes multicriteria decision making/aiding (MCDM/A) methods and optimization models. Within MCDM, in addition to decision analysis, some topics related to mathematical programming are considered, such as multiobjective linear programming, multiobjective nonlinear programming, game theory and negotiations, and multiobjective optimization. Methods related to these topics have been applied to the context of RRM. In MCDA, several other methods are considered, such as outranking methods, rough sets, and constructive approaches. The book offers an innovative treatment of decision making in RRM, improving the integration of fundamental concepts from both areas. This is accomplished by presenting current research developments in decision making on RRM. Some pitfalls of decision models in practical RRM applications are discussed, and new approaches for overcoming those drawbacks are presented.
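As a generic illustration of multicriteria scoring of the kind used in MCDM/A, the sketch below ranks three maintenance alternatives by a simple weighted sum of criterion scores; the criteria, weights, and scores are assumptions and do not represent any specific method from the book.

```python
import numpy as np

# Weighted-sum multicriteria scoring: each alternative is scored on three
# criteria (scaled 0-1 so that higher is better), then ranked by the
# weighted sum of its scores.
scores = np.array([[0.70, 0.20, 0.90],    # alternative A
                   [0.50, 0.60, 0.80],    # alternative B
                   [0.90, 0.40, 0.60]])   # alternative C
weights = np.array([0.3, 0.3, 0.4])       # decision-maker's criterion weights

ranking = scores @ weights
print("weighted scores:", ranking.round(2), "best:", "ABC"[int(ranking.argmax())])
```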
This book brings together ideas and results from the mathematical, information, and data sciences, in connection with the main research interests of Professor Pardo, which can be summarized as Information Theory with Applications to Statistical Inference. The book is a tribute to Professor Leandro Pardo, who has chaired the Department of Statistics and OR of the Complutense University in Madrid and has also been President of the Spanish Society of Statistics and Operations Research. The contributions have been structured into three parts, which often overlap to a greater or lesser extent: Trends in Mathematical Sciences (Part I), Trends in Information Sciences (Part II), and Trends in Data Sciences (Part III). The contributions gathered in this book offer either new developments from a theoretical and/or computational and/or applied point of view, or reviews of recent literature on outstanding developments. They are illustrated with examples in climatology, chemistry, economics, engineering, geology, health sciences, physics, pandemics, and socioeconomic indicators. Consequently, the intended audience of this book is mainly statisticians, mathematicians, and computer scientists, but users of these disciplines as well as experts in the involved applications may certainly find this book a very interesting read.