The rapid development of technologies across several industries, together with the advent of ever-growing and ubiquitous computational resources, creates ample opportunities to develop innovative intelligent technologies that address the uncertainty, imprecision, and vagueness inherent in a wide range of real-life problems. Hybridizing modern computational intelligence with traditional computing methods has led researchers and academicians to focus on developing innovative AI techniques using data science. The International Conference on Data Science and Artificial Intelligence (ICDSAI) 2022, organized on April 23-24, 2022 by the Indian Institute of Technology Patna at NITIE Mumbai (India) in collaboration with the International Association of Academicians (IAASSE) USA, collected scientific and technical contributions on models, tools, technologies, and applications in the fields of modern Artificial Intelligence and Data Science, covering the entire range of concepts from theory to practice, including case studies, works-in-progress, and conceptual explorations.
This book contains plenary lectures given at the International Conference on Mathematical and Computational Modeling, Approximation and Simulation, dealing with three very different problems: reduction of Runge and Gibbs phenomena, difficulties arising when studying models that depend on the highly nonlinear behaviour of a system of PDEs, and data fitting with truncated hierarchical B-splines for the adaptive reconstruction of industrial models. The book includes nine contributions, mostly related to quasi-interpolation. This is a topic that continues to register a high level of interest, both for those working in the field of approximation theory and for those interested in its use in a practical context. Two chapters address the construction of quasi-interpolants, and three others focus on the use of quasi-interpolation in solving integral equations. The remaining four concern a problem related to the heat diffusion equation, new results on the notion of convexity in probabilistic metric spaces (which are applied to the study of the existence and uniqueness of the solution of a Volterra equation), the use of smoothing splines to address an economic problem and, finally, the analysis of poverty measures, which is a topic of increased interest to society. The book is addressed to researchers interested in Applied Mathematics, with particular reference to the aforementioned topics.
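As a pointer for readers unfamiliar with the first of these problems, the Runge phenomenon is classically illustrated by the following example (our illustration, not taken from the lectures):

```latex
% Runge's classical example: polynomial interpolants p_n of
%   f(x) = 1 / (1 + 25 x^2)  on  [-1, 1]
% at n+1 equispaced nodes diverge near the endpoints as n grows:
\[
  f(x) = \frac{1}{1 + 25x^2}, \qquad
  \max_{x \in [-1,1]} \bigl| f(x) - p_n(x) \bigr|
    \xrightarrow[n \to \infty]{} \infty ,
\]
% whereas interpolation at Chebyshev nodes, or spline-based
% quasi-interpolation, avoids this divergence.
```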
In recent years, extensive research has been conducted by eminent mathematicians and engineers whose results and proposed problems are presented in this new volume. It is addressed to graduate students, research mathematicians, physicists, and engineers. Individual contributions are devoted to topics of approximation theory, functional equations and inequalities, fixed point theory, numerical analysis, theory of wavelets, convex analysis, topology, operator theory, differential operators, fractional integral operators, integro-differential equations, ternary algebras, super and hyper relators, variational analysis, discrete mathematics, cryptography, and a variety of applications in interdisciplinary topics. Several of these domains have a strong connection with both theories and problems of linear and nonlinear optimization. The combination of results from various domains provides the reader with a solid, state-of-the-art interdisciplinary reference to theory and problems. Some of the works provide guidelines for further research and proposals for new directions and open problems with relevant discussions.
This book provides an overview of the emerging field of in situ visualization, i.e. visualizing simulation data as it is generated. In situ visualization is a processing paradigm that responds to recent trends in the development of high-performance computers. It holds great promise in its ability to access increased temporal resolution and to leverage extensive computational power. However, the paradigm is also widely viewed as limiting when it comes to exploration-oriented use cases. Furthermore, it will require visualization systems to become increasingly complex and constrained in usage. As research efforts on in situ visualization grow, the state of the art and best practices are rapidly maturing. Specifically, this book contains chapters that reflect state-of-the-art research results and best practices in the area of in situ visualization. Our target audience is researchers and practitioners from the areas of mathematics, computational science, high-performance computing, and computer science who work on or with in situ techniques, or who wish to do so in the future.
The book provides a pedagogic and comprehensive introduction to homogenization theory, with a special focus on problems set in non-periodic media. The presentation encompasses both deterministic and probabilistic settings. It also mixes the most abstract aspects with some more practical aspects regarding the numerical approaches necessary to simulate such multiscale problems. Based on lecture courses of the authors, the book is suitable for graduate students of mathematics and engineering.
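For readers new to the topic, the prototypical periodic setting that this theory generalizes can be stated in one line (our illustration, not the book's notation):

```latex
% Periodic homogenization: for a small scale parameter eps, the rapidly
% oscillating coefficient A(x/eps) is replaced in the limit by a
% constant effective tensor A_*:
\[
  -\nabla \cdot \Bigl( A\!\Bigl(\tfrac{x}{\varepsilon}\Bigr)
     \nabla u_\varepsilon \Bigr) = f \ \text{in } \Omega,
  \qquad
  u_\varepsilon \rightharpoonup u_* \ \text{with} \
  -\nabla \cdot \bigl( A_* \nabla u_* \bigr) = f .
\]
% The non-periodic (and random) settings treated in the book extend
% this picture beyond periodic coefficients A.
```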
Large sparse linear systems of equations are ubiquitous in science, engineering and beyond. This open access monograph focuses on factorization algorithms for solving such systems. It presents classical techniques for complete factorizations that are used in sparse direct methods and discusses the computation of approximate direct and inverse factorizations that are key to constructing general-purpose algebraic preconditioners for iterative solvers. A unified framework is used that emphasizes the underlying sparsity structures and highlights the importance of understanding sparse direct methods when developing algebraic preconditioners. Theoretical results are complemented by sparse matrix algorithm outlines. This monograph is aimed at students of applied mathematics and scientific computing, as well as computational scientists and software developers who are interested in understanding the theory and algorithms needed to tackle sparse systems. It is assumed that the reader has completed a basic course in linear algebra and numerical mathematics.
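As a small illustration of the preconditioning side of this story (a sketch of ours using SciPy, not code from the monograph), an incomplete LU factorization can serve as an algebraic preconditioner for GMRES:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix assembled from the 1D second-difference stencil
n = 50
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

# approximate (incomplete) LU factorization used as a preconditioner
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"info = {info}")
```

Tightening `drop_tol` toward zero recovers the complete factorization of a sparse direct method, which reflects the continuum between direct solvers and algebraic preconditioners that the monograph emphasizes.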
This textbook offers a consistently algorithm-oriented introduction to the model reduction of linear time-invariant systems, with a focus on system-theoretic methods. In particular, modal and balanced truncation are treated in depth. In addition, moment-matching methods based on Krylov subspace techniques and rational interpolation are discussed. All the necessary foundations from both system theory and numerical linear algebra are presented. The model reduction methods presented in this book, as well as some of the required concepts from various mathematical areas, are illustrated by a series of numerical examples, using the mathematical software MATLAB® and several freely available software packages, so that all examples can be reproduced.
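To make the balanced-truncation idea concrete, here is a minimal square-root sketch in Python/SciPy (our illustration on an invented stable test system; the book itself works with MATLAB®):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# hypothetical stable LTI test system  x' = A x + B u,  y = C x
n, r = 20, 4                                  # full and reduced order
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * 10.0
B = np.zeros((n, 1)); B[0, 0] = 1.0
C = np.zeros((1, n)); C[0, -1] = 1.0

# Gramians from the two Lyapunov equations
P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability
Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability

# square-root method: SVD of the product of Cholesky factors
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, s, Vt = svd(Lq.T @ Lp)                     # s = Hankel singular values
S = np.diag(s[:r] ** -0.5)
W = Lq @ U[:, :r] @ S                         # left projection matrix
V = Lp @ Vt[:r, :].T @ S                      # right projection matrix

Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V      # reduced-order model
print("Hankel singular values:", s[:6])
```

Here W.T @ V = I, and the decay of the Hankel singular values indicates how much of the input-output behavior the reduced model of order r retains.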
The book is designed for use in a graduate program in Numerical Analysis that includes a basic introductory course and subsequent more specialized courses. The latter are envisaged to cover numerical linear algebra, the numerical solution of ordinary and partial differential equations, and perhaps additional topics related to complex analysis, multidimensional analysis (in particular optimization), and functional analysis and related functional equations. Viewed in this context, the first four chapters of the book could serve as a text for the basic introductory course on Python programming, and the remaining chapters could provide a text for an advanced course on the numerical solution of ordinary differential equations. The book therefore breaks with tradition in that it no longer attempts to deal with all major topics of numerical mathematics. Those dealing with linear algebra and partial differential equations have developed into major fields of study that have attained a degree of autonomy and identity justifying their treatment in separate books and separate courses at the graduate level. The term "Numerical Analysis" as used in this book is therefore to be taken in the narrow sense of the numerical analog of Mathematical Analysis, comprising such topics as machine arithmetic, the approximation of functions, approximate differentiation and integration, and the approximate solution of nonlinear equations and ordinary differential equations. The book aims to provide a good understanding of numerical analysis for engineering, its applications, and optimization. It begins with Python fundamentals for scientific computing and then presents their applications in different configurations in lucid detail. For more details, please visit https://centralwestpublishing.com
This book provides readers with a deep understanding of the use of objective algorithms for the integration of constitutive relations (CRs) for Hooke-like hypoelasticity based on the use of corotational stress rates. The purpose of objective algorithms is to perform the step-by-step integration of CRs using fairly large time steps that provide high accuracy of this integration in combination with the exact reproduction of superimposed rigid body motions. Since Hooke-like hypoelasticity is included as a component in CRs for elastic-inelastic materials (e.g., in CRs for elastic-plastic materials), the scope of these algorithms is not limited to hypoelastic materials, but extends to many other materials subjected to large deformations. The authors perform a comparative analysis of the performance of most currently available objective algorithms, provide recommendations for improving the existing formulations of these algorithms, and present new formulations of the so-called absolutely objective algorithms. The book will be useful for beginner researchers developing economical methods for integrating elastic-inelastic CRs, as well as for experienced researchers, by providing a compact overview of existing objective algorithms and new formulations of these algorithms. It will also be useful for developers of computer codes implementing objective algorithms in FE systems. In addition, since commercial FE codes are often black boxes, this book shows how to test the accuracy of their algorithms for integrating elastic-inelastic CRs by modeling large rotations superimposed on the uniform deformation of a sample.
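In generic notation (ours, not necessarily the authors'), a Hooke-like hypoelastic constitutive relation built on a corotational stress rate has the form:

```latex
% Hooke-like hypoelastic CR with a corotational stress rate:
%   sigma : Cauchy stress,  d : stretching (rate-of-deformation) tensor,
%   Omega : spin tensor defining the corotational rate (e.g., the
%           vorticity tensor W for the Jaumann rate).
\[
  \overset{\circ}{\boldsymbol{\sigma}}
    \;=\; \dot{\boldsymbol{\sigma}}
          - \boldsymbol{\Omega}\,\boldsymbol{\sigma}
          + \boldsymbol{\sigma}\,\boldsymbol{\Omega}
    \;=\; \mathbf{C} : \mathbf{d} .
\]
% An objective integration algorithm must reproduce rigid body motions
% exactly: under a superimposed rotation Q(t), the updated stress must
% transform as  sigma -> Q sigma Q^T.
```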
The book is very useful for researchers, graduate students and educators associated with or interested in recent advances in different aspects of modelling, computational methods and the techniques necessary for solving real-world problems. It includes carefully peer-reviewed research articles presented at the "5th International Conference on Mathematical Modelling, Applied Analysis and Computation", held at JECRC University, Jaipur, during 4-6 August 2022, concentrating on current advances in mathematical modelling and computation via tools and techniques from mathematics and allied areas. It focuses on papers dealing with the necessary theory and methods in a balanced manner and contributes towards solving problems arising in engineering, control systems, networking systems, environmental science, health science, physical and biological systems, social issues of current interest, etc.
This book is intended for a first-semester course in calculus, which begins by posing a question: how do we model an epidemic mathematically? The authors use this question as a natural motivation for the study of calculus and as a context through which central calculus notions can be understood intuitively. The book's approach to calculus is contextual and based on the principle that calculus is motivated and elucidated by its relevance to the modeling of various natural phenomena. The authors also approach calculus from a computational perspective, explaining that many natural phenomena require analysis through computer methods. As such, the book also explores some basic programming notions and skills.
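For flavor, the classical SIR equations underlying such epidemic models can be stepped forward with a few lines of Python (our sketch with invented parameters, not the book's code):

```python
import numpy as np

# hypothetical parameters for a simple SIR epidemic model
beta, gamma = 0.3, 0.1          # infection and recovery rates (per day)
S, I, R = 0.99, 0.01, 0.0       # fractions of the population
dt, days = 0.1, 160

for _ in range(int(days / dt)):  # explicit Euler time stepping
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

print(f"final susceptible fraction: {S:.3f}")
```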
This book provides an introduction to the fundamental theory, practical implementation, and core and emerging applications of the material point method (MPM) and its variants. The MPM combines the advantages of both the finite element method (FEM) and meshless/meshfree methods (MMs) by representing the material by a set of particles overlaid on a background mesh that serves as a computational scratchpad. The book shows how MPM allows a robust, accurate, and efficient simulation of a wide variety of material behaviors without requiring overly complex implementations. MPM and its variants have been shown to be successful in simulating a large number of high-deformation and complicated engineering problems such as densification of foam, sea ice dynamics, landslides, and energetic device explosions, to name a few, and have recently found applications in the movie industry. It is hoped that this comprehensive exposition on MPM variants and their applications will not only provide an opportunity to re-examine previous contributions, but also to re-organize them in a coherent fashion and in anticipation of new advances. Sample algorithms for the solutions of benchmark problems are provided online so that researchers and graduate students can modify these algorithms and develop their own solution algorithms for specific problems. The goal of this book is to provide students and researchers with a theoretical and practical knowledge of the material point method to analyze engineering problems, and it may help initiate and promote further in-depth studies on the subjects discussed.
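In its basic form (our summary in generic shape-function notation, not the book's), one MPM time step transfers particle data to the background grid, solves there, and maps the result back:

```latex
% Particle-to-grid transfer with grid shape functions N_i evaluated at
% particle positions x_p (m = mass, v = velocity):
\[
  m_i = \sum_p N_i(\mathbf{x}_p)\, m_p, \qquad
  (m\mathbf{v})_i = \sum_p N_i(\mathbf{x}_p)\, m_p \mathbf{v}_p .
\]
% The momentum equation is solved on the grid, the updated grid
% velocities are interpolated back to move the particles, and the grid
% is then reset -- which is what lets MPM handle very large
% deformations without mesh tangling.
```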
This book demonstrates how to formally model various mathematical domains (including algorithms operating in these domains) in a way that makes them amenable to a fully automatic analysis by computer software. The presented domains are typically investigated in discrete mathematics, logic, algebra, and computer science; they are modeled in a formal language based on first-order logic which is sufficiently rich to express the core entities in whose correctness we are interested: mathematical theorems and algorithmic specifications. This formal language is the language of RISCAL, a "mathematical model checker" by which the validity of all formulas and the correctness of all algorithms can be automatically decided. The RISCAL software is freely available; all formal contents presented in the book are given in the form of specification files by which the reader may interact with the software while studying the corresponding book material.
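A typical algorithmic specification of the kind meant here, written in generic first-order notation (our example, not RISCAL syntax), states that an output array b of length n is a sorted permutation of the input a:

```latex
% Postcondition for a sorting algorithm over a finite index domain:
\[
  \forall i.\; 0 \le i < n-1 \;\Rightarrow\; b_i \le b_{i+1},
  \qquad
  \exists \pi \in S_n.\; \forall i.\; b_i = a_{\pi(i)} .
\]
% Over the finite domains that a model checker works with, the validity
% of such formulas can be decided fully automatically.
```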
Over the past twelve years, wavelets have undergone a rapid development in research and applications. As so often, the starting point was an engineering approach to an application problem that could not be solved satisfactorily with the tools then available. In the case of wavelets, the failure of classical methods for the analysis of geophysical data prompted the development of "new" analysis techniques. Here too, it became clear over time that the roots of the method reach back into mathematical work. It is this interplay of applications and mathematical theory that brought about the method's success. One drawback of the Fourier transform is its lack of a localization property: if a signal changes at one point, the transform changes everywhere, and the location of the change cannot be found by mere inspection. The reason, of course, is the use of trigonometric functions, which oscillate periodically everywhere. If, instead, one uses spatially localized wavelets ("small waves" or "ripples" are attempts at translating the term), then translation yields localization and dilation yields frequency resolution at the corresponding location. Early in the development of ondelettes, as wavelets are called in France, their country of origin, both the continuous and the discrete transform were studied. The continuous wavelet transform can be interpreted as a phase-space representation; its filtering and approximation properties are investigated.
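In standard notation (ours), the continuous wavelet transform described above reads:

```latex
% Continuous wavelet transform of a signal f with respect to a wavelet
% psi: the shift b localizes in time/space, the dilation a resolves
% frequency at that location:
\[
  (W_\psi f)(a,b) \;=\; \frac{1}{\sqrt{|a|}}
    \int_{-\infty}^{\infty} f(t)\,
    \overline{\psi\!\Bigl(\frac{t-b}{a}\Bigr)}\; \mathrm{d}t,
  \qquad a \neq 0 .
\]
```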
This book is a self-guided tour of MATLAB for engineers and life scientists. It introduces the most commonly used programming techniques through biologically inspired examples. Although the text is written for undergraduates, graduate students and academics, as well as those in industry, will also find value in learning MATLAB. The book takes the emphasis off learning syntax so that the reader can focus more on algorithmic thinking. Although it is not assumed that the reader has taken a course in differential equations or linear algebra, short introductions to many of these concepts are included. Following a short history of computing, the MATLAB environment is introduced. Next, vectors and matrices are discussed, followed by matrix-vector operations. The core programming elements of MATLAB are introduced in three successive chapters on scripts, loops, and conditional logic. The last three chapters outline how to manage the input and output of data, create professional-quality graphics, and find and use MATLAB toolboxes. Throughout, biomedical and life science examples are used to illustrate MATLAB's capabilities.
This book is an attempt to develop a guide for the user who is interested in learning finite element analysis (FEA) by doing. There is enough discussion of the basic theory for the user to gain a broad understanding of the process, and there are many examples with step-by-step instructions so that the user can quickly develop some proficiency in using FEA. We have used MATLAB and its PDE Toolbox for the examples in this text. The syntax and the modeling process are easy to understand, and a new user can become productive very quickly. The PDE Toolbox, just like any other commercial software, can solve certain classes of problems well but is not capable of solving every type of problem. For example, it can solve linear problems but is not capable of handling non-linear problems. Being aware of the capabilities of any tool is an important lesson for the user, and with this book we have tried to highlight that lesson as well.
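To give a feel for what FEA does under the hood, here is a minimal 1D linear finite-element solve in Python rather than the MATLAB PDE Toolbox workflow the book uses (our sketch on a manufactured problem):

```python
import numpy as np

# minimal 1D linear finite elements for -u'' = f on (0,1), u(0)=u(1)=0
n = 20                                       # number of elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = lambda t: np.pi**2 * np.sin(np.pi * t)   # manufactured source term

K = np.zeros((n + 1, n + 1))
F = np.zeros(n + 1)
for e in range(n):                           # assemble element matrices
    K[e:e+2, e:e+2] += np.array([[1, -1], [-1, 1]]) / h
    F[e:e+2] += f(x[e:e+2]) * h / 2          # nodal (lumped) load

K, F = K[1:-1, 1:-1], F[1:-1]                # homogeneous Dirichlet BCs
u = np.linalg.solve(K, F)

# compare against the exact solution u(x) = sin(pi x)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x[1:-1]))))
```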
In this book, innovative research using artificial neural networks (ANNs) is conducted to automate the sizing task of RF IC design, applied in two different steps of the automatic design process. Advances in telecommunications, such as fifth-generation broadband (5G for short), open doors to advances in areas such as health care, education, resource management, transportation, agriculture and many others. Consequently, today's market exerts high pressure for significant communication rates, extensive bandwidths and ultralow power consumption. This is where radio-frequency (RF) integrated circuits (ICs) come into play, fulfilling a crucial role. This demand highlights the remarkable difficulty of RF IC design in deep nanometric integration technologies, owing to their high complexity and stringent performance requirements. Given the economic pressure for high-quality yet cheap electronics and challenging time-to-market constraints, there is an urgent need for electronic design automation (EDA) tools to increase RF designers' productivity and improve the quality of the resulting ICs. In recent years, the automatic sizing of RF IC blocks in deep nanometer technologies has moved toward process, voltage and temperature (PVT)-inclusive optimizations to ensure their robustness. Each sizing solution is exhaustively simulated in a set of PVT corners, thus pushing modern workstations' capabilities to their limits.

Standard ANN applications usually exploit the model's capability of describing a complex, hard-to-describe relation between input and target data. For that purpose, ANNs bypass the process of describing the complex underlying relations between data by being fed a significant number of previously acquired input/output data pairs that the model attempts to reproduce. Here, firstly, the ANNs depart from recent attempts to replace the simulator in simulation-based sizing with a machine/deep learning model by proposing two different ANNs: the first classifies the convergence of the circuit for nominal and PVT corners, and the second predicts the oscillating frequencies for each case. The convergence classifier (CCANN) and frequency guess predictor (FGPANN) are seamlessly integrated into the simulation-based sizing loop, accelerating the overall optimization process. Secondly, a PVT regressor is proposed that takes the circuit's sizing and nominal performances as input and estimates the PVT corner performances via multiple parallel artificial neural networks; two control phases prevent the optimization process from being misled by inaccurate performance estimates. As such, this book details the optimal description of the input/output data relation that should be fulfilled. The developed description is mainly reflected in two of the system's characteristics: the shape of the input data and its incorporation into the sizing optimization loop. An optimal description of these components should be such that, once fully trained, the model produces output data that fulfills the desired relation for the given training data. Additionally, the model should be capable of efficiently generalizing the acquired knowledge to new examples, i.e., never-seen input circuit topologies.
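The convergence-classifier idea can be sketched in a few lines of Python (our purely illustrative stand-in: the feature count, labels and "convergence rule" below are invented, and the book's CCANN is a more elaborate model trained on real simulator data):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# stand-in training data: rows = candidate circuit sizings
# (e.g., device widths/lengths), label = simulator converged (1) or not (0)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 8))   # 8 hypothetical sizing variables
y = (X.sum(axis=1) > 4.0).astype(int)      # invented convergence rule

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000)
clf.fit(X, y)

# inside a sizing loop, skip the expensive PVT simulation for
# candidates the classifier predicts will fail to converge
candidate = rng.uniform(0.0, 1.0, size=(1, 8))
if clf.predict(candidate)[0] == 1:
    pass  # would invoke the circuit simulator here
```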
These are the proceedings of the 26th International Conference on Domain Decomposition Methods in Science and Engineering, which was hosted by the Chinese University of Hong Kong and held online in December 2020. Domain decomposition methods are iterative methods for solving the often very large systems of equations that arise when engineering problems are discretized, frequently using finite elements or other modern techniques. These methods are specifically designed to make effective use of massively parallel, high-performance computing systems. The book presents both theoretical and computational advances in this domain, reflecting the state of the art in 2020.
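The core idea is easy to sketch (our toy Python example, far from the massively parallel setting the proceedings target): solve local problems on overlapping subdomains and apply the corrections in turn:

```python
import numpy as np

# 1D Poisson model problem: A x = b with the second-difference matrix
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n) / (n + 1) ** 2
x = np.zeros(n)

# two overlapping subdomains (index sets with a 10-point overlap)
doms = [np.arange(0, 25), np.arange(15, 40)]

for _ in range(50):                   # multiplicative (alternating) Schwarz
    for d in doms:
        r = b - A @ x                 # current global residual
        x[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])

print("residual norm:", np.linalg.norm(b - A @ x))
```

In practice the local solves run in parallel (additive variants) and are combined with coarse-space corrections; this toy version only shows the subdomain-correction mechanism.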
The book integrates theory, numerical methods and practical applications seamlessly. MATLAB and MathCad programs are provided so that readers can master the theory, understand the approach, and further develop and apply the methods to geological problems. Multiscale and multi-physics investigations of Earth and planetary processes have been an active trend of research in the Earth Sciences, thanks to the development of scientific computation and of computer software and hardware. Based on the author's research and teaching over the past 15 years, the book stands alone as the first comprehensive text unifying fundamental continuum micromechanics theory, geometric/kinematic analysis, and applications. The book should appeal to a broad audience of students and researchers, particularly those in the fields of structural geology, tectonics, (natural and experimental) rock deformation, mineral physics and rheology, and the numerical modeling of multiscale and coupled processes.
The once esoteric idea of embedding scientific computing into a probabilistic framework, mostly along the lines of the Bayesian paradigm, has recently enjoyed wide popularity and found its way into numerous applications. This book provides an insider's view of how to combine two mature fields, scientific computing and Bayesian inference, into a powerful language leveraging the capabilities of both components for computational efficiency, high resolution power and uncertainty quantification ability. The impact of Bayesian scientific computing has been particularly significant in the area of computational inverse problems where the data are often scarce or of low quality, but some characteristics of the unknown solution may be available a priori. The ability to combine the flexibility of the Bayesian probabilistic framework with efficient numerical methods has contributed to the popularity of Bayesian inversion, with the prior distribution being the counterpart of classical regularization. However, the interplay between Bayesian inference and numerical analysis is much richer than providing an alternative way to regularize inverse problems, as demonstrated by the discussion of time dependent problems, iterative methods, and sparsity promoting priors in this book. The quantification of uncertainty in computed solutions and model predictions is another area where Bayesian scientific computing plays a critical role. This book demonstrates that Bayesian inference and scientific computing have much more in common than what one may expect, and gradually builds a natural interface between these two areas.
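The correspondence between priors and classical regularization alluded to above can be stated in one line (standard notation, ours):

```latex
% For a linear model  b = A x + e  with Gaussian noise e ~ N(0, sigma^2 I)
% and Gaussian prior  x ~ N(0, delta^2 I), the maximum a posteriori (MAP)
% estimate is exactly a Tikhonov-regularized least-squares solution:
\[
  x_{\mathrm{MAP}}
    = \arg\min_x \Bigl\{ \|A x - b\|^2 + \lambda \|x\|^2 \Bigr\},
  \qquad \lambda = \frac{\sigma^2}{\delta^2},
\]
% so the prior distribution plays the role of the classical
% regularization term.
```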