This volume presents a practical and unified approach to categorical data analysis based on the Akaike Information Criterion (AIC) and the Akaike Bayesian Information Criterion (ABIC). Conventional procedures for categorical data analysis are often inappropriate because the classical test procedures employed are too closely related to specific models. The approach described in this volume enables actual problems encountered by data analysts to be handled much more successfully. Amongst various topics explicitly dealt with are the problem of variable selection for categorical data, a Bayesian binary regression, and a nonparametric density estimator and its application to nonparametric test problems. The practical utility of the procedure developed is demonstrated by considering its application to the analysis of various data. This volume complements the volume Akaike Information Criterion Statistics which has already appeared in this series. For statisticians working in mathematics, the social, behavioural, and medical sciences, and engineering.
Observation, Prediction and Simulation of Phase Transitions in Complex Fluids presents an overview of the phase transitions that occur in a variety of soft-matter systems: colloidal suspensions of spherical or rod-like particles and their mixtures, directed polymers and polymer blends, colloid--polymer mixtures, and liquid-forming mesogens. This modern and fascinating branch of condensed matter physics is presented from three complementary viewpoints. The first section, written by experimentalists, emphasises the observation of basic phenomena (by light scattering, for example). The second section, written by theoreticians, focuses on the necessary theoretical tools (density functional theory, path integrals, free energy expansions). The third section is devoted to the results of modern simulation techniques (Gibbs ensemble, free energy calculations, configurational bias Monte Carlo). The interplay between the disciplines is clearly illustrated. For all those interested in modern research in equilibrium statistical mechanics.
The problem of deriving irreversible thermodynamics from the reversible microscopic dynamics has been on the agenda of theoretical physics for a century and has produced more papers than can be digested by any single scientist. Why add to this too long list with yet another work? The goal is definitely not to give a general review of previous work in this field. My ambition is rather to present an approach differing in some key aspects from the standard treatments, and to develop it as far as possible using rather simple mathematical tools (mainly inequalities of various kinds). However, in the course of this work I have used a large number of results and ideas from the existing literature, and the reference list contains contributions from many different lines of research. As a consequence the reader may find the arguments a bit difficult to follow without some previous exposure to this set of problems.
Computational neuroscience is best defined by its focus on understanding the nervous system as a computational device rather than by a particular experimental technique. Accordingly, while the majority of the papers in this book describe analysis and modeling efforts, other papers describe the results of new biological experiments explicitly placed in the context of computational issues. The distribution of subjects in Computation and Neural Systems reflects the current state of the field. In addition to the scientific results presented here, numerous papers also describe the ongoing technical developments that are critical for the continued growth of computational neuroscience. Computation and Neural Systems includes papers presented at the First Annual Computation and Neural Systems meeting held in San Francisco, CA, July 26--29, 1992.
Categorical data arise often in many fields, including biometrics, economics, management, manufacturing, marketing, psychology, and sociology. This book provides an introduction to the analysis of such data. The coverage is broad, using the loglinear Poisson regression model and logistic binomial regression models as the primary engines for methodology. Topics covered include count regression models, such as Poisson, negative binomial, zero-inflated, and zero-truncated models; loglinear models for two-dimensional and multidimensional contingency tables, including square tables and tables with ordered categories; and regression models for two-category (binary) and multiple-category target variables, such as logistic and proportional odds models. All methods are illustrated with analyses of real data examples, many from recent subject area journal articles. These analyses are highlighted in the text, and are more detailed than is typical, providing discussion of the context and background of the problem, model checking, and scientific implications. More than 200 exercises are provided, many also based on recent subject area literature. Data sets and computer code are available at a web site devoted to the text. Adopters of this book may request a solutions manual from: textbooks@springer-ny.com. Jeffrey S. Simonoff is Professor of Statistics at New York University. He is author of Smoothing Methods in Statistics and coauthor of A Casebook for a First Course in Statistics and Data Analysis, as well as numerous articles in scholarly journals. He is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics, and an Elected Member of the International Statistical Institute.
Maximum entropy and Bayesian methods have fundamental, central roles in scientific inference, and, with the growing availability of computer power, are being successfully applied in an increasing number of applications in many disciplines. This volume contains selected papers presented at the Thirteenth International Workshop on Maximum Entropy and Bayesian Methods. It includes an extensive tutorial section, and a variety of contributions detailing application in the physical sciences, engineering, law, and economics. Audience: Researchers and other professionals whose work requires the application of practical statistical inference.
Applications of Neural Networks gives a detailed description of 13 practical applications of neural networks, selected because the tasks performed by the neural networks are real and significant. The contributions are from leading researchers in neural networks and, as a whole, provide a balanced coverage across a range of application areas and algorithms. The book is divided into three sections. Section A is an introduction to neural networks for nonspecialists. Section B looks at examples of applications using `Supervised Training'. Section C presents a number of examples of `Unsupervised Training'. For neural network enthusiasts and interested, open-minded sceptics. The book leads the latter through the fundamentals into a convincing and varied series of neural success stories -- described carefully and honestly without over-claiming. Applications of Neural Networks is essential reading for all researchers and designers who are tasked with using neural networks in real life applications.
Static calculation methods practised with confidence! The book contains an extensive collection of exercises for deepening acquired knowledge in the field of structural analysis. The exercises are organised by the various subfields of statics. For each topic, exercises with detailed worked solutions are offered; for further exercises, intermediate solutions and check values are given, so that the solution path and result can always be verified. All exercises serve as examination preparation for the subject of statics. The 2nd edition was, among other things, reviewed for hard-to-follow presentations, which have been simplified. An even greater learning success can now be guaranteed.
This is the first half of a text for a two semester course in mathematical statistics at the senior/graduate level for those who need a strong background in statistics as an essential tool in their career. To study this text, the reader needs a thorough familiarity with calculus including such things as Jacobians and series but somewhat less intense familiarity with matrices including quadratic forms and eigenvalues. For convenience, these lecture notes were divided into two parts: Volume I, Probability for Statistics, for the first semester, and Volume II, Statistical Inference, for the second. We suggest that the following distinguish this text from other introductions to mathematical statistics. 1. The most obvious thing is the layout. We have designed each lesson for the (U.S.) 50 minute class; those who study independently probably need the traditional three hours for each lesson. Since we have more than (the U.S. again) 90 lessons, some choices have to be made. In the table of contents, we have used a * to designate those lessons which are "interesting but not essential" (INE) and may be omitted from a general course; some exercises and proofs in other lessons are also "INE". We have made lessons of some material which other writers might stuff into appendices. Incorporating this freedom of choice has led to some redundancy, mostly in definitions, which may be beneficial.
This book is about estimation in situations where we believe we have enough knowledge to model some features of the data parametrically, but are unwilling to assume anything for other features. Such models have arisen in a wide variety of contexts in recent years, particularly in economics, epidemiology, and astronomy. The complicated structure of these models typically requires us to consider nonlinear estimation procedures which often can only be implemented algorithmically. The theory of these procedures is necessarily based on asymptotic approximations.
Pattern recognition and other chemometrical techniques are important tools in interpreting environmental data. This volume presents authoritative, state-of-the-art applications for measuring and handling environmental data. The chapters are written by leading experts.
The software has been developed in Smalltalk80 [1] on SUN and Apple Macintosh computers. Smalltalk80 is an object-oriented programming system which permits rapid prototyping. The need for prototyping in the specification of general practitioner systems was highlighted as long ago as 1980 [4] and is essential to the user-centred philosophy of the project. The goal is a hardware independent system usable on any equipment capable of supporting an integrated environment for handling both text and graphics and 'point and select' interaction. The architecture is extensible and provides a platform for future experimentation with technical advances such as touch screens and voice technology. User Interface Management Systems (UIMS) technology is developing rapidly, offering a number of techniques which allow the abstract design of the interface to be separated from the screen/display management on one hand and the internal workings of the application on the other [2]. The importance of this 'layered' approach is that such techniques enable the user to tailor the application to his/her individual preferences, and the design team has included and developed many of these ideas in the design.