Provides a high-level overview of the existing literature on clustering stability. In addition to presenting the results in a slightly informal but accessible way, the authors relate them to each other and discuss their different implications.
Provides a comprehensive introduction to generating advanced data analytics on graphs that allows us to move beyond the standard regular sampling in time and space to facilitate modelling in many important areas.
Provides a simple and clear description of explicit-duration modelling by categorizing the different approaches into three main groups, which differ in the information about regime-switching and reset boundaries that they encode in the explicit-duration variables.
Examines the topic of information processing over graphs. The presentation is largely self-contained and covers results that relate to the analysis and design of multi-agent networks for the distributed solution of optimization, adaptation, and learning problems from streaming data through localized interactions among agents.
Describes recent advances in our understanding of the theoretical benefits of active learning, and implications for the design of effective active learning algorithms. Much of the book focuses on a particular technique, disagreement-based active learning. It also briefly surveys several alternative approaches from the literature.
Covers several aspects of the "optimism in the face of uncertainty" principle for large scale optimization problems under finite numerical budget. The book lays out the theoretical foundations of the field by characterizing the complexity of the optimization problems and designing efficient algorithms with performance guarantees.
Presents the theory of submodular functions in a self-contained way from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization and convex optimization problems. In particular, it describes how submodular function minimization is equivalent to solving a variety of convex optimization problems.
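The central equivalence described there can be stated compactly via the Lovász extension (standard notation, not the book's exact numbering): for a submodular function $F$ on a ground set $V$ with $|V| = n$ and Lovász extension $f$,

```latex
\min_{A \subseteq V} F(A) \;=\; \min_{w \in \{0,1\}^n} f(w) \;=\; \min_{w \in [0,1]^n} f(w),
```

since $f$ agrees with $F$ on indicator vectors and is convex, so minimizing a submodular set function reduces to a convex optimization problem over the unit cube.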
Reviews a branch of Monte Carlo methods that are based on the forward-backward idea, and that are referred to as backward simulators. In recent years, the theory and practice of backward simulation algorithms have undergone a significant development, and the algorithms keep finding new applications.
Presents an overview of existing research on metric learning, including recent progress on scaling to high-dimensional feature spaces and to data sets with an extremely large number of data points. The book casts this research in as unified a framework as possible.
Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. In this book, the focus is on two extreme cases in which the analysis of regret is particularly simple and elegant: independent and identically distributed payoffs and adversarial payoffs.
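As a minimal illustration of the i.i.d. case (my own toy sketch, not code from the book): the pseudo-regret of a sequence of pulls is the expected payoff lost relative to always playing the best arm.

```python
# Pseudo-regret for i.i.d. payoffs: the gap, in expectation, between
# always playing the best arm and the arms actually chosen.

def pseudo_regret(means, pulls):
    """means: expected payoff of each arm; pulls: sequence of arm indices."""
    best = max(means)
    return sum(best - means[a] for a in pulls)

# Two arms with means 0.9 and 0.5; pulling the worse arm 3 times out of 5
# costs 3 * (0.9 - 0.5) = 1.2 in expectation.
r = pseudo_regret([0.9, 0.5], [0, 1, 1, 0, 1])
print(round(r, 2))  # prints 1.2
```

Bandit algorithms aim to keep this quantity growing sublinearly in the number of pulls.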
Provides a comprehensive tutorial aimed at application-oriented practitioners seeking to apply CRFs. The monograph does not assume previous knowledge of graphical modeling, and so is intended to be useful to practitioners in a wide variety of fields.
Explores different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and regularization methods. The book is aimed at researchers interested in the theory and application of kernels for vector-valued functions.
Randomized algorithms for very large matrix problems have received much attention in recent years. Much of this work was motivated by problems in large-scale data analysis, largely since matrices are popular structures with which to model data drawn from a wide range of application domains. This book provides a detailed overview of this work.
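One minimal example of this style of algorithm (my own illustration; the book covers sampling and sketching methods far more broadly): Hutchinson's randomized trace estimator, which approximates trace(A) using only a few matrix-vector products with random sign vectors.

```python
import random

def hutchinson_trace(matvec, n, samples=200, seed=0):
    """Estimate trace(A) as the average of z^T (A z) over random +/-1 vectors z."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        z = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        Az = matvec(z)
        total += sum(zi * yi for zi, yi in zip(z, Az))
    return total / samples

# Toy 3x3 symmetric matrix with trace 1 + 2 + 3 = 6.
A = [[1.0, 0.5, 0.0],
     [0.5, 2.0, 0.5],
     [0.0, 0.5, 3.0]]
mv = lambda z: [sum(a * zi for a, zi in zip(row, z)) for row in A]
est = hutchinson_trace(mv, 3)
```

The estimator never forms A explicitly, which is the point: for very large matrices, a handful of matrix-vector products replaces an O(n^2) computation.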
Provides a comprehensible introduction to determinantal point processes (DPPs), focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and shows how DPPs can be applied to real-world applications.
Provides a tutorial overview of several foundational methods for dimension reduction. The authors divide the methods into projective methods and methods that model the manifold on which the data lies.
Provides an overview of online learning. The aim is to provide the reader with a sense of some of the interesting ideas and in particular to underscore the centrality of convexity in deriving efficient online learning algorithms.
Provides an overview of the historical development of statistical network modelling and then introduces a number of examples that have been studied in the network literature. Subsequent discussions focus on a number of prominent static and dynamic network models and their interconnections.
Discusses the motivations for and principles of learning algorithms for deep architectures. By analysing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.
Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, this book develops general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations.
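The core variational representation in question can be written as follows (standard notation: $\phi$ the sufficient statistics, $\theta$ the natural parameter, $A$ the cumulant function, $\mathcal{M}$ the set of realizable mean parameters):

```latex
A(\theta) \;=\; \sup_{\mu \in \mathcal{M}} \bigl\{ \langle \theta, \mu \rangle - A^{*}(\mu) \bigr\},
```

where $A^{*}$ is the conjugate dual of $A$ and coincides with the negative entropy on $\mathcal{M}$. Approximate inference methods such as mean field and loopy belief propagation arise from relaxing $\mathcal{M}$ or approximating $A^{*}$ in this representation.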
Provides a systematic and example-rich guide to the basic properties and applications of tensor network methodologies, and demonstrates their promise as a tool for the analysis of extreme-scale multidimensional data. The book demonstrates the ability of tensor networks to provide linearly or even super-linearly scalable solutions.
Principal components analysis (PCA) is a well-known technique for approximating a tabular data set by a low rank matrix. In this volume, the authors extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types.
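A minimal pure-Python sketch of the classical k = 1 case (my own illustration, not the authors' generalized algorithm): the best rank-1 approximation of a matrix, computed by power iteration on A^T A.

```python
def rank1_approx(A, iters=50):
    """Best rank-1 approximation sigma * u * v^T via power iteration on A^T A."""
    m, n = len(A), len(A[0])
    v = [1.0] * n  # initial guess; must not be orthogonal to the top right singular vector
    for _ in range(iters):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # With unit v, the outer product (A v) v^T equals sigma * u * v^T.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return [[Av[i] * v[j] for j in range(n)] for i in range(m)]

A = [[3.0, 6.0],
     [1.0, 2.0]]       # exactly rank 1, so the approximation recovers A
B = rank1_approx(A)
```

The generalized setting the book treats replaces the squared-error objective implicit here with losses suited to Boolean, categorical, and ordinal columns.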
Discusses models and methods for Bayesian inference in the simple single-step bandit model. The book then reviews the extensive recent literature on Bayesian methods for model-based RL, where prior information can be expressed on the parameters of the Markov model.
Offers an invitation to the field of matrix concentration inequalities. The book begins with some history of random matrix theory; describes a flexible model for random matrices that is suitable for many problems; and discusses the most important matrix concentration results.
A Markov Decision Process (MDP) is a natural framework for formulating sequential decision-making problems under uncertainty. In recent years, researchers have greatly advanced algorithms for learning and acting in MDPs. This book reviews such algorithms.
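A minimal sketch of the planning side of this framework (a toy MDP invented for illustration, not an example from the book): value iteration on a two-state problem.

```python
# P[s][a] is a list of (probability, next_state, reward) outcomes.
# Toy dynamics: state 1 pays 2.0 for staying; state 0 pays 1.0 for moving to 1.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9
V = {0: 0.0, 1: 0.0}
for _ in range(200):
    # Bellman optimality backup: V(s) = max_a E[r + gamma * V(s')]
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values())
         for s in P}
# Fixed point: V(1) = 2 / (1 - 0.9) = 20, V(0) = 1 + 0.9 * 20 = 19.
```

Since the backup is a gamma-contraction, 200 iterations put the values within roughly gamma^200 of the fixed point.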
Presents some new concentration inequalities for Feynman-Kac particle processes. The book analyses different types of stochastic particle models, including particle profile occupation measures, genealogical tree based evolution models, particle free energies, as well as backward Markov chain particle models.
Presents optimization tools and techniques dedicated to sparsity-inducing penalties from a general perspective. The book covers proximal methods, block-coordinate descent, working-set and homotopy methods, and non-convex formulations and extensions, and provides a set of experiments to compare algorithms from a computational point of view.
Argues that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
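A scalar sketch of the method (a toy lasso-style problem of my own, not drawn from the book): ADMM splits the smooth and non-smooth terms of min 0.5*(x - b)^2 + lam*|z| subject to x = z, alternating a quadratic x-update, a soft-thresholding z-update, and a dual update.

```python
def soft(v, k):
    """Soft-thresholding: sign(v) * max(|v| - k, 0)."""
    return max(v - k, 0.0) - max(-v - k, 0.0)

def admm_lasso_1d(b, lam, rho=1.0, iters=100):
    x = z = u = 0.0  # u is the scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # minimize quadratic in x
        z = soft(x + u, lam / rho)              # proximal step for lam*|z|
        u = u + x - z                           # dual ascent on the constraint x = z
    return z

# The closed-form solution is soft-thresholding of b: soft(3.0, 1.0) = 2.0.
print(round(admm_lasso_1d(3.0, 1.0), 6))  # prints 2.0
```

In the distributed settings the book emphasizes, the same three-step pattern lets each machine solve a local subproblem while the dual updates enforce consensus.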
Describes methods for automatically compressing Markov decision processes (MDPs) by learning a low-dimensional linear approximation defined by an orthogonal set of basis functions. A unique feature of the text is the use of Laplacian operators, whose matrix representations have non-positive off-diagonal elements and zero row sums.
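The Laplacian properties mentioned can be checked directly on a toy graph (my own 3-node example):

```python
# Combinatorial graph Laplacian L = D - W for an undirected graph:
# off-diagonal entries are non-positive and every row sums to zero.
W = [[0, 1, 1],
     [1, 0, 0],
     [1, 0, 0]]                      # adjacency of a 3-node star
D = [sum(row) for row in W]          # degrees
L = [[(D[i] if i == j else 0) - W[i][j] for j in range(3)] for i in range(3)]

assert all(L[i][j] <= 0 for i in range(3) for j in range(3) if i != j)
assert all(sum(row) == 0 for row in L)
```

The eigenvectors of such operators are what the text uses as basis functions for compressing value functions on an MDP's state space.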
Surveys recent progress in using spectral methods, including matrix and tensor decomposition techniques, to learn many popular latent variable models. The focus is on a special type of tensor decomposition called CP decomposition. The authors cover a wide range of algorithms to find the components of such tensor decomposition.
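The CP decomposition referred to above expresses a third-order tensor as a sum of rank-one terms (standard notation):

```latex
T \;=\; \sum_{r=1}^{k} \lambda_r \, a_r \otimes b_r \otimes c_r,
\qquad
T_{ijl} \;=\; \sum_{r=1}^{k} \lambda_r \, a_r(i) \, b_r(j) \, c_r(l),
```

and the spectral methods surveyed recover the component vectors $a_r, b_r, c_r$, which in turn correspond to the parameters of the latent variable model.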
Sequential Monte Carlo is a technique for solving statistical inference problems recursively. This book shows how this powerful technique can be applied to machine learning problems such as probabilistic programming, variational inference and inference evaluation.