
Mathematical modeling

Here you will find exciting books about mathematical modeling. Below is a fine selection of more than 259 books on the subject.
  • by Heidar A. Talebi, Farzaneh Abdollahi & Kasra Esfandiari
    1.320,95 kr.

  • by Tim Fingscheidt
    656,95 kr.

    This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence. Environment perception for highly automated driving heavily employs deep neural networks, facing many challenges. How much data do we need for training and testing? How do we use synthetic data to save labeling costs for training? How do we increase robustness and decrease memory usage? For inevitably poor conditions: how do we know that the network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem, particularly for DNNs employed in automated driving: what are useful validation techniques, and what about safety? This book unites the views from both academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at its core. This book is unique: in its first part, an extended survey of all the relevant aspects is provided. The second part contains the detailed technical elaboration of the various questions mentioned above.

  • by Alexander E. Hramov, Alexey A. Koronovskii, Valeri A. Makarov, et al.
    2.022,95 kr.

  • by Nita H. Shah
    1.943,95 kr.

    This work presents the guiding principles of integral transforms needed for many applications when solving engineering and science problems. As a modern approach to the Laplace transform, Fourier series, and Z-transform, it is a valuable reference for professionals and students alike.
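
    As a quick, standard reminder of the central definition behind these topics (textbook material, not an excerpt from this book), the one-sided Laplace transform and a classic worked example are:

        F(s) = \mathcal{L}\{f\}(s) = \int_0^{\infty} f(t)\, e^{-st}\, dt,
        \qquad
        \mathcal{L}\{e^{at}\}(s) = \int_0^{\infty} e^{(a-s)t}\, dt = \frac{1}{s-a}, \quad \operatorname{Re}(s) > a.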

  • by Philip Osborne
    620,95 kr.

    Reinforcement learning is a powerful tool in artificial intelligence in which virtual or physical agents learn to optimize their decision making to achieve long-term goals. In some cases, this machine learning approach can save programmers time, outperform existing controllers, reach super-human performance, and continually adapt to changing conditions. This book argues that these successes show reinforcement learning can be adopted successfully in many different situations, including robot control, stock trading, supply chain optimization, and plant control. However, reinforcement learning has traditionally been limited to applications in virtual environments or simulations in which the setup is already provided. Furthermore, experimentation may be completed for an almost limitless number of attempts, risk-free. In many real-life tasks, applying reinforcement learning is not so simple: (1) data is not in the correct form for reinforcement learning, (2) data is scarce, and (3) automation has limitations in the real world. Therefore, this book is written to help academics, domain specialists, and data enthusiasts alike to understand the basic principles of applying reinforcement learning to real-world problems. This is achieved by focusing on the process of taking practical examples and modeling standard data into the correct form required to then apply basic agents. To further help readers gain a deep and grounded understanding of the approaches, the book shows hand-calculated examples in full and then how this can be achieved in a more automated manner with code. For decision makers who are interested in reinforcement learning as a solution but are not technically proficient, we include simple, non-technical examples in the introduction and case studies section. These provide context for what reinforcement learning offers, as well as the challenges and risks associated with applying it in practice. Specifically, the book illustrates the differences between reinforcement learning and other machine learning approaches as well as how well-known companies have found success using the approach to their problems.
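
    For readers curious what such a "basic agent" looks like, here is a minimal sketch of standard tabular Q-learning on a made-up 1-D gridworld (an illustrative toy, not an example from the book):

        import random

        # Hypothetical 1-D gridworld: states 0..4, start at 0, goal at 4.
        # Actions: 0 = left, 1 = right. Reaching the goal gives reward 1.
        N_STATES, GOAL, ACTIONS = 5, 4, (0, 1)
        alpha, gamma, epsilon = 0.1, 0.9, 0.1

        def step(state, action):
            nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
            return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

        Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]

        for episode in range(500):
            s, done = 0, False
            while not done:
                # epsilon-greedy action selection (ties broken randomly)
                if random.random() < epsilon or Q[s][0] == Q[s][1]:
                    a = random.choice(ACTIONS)
                else:
                    a = 0 if Q[s][0] > Q[s][1] else 1
                s2, r, done = step(s, a)
                # standard Q-learning update
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                s = s2

        print(Q)  # the greedy policy should prefer "right" in every state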

  • by Sarath Sreedharan
    628,95 kr.

    From its inception, artificial intelligence (AI) has had a rather ambivalent relationship with humans, swinging between their augmentation and their replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. One critical requirement for such synergistic human-AI interaction is that the AI systems' behavior be explainable to the humans in the loop. To do this effectively, AI agents need to go beyond planning with their own models of the world, and take into account the mental model of the human in the loop. At a minimum, AI agents need approximations of the human's task and goal models, as well as the human's model of the AI agent's task and goal models. The former will guide the agent to anticipate and manage the needs, desires, and attention of the humans in the loop, and the latter will allow it to act in ways that are interpretable to humans (by conforming to their mental models of it) and to be ready to provide customized explanations when needed. The authors draw from several years of research in their lab to discuss how an AI agent can use these mental models to either conform to human expectations or change those expectations through explanatory communication. While the focus of the book is on cooperative scenarios, it also covers how the same mental models can be used for obfuscation and deception. The book also describes several real-world application systems for collaborative decision-making that are based on the framework and techniques developed here. Although primarily driven by the authors' own research in these areas, every chapter provides ample connections to relevant research from the wider literature. The technical topics covered in the book are self-contained and are accessible to readers with a basic background in AI.

  • by Dominique Jeulin
    2.266,95 kr.

  • by William L. Hamilton
    625,95 kr.

    Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs, a nascent but quickly growing subset of graph representation learning.
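
    A minimal sketch of the random-walk flavor of node embedding mentioned above, on a made-up toy graph. Real methods such as DeepWalk or node2vec train a skip-gram model on the walks; here that step is simplified to an SVD of a walk co-occurrence matrix, so treat it as an illustration rather than any book's algorithm:

        import numpy as np

        # Hypothetical toy graph as an adjacency list (two loosely connected triangles).
        graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
        n, walk_len, n_walks, window, dim = len(graph), 10, 200, 2, 2
        rng = np.random.default_rng(0)

        # 1) Truncated random walks over the graph.
        cooc = np.zeros((n, n))
        for _ in range(n_walks):
            walk = [rng.integers(n)]
            for _ in range(walk_len - 1):
                walk.append(rng.choice(graph[walk[-1]]))
            # 2) Count co-occurrences of nodes within a sliding window.
            for i, u in enumerate(walk):
                for v in walk[max(0, i - window): i + window + 1]:
                    if u != v:
                        cooc[u, v] += 1

        # 3) Factorize the (log-scaled) co-occurrence matrix to get low-dimensional embeddings.
        U, S, _ = np.linalg.svd(np.log1p(cooc))
        embeddings = U[:, :dim] * S[:dim]
        print(embeddings)  # nodes in the same triangle should end up close together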

  • by Boi Mirsky
    575,95 kr.

    Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to provide. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that provide high-quality data are rewarded while those that do not are discouraged by low rewards. We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, Correlated Agreement, and the settings where each of them would be suitable. As an alternative, we also consider reputation mechanisms. We complement the game-theoretic analysis with practical examples of applications in prediction platforms, community sensing, and peer grading.
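
    A minimal illustration of one of the incentive tools named above, a proper scoring rule (the quadratic/Brier rule, with made-up numbers; not code from the book): an agent's expected score is maximized by reporting its true belief.

        import numpy as np

        def quadratic_score(report, outcome):
            """Quadratic (Brier-style) scoring rule for a reported probability of a binary event."""
            p = np.array([1 - report, report])
            return 2 * p[outcome] - p @ p   # strictly proper: truthful reporting maximizes expected score

        true_belief = 0.7   # the agent's actual probability that the event happens
        for reported in (0.3, 0.5, 0.7, 0.9):
            # expected score computed under the agent's true belief
            exp_score = true_belief * quadratic_score(reported, 1) + (1 - true_belief) * quadratic_score(reported, 0)
            print(f"report={reported:.1f}  expected score={exp_score:.3f}")
        # the expected score is highest when the report equals the true belief (0.7)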

  • by Patrik Haslum
    677,95 kr.

    Planning is the branch of Artificial Intelligence (AI) that seeks to automate reasoning about plans, most importantly the reasoning that goes into formulating a plan to achieve a given goal in a given situation. AI planning is model-based: a planning system takes as input a description (or model) of the initial situation, the actions available to change it, and the goal condition to output a plan composed of those actions that will accomplish the goal when executed from the initial situation. The Planning Domain Definition Language (PDDL) is a formal knowledge representation language designed to express planning models. Developed by the planning research community as a means of facilitating systems comparison, it has become a de-facto standard input language of many planning systems, although it is not the only modelling language for planning. Several variants of PDDL have emerged that capture planning problems of different natures and complexities, with a focus on deterministic problems. The purpose of this book is two-fold. First, we present a unified and current account of PDDL, covering the subsets of PDDL that express discrete, numeric, temporal, and hybrid planning. Second, we want to introduce readers to the art of modelling planning problems in this language, through educational examples that demonstrate how PDDL is used to model realistic planning problems. The book is intended for advanced students and researchers in AI who want to dive into the mechanics of AI planning, as well as those who want to be able to use AI planning systems without an in-depth explanation of the algorithms and implementation techniques they use.
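
    A minimal sketch of the kind of model PDDL expresses, written here directly as Python data rather than PDDL syntax (a made-up two-block world solved by breadth-first forward search; not an excerpt from the book):

        from collections import deque

        # STRIPS-style model: states are sets of ground atoms; each action has
        # preconditions, add effects, and delete effects.
        actions = {
            "pickup-A":  ({"ontable-A", "clear-A", "handempty"}, {"holding-A"}, {"ontable-A", "clear-A", "handempty"}),
            "stack-A-B": ({"holding-A", "clear-B"}, {"on-A-B", "clear-A", "handempty"}, {"holding-A", "clear-B"}),
        }
        initial = frozenset({"ontable-A", "ontable-B", "clear-A", "clear-B", "handempty"})
        goal = {"on-A-B"}

        def plan(initial, goal):
            """Breadth-first forward search over states; returns a list of action names."""
            frontier, seen = deque([(initial, [])]), {initial}
            while frontier:
                state, path = frontier.popleft()
                if goal <= state:
                    return path
                for name, (pre, add, delete) in actions.items():
                    if pre <= state:
                        nxt = frozenset((state - delete) | add)
                        if nxt not in seen:
                            seen.add(nxt)
                            frontier.append((nxt, path + [name]))
            return None

        print(plan(initial, goal))   # ['pickup-A', 'stack-A-B']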

  • by Ganesh Ram Liu
    477,95 kr.

    This book provides a tutorial introduction to modern techniques for representing and reasoning about qualitative preferences with respect to a set of alternatives. The syntax and semantics of several languages for representing preferences, including CP-nets, TCP-nets, CI-nets, and CP-theories, are reviewed. Some key problems in reasoning about preferences are introduced, including determining whether one alternative is preferred to another, or whether they are equivalent, with respect to a given set of preferences. These tasks can be reduced to model checking in temporal logic. Specifically, an induced preference graph that represents a given set of preferences can be efficiently encoded using a Kripke structure for Computation Tree Logic (CTL). One can translate preference queries with respect to a set of preferences into an equivalent set of formulae in CTL, such that the CTL formula is satisfied whenever the preference query holds. This allows us to use a model checker to reason about preferences, i.e., answer preference queries, and to obtain a justification as to why a preference query is satisfied (or not) with respect to a set of preferences. This book defines the notions of the equivalence of two sets of preferences, including what it means for one set of preferences to subsume another, and shows how to answer preferential equivalence and subsumption queries using model checking. Furthermore, this book demonstrates how to generate alternatives ordered by preference, along with providing ways to deal with inconsistent preference specifications. A description of CRISNER, an open-source software implementation of the model checking approach to qualitative preference reasoning in CP-nets, TCP-nets, and CP-theories, is included, as well as examples illustrating its use.
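
    A minimal sketch of the core query described above: dominance ("is alternative a preferred to alternative b?") amounts to a reachability question on the induced preference graph, which is essentially what the CTL encoding lets a model checker answer. The toy graph below is made up, and the check is done with plain BFS rather than a model checker:

        from collections import deque

        # Hypothetical induced preference graph: an edge u -> v means
        # "v is reachable from u by a single preference-improving change".
        improves = {"a": ["b"], "b": ["c"], "c": [], "d": ["b"]}

        def preferred(better, worse):
            """Dominance query: True iff `better` is reachable from `worse` via improving edges."""
            frontier, seen = deque([worse]), {worse}
            while frontier:
                node = frontier.popleft()
                if node == better:
                    return True
                for nxt in improves[node]:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return False

        print(preferred("c", "a"))   # True: a -> b -> c is an improving sequence
        print(preferred("a", "d"))   # False: no improving path from d reaches a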

  • by Yevgeniy Tu
    676,95 kr.

    The increasing abundance of large high-quality datasets, combined with significant technical advances over the last several decades, has made machine learning into a major tool employed across a broad array of tasks including vision, language, finance, and security. However, success has been accompanied by important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety critical, such as autonomous driving. An adversary in these applications can be a malicious party aimed at causing congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because of the nature of their task and/or the data they use. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of the malicious objects they develop. The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques to make learning robust to adversarial manipulation. This book provides a technical overview of this field. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at the time of prediction in order to cause errors, and poisoning or training-time attacks, in which the actual training dataset is maliciously modified. In our final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving the robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that in our view warrant further research. Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings.
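
    A minimal sketch of a decision-time (evasion) attack of the kind described above, against a made-up logistic-regression "detector": the adversary nudges an input in the direction that lowers its predicted probability of being flagged (an FGSM-style sign perturbation; illustrative only, not code from the book).

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Hypothetical trained detector: p(malicious | x) = sigmoid(w.x + b)
        w, b = np.array([2.0, -1.0, 0.5]), -0.2
        x = np.array([1.0, 0.2, 0.8])             # an instance currently flagged as malicious

        print("before:", sigmoid(w @ x + b))      # detection probability before the attack

        # Decision-time attack: move x a small step against the gradient of the score.
        # The gradient of sigmoid(w.x + b) with respect to x is proportional to w,
        # so sign(w) gives the direction to perturb each feature.
        epsilon = 0.3
        x_adv = x - epsilon * np.sign(w)

        print("after: ", sigmoid(w @ x_adv + b))  # detection probability drops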

  • by Aurelien Muise
    674,95 kr.

    Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learning literature that covers algorithms, theory and applications for both numerical and structured data. We first introduce relevant definitions and classic metric functions, as well as examples of their use in machine learning and data mining. We then review a wide range of metric learning algorithms, starting with the simple setting of linear distance and similarity learning. We show how one may scale up these methods to very large amounts of training data. To go beyond the linear case, we discuss methods that learn nonlinear metrics or multiple linear metrics throughout the feature space, and review methods for more complex settings such as multi-task and semi-supervised learning. Although most of the existing work has focused on numerical data, we cover the literature on metric learning for structured data like strings, trees, graphs and time series. In the more technical part of the book, we present some recent statistical frameworks for analyzing the generalization performance in metric learning and derive results for some of the algorithms presented earlier. Finally, we illustrate the relevance of metric learning in real-world problems through a series of successful applications to computer vision, bioinformatics and information retrieval. Table of Contents: Introduction / Metrics / Properties of Metric Learning Algorithms / Linear Metric Learning / Nonlinear and Local Metric Learning / Metric Learning for Special Settings / Metric Learning for Structured Data / Generalization Guarantees for Metric Learning / Applications / Conclusion / Bibliography / Authors' Biographies
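
    A minimal sketch of the simple linear setting the blurb starts from: learning a diagonal, Mahalanobis-style distance from made-up must-link / cannot-link pairs by gradient descent (an illustrative toy, not an algorithm from the book):

        import numpy as np

        # Made-up data: similar pairs should end up close, dissimilar pairs far apart.
        similar = [(np.array([1.0, 5.0]), np.array([1.2, 1.0]))]
        dissimilar = [(np.array([1.0, 5.0]), np.array([3.0, 5.2]))]
        margin, lr, w = 4.0, 0.05, np.ones(2)        # w = diagonal of the learned metric

        def dist(x, y, w):
            return float(np.sum(w * (x - y) ** 2))   # weighted squared Euclidean distance

        for _ in range(200):
            grad = np.zeros(2)
            for x, y in similar:                     # pull similar pairs together
                grad += (x - y) ** 2
            for x, y in dissimilar:                  # push dissimilar pairs beyond the margin
                if dist(x, y, w) < margin:
                    grad -= (x - y) ** 2
            w = np.maximum(w - lr * grad, 1e-3)      # gradient step, keep weights positive

        print(w)  # feature 0 (which separates the dissimilar pair) keeps a large weight; feature 1 is shrunk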

  • by Cheng Yang
    683,95 kr.

    heterogeneous graphs. Further, the book introduces different applications of NE, such as recommendation and information diffusion prediction. Finally, the book summarizes the methods and applications and looks forward to future directions.

  • by Reuth Mirsky
    621,95 kr.

    Plan recognition, activity recognition, and goal recognition all involve making inferences about other actors based on observations of their interactions with the environment and other agents. This synergistic area of research combines, unites, and makes use of techniques and research from a wide range of areas including user modeling, machine vision, automated planning, intelligent user interfaces, human-computer interaction, autonomous and multi-agent systems, natural language understanding, and machine learning. It plays a crucial role in a wide variety of applications including assistive technology, software assistants, computer and network security, human-robot collaboration, natural language processing, video games, and many more. This wide range of applications and disciplines has produced a wealth of ideas, models, tools, and results in the recognition literature. However, it has also contributed to fragmentation in the field, with researchers publishing relevant results in a wide spectrum of journals and conferences. This book seeks to address this fragmentation by providing a high-level introduction and historical overview of the plan and goal recognition literature. It provides a description of the core elements that comprise these recognition problems and practical advice for modeling them. In particular, we define and distinguish the different recognition tasks. We formalize the major approaches to modeling these problems using a single motivating example. Finally, we describe a number of state-of-the-art systems and their extensions, future challenges, and some potential applications.

  • by Yang Liu, Han Yu, Qiang Yang, et al.
    729,95 kr.

    How is it possible to allow multiple data owners to collaboratively train and use a shared prediction model while keeping all the local training data private? Traditional machine learning approaches need to combine all data at one location, typically a data center, which may very well violate the laws on user privacy and data confidentiality. Today, many parts of the world demand that technology companies treat user data carefully according to user-privacy laws. The European Union's General Data Protection Regulation (GDPR) is a prime example. In this book, we describe how federated machine learning addresses this problem with novel solutions combining distributed machine learning, cryptography and security, and incentive mechanism design based on economic principles and game theory. We explain different types of privacy-preserving machine learning solutions and their technological backgrounds, and highlight some representative practical use cases. We show how federated learning can become the foundation of next-generation machine learning that caters to technological and societal needs for responsible AI development and application.
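
    A minimal sketch of the federated idea described above: a toy federated-averaging loop over made-up local datasets with a linear model. Real systems add cryptography, secure aggregation, and incentive mechanisms, all of which are omitted here.

        import numpy as np

        rng = np.random.default_rng(0)
        true_w = np.array([2.0, -1.0])

        def make_client(n):
            """Made-up private dataset for one client: y = x.true_w + noise."""
            X = rng.normal(size=(n, 2))
            return X, X @ true_w + 0.1 * rng.normal(size=n)

        clients = [make_client(50) for _ in range(3)]
        w = np.zeros(2)                              # shared global model

        for round_ in range(20):
            local_models = []
            for X, y in clients:                     # each client trains locally on its own data
                w_local = w.copy()
                for _ in range(5):                   # a few local gradient steps
                    grad = 2 * X.T @ (X @ w_local - y) / len(y)
                    w_local -= 0.1 * grad
                local_models.append(w_local)
            # the server only ever sees model updates, never raw data: simple federated averaging
            w = np.mean(local_models, axis=0)

        print(w)   # close to true_w, without any client sharing its data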

  • by Rina Sreedharan
    630,95 kr.

    Graphical models (e.g., Bayesian and constraint networks, influence diagrams, and Markov decision processes) have become a central paradigm for knowledge representation and reasoning in both artificial intelligence and computer science in general. These models are used to perform many reasoning tasks, such as scheduling, planning and learning, diagnosis and prediction, design, hardware and software verification, and bioinformatics. These problems can be stated as the formal tasks of constraint satisfaction and satisfiability, combinatorial optimization, and probabilistic inference. It is well known that the tasks are computationally hard, but research during the past three decades has yielded a variety of principles and techniques that significantly advanced the state of the art. This book provides comprehensive coverage of the primary exact algorithms for reasoning with such models. The main feature exploited by the algorithms is the model's graph. We present inference-based, message-passing schemes (e.g., variable elimination) and search-based, conditioning schemes (e.g., cycle-cutset conditioning and AND/OR search). Each class possesses distinctive characteristics and in particular has different time vs. space behavior. We emphasize the dependence of both schemes on a few graph parameters such as the treewidth, cycle-cutset, and (the pseudo-tree) height. The new edition includes the notion of influence diagrams, which focus on sequential decision making under uncertainty. We believe the principles outlined in the book would serve well in moving forward to approximation and anytime-based schemes. The target audience of this book is researchers and students in the artificial intelligence and machine learning area, and beyond.
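
    A minimal worked instance of the inference-based, variable-elimination scheme mentioned above, on a made-up three-variable chain A - B - C: the marginal P(C) is obtained by summing out A and then B instead of enumerating the full joint.

        import numpy as np

        # Made-up chain-structured model: P(A), P(B|A), P(C|B), all variables binary.
        P_A = np.array([0.6, 0.4])                     # P_A[a]
        P_B_given_A = np.array([[0.7, 0.3],            # P_B_given_A[a, b]
                                [0.2, 0.8]])
        P_C_given_B = np.array([[0.9, 0.1],            # P_C_given_B[b, c]
                                [0.4, 0.6]])

        # Variable elimination: eliminate A first, then B.
        msg_B = P_A @ P_B_given_A          # sum_a P(a) P(b|a)      -> a factor over B
        P_C = msg_B @ P_C_given_B          # sum_b msg_B(b) P(c|b)  -> the marginal over C
        print(P_C)

        # Sanity check against brute-force enumeration of the joint distribution.
        joint = P_A[:, None, None] * P_B_given_A[:, :, None] * P_C_given_B[None, :, :]
        print(joint.sum(axis=(0, 1)))      # same answer, but exponentially more work in general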

  • by Diederik M. Zhou
    405,95 kr.

    Many real-world decision problems have multiple objectives. For example, when choosing a medical treatment plan, we want to maximize the efficacy of the treatment, but also minimize the side effects. These objectives typically conflict, e.g., we can often increase the efficacy of the treatment, but at the cost of more severe side effects. In this book, we outline how to deal with multiple objectives in decision-theoretic planning and reinforcement learning algorithms. To illustrate this, we employ the popular problem classes of multi-objective Markov decision processes (MOMDPs) and multi-objective coordination graphs (MO-CoGs). First, we discuss different use cases for multi-objective decision making, and why they often necessitate explicitly multi-objective algorithms. We advocate a utility-based approach to multi-objective decision making, i.e., that what constitutes an optimal solution to a multi-objective decision problem should be derived from the available information about user utility. We show how different assumptions about user utility and what types of policies are allowed lead to different solution concepts, which we outline in a taxonomy of multi-objective decision problems. Second, we show how to create new methods for multi-objective decision making using existing single-objective methods as a basis. Focusing on planning, we describe two ways of creating multi-objective algorithms: in the inner loop approach, the inner workings of a single-objective method are adapted to work with multi-objective solution concepts; in the outer loop approach, a wrapper is created around a single-objective method that solves the multi-objective problem as a series of single-objective problems. After discussing the creation of such methods for the planning setting, we discuss how these approaches apply to the learning setting. Next, we discuss three promising application domains for multi-objective decision making algorithms: energy, health, and infrastructure and transportation. Finally, we conclude by outlining important open problems and promising future directions.
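
    A minimal sketch of the outer loop approach described above: a single-objective "solver" (here just picking the best of a few made-up candidate policies) is wrapped in a loop over linear scalarization weights to build a set of policies covering different user utilities.

        import numpy as np

        # Made-up candidate policies with vector-valued returns: (treatment efficacy, -side effects).
        policies = {
            "aggressive":   np.array([9.0, -6.0]),
            "balanced":     np.array([6.0, -2.0]),
            "conservative": np.array([3.0, -0.5]),
        }

        def solve_scalarized(weight):
            """Inner single-objective solver: best policy under one linear scalarization."""
            return max(policies, key=lambda name: weight @ policies[name])

        # Outer loop: sweep over scalarization weights to cover different trade-offs.
        coverage_set = set()
        for w1 in np.linspace(0.0, 1.0, 11):
            coverage_set.add(solve_scalarized(np.array([w1, 1.0 - w1])))

        print(coverage_set)   # each policy is optimal for some trade-off between the two objectives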

  • by Zhiyuan Liu
    671,95 kr.

    Graphs are useful data structures in complex real-life applications such as modeling physical systems, learning molecular fingerprints, controlling traffic networks, and recommending friends in social networks. However, these tasks require dealing with non-Euclidean graph data that contains rich relational information between elements and cannot be well handled by traditional deep learning models (e.g., convolutional neural networks (CNNs) or recurrent neural networks (RNNs)). Nodes in graphs usually contain useful feature information that cannot be well addressed in most unsupervised representation learning methods (e.g., network embedding methods). Graph neural networks (GNNs) are proposed to combine the feature information and the graph structure to learn better representations on graphs via feature propagation and aggregation. Due to their convincing performance and high interpretability, GNNs have recently become widely applied graph analysis tools. This book provides a comprehensive introduction to the basic concepts, models, and applications of graph neural networks. It starts with the introduction of the vanilla GNN model. Then several variants of the vanilla model are introduced such as graph convolutional networks, graph recurrent networks, graph attention networks, graph residual networks, and several general frameworks. Variants for different graph types and advanced training methods are also included. As for the applications of GNNs, the book categorizes them into structural, non-structural, and other scenarios, and then it introduces several typical models on solving these tasks. Finally, the closing chapters provide GNN open resources and the outlook of several future directions.
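
    A minimal sketch of the "feature propagation and aggregation" step at the heart of a GNN layer (toy graph, random weights, mean aggregation over neighbors; an illustration, not code from the book):

        import numpy as np

        rng = np.random.default_rng(0)

        # Made-up graph (adjacency matrix) and node features.
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        X = rng.normal(size=(4, 3))          # 4 nodes, 3 input features each
        W = rng.normal(size=(3, 2))          # learnable weight matrix: 3 -> 2 hidden features

        def gnn_layer(A, H, W):
            """One propagation/aggregation step: average each node's neighborhood, then transform."""
            A_hat = A + np.eye(len(A))       # include the node's own features (self-loop)
            H_agg = (A_hat @ H) / A_hat.sum(axis=1, keepdims=True)   # mean aggregation
            return np.maximum(H_agg @ W, 0.0)                        # linear transform + ReLU

        H1 = gnn_layer(A, X, W)
        print(H1.shape)                      # (4, 2): a new representation for every node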

  • by Zhiyuan Sun
    729,95 kr.

    Lifelong Machine Learning, Second Edition is an introduction to an advanced machine learning paradigm that continuously learns by accumulating past knowledge that it then uses in future learning and problem solving. In contrast, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model that is then used in its intended application. It makes no attempt to retain the learned knowledge and use it in subsequent learning. Unlike this isolated system, humans learn effectively with only a few examples precisely because our learning is very knowledge-driven: the knowledge learned in the past helps us learn new things with little data or effort. Lifelong learning aims to emulate this capability, because without it, an AI system cannot be considered truly intelligent. Research in lifelong learning has developed significantly in the relatively short time since the first edition of this book was published. The purpose of this second edition is to expand the definition of lifelong learning, update the content of several chapters, and add a new chapter about continual learning in deep neural networks, which has been actively researched over the past two or three years. A few chapters have also been reorganized to make each of them more coherent for the reader. Moreover, the authors want to propose a unified framework for the research area. Currently, there are several research topics in machine learning that are closely related to lifelong learning (most notably multi-task learning, transfer learning, and meta-learning) because they also employ the idea of knowledge sharing and transfer. This book brings all these topics under one roof and discusses their similarities and differences. Its goal is to introduce this emerging machine learning paradigm and present a comprehensive survey and review of the important research results and latest ideas in the area. This book is thus suitable for students, researchers, and practitioners who are interested in machine learning, data mining, natural language processing, or pattern recognition. Lecturers can readily use the book for courses in any of these related fields.

  • by Michael Genesereth
    416,95 kr.

    General game players are computer systems able to play strategy games based solely on formal game descriptions supplied at "runtime" (in other words, they don't know the rules until the game starts). Unlike specialized game players, such as Deep Blue, general game players cannot rely on algorithms designed in advance for specific games; they must discover such algorithms themselves. General game playing expertise depends on intelligence on the part of the game player and not just intelligence of the programmer of the game player. GGP is an interesting application in its own right. It is intellectually engaging and more than a little fun. But it is much more than that. It provides a theoretical framework for modeling discrete dynamic systems and defining rationality in a way that takes into account problem representation and complexities like incompleteness of information and resource bounds. It has practical applications in areas where these features are important, e.g., in business and law. More fundamentally, it raises questions about the nature of intelligence and serves as a laboratory in which to evaluate competing approaches to artificial intelligence. This book is an elementary introduction to General Game Playing (GGP). (1) It presents the theory of General Game Playing and leading GGP technologies. (2) It shows how to create GGP programs capable of competing against other programs and humans. (3) It offers a glimpse of some of the real-world applications of General Game Playing.

  • by Sonia Dechter
    405,95 kr.

    Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers. We begin, in the introduction, with a unification of the various terminology seen in the literature as well as an outline of the design choices one has in designing an LfD system. Chapter 2 gives a brief survey of the psychology literature that provides insights from human social learning that are relevant to designing robotic social learners. Chapter 3 walks through an LfD interaction, surveying the design choices one makes and state-of-the-art approaches in prior work. First is the choice of input: how the human teacher interacts with the robot to provide demonstrations. Next is the choice of modeling technique. Currently, there is a dichotomy in the field between approaches that model low-level motor skills and those that model high-level tasks composed of primitive actions. We devote a chapter to each of these. Chapter 7 is devoted to interactive and active learning approaches that allow the robot to refine an existing task model. And finally, Chapter 8 provides best practices for evaluation of LfD systems, with a focus on how to approach experiments with human subjects in this domain.

  • by Lirong Costa
    625,95 kr.

    The ubiquitous challenge of learning and decision-making from rank data arises in situations where intelligent systems collect preference and behavior data from humans, learn from the data, and then use the data to help humans make efficient, effective, and timely decisions. Often, such data are represented by rankings. This book surveys some recent progress toward addressing the challenge from the considerations of statistics, computation, and socio-economics. We will cover classical statistical models for rank data, including random utility models, distance-based models, and mixture models. We will discuss and compare classical and state-of-the-art algorithms, such as algorithms based on Minorize-Majorization (MM), Expectation-Maximization (EM), Generalized Method-of-Moments (GMM), rank breaking, and tensor decomposition. We will also introduce principled Bayesian preference elicitation frameworks for collecting rank data. Finally, we will examine socio-economic aspects of statistically desirable decision-making mechanisms, such as Bayesian estimators. This book can be useful in three ways: (1) for theoreticians in statistics and machine learning to better understand the considerations and caveats of learning from rank data, compared to learning from other types of data, especially cardinal data; (2) for practitioners to apply algorithms covered by the book for sampling, learning, and aggregation; and (3) as a textbook for graduate students or advanced undergraduate students to learn about the field. This book requires that the reader has basic knowledge in probability, statistics, and algorithms. Knowledge in social choice would also help but is not required.
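
    A minimal sketch of the Minorize-Majorization (MM) idea mentioned above, fitting a Bradley-Terry model (a simple random utility model for pairwise comparisons) to made-up win counts; the update below is the classical MM step, iterated to convergence, and is only an illustration of the family of algorithms the book covers.

        import numpy as np

        # Made-up pairwise-comparison data: wins[i, j] = number of times item i beat item j.
        wins = np.array([[0, 7, 9],
                         [3, 0, 6],
                         [1, 4, 0]], dtype=float)
        n_comparisons = wins + wins.T                # total comparisons per pair
        total_wins = wins.sum(axis=1)
        gamma = np.ones(3)                           # Bradley-Terry strength parameters

        for _ in range(100):                         # classical MM updates
            denom = np.zeros(3)
            for i in range(3):
                for j in range(3):
                    if i != j and n_comparisons[i, j] > 0:
                        denom[i] += n_comparisons[i, j] / (gamma[i] + gamma[j])
            gamma = total_wins / denom
            gamma /= gamma.sum()                     # fix the scale (strengths are only relative)

        print(gamma)                                 # item 0, which wins most often, gets the largest strength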

  • by Michael Genesereth
    681,95 kr.

    Logic Programming is a style of programming in which programs take the form of sets of sentences in the language of Symbolic Logic. Over the years, there has been growing interest in Logic Programming due to applications in deductive databases, automated worksheets, Enterprise Management (business rules), Computational Law, and General Game Playing. This book introduces Logic Programming theory, current technology, and popular applications. In this volume, we take an innovative, model-theoretic approach to logic programming. We begin with the fundamental notion of datasets, i.e., sets of ground atoms. Given this fundamental notion, we introduce views, i.e., virtual relations; and we define classical logic programs as sets of view definitions, written using traditional Prolog-like notation but with semantics given in terms of datasets rather than implementation. We then introduce actions, i.e., additions and deletions of ground atoms; and we define dynamic logic programs as sets of action definitions. In addition to the printed book, there is an online version of the text with an interpreter and a compiler for the language used in the text and an integrated development environment for use in developing and deploying practical logic programs.
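
    A minimal sketch of the model-theoretic picture described above: a dataset as a set of ground atoms, and a view ("ancestor", defined from "parent") computed by iterating its rules to a fixpoint. The facts are made up, and this plain-Python evaluator stands in for the text's own language and interpreter.

        # Dataset: a set of ground atoms, written here as (relation, arg1, arg2) tuples.
        dataset = {("parent", "ann", "bob"), ("parent", "bob", "cal"), ("parent", "cal", "dee")}

        def ancestor_view(facts):
            """View definition, evaluated to a fixpoint:
               ancestor(X, Y) :- parent(X, Y).
               ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)."""
            derived = {("ancestor", x, y) for (rel, x, y) in facts if rel == "parent"}
            changed = True
            while changed:
                changed = False
                for (_, x, y) in [f for f in facts if f[0] == "parent"]:
                    for (_, y2, z) in list(derived):
                        if y2 == y and ("ancestor", x, z) not in derived:
                            derived.add(("ancestor", x, z))
                            changed = True
            return derived

        print(sorted(ancestor_view(dataset)))   # includes ("ancestor", "ann", "dee")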

  • by Amarnag Lipovetzky
    405,95 kr.

    While labeled data is expensive to prepare, ever-increasing amounts of unlabeled data are becoming widely available. In order to adapt to this phenomenon, several semi-supervised learning (SSL) algorithms, which learn from labeled as well as unlabeled data, have been developed. In a separate line of work, researchers have started to realize that graphs provide a natural way to represent data in a variety of domains. Graph-based SSL algorithms, which bring together these two lines of work, have been shown to outperform the state-of-the-art in many applications in speech processing, computer vision, natural language processing, and other areas of Artificial Intelligence. Recognizing this promising and emerging area of research, this synthesis lecture focuses on graph-based SSL algorithms (e.g., label propagation methods). Our hope is that after reading this book, the reader will walk away with the following: (1) an in-depth knowledge of the current state-of-the-art in graph-based SSL algorithms, and the ability to implement them; (2) the ability to decide on the suitability of graph-based SSL methods for a problem; and (3) familiarity with different applications where graph-based SSL methods have been successfully applied. Table of Contents: Introduction / Graph Construction / Learning and Inference / Scalability / Applications / Future Work / Bibliography / Authors' Biographies / Index
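
    A minimal sketch of the label propagation family of graph-based SSL methods named above, on a made-up graph: the few known labels are repeatedly pushed to neighboring nodes until the soft assignment stabilizes (illustrative toy only):

        import numpy as np

        # Made-up graph: 6 nodes in two clusters; nodes 0 and 5 are labeled (classes 0 and 1).
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
        labeled = {0: 0, 5: 1}

        Y = np.full((6, 2), 0.5)                     # soft label distribution per node
        for node, cls in labeled.items():
            Y[node] = np.eye(2)[cls]

        P = A / A.sum(axis=1, keepdims=True)         # row-normalized transition matrix
        for _ in range(50):
            Y = P @ Y                                # propagate labels from neighbors
            for node, cls in labeled.items():        # clamp the known labels
                Y[node] = np.eye(2)[cls]

        print(Y.argmax(axis=1))                      # nodes 1-2 follow node 0, nodes 3-4 follow node 5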

  • by Felipe Leno Da Silva
    671,95 kr.

    Learning to solve sequential decision-making tasks is difficult. Humans take years exploring the environment essentially in a random way until they are able to reason, solve difficult tasks, and collaborate with other humans towards a common goal. Artificially intelligent agents are like humans in this respect. Reinforcement Learning (RL) is a well-known technique to train autonomous agents through interactions with the environment. Unfortunately, the learning process has a high sample complexity to infer an effective actuation policy, especially when multiple agents are simultaneously actuating in the environment. However, previous knowledge can be leveraged to accelerate learning and enable solving harder tasks. In the same way humans build skills and reuse them by relating different tasks, RL agents might reuse knowledge from previously solved tasks and from the exchange of knowledge with other agents in the environment. In fact, virtually all of the most challenging tasks currently solved by RL rely on embedded knowledge reuse techniques, such as Imitation Learning, Learning from Demonstration, and Curriculum Learning. This book surveys the literature on knowledge reuse in multiagent RL. The authors define a unifying taxonomy of state-of-the-art solutions for reusing knowledge, providing a comprehensive discussion of recent progress in the area. In this book, readers will find a comprehensive discussion of the many ways in which knowledge can be reused in multiagent sequential decision-making tasks, as well as in which scenarios each of the approaches is more efficient. The authors also provide their view of the current low-hanging fruit developments of the area, as well as the still-open big questions that could result in breakthrough developments. Finally, the book provides resources to researchers who intend to join this area or leverage those techniques, including a list of conferences, journals, and implementation tools. This book will be useful for a wide audience and will hopefully promote new dialogues across communities and novel developments in the area.

  • by Roman Barták, Robert A. Morris & K. Brent Venable
    405,95 kr.

    Solving challenging computational problems involving time has been a critical component in the development of artificial intelligence systems almost since the inception of the field. This book provides a concise introduction to the core computational elements of temporal reasoning for use in AI systems for planning and scheduling, as well as systems that extract temporal information from data. It presents a survey of temporal frameworks based on constraints, both qualitative and quantitative, as well as of major temporal consistency techniques. The book also introduces the reader to more recent extensions to the core model that allow AI systems to explicitly represent temporal preferences and temporal uncertainty. This book is intended for students and researchers interested in constraint-based temporal reasoning. It provides a self-contained guide to the different representations of time, as well as examples of recent applications of time in AI systems.
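
    A minimal sketch of the quantitative, constraint-based framework the book covers, the Simple Temporal Network: a constraint "time(v) - time(u) <= w" becomes a weighted edge, and the network is consistent exactly when the resulting distance graph has no negative cycle. The events and durations below are made up, and the check uses plain Floyd-Warshall:

        import numpy as np

        events = ["start", "lunch", "end"]           # made-up time points
        n = len(events)

        # Edge u -> v with weight w encodes: time(v) - time(u) <= w (minutes).
        constraints = [
            ("start", "lunch", 240),   # lunch at most 4 hours after start
            ("lunch", "start", -120),  # ...and at least 2 hours after start
            ("lunch", "end", 180),     # end at most 3 hours after lunch
            ("end",   "lunch", -30),   # ...and at least 30 minutes after lunch
        ]

        idx = {e: i for i, e in enumerate(events)}
        dist = np.full((n, n), np.inf)
        np.fill_diagonal(dist, 0.0)
        for u, v, w in constraints:
            dist[idx[u], idx[v]] = min(dist[idx[u], idx[v]], w)

        # Floyd-Warshall all-pairs shortest paths on the distance graph.
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j])

        consistent = all(dist[i, i] >= 0 for i in range(n))
        print("consistent:", consistent)             # True: no negative cycle, so a schedule exists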

  • by Burr Chen
    404,95 kr.

    The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well-motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms which have been organized into four broad categories, or "query selection frameworks." We also touch on some of the theoretical foundations of active learning, and conclude with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address these open challenges and opportunities. Table of Contents: Automating Inquiry / Uncertainty Sampling / Searching Through the Hypothesis Space / Minimizing Expected Error and Variance / Exploiting Structure in Data / Theory / Practical Considerations
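
    A minimal sketch of the best-known query selection strategy listed in the contents above, uncertainty sampling: with a made-up logistic model over a pool of unlabeled points, the learner queries the instance whose prediction is closest to 50/50 (illustrative only, not code from the book):

        import numpy as np

        rng = np.random.default_rng(0)

        def predict_proba(w, X):
            return 1.0 / (1.0 + np.exp(-(X @ w)))    # current logistic model

        w = np.array([1.5, -0.5])                    # hypothetical model fit on a few labeled points
        pool = rng.normal(size=(20, 2))              # pool of unlabeled instances

        probs = predict_proba(w, pool)
        uncertainty = 1.0 - np.abs(probs - 0.5) * 2  # 1 at p=0.5, 0 at p=0 or p=1
        query_idx = int(np.argmax(uncertainty))

        print("query instance", query_idx, "with predicted p =", round(float(probs[query_idx]), 3))
        # the chosen instance would then be sent to the human oracle for labeling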

  • by Michael Hexmoor
    323,95 kr.

    Artificial systems that think and behave intelligently are among the most exciting and challenging goals of Artificial Intelligence. Action Programming is the art and science of devising high-level control strategies for autonomous systems which employ a mental model of their environment and which reason about their actions as a means to achieve their goals. Applications of this programming paradigm include autonomous software agents, mobile robots with high-level reasoning capabilities, and General Game Playing. These lecture notes give an in-depth introduction to the current state-of-the-art in action programming. The main topics are knowledge representation for actions, procedural action programming, planning, agent logic programs, and reactive, behavior-based agents. The only prerequisite for understanding the material in these lecture notes is some general programming experience and basic knowledge of classical first-order logic. Table of Contents: Introduction / Mathematical Preliminaries / Procedural Action Programs / Action Programs and Planning / Declarative Action Programs / Reactive Action Programs / Suggested Further Reading
