This monograph covers the topic of Wireless for Machine Learning (ML). Although the general intersection of ML and wireless communications is a prolific field of research that has already generated many publications, there has been little review work on Wireless for ML. As data generation increasingly takes place on devices without a wired connection, ML-related traffic will become ubiquitous in wireless networks. Research has shown that traditional wireless protocols are highly inefficient or even unsustainable for supporting ML, which creates the need for new wireless communication methods. This monograph gives an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes within the literature: analog over-the-air computation and digital radio resource management optimized for ML. A comprehensive introduction to these methods is presented, the most important works are reviewed, open problems are highlighted, and application scenarios are discussed.
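The first of the two themes, analog over-the-air computation, can be sketched with a toy example. Assuming a hypothetical noiseless, real-valued multiple-access channel with known channel gains, each device pre-scales its local value (e.g., a gradient entry in federated learning) by the inverse of its channel gain, so that the superposition the channel performs for free delivers the desired sum in a single transmission slot. All names and numbers below are illustrative, not taken from the monograph.

```python
def aircomp_sum(local_values, channel_gains):
    """Return what the receiver observes when all devices transmit simultaneously."""
    # Transmit-side pre-scaling (channel inversion): x_k = v_k / h_k
    transmitted = [v / h for v, h in zip(local_values, channel_gains)]
    # The wireless channel superimposes the signals: y = sum_k h_k * x_k
    return sum(h * x for h, x in zip(channel_gains, transmitted))

values = [0.2, -1.5, 3.0, 0.8]   # one local value per device
gains = [0.9, 1.3, 0.7, 1.1]     # per-device channel gains
assert abs(aircomp_sum(values, gains) - sum(values)) < 1e-9
```

Real designs must additionally handle noise, fading, power constraints and synchronization, which is where much of the surveyed literature lies.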
Personal mobile devices like smartphones and tablets are ubiquitous. People use mobile devices for fun, for work, and for organizing and managing their lives, including their finances. This has become possible because over the past two decades, mobile phones evolved from closed platforms intended for voice calls and messaging into open platforms whose functionality can be extended in myriad ways by third-party developers. Such a wide-ranging scope of use also means widely different security and privacy requirements for those uses. As mobile platforms gradually opened, platform security mechanisms were incorporated into their architectures so that the security and privacy requirements of all stakeholders could be met. The time is therefore right to take a new look at mobile platform security, which is the intent of this monograph. The monograph is divided into four parts: first, the authors look at the how and why of mobile platform security, and this is followed by a discussion of vulnerabilities and attacks. The monograph concludes by looking forward and discussing emerging research that explores ways of dealing with hardware compromise, and building blocks for the next generation of hardware platform security. The authors aim to provide a broad overview of the current state of practice and a glimpse of possible research directions that can be of use to practitioners, decision makers, and researchers. The focus of this monograph is on hardware platform security in mobile devices. Other forms of security, such as OS security, are briefly covered, but from the perspective of motivating hardware platform security. Also, specific high-level attacks such as jail-breaking or rooting are not covered, though the basic attacks described in Part III can be, and often are, used as stepping stones for these high-level attacks.
Hamming distance and the rank metric have long been used in coding theory. The sum-rank metric naturally extends both of these metrics. Codes in the sum-rank metric have attracted significant attention for their applications in distributed storage systems, multishot network coding, streaming over erasure channels, and multi-antenna wireless communication. In this monograph, the authors provide a tutorial introduction to the theory and applications of sum-rank metric codes over finite fields. At the heart of the monograph is the construction of linearized Reed-Solomon codes, a general construction of maximum sum-rank distance (MSRD) codes with polynomial field sizes. These specialize to the classical Reed-Solomon and Gabidulin code constructions in the Hamming and rank metrics, respectively, and admit an efficient Welch-Berlekamp decoding algorithm. The authors proceed to develop applications of these codes in distributed storage systems, network coding, and multi-antenna communication, before surveying other families of codes in the sum-rank metric, including convolutional codes and subfield subcodes, and recent results in the general theory of codes in the sum-rank metric. This tutorial provides the reader with a comprehensive introduction to both the theory and practice of this important class of codes used in many storage and communication systems. It will be a valuable resource for students, researchers and practising engineers alike.
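The relationship between the three metrics can be made concrete with a small sketch of the sum-rank weight over GF(2). A codeword is split into blocks, each viewed as a matrix over the base field; the weight is the sum of the blocks' ranks. Blocks that are 1x1 matrices recover the Hamming metric, and a single block recovers the rank metric. The bitmask representation and helper names below are illustrative choices, not notation from the monograph.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are given as int bitmasks."""
    pivots = []
    for row in rows:
        for p in pivots:
            # XOR against a pivot only if it clears the leading bit
            row = min(row, row ^ p)
        if row:
            pivots.append(row)
    return len(pivots)

def sum_rank_weight(blocks):
    """Sum-rank weight: total rank of the per-block matrices."""
    return sum(gf2_rank(block) for block in blocks)

# Two blocks: the first has two equal rows (rank 1), the second has two
# independent rows (rank 2), so the sum-rank weight is 3.
blocks = [[0b11, 0b11], [0b10, 0b01]]
assert sum_rank_weight(blocks) == 3
```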
The move toward green energy technologies introduces new technical challenges to modern interconnected power systems. To address these challenges, it is necessary to understand the basics of power systems and the new technologies integrated into them. Among the emerging technologies, power electronics play a significant role in a variety of applications. Depending on how they are designed, controlled and operated, power electronic converters can strengthen or degrade the performance of the whole system. This monograph provides an overview of modern electric energy systems with increasing power electronics integration, from the generation, operation and control perspectives. The basics of power systems are introduced, and the fundamentals of the transition from traditional centralized power systems to modern power systems are discussed. Thereafter, the dominant clean energy generation technologies are introduced, and the basics of power converter topologies and control structures are explained. Lastly, the concept of reliability assessment in power electronics-dominated power systems is covered. Major technical challenges that degrade overall system performance and reliability are also addressed, and feasible solutions are explained.
One of the open questions in neuroscience is the function of the cerebellum, a major brain region involved in the regulation of the motor systems, speech, emotion, and other cognitive functions of the body. In this monograph the author makes and tests the hypothesis that the primary function of the cerebellum is regulation in the presence of exogenous reference and disturbance signals. In pursuing this goal, the author provides a brief historical overview of computational theories of cerebellar function and of the relevant parts of control theory in the area of regulator theory, and then presents a chronological review of subjects in control theory related to the hypothesis. The author begins with classical regulator theory and highlights some aspects that are not suited to the modeling of the cerebellum. Then adaptive control theory is reviewed in terms of error models. To test the hypothesis on cerebellar function, the author applies adaptive internal model designs to several motor systems regulated by the cerebellum. These include the slow eye movement systems: the vestibulo-ocular reflex, gaze holding, smooth pursuit, and the optokinetic system. Finally, discrete-time behaviors regulated by the cerebellum are investigated. In all, this monograph provides a unifying framework to explain how the cerebellum can contribute to so many different systems in the body. It is an important, comprehensive study of modeling the cerebellum using control theory techniques, and will be of interest to neuroscientists and control theorists working on understanding the function of the human brain.
Data storage has grown to the point where distributed storage across a number of systems is now commonplace. This has increased the complexity of ensuring that data loss does not occur, particularly when individual nodes within the storage system fail. Redundancy has been the main tool for combating such failures, but as data volumes grew enormously, minimizing the storage overhead of redundancy became a major concern. In large data centers, a third concern arose: the need for efficient recovery from the failure of a single storage unit. In this monograph, the authors give a comprehensive overview of the role that different types of codes play in addressing these issues in large distributed storage systems. They introduce the reader to regenerating codes, locally recoverable codes and locally regenerating codes, the three main classes of codes used in such systems. They give an exhaustive overview of how these codes were created, their uses, and the developments and improvements of the codes over the last decade. This in-depth review gives the reader an accessible and complete overview of the modern codes used in distributed storage systems today. It is a one-stop source for students, researchers and practitioners working on any such system.
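The trade-off between redundancy and overhead can be illustrated with the simplest possible erasure code: a single XOR parity node protecting k data nodes against any one node failure, at storage overhead 1/k instead of the 2x (or more) overhead of full replication. Regenerating and locally repairable codes, the subject of the monograph, refine this idea to also reduce repair bandwidth and repair locality. The code below is a generic sketch, not a construction from the monograph.

```python
from functools import reduce

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(data_nodes):
    """Store k data blocks plus one XOR parity block (n = k + 1 nodes)."""
    parity = reduce(xor_blocks, data_nodes)
    return list(data_nodes) + [parity]

def repair(nodes, failed_index):
    """Rebuild any single failed node by XOR-ing all survivors."""
    survivors = [blk for i, blk in enumerate(nodes) if i != failed_index]
    return reduce(xor_blocks, survivors)

data = [b"abcd", b"wxyz", b"1234"]
stored = add_parity(data)
assert repair(stored, 1) == b"wxyz"   # a lost data node is recovered
assert repair(stored, 3) == stored[3] # the parity node itself is recoverable too
```

Note that this toy code tolerates only one failure; the codes surveyed in the monograph handle multiple failures while keeping repair cost low.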
Over the last decade, Approximate Message Passing (AMP) algorithms have become extremely popular in a range of structured high-dimensional statistical problems. Many of the original ideas behind AMP were developed in the physics and engineering literature and have recently been extended for use in computer science and machine learning. In this tutorial the authors give a comprehensive and rigorous introduction to what AMP can offer, unifying and formalizing the core concepts within the large body of recent work in the area. They lead the reader through the basic concepts of AMP before introducing low-rank matrix estimation. The authors conclude by covering generalized models. To complete the picture for researchers, proofs, technical remarks and mathematical background are also provided. This tutorial is an in-depth introduction to Approximate Message Passing for students and researchers new to the topic.
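As a minimal sketch of the mechanics, one AMP iteration for sparse regression (y = Ax + noise) with the soft-thresholding denoiser looks as follows. The crucial ingredient, compared with plain iterative thresholding, is the Onsager correction term added to the residual. The matrix, threshold and sizes here are illustrative only; AMP's guarantees hold in the large-system limit for i.i.d. (sub-)Gaussian sensing matrices, which this toy example does not attempt to reproduce.

```python
import math

def soft(v, t):
    """Soft-thresholding denoiser eta(v; t)."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def amp_step(A, y, x, z, theta):
    """One AMP iteration: A is an m x n list-of-lists, x the estimate, z the residual."""
    m, n = len(A), len(A[0])
    # Pseudo-data: x + A^T z
    r = [x[j] + sum(A[i][j] * z[i] for i in range(m)) for j in range(n)]
    x_new = [soft(rj, theta) for rj in r]
    # Onsager term: (fraction of nonzeros, i.e., the average derivative
    # of the denoiser) times the previous residual
    b = sum(1 for v in x_new if v != 0.0) / m
    z_new = [y[i] - sum(A[i][j] * x_new[j] for j in range(n)) + b * z[i]
             for i in range(m)]
    return x_new, z_new

xs, zs = amp_step([[1.0, 0.0], [0.0, 1.0]], [2.0, 0.0], [0.0, 0.0], [2.0, 0.0], 0.5)
assert xs == [1.5, 0.0]
```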
Common information quantifies what is shared between two or more information sources. It is ubiquitous in information theory and related areas such as theoretical computer science and discrete probability. However, because there are multiple notions of common information, a unified understanding of the deep interconnections between them is lacking. In this monograph the authors fill this gap by leveraging a small set of mathematical techniques that are applicable across seemingly disparate problems. The reader is introduced in Part I to the operational tasks and properties associated with the two main measures of common information, namely Wyner's and Gács-Körner-Witsenhausen's (GKW). In the subsequent two parts, the authors take a deeper look at each of these. In Part II they discuss extensions of Wyner's common information from the perspective of distributed source simulation, including the Rényi common information. In Part III, GKW common information comes under the spotlight. Having laid the groundwork, the authors seamlessly transition to discussing their connections to various conjectures in information theory and discrete probability. This monograph provides students and researchers in information theory with a comprehensive resource for understanding common information and points the way forward to creating a unified set of techniques applicable over a wide range of problems.
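Of the two measures, GKW common information has a particularly concrete combinatorial description for finite alphabets: it equals the entropy of the "common part", i.e., of the connected components of the bipartite graph that links x to y whenever p(x, y) > 0. The sketch below implements this standard characterization; the data structures and names are illustrative, not taken from the monograph.

```python
import math
from collections import defaultdict

def gkw_common_information(pmf):
    """pmf: dict mapping (x, y) -> probability. Returns GKW common information in bits."""
    # Union-find over tagged symbols ('x', x) and ('y', y)
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    # Connect x and y whenever they co-occur with positive probability
    for (x, y), p in pmf.items():
        if p > 0:
            union(('x', x), ('y', y))
    # Probability mass of each connected component
    mass = defaultdict(float)
    for (x, y), p in pmf.items():
        if p > 0:
            mass[find(('x', x))] += p
    # Entropy of the common part
    return -sum(p * math.log2(p) for p in mass.values() if p > 0)

# Two disconnected blocks of mass 1/2 each: exactly 1 bit is common.
pmf = {(0, 'a'): 0.25, (0, 'b'): 0.25, (1, 'c'): 0.25, (1, 'd'): 0.25}
assert abs(gkw_common_information(pmf) - 1.0) < 1e-12
```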
Natural language interfaces provide an easy way to query and interact with data and enable non-technical users to investigate data sets without the need to know a query language. Recent advances in natural language understanding and processing have resulted in a renewed interest in natural language interfaces to data. The main challenges in natural language querying are identifying the entities involved in the user utterance, connecting the different entities in a meaningful way over the underlying data source to interpret user intents, and generating a structured query. There are two main approaches in the literature for interpreting a user's natural language query. The first comprises rule-based systems that make use of semantic indices, ontologies, and knowledge graphs to identify the entities in the query, understand the intended relationships between those entities, and utilize grammars to generate the target queries. The second comprises hybrid approaches that combine rule-based techniques with deep learning models. Conversational interfaces are the natural next step beyond one-shot natural language querying, exploiting the query context across multiple turns of a conversation for disambiguation. In this monograph, the authors review the rule-based and hybrid technologies that are used in natural language interfaces and survey the different approaches to natural language querying. They also describe conversational interfaces for data analytics and discuss several benchmarks used for natural language querying research and evaluation. The monograph concludes with a discussion of challenges that need to be addressed before these systems can be widely adopted.
This monograph introduces tracking on the web to readers with little or no previous knowledge of the topic. Tracking is the collection of data about an individual's activity in multiple contexts and the retention, use, or sharing of data derived from that activity outside the context in which it occurred. This work covers the topic primarily from the perspective of computer science and human-computer interaction but also includes relevant law and policy aspects. It primarily focuses on tracking as a near-ubiquitous commercial practice that emerged through a symbiotic relationship with websites, mobile apps, and other internet-based services. It aims to provide an overarching narrative spanning this large research space. The monograph starts by introducing the concept of tracking, and provides a short history of the major developments of tracking on the web. It presents research covering the detection, measurement and analysis of web tracking technologies, and delves into the countermeasures against web tracking as well as studies into end-user perspectives on tracking. The work also focuses on tracking on smart devices including smartphones and the Internet of Things, and concludes with emerging issues affecting the future of tracking across these different platforms.
In this monograph, an overview of recent developments and the state-of-the-art in image/video restoration and super-resolution (SR) using deep learning is presented. Deep learning has made a significant impact not only on computer vision and natural language processing but also on classical signal processing problems such as image/video restoration/SR and compression. Recent advances in neural architectures have led to significant improvements in the performance of learned image/video restoration and SR. An important benefit of data-driven deep learning approaches is that neural models can be optimized for any differentiable loss function, including visual perceptual loss functions, leading to perceptual video restoration and SR, which cannot be easily handled by traditional model-based approaches. The publication starts with a problem statement and a short discussion of traditional vs. data-driven solutions. Thereafter, recent advances in neural architectures are considered, and the loss functions and evaluation criteria for image/video restoration and SR are discussed. Learned image restoration and SR are then considered, framed either as learning a mapping from the space of degraded images to that of ideal images based on the universal approximation theorem, or as learning a generative model that captures the probability distribution of ideal images. Practical problems in applying supervised training to real-life restoration and SR are also covered, along with models proposed to solve them. In the section on learned video SR, approaches to exploiting temporal correlations in learned video processing are covered, followed by the perceptual optimization of network parameters to obtain natural texture and motion. A comparative discussion of various approaches concludes the publication.
Methods for image recovery and reconstruction aim to estimate a good-quality image from noisy, incomplete, or indirect measurements. Such methods are also known as computational imaging. New methods for image reconstruction attempt to lower complexity, decrease data requirements, or improve image quality for a given input data quality. Image reconstruction typically involves optimizing a cost function to recover a vector of unknown variables that agrees with the collected measurements and prior assumptions. State-of-the-art image reconstruction methods learn these prior assumptions from training data using various machine learning techniques, such as bilevel methods. This review discusses methods for learning parameters for image reconstruction problems using bilevel formulations; it lies at the intersection of a specific machine learning method, bilevel optimization, and a specific application, filter learning for image reconstruction. The review presents multiple perspectives to motivate the use of bilevel methods and to make them more easily accessible to different audiences. Various ways to optimize the bilevel problem are covered, with the pros and cons of the proposed approaches. Finally, an overview of bilevel applications in image reconstruction is provided.
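The bilevel structure can be seen in a deliberately tiny example. Assume a scalar ridge "reconstruction" so the lower-level problem has a closed form: x*(lam) = a*y / (a*a + lam) minimizes (a*x - y)^2 + lam*x^2. The upper level then picks the regularization parameter lam to minimize the error against ground-truth images over training pairs. A plain grid search stands in here for the gradient-based bilevel solvers surveyed in the review; all numbers are made up for illustration.

```python
def lower_level(y, a, lam):
    """Closed-form minimizer of (a*x - y)^2 + lam*x^2 (the 'reconstruction')."""
    return a * y / (a * a + lam)

def upper_level_loss(lam, training_pairs, a):
    """Training loss: squared error of the reconstruction vs. ground truth."""
    return sum((lower_level(y, a, lam) - x_true) ** 2
               for x_true, y in training_pairs)

a = 1.0
# (ground truth, noisy measurement) pairs
pairs = [(1.0, 1.3), (-2.0, -1.6), (0.5, 0.9)]
grid = [i / 100 for i in range(0, 201)]
best_lam = min(grid, key=lambda lam: upper_level_loss(lam, pairs, a))
assert 0.0 < best_lam < 0.1   # a small but nonzero amount of regularization wins
```

In real filter learning, the lower-level problem has no closed form, which is why the review spends much of its length on ways to differentiate through (or approximate) the lower-level solver.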
Rank-metric codes date back to the 1970s and today play a vital role in many areas of coding theory and cryptography. In this survey the authors provide a comprehensive overview of the known properties of rank-metric codes and their applications. The authors begin with an accessible and complete introduction to rank-metric codes, their properties and their decoding. They then discuss at length rank-metric code-based quantum-resistant encryption and authentication schemes. The application of rank-metric codes to distributed data storage is also outlined. Finally, the constructions of network codes based on maximum rank distance (MRD) codes, constructions of subspace codes by lifting rank-metric codes, bounds on the cardinality, and the list decoding capability of subspace codes are covered in depth. Rank-Metric Codes and Their Applications provides the reader with a concise, yet complete, general introduction to rank-metric codes, explains their most important applications, and highlights their relevance to these areas of research.
Image Alignment and Stitching: A Tutorial reviews image alignment and image stitching algorithms. Image alignment algorithms can discover the correspondence relationships among images with varying degrees of overlap. They are ideally suited for applications such as video stabilization, summarization, and the creation of panoramic mosaics. Image stitching algorithms take the alignment estimates produced by such registration algorithms and blend the images in a seamless manner, taking care to deal with potential problems such as blurring or ghosting caused by parallax and scene movement as well as varying image exposures. Image Alignment and Stitching: A Tutorial reviews the basic motion models underlying alignment and stitching algorithms, describes effective direct (pixel-based) and feature-based alignment algorithms, and describes blending algorithms used to produce seamless mosaics. It closes with a discussion of open research problems in the area. Image Alignment and Stitching: A Tutorial is an invaluable resource for anyone planning or conducting research in this particular area, or in computer vision generally. The essentials of the topic are presented in a tutorial style, and an extensive bibliography guides the reader to further reading.
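The idea behind direct (pixel-based) alignment can be shown in a minimal form. Assuming a pure integer-translation motion model in 1D, each candidate shift is scored by the sum of squared differences (SSD) over the overlapping samples, and the best-scoring shift is kept. Real stitchers use 2D affine or projective models, sub-pixel refinement and feature matching, but the principle of "choose the motion that minimizes a photometric error" is the same. The signals and function names below are illustrative.

```python
def best_shift(ref, moving, max_shift):
    """Brute-force direct alignment: the integer shift minimizing mean SSD."""
    def mean_ssd(shift):
        overlap = [(ref[i], moving[i - shift])
                   for i in range(len(ref))
                   if 0 <= i - shift < len(moving)]
        # Normalize by overlap size so large shifts are not unfairly favored
        return sum((a - b) ** 2 for a, b in overlap) / len(overlap)
    return min(range(-max_shift, max_shift + 1), key=mean_ssd)

ref = [0, 0, 1, 5, 9, 5, 1, 0, 0]
moving = [1, 5, 9, 5, 1, 0, 0, 0, 0]   # same bump, shifted left by 2
assert best_shift(ref, moving, 4) == 2
```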
Offers new perspectives on advanced (cyber)security innovation (eco)systems, covering several key viewpoints. The book provides insights into new security technologies and methods for advanced cyber threat intelligence, detection and mitigation.
Reviews the information relaxation approach, which works by reducing a complex stochastic dynamic program to a series of scenario-specific deterministic optimization problems solved within a Monte Carlo simulation.
Lists the seminal and pioneering research efforts conducted by a small group of scholars from different disciplines who challenged traditional thought on small business and entrepreneurship; these pioneers and their specific contributions transformed our thinking about entrepreneurs.