
Books in the Advances in Database Systems series

  • by Xu Yang
    878,95 kr.

    Access to Mobile Services focuses on methods for accessing broadcast-based M-services from multiple wireless channels. This book presents a novel infrastructure that provides a multi-channel broadcast framework for mobile users to effectively discover and access composite M-services. Multi-channel algorithms are proposed for efficiently accessing composite services. Access to Mobile Services provides an in-depth survey of wireless data access and motivates the need to treat mobile services differently. A wireless adaptation of Service Oriented Architecture (SOA) is also covered. Designed for researchers and practitioners who work in the general area of mobile services, this book is also suitable for advanced-level students in computer science. Forewords by Michael P. Papazoglou, Tilburg University, The Netherlands, and Fabio Casati, University of Trento, Italy.

  • by Antonio Badia
    1.064,95 kr.

    The database industry is a multi-billion, worldwide, all-encompassing part of the software world. Quantifiers in Action: Generalized Quantification in Query, Logical and Natural Languages introduces a query language called GQs (Generalized Quantification in Query). Most query languages are simply versions of First Order Logic (FOL); GQs are an extension of the idea of a quantifier in FOL, and they are a perfect example of a practical theory within databases. This book provides a brief background in logic, introduces the concept of GQs, and then develops a query language based on them. Using a query language with generalized quantifiers, the reader explores the efficient implementation of the concept, always a primary consideration in databases. This professional book also includes several extensions for use with documents employing question-and-answer techniques. Designed for practitioners and researchers within the database management field; also suitable for advanced-level students in computer science. (An illustrative sketch of the generalized-quantifier idea appears after this list.)

  • by Ioannis Vlahavas
    1.304,95 kr.

    Knowledge Base Systems are an integration of conventional database systems with Artificial Intelligence techniques. They provide inference capabilities to the database system by encapsulating the knowledge of the application domain within the database. Knowledge is the most valuable of all corporate resources that must be captured, stored, re-used and continuously improved, in much the same way as database systems were important in the previous decade. Flexible, extensible, and yet efficient Knowledge Base Systems are needed to meet the increasing demand for knowledge-based applications, which will become a significant market in the next decade. Knowledge can be expressed in many static and dynamic forms, the most prominent being domain objects, their relationships, and their rules of evolution and transformation. It is important to express and seamlessly use all types of knowledge in a single Knowledge Base System. Parallel, Object-Oriented, and Active Knowledge Base Systems presents in detail the features that a Knowledge Base System should have in order to fulfill the above requirements. It covers in detail the following topics: integration of deductive, production, and active rules in sequential database systems; integration and inter-operation of multiple rule types in the same Knowledge Base System; parallel rule matching and execution, for deductive, production, and active rules, in parallel Expert, Knowledge Base, and Database Systems; and an in-depth description of a Parallel, Object-Oriented, and Active Knowledge Base System that integrates all rule paradigms into a single database system without hindering performance. Parallel, Object-Oriented, and Active Knowledge Base Systems is intended as a graduate-level text for a course on Knowledge Base Systems and as a reference for researchers and practitioners in the areas of database systems, knowledge base systems and Artificial Intelligence.

  • by Guozhu Dong & Jian Pei
    1.105,95 kr.

  • by Pavel Zezula
    856,95 - 1.271,95 kr.

    The area of similarity searching is a very hot topic for both research and commercial applications. Current data processing applications use data with considerably less structure and much less precise queries than traditional database systems. Examples are multimedia data like images or videos that offer query-by-example search, product catalogs that provide users with preference-based search, scientific data records from observations or experimental analyses such as biochemical and medical data, or XML documents that come from heterogeneous data sources on the Web or in intranets and thus do not exhibit a global schema. Such data can neither be ordered in a canonical manner nor meaningfully searched by precise database queries that would return exact matches. This novel situation is what has given rise to similarity searching, also referred to as content-based or similarity retrieval. The most general approach to similarity search, still allowing construction of index structures, is modeled in metric space. In this book, Prof. Zezula and his co-authors provide the first monograph on this topic, describing its theoretical background as well as the practical search tools of this innovative technology. (A small sketch of metric-space range search appears after this list.)

  • by Alexander Thomasian
    1.304,95 kr.

    Database Concurrency Control: Methods, Performance and Analysis is a review of developments in concurrency control methods for centralized database systems, with a quick digression into distributed databases and multicomputers, the emphasis being on performance. The main goals of Database Concurrency Control: Methods, Performance and Analysis are to succinctly specify various concurrency control methods; to describe models for evaluating the relative performance of concurrency control methods; to point out problem areas in earlier performance analyses; to introduce queuing network models to evaluate the baseline performance of transaction processing systems; to provide insights into the relative performance of transaction processing systems; to illustrate the application of basic analytic methods to the performance analysis of various concurrency control methods; to review transaction models which are intended to relieve the effect of lock contention; to provide guidelines for improving the performance of transaction processing systems due to concurrency control; and to point out areas for further investigation. This monograph should be of direct interest to computer scientists doing research on concurrency control methods for high performance transaction processing systems, designers of such systems, and professionals concerned with improving (tuning) the performance of transaction processing systems.

  • by Bruce McNutt
    1.304,95 kr.

  • by Vipul Kashyap
    1.304,95 kr.

    Information intermediation is the foundation stone of some of the most successful Internet companies, and is perhaps second only to the Internet infrastructure companies. On the heels of information integration and interoperability, this book on information brokering discusses the next step in information interoperability and integration. The emerging Internet economy based on burgeoning B2B and B2C trading will soon demand semantics-based information intermediation for its feasibility and success. B2B ventures are involved in the "rationalization" of new vertical markets and construction of domain-specific product catalogs. This book provides approaches for re-use of existing vocabularies and domain ontologies as a basis for this rationalization and provides a framework based on inter-ontology interoperation. Infrastructural trade-offs that identify optimizations in performance and scalability of web sites will soon give way to information-based trade-offs as alternate rationalization schemes come into play and the necessity of interoperating across these schemes is realized. Information Brokering Across Heterogeneous Digital Data's intended readers are researchers, software architects and CTOs, advanced product developers dealing with information intermediation issues in the context of e-commerce (B2B and B2C), information technology professionals in various vertical markets (e.g., geo-spatial information, medicine, auto), and all librarians interested in information brokering.

  • by W. Eric Wong
    1.304,95 kr.

    Extensive research and development has produced mutation tools for languages such as Fortran, Ada, C, and IDL; empirical evaluations comparing mutation with other test adequacy criteria; empirical evidence and theoretical justification for the coupling effect; and techniques for speeding up mutation testing using various types of high-performance architectures. Mutation has received the attention of software developers and testers in such diverse areas as network protocols and nuclear simulation. Mutation Testing for the New Century brings together cutting-edge research results in mutation testing from a wide range of researchers. This book provides answers to key questions related to mutation and raises questions yet to be answered. It is an excellent resource for researchers, practitioners, and students of software engineering. (A small sketch of the mutation idea appears after this list.)

  • by Nauman Chaudhry
    1.066,95 kr.

  • by Wei Wang
    878,95 kr.

    In many applications, e.g., bioinformatics, web access traces, system utilization logs, etc., the data is naturally in the form of sequences. It has been of great interest to analyze the sequential data to find their inherent characteristics. The sequential pattern is one of the most widely studied models to capture such characteristics. Examples of sequential patterns include, but are not limited to, protein sequence motifs and web page navigation traces. In this book, we focus on sequential pattern mining. To meet the different needs of various applications, several models of sequential patterns have been proposed. We study not only the mathematical definitions and application domains of these models, but also the algorithms on how to effectively and efficiently find these patterns. The objective of this book is to provide computer scientists and domain experts such as life scientists with a set of tools for analyzing and understanding the nature of various sequences by (1) identifying the specific model(s) of sequential patterns that are most suitable, and (2) providing an efficient algorithm for mining these patterns. From Chapter 1, Introduction: Data mining is the process of extracting implicit knowledge and discovering interesting characteristics and patterns that are not explicitly represented in the databases. The techniques can play an important role in understanding data and in capturing intrinsic relationships among data instances. Data mining has been an active research area in the past decade and has proved to be very useful. (A small sketch of sequential-pattern support counting appears after this list.)

  • by Zongmin Ma
    1.069,95 kr.

    Fuzzy Database Modeling with XML aims to provide a single record of current research and practical applications in fuzzy databases. This volume is the outgrowth of research the author has conducted in recent years. Fuzzy Database Modeling with XML introduces state-of-the-art information to database research, while at the same time serving the information technology professional faced with a non-traditional application that defeats conventional approaches. The research on fuzzy conceptual models and fuzzy object-oriented databases is receiving increasing attention, in addition to fuzzy relational database models. With rapid advances in network and Internet techniques as well, databases have been applied under the environment of distributed information systems; it is essential in this case to integrate multiple fuzzy database systems. Since databases are commonly employed to store and manipulate XML data, additional requirements are necessary to model fuzzy information with XML; the book then maps the fuzzy XML model to fuzzy databases. Very few efforts at investigating these issues have thus far occurred. Fuzzy Database Modeling with XML is designed for a professional audience of researchers and practitioners in industry. This book is also suitable for graduate-level students in computer science.

  • by Nauman Chaudhry
    878,95 kr.

    Researchers in data management have recently recognized the importance of a new class of data-intensive applications that requires managing data streams, i.e., data composed of continuous, real-time sequences of items. Streaming applications pose new and interesting challenges for data management systems. Such application domains require queries to be evaluated continuously, as opposed to the one-time evaluation of a query for traditional applications. Streaming data sets grow continuously and queries must be evaluated on such unbounded data sets. These, as well as other challenges, require a major rethink of almost all aspects of traditional database management systems to support streaming applications. Stream Data Management comprises eight invited chapters by researchers active in stream data management. The collected chapters provide an exposition of algorithms, languages, as well as systems proposed and implemented for managing streaming data. Stream Data Management is designed to appeal to researchers or practitioners already involved in stream data management, as well as to those starting out in this area. This book is also suitable for graduate students in computer science interested in learning about stream data management. (A small sketch of a continuous sliding-window query appears after this list.)

  • - Technology, Human Factors, and Policy
    by William J. McIver Jr.
    1.730,95 kr.

    Advances In Digital Government presents a collection of in-depth articles that addresses a representative cross-section of the matrix of issues involved in implementing digital government systems. These articles constitute a survey of both the technical and policy dimensions related to the design, planning and deployment of digital government systems. The research and development projects within the technical dimension represent a wide range of governmental functions, including the provisioning of health and human services, management of energy information, multi-agency integration, and criminal justice applications. The technical issues dealt with in these projects include database and ontology integration, distributed architectures, scalability, and security and privacy. The human factors research emphasizes compliance with access standards for the disabled and the policy articles contain both conceptual models for developing digital government systems as well as real management experiences and results in deploying them. Advances In Digital Government presents digital government issues from the perspectives of different communities and societies. This geographic and social diversity illuminates a unique array of policy and social perspectives, exposing practitioners to new and useful ways of thinking about digital government.

  • by Abdelsalam A. Helal, Abdelsalam A. Heddaya & Bharat B. Bhargava
    1.730,95 kr.

  • by Richard Y. Wang
    1.730,95 kr.

    Data Quality provides an exposé of research and practice in the data quality field for technically oriented readers. It is based on the research conducted at the MIT Total Data Quality Management (TDQM) program and work from other leading research institutions. This book is intended primarily for researchers, practitioners, educators and graduate students in the fields of Computer Science, Information Technology, and other interdisciplinary areas. It forms a theoretical foundation that is both rigorous and relevant for dealing with advanced issues related to data quality. Written with the goal of providing an overview of the cumulated research results from the MIT TDQM research perspective as it relates to database research, this book is an excellent introduction for Ph.D. students who wish to further pursue their research in the data quality area. It is also an excellent theoretical introduction for IT professionals who wish to gain insight into theoretical results in the technically oriented data quality area, and apply some of the key concepts to their practice.

  • by Kian-Lee Tan & Beng Chin Ooi
    1.304,95 kr.

  • by Shu-Ching Chen
    1.304,95 kr.

    Semantic Models for Multimedia Database Searching and Browsing begins with the introduction of multimedia information applications, the need for the development of the multimedia database management systems (MDBMSs), and the important issues and challenges of multimedia systems. The temporal relations, the spatial relations, the spatio-temporal relations, and several semantic models for multimedia information systems are also introduced. In addition, this book discusses recent advances in multimedia database searching and multimedia database browsing. More specifically, issues such as image/video segmentation, motion detection, object tracking, object recognition, knowledge-based event modeling, content-based retrieval, and key frame selections are presented for the first time in a single book. Two case studies consisting of two semantic models are included in the book to illustrate how to use semantic models to design multimedia information systems. Semantic Models for Multimedia Database Searching and Browsing is an excellent reference and can be used in advanced level courses for researchers, scientists, industry professionals, software engineers, students, and general readers who are interested in the issues, challenges, and ideas underlying the current practice of multimedia presentation, multimedia database searching, and multimedia browsing in multimedia information systems.

  • by Ahmed K. Elmagarmid, Haitao Jiang, Abdelsalam A. Helal, et al.
    1.304,95 kr.

  • by Thomas A. Mueck
    1.304,95 kr.

    Object-oriented database management systems (OODBMS) are used to implement and maintain large object databases on persistent storage. Regardless of whether the underlying database model follows the object-oriented, the relational or the object-relational paradigm, a key feature of any DBMS product is content-based access to data sets. On the one hand this feature provides user-friendly query interfaces based on predicates to describe the desired data. On the other hand it poses challenging questions regarding DBMS design and implementation as well as the application development process on top of the DBMS. The reason for the latter is that the actual query performance depends on a technically meaningful use of access support mechanisms. In particular, if chosen and applied properly, such a mechanism speeds up the execution of predicate-based queries. In the object-oriented world, such queries may involve arbitrarily complex terms referring to inheritance hierarchies and aggregation paths. These features are attractive at the application level; however, they increase the complexity of appropriate access support mechanisms, which are known to be technically non-trivial in the relational world.

  • by Gerd Wagner
    1.304,95 kr.

  • by Vijay Kumar & Sang Hyuk Son
    1.304,95 kr.

  • by Evaggelia Pitoura
    1.730,95 kr.

    Earth date, August 11, 1997. "Beam me up, Scottie!" "We cannot do it! This is not Star Trek's Enterprise. This is early-years Earth." True, this is not yet the era of Star Trek; we cannot beam Captain James T. Kirk or Captain Jean-Luc Picard or an apple or anything else anywhere. What we can do, though, is beam information about Kirk or Picard or an apple or an insurance agent. We can beam a record of a patient, the status of an engine, a weather report. We can beam this information anywhere, to mobile workers, to field engineers, to a truck loading apples, to ships crossing the oceans, to web surfers. We have reached a point where the promise of information access anywhere and anytime is close to realization. The enabling technology, wireless networks, exists; what remains to be achieved is providing the infrastructure and the software to support the promise. Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further to provide hyper-linked information access and global computing. However, restricting access stations to physical locations limits the boundary of the vision.

  • by Alex A. Freitas & Simon H. Lavington
    1.730,95 kr.

  • by Guoqing Chen
    1.304,95 kr.

  • by Justin Zobel, Elisa Bertino, Kian-Lee Tan, et al.
    1.304,95 kr.

  • by Vijay Atluri
    1.304,95 kr.

    Information security is receiving a great deal of attention as computers increasingly process more and more sensitive information. A multilevel secure database management system (MLS DBMS) is designed to store, retrieve and process information in compliance with certain mandatory security requirements, essential for protecting sensitive information from unauthorized access, modification and abuse. Such systems are characterized by data objects labeled at different security levels and accessed by users cleared to those levels. Unless transaction processing modules for these systems are designed carefully, they can be exploited to leak sensitive information to unauthorized users. In recent years, considerable research has been devoted to the area of multilevel secure transactions that has impacted the design and development of trusted MLS DBMS products. Multilevel Secure Transaction Processing presents the progress and achievements made in this area. The book covers state-of-the-art research in developing secure transaction processing for popular MLS DBMS architectures, such as kernelized, replicated, and distributed architectures, and advanced transaction models such as workflows, long-duration and nested models. Further, it explores the technical challenges that require future attention. Multilevel Secure Transaction Processing is an excellent reference for researchers and developers in the area of multilevel secure database systems and may be used in advanced-level courses in database security, information security, advanced database systems, and transaction processing. (A small sketch of label-based access checks appears after this list.)

  • by Nandit R. Soparkar
    878,95 kr.

    Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.

  • by Aris Gkoulalas-Divanis
    878,95 kr.

    Privacy and security risks arising from the application of different data mining techniques to large institutional data repositories have been solely investigated by a new research domain, the so-called privacy preserving data mining. Association rule hiding is a new technique in data mining which studies the problem of hiding sensitive association rules from within the data. Association Rule Hiding for Data Mining addresses the optimization problem of "hiding" sensitive association rules, which due to its combinatorial nature admits a number of heuristic solutions that are proposed and presented in this book. Exact solutions of increased time complexity that have been proposed recently are also presented, as well as a number of computationally efficient (parallel) approaches that alleviate time complexity problems, along with a discussion regarding unsolved problems and future directions. Specific examples are provided throughout this book to help the reader study, assimilate and appreciate the important aspects of this challenging problem. Association Rule Hiding for Data Mining is designed for researchers, professors and advanced-level students in computer science studying privacy preserving data mining, association rule mining, and data mining. This book is also suitable for practitioners working in this industry. (A small sketch of support-based rule hiding appears after this list.)

  • by Antonio Badia
    878,95 kr.
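
To make a few of the more technical ideas in the listing above concrete, the short Python sketches below illustrate selected concepts. They are editorial illustrations under stated assumptions, not code from the books. The first relates to Quantifiers in Action (Badia): a generalized quantifier such as "most" or "at least k" is evaluated as a relation between two sets, something the plain exists/forall quantifiers of first-order logic cannot express directly. The employee data and the particular quantifier definitions are invented.

```python
# Illustrative sketch: generalized quantifiers (GQs) evaluated over in-memory sets.
# The data and the quantifier definitions below are hypothetical examples.

employees = {"ann", "bob", "eve", "joe"}
works_on = {                      # employee -> set of assigned projects
    "ann": {"p1", "p2", "p3"},
    "bob": {"p1"},
    "eve": {"p1", "p2"},
    "joe": set(),
}
critical = {"p1"}                 # the set of critical projects

# A generalized quantifier is a truth-valued relation between two sets:
# a restriction (the things quantified over) and a scope (those satisfying the predicate).
def q_all(restriction, scope):
    return restriction <= scope

def q_most(restriction, scope):
    # "most" = more than half -- a quantifier that FOL's exists/forall cannot define.
    return 2 * len(restriction & scope) > len(restriction)

def q_at_least(k):
    return lambda restriction, scope: len(restriction & scope) >= k

# "Do most employees work on every critical project?"
covers_critical = {e for e in employees if critical <= works_on[e]}
print(q_most(employees, covers_critical))     # True: ann, bob and eve (3 of 4)

# "Does every employee work on at least one project?"
print(q_all(employees, {e for e in employees if works_on[e]}))   # False: joe does not

# "Do at least two employees work on p2?"
print(q_at_least(2)(employees, {e for e in employees if "p2" in works_on[e]}))  # True: ann, eve
```

In a GQ-based query language such quantifiers appear as first-class operators instead of being simulated with counting subqueries.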
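
For Similarity Search (Zezula et al.): a minimal sketch of range search in a metric space, assuming only a distance function that satisfies the triangle inequality. The pivot-filtering rule shown is a standard technique in this area, not necessarily the book's specific index structures; the word data, pivot choice, and use of edit distance are illustrative.

```python
# Minimal sketch of pivot-based filtering for a range query in a metric space.
# The word list, pivot choice, and use of edit distance are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: a metric on strings (it satisfies the triangle inequality)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

class PivotIndex:
    """Stores d(pivot, o) for every object o and prunes with the triangle inequality."""

    def __init__(self, objects, pivot, dist):
        self.dist = dist
        self.pivot = pivot
        self.precomputed = {o: dist(pivot, o) for o in objects}

    def range_query(self, query, radius):
        d_qp = self.dist(query, self.pivot)
        hits = []
        for obj, d_po in self.precomputed.items():
            # Triangle inequality gives the lower bound d(q, o) >= |d(q, p) - d(p, o)|;
            # if that bound already exceeds the radius, the real distance is never computed.
            if abs(d_qp - d_po) > radius:
                continue
            if self.dist(query, obj) <= radius:
                hits.append(obj)
        return hits

words = ["table", "cable", "maple", "apple", "databases", "metric"]
index = PivotIndex(words, pivot="stable", dist=edit_distance)
print(index.range_query("fable", radius=2))   # ['table', 'cable', 'maple']
```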
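
For Mutation Testing for the New Century (Wong): a toy sketch of mutation analysis, assuming a made-up function under test and a deliberately weak test suite. Mutants are generated by swapping arithmetic operators; a mutant that passes all tests "survives" and signals an inadequate test suite.

```python
# Toy mutation analysis: operator-replacement mutants of a tiny function,
# scored against a (deliberately weak) test suite. Everything here is made up.
import ast

SOURCE = """
def price_with_discount(price, rate):
    return price - price * rate
"""

# Weak test suite: the discount rate is never non-zero.
TESTS = [((50.0, 0.0), 50.0)]

class SwapOperator(ast.NodeTransformer):
    """Replace the n-th binary operator in the tree with a different operator."""

    def __init__(self, target_index, new_op):
        self.target_index = target_index
        self.new_op = new_op
        self.seen = -1

    def visit_BinOp(self, node):
        self.generic_visit(node)
        self.seen += 1
        if self.seen == self.target_index:
            node.op = self.new_op
        return node

def passes_all_tests(tree):
    """Compile and run the (possibly mutated) function against the test suite."""
    namespace = {}
    exec(compile(tree, "<mutant>", "exec"), namespace)
    func = namespace["price_with_discount"]
    try:
        return all(abs(func(*args) - expected) < 1e-9 for args, expected in TESTS)
    except Exception:
        return False   # a crashing mutant counts as killed

killed, survived = 0, 0
for index in range(2):                    # the function body contains two binary operators
    for new_op in (ast.Add(), ast.Div()):
        tree = SwapOperator(index, new_op).visit(ast.parse(SOURCE))
        ast.fix_missing_locations(tree)
        if passes_all_tests(tree):
            survived += 1                 # undetected mutant: the tests are inadequate
        else:
            killed += 1
print(f"killed={killed} survived={survived}")   # killed=3 survived=1 with the weak suite
```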
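
For the sequential pattern mining entry (Wei Wang): a sketch of the basic building block behind the models the book surveys, namely checking whether a pattern occurs as a subsequence of a data sequence and counting its support. The web-session sequences are invented; real miners such as GSP or PrefixSpan add candidate generation or database projection on top of this test.

```python
# Minimal sketch: subsequence containment and support counting for sequential patterns.
# The web-session sequences below are made-up illustration data.

def contains(sequence, pattern):
    """True if `pattern` occurs in `sequence` as a (not necessarily contiguous) subsequence."""
    it = iter(sequence)
    return all(item in it for item in pattern)   # each `in` consumes the iterator forward

def support(database, pattern):
    """Fraction of sequences in the database containing the pattern."""
    return sum(contains(seq, pattern) for seq in database) / len(database)

sessions = [
    ["home", "search", "product", "cart", "checkout"],
    ["home", "product", "reviews", "cart"],
    ["search", "product", "home"],
    ["home", "search", "reviews", "product", "cart"],
]

print(support(sessions, ["home", "product", "cart"]))   # 0.75: contained in 3 of 4 sessions
print(support(sessions, ["product", "home"]))           # 0.25: order matters
```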
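
For Stream Data Management (Chaudhry): a sketch of a continuous query, assuming a simulated sensor stream. The aggregate is maintained incrementally over a count-based sliding window, since an unbounded stream can never be evaluated in one pass the way a stored table can; the window semantics are a common textbook choice rather than a construct prescribed by the book.

```python
# Minimal sketch of a continuous query: a sliding-window average over an unbounded stream.
# The sensor readings are simulated; in a real system the tuples would arrive over time.
from collections import deque
import itertools
import random

def sensor_stream():
    """An unbounded stream of (sequence number, temperature) tuples."""
    for seq in itertools.count():
        yield seq, 20.0 + random.uniform(-5.0, 5.0)

def sliding_average(stream, window_size):
    """Continuously emit the average of the last `window_size` readings, updated incrementally."""
    window = deque()
    running_sum = 0.0
    for seq, value in stream:
        window.append(value)
        running_sum += value
        if len(window) > window_size:
            running_sum -= window.popleft()    # expire the oldest tuple from the window
        yield seq, running_sum / len(window)

# The "query" runs for as long as the stream does; here we only look at the first few answers.
for seq, avg in itertools.islice(sliding_average(sensor_stream(), window_size=10), 5):
    print(f"after tuple {seq}: windowed average = {avg:.2f}")
```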
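
For Multilevel Secure Transaction Processing (Atluri): a sketch of the labeling idea mentioned in the description, using the classic Bell-LaPadula "no read up / no write down" checks as a simple stand-in for the architectures the book covers. Levels, objects, and users are invented, and real MLS DBMSs must additionally address covert channels and secure transaction scheduling.

```python
# Illustrative sketch of label-based access checks in a multilevel secure store.
# Levels, objects, and users are invented; real MLS DBMSs also deal with covert channels,
# polyinstantiation, and secure transaction scheduling, which this sketch ignores.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

class MLSStore:
    def __init__(self):
        self.objects = {}   # name -> (security level, value)

    def write(self, user_level, name, value, object_level):
        # "No write down": a subject may not write to a level below its own (prevents leaks).
        if LEVELS[object_level] < LEVELS[user_level]:
            raise PermissionError(f"write down to {object_level} denied")
        self.objects[name] = (object_level, value)

    def read(self, user_level, name):
        object_level, value = self.objects[name]
        # "No read up": a subject may only read objects at or below its clearance.
        if LEVELS[object_level] > LEVELS[user_level]:
            raise PermissionError(f"read up from {object_level} denied")
        return value

store = MLSStore()
store.write("secret", "troop_location", "grid 4-7", object_level="secret")
store.write("unclassified", "cafeteria_menu", "soup of the day", object_level="unclassified")

print(store.read("top_secret", "troop_location"))    # allowed: clearance dominates the label
try:
    store.read("confidential", "troop_location")     # denied: this would be a read up
except PermissionError as err:
    print("denied:", err)
```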
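
Finally, for Association Rule Hiding for Data Mining (Gkoulalas-Divanis): a naive heuristic sketch in which a sensitive rule is "hidden" by removing its consequent item from just enough supporting transactions to push the rule's support below the mining threshold. The market-basket data and threshold are invented; the exact and parallel approaches the book presents are considerably more sophisticated.

```python
# Naive sketch of support-based association rule hiding on made-up market-basket data.
# This is a toy heuristic for illustration, not one of the book's algorithms.

def support(transactions, itemset):
    """Fraction of transactions that contain every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def hide_rule(transactions, antecedent, consequent, min_support):
    """Remove `consequent` items from supporting transactions until the support of
    the rule's combined itemset falls below `min_support`."""
    rule_items = antecedent | consequent
    sanitized = [set(t) for t in transactions]
    for t in sanitized:
        if support(sanitized, rule_items) < min_support:
            break                      # the rule is no longer frequent: stop distorting data
        if rule_items <= t:
            t -= consequent            # sanitize this transaction
    return sanitized

transactions = [
    {"bread", "milk", "beer"},
    {"bread", "beer"},
    {"bread", "milk"},
    {"milk", "beer"},
    {"bread", "milk", "beer"},
]

antecedent, consequent = {"bread"}, {"beer"}     # the rule bread -> beer is deemed sensitive
print(support(transactions, antecedent | consequent))              # 0.6 (frequent at threshold 0.5)
cleaned = hide_rule(transactions, antecedent, consequent, min_support=0.5)
print(support(cleaned, antecedent | consequent))                   # 0.4: the rule is now hidden
```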
