Most of the papers in this volume were first presented at the Workshop on Cross-Linguistic Information Retrieval that was held August 22, 1996 during the SIGIR'96 Conference. Alan Smeaton of Dublin University and Paraic Sheridan of ETH Zurich were the two other members of the Scientific Committee for this workshop. SIGIR is the Association for Computing Machinery (ACM) Special Interest Group on Information Retrieval, which has held conferences yearly since 1977. Three additional papers have been added: Chapter 4, Distributed Cross-Lingual Information Retrieval, describes the EMIR retrieval system, one of the first general cross-language systems to be implemented and evaluated; Chapter 6, Mapping Vocabularies Using Latent Semantic Indexing, which originally appeared as a technical report in the Laboratory for Computational Linguistics at Carnegie Mellon University in 1991, is included here because it was one of the earliest, though hard-to-find, publications showing the application of Latent Semantic Indexing to the problem of cross-language retrieval; and Chapter 10, A Weighted Boolean Model for Cross-Language Text Retrieval, describes a recent approach to solving the translation term weighting problem, specific to Cross-Language Information Retrieval. Gregory Grefenstette

CONTRIBUTORS
Lisa Ballesteros and W. Bruce Croft, Center for Intelligent Information Retrieval, Computer Science Department, University of Massachusetts
David Hull and Gregory Grefenstette, Xerox Research Centre Europe, Grenoble Laboratory
Thomas K. Landauer, Department of Psychology and Institute of Cognitive Science, University of Colorado, Boulder
Mark W. Davis, Computing Research Lab, New Mexico State University
Michael L. Littman
Bonnie J.
Explosive growth in the size of spatial databases has highlighted the need for spatial data mining techniques to mine the interesting but implicit spatial patterns within these large databases. This book explores the computational structure of exact and approximate spatial autoregression (SAR) model solutions. Estimating the parameters of the SAR model using Maximum Likelihood (ML) theory is computationally very expensive because of the need to compute the logarithm of the determinant (log-det) of a large matrix in the log-likelihood function. The second part of the book introduces the theory of SAR model solutions. The third part applies parallel processing techniques to the exact SAR model solution: parallel formulations of the ML-based parameter estimation procedure are probed using data parallelism with load-balancing techniques. Although this parallel implementation showed scalability up to eight processors, the exact SAR model solution still suffers from high computational complexity and memory requirements. These limitations motivate the book's investigation of serial and parallel approximate solutions for SAR model parameter estimation. In the fourth and fifth parts, two candidate approximate semi-sparse solutions of the SAR model, based on Taylor series expansion and Chebyshev polynomials, are presented. Experiments show that the differences between exact and approximate SAR parameter estimates have no significant effect on prediction accuracy. The last part of the book develops a new ML-based approximate SAR model solution, called the Gauss-Lanczos approximation, together with its variants. The errors of the Chebyshev polynomial, Taylor series, and Gauss-Lanczos approximations are ranked algebraically; in other words, a novel relationship is established between the error in the log-det term, the term approximated in the concentrated log-likelihood function, and the error in estimating the SAR parameter, for all of the approximate SAR model solutions.
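The computational bottleneck named in this abstract is concrete enough to sketch. As a hedged illustration (the weight matrix, truncation order, and probe count below are arbitrary assumptions, not the book's experimental setup), the following Python snippet contrasts an exact eigenvalue-based evaluation of the log-det term log det(I - rho*W) with a truncated Taylor-series approximation using Hutchinson's stochastic trace estimator, in the spirit of the approximate solutions the book studies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical sparse neighbour structure, row-standardized into W
A = (rng.random((n, n)) < 0.01).astype(float)
np.fill_diagonal(A, 0.0)
A = np.maximum(A, A.T)                              # symmetric neighbours
W = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)

rho = 0.4                                           # candidate SAR parameter

# Exact log-det via the eigenvalues of W:
# log det(I - rho W) = sum_i log(1 - rho * lambda_i)   (O(n^3), done once)
lam = np.linalg.eigvals(W)
exact = np.sum(np.log(1.0 - rho * lam)).real

# Truncated Taylor series: log det(I - rho W) = -sum_{k>=1} rho^k tr(W^k) / k,
# with tr(W^k) estimated by Hutchinson's stochastic trace estimator,
# so only matrix-vector products with W are needed.
K, probes = 30, 50
Z = rng.choice([-1.0, 1.0], size=(n, probes))       # Rademacher probe vectors
V, approx = Z.copy(), 0.0
for k in range(1, K + 1):
    V = W @ V                                       # V now holds W^k Z
    tr_k = np.mean(np.sum(Z * V, axis=0))           # ~= tr(W^k)
    approx -= rho**k * tr_k / k

print(f"exact={exact:.4f}  taylor~={approx:.4f}")   # should agree closely
```

The appeal of such approximations, as the abstract notes, is that only matrix-vector products with the (sparse) weight matrix are required, avoiding the cubic cost and dense memory footprint of the exact factorization.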
Trusting a computer for a security-sensitive task (such as checking email or banking online) requires the user to know something about the computer's state. We examine research on securely capturing a computer's state, and consider the utility of this information both for improving security on the local computer (e.g., to convince the user that her computer is not infected with malware) and for communicating a remote computer's state (e.g., to enable the user to check that a web server will adequately protect her data). Although the recent "Trusted Computing" initiative has drawn both positive and negative attention to this area, we consider the older and broader topic of bootstrapping trust in a computer. We cover issues ranging from the wide collection of secure hardware that can serve as a foundation for trust, to the usability issues that arise when trying to convey computer state information to humans. This approach unifies disparate research efforts and highlights opportunities for additional work that can guide real-world improvements in computer security.
This book constitutes the proceedings of the 16th International Conference on Research Challenges in Information Sciences, RCIS 2022, which took place in Barcelona, Spain, during May 17-20, 2022. It focused on the special theme "Ethics and Trustworthiness in Information Science". The scope of RCIS is summarized by the thematic areas of information systems and their engineering; user-oriented approaches; data and information management; business process management; domain-specific information systems engineering; data science; information infrastructures; and reflective research and practice. The 35 full papers presented in this volume were carefully reviewed and selected from a total of 100 submissions. The 18 Forum papers are based on 11 Forum submissions, from which 5 were selected, and the remaining 13 were transferred from the regular submissions. The 6 Doctoral Consortium papers were selected from 10 submissions to the consortium. The contributions are organized in topical sections named: Data Science and Data Management; Information Search and Analysis; Business Process Management; Business Process Mining; Digital Transformation and Smart Life; Conceptual Modelling and Ontologies; Requirements Engineering; Model-Driven Engineering; and Machine Learning Applications. In addition, two-page summaries of the tutorials can be found in the back matter.
This SpringerBrief provides the first systematic review of the existing work on cohesive subgraph search (CSS) over large heterogeneous information networks (HINs). It also covers the research breakthroughs of this area, including models, algorithms, and comparison studies in recent years, and offers a list of promising future research directions for performing CSS over large HINs. The authors first classify the existing works of CSS over HINs according to classic cohesiveness metrics such as core, truss, clique, connectivity, and density, and then extensively review the specific models and their corresponding search solutions in each group. Note that since the bipartite network is a special case of HINs, all the models developed for general HINs can be directly applied to bipartite networks, but the models customized for bipartite networks may not be easily extended to other general HINs due to their restricted settings. The authors also analyze and compare these cohesive subgraph models (CSMs) and solutions systematically. Specifically, they compare different groups of CSMs and analyze both their similarities and differences, from multiple perspectives such as cohesiveness constraints, shared properties, and computational efficiency. Then, for the CSMs in each group, they further analyze and compare the model properties and high-level algorithmic ideas. This SpringerBrief targets researchers, professors, engineers, and graduate students working in the areas of graph data management and graph mining. Undergraduate students majoring in computer science, databases, data and knowledge engineering, and data science will also find it useful.
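As a minimal, generic illustration of the simplest cohesiveness metric mentioned above (a plain k-core on a homogeneous graph, not one of the HIN-specific models the book surveys), the following sketch peels low-degree vertices until only the k-core remains:

```python
from collections import defaultdict

def k_core(edges, k):
    """Return the vertices of the k-core: the maximal subgraph in which
    every remaining vertex has at least k neighbours."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Repeatedly remove vertices of degree < k until none remain.
    changed = True
    while changed:
        changed = False
        for v in [v for v in adj if len(adj[v]) < k]:
            for u in adj.pop(v):
                adj[u].discard(v)   # drop v from its neighbours' lists
            changed = True
    return set(adj)

# Toy example: a triangle with a two-edge tail hanging off it
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
print(k_core(edges, 2))   # {1, 2, 3}: the triangle survives, the tail is peeled
```

The HIN models surveyed in the book replace the plain degree constraint with metrics that respect vertex and edge types, but the peeling pattern above is the common algorithmic skeleton for core-style models.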
It is our great pleasure to welcome you to the proceedings of the 10th annual event of the International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP). ICA3PP is recognized as the main regular event covering the many dimensions of parallel algorithms and architectures, encompassing fundamental theoretical approaches, practical experimental projects, and commercial components and systems. As applications of computing systems have permeated every aspect of daily life, the power of computing systems has become increasingly critical. Therefore, ICA3PP 2010 aimed to permit researchers and practitioners from industry to exchange information regarding advancements in the state of the art and practice of IT-driven services and applications, as well as to identify emerging research topics and define the future directions of parallel processing. We received a total of 157 submissions this year, showing by both quantity and quality that ICA3PP is a premier conference on parallel processing. In the first stage, all papers submitted were screened for their relevance and general submission requirements. These manuscripts then underwent a rigorous peer-review process with at least three reviewers per paper. In the end, 47 papers were accepted for presentation and included in the main proceedings, comprising a 30% acceptance rate.
These proceedings contain the refereed papers and posters presented at the first Information Retrieval Facility Conference (IRFC), which was held in Vienna on 31 May 2010. The conference provides a multi-disciplinary, scientific forum that aims to bring young researchers into contact with industry at an early stage. IRFC 2010 received 20 high-quality submissions, of which 11 were accepted and appear here. The decision whether a paper was presented orally or as a poster was based solely on what we thought was the most suitable form of communication, considering we had only a single day for the event. In particular, the form of presentation bears no relation to the quality of the accepted papers, all of which were thoroughly peer reviewed and had to be endorsed by at least three independent reviewers. The Information Retrieval Facility (IRF) is an open IR research institution, managed by a scientific board drawn from a panel of international experts in the field whose role is to promote the highest quality in the research supported by the facility. As a non-profit research institution, the IRF provides services to IR science in the form of a reference laboratory, hardware, and software infrastructure. Committed to Open Science concepts, the IRF promotes publication of recent scientific results and newly developed methods, both in traditional paper form and as data sets freely available to IRF members. Such transparency ensures objective evaluation and comparability of results and consequently diversity and sustainability of their further development.
The present volume contains the proceedings of the 5th International Workshop on Formal Aspects in Security and Trust (FAST 2008), held in Malaga, Spain, October 9-10, 2008. FAST is an event affiliated with the 13th European Symposium on Research in Computer Security (ESORICS 2008). FAST 2008 was held under the auspices of the IFIP WG 1.7 on Foundations of Security Analysis and Design. The workshop aimed at continuing the successful effort of the previous three FAST editions in fostering cooperation among researchers in the areas of security and trust. As computing and network infrastructures become increasingly pervasive, and as they carry increasing economic activity, society needs well-matched security and trust mechanisms. These interactions increasingly span several enterprises and involve loosely structured communities of individuals. Participants in these activities must control interactions with their partners based on trust policies and business logic. Trust-based decisions effectively determine the security goals for shared information and for access to sensitive or valuable resources. FAST sought original papers focusing on formal aspects of: security and trust policy models; security protocol design and analysis; formal models of trust and reputation; logics for security and trust; distributed trust management systems; trust-based reasoning; digital assets protection; data protection; privacy and ID issues; information flow analysis; language-based security; security and trust aspects in ubiquitous computing; validation/analysis tools; Web service security/trust/privacy; GRID security; security risk assessment; and case studies.
Similarity-based learning methods have great potential as an intuitive and flexible toolbox for mining, visualization, and inspection of large data sets. They combine simple and human-understandable principles, such as distance-based classification, prototypes, or Hebbian learning, with a large variety of different, problem-adapted design choices, such as a data-optimum topology, similarity measure, or learning mode. In medicine, biology, and medical bioinformatics, more and more data arise from clinical measurements such as EEG or fMRI studies for monitoring brain activity, mass spectrometry data for the detection of proteins, peptides, and composites, or microarray profiles for the analysis of gene expressions. Typically, data are high-dimensional, noisy, and very hard to inspect using classic (e.g., symbolic or linear) methods. At the same time, new technologies ranging from very high-resolution spectra to high-throughput screening for microarray data are rapidly developing and carry the promise of an efficient, cheap, and automatic gathering of tons of high-quality data with large information potential. Thus, there is a need for appropriate machine learning methods which help to automatically extract and interpret the relevant parts of this information and which, eventually, help to enable understanding of biological systems, reliable diagnosis of faults, and therapy of diseases such as cancer based on this information. Moreover, these application scenarios pose fundamental and qualitatively new challenges to the learning systems because of the specifics of the data and learning tasks. Since these characteristics are particularly pronounced within the medical domain, but not limited to it and of principled interest, this research topic opens the way toward important new directions of algorithmic design and accompanying theory.
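As a hedged illustration of the distance-based, prototype-driven principle described above, the following sketch trains a minimal LVQ1-style nearest-prototype classifier on synthetic two-class data (the data, initialization, and learning rate are assumptions for illustration only, not taken from the volume):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two noisy 2-D classes centred at (0, 0) and (2, 2)
X = np.vstack([rng.normal([0, 0], 0.5, (100, 2)),
               rng.normal([2, 2], 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# One prototype per class, initialized near the data mean
protos = X.mean(axis=0) + rng.normal(0, 0.1, (2, 2))
labels = np.array([0, 1])
lr = 0.05

for epoch in range(20):
    for i in rng.permutation(len(X)):
        j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # nearest prototype
        sign = 1.0 if labels[j] == y[i] else -1.0             # attract or repel
        protos[j] += sign * lr * (X[i] - protos[j])

# Classify each point by its nearest prototype
pred = labels[np.argmin(np.linalg.norm(X[:, None] - protos, axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```

The appeal for the domains mentioned above is exactly what the code makes visible: the learned prototypes are points in data space, so they can be inspected and interpreted directly, unlike the weights of a black-box model.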
Packed with examples, Simply SQL is a step-by-step introduction to learning SQL. You'll discover how easy it is to use SQL to interact with best-practice, robust databases. Rather than bore you with theory, it focuses on the practical use of SQL with common databases and uses plenty of diagrams, easy-to-read text, and examples to help make learning SQL easy and fun.
* Step through the basic SQL syntax
* Learn how to use best practices in database design
* Master advanced syntax like inner joins, groups, and subqueries
* Understand the SQL datatypes
* And much more...
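For a taste of the syntax the book covers, here is a small self-contained sketch (hypothetical tables and data, run through Python's built-in sqlite3 module, not an example from the book) showing an inner join, a GROUP BY, and a subquery:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 19.50), (2, 1, 5.00), (3, 2, 42.00);
""")

-- = None  # (placeholder removed)
# Inner join + GROUP BY: total spent per customer
for row in db.execute("""
    SELECT c.name, SUM(o.total) AS spent
      FROM customers AS c
     INNER JOIN orders AS o ON o.customer_id = c.id
     GROUP BY c.name
"""):
    print(row)          # ('Ada', 24.5), ('Grace', 42.0)

# Subquery: customers whose total spending exceeds the average order value
print(db.execute("""
    SELECT name FROM customers
     WHERE id IN (SELECT customer_id FROM orders
                   GROUP BY customer_id
                  HAVING SUM(total) > (SELECT AVG(total) FROM orders))
""").fetchall())        # [('Ada',), ('Grace',)]
```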
Database research and development has been remarkably successful over the past three decades. Now the field is facing new challenges posed by the rapid advances of technology, especially the penetration of the Web and Internet into everyone's daily life. The economic and financial environment in which database systems are used has been changing dramatically. In addition to being able to efficiently manage a large volume of operational data generated internally, the ability to manage data in cyberspace, extract relevant information, and discover knowledge to support decision making is critical to the success of any organization. In order to provide researchers and practitioners with a forum to share their experiences in tackling problems in managing and using data, information, and knowledge in the age of the Internet and Web, the First International Conference on Web-Age Information Management (WAIM 2000) was held in Shanghai, China, June 21-23. The inaugural conference in its series was well received. Researchers from 17 countries and regions, including Austria, Australia, Bahrain, Canada, China, France, Germany, Japan, Korea, Malaysia, The Netherlands, Poland, Singapore, Spain, Taiwan, UK, and USA, submitted their recent work. Twenty-seven regular and 14 short papers contained in these proceedings were presented during the two-day conference. These papers cover a large spectrum of issues, from classical data management topics such as object-oriented modeling and spatial and temporal databases to recent hits like data mining, data warehousing, semi-structured data, and XML.
This book constitutes the refereed proceedings of the 22nd Annual Symposium on Combinatorial Pattern Matching, CPM 2011, held in Palermo, Italy, in June 2011. The 36 revised full papers presented together with 3 invited talks were carefully reviewed and selected from 70 submissions. The papers address issues of searching and matching strings and more complicated patterns such as trees, regular expressions, graphs, point sets, and arrays. The goal is to derive non-trivial combinatorial properties of such structures and to exploit these properties in order to either achieve superior performance for the corresponding computational problems or pinpoint conditions under which searches cannot be performed efficiently. The meeting also deals with problems in computational biology, data compression and data mining, coding, information retrieval, natural language processing, and pattern recognition.
This book constitutes the refereed proceedings of the Second International Symposium on Computational Life Sciences, CompLife 2006. The 25 revised full papers presented were carefully reviewed and selected from 56 initial submissions. The papers are organized in topical sections on genomics, data mining, molecular simulation, molecular informatics, systems biology, biological networks/metabolism, and computational neuroscience.
Here are the proceedings of the 4th International Workshop on Principles and Practice of Semantic Web Reasoning, PPSWR 2006. The book presents 14 revised full papers together with 1 invited talk and 6 system demonstrations, addressing major aspects of semantic Web research, namely forms of reasoning with a strong interest in rule-based languages and methods. Coverage includes theoretical work on reasoning methods, concrete reasoning methods and query languages, and practical applications.
This volume contains the papers selected for presentation at the First International Conference on Rough Sets and Knowledge Technology (RSKT 2006), organized in Chongqing, P. R. China, July 24-26, 2006. There were 503 submissions to RSKT 2006, not counting 1 commemorative paper, 4 keynote papers, and 10 plenary papers. Apart from the 15 commemorative and invited papers, 101 papers were accepted by RSKT 2006 and are included in this volume; the acceptance rate was only 20%. These papers were divided into 43 regular oral presentation papers (each allotted 8 pages) and 58 short oral presentation papers (each allotted 6 pages) on the basis of reviewer evaluation. Each paper was reviewed by two to four referees. Since the introduction of rough sets in 1981 by Zdzisław Pawlak, many great advances in both the theory and applications have been introduced. Rough set theory is closely related to knowledge technology in a variety of forms such as knowledge discovery, approximate reasoning, intelligent and multiagent systems design, and knowledge-intensive computations that signal the emergence of a knowledge technology age. The essence of growth in cutting-edge, state-of-the-art, and promising knowledge technologies is closely related to learning, pattern recognition, machine intelligence, and automation of acquisition, transformation, communication, exploration, and exploitation of knowledge. A principal thrust of such technologies is the utilization of methodologies that facilitate knowledge processing.