Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on detecting such shot boundaries automatically (Arman et al., 1993) (Zhang et al., 1993) (Zhang et al., 1995) (Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving, and such meaningful intervals generally span several consecutive shots. There is hardly any efficient and reliable technique, automatic or manual, for identifying all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992) (Oomoto and Tanaka, 1993) (Weiss et al., 1995) (Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is open to multiple interpretations: given a query, an interval that is meaningful to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour intensive and inadequate.
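The automatic shot-boundary detection mentioned above can be illustrated with a minimal sketch (not any of the specific cited methods): a hard cut between shots typically produces a large jump in the frame-to-frame histogram difference, which a simple threshold can flag. The function names, threshold value, and synthetic frames below are illustrative assumptions.

```python
import numpy as np

def histogram_diff(frame_a, frame_b, bins=16):
    """L1 distance between normalized gray-level histograms of two frames."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    return np.abs(ha / ha.sum() - hb / hb.sum()).sum()

def detect_shot_boundaries(frames, threshold=0.5):
    """Return indices i where a cut is declared between frames[i-1] and frames[i]."""
    return [i for i in range(1, len(frames))
            if histogram_diff(frames[i - 1], frames[i]) > threshold]

# Synthetic stream: 5 dark frames, then 5 bright frames (a hard cut at index 5).
rng = np.random.default_rng(0)
dark = [rng.integers(0, 60, (32, 32)) for _ in range(5)]
bright = [rng.integers(180, 256, (32, 32)) for _ in range(5)]
frames = dark + bright
print(detect_shot_boundaries(frames))  # a single boundary, at frame 5
```

Real detectors are more elaborate (gradual transitions, compressed-domain features), but this threshold-on-discontinuity pattern is the common core.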
This book constitutes the Proceedings of the IFIP Working Conference PROCOMET'98, held 8-12 June 1998 at Shelter Island, N.Y. The conference was organized by the two IFIP TC 2 Working Groups, 2.2 Formal Description of Programming Concepts and 2.3 Programming Methodology. WG2.2 and WG2.3 have been organizing these conferences every four years for over twenty years. The aim of such Working Conferences organized by IFIP Working Groups is to bring together leading scientists in a given area of computer science. Participation is by invitation only. As a result, these conferences distinguish themselves from other meetings by extensive and competent technical discussions. PROCOMET stands for Programming Concepts and Methods, indicating that the area of discussion for the conference is the formal description of programming concepts and methods, their tool support, and their applications. At PROCOMET working conferences, papers are presented from this whole area, reflecting the interests of the individuals in WG2.2 and WG2.3.
Automatic Indexing and Abstracting of Document Texts summarizes the latest techniques of automatic indexing and abstracting, and the results of their application. It also places the techniques in the context of the study of text, manual indexing and abstracting, and the use of the indexing descriptions and abstracts in systems that select documents or information from large collections. Important sections of the book consider the development of new techniques for indexing and abstracting. The techniques involve the following: using text grammars, learning the themes of texts (including identifying representative sentences or paragraphs by means of suitable clustering algorithms), and learning classification patterns of texts. In addition, the book is an attempt to illuminate new avenues for future research. Automatic Indexing and Abstracting of Document Texts is an excellent reference for researchers and professionals working in the field of content management and information retrieval.
The Center for Intelligent Information Retrieval (CIIR) was formed in the Computer Science Department of the University of Massachusetts, Amherst in 1992. The core support for the Center came from a National Science Foundation State/Industry/University Cooperative Research Center (S/IUCRC) grant, although there had been a sizeable information retrieval (IR) research group for over 10 years prior to that grant. The basic goal of these Centers is to combine basic research, applied research, and technology transfer. The CIIR has been successful in each of these areas, in that it has produced over 270 research papers, has been involved in many successful government and industry collaborations, and has had a significant role in high-visibility Internet sites and start-ups. As a result of these efforts, the CIIR has become known internationally as one of the leading research groups in the area of information retrieval. The CIIR focuses on research that results in more effective and efficient access and discovery in large, heterogeneous, distributed, text and multimedia databases. The scope of the work that is done in the CIIR is broad and goes significantly beyond "traditional" areas of information retrieval such as retrieval models, cross-lingual search, and automatic query expansion. The research includes both low-level systems issues, such as the design of protocols and architectures for distributed search, as well as more human-centered topics such as user interface design, visualization and data mining with text, and multimedia retrieval.
Security and privacy are paramount concerns in information processing systems, which are vital to business, government and military operations and, indeed, society itself. Meanwhile, the expansion of the Internet and its convergence with telecommunication networks are providing incredible connectivity, myriad applications and, of course, new threats. Data and Applications Security XVII: Status and Prospects describes original research results, practical experiences and innovative ideas, all focused on maintaining security and privacy in information processing systems and applications that pervade cyberspace. The areas of coverage include: Information Warfare; Information Assurance; Security and Privacy; Authorization and Access Control in Distributed Systems; Security Technologies for the Internet; Access Control Models and Technologies; and Digital Forensics. This book is the seventeenth volume in the series produced by the International Federation for Information Processing (IFIP) Working Group 11.3 on Data and Applications Security. It presents a selection of twenty-six updated and edited papers from the Seventeenth Annual IFIP TC11 / WG11.3 Working Conference on Data and Applications Security held at Estes Park, Colorado, USA in August 2003, together with a report on the conference keynote speech and a summary of the conference panel. The contents demonstrate the richness and vitality of the discipline, and point to directions for future research in data and applications security. Data and Applications Security XVII: Status and Prospects is an invaluable resource for information assurance researchers, faculty members and graduate students, as well as for individuals engaged in research and development in the information technology sector.
Smart cards or IC cards offer a huge potential for information processing purposes. The portability and processing power of IC cards allow for highly secure conditional access and reliable distributed information processing. IC cards that can perform highly sophisticated cryptographic computations are already available. Their application in the financial services and telecom industries is well known. But the potential of IC cards goes well beyond that. Their applicability in mainstream Information Technology and the Networked Economy is limited mainly by our imagination; the information processing power that can be gained by using IC cards remains as yet mostly untapped and is not well understood. Here lies a vast uncovered research area which we are only beginning to assess, and which will have a great impact on the eventual success of the technology. The research challenges range from electrical engineering on the hardware side to tailor-made cryptographic applications on the software side, and their synergies. This volume comprises the proceedings of the Fourth Working Conference on Smart Card Research and Advanced Applications (CARDIS 2000), which was sponsored by the International Federation for Information Processing (IFIP) and held at the Hewlett-Packard Labs in the United Kingdom in September 2000. CARDIS conferences are unique in that they bring together researchers who are active in all aspects of design of IC cards and related devices and environments, thus stimulating synergy between different research communities from both academia and industry. This volume presents the latest advances in smart card research and applications, and will be essential reading for smart card developers, smart card application developers, and computer science researchers involved in computer architecture, computer security, and cryptography.
Most of the papers in this volume were first presented at the Workshop on Cross-Linguistic Information Retrieval that was held August 22, 1996 during the SIGIR'96 Conference. Alan Smeaton of Dublin University and Paraic Sheridan of the ETH, Zurich, were the two other members of the Scientific Committee for this workshop. SIGIR is the Association for Computing Machinery (ACM) Special Interest Group on Information Retrieval, and they have held conferences yearly since 1977. Three additional papers have been added: Chapter 4, Distributed Cross-Lingual Information Retrieval, describes the EMIR retrieval system, one of the first general cross-language systems to be implemented and evaluated; Chapter 6, Mapping Vocabularies Using Latent Semantic Indexing, which originally appeared as a technical report in the Laboratory for Computational Linguistics at Carnegie Mellon University in 1991, is included here because it was one of the earliest, though hard-to-find, publications showing the application of Latent Semantic Indexing to the problem of cross-language retrieval; and Chapter 10, A Weighted Boolean Model for Cross-Language Text Retrieval, describes a recent approach to solving the translation term weighting problem, specific to Cross-Language Information Retrieval. Gregory Grefenstette

CONTRIBUTORS
Lisa Ballesteros and W. Bruce Croft, Center for Intelligent Information Retrieval, Computer Science Department, University of Massachusetts
David Hull and Gregory Grefenstette, Xerox Research Centre Europe, Grenoble Laboratory
Thomas K. Landauer, Department of Psychology and Institute of Cognitive Science, University of Colorado, Boulder
Mark W. Davis, Computing Research Lab, New Mexico State University
Michael L. Littman
Bonnie J.
Explosive growth in the size of spatial databases has highlighted the need for spatial data mining techniques to mine the interesting but implicit spatial patterns within these large databases. This book explores the computational structure of exact and approximate spatial autoregression (SAR) model solutions. Estimating the parameters of the SAR model using Maximum Likelihood (ML) theory is computationally very expensive because of the need to compute the logarithm of the determinant (log-det) of a large matrix in the log-likelihood function. The second part of the book introduces theory on SAR model solutions. The third part applies parallel processing techniques to the exact SAR model solutions. Parallel formulations of the SAR model parameter estimation procedure based on ML theory are probed using data parallelism with load-balancing techniques. Although this parallel implementation showed scalability up to eight processors, the exact SAR model solution still suffers from high computational complexity and memory requirements. These limitations led the book to investigate serial and parallel approximate solutions for SAR model parameter estimation. In the fourth and fifth parts, two candidate approximate semi-sparse solutions of the SAR model, based on Taylor's series expansion and Chebyshev polynomials, are presented. Experiments show that the differences between exact and approximate SAR parameter estimates have no significant effect on prediction accuracy. In the last part of the book, a new ML-based approximate SAR model solution, called the Gauss-Lanczos approximated SAR model solution, and its variants are developed. We algebraically rank the error of the Chebyshev polynomial approximation, the Taylor's series approximation, and the Gauss-Lanczos approximation to the solution of the SAR model and its variants.
In other words, we establish a novel relationship between the error in the log-det term, which is the approximated term in the concentrated log-likelihood function, and the error in the estimated SAR parameter, for all of the approximate SAR model solutions.
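The log-det bottleneck and the Taylor's series approximation discussed above can be sketched in a few lines. For the SAR model one needs log|I - rho*W| inside the concentrated log-likelihood; when the spectral radius of rho*W is below 1, log|I - rho*W| = -sum over k of rho^k * tr(W^k) / k, so truncating the series gives a cheap approximation. This is a toy illustration under those assumptions, not the book's implementation; the ring-shaped weight matrix and parameter values are made up for the example.

```python
import numpy as np

def logdet_exact(rho, W):
    """Exact log-determinant of (I - rho*W) via sign-aware slogdet."""
    n = W.shape[0]
    sign, logdet = np.linalg.slogdet(np.eye(n) - rho * W)
    assert sign > 0, "matrix must have positive determinant"
    return logdet

def logdet_taylor(rho, W, terms=30):
    """Truncated Taylor series: log|I - rho*W| ~ -sum_k rho^k tr(W^k)/k.
    Valid when the spectral radius of rho*W is below 1."""
    n = W.shape[0]
    Wk = np.eye(n)
    total = 0.0
    for k in range(1, terms + 1):
        Wk = Wk @ W                       # running power W^k
        total -= (rho ** k) * np.trace(Wk) / k
    return total

# Toy row-normalized neighbor matrix: 100 sites on a ring, two neighbors each.
n = 100
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
rho = 0.4
print(abs(logdet_exact(rho, W) - logdet_taylor(rho, W)))  # tiny truncation error
```

The exact route costs O(n^3) per evaluation of the likelihood, while the truncated series needs only traces of matrix powers, which is what makes such approximations attractive for large sparse spatial weight matrices.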
Trusting a computer for a security-sensitive task (such as checking email or banking online) requires the user to know something about the computer's state. We examine research on securely capturing a computer's state, and consider the utility of this information both for improving security on the local computer (e.g., to convince the user that her computer is not infected with malware) and for communicating a remote computer's state (e.g., to enable the user to check that a web server will adequately protect her data). Although the recent "Trusted Computing" initiative has drawn both positive and negative attention to this area, we consider the older and broader topic of bootstrapping trust in a computer. We cover issues ranging from the wide collection of secure hardware that can serve as a foundation for trust, to the usability issues that arise when trying to convey computer state information to humans. This approach unifies disparate research efforts and highlights opportunities for additional work that can guide real-world improvements in computer security.
This book constitutes the proceedings of the 16th International Conference on Research Challenges in Information Sciences, RCIS 2022, which took place in Barcelona, Spain, during May 17-20, 2022. It focused on the special theme "Ethics and Trustworthiness in Information Science". The scope of RCIS is summarized by the thematic areas of information systems and their engineering; user-oriented approaches; data and information management; business process management; domain-specific information systems engineering; data science; information infrastructures; and reflective research and practice. The 35 full papers presented in this volume were carefully reviewed and selected from a total of 100 submissions. The 18 Forum papers are based on 11 Forum submissions, from which 5 were selected, and the remaining 13 were transferred from the regular submissions. The 6 Doctoral Consortium papers were selected from 10 submissions to the consortium. The contributions are organized in topical sections named: Data Science and Data Management; Information Search and Analysis; Business Process Management; Business Process Mining; Digital Transformation and Smart Life; Conceptual Modelling and Ontologies; Requirements Engineering; Model-Driven Engineering; and Machine Learning Applications. In addition, two-page summaries of the tutorials can be found in the back matter.
This SpringerBrief provides the first systematic review of the existing work on cohesive subgraph search (CSS) over large heterogeneous information networks (HINs). It also covers the research breakthroughs of this area, including models, algorithms and comparison studies in recent years, and offers a list of promising future research directions for performing CSS over large HINs. The authors first classify the existing works of CSS over HINs according to the classic cohesiveness metrics, such as core, truss, clique, connectivity and density, and then extensively review the specific models and their corresponding search solutions in each group. Note that since the bipartite network is a special case of HINs, all the models developed for general HINs can be directly applied to bipartite networks, but the models customized for bipartite networks may not be easily extended to other general HINs due to their restricted settings. The authors also analyze and compare these cohesive subgraph models (CSMs) and solutions systematically. Specifically, the authors compare different groups of CSMs and analyze both their similarities and differences from multiple perspectives, such as cohesiveness constraints, shared properties, and computational efficiency. Then, for the CSMs in each group, the authors further analyze and compare their model properties and high-level algorithm ideas. This SpringerBrief targets researchers, professors, engineers and graduate students who are working in the areas of graph data management and graph mining. Undergraduate students majoring in computer science, databases, data and knowledge engineering, and data science will also want to read this SpringerBrief.
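As a concrete illustration of the "core" cohesiveness metric mentioned above, here is a minimal k-core sketch on a plain homogeneous graph; HIN core models layer meta-path or label constraints on top of this peeling idea. The graph and function below are illustrative assumptions, not taken from the book.

```python
from collections import defaultdict

def k_core(edges, k):
    """Iteratively peel vertices of degree < k; return the surviving vertex set.
    This peeling loop is the textbook k-core computation on an undirected graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and len(adj[v]) < k:
                for u in adj.pop(v):     # remove v and detach it from neighbors
                    adj[u].discard(v)
                changed = True
    return set(adj)

# A triangle {a, b, c} plus a pendant vertex d attached to a.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("a", "d")]
print(sorted(k_core(edges, 2)))  # the triangle survives; the pendant is peeled
```

Truss and clique models tighten the constraint from vertex degrees to edge-level triangle counts and full mutual adjacency, respectively, which is why the groups of models the authors compare trade cohesiveness strength against computational cost.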
It is our great pleasure to welcome you to the proceedings of the 10th annual event of the International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP). ICA3PP is recognized as the main regular event covering the many dimensions of parallel algorithms and architectures, encompassing fundamental theoretical approaches, practical experimental projects, and commercial components and systems. As applications of computing systems have permeated every aspect of daily life, the power of computing systems has become increasingly critical. Therefore, ICA3PP 2010 aimed to permit researchers and practitioners from industry to exchange information regarding advancements in the state of the art and practice of IT-driven services and applications, as well as to identify emerging research topics and define the future directions of parallel processing. We received a total of 157 submissions this year, showing by both quantity and quality that ICA3PP is a premier conference on parallel processing. In the first stage, all papers submitted were screened for their relevance and general submission requirements. These manuscripts then underwent a rigorous peer-review process with at least three reviewers per paper. In the end, 47 papers were accepted for presentation and included in the main proceedings, comprising a 30% acceptance rate.
These proceedings contain the refereed papers and posters presented at the first Information Retrieval Facility Conference (IRFC), which was held in Vienna on 31 May 2010. The conference provides a multi-disciplinary, scientific forum that aims to bring young researchers into contact with industry at an early stage. IRFC 2010 received 20 high-quality submissions, of which 11 were accepted and appear here. The decision whether a paper was presented orally or as a poster was based solely on what we thought was the most suitable form of communication, considering we had only a single day for the event. In particular, the form of presentation bears no relation to the quality of the accepted papers, all of which were thoroughly peer reviewed and had to be endorsed by at least three independent reviewers. The Information Retrieval Facility (IRF) is an open IR research institution, managed by a scientific board drawn from a panel of international experts in the field, whose role is to promote the highest quality in the research supported by the facility. As a non-profit research institution, the IRF provides services to IR science in the form of a reference laboratory, hardware and software infrastructure. Committed to Open Science concepts, the IRF promotes publication of recent scientific results and newly developed methods, both in traditional paper form and as data sets freely available to IRF members. Such transparency ensures objective evaluation and comparability of results, and consequently diversity and sustainability of their further development.