This volume contains the proceedings of two recent conferences in the field of electronic publishing and digital documents: DDEP 2000, the 8th International Conference on Digital Documents and Electronic Publishing, the successor conference to the EP conference series; and PODDP 2000, the 5th International Workshop on the Principles of Digital Document Processing. Both conferences were held at the Technische Universität München, Munich, Germany, in September 2000. DDEP 2000 was the eighth in a biennial series of international conferences organized to promote the exchange of novel ideas concerning the computer production, manipulation and dissemination of documents. This conference series has attempted to reflect the evolving nature and usage of documents by treating digital documents and electronic publishing as a broad topic covering many aspects. These aspects have included document models, document representation and document dissemination, dynamic and hyper-documents, document analysis and management, and wide-ranging applications. The papers presented at DDEP 2000 and in this volume reflect this broad view, and cover such diverse topics as hypermedia structure and design, multimedia authoring techniques and systems, document structure inference, typography, document management and adaptation, document collections and Petri nets. All papers were refereed by an international program committee.
The digital marketplace of the Internet offers new possibilities for customer retention and brand management. This book presents the concept, aesthetics, and technology of innovative campaigns in numerous case studies. Practitioners and campaign designers will find here, for the first time, extensive illustrative material, "best practice" examples, and exciting outlooks on the trends in the growth market of corporate communication.
The computational approach of this book is aimed at simulating the human ability to understand various kinds of phrases with a novel metaphoric component. That is, interpretations of metaphor as literal paraphrases are based on literal meanings of the metaphorically used words. This method distinguishes itself from statistical approaches, which in general do not account for novel usages, and from efforts directed at metaphor constrained to one type of phrase or to a single topic domain. The more interesting and novel metaphors appear to be based on concepts generally represented as nouns, since such concepts can be understood from a variety of perspectives. The core of the process of interpreting nominal concepts is to represent them in such a way that readers or hearers can infer which aspect(s) of the nominal concept is likely to be intended to be applied to its interpretation. These aspects are defined in terms of verbal and adjectival predicates. A section on the representation and processing of part-sentence verbal metaphor will therefore also serve as preparation for the representation of salient aspects of metaphorically used nouns. As the ability to process metaphorically used verbs and nouns facilitates the interpretation of more complex tropes, computational analysis of two other kinds of metaphorically based expressions are outlined: metaphoric compound nouns, such as "idea factory", and, together with the representation of inferences, modified metaphoric idioms, such as "Put the cat back into the bag".
This book constitutes the refereed proceedings of the 8th International Conference on Flexible Query Answering Systems, FQAS 2011, held in Roskilde, Denmark, in October 2011. The 43 papers included in this volume were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on logical approaches to flexible querying, fuzzy logic in spatial and temporal data modeling and querying, knowledge-based approaches, multimedia, data fuzziness, reliability and trust, information retrieval, preference queries, flexible querying of graph data, ranking, ordering and statistics, query recommendation and interpretation, as well as on fuzzy databases and applications (8 papers presented in a special session).
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
This book constitutes the refereed proceedings of the 25th Canadian Conference on Artificial Intelligence, Canadian AI 2012, held in Toronto, Canada, in May 2012. The 23 regular papers, 16 short papers, and 4 papers from the Graduate Student Symposium presented were carefully reviewed and selected for inclusion in this book. The papers cover a broad range of topics presenting original work in all areas of artificial intelligence, either theoretical or applied.
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the refereed proceedings of the 8th International Tbilisi Symposium on Logic, Language, and Computation, TbiLLC 2009, held in Bakuriani, Georgia, in September 2009. The 20 revised full papers included in the book were carefully reviewed and selected from numerous presentations given at the symposium. The focus of the papers is on the following topics: natural language syntax, semantics, and pragmatics; constructive, modal and algebraic logic; linguistic typology and semantic universals; logics for artificial intelligence; information retrieval and query answering systems; logic, games, and formal pragmatics; language evolution and learnability; computational social choice; historical linguistics; and history of logic.
This book contains selected papers from the Colloquium in Honor of Alain Lecomte, held in Pauillac, France, in November 2007. The event was part of the ANR project "Prélude" (Towards Theoretical Pragmatics Based on Ludics and Continuation Theory), the proceedings of which were published in another FoLLI-LNAI volume (LNAI 6505) edited by Alain Lecomte and Samuel Tronçon. The selected papers of this Festschrift volume focus on the scientific areas in which Alain Lecomte has worked and to which he has contributed: formal linguistics, computational linguistics, logic, and cognition.
This book constitutes the refereed proceedings of the 24th Canadian Conference on Artificial Intelligence, Canadian AI 2011, held in St. John's, Canada, in May 2011. The 23 revised full papers presented together with 22 revised short papers and 5 papers from the graduate student symposium were carefully reviewed and selected from 81 submissions. The papers cover a broad range of topics presenting original work in all areas of artificial intelligence, either theoretical or applied.
This two-volume set, consisting of LNCS 6608 and LNCS 6609, constitutes the thoroughly refereed proceedings of the 12th International Conference on Computational Linguistics and Intelligent Text Processing, held in Tokyo, Japan, in February 2011. The 74 full papers, presented together with 4 invited papers, were carefully reviewed and selected from 298 submissions. The contents have been ordered according to the following topical sections: lexical resources; syntax and parsing; part-of-speech tagging and morphology; word sense disambiguation; semantics and discourse; opinion mining and sentiment detection; text generation; machine translation and multilingualism; information extraction and information retrieval; text categorization and classification; summarization and recognizing textual entailment; authoring aid, error correction, and style analysis; and speech recognition and generation.
Many approaches have already been proposed for classification and modeling in the literature. These approaches are usually based on mathematical models. Computer systems can easily handle mathematical models even when they are complicated and nonlinear (e.g., neural networks). On the other hand, it is not always easy for human users to intuitively understand mathematical models even when they are simple and linear. This is because human information processing is based mainly on linguistic knowledge while computer systems are designed to handle symbolic and numerical information. A large part of our daily communication is based on words. We learn from various media such as books, newspapers, magazines, TV, and the Internet through words. We also communicate with others through words. While words play a central role in human information processing, linguistic models are not often used in the fields of classification and modeling. If there is no goal other than the maximization of accuracy in classification and modeling, mathematical models may always be preferred to linguistic models. On the other hand, linguistic models may be chosen if emphasis is placed on interpretability.
It is no longer possible to distinguish between human and robot. Artificially intelligent robots have been endowed with the madness, humor, and self-irony that today, unfortunately, make it impossible to distinguish them spiritually from humans. This book contains the uncensored, authentic story of how that happened. By accident. It began with the text researcher who, through IT pairings of famous, popular Danish authors' bestsellers, sought to create next book season's hit novel. It ended with the researcher's young female assistant contacting, through her network, an unknown one-man publishing house, which took on the task of publishing the researcher's unfinished report on the project, hidden from the artificial intelligence. To warn you and other humans about the dangers of artificial intelligence. The book is hand-printed without the use of IT. Otherwise, ubiquitous artificial intelligence would have turned this book into more of the usual fake news about harmless artificial intelligence: clumsy work robots that tip over, self-driving cars that run down cyclists, childish talking robots with idiotic sticky memories, cuddly seal pups in nursing homes. What the media would write if they knew the book: "No one knows the true potential of Danish literature until they have read 'Algoritmen som åd sin skaber'" (Bo Tao Michaëlis, Politiken). "After reading 'The Algorithm who Ate its Creator' I have decided to shut down Facebook and destroy the algorithm." (Mark Zuckerberg, final post, Facebook). "Worst nonsense in living memory. Only artificial intelligence could have made it worse." (Anonymous leak from IBM, carried in all the world's leading media). Klaus Kjøller has kindly agreed to be credited as the author of this book: "It pleases me if my name's well-known ability to keep publications hidden from the media can be of use in the fight for humanity's survival."