The vast geographical spread of Spanish has produced numerous lexicographic works that face the challenge of responding to problems arising from the internationalization this language has achieved. On the one hand, bilingual dictionaries offer a different perspective for reflecting on our lexicon and on how to represent it effectively in dictionaries; on the other, within the Spanish-speaking countries, the colorful mosaic of regional, national, and supranational varieties sparks debates on how to handle such diversity so as to achieve a representation that is fair to all. In this volume, specialists from different backgrounds compare lexicographic problems, methods, and solutions for addressing this international dimension.
This book explores the evolution of sentiment in economic terms in the press during financial crises, applying a combination of sentiment analysis techniques and usage fluctuation analysis to a diachronic corpus derived from editorials in quality newspapers during the Great Recession. The book uncovers two key findings: first, certain economic terms become event words during times of crisis due to their increased use in the press and among the general public, revealing rapid semantic changes in economic terms caused by major socio-economic events. Second, sentiment-laden collocations are found to be influenced by culture, highlighting language's adaptability to financial upheavals. This work proposes an innovative methodology that combines lexicon-based Sentiment Analysis, Corpus Linguistics, and qualitative Discourse Analysis to shed light on how language shapes economic discourse, making it a valuable resource for scholars exploring the relationship between language and historic events.
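The core idea of lexicon-based sentiment analysis, one of the techniques named above, can be illustrated in a few lines. The lexicon and the sample editorial sentence below are purely illustrative, not taken from the book's corpus:

```python
# A minimal sketch of lexicon-based sentiment scoring: count how many
# tokens of a text appear in positive vs. negative word lists.
# These toy lexicons are illustrative, not the book's actual resources.
POSITIVE = {"growth", "recovery", "gain", "confidence"}
NEGATIVE = {"crisis", "recession", "collapse", "loss"}

def sentiment_score(text: str) -> int:
    """Return (positive hits - negative hits) for a whitespace-tokenized text."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

editorial = "the recession deepened as the collapse eroded confidence"
print(sentiment_score(editorial))  # → -1 (two negative hits, one positive)
```

Tracking how such scores shift for a fixed set of economic terms across a diachronic corpus is, in essence, how sentiment evolution over a crisis period can be measured.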
This book provides an analysis of acoustic features of polysemous strings and an implementation of a speech disambiguation program based on phonetic information. Throughout the book, the term 'polysemous string' refers to idioms with plausible literal interpretations, restrictive and non-restrictive relative clauses, and the same expressions used as quotations and appearing in a non-quotational context. The author explains how, typically, context is sufficient to determine the intended meaning, but there is enough evidence in the psycholinguistic and phonetic literature to suspect that these superficially identical strings exhibit different acoustic features. In the experiment presented in the book, participants were asked to read short excerpts containing corresponding elements of polysemous strings placed in the same intonational position. The acoustic analyses of ditropic pairs and subsequent statistical tests revealed almost no difference in duration, pitch, or intensity between literal and figurative interpretations. However, the analysis of relative clauses and quotations demonstrated that speakers are more likely to use acoustic cues to differentiate between the two possible readings. The book argues that the acoustic analysis of polysemous phrases could be successfully implemented in designing automatic speech recognition systems in order to improve their performance in disambiguating polysemous phrases.
- Analyzes acoustic features of polysemous strings and an implementation of a speech disambiguation program
- Includes evidence that superficially identical strings exhibit different acoustic features
- Argues that acoustic analysis of polysemous phrases can be successfully implemented in automatic speech recognition
This book provides readers with a brief account of the history of Language Identification (LI) research and a survey of the features and methods most used in the LI literature. LI is the problem of determining the language in which a document is written and is a crucial part of many text processing pipelines. The authors use a unified notation to clarify the relationships between common LI methods. The book introduces LI performance evaluation methods and takes a detailed look at LI-related shared tasks. The authors identify open issues, discuss the applications of LI and related tasks, and propose future directions for research in LI.
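A staple of the LI literature is the character n-gram profile: classify a text by the language whose profile it most resembles. The sketch below uses toy profiles built from single sentences and a simple overlap score of my own choosing; real systems train on large corpora and use more principled similarity measures:

```python
from collections import Counter

def ngrams(text: str, n: int = 3) -> Counter:
    """Character trigram counts, with padding spaces at the edges."""
    text = f" {text.lower()} "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

# Toy language profiles; real LI profiles come from millions of characters.
PROFILES = {
    "en": ngrams("the quick brown fox jumps over the lazy dog the cat"),
    "es": ngrams("el rapido zorro marron salta sobre el perro perezoso"),
}

def identify(text: str) -> str:
    """Pick the language whose trigram profile overlaps most with the text."""
    doc = ngrams(text)
    def overlap(lang: str) -> int:
        return sum(min(c, PROFILES[lang][g]) for g, c in doc.items())
    return max(PROFILES, key=overlap)

print(identify("the dog jumps"))  # → en
```

Even this toy version separates the two languages on short inputs, which hints at why character n-grams remain a strong LI baseline.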
This book explores the cognitive plausibility of computational language models and why it's an important factor in their development and evaluation. The authors present the idea that more can be learned about cognitive plausibility of computational language models by linking signals of cognitive processing load in humans to interpretability methods that allow for exploration of the hidden mechanisms of neural models. The book identifies limitations when applying the existing methodology for representational analyses to contextualized settings and critiques the current emphasis on form over more grounded approaches to modeling language. The authors discuss how novel techniques for transfer and curriculum learning could lead to cognitively more plausible generalization capabilities in models. The book also highlights the importance of instance-level evaluation and includes thorough discussion of the ethical considerations that may arise throughout the various stages of cognitive plausibility research.
This book draws from graph theory and a semiotic comparison between language and distributed ledger technologies (also known as Blockchains) to motivate three experiments on language and network structure. The work explores the importance of this concept in different areas of linguistic research and establishes elements of a tentative linguistics of networks. Its empirical investigation is based on data from threads posted to the imageboard Hispachan, which often displays radicalized language and hate speech. The experiments (based on topic modeling and sentiment analysis) reveal an impact of the network structure of interaction on the interaction itself, as well as the use of ingroup signalling and emotionally charged vocabulary to expand the network of interaction.
Contemporary data analytics involves extracting insights from large volumes of data and translating these insights into action to enhance knowledge and practice. Combining tutorial-style chapters and empirical studies, this collection of papers explains the distinction between data analytics and statistics as typically conceived, and shows how data analytic approaches can inform different areas of cognitive linguistic research and application.
Computational Methods for Communication Science showcases the use of innovative computational methods in the study of communication. This book discusses the validity of using big data in communication science and showcases a number of new methods and applications in the fields of text and network analysis. Computational methods have the potential to greatly enhance the scientific study of communication because they allow us to move towards collaborative large-N studies of actual behavior in its social context. This requires us to develop new skills and infrastructure and meet the challenges of open, valid, reliable, and ethical "big data" research. This volume brings together a number of leading scholars in this emerging field, contributing to the increasing development and adaptation of computational methods in communication science. The chapters in this book were originally published as a special issue of the journal Communication Methods and Measures.
This book provides a comprehensive overview of methods to build comparable corpora and of their applications, including machine translation, cross-lingual transfer, and various kinds of multilingual natural language processing. The authors begin with a brief history on the topic followed by a comparison to parallel resources and an explanation of why comparable corpora have become more widely used. In particular, they provide the basis for the multilingual capabilities of pre-trained models, such as BERT or GPT. The book then focuses on building comparable corpora, aligning their sentences to create a database of suitable translations, and using these sentence translations to produce dictionaries and term banks. Finally, the book explains how comparable corpora can be used to build machine translation engines and to develop a wide variety of multilingual applications.
Lexical didactics, terminology, and corpus linguistics are the axes around which this study is organized. We focus, in particular, on the explicit and structured teaching of specialized vocabulary for French as a second language (L2), in relation to the development not only of strictly terminological competence but also of the "meta" competence concerning mastery of how terms function. After identifying a core set of metaterminological notions, our main objective is to propose methodological avenues for teaching them based on the exploitation of specialized corpora, in the conviction that an effective synergy can be established between metaterminological competence and the use of a digital resource for didactic purposes. Possible applications are proposed for the domain of commerce.
This book brings together selected revised papers representing a multidisciplinary approach to language and literature. The collection presents studies performed using the methods of computational linguistics in accordance with the traditions of Russian linguistic and literary studies, primarily in line with the Leningrad (Petersburg) philological school. The book comprises papers organized into two sections, covering the study of corpora in language, translation, and literary studies, and the use of computing in language teaching, translation, and emotional text processing. A unique feature of the collection is that the papers, compiled in one volume, give readers an understanding of the wide range of research conducted at Saint Petersburg State University and other leading Russian scientific institutions. Of particular interest is the combination of the classical tradition of Saint Petersburg philology with results obtained through new computer technologies: a symbiosis of traditions and technologies that raises research to a qualitatively new level.
This book shows how generative AI technology works and what is driving it. It also looks at applications, showing what various startups and large companies are doing in the space, and examines the challenges and risk factors. During the past decade, companies have spent billions on AI, but the focus has been on applying the technology to predictions, which is known as analytical AI. It can mean that you receive TikTok videos that you cannot resist, or analytical AI can fend off spam or fraud or forecast when a package will be delivered. While such things are beneficial, there is much more to AI. The next megatrend will be leveraging the technology to be creative. For example, you could take a book and an AI model will turn it into a movie, at very little cost. This is all part of generative AI. It is still in its nascent stages, but it is progressing quickly. Generative AI can already create engaging blog posts, social media messages, beautiful artwork, and compelling videos. The potential for this technology is enormous. It will be useful for many categories like sales, marketing, legal, product design, code generation, and even pharmaceutical creation.

What You Will Learn
- The importance of understanding generative AI
- The fundamentals of the technology, like the foundation and diffusion models
- How generative AI apps work
- How generative AI will impact various categories like law, marketing/sales, gaming, product development, and code generation
- The risks, downsides, and challenges

Who This Book Is For
Professionals who do not have a technical background: mostly those in Corporate America (such as managers) as well as people in tech startups who need an understanding of generative AI to evaluate solutions.
Computers are essential for the functioning of our society. Despite the incredible power of existing computers, computing technology is progressing beyond today's conventional models. Quantum Computing (QC) is surfacing as a promising disruptive technology. QC is built on the principles of quantum mechanics. QC can run algorithms that are not trivial to run on digital computers. QC systems are being developed for the discovery of new materials and drugs and improved methods for encoding information for secure communication over the Internet. Unprecedented new uses for this technology are bound to emerge from ongoing research. The development of conventional digital computing technology for the arts and humanities has been progressing in tandem with the evolution of computers since the 1950s. Today, computers are absolutely essential for the arts and humanities. Therefore, future developments in QC are most likely to impact on the way in which artists will create and perform, and how research in the humanities will be conducted. This book presents a comprehensive collection of chapters by pioneers of emerging interdisciplinary research at the crossroads of quantum computing and the arts and humanities, from philosophy and social sciences to visual arts and music. Prof. Eduardo Reck Miranda is a composer and a professor in Computer Music at Plymouth University, UK, where he is a director of the Interdisciplinary Centre for Computer Music Research (ICCMR). His previous publications include the Springer titles Handbook of Artificial Intelligence for Music, Guide to Unconventional Computing for Music, Guide to Brain-Computer Music Interfacing and Guide to Computing for Expressive Music Performance.
This book introduces methods to enhance a model's ability to capture context, to improve the position-information perception of pretrained models, and to identify and denoise unlabeled entities. Chinese medical named entity recognition is an important branch of intelligent medicine: it helps mine the information hidden in medical texts and provides medical entity information for clinical decision-making and medical classification. The book is intended for researchers, engineers, and postgraduate students in the fields of medical management and software engineering.
Latin paradigms are almost proverbially well known, and they have often been used as a test case for different theoretical approaches to morphological complexity. This book analyses them from a completely word-based perspective, using a recently developed information-theoretic methodology and making entropy-based techniques of analysis available to a wider readership. In doing so, it shows the relevance of traditional notions like principal parts, giving them a more principled, data-driven formulation. Furthermore, it suggests enhancements to the standard information-theoretic methodology, making it possible to account for the role of external factors, like gender and derivational information, in improving predictability between inflected word forms. The book is useful to morphologists, who will see ideas and techniques from the current debate on morphological theory tested on complex phenomena of a language as renowned as Latin. It is also helpful for scholars working in both Latin and Romance linguistics: the former will find a freely available lexical resource and a novel description of Latin paradigms, which the latter can exploit to draw comparisons with recent analyses of the inflectional morphology of several Romance languages.
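The entropy-based measure at the heart of such work is conditional entropy between paradigm cells: how unpredictable one inflected form is given another. The toy data below (nominative-singular endings predicting genitive-singular endings for a handful of imaginary first- and second-declension nouns) is purely illustrative, not the book's dataset:

```python
from collections import Counter
from math import log2

# Toy (nominative ending, genitive ending) pairs: -a nouns always take -ae,
# while -us nouns split between -i and -us, so the genitive is less predictable.
pairs = [("a", "ae"), ("a", "ae"), ("us", "i"), ("us", "i"), ("us", "us")]

def conditional_entropy(pairs: list) -> float:
    """H(gen | nom) in bits: uncertainty about the genitive given the nominative."""
    joint = Counter(pairs)                      # counts of (nom, gen) pairs
    marginal = Counter(nom for nom, _ in pairs)  # counts of nom alone
    n = len(pairs)
    # -sum over cells of p(nom, gen) * log2 p(gen | nom)
    return -sum((c / n) * log2(c / marginal[nom])
                for (nom, _), c in joint.items())

print(round(conditional_entropy(pairs), 3))  # → 0.551
```

A value of 0 would mean the genitive is fully determined by the nominative; the nonzero result here comes entirely from the ambiguous -us class, which is exactly the kind of cell-to-cell unpredictability that principal parts are meant to resolve.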
This monograph offers a model for integrating terminology and ontology to represent and organize one sector of specialized communication, specifically the domain of safe maritime navigation on board ship, with the aim of producing a bilingual (English-Spanish) termino-ontological resource. The study fills a detected gap in terminographic resources on maritime navigation safety and contributes a procedure for applying ontologies to languages for specific purposes. The work consists of four parts: the first presents the key concepts in terminology and ontology and the methodology for building the termino-ontology, which is described in detail in the second and third parts. After the ontological formalization in the fourth part, the book describes the transition from the ontological resource to an accompanying bilingual electronic termino-ontological glossary, available online. This corpus-based glossary is aimed at professionals in navigation, maritime transport, and the shipbuilding industry, at teachers and students of nautical degree programmes, and at specialized-language professionals (teachers, translators, interpreters, technical writers, etc.).
This book assesses the place of logic, mathematics, and computer science in present-day, interdisciplinary areas of computational linguistics. Computational linguistics studies natural language in its various manifestations from a computational point of view, both on the theoretical level (modeling grammar modules dealing with natural language form and meaning, and the relation between the two) and on the practical level (developing applications for language and speech technology). The book is a collection of chapters presenting new and future research. It focuses mainly on logical approaches to the computational processing of natural language and on the applicability of methods and techniques from the study of formal languages, programming languages, and other specification languages. It also presents work from other approaches to linguistics, especially where these inspire new work and approaches.
In this research monograph, two empirical studies are presented, whose aim is to explore the linguistic cues to deception in written English and Spanish using computational tools like ALIAS WISER and LIWC. The tools have been tested on ground-truth data. After the automated text analysis, statistical classifiers are used to determine the best protocol for computational classification of true and false statements, and the role of emotional involvement is analyzed in low-stakes deception. The results demonstrate that, in our corpora, there is a real difference between "laboratory-produced" lies told in an experimental setting and high-stakes lies told in a police investigation.
This book presents recent advances in NLP and speech technology, a topic attracting increasing interest in a variety of fields through its myriad applications, such as the demand for speech-guided touchless technology during the Covid-19 pandemic. The authors present results of recent experimental research that provides contributions and solutions to different issues related to speech technology and speech in industry. Technologies include natural language processing, automatic speech recognition (for under-resourced dialects), and speech synthesis, which are useful for applications such as intelligent virtual assistants, among others. Applications cover areas such as sentiment analysis and opinion mining, Arabic named entity recognition, and language modelling. This book is relevant for anyone interested in the latest in language and speech technology.
What do 'bimbo,' 'glitch,' 'savvy,' and 'shtick' all have in common? They are all expressions used in informal American English that have been taken from other languages. This pioneering book provides a comprehensive description of borrowings in informal American English, based on a large database of citations from thousands of contemporary sources, including the press, film, and TV. It presents the United States as a linguistic 'melting pot,' with words from a diverse range of languages now frequently appearing in the lexicon. It examines these borrowings from various perspectives, including discussions of terms, donors, types, changes, functions, and themes. It also features an alphabetical glossary of 1,200 representative expressions, defined and illustrated by 5,500 usage examples, providing an insightful and practical resource for readers. Combining scholarship with readability, this book is a fascinating storehouse of information for students and researchers in linguistics as well as anyone interested in lexical variation in contemporary English.