An argument that information exists at different levels of analysis—syntactic, semantic, and pragmatic—and an exploration of the implications.

Although this is the Information Age, there is no universal agreement about what information really is. Different disciplines view information differently; engineers, computer scientists, economists, linguists, and philosophers all take varying and apparently disconnected approaches. In this book, Antonio Badia distinguishes four levels of analysis brought to bear on information: syntactic, semantic, pragmatic, and network-based. Badia explains each of these theoretical approaches in turn, discussing, among other topics, the theories of Claude Shannon and Andrey Kolmogorov, Fred Dretske's description of information flow, and ideas on receiver impact and informational interactions. Badia argues that all these theories describe the same phenomena from different perspectives, each one narrower than the previous. The syntactic approach is the most general, but it fails to specify when information is meaningful to an agent, which is the focus of the semantic and pragmatic approaches. The network-based approach, meanwhile, provides a framework for understanding information use among agents.

Badia then explores the consequences of understanding information as existing at several levels. Humans live at the semantic and pragmatic levels (and, as a society, at the network level); computers live at the syntactic level. This sheds light on some recent issues, including "fake news" (computers cannot tell whether a statement is true or not, because truth is a semantic notion) and "algorithmic bias" (a pragmatic, not syntactic, concern). Humans, not computers, the book argues, have the ability to solve these issues.
Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings.

The research evaluation market is booming. "Rankings," "metrics," "h-index," and "impact factors" are reigning buzzwords. Governments and research administrators want to evaluate everything—teachers, professors, training programs, universities—using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics—aggregate data on publications and citations—has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than the "subjective" and intuitive evaluation methods, such as peer review, that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they claim to measure.

Although the study of publication and citation patterns, at the proper scales, can yield insights into the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data are manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
A call to redirect the intellectual focus of information retrieval and science (IR&S) toward the phenomenon of technology-mediated experience.
A historical-conceptual account of the different genres, technologies, modes of inscription, and innate powers of expression by which something becomes evident.

In this book, Ronald Day offers a historical-conceptual account of how something becomes evident. Crossing philosophical ontology with documentary ontology, Day investigates the different genres, technologies, modes of inscription, and innate powers of expression by which something comes into presence and makes itself evident. He calls this philosophy of evidence documentarity, and it is through this theoretical lens that he examines documentary evidence (and documentation) within the tradition of Western philosophy, largely understood as representational in its epistemology, ontology, aesthetics, and politics.

Day discusses the expression of beings or entities as evidence of what exists through a range of categories and modes, from Plato's notion that ideas are universal types expressed in evidential particulars to the representation of powerful particulars in social media and machine learning algorithms. He considers, among other topics, the contrast between positivist and anthropological documentation traditions; the ontological and epistemological importance of the documentary index; the nineteenth-century French novel's documentary realism and the avant-garde's critique of representation; performative literary genres; expression as a form of self-evidence; and the "post-documentation" technologies of social media and machine learning, described as a posteriori, real-time technologies of documentation. Ultimately, Day argues, these representational means are not only information and knowledge technologies but also technologies of judgment, judging entities both descriptively and prescriptively.
"Explores typography as a medium that we understand very little, even as we consume vast amounts of information through it"--
"This book seeks to bridge information science and classification systems in the life sciences, specifically those related to biodiversity"--