Why should we draw more together? Many organizations are experiencing a need for new working methods to handle and navigate the growing complexity they are part of. When we work visually, we can create shared images and thereby reach a shared understanding across professional groups, departments, and cultures. That is why we should draw more, together. When we draw, we become concrete and make the world more tangible. Tegn mere sammen offers leaders, change agents, and entrepreneurs an effective approach to thinking, communicating, and collaborating. The book gives a practical, simple, and accessible introduction to how we can all draw; it is merely a matter of mastering a few simple basic shapes. It shows how you can design and facilitate valuable processes, meetings, workshops, and conferences through FIVE DESIGN LOOPS. At tegnmeresammen.dk you will find the book's digital universe, where you can download tools and find even more inspiration for working more visually.
This book provides an overview of the emerging field of in situ visualization, i.e., visualizing simulation data as it is generated. In situ visualization is a processing paradigm that responds to recent trends in the development of high-performance computers. It holds great promise in its ability to access increased temporal resolution and leverage extensive computational power. However, the paradigm is also widely viewed as limiting when it comes to exploration-oriented use cases, and it requires visualization systems to become increasingly complex and constrained in usage. As research efforts on in situ visualization grow, the state of the art and best practices are rapidly maturing. Specifically, this book contains chapters that reflect state-of-the-art research results and best practices in the area of in situ visualization. Our target audience is researchers and practitioners from the areas of mathematics, computational science, high-performance computing, and computer science who work on or with in situ techniques, or desire to do so in the future.
When you picture human-data interactions (HDI), what comes to mind? The datafication of modern life, along with open data initiatives advocating for transparency and access to current and historical datasets, has fundamentally transformed when, where, and how people encounter data. People now rely on data to make decisions, understand current events, and interpret the world. We frequently employ graphs, maps, and other spatialized forms to aid data interpretation, yet the familiarity of these displays causes us to forget that even basic representations are complex, challenging inscriptions and are not neutral; they are based on representational choices that impact how and what they communicate. This book draws on frameworks from the learning sciences, visualization, and human-computer interaction to explore embodied HDI. This exciting sub-field of interaction design is based on the premise that every day we produce and have access to quintillions of bytes of data, the exploration and analysis of which are no longer confined within the walls of research laboratories. This volume examines how humans interact with these data in informal (not work or school) environments, particularly in museums. The first half of the book provides an overview of the multi-disciplinary, theoretical foundations of HDI (in particular, embodied cognition, conceptual metaphor theory, embodied interaction, and embodied learning) and reviews socio-technical theories relevant for designing HDI installations to support informal learning. The second half of the book describes strategies for engaging museum visitors with interactive data visualizations, presents methodologies that can inform the design of hand gestures and body movements for embodied installations, and discusses how HDI can facilitate people's sensemaking about data.
This cross-disciplinary book is intended as a resource for students and early-career researchers in human-computer interaction and the learning sciences, as well as for more senior researchers and museum practitioners who want to quickly familiarize themselves with HDI.
This book focuses on techniques for automating the procedure of creating external labelings, also known as callout labelings. In this labeling type, the features within an illustration are connected to their labels by thin leader lines (called leaders), with the labels placed in the empty space surrounding the image. In general, textual labels describing graphical features in maps, technical illustrations (such as assembly instructions or cutaway illustrations), or anatomy drawings are an important aspect of visualization that conveys information on the objects of the visualization and helps the reader understand what is being displayed. Most labeling techniques can be classified into two main categories depending on the "distance" of the labels to their associated features. Internal labels are placed inside or in the direct neighborhood of features, while external labels, which form the topic of this book, are placed in the margins outside the illustration, where they do not occlude the illustration itself. Both approaches form well-studied topics in diverse areas of computer science with several important milestones. The goal of this book is twofold. The first is to serve as an entry point for the interested reader who wants to become familiar with the basic concepts of external labeling, as it introduces a unified and extensible taxonomy of labeling models suitable for a wide range of applications. The second is to serve as a point of reference for more experienced people in the field, as it brings forth a comprehensive overview of a wide range of approaches to produce external labelings that are efficient either in terms of different algorithmic optimization criteria or in terms of their usability in specific application domains. The book mostly concentrates on algorithmic aspects of external labeling, but it also presents various visual aspects that affect the aesthetic quality and usability of external labeling.
The emergence of multilayer networks as a concept from the field of complex systems provides many new opportunities for the visualization of network complexity, and has also raised many new exciting challenges. The multilayer network model recognizes that the complexity of relationships between entities in real-world systems is better embraced as several interdependent subsystems (or layers) than as a simple graph. Despite only recently being formalized and defined, this model can be applied to problems in the domains of life sciences, sociology, digital humanities, and more. Within the domain of network visualization there are already many existing systems that visualize data sets having many characteristics of multilayer networks, and many techniques that are applicable to their visualization. In this Synthesis Lecture, we provide an overview and structured analysis of contemporary multilayer network visualization. This is intended not only for researchers in visualization, but also for those who aim to visualize multilayer networks in the domain of complex systems, as well as those solving problems within application domains. We have explored the visualization literature to survey visualization techniques suitable for multilayer network visualization, as well as tools, tasks, and analytic techniques from within application domains. We also identify the research opportunities and examine outstanding challenges for multilayer network visualization, along with potential solutions and future research directions for addressing them.
There is ample evidence in the visualization community that individual differences matter. These prior works highlight various personality traits and cognitive abilities that can modulate the use of visualization systems and demonstrate a measurable influence on speed, accuracy, process, and attention. Perhaps the most important implication of this body of work is that we can use individual differences as a mechanism for estimating when a design is effective or for identifying when people may struggle with visualization designs. These effects can have a critical impact on consequential decision-making processes. One study that appears in this book, investigating the impact of visualization on medical decision-making, showed that visual aids tended to be most beneficial for people with high spatial ability, a metric that measures a person's ability to mentally represent and manipulate two- or three-dimensional representations of objects. The results showed that participants with low spatial ability had difficulty interpreting and analyzing the underlying medical data when they used visual aids. Overall, approximately 50% of the studied population were unsupported by the visualization tools when making a potentially life-critical decision. As data fluency continues to become an essential skill for our everyday lives, we must embrace the growing need to understand the factors that may render our tools ineffective and identify concrete steps for improvement. This book presents my current understanding of how individual differences in personality interact with visualization use and draws from recent research in the Visualization, Human-Computer Interaction, and Psychology communities. We focus on the specific designs and tasks for which there is concrete evidence of performance divergence due to personality. Additionally, we highlight an exciting research agenda that is centered around creating tailored visualization systems that are aligned with people's abilities.
The purpose of this book is to underscore the need to consider individual differences when designing and evaluating visualization systems and to call attention to this critical research direction.
At the 2016 IEEE VIS Conference in Baltimore, Maryland, a panel of experts from the Scientific Visualization (SciVis) community gathered to discuss why the SciVis component of the conference had been shrinking significantly for over a decade. As the panelists concluded and opened the session to questions from the audience, Annie Preston, a Ph.D. student at the University of California, Davis, asked whether the panelists thought diversity or, more specifically, the lack of diversity was a factor. This comment ignited a lively discussion of diversity: not only its impact on Scientific Visualization, but also its role in the visualization community at large. The goal of this book is to expand and organize the conversation. In particular, this book seeks to frame the diversity and inclusion topic within the Visualization community, illuminate the issues, and serve as a starting point to address how to make this community more diverse and inclusive. This book acknowledges that diversity is a broad topic with many possible meanings. Expanded definitions of diversity that are relevant to the Visualization community and to computing at large are considered. The conversation of inclusion and diversity is framed within the broader sociological context in which it must be considered. Solutions to recruit and retain a diverse research community and strategies for supporting inclusion efforts are presented. Additionally, community members present short stories detailing their "non-inclusive" experiences in an effort to facilitate a community-wide conversation surrounding very difficult situations. It is important to note that this is by no means intended to be a comprehensive, authoritative statement on the topic. Rather, this book is intended to open the conversation and begin to build a framework for diversity and inclusion in this specific research community.
While intended for the Visualization community, ideally, this book will provide guidance for any computing community struggling with similar issues and looking for solutions.
Visual analytics has come a long way since its inception in 2005. The amount of data in the world today has increased significantly, and experts in many domains are struggling to make sense of their data. Visual analytics is helping them conduct their analyses. While software developers have worked for many years to develop software that helps users do their tasks, this undertaking is becoming more and more onerous, as understanding the needs and data used by expert users requires more than some simple usability testing during the development process. The need for a user-centered evaluation process was envisioned in Illuminating the Path, the seminal work on visual analytics by James Thomas and Kristin Cook in 2005. We have learned over the intervening years that not only will user-centered evaluation help software developers to turn out products that have more utility, the evaluation efforts can also help point out the direction for future research efforts. This book describes the efforts that go into analysis, including critical thinking, sensemaking, and various analytics techniques learned from the intelligence community. Support for these components is needed in order to provide the most utility for the expert users. There are a good number of techniques for evaluating software that have been developed within the human-computer interaction (HCI) community. While some of these techniques can be used as is, others require modifications. These too are described in the book. An essential point to stress is that the users of the domains for which visual analytics tools are being designed need to be involved in the process. The work they do and the obstacles in their current processes need to be understood in order to determine both the types of evaluations needed and the metrics to use in these evaluations. At this point in time, very few published efforts describe more than informal evaluations.
The purpose of this book is to help readers understand the need for more user-centered evaluations, both to drive better-designed products and to define areas for future research. Hopefully readers will view this work as an exciting and creative effort and will join the community involved in these efforts.
Prevalent types of data in scientific visualization are volumetric data, vector field data, and particle-based data. Particle data typically originates from measurements and simulations in various fields, such as the life sciences or physics. The particles are often visualized directly, that is, by simple representations such as spheres. Interactive rendering facilitates the exploration and visual analysis of the data. With increasing data set sizes in terms of particle numbers, interactive high-quality visualization is a challenging task. This is especially true for dynamic data or abstract representations that are based on the raw particle data. This book covers direct particle visualization using simple glyphs as well as application-driven abstractions such as clustering and aggregation. It targets visualization researchers and developers who are interested in visualization techniques for large, dynamic particle-based data. Its explanations focus on GPU-accelerated algorithms for high-performance rendering and data processing that run in real time on modern desktop hardware. Consequently, the implementation of said algorithms and the data structures required to make use of the capabilities of modern graphics APIs are discussed in detail. Furthermore, it covers GPU-accelerated methods for the generation of application-dependent abstract representations. This includes various representations commonly used in application areas such as structural biology, systems biology, thermodynamics, and astrophysics.
This book discusses semantic interaction, a user interaction methodology for visual analytic applications that more closely couples the visual reasoning processes of people with the computation. This methodology affords user interaction on visual data representations that are native to the domain of the data. User interaction in visual analytics systems is critical to enabling visual data exploration. Interaction transforms people from mere viewers to active participants in the process of analyzing and understanding data. This discourse between people and data enables people to understand aspects of their data, such as structure, patterns, trends, outliers, and other properties that ultimately result in insight. Through interacting with visualizations, users engage in sensemaking, a process of developing and understanding relationships within datasets through foraging and synthesis. The book provides a description of the principles of semantic interaction, design guidelines for the integration of semantic interaction into visual analytics, examples of existing technologies that leverage semantic interaction, and a discussion of how to evaluate these technologies. Semantic interaction has the potential to increase the effectiveness of visual analytic technologies and opens possibilities for a fundamentally new design space for user interaction in visual analytics systems.
Interest in visualization design has increased in recent years. While there is a large body of existing work from which visualization designers can draw, much of the past research has focused on developing new tools and techniques that are aimed at specific contexts. Less focus has been placed on developing holistic frameworks, models, and theories that can guide visualization design at a general level, one that transcends domains, data types, users, and other contextual factors. In addition, little emphasis has been placed on the thinking processes of designers, including the concepts that designers use, while they are engaged in a visualization design activity. In this book we present a general, holistic framework that is intended to support visualization design for human-information interaction. The framework is composed of a number of conceptual elements that can aid in design thinking. The core of the framework is a pattern language, consisting of a set of 14 basic, abstract patterns, and a simple syntax for describing how the patterns are blended. We also present a design process, made up of four main stages, for creating static or interactive visualizations. The 4-stage design process places the patterns at the core of designers' thinking, and employs a number of conceptual tools that help designers think systematically about creating visualizations based on the information they intend to represent. Although the framework can be used to design static visualizations for simple tasks, its real utility can be found when designing visualizations with interactive possibilities in mind; in other words, designing to support a human-information interactive discourse. This is especially true in contexts where interactive visualizations need to support complex tasks and activities involving large and complex information spaces.
The framework is intended to be general and can thus be used to design visualizations for diverse domains, users, information spaces, and tasks in different fields such as business intelligence, health and medical informatics, digital libraries, journalism, education, scientific discovery, and others. Drawing from research in multiple disciplines, we introduce novel concepts and terms that can positively contribute to visualization design practice and education, and will hopefully stimulate further research in this area.
Our society has entered a data-driven era, one in which not only are enormous amounts of data being generated daily, but there are also growing expectations placed on the analysis of this data. Some data have become simply too large to be displayed, and some have too short a lifespan to be handled properly with classical visualization or analysis methods. In order to address these issues, this book explores potential solutions in which we not only visualize data but also allow users to interact with it. Therefore, this book focuses on two main topics: large dataset visualization and interaction. Graphics cards and their image processing power can leverage large data visualization, but they can also be of great interest to support interaction. Therefore, this book will show how to take advantage of graphics card computation power with techniques called GPGPU (general-purpose computing on graphics processing units). As specific examples, this book details GPGPU usages to produce visualizations fast enough to be interactive with improved brushing techniques, fast animations between different data representations, and view simplifications (i.e., static and dynamic bundling techniques). Since data storage and memory limitations are less and less of an issue, we will also present techniques to reduce computation time by using memory as a new tool to solve computationally challenging problems. We will investigate innovative data processing techniques: while classical algorithms are expressed in data space (e.g., computation on geographic locations), we will express them in graphic space (e.g., a raster map, like a screen composed of pixels). This consists of two steps: (1) a data representation is built using straightforward visualization techniques; and (2) the resulting image undergoes purely graphical transformations using image processing techniques. This type of technique is called image-based visualization.
The goal of this book is to explore new computing techniques using image-based techniques to provide efficient visualizations and user interfaces for the exploration of large datasets. This book concentrates on the areas of information visualization, visual analytics, computer graphics, and human-computer interaction. This book opens up a whole field of study, including the scientific validation of these techniques, their limitations, and their generalizations to different types of datasets.
Visualization has become a valuable means for data exploration and analysis. Interactive visualization combines expressive graphical representations and effective user interaction. Although interaction is an important component of visualization approaches, much of the visualization literature tends to pay more attention to the graphical representation than to interaction. The goal of this work is to strengthen the interaction side of visualization. Based on a brief review of general aspects of interaction, we develop an interaction-oriented view on visualization. This view comprises five key aspects: the data, the tasks, the technology, the human, as well as the implementation. Picking up these aspects individually, we elaborate several interaction methods for visualization. We introduce a multi-threading architecture for efficient interactive exploration. We present interaction techniques for different types of data (e.g., multivariate data, spatio-temporal data, graphs) and different visualization tasks (e.g., exploratory navigation, visual comparison, visual editing). With respect to technology, we illustrate approaches that utilize modern interaction modalities (e.g., touch, tangibles, proxemics) as well as classic ones. While the human is important throughout this work, we also consider automatic methods to assist the interactive part. In addition to solutions for individual problems, a major contribution of this work is the overarching view of interaction in visualization as a whole. This includes a critical discussion of interaction, the identification of links between the key aspects of interaction, and the formulation of research topics for future work with a focus on interaction.
Analytical reasoning techniques are methods by which users explore their data to obtain insight and knowledge that can directly support situational awareness and decision making. Recently, the analytical reasoning process has been augmented through the use of interactive visual representations and tools which utilize cognitive, design, and perceptual principles. These tools are commonly referred to as visual analytics tools, and the underlying methods and principles have roots in a variety of disciplines. This chapter provides young researchers with an introduction to and overview of common visual representations and statistical analysis methods utilized in a variety of visual analytics systems. The application and design of visualization and analytical algorithms are subject to design decisions, parameter choices, and many conflicting requirements. As such, this chapter attempts to provide an initial set of guidelines for the creation of the visual representation, including pitfalls and areas where the graphics can be enhanced through interactive exploration. Basic analytical methods are explored as a means of enhancing the visual analysis process, moving from visual analysis to visual analytics. Table of Contents: Data Types / Color Schemes / Data Preconditioning / Visual Representations and Analysis / Summary
This book constitutes the refereed proceedings of the 4th International Workshop on Visual Form, IWVF-4, held in Capri, Italy, in May 2001. The 66 revised full papers presented together with seven invited papers were carefully reviewed and selected from 117 submissions. The book covers theoretical and applicative aspects of visual form processing. The papers are organized in topical sections on representation, analysis, recognition, modelling and retrieval, and applications.