16 research outputs found
Engineering simulations for cancer systems biology
Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process to clearly separate scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models that provides a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures that are affected by tumours, and bridging this gap requires substantial computational resources. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, modelling cell behaviour, interactions and response to therapeutic interventions.
Design of Information Visualizations in the Internet of Nano-Things Air Quality Systems
Today's age is characterized by the large amount of information that surrounds us, in which visual information plays a significant role. Nanosensor air quality measurement systems that use Internet of Nano-Things (IoNT) technology enable the collection of big data. Ease of display and storage of information that the user can easily interpret are imperative in designing a visual interface. Only a good combination of visual elements, complemented by data and map displays, will contribute to the clarity of the processed data. This paper gives an overview of the factors that affect the quality of the transmission of visual information. Ways of presenting visualizations of air quality data measured by IoNT systems are discussed through descriptive and empirical analysis of visualizations. Special emphasis is placed on a review of existing practices and principles, and the possibilities of visual presentation of information in this area are explained through the discussion.
Artifact-Based Rendering: Harnessing Natural and Traditional Visual Media for More Expressive and Engaging 3D Visualizations
We introduce Artifact-Based Rendering (ABR), a framework of tools, algorithms, and processes that makes it possible to produce real, data-driven 3D scientific visualizations with a visual language derived entirely from colors, lines, textures, and forms created using traditional physical media or found in nature. A theory and process for ABR is presented to address three current needs: (i) designing better visualizations by making it possible for non-programmers to rapidly design and critique many alternative data-to-visual mappings; (ii) expanding the visual vocabulary used in scientific visualizations to depict increasingly complex multivariate data; (iii) bringing a more engaging, natural, and human-relatable handcrafted aesthetic to data visualization. New tools and algorithms to support ABR include front-end applets for constructing artifact-based colormaps, optimizing 3D scanned meshes for use in data visualization, and synthesizing textures from artifacts. These are complemented by an interactive rendering engine with custom algorithms and interfaces that demonstrate multiple new visual styles for depicting point, line, surface, and volume data. A within-the-research-team design study provides early evidence of the shift in visualization design processes that ABR is believed to enable when compared to traditional scientific visualization systems. Qualitative user feedback on applications to climate science and brain imaging supports the utility of ABR for scientific discovery and public communication. Published in IEEE VIS 2019; 9 pages of content with 2 pages of references, 12 figures.
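The artifact-based colormap construction mentioned above can be illustrated with a minimal sketch: sample a few colors from a photographed physical artifact and interpolate between them to map normalized data values to color. The control points, their descriptions, and the piecewise-linear interpolation are assumptions for illustration only, not ABR's actual applets.

```python
# Sketch: a colormap built from colors sampled off a physical artifact.
# The RGB control points below are made up for illustration.
import numpy as np

# Control points: (data value in [0, 1], RGB sampled from an artifact photo).
control = [
    (0.0, (0.13, 0.20, 0.35)),  # deep indigo from a watercolor wash
    (0.5, (0.75, 0.55, 0.30)),  # ochre from a clay sample
    (1.0, (0.95, 0.92, 0.80)),  # pale cream from handmade paper
]

def artifact_colormap(t):
    """Piecewise-linearly interpolate the sampled colors at value t."""
    xs = np.array([c[0] for c in control])
    rgbs = np.array([c[1] for c in control])
    return tuple(float(np.interp(t, xs, rgbs[:, i])) for i in range(3))

print(artifact_colormap(0.25))  # a blend of the indigo and ochre samples
```

A real pipeline would sample many more control points and interpolate in a perceptually uniform color space rather than raw RGB.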
3D Reactive Widgets for Musical Interaction (Widgets Réactifs 3D pour l'interaction musicale)
Our work concerns the use of immersive 3D interaction for musical performance. Much research has been conducted on the use of graphical interfaces for musical control, notably describing the value of reactive widgets, graphical elements that allow both the control of sound processes and the visualization of information about those processes. Other research has shown the possibilities offered by virtual reality in terms of immersion and interaction. However, none of the musical 3D applications developed so far exploits the advantages of reactive widgets. We therefore seek to explore this direction. To that end, we have developed a tool for creating 3D interfaces, Poulpe3D. It seems essential to us to find the best way of associating the visual parameters of the widgets with perceptual sound parameters, in order to allow effective control and relevant feedback. We draw on several lines of research, which lead us to believe that it is impossible to fix these associations objectively, and to plan a series of tests using a 3D mapping tool integrated into Poulpe3D. This tool lets each user configure the mappings according to their own preferences.
Attention and visual memory in visualization and computer graphics
A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we “see” details in an image can directly impact a viewer’s efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.
New insights into the suitability of the third dimension for visualizing multivariate/multidimensional data: a study based on loss of quality quantification
Most visualization techniques have traditionally used two-dimensional rather than three-dimensional representations to visualize multidimensional and multivariate data. In this article, a way to demonstrate the underlying superiority of three-dimensional over two-dimensional representation is proposed. Specifically, it is based on the inevitable quality degradation produced when reducing the data dimensionality. The problem is tackled from two different approaches: a visual and an analytical approach. First, a set of statistical tests (point classification, distance perception, and outlier identification) using two-dimensional and three-dimensional visualizations is carried out on a group of 40 users. The results indicate that there is an improvement in accuracy introduced by the inclusion of a third dimension; however, these results do not allow definitive conclusions to be drawn on the superiority of three-dimensional representation. Therefore, in order to draw further conclusions, a deeper study based on an analytical approach is proposed. The aim is to quantify the real loss of quality produced when the data are visualized in two-dimensional and three-dimensional spaces, in relation to the original data dimensionality, and to analyze the difference between them. To achieve this, a recently proposed methodology is used. The results obtained by the analytical approach show that the loss of quality reaches significantly high values only when switching from three-dimensional to two-dimensional representation. The considerable quality degradation suffered in the two-dimensional visualization strongly suggests the suitability of the third dimension for visualizing data.
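The analytical approach rests on measuring how much quality is lost when data are projected down to two or three dimensions. As a rough illustration only (using PCA reconstruction variance as a stand-in for the article's own quality-loss methodology, with synthetic data), the idea can be sketched as:

```python
# Sketch: quality loss from dimensionality reduction, with PCA discarded
# variance as an illustrative proxy for the article's quality metric.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 10-dimensional data with low-dimensional correlated structure.
latent = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 10))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 10))
X = X - X.mean(axis=0)

# PCA via SVD; singular values come out in descending order.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def loss(k):
    """Fraction of total variance discarded by a k-dimensional projection."""
    return 1.0 - (s[:k] ** 2).sum() / (s ** 2).sum()

loss_2d, loss_3d = loss(2), loss(3)
# A third dimension always retains at least as much variance as two.
print(f"2D loss: {loss_2d:.3f}, 3D loss: {loss_3d:.3f}")
```

The gap between `loss_2d` and `loss_3d` is the analytical analogue of the degradation the article observes when switching from three-dimensional to two-dimensional visualization.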
Scientific visualization of stress tensor information with applications to stress determination by X-ray and neutron diffraction
Includes bibliographical references (leaves 232-249). The visual analysis of mechanical stress facilitates physical understanding of the tensor quantity, which is concealed in scalar and vector methods. In this study, the principles and techniques of scientific visualization are used to develop a visual analysis of mechanical stresses. Scientific visualization is not only applied to the final tensorial quantity obtained from the diffraction measurements; the visual methods are also developed from, and integrated into, current residual stress analysis practices by relating the newly developed visual techniques to the conventional techniques and highlighting their advantages. This study consists of the mathematical analysis of the tensor character of mechanical stresses, a discussion of the principles and techniques of scientific visualization (visual data analysis) in physical research, and the tensor determination, visual analysis and presentation of residual stresses obtained from diffraction measurements.
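The tensor quantity at the heart of such visualizations is the symmetric stress tensor; its principal stresses and directions, which tensor glyphs and related visual techniques typically encode, follow from an eigen-decomposition. A minimal sketch, with a made-up illustrative tensor (values in MPa):

```python
# Sketch: principal stresses and directions of a symmetric stress tensor.
# The tensor values are illustrative, not from the thesis.
import numpy as np

# A symmetric Cauchy stress tensor: normal stresses on the diagonal,
# shear stresses off-diagonal (MPa).
sigma = np.array([
    [120.0,  40.0,   0.0],
    [ 40.0,  80.0,  25.0],
    [  0.0,  25.0, -30.0],
])

# Principal stresses are the eigenvalues; principal directions are the
# eigenvectors. eigh exploits symmetry and returns ascending eigenvalues.
principal, directions = np.linalg.eigh(sigma)

# Von Mises equivalent stress from the principal values.
s1, s2, s3 = principal[::-1]  # descending order
von_mises = np.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))
print("principal stresses (MPa):", np.round(principal[::-1], 1))
print("von Mises stress (MPa):", round(float(von_mises), 1))
```

The trace of the tensor is invariant under this decomposition, which is a useful sanity check when comparing visual and conventional analyses.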
New Visualization Techniques for Multi-Dimensional Variables in Complex Physical Domains
This work presents the new Synthesized Cell Texture (SCT) algorithm for visualizing related multiple scalar value fields within the same 3D space. The SCT method is particularly well suited to scalar quantities that could be represented in the physical domain as size-fractionated particles, such as in the study of sedimentation, atmospheric aerosols, or precipitation. There are two components to this contribution. First, a Scaling and Distribution (SAD) algorithm provides a means of specifying a multi-scalar field in terms of a maximum cell resolution (or density of represented values). This information is used to scale the multi-scalar field values for each 3D cell to the maximum values found throughout the data set, and then to randomly distribute those values as particles varying in number, size, color, and opacity within a 2D cell slice. This approach facilitates viewing of closely spaced layers commonly found in sigma-coordinate grids. The SAD algorithm can be applied regardless of how the particles are rendered. The second contribution provides the Synthesized Cell Texture (SCT) algorithm to render the multi-scalar values. In this approach, a texture is synthesized from the location information computed by the SAD algorithm, which is then applied to each cell as a 2D slice within the volume. The SCT method trades off computation time (to synthesize the texture) and texture memory against the number of geometric primitives that must be sent through the graphics pipeline of the host system. Analysis results from a user study demonstrate the effectiveness of the algorithm as a browsing method for multiple related scalar fields. The interactive rendering performance of the SCT method is compared with two common basic particle representations: flat-shaded color-mapped OpenGL points and quadrilaterals. Frame rate statistics show the SCT method to be up to 44 times faster, depending on the volume to be displayed and the host system.
The SCT method has been successfully applied to oceanographic sedimentation data, and can be applied to other problem domains as well. Future enhancements include the extension to time-varying data and parallelization of the texture synthesis component to reduce startup time.
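The core of the SAD step described above, scaling each cell's scalar values against the data-set maximum and scattering that many particles within a 2D cell slice, can be sketched as follows. The function name, the particle-count mapping, and the cell values are assumptions for illustration, not the thesis's implementation.

```python
# Sketch of the Scaling and Distribution (SAD) idea: per-cell scalar
# values are scaled to the data-set maximum and turned into randomly
# placed particles within a unit 2D cell slice. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def sad_distribute(cell_values, global_max, max_particles=50):
    """Map each scalar value in a cell to a particle count (scaled to the
    global maximum) and scatter those particles at random 2D positions."""
    particles = []
    for field_id, value in enumerate(cell_values):
        count = int(round(max_particles * value / global_max))
        # Random (x, y) positions within the unit cell slice.
        xy = rng.uniform(0.0, 1.0, size=(count, 2))
        particles.extend((field_id, float(x), float(y)) for x, y in xy)
    return particles

# Two related scalar fields (e.g. two sediment size fractions) in one cell.
cell = [3.2, 1.6]
pts = sad_distribute(cell, global_max=4.0)
print(len(pts), "particles generated")  # 40 + 20 particles
```

In the full method these particle positions would feed the texture-synthesis stage rather than being rendered directly as geometry.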
Mapping textures on 3d terrains: a hybrid cellular automata approach
It is a time-consuming task to generate textures for large 3D terrain surfaces in computer games, flight simulations and computer animations. This work explores the use of cellular automata in the automatic generation of textures for large surfaces. I propose a method for generating textures for 3D terrains using various approaches, in particular a hybrid approach that integrates the concepts of cellular automata, probabilistic distribution according to height, and Wang tiles. I also look at other hybrid combinations using cellular automata to generate textures for 3D terrains. Work for this thesis includes development of a tool called "Texullar" that allows users to generate textures for 3D terrain surfaces by configuring various input parameters and choosing cellular automata rules.
I evaluate the effectiveness of the approach by conducting a user survey to compare the results obtained using different inputs and analyzing the results. The findings show that incorporating concepts of cellular automata in texture generation for terrains can lead to better results than random generation of textures. The analysis also reveals that incorporating height information along with cellular automata yields better results than using cellular automata alone. Results from the user survey indicate that a hybrid approach incorporating height information along with cellular automata and Wang tiles is better than incorporating height information along with cellular automata alone in the context of texture generation for 3D meshes.
The survey did not yield enough evidence to suggest whether the use of Wang tiles in combination with cellular automata and probabilistic distribution according to height results in a higher mean score than the use of only cellular automata and probabilistic distribution. However, this outcome could have been influenced by the fact that the survey respondents did not have information about the parameters used to generate the final image, such as the probabilistic distributions, the population configurations and the rules of the cellular automata.
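The hybrid idea of combining a cellular automaton with a height-based probabilistic distribution can be sketched minimally: a CA whose update rule is biased by terrain height assigns texture classes (say, grass vs. rock) to grid cells. The rule, thresholds, and class labels below are assumptions for illustration, not the rules implemented in Texullar.

```python
# Sketch: a height-biased cellular automaton assigning texture classes
# (0 = grass, 1 = rock) to terrain grid cells. Illustrative rule only.
import numpy as np

rng = np.random.default_rng(7)
H, W = 32, 32
height = rng.random((H, W))            # normalized terrain heights
texture = rng.integers(0, 2, (H, W))   # initial random texture classes

def step(tex, height, rock_bias=0.6):
    """One CA generation: each cell tends toward the majority class of its
    3x3 neighbourhood, with high-altitude cells biased toward rock."""
    new = tex.copy()
    for y in range(H):
        for x in range(W):
            nb = tex[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            rock_frac = nb.mean()
            # Height raises the effective rock fraction before thresholding.
            if rock_frac + rock_bias * height[y, x] > 0.8:
                new[y, x] = 1
            else:
                new[y, x] = 0
    return new

for _ in range(5):
    texture = step(texture, height)
print("rock fraction:", round(float(texture.mean()), 2))
```

Iterating the rule smooths the initial random seed into contiguous patches that follow the terrain, which is the qualitative effect the survey asked respondents to judge; a Wang-tile stage would then stitch precomputed texture patches over these class regions.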