1,139 research outputs found

    Design of 2D Time-Varying Vector Fields

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, for both planar domains and manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design that generates the time-varying vector fields via a sequence of basis field summations or spatially constrained optimizations at the sampled times. Key-frame design and field deformation are also introduced to support other user design scenarios; accordingly, a spatio-temporal constrained optimization and a time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated with our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields are applied as either orientation fields or advection fields to control the instantaneous appearance or the evolving trajectories of the dynamic effects.
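
    As a rough illustration of the element-based design described above, the sketch below sums time-interpolated basis fields of a few design elements on a planar domain. The Gaussian falloff, the two element kinds (source and vortex), and the linear key-frame interpolation are simplifying assumptions for illustration, not the exact formulation used in the paper.

```python
import numpy as np

def element_field(xy, center, strength, kind):
    """Basis field of one design element evaluated at points xy (N, 2).
    kind: 'source' (radial) or 'vortex' (rotational); Gaussian falloff assumed."""
    d = xy - center                                   # offsets from the element center
    r2 = np.sum(d * d, axis=1, keepdims=True)
    w = strength * np.exp(-r2)                        # Gaussian-weighted influence
    if kind == "source":
        return w * d                                  # outward radial field
    return w * np.stack([-d[:, 1], d[:, 0]], axis=1)  # 90-degree rotated (vortex) field

def field_at_time(xy, keyframes, t):
    """Sum the basis fields of all elements, with element parameters linearly
    interpolated between two key frames (a simplification of key-frame design)."""
    (t0, elems0), (t1, elems1) = keyframes
    a = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    v = np.zeros_like(xy)
    for e0, e1 in zip(elems0, elems1):
        center = (1 - a) * np.asarray(e0["center"]) + a * np.asarray(e1["center"])
        strength = (1 - a) * e0["strength"] + a * e1["strength"]
        v += element_field(xy, center, strength, e0["kind"])
    return v

# Example: a source that drifts toward the origin while a vortex fades in.
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 16), np.linspace(-2, 2, 16)), -1).reshape(-1, 2)
keys = ((0.0, [{"center": (-1, 0), "strength": 1.0, "kind": "source"},
               {"center": (1, 0), "strength": 0.0, "kind": "vortex"}]),
        (1.0, [{"center": (0, 0), "strength": 1.0, "kind": "source"},
               {"center": (1, 0), "strength": 1.0, "kind": "vortex"}]))
v = field_at_time(grid, keys, t=0.5)
```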

    The application of computational modeling to data visualization

    Researchers have argued that perceptual issues are important in determining what makes an effective visualization, but generally only provide descriptive guidelines for transforming perceptual theory into practical designs. To bridge the gap between theory and practice in a more rigorous way, a computational model of the primary visual cortex is used to explore the perception of data visualizations. A method is presented for automatically evaluating and optimizing data visualizations for an analytical task using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and visual cortex. The neural activity resulting from viewing an information visualization is simulated and evaluated to produce metrics of visualization effectiveness for analytical tasks. Visualization optimization is achieved by applying these effectiveness metrics as the utility function in a hill-climbing algorithm. This method is applied to the evaluation and optimization of two visualization types: 2D flow visualizations and node-link graph visualizations. The computational perceptual model is applied to various visual representations of flow fields evaluated using the advection task of Laidlaw et al. The predictive power of the model is examined by comparing its performance to that of human subjects on the advection task using four flow visualization types. The results show the same overall pattern for humans and the model: in both cases, the best performance was obtained from visualizations containing aligned visual edges. Flow visualization optimization is performed using both streaklet-based and pixel-based visualization parameterizations. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment, while the pixel-based parameterization produces a LIC-like result. The model is also applied to node-link graph visualizations for a node connectivity task using two-layer node-link diagrams. The model's evaluation of node-link graph visualizations correlates with human performance in terms of both accuracy and response time. Node-link graph visualizations are then optimized using the perceptual model. The optimized node-link diagrams exhibit the aesthetic properties associated with good node-link diagram design, such as straight edges, minimal edge crossings, and maximal crossing angles, and yield empirically better performance on the node connectivity task.
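
    The optimization loop described above can be sketched as follows. The perceptual_utility function here is a hypothetical stand-in for the neural-network model of early vision (which in the actual method renders the visualization and scores task effectiveness); only the hill-climbing structure, with the effectiveness metric used as the utility function, is what this sketch illustrates, and the streaklet parameters are assumed for the example.

```python
import random

def perceptual_utility(params):
    """Placeholder for the perceptual-model evaluation: in the real method this
    would simulate early visual processing of the rendered visualization and
    return a task-effectiveness metric. Here it is an arbitrary scoring function."""
    streak_length, streak_density, contrast = params
    return -(streak_length - 8.0) ** 2 - (streak_density - 0.5) ** 2 - (contrast - 0.7) ** 2

def hill_climb(params, step_sizes, iterations=200, seed=0):
    """Greedy hill climbing: perturb one parameter at a time and keep the change
    whenever the utility (predicted visualization effectiveness) improves."""
    rng = random.Random(seed)
    best, best_score = list(params), perceptual_utility(params)
    for _ in range(iterations):
        i = rng.randrange(len(best))
        candidate = list(best)
        candidate[i] += rng.choice((-1, 1)) * step_sizes[i]
        score = perceptual_utility(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Hypothetical streaklet parameterization: length, density, contrast.
optimized, score = hill_climb([4.0, 0.2, 0.3], step_sizes=[0.5, 0.05, 0.05])
```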

    Integration of reservoir characterization with pilot-well-based history matching: application to the Norne field

    Advisor: Denis José Schiozer. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências. The inherent uncertainties in numerical reservoir simulation can lead to models with significant differences from the observed dynamic data. History matching reduces these differences but often neglects the geological consistency of the models, compromising forecasting reliability. To maintain geological consistency, the history-matching process must be integrated with geostatistical modeling. Despite many approaches suggested in recent decades, this integration remains a challenge. This work proposes a geological modeling workflow integrated within a general history-matching workflow, using the pilot point concept (in this study taking the form of pilot wells). The pilot point method is a geostatistical parameterization technique that calibrates a pre-correlated field, generated from measured values and a set of additional synthetic data at unmeasured locations in the reservoir, referred to as pilot points. In this study, the synthetic data correspond to synthetic wells, henceforth referred to as pilot wells. The methodology is applied to a real, complex, sandstone reservoir, the Norne field. The geological heterogeneities are characterized in detail through electrofacies analysis and combined with a refined simulation grid to create high-resolution facies and petrophysical 3D models. This stage has several advantages: (1) it allows the mapping of fine-scale heterogeneities, generally decimeter-scale shales and calcareous-cemented layers, that may act as stratigraphic barriers to vertical fluid displacement; (2) it allows the addition of new attributes used during the history-matching stage, such as the properties used in the pilot wells, vertical permeability and transmissibility models, and different relative permeability curves assigned to different rock types; and (3) it increases geological control over the history-matching process. After the electrofacies analysis, the high-resolution datasets are integrated into an iterative loop between geostatistical modeling and a probabilistic, multi-objective history-matching process guided by pilot wells. A key challenge of the pilot well method is choosing the pilot well configuration (number, location, and properties to disturb), and the flexibility of the method is one of its principal advantages. The configuration takes into account production data, the preferred fluid-flow paths revealed by streamline analysis, and the geological and structural framework. The flexibility of the method is demonstrated in two case studies: generating specific sedimentary features (e.g. the channel in the G-segment) and finding the best location for the cemented stringers responsible for the fluid behavior observed in the C-segment. The iterative process combining geological modeling and geostatistics-based history matching, guided by pilot wells, produced geologically consistent models that honor the observed data.
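
    A minimal sketch of the pilot-well idea described in this abstract is given below, assuming a simple stochastic search: pilot-well property values are perturbed, a property field is re-estimated from the measured wells plus the pilot wells (inverse-distance weighting stands in for the geostatistical modeling), and a toy "simulator" and misfit function replace the reservoir simulator and the multi-objective history-matching machinery. All names, coordinates, and values are hypothetical.

```python
import numpy as np

def interpolate_field(grid_xy, data_xy, data_vals, power=2.0):
    """Stand-in for geostatistical estimation (e.g. kriging): inverse-distance
    weighting of measured-well and pilot-well values onto the simulation grid."""
    d = np.linalg.norm(grid_xy[:, None, :] - data_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w @ data_vals) / w.sum(axis=1)

def misfit(simulated, observed):
    """History-matching objective: mismatch between simulated and observed data."""
    return float(np.sum((simulated - observed) ** 2))

def perturb_pilot_wells(pilot_vals, scale, rng):
    """Disturb only the pilot-well property values; measured well data stay fixed."""
    return pilot_vals + rng.normal(0.0, scale, size=pilot_vals.shape)

# Hypothetical setup: two measured wells, two adjustable pilot wells, and a fake
# "flow simulator" that just averages permeability (for illustration only).
rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), -1).reshape(-1, 2)
well_xy, well_k = np.array([[0.1, 0.1], [0.9, 0.8]]), np.array([150.0, 80.0])
pilot_xy, pilot_k = np.array([[0.5, 0.5], [0.3, 0.7]]), np.array([100.0, 100.0])
observed = np.array([112.0])
fake_simulator = lambda k_field: np.array([k_field.mean()])

best_k, best_obj = pilot_k, np.inf
for _ in range(100):                                   # simple stochastic search loop
    trial = perturb_pilot_wells(best_k, scale=10.0, rng=rng)
    field = interpolate_field(grid, np.vstack([well_xy, pilot_xy]),
                              np.concatenate([well_k, trial]))
    obj = misfit(fake_simulator(field), observed)
    if obj < best_obj:
        best_k, best_obj = trial, obj
```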

    Visual Analysis and Exploration of Fluid Flow in a Cooling Jacket

    Statistical methods for history matching

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science are increasing constantly. Technical advances allow ever smaller features and more complex structures to be captured in the data. To make these data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field. Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience seeks to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, it uses a combination of three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators; through OpenWalnut, both standard and novel visualization approaches are available to neuroscientific researchers. Afterwards, I introduce a very specialized method to illustrate the causal relations between brain areas, which previously could only be represented via abstract graph models. I conclude the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the visualization techniques for the neuroscientific community, exemplified using clinically relevant scenarios. Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations -- on improving this interface. Visual improvements based on computer graphics methods from the computer game industry are often viewed sceptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering"; its advantage, amongst others, is its seamless applicability as a post-processing step to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. These mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible for the first time to emphasize both local details and global spatial relations in dense line and point data.
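
    As a loose illustration of why a screen-space post-process composes with nearly any visualization technique, the sketch below estimates normals from a depth buffer by finite differences and applies Lambertian shading. This is a generic screen-space shading pass under simplifying assumptions, not the specific line- and point-shading method developed in the thesis.

```python
import numpy as np

def screen_space_shade(depth, light_dir=(0.3, 0.3, 0.9)):
    """Minimal screen-space post-process: estimate per-pixel normals from the
    depth buffer by finite differences, then apply Lambertian (diffuse) shading.
    It only needs a depth image, which is why screen-space passes can be applied
    after almost any rendering or visualization technique."""
    dz_dy, dz_dx = np.gradient(depth)                  # depth derivatives in screen space
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, 1.0)          # diffuse intensity per pixel

# Example: shade a synthetic depth buffer containing a sphere-like bump.
y, x = np.mgrid[-1:1:256j, -1:1:256j]
depth = 1.0 - np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
shaded = screen_space_shade(depth)
```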