
    Enhanced perception in volume visualization

    Due to the nature of scientific data sets, generating suitable visualizations can be a difficult task, yet it is crucial for correctly conveying the relevant information in the data. When working with complex volume models, such as anatomical ones, it is important to provide accurate representations, since a misinterpretation can lead to serious mistakes when diagnosing a disease or planning surgery. In these cases, enhancing the perception of the features of interest usually helps to properly understand the data. Over the years, researchers have focused on different methods to improve the visualization of volume data sets. For instance, the definition of good transfer functions is a key issue in Volume Visualization, since transfer functions determine how materials are classified. Other approaches are based on simulating realistic illumination models to enhance spatial perception, or on using illustrative effects to provide the level of abstraction needed to correctly interpret the data. This thesis contributes new approaches to enhance visual and spatial perception in Volume Visualization. Thanks to the computing capabilities of modern graphics hardware, the proposed algorithms are capable of modifying the illumination model and simulating illustrative motifs in real time. In order to enhance local details, which help the viewer better perceive the shape and surfaces of the volume, our first contribution is an algorithm that employs a common sharpening operator (unsharp masking) to modify the lighting applied. As a result, the overall contrast of the visualization is enhanced by brightening the salient features and darkening the deeper regions of the volume model. The enhancement of depth perception in Direct Volume Rendering is also covered in the thesis. To this end, we propose two algorithms to simulate ambient occlusion: a screen-space technique that uses depth information to estimate the amount of light occluded, and a view-independent method that uses the density values of the data set to estimate the occlusion. Additionally, depth perception is also enhanced by adding halos around the structures of interest. Maximum Intensity Projection images provide a good understanding of the high-intensity features of the data, but lack any contextual information. To enhance depth perception in this case, we present a novel technique based on changing how intensity is accumulated. Furthermore, the perception of the spatial arrangement of the displayed structures is enhanced by adding certain colour cues. The last contribution is a new manipulation tool designed to add contextual information when cutting the volume. Based on traditional illustrative effects, this method allows the user to directly extrude structures from the cross-section of the cut. As a result, the clipped structures are displayed at different heights, preserving the information needed to correctly perceive them.
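    As an illustration of the sharpening idea described above, the following sketch applies unsharp masking to a shading (luminance) buffer in image space with NumPy/SciPy. The array name, filter width and strength are illustrative assumptions; the thesis applies the operator to the volume lighting on the GPU rather than to a 2D image.

```python
# Illustrative sketch only: unsharp masking applied to a 2D shading (luminance) buffer.
# The buffer, sigma and strength values are hypothetical stand-ins for the thesis's
# GPU-based lighting operator.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_lighting(shading: np.ndarray, sigma: float = 3.0,
                     strength: float = 0.6) -> np.ndarray:
    """Brighten salient (high-frequency) features and darken smooth, deeper regions."""
    blurred = gaussian_filter(shading, sigma=sigma)   # low-pass version of the lighting
    detail = shading - blurred                        # high-frequency "salient" component
    enhanced = shading + strength * detail            # boost local contrast
    return np.clip(enhanced, 0.0, 1.0)
```

    Subtracting a blurred copy isolates the high-frequency component, so salient features are brightened while smooth or deep regions are relatively darkened, matching the contrast behaviour described in the abstract.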

    Invasion fitness, inclusive fitness, and reproductive numbers in heterogeneous populations.

    How should fitness be measured to determine which phenotype or "strategy" is uninvadable when evolution occurs in a group-structured population subject to local demographic and environmental heterogeneity? Several fitness measures, such as the basic reproductive number, the lifetime dispersal success of a local lineage, or inclusive fitness, have been proposed to address this question, but the relationships between them and their generality remain unclear. Here, we ascertain uninvadability (all mutant strategies always go extinct) in terms of the asymptotic per capita number of mutant copies produced by a mutant lineage arising as a single copy in a resident population ("invasion fitness"). We show that, starting from invasion fitness, uninvadability is equivalently characterized by at least three conceptually distinct fitness measures: (i) lineage fitness, giving the average individual fitness of a randomly sampled mutant lineage member; (ii) inclusive fitness, giving a reproductive-value-weighted average of the direct fitness costs and relatedness-weighted indirect fitness benefits accruing to a randomly sampled mutant lineage member; and (iii) the basic reproductive number (and variations thereof), giving the lifetime success of a lineage in a single group, which serves as a proxy for invasion fitness. Our analysis connects approaches that have been deemed different, generalizes the exact version of inclusive fitness to class-structured populations, and provides a biological interpretation of natural selection on a mutant allele under arbitrary strength of selection.
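    A generic matrix-population formulation of the uninvadability criterion the abstract refers to is sketched below; the notation is generic and not necessarily the paper's.

```latex
% Generic notation (not the paper's): A(\tau,\theta) is the matrix whose (i,j) entry is
% the expected number of class-i mutant copies produced per class-j mutant individual,
% for a mutant strategy \tau arising in a resident \theta population.
\[
  \rho(\tau,\theta) \;=\; \text{leading eigenvalue of } A(\tau,\theta)
  \qquad \text{(asymptotic per capita growth of the mutant lineage, i.e. invasion fitness)}
\]
\[
  \theta \ \text{is uninvadable} \;\iff\; \rho(\tau,\theta) \le 1
  \quad \text{for all mutant strategies } \tau \neq \theta .
\]
```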

    Probabilistic modelling of stimulated Raman back-scatter in laser direct-drive fusion plasmas at ignition scale

    A framework has been constructed for predictive modelling of the stimulated Raman scattering (SRS) instability at ignition scale in laser direct-drive inertial confinement fusion (ICF). An extended ray-tracing methodology from the literature was found to underpredict SRS and to be computationally inefficient. This was corrected by modifying the energy-exchange process between laser and Raman light, and by introducing thresholds and bounds physically informed by collisional absorption and Rosenbluth gain. Predictions were further improved by Gaussian process (GP) regression surrogates, which in the first instance fully resolved the SRS solution in the steady-state strong damping limit. Subsequently, additional physics was captured using the plasma wave solver LPSE, with scope for the future capture of more complex solvers within the hierarchical machine learning framework. An additional GP surrogate was used to replace the costly resonant-frequency search part of the algorithm. The framework was implemented in the 1D ICF code Freyja, allowing investigation of the effects of SRS on fusion performance for a shock ignition case. It also allowed efficient training of the GP surrogates by keeping the predicted error below thresholds across a full simulation. SRS was found to have a detrimental effect on fusion performance, but this was mitigated to some extent when modelling the secondary effects of the hot-electron populations that are created. These hot electrons were found to strengthen the ignition shock, but an uncertainty quantification study still showed a much reduced probability of ignition compared to the case without SRS. In addition to predictive modelling, a separate probabilistic study was carried out to demonstrate the approach of calibrating modelling coefficients for laser-plasma interactions and energy transport against experimental data. The results of this study were used both to set certain modelling parameters for later simulations and to highlight the need for truly predictive modelling.
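    The surrogate idea can be sketched with a generic Gaussian-process regressor that falls back to an expensive calculation whenever its predicted error exceeds a threshold, retraining on the new sample. The function `expensive_srs_gain`, its inputs and the threshold below are hypothetical placeholders; the actual framework couples its surrogates to LPSE and the 1D ICF code Freyja.

```python
# Minimal sketch of a GP surrogate with an uncertainty-triggered fallback and retrain.
# `expensive_srs_gain` is a hypothetical placeholder for a costly SRS calculation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_srs_gain(x: np.ndarray) -> float:
    """Placeholder for an expensive calculation (e.g. a resonance search or wave solve)."""
    return float(np.exp(-np.sum((x - 0.5) ** 2)))   # toy stand-in

# Seed the surrogate with a small initial design (e.g. intensity, density scale length).
X = np.random.default_rng(0).uniform(size=(20, 2))
y = np.array([expensive_srs_gain(x) for x in X])
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True).fit(X, y)

def predict(x: np.ndarray, std_threshold: float = 0.05) -> float:
    """Use the surrogate when its predicted error is small; otherwise call the solver and retrain."""
    global X, y, gp
    mean, std = gp.predict(x.reshape(1, -1), return_std=True)
    if std[0] < std_threshold:
        return float(mean[0])
    value = expensive_srs_gain(x)                    # fall back to the expensive model
    X, y = np.vstack([X, x]), np.append(y, value)    # grow the training set
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True).fit(X, y)
    return value
```

    Keeping the predicted standard deviation below a threshold before trusting the surrogate is one simple way to realise the "predicted error below thresholds across a full simulation" behaviour the abstract describes.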

    Suitability of different RANS models in the description of turbulent forced convection flows: application to air curtains

    The main motivation of this thesis is the analysis of turbulent flows. Turbulence plays an important role in engineering applications, since most flows in industrial equipment and their surroundings are in the turbulent regime. The thesis has a double purpose and is divided into two main parts: the first is focused on the basic and fundamental analysis of turbulence models, and in the second the know-how acquired in the first part is applied to the study of air curtains.

    Regarding the first part, the principal difficulty of computing and modelling turbulent flows resides in the dominance of non-linear effects and the continuous, wide spectrum of time and length scales. Therefore, turbulence modelling employing statistical techniques is still necessary for high Reynolds numbers or complex geometries. In general, this modelling is based on time averaging of the Navier-Stokes equations (an approach known as Reynolds-Averaged Navier-Stokes Simulations, RANS). As a consequence of the averaging, new unknowns, the so-called Reynolds stresses, arise. Different approaches to evaluate them are: i) Differential Reynolds Stress Models (DRSM), ii) Explicit Algebraic Reynolds Stress Models (EARSM), and iii) Eddy Viscosity Models (EVM).

    Although EVM models assuming a linear relation between the turbulent stresses and the mean rate-of-strain tensor are extensively used, they present various limitations. In recent years, with ever-increasing computational capacity, new proposals to overcome many of these deficiencies have started to find their way: algebraic or non-linear relations are used to determine the Reynolds stress tensor without introducing any additional differential equation.

    Therefore, the first part of this thesis is devoted to the study of several EARSM and EVM models involving linear and higher-order terms in the constitutive relation used to evaluate the turbulent stresses. The accuracy and numerical performance of these models are tested in different flow configurations such as the plane channel, the backward-facing step, and both plane and round impinging jets. Special attention is paid to the verification of the code and numerical solutions, and to the validation of the mathematical models used. In the plane impinging jet configuration, the improvements obtained with higher-order terms in the constitutive relation are limited, whereas in the rest of the studied cases these non-linear models show reasonably good behaviour.

    Moreover, taking into account the convergence, robustness and predictive realism observed in the analysis of these benchmark flows, some of the models are selected for the study of air curtains and their interaction with the environment in which they are placed. Air curtains are generally one or a set of vertical or horizontal plane jets used as an ambient separator between adjacent areas with different conditions. The jet acts as a screen against energy losses/gains and moisture or mass exchanges between the areas.

    As indicated above, the main purpose of the second part of this thesis is to characterize actual air curtains in detail using both experimental and numerical approaches. Semi-empirical models to design air curtains are presented. Then, an experimental set-up used to study the air curtain discharge and the downstream jet is described, and experimental measurements of velocity and temperature are shown. As a result of the experiments carried out, an improved air curtain with a new design of the discharge nozzle is obtained.

    Furthermore, the air curtain experiments are numerically reproduced and the predictions validated against the experimental data acquired; good agreement between numerical and experimental results is observed. Finally, systematic parametric studies of air curtains in heating and refrigeration applications are carried out. Global energy balances are considered together with global parameters selected to evaluate air curtain performance. The discharge velocity, discharge angle and turbulence intensity of the jet are found to be the most sensitive parameters; inadequate values of these variables can produce undesirable effects and contribute to increased energy gains/losses.
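    For context, the linear eddy-viscosity (Boussinesq) constitutive relation that the abstract contrasts with the non-linear models can be written as below; the notation is generic, and the quadratic terms shown are only one common example of the higher-order terms used in EARSM-type closures.

```latex
% Linear eddy-viscosity (Boussinesq) relation:
\[
  \overline{u_i' u_j'} \;=\; \tfrac{2}{3}\,k\,\delta_{ij} \;-\; 2\,\nu_t\, S_{ij},
  \qquad
  S_{ij} = \tfrac{1}{2}\!\left(\frac{\partial \bar u_i}{\partial x_j}
                             + \frac{\partial \bar u_j}{\partial x_i}\right),
  \qquad
  \nu_t = C_\mu \frac{k^2}{\varepsilon}.
\]
% EARSM / non-linear models instead expand the anisotropy tensor in higher-order
% combinations of the (suitably normalized) strain and rotation tensors, e.g.
\[
  a_{ij} \equiv \frac{\overline{u_i' u_j'}}{k} - \tfrac{2}{3}\delta_{ij}
  \;=\; \beta_1 S_{ij}
  \;+\; \beta_2 \bigl(S_{ik}\Omega_{kj} - \Omega_{ik}S_{kj}\bigr)
  \;+\; \beta_3 \bigl(S_{ik}S_{kj} - \tfrac{1}{3}S_{kl}S_{kl}\,\delta_{ij}\bigr)
  \;+\; \dots
\]
```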

    Failure detection methods for pipeline networks: from acoustic sensing to cyber-physical systems

    Pipeline networks have been widely utilised to transport water, natural gas, oil and waste materials efficiently and safely over varying distances with minimal human intervention. In order to optimise the spatial use of the pipeline infrastructure, pipelines are either buried underground or located in submarine environments. Due to the continuous expansion of pipeline networks into locations that are inaccessible to maintenance personnel, research efforts have been ongoing to introduce and develop reliable detection methods for pipeline failures such as blockages, leakages, cracks, corrosion and weld defects. In this paper, a taxonomy of existing pipeline failure detection techniques and technologies was created to comparatively analyse their respective advantages, drawbacks and limitations. This effort highlights various unaddressed research challenges that remain across the wide array of state-of-the-art detection methods employed in different pipeline domains. These challenges include extending the lifetime of a pipeline network to reduce maintenance costs, and preventing disruptive pipeline failures to minimise downtime. Our taxonomy of pipeline failure detection methods is also presented in the form of a look-up table to illustrate the suitability, key aspects and data or signal processing techniques of each individual method. We have also quantitatively evaluated the industrial relevance and practicality of each method in the taxonomy in terms of its deployability, generality and computational cost. The outcome of this evaluation will contribute to our future work on the utilisation of sensor fusion and data-centric frameworks to develop efficient, accurate and reliable failure detection solutions.