18 research outputs found

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as 2D images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are therefore relevant to improve both the visual evaluation by the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. The design of the network architecture and the loss function improves the accuracy of the prediction of the reconstructed lines. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
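
    The denoising building block described above is a low-rank approximation obtained by truncating the Singular Value Decomposition, with the truncation thresholds learned by a deep network. The sketch below is a minimal, hedged illustration of plain SVD truncation with a hand-picked rank `k` (a hypothetical stand-in for the learned threshold), not the thesis's actual model.

```python
# Minimal sketch of SVD-based low-rank denoising of a grayscale image stored as
# a 2D NumPy array. The rank `k` is hand-picked here for illustration; in the
# thesis the optimal thresholds are learned and predicted by a neural network.
import numpy as np

def svd_denoise(image: np.ndarray, k: int) -> np.ndarray:
    """Reconstruct `image` from its k largest singular values."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    s[k:] = 0.0                 # discard small, noise-dominated singular values
    return (U * s) @ Vt         # low-rank reconstruction

# Usage on a synthetic noisy image.
rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 256)), np.cos(np.linspace(0, 3, 256)))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = svd_denoise(noisy, k=20)
```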

    Analysis and Generation of Quality Polytopal Meshes with Applications to the Virtual Element Method

    This thesis explores the concept of the quality of a mesh, the latter being intended as the discretization of a two- or three-dimensional domain. The topic is interdisciplinary in nature, as meshes are massively used in several fields by both the geometry processing and the numerical analysis communities. The goal is to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results in a target range of accuracy. In other words, a good quality mesh that is also cheap to handle, overcoming the typical trade-off between quality and computational cost. To reach this goal, we first need to answer the question: "How, and how much, does the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depend on the particular mesh adopted to model the problem? And which geometrical features of the mesh most influence the result?" We present a comparative study of the different mesh types, mesh generation techniques, and mesh quality measures currently available in the literature related to both engineering and computer graphics applications. This analysis leads to a precise definition of the notion of quality for a mesh, in the particular context of numerical simulations of partial differential equations with the virtual element method, and to the consequent construction of criteria to determine and optimize the quality of a given mesh. Our main contribution is a new mesh quality indicator for polytopal meshes, able to predict the performance of the virtual element method over a particular mesh before running the simulation. Strictly related to this, we also define a quality agglomeration algorithm that optimizes the quality of a mesh by wisely agglomerating groups of neighboring elements. The accuracy and the reliability of both tools are thoroughly verified in a series of tests in different scenarios.
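
    Quality indicators of this kind are typically built from geometric measures of the individual polytopal elements. As a hedged illustration of one such ingredient, the sketch below computes the isoperimetric quotient of a 2D element; it is purely illustrative and is not the VEM quality indicator proposed in the thesis.

```python
# Sketch of one geometric quantity often used when scoring polygonal elements:
# the isoperimetric quotient 4*pi*A / P^2 (1 for a circle, smaller for
# stretched or spiky cells). Illustrative only.
import numpy as np

def isoperimetric_quality(vertices: np.ndarray) -> float:
    """vertices: (n, 2) array of polygon vertices in counter-clockwise order."""
    x, y = vertices[:, 0], vertices[:, 1]
    # Shoelace formula for the polygon area.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter as the sum of edge lengths.
    perim = np.sum(np.linalg.norm(np.roll(vertices, -1, axis=0) - vertices, axis=1))
    return 4.0 * np.pi * area / perim**2

# A unit square scores ~0.785; a thin 10:1 rectangle scores ~0.26.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
print(isoperimetric_quality(square))
```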

    The Optimization of Geotechnical Site Investigations for Pile Design in Multiple Layer Soil Profiles Using a Risk-Based Approach

    The testing of subsurface material properties, i.e. a geotechnical site investigation, is a crucial part of projects that are located on or within the ground. The process consists of testing samples at a variety of locations in order to model the performance of an engineering system for design purposes. Should these models be inaccurate or unconservative due to an improper investigation, there is considerable risk of consequences such as structural collapse, construction delays, litigation, and over-design. However, despite these risks, there is relatively little quantitative guidance or research on specifying an explicit, optimal investigation for a given foundation and soil profile. This is detrimental, as testing scope is often minimised in an attempt to reduce expenditure, thereby increasing the aforementioned risks. This research recommends optimal site investigations for multi-storey buildings supported by pile foundations, for a variety of structural configurations and soil profiles. The recommendations cover the optimal test type, the number of tests, the testing locations, and the interpretation of test data. The framework consists of a risk-based approach, where an investigation is considered optimal if it results in the lowest total project cost, incorporating both the cost of testing and the cost associated with any expected negative consequences. The analysis is statistical in nature, employing Monte Carlo simulation and randomly generated virtual soils through random field theory, as well as finite element analysis for pile assessment. A number of innovations have been developed to support the novel nature of the work. For example, a new method of producing randomly generated multiple-layer soils has been devised. This work is the first instance of site investigations being optimised in multiple-layer soils, which are considerably more complex than the single-layer soils examined previously. Furthermore, both the framework and the numerical tools have themselves been extensively optimised for speed. Efficiency innovations include modifying the analysis to produce re-usable pile settlement curves, as opposed to designing and assessing the piles directly. This both reduces the amount of analysis required and allows for flexible post-processing for different conditions. Other optimizations include the elimination of computationally expensive finite element analysis from within the Monte Carlo simulations, and additional minor improvements. Practicing engineers can optimise their site investigations through three outcomes of this research. Firstly, optimal site investigation scopes are known for the numerous specific cases examined throughout this document, and from the resulting inferred recommendations. Secondly, a rule-of-thumb guideline has been produced, suggesting the optimal number of tests for buildings of all sizes in a single soil case of intermediate variability. Thirdly, a highly efficient and versatile software tool, SIOPS, has been produced, allowing engineers to run a simplified version of the analysis for custom soils and buildings. The tool can do almost all the analysis shown throughout the thesis, including the use of a genetic algorithm to optimise testing locations, yet it is approximately 10 million times faster than analysis using the original framework, running on a single-core computer within minutes. Thesis (Ph.D.) -- University of Adelaide, School of Civil, Environmental and Mining Engineering, 202
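
    The statistical machinery described above combines Monte Carlo simulation with spatially correlated random fields of soil properties. The following is a hedged sketch of that idea for a single-layer, one-dimensional lognormal field with an exponential correlation model; the parameter values and the correlation model are illustrative assumptions, not the thesis's multi-layer formulation or its finite element pile analysis.

```python
# Hedged sketch: Monte Carlo sampling of a correlated lognormal soil-strength
# profile via a covariance/Cholesky random-field generator. Illustrative only.
import numpy as np

def lognormal_field(z, mean, cov, corr_length, rng):
    """One realisation of a 1D lognormal random field at depths `z`."""
    sigma_ln = np.sqrt(np.log(1.0 + cov**2))          # lognormal parameters
    mu_ln = np.log(mean) - 0.5 * sigma_ln**2
    # Exponential (Markov) correlation between depths.
    C = sigma_ln**2 * np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / corr_length)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))
    return np.exp(mu_ln + L @ rng.standard_normal(len(z)))

rng = np.random.default_rng(1)
depths = np.linspace(0.0, 20.0, 41)                   # depths in metres
# Monte Carlo: distribution of the strength averaged over a 10 m pile length.
samples = [lognormal_field(depths, 50e3, 0.5, 4.0, rng)[:21].mean()
           for _ in range(1000)]
print(np.mean(samples), np.percentile(samples, 5))
```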

    MPAS-Albany Land Ice (MALI): a variable-resolution ice sheet model for Earth system modeling using Voronoi grids

    We introduce MPAS-Albany Land Ice (MALI) v6.0, a new variable-resolution land ice model that uses unstructured Voronoi grids on a plane or sphere. MALI is built using the Model for Prediction Across Scales (MPAS) framework for developing variable-resolution Earth system model components and the Albany multi-physics code base for the solution of coupled systems of partial differential equations, which itself makes use of Trilinos solver libraries. MALI includes a three-dimensional first-order momentum balance solver (Blatter–Pattyn) by linking to the Albany-LI ice sheet velocity solver and an explicit shallow ice velocity solver. The evolution of ice geometry and tracers is handled through an explicit first-order horizontal advection scheme with vertical remapping. The evolution of ice temperature is treated using operator splitting of vertical diffusion and horizontal advection and can be configured to use either a temperature or enthalpy formulation. MALI includes a mass-conserving subglacial hydrology model that supports distributed and/or channelized drainage and can optionally be coupled to ice dynamics. Options for calving include eigencalving, which assumes that the calving rate is proportional to extensional strain rates. MALI is evaluated against commonly used exact solutions and community benchmark experiments and shows the expected accuracy. Results for the MISMIP3d benchmark experiments with MALI's Blatter–Pattyn solver fall between published results from Stokes and L1L2 models, as expected. We use the model to simulate a semi-realistic Antarctic ice sheet problem following the initMIP protocol and using 2 km resolution in marine ice sheet regions. MALI is the glacier component of the Energy Exascale Earth System Model (E3SM) version 1, and we describe current and planned coupling to other E3SM components.
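
    The operator-splitting treatment of temperature mentioned above applies horizontal advection and vertical diffusion as separate sub-steps within each time step. The sketch below illustrates that general idea on a simple Cartesian grid with first-order upwind advection and explicit diffusion; the grid, schemes, and parameters are illustrative assumptions, not MALI's actual discretisation on Voronoi meshes.

```python
# Minimal operator-splitting sketch: advect a tracer/temperature field
# horizontally, then diffuse it vertically, within one time step.
import numpy as np

def step(T, u, kappa, dx, dz, dt):
    """One split step on T[x, z]: horizontal advection, then vertical diffusion."""
    # Sub-step 1: first-order upwind advection with constant velocity u > 0.
    T = T - u * dt / dx * (T - np.roll(T, 1, axis=0))
    # Sub-step 2: explicit vertical diffusion with insulated top/bottom boundaries.
    flux = np.zeros((T.shape[0], T.shape[1] + 1))
    flux[:, 1:-1] = -kappa * (T[:, 1:] - T[:, :-1]) / dz
    return T - dt / dz * (flux[:, 1:] - flux[:, :-1])

T = np.zeros((64, 10))
T[20:30, :] = 1.0                                   # initial warm block
for _ in range(100):
    T = step(T, u=1.0, kappa=1e-3, dx=1.0, dz=0.5, dt=0.4)
```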

    A Hierarchical Approach for Regular Centroidal Voronoi Tessellations

    In this paper we consider Centroidal Voronoi Tessellations (CVTs) and study their regularity. CVTs are geometric structures that enable regular tessellations of geometric objects and are widely used in shape modeling and analysis. While several efficient iterative schemes, with defined local convergence properties, have been proposed to compute CVTs, little attention has been paid to the evaluation of the resulting cell decompositions. In this paper, we propose a regularity criterion that allows us to evaluate and compare CVTs independently of their sizes and of their cell numbers, thus providing a common basis for comparison. It builds on earlier theoretical work showing that the second moments of the cells converge to a lower bound when optimising CVTs. In addition to proposing a regularity criterion, this paper also considers computational strategies to determine regular CVTs. We introduce a hierarchical framework that propagates regularity over decomposition levels and hence provides CVTs with provably better regularity than existing methods. We illustrate these principles with a wide range of experiments on synthetic and real models.
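
    As a hedged illustration of the quantities involved, the sketch below runs a discrete (point-sampled) Lloyd iteration on the unit square and reports a normalised per-cell second moment as a regularity proxy. The paper defines its criterion on exact Voronoi cells; the sampling-based approximation here is for illustration only.

```python
# Discrete Lloyd iteration plus a scale-invariant second-moment measure.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.random((20000, 2))          # dense sampling of the unit square
seeds = rng.random((32, 2))               # initial CVT generators

for _ in range(50):                       # Lloyd iterations
    # Assign each sample to its nearest seed (discrete Voronoi cells).
    d2 = ((samples[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    # Move each seed to the centroid of its cell.
    seeds = np.array([samples[labels == k].mean(axis=0) for k in range(len(seeds))])

moments = []
for k in range(len(seeds)):
    cell = samples[labels == k]
    area = len(cell) / len(samples)                         # Monte Carlo cell area
    msd = ((cell - cell.mean(axis=0)) ** 2).sum(axis=-1).mean()
    moments.append(area * msd / area**2)                    # dimensionless 2nd moment
print(np.mean(moments))                   # lower values indicate a more regular CVT
```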

    Multiple particle tracking in PEPT using Voronoi tessellations

    An algorithm is presented which makes use of three-dimensional Voronoi tessellations to track up to 20 tracers using a PET scanner. The lines of response generated by the PET scanner are discretized into sets of equidistant points, and these are used as the input seeds to the Voronoi tessellation. For each line of response, the point with the smallest Voronoi region is located; this point is assumed to be the origin of the corresponding line of response. Once these origin points have been determined, any outliers are removed, and the remaining points are clustered using the DBSCAN algorithm. The centroid of each cluster is classified as a tracer location. Once the tracer locations are determined for each time frame in the experimental data set, a custom multiple target tracking algorithm is used to associate identical tracers from frame to frame. Since there are no physical properties to distinguish the tracers from one another, the tracking algorithm uses velocity and position to extrapolate the locations of existing tracers and match the next frame's tracers to the trajectories. A series of experiments was conducted in order to test the robustness, accuracy, and computational performance of the algorithm. A measure of robustness is the chance of track loss, which occurs when the algorithm fails to match a tracer location with its trajectory, and the track is terminated. The chance of track loss increases with the number of tracers, the acceleration of the tracers, the time interval between successive frames, and the proximity of tracers to each other. When two tracers collide, the two tracks merge for a short period of time before separating and becoming distinguishable again. Track loss also occurs when a tracer leaves the field of view of the scanner; on return it is treated as a new object. The location accuracy of the algorithm was found to be slightly affected by tracer velocity, but is much more dependent on the distance between consecutive points on a line of response and the number of lines of response used per time frame. A single tracer was located to within 1.26 mm, compared to the widely accepted Birmingham algorithm, which located the same tracer to within 0.92 mm. Precisions of between 1.5 and 2.0 mm were easily achieved for multiple tracers. The memory usage and processing time of the algorithm depend on the number of tracers used in the experiment. The processing time per frame for 20 tracers was about 15 s, and the memory usage was 400 MB. Because of the high processing times, the algorithm as it stands is not feasible for practical use. However, the location phase of the algorithm is massively parallel, so the code can be adapted to significantly increase its efficiency.
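
    The location step described above can be sketched with standard scientific Python tools: discretise each line of response, take the point with the smallest bounded Voronoi cell on each line as its assumed origin, and cluster the origins with DBSCAN. The function name, array shapes, and parameter values below are illustrative assumptions, and no outlier-removal or frame-to-frame tracking step is shown.

```python
# Hedged sketch of Voronoi-based tracer location for PEPT lines of response.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull
from sklearn.cluster import DBSCAN

def locate_tracers(lor_ends, n_points=50, eps=5.0, min_samples=5):
    """lor_ends: (n_lors, 2, 3) array of LOR end points in mm."""
    t = np.linspace(0.0, 1.0, n_points)
    # Discretise every LOR into equidistant points: shape (n_lors, n_points, 3).
    pts = (lor_ends[:, 0, None, :]
           + t[None, :, None] * (lor_ends[:, 1, None, :] - lor_ends[:, 0, None, :]))
    vor = Voronoi(pts.reshape(-1, 3))
    # Volume of each point's Voronoi cell (infinite for unbounded cells).
    vols = np.full(len(vor.points), np.inf)
    for i, reg in enumerate(vor.point_region):
        verts = vor.regions[reg]
        if -1 not in verts and len(verts) > 3:
            vols[i] = ConvexHull(vor.vertices[verts]).volume
    # Per LOR, keep the point whose cell is smallest: assumed origin of the LOR.
    vols = vols.reshape(len(lor_ends), n_points)
    origins = pts[np.arange(len(lor_ends)), vols.argmin(axis=1)]
    # Cluster origins; each cluster centroid is a tracer location estimate.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(origins)
    return np.array([origins[labels == k].mean(axis=0)
                     for k in set(labels) if k != -1])
```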

    Locally optimal Delaunay-refinement and optimisation-based mesh generation

    The field of mesh generation concerns the development of efficient algorithmic techniques to construct high-quality tessellations of complex geometrical objects. In this thesis, I investigate the problem of unstructured simplicial mesh generation for problems in two- and three-dimensional spaces, in which meshes consist of collections of triangular and tetrahedral elements. I focus on the development of efficient algorithms and computer programs to produce high-quality meshes for planar, surface and volumetric objects of arbitrary complexity. I develop and implement a number of new algorithms for mesh construction based on the Frontal-Delaunay paradigm - a hybridisation of conventional Delaunay-refinement and advancing-front techniques. I show that the proposed algorithms are a significant improvement on existing approaches, typically outperforming the Delaunay-refinement technique in terms of both element shape- and size-quality, while offering significantly improved theoretical robustness compared to advancing-front techniques. I verify experimentally that the proposed methods achieve the same element shape- and size-guarantees that are typically associated with conventional Delaunay-refinement techniques. In addition to mesh construction, methods for mesh improvement are also investigated. I develop and implement a family of techniques designed to improve the element shape quality of existing simplicial meshes, using a combination of optimisation-based vertex smoothing, local topological transformation and vertex insertion techniques. These operations are interleaved according to a new priority-based schedule, and I show that the resulting algorithms are competitive with existing state-of-the-art approaches in terms of mesh quality, while offering significant improvements in computational efficiency. Optimised C++ implementations for the proposed mesh generation and mesh optimisation algorithms are provided in the JIGSAW and JITTERBUG software libraries.
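
    The element shape guarantees referred to above are usually stated in terms of the radius-edge ratio, the quantity that classical Delaunay-refinement (e.g. Ruppert-style) algorithms drive below a threshold by inserting circumcentres of poor triangles. The sketch below computes this measure for a 2D triangle; it is a generic illustration, not code from the JIGSAW or JITTERBUG libraries.

```python
# Radius-edge ratio: circumradius divided by shortest edge length.
# Lower is better; the minimum is 1/sqrt(3) for an equilateral triangle.
import numpy as np

def radius_edge_ratio(a, b, c):
    a, b, c = map(np.asarray, (a, b, c))
    la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(c - a), np.linalg.norm(a - b)
    # Triangle area from the 2D cross product of two edge vectors.
    area = 0.5 * abs((b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0])
    R = la * lb * lc / (4.0 * area)          # circumradius
    return R / min(la, lb, lc)

print(radius_edge_ratio([0, 0], [1, 0], [0.5, np.sqrt(3) / 2]))   # equilateral: ~0.577
print(radius_edge_ratio([0, 0], [1, 0], [0.5, 0.05]))             # sliver: large ratio
```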

    Towards convection-resolving, global atmospheric simulations with the Model for Prediction Across Scales (MPAS) v3.1: an extreme scaling experiment

    The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. This concept allows one to include the feedback of regional land use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to the technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss the necessary modifications of the model code to improve its parallel performance, both in general and specific to the HPC environment. We confirm good scaling (70% parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
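
    The parallel-efficiency figure quoted above is the standard strong-scaling measure E(p) = T(p_ref) * p_ref / (T(p) * p), computed from timed runs at increasing core counts. The sketch below shows that calculation on made-up timing numbers; it does not reproduce any MPAS results.

```python
# Strong-scaling parallel efficiency from (core count, runtime) pairs.
def parallel_efficiency(cores, times, ref=0):
    base = cores[ref] * times[ref]
    return [base / (p * t) for p, t in zip(cores, times)]

cores = [1024, 2048, 4096, 8192]           # hypothetical core counts
times = [1000.0, 520.0, 280.0, 160.0]      # hypothetical seconds per model day
print(parallel_efficiency(cores, times))   # [1.0, 0.96, 0.89, 0.78]
```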