
    Surveying and Three-Dimensional Modeling for Preservation and Structural Analysis of Cultural Heritage

    Dense point clouds can support three important steps of structural analysis in the field of cultural heritage, regardless of the instrument used for data acquisition. Firstly, they allow the geometric part of a finite element (FE) model to be derived automatically or semi-automatically; user input is mainly required to complete invisible parts and boundaries of the structure and to assign meaningful approximate physical parameters. Secondly, an FE model obtained from point clouds can be used to estimate better and more precise parameters for the structural analysis, i.e., to train the FE model. Finally, defining a correct Level of Detail for the three-dimensional model derived from the initial point cloud makes it possible to establish the limit beyond which the structural analysis is compromised, or at least less precise. In this research, this is demonstrated on three case studies of buildings, consisting mainly of masonry, measured through terrestrial laser scanning and photogrammetric acquisitions. This is not a typical study for geomatics analysis, but its challenges allow the benefits and limitations to be studied. The results and the proposed approaches could represent a step towards a multidisciplinary approach in which geomatics plays a critical role in monitoring and civil engineering. Furthermore, the geometrical reconstruction enables different analyses and comparisons in order to evaluate how accurate the numerical model is. In fact, the discrepancies between the different results show how important details can be lost in a simplified geometric model, causing, for example, changes in the mass and volume of the structure.
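
    As a toy illustration of that closing point, the sketch below (plain NumPy; the cube geometry, the 2% shrink standing in for lost detail, and the density value are our own assumptions, not the paper's case studies) computes the volume of a closed triangle mesh via the divergence theorem and shows how a simplified model shifts the estimated volume, and hence the mass, entering an FE model.

        import numpy as np

        def mesh_volume(vertices, faces):
            # Signed volume of a closed triangle mesh: sum over faces of the
            # signed volume of the tetrahedron (origin, v0, v1, v2).
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            return np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0

        # Unit cube with outward-oriented triangles (the "detailed" model).
        V = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                      [0,0,1],[1,0,1],[1,1,1],[0,1,1]], dtype=float)
        F = np.array([[0,2,1],[0,3,2],[4,5,6],[4,6,7],   # bottom, top
                      [0,1,5],[0,5,4],[2,3,7],[2,7,6],   # front, back
                      [0,4,7],[0,7,3],[1,2,6],[1,6,5]])  # left, right

        # Hypothetical simplification: shrink by 2% about the centroid,
        # mimicking small recesses and mouldings lost at a coarse LoD.
        c = V.mean(axis=0)
        V_simple = c + 0.98 * (V - c)

        rho = 1800.0  # assumed masonry density, kg/m^3
        for name, verts in [("detailed", V), ("simplified", V_simple)]:
            vol = mesh_volume(verts, F)
            print(f"{name}: volume = {vol:.4f} m^3, mass = {rho * vol:.1f} kg")
        # The roughly 6% volume gap propagates directly into the model's mass.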

    The LifeV library: engineering mathematics beyond the proof of concept

    LifeV is a library for the finite element (FE) solution of partial differential equations in one, two, and three dimensions. It is written in C++ and designed to run on diverse parallel architectures, including cloud and high-performance computing facilities. In spite of its academic research nature, meaning a library for the development and testing of new methods, one distinguishing feature of LifeV is its use on real-world problems; it is intended to provide a tool for many engineering applications. It has actually been used in computational hemodynamics, including cardiac mechanics and fluid-structure interaction problems, in porous media, and in ice-sheet dynamics, for both forward and inverse problems. In this paper we give a short overview of the features of LifeV and of its coding paradigms on simple problems. The main focus is on the parallel environment, which is mainly driven by domain decomposition methods and based on external libraries such as MPI, the Trilinos project, HDF5, and ParMetis. Dedicated to the memory of Fausto Saleri.
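
    LifeV itself is a C++ library; as a language-neutral sketch of the kind of FE solve such a library automates, the following minimal example (textbook piecewise-linear assembly in NumPy, not LifeV's API) solves -u'' = f on (0,1) with homogeneous Dirichlet conditions.

        import numpy as np

        # Minimal 1D Poisson FE solve: -u'' = f on (0,1), u(0) = u(1) = 0,
        # with piecewise-linear elements and dense algebra for brevity.
        n = 64
        x = np.linspace(0.0, 1.0, n + 1)
        h = np.diff(x)
        f = lambda t: np.pi**2 * np.sin(np.pi * t)  # manufactured source term

        A = np.zeros((n + 1, n + 1))
        b = np.zeros(n + 1)
        for e in range(n):  # element-by-element assembly
            ke = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # local stiffness
            fe = 0.5 * h[e] * f(0.5 * (x[e] + x[e + 1])) * np.ones(2)  # midpoint-rule load
            A[e:e + 2, e:e + 2] += ke
            b[e:e + 2] += fe

        for d in (0, n):  # homogeneous Dirichlet boundary conditions
            A[d, :] = 0.0
            A[d, d] = 1.0
            b[d] = 0.0

        u = np.linalg.solve(A, b)  # exact solution is sin(pi * x)
        print("max nodal error:", np.abs(u - np.sin(np.pi * x)).max())

    A library such as LifeV replaces the dense matrix and direct solve above with distributed sparse storage and parallel preconditioned solvers, for instance from the Trilinos project, coordinated through domain decomposition.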

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound (US) acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allow the physician to monitor the evolution of the patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are therefore relevant to improve both the physician's visual evaluation and the performance and accuracy of processing methods such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts.

    In this context, we present a novel framework for the real-time denoising of 2D US images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the best denoising algorithms in terms of noise removal, preservation of anatomical features, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, the anatomical district, and the noise intensity.

    We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method, where a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image; the design of the network architecture and of the loss function improves the accuracy of the predicted reconstructed lines. In the context of signal approximation, we introduce a kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to 2D and 3D US images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost.

    For both denoising and super-resolution, we evaluate compliance with the real-time requirements of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
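
    A minimal sketch of the low-rank idea behind the proposed denoising block (plain NumPy; the synthetic image, the multiplicative noise model, and the hand-picked threshold tau are our assumptions, whereas the thesis learns the optimal SVD thresholds with a deep network):

        import numpy as np

        # Low-rank denoising by hard-thresholding singular values. The thesis
        # learns the optimal threshold; here tau is fixed by hand, and the
        # image and speckle-like noise model are synthetic assumptions.
        rng = np.random.default_rng(0)
        u = np.linspace(0.0, 1.0, 128)
        clean = np.outer(np.sin(2 * np.pi * u), np.cos(2 * np.pi * u))
        noisy = clean * (1.0 + 0.3 * rng.standard_normal(clean.shape))

        def svd_denoise(img, tau):
            U, s, Vt = np.linalg.svd(img, full_matrices=False)
            s[s < tau] = 0.0  # drop singular values below the threshold
            return (U * s) @ Vt

        den = svd_denoise(noisy, tau=10.0)  # above noise floor, below signal
        mse = lambda a, b: float(np.mean((a - b) ** 2))
        print(f"MSE noisy: {mse(noisy, clean):.5f}, denoised: {mse(den, clean):.5f}")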

    Computational Methods in Science and Engineering: Proceedings of the Workshop SimLabs@KIT, November 29 - 30, 2010, Karlsruhe, Germany

    In this proceedings volume we provide a compilation of article contributions equally covering applications from different research fields, ranging from capacity up to capability computing. Besides classical computing aspects such as parallelization, the focus of these proceedings is on multi-scale approaches and methods for tackling algorithm and data complexity. Practical aspects regarding the use of the HPC infrastructure and the tools and software available at the SCC are also presented.

    Mobile graphics: SIGGRAPH Asia 2017 course

    Peer reviewed. Postprint (published version).

    Software for Exascale Computing - SPPEXA 2016-2019

    This open access book summarizes the research done and the results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer's series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA's first funding phase, and provides an overview of SPPEXA's contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    Finite-Volume Filtering in Large-Eddy Simulations Using a Minimum-Dissipation Model

    Large-eddy simulation (LES) seeks to predict the dynamics of the larger eddies in turbulent flow by applying a spatial filter to the Navier-Stokes equations and by modeling the unclosed terms resulting from the convective non-linearity. The (explicit) calculation of all small-scale turbulence can thus be avoided. This paper is about LES models that truncate the small scales of motion, for which numerical resolution is not available, by making sure that they do not get energy from the larger, resolved eddies. To identify the resolved eddies, we apply Schumann's filter to the (incompressible) Navier-Stokes equations; that is, the turbulent velocity field is filtered as in a finite-volume method. The spatial discretization effectively acts as a filter; hence we define the resolved eddies for a finite-volume discretization. The interpolation rule for approximating the convective flux through the faces of the finite volumes determines the smallest resolved length scale δ. For a usual interpolation rule, the resolved length δ is twice as large as the grid spacing h; thus, the resolved scales are defined with the help of a box filter having diameter δ = 2h. The closure model is to be chosen such that the solution of the resulting LES equations is confined to length scales of at least the size δ. This condition is worked out with the help of Poincaré's inequality to determine the amount of dissipation that the closure model must generate in order to counterbalance the nonlinear production of too-small, unresolved scales. The procedure is applied to an eddy-viscosity model on a uniform mesh.
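
    A sketch of the Poincaré step described above (our paraphrase, not the paper's exact derivation; u' denotes the subfilter residual u - ū on one filter box Ω_δ, and the constant (δ/π)² is the Payne-Weinberger bound for a convex domain of diameter δ):

        \[ \| u' \|_{\Omega_\delta}^2 \;\le\; \Bigl( \frac{\delta}{\pi} \Bigr)^2 \| \nabla u' \|_{\Omega_\delta}^2 ,
           \qquad
           \frac{d}{dt} \, \tfrac{1}{2} \| u' \|^2 \;\le\; P \;-\; (\nu + \nu_e) \, \| \nabla u' \|^2 . \]

    Here P stands for the nonlinear production of subfilter energy by the resolved motion. The Poincaré bound turns the dissipation term into at least (ν + ν_e)(π/δ)² ‖u'‖², so scales smaller than δ decay whenever the eddy viscosity ν_e makes the dissipation counterbalance P; this is the condition the closure model is calibrated to satisfy.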