132,899 research outputs found

    An Interpolated Volume Data Model


    Using Kriging to Interpolate Spatially Distributed Volumetric Medical Data

    Routine cases in diagnostic radiology require the interpolation of volumetric medical imaging data sets. Inaccurate renditions of interpolated volumes can lead to the misdiagnosis of a patient's condition. It is therefore essential that interpolated modality space estimates accurately portray patient space. Kriging is investigated in this research to interpolate medical imaging volumes. Kriging requires data to be spatially distributed. Therefore, magnetic resonance imaging (MRI) data is shown to exhibit spatially regionalized characteristics such that it can be modeled using regionalized variables and subsequently be interpolated using kriging. A comprehensive, automated, three-dimensional structural analysis of the MRI data is accomplished to derive a mathematical model of spatial variation about each interpolated point. Kriging uses these models to compute estimates of minimal estimation variance. Estimation accuracy of the kriged, interpolated MRI volume is demonstrated to exceed that achieved using trilinear interpolation if the derived model of spatial variation sufficiently represents the regionalized neighborhoods about each interpolated voxel. Models of spatial variation that assume an ellipsoid extent with orthogonal axes of continuity are demonstrated to insufficiently characterize modality space MRI data. Model accuracy is concluded to be critical to achieve estimation accuracies that exceed those of trilinear interpolation.
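
    A minimal ordinary-kriging sketch in Python illustrates the estimator the abstract describes. The thesis derives anisotropic models of spatial variation from the MRI data itself; the isotropic spherical variogram, its parameter values, and the function names below are illustrative assumptions only.

        import numpy as np

        def spherical_variogram(h, nugget=0.0, sill=1.0, range_=3.0):
            # Illustrative spherical variogram gamma(h); the thesis fits its own
            # regionalized models about each interpolated point.
            h = np.asarray(h, dtype=float)
            g = nugget + (sill - nugget) * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)
            return np.where(h < range_, g, sill)

        def ordinary_kriging(coords, values, query, variogram=spherical_variogram):
            # coords: (n, 3) sample voxel centres; values: (n,) intensities; query: (3,) point.
            n = len(values)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = variogram(d)
            A[n, :n] = A[:n, n] = 1.0                      # unbiasedness constraint
            b = np.zeros(n + 1)
            b[:n] = variogram(np.linalg.norm(coords - query, axis=1))
            b[n] = 1.0
            w = np.linalg.solve(A, b)
            return w[:n] @ values, w @ b                   # estimate, kriging (estimation) variance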

    Enhancement of shock-capturing methods via machine learning

    In recent years, machine learning has been used to create data-driven solutions to problems for which an algorithmic solution is intractable, as well as to fine-tune existing algorithms. This research applies machine learning to the development of an improved finite-volume method for simulating PDEs with discontinuous solutions. Shock-capturing methods make use of nonlinear switching functions that are not guaranteed to be optimal. Because data can be used to learn nonlinear relationships, we train a neural network to improve the results of a fifth-order WENO method. We post-process the outputs of the neural network to guarantee that the method is consistent. The training data consist of the exact mapping between cell averages and interpolated values for a set of integrable functions that represent waveforms we would expect to see while simulating a PDE. We demonstrate our method on linear advection of a discontinuous function, the inviscid Burgers’ equation, and the 1-D Euler equations. For the latter, we examine the Shu–Osher model problem for turbulence–shock wave interactions. We find that our method outperforms WENO in simulations where the numerical solution becomes overly diffused due to numerical viscosity.
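
    The consistency post-processing is not specified in the abstract; one plausible sketch, assuming the network perturbs the WENO5 nonlinear weights, is to clip and renormalize them so the interface value remains a convex combination of the three candidate stencil reconstructions. The stencil coefficients below are the standard WENO5 ones; the adjustment scheme itself is an assumption, not the paper's method.

        import numpy as np

        def consistent_weights(omega_weno, delta_nn):
            # omega_weno: WENO5 nonlinear weights (3,); delta_nn: raw network adjustments (3,).
            # Clip and renormalize so the weights stay non-negative and sum to one.
            w = np.maximum(omega_weno + delta_nn, 0.0)
            return w / max(w.sum(), 1e-12)

        def weno5_interface_value(ubar, weights):
            # ubar: cell averages u_{i-2}..u_{i+2}; standard third-order candidate
            # reconstructions of u at the i+1/2 interface, blended by the weights.
            p0 = (2 * ubar[0] - 7 * ubar[1] + 11 * ubar[2]) / 6.0
            p1 = (   -ubar[1] + 5 * ubar[2] +  2 * ubar[3]) / 6.0
            p2 = (2 * ubar[2] + 5 * ubar[3] -      ubar[4]) / 6.0
            return weights @ np.array([p0, p1, p2])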

    Ice thickness measurements and volume estimates for glaciers in Norway

    Glacier volume and ice thickness distribution are important variables for water resource management in Norway and the assessment of future glacier changes. We present a detailed assessment of thickness distribution and total glacier volume for mainland Norway based on data and modelling. Glacier outlines from a Landsat-derived inventory from 1999 to 2006 covering an area of 2692 ± 81 km² were used as input. We compiled a rich set of ice thickness observations collected over the past 30 years. Altogether, interpolated ice thickness measurements were available for 870 km² (32%) of the current glacier area of Norway, with a total ice volume of 134 ± 23 km³. Results indicate that mean ice thickness is similar for all larger ice caps, and weakly correlates with their total area. Ice thickness data were used to calibrate a physically based distributed model for estimating the ice thickness of unmeasured glaciers. The results were also used to calibrate volume–area scaling relations. The calibrated total volume estimates for all Norwegian glaciers ranged from 257 to 300 km³
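
    The calibrated volume–area scaling relations mentioned above commonly take the power-law form V = c·Aᵞ. A minimal sketch of fitting c and γ to the measured glacier volumes by least squares in log-log space follows; the fitting choice, function names, and units are assumptions for illustration, not the paper's published procedure.

        import numpy as np

        def calibrate_volume_area(areas_km2, volumes_km3):
            # Fit V = c * A**gamma by ordinary least squares on log-transformed data.
            X = np.column_stack([np.ones_like(areas_km2), np.log(areas_km2)])
            coef, *_ = np.linalg.lstsq(X, np.log(volumes_km3), rcond=None)
            return np.exp(coef[0]), coef[1]                # c, gamma

        def scaled_volume(c, gamma, area_km2):
            # Apply the calibrated relation to an unmeasured glacier.
            return c * area_km2 ** gamma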

    Application and Evaluation of a Snowmelt Runoff Model in the Tamor River Basin, Eastern Himalaya Using a Markov Chain Monte Carlo (MCMC) Data Assimilation Approach

    Previous studies have drawn attention to substantial hydrological changes taking place in mountainous watersheds where hydrology is dominated by cryospheric processes. Modelling is an important tool for understanding these changes but is particularly challenging in mountainous terrain owing to the scarcity of ground observations and the uncertainty of model parameters across space and time. This study utilizes a Markov Chain Monte Carlo data assimilation approach to examine and evaluate the performance of a conceptual, degree-day snowmelt runoff model applied in the Tamor River basin in the eastern Nepalese Himalaya. The snowmelt runoff model is calibrated using daily streamflow from 2002 to 2006 with fairly high accuracy (average Nash-Sutcliffe metric of approximately 0.84, annual volume bias < 3%). The Markov Chain Monte Carlo approach constrains the parameters to which the model is most sensitive (e.g. lapse rate and recession coefficient) and maximizes model fit and performance. The average snowmelt contribution to total runoff in the Tamor River basin for the 2002-2006 period is estimated to be 29.7 ± 2.9% (which includes 4.2 ± 0.9% from snowfall that promptly melts), whereas 70.3 ± 2.6% is attributed to contributions from rainfall. On average, the elevation zone in the 4000-5500 m range contributes the most to basin runoff, averaging 56.9 ± 3.6% of all snowmelt input and 28.9 ± 1.1% of all rainfall input to runoff. Model simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall versus snowmelt compared with simulations using observed station precipitation. Model experiments indicate that the hydrograph itself does not constrain estimates of snowmelt versus rainfall contributions to total outflow, and that this partitioning instead derives from the degree-day melting model. Lastly, we demonstrate that the data assimilation approach is useful for quantifying and reducing uncertainty related to model parameters and thus provides uncertainty bounds on snowmelt and rainfall contributions in such mountainous watersheds.
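
    The conceptual model calibrated above is a degree-day (temperature-index) scheme. A minimal sketch of the melt computation for one elevation zone is given below; the parameter values, units, and function name are assumptions, and the full model's recession-based routing and the MCMC calibration of the parameters are omitted.

        import numpy as np

        def degree_day_melt(t_station_c, z_station_m, z_zone_m,
                            lapse_c_per_km=6.5, ddf_mm_per_c_day=5.0, t_crit_c=0.0):
            # Extrapolate station temperature to the zone elevation with a linear
            # lapse rate, then melt in proportion to positive degree-days.
            t_zone = t_station_c - lapse_c_per_km * (z_zone_m - z_station_m) / 1000.0
            return ddf_mm_per_c_day * np.maximum(t_zone - t_crit_c, 0.0)  # mm w.e. per day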

    Evaluating the spatial variability of snowpack properties across a northern Colorado basin

    Knowledge of seasonal mountain snowpack distribution and estimates of its snow water equivalent (SWE) can provide insight for water resources forecasting and earth system process understanding; thus, it is important to improve our ability to describe the spatial variability of SWE at the basin scale. The objectives of this thesis are to: (1) develop a reliable method of estimating SWE from snow depth for the Cache la Poudre basin, and (2) characterize the spatial variability of SWE at the basin scale within the Cache la Poudre basin. A combination of field and Natural Resource Conservation Service (NRCS) operational-based snow measurements was used in this study. Historic (1936 - 2010) snow course data were obtained for the study area to evaluate snow density. A multiple linear regression model (based on the historical snow course data) for estimating snow density across the study area was developed to estimate SWE directly from snow depth measurements. To investigate the spatial variability and observable patterns of SWE at the basin scale, snow surveys were completed on or about April 1, 2011 and 2012 and combined with NRCS operational measurements. Bivariate relations and multiple linear regression models were developed to understand the relation of SWE with physiographic variables derived using a geographic information system (GIS). SWE was interpolated across the Cache la Poudre basin on a pixel by pixel basis using the model equations and masked to observed snow-covered area (SCA) from an 8-day MODIS product. The independent variables of snow depth, day of year, elevation, and UTM Easting were used in the model to estimate snow density. Calculation of SWE directly from snow depth measurements using the snow density model has strong statistical performance, and model verification suggests the model is transferable to independent data within the bounds of the original dataset. This pathway of estimating SWE directly from snow depth measurement is useful when evaluating snowpack properties at the basin scale, where many time-consuming measurements of SWE are often not feasible. Bivariate relations of SWE and snow depth measurements (from WY 2011 and WY 2012) with physiographic variables show that elevation and location (UTM Easting and UTM Northing) are most strongly correlated with SWE and snow depth. Multiple linear regression models developed for WY 2011 and WY 2012 include elevation and location as independent variables and also include others (e.g., eastness, slope, solar radiation, curvature, canopy density) depending on the model dataset. The final interpolated SWE surfaces, masked to observed SCA, generally show similar patterns across space despite differences in the 2011 and 2012 snow years and differing estimation of SWE magnitude between the combined dataset of field-based and operational-based measurements (modelO+F) and the dataset of operational-based measurements only (modelO). Within each of the model surfaces, the interpolated volume of SWE was greatest within Elevation Zone 5 (3,043 - 3,405 m). The percentage of the total interpolated SWE volume for each model was distributed similarly among elevation zones.
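
    A minimal sketch of the density-to-SWE pathway described above is shown below, with density regressed on the predictors named in the thesis (snow depth, day of year, elevation, UTM Easting). The coefficients, units, and function names are assumptions for illustration, not the fitted model.

        import numpy as np

        def fit_density_model(depth_cm, day_of_year, elev_m, easting_m, density_kg_m3):
            # Multiple linear regression of snow density on the four predictors.
            X = np.column_stack([np.ones_like(depth_cm), depth_cm, day_of_year, elev_m, easting_m])
            beta, *_ = np.linalg.lstsq(X, density_kg_m3, rcond=None)
            return beta

        def swe_mm(beta, depth_cm, day_of_year, elev_m, easting_m):
            # SWE = depth * (snow density / water density), expressed in mm of water equivalent.
            density = np.array([1.0, depth_cm, day_of_year, elev_m, easting_m]) @ beta  # kg m^-3
            return depth_cm * density / 100.0   # depth_cm * 10 mm * (rho / 1000) = mm w.e.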

    Wind field simulation with isogeometric analysis

    For wind field simulation with isogeometric analysis, it is first necessary to generate a spline parameterization of the computational domain, which is an air layer above the terrain surface. This parameterization is created with the meccano method from a digital terrain model. The main steps of the meccano method for tetrahedral mesh generation were introduced in [1, 2]. Based on the volume parameterization obtained by the method, we can generate a mapping from the parametric T-mesh to the physical space [3, 4]. Then, this volumetric parameterization is used to generate a cubic spline representation of the physical domain for the application of isogeometric analysis. We consider a mass-consistent model [5] to compute the wind field simulation in the three-dimensional domain from wind measurements or a wind forecast from a meteorological model (for example, WRF or HARMONIE). From these data, an interpolated wind field is constructed. The mass-consistent model obtains a new wind field approaching the interpolated one, but verifying the continuity equation (mass conservation) for constant density and the impermeability condition on the terrain. This adjusting problem is solved by introducing a Lagrange multiplier, which is the solution of a Poisson problem. The resulting field is obtained from the interpolated one and the gradient of the Lagrange multiplier. It is well known that if we use classical Lagrange finite elements, the gradient of the numerical solution is discontinuous over the element boundaries. The advantage of using isogeometric analysis with cubic polynomial basis functions [6, 7] is that we obtain C² continuity for the Lagrange multiplier in the whole domain. In consequence, the resulting wind field is better approximated. Applications of the proposed technique are presented. Funding: Ministerio de Economía y Competitividad del Gobierno de España; Fondos FEDER; CONACYT-SENE
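
    A common formulation of the mass-consistent adjustment described above is sketched below; the Gauss precision moduli α₁, α₂ and the boundary conditions follow the usual choices in this family of models and are assumptions rather than the paper's exact statement.

        \min_{\mathbf{u}} \int_\Omega \alpha_1^2\left[(u-u_0)^2+(v-v_0)^2\right]
            + \alpha_2^2 (w-w_0)^2 \, d\Omega
        \quad \text{subject to} \quad \nabla\cdot\mathbf{u} = 0 \ \text{in } \Omega,

        u = u_0 + \frac{1}{2\alpha_1^2}\frac{\partial\lambda}{\partial x}, \qquad
        v = v_0 + \frac{1}{2\alpha_1^2}\frac{\partial\lambda}{\partial y}, \qquad
        w = w_0 + \frac{1}{2\alpha_2^2}\frac{\partial\lambda}{\partial z},

        \frac{\partial^2\lambda}{\partial x^2} + \frac{\partial^2\lambda}{\partial y^2}
            + \left(\frac{\alpha_1}{\alpha_2}\right)^2 \frac{\partial^2\lambda}{\partial z^2}
            = -2\alpha_1^2 \, \nabla\cdot\mathbf{u}_0,

    with λ = 0 on open (flow-through) boundaries and ∂λ/∂n = 0 on the terrain (impermeability), after which the adjusted field is recovered from the interpolated field and the gradient of λ.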

    Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)

    Debris flows are among the most hazardous phenomena in mountain areas. To cope with debris flow hazard, it is common to delineate the risk-prone areas through routing models. The most important input to debris flow routing models are the topographic data, usually in the form of Digital Elevation Models (DEMs). The quality of DEMs depends on the accuracy, density, and spatial distribution of the sampled points; on the characteristics of the surface; and on the applied gridding methodology. Therefore, the choice of the interpolation method affects the realistic representation of the channel and fan morphology, and thus potentially the debris flow routing modeling outcomes. In this paper, we initially investigate the performance of common interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor, Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging) in building DEMs with the complex topography of a debris flow channel located in the Venetian Dolomites (North-eastern Italian Alps), by using small-footprint full-waveform Light Detection And Ranging (LiDAR) data. The investigation is carried out through a combination of statistical analysis of vertical accuracy, algorithm robustness, and spatial clustering of vertical errors, and multi-criteria shape reliability assessment. After that, we examine the influence of the tested interpolation algorithms on the performance of a Geographic Information System (GIS)-based cell model for simulating stony debris flows routing. In detail, we investigate both the correlation between the uncertainty in DEM heights resulting from the gridding procedure and that in the corresponding simulated erosion/deposition depths, and the effect of the interpolation algorithms on simulated areas, erosion and deposition volumes, solid-liquid discharges, and channel morphology after the event. The comparison among the tested interpolation methods highlights that the ANUDEM and ordinary kriging algorithms are not suitable for building DEMs with complex topography. Conversely, the linear triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on debris flow routing modeling reveals that the choice of the interpolation algorithm does not significantly affect the model outcomes.
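
    As a concrete example of one of the gridding techniques compared above, a brute-force sketch of Inverse Distance to a Power interpolation and of a vertical-accuracy statistic used to compare DEMs is given below. The power, search strategy, and function names are assumptions; the paper's workflow also includes robustness, error-clustering, and shape-reliability criteria not shown here.

        import numpy as np

        def idw_grid(points_xyz, grid_x, grid_y, power=2.0):
            # Inverse Distance to a Power gridding of scattered ground points (x, y, z)
            # onto a regular DEM grid; brute force over all points for clarity.
            gx, gy = np.meshgrid(grid_x, grid_y)
            dem = np.empty_like(gx, dtype=float)
            for idx in np.ndindex(gx.shape):
                d = np.hypot(points_xyz[:, 0] - gx[idx], points_xyz[:, 1] - gy[idx])
                w = 1.0 / np.maximum(d, 1e-6) ** power
                dem[idx] = np.sum(w * points_xyz[:, 2]) / np.sum(w)
            return dem

        def vertical_rmse(dem_at_checkpoints, checkpoint_z):
            # Root-mean-square vertical error at independent check points.
            return float(np.sqrt(np.mean((dem_at_checkpoints - checkpoint_z) ** 2)))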