
    Geometry definition and grid generation for a complete fighter aircraft

    Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. This paper presents a procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
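    The abstract names a parabolic grid generation method but does not describe it. Purely as an assumed, simplified illustration of the general idea of growing a body-fitted grid outward from a discretized surface (it is not the paper's parabolic method), the following sketch marches grid layers along surface normals with geometric stretching and lightly smooths each new layer; all names and the example body are hypothetical.

```python
# Hedged sketch: grow a body-fitted 2-D grid outward from a surface curve by
# marching layers along local normals (illustrative only, not a parabolic solver).
import numpy as np

def march_grid(surface, n_layers=20, first_step=0.02, stretch=1.2, smooth_passes=2):
    """surface: (N, 2) points along the body. Returns an (n_layers + 1, N, 2) grid."""
    grid = [surface.copy()]
    layer = surface.copy()
    step = first_step
    for _ in range(n_layers):
        t = np.gradient(layer, axis=0)                   # tangents along the layer
        n = np.column_stack([-t[:, 1], t[:, 0]])         # rotate tangents 90 degrees
        n /= np.linalg.norm(n, axis=1, keepdims=True)    # unit outward normals
        layer = layer + step * n                         # march one layer outward
        for _ in range(smooth_passes):                   # simple Laplacian smoothing
            layer[1:-1] = 0.5 * layer[1:-1] + 0.25 * (layer[:-2] + layer[2:])
        grid.append(layer.copy())
        step *= stretch                                  # geometric stretching of spacing
    return np.array(grid)

# Hypothetical "body" section: a shallow bump
x = np.linspace(0.0, 1.0, 101)
body = np.column_stack([x, 0.1 * np.sin(np.pi * x)])
print(march_grid(body).shape)                            # (21, 101, 2)
```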

    GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation, in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to second order, except at interfaces where different single grid systems meet; there, the grid lines are only differentiable up to first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part. This technical memorandum describes the theory and method used in GRID2D/3D.
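    GRID2D/3D itself is a FORTRAN 77 program; the sketch below is only a minimal Python illustration of the core idea it describes, algebraic grid generation by transfinite interpolation with a stretching function controlling the grid-point distribution. The function names, the stretching function, and the example channel geometry are assumptions for illustration, not part of GRID2D/3D.

```python
# Minimal sketch of 2-D transfinite interpolation (Coons patch) with a one-sided
# stretching function that clusters grid lines near the bottom boundary (v = 0).
import numpy as np

def stretch(s, beta=2.0):
    """One-sided tanh-type stretching of s in [0, 1]; clusters points near s = 0."""
    return 1.0 - np.tanh(beta * (1.0 - s)) / np.tanh(beta)

def tfi_grid(bottom, top, left, right, ni=41, nj=21, beta=2.0):
    """bottom/top map u in [0, 1] to (x, y); left/right map v in [0, 1] to (x, y).
    The four boundary curves must share corner points. Returns X, Y of shape (nj, ni)."""
    u = np.linspace(0.0, 1.0, ni)
    v = stretch(np.linspace(0.0, 1.0, nj), beta)
    U, V = np.meshgrid(u, v)
    B, T, L, R = bottom(U), top(U), left(V), right(V)
    P00, P10, P01, P11 = bottom(0.0), bottom(1.0), top(0.0), top(1.0)
    out = []
    for k in range(2):                                   # k = 0: x, k = 1: y
        out.append((1 - V) * B[k] + V * T[k] + (1 - U) * L[k] + U * R[k]
                   - ((1 - U) * (1 - V) * P00[k] + U * (1 - V) * P10[k]
                      + (1 - U) * V * P01[k] + U * V * P11[k]))
    return out[0], out[1]

# Hypothetical example: a channel of length 3 with a bump on the lower wall
bottom = lambda u: (3.0 * u, 0.2 * np.sin(np.pi * u) ** 2)
top    = lambda u: (3.0 * u, np.ones_like(u, dtype=float))
left   = lambda v: (np.zeros_like(v, dtype=float), 1.0 * v)
right  = lambda v: (3.0 * np.ones_like(v, dtype=float), 1.0 * v)
X, Y = tfi_grid(bottom, top, left, right)
```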

    Accuracy assessment in glacier change analysis

    This thesis assesses the accuracy of digital elevation models (DEMs) generated from contour lines and LiDAR (Light Detection and Ranging) points, employing several interpolation methods at different resolutions. The study area is the Jostefonn glacier, situated in Sogn og Fjordane county, Norway. There are several ways to assess the accuracy of DEMs, from simple visual comparison to more sophisticated relative and absolute comparisons. Digital elevation models of the Jostefonn glacier were created from contour lines for the years 1966 and 1993, and LiDAR data from 2011 was used as a reference data set. Of all the interpolation methods tested, the Natural Neighbours (NN) and Triangular Irregular Network (TIN) algorithms rendered the best results and proved superior to the other interpolation methods. Several resolutions were tested (cell sizes of 5 m, 10 m, 20 m and 50 m), and the best outcome was achieved with as small a cell size as possible. The digital elevation models were compared to a reference data set outside the glacier area both on a cell-by-cell basis and by extracting information at test points; both approaches rendered the same results, which are presented in this thesis. Several techniques were employed to assess the accuracy of the digital elevation models, including visualization and statistical analysis. Visualization techniques included comparison of the original contour lines with those generated from the DEMs. Root mean square error, mean absolute error and other accuracy measures were statistically analysed. The greatest elevation difference between the digital elevation model of interest and the reference data set was observed in areas of steep terrain: the steeper the terrain, the greater the observed error. The magnitude of the errors can be reduced by using a smaller cell size, but this is offset by a larger amount of data and increased data processing time.

    Popular science summary: Glaciers are very sensitive indicators of climate change, and the major cause of melting glaciers is global warming. The rapid rate of melting has serious negative impacts on the earth, causing flooding, affecting flora and fauna, and resulting in shortages of freshwater and hydroelectricity. Long-term monitoring of glaciers, and the knowledge gained from it, can help governments and environmental and water resource managers make plans to cope with the impacts of climate change. Results from glacier monitoring ought to be precise, showing the actual situation compared to the situation in the past as well as predicting possible glacier changes in the future. The aim of this thesis was to investigate how sensitive the results of glacier change detection are to the different methods used, focusing on the quality of the digital elevation models. The study area was the Jostefonn glacier, situated in Sogn og Fjordane county, Norway. Digital elevation models were created from contour lines for the years 1966 and 1993, and LiDAR data from 2011 was used as a reference data set. Several techniques were employed to estimate the accuracy of the digital elevation models, including visualization, statistical analysis, analysis of the accuracy for terrain on different slopes, and comparison to a reference data set outside the glacier area that was considered stable and where no elevation change was expected.
    The original contour lines (1966 and 1993) were compared, using visualization techniques, with the contour lines generated from the created terrain models (glacier area) as well as with the contour lines from the reference data set (outside the glacier area). Accuracy measures (root mean square error, mean absolute error and others) were statistically analysed. The Natural Neighbours and Triangular Irregular Network interpolators proved superior to the other algorithms used to create the terrain models. The best outcome was achieved by using as small a cell size as possible: of the resolutions tested (5 m, 10 m, 20 m and 50 m), the 5 m resolution rendered the best results. The greatest elevation differences were observed in areas of steep terrain; the steeper the terrain, the greater the elevation difference. A terracing effect was noticeable in the digital elevation models, caused by the high density of elevation points along the contour lines and the scarcity of points between them. Estimating the accuracy of digital elevation models yields useful information: the accuracy of the terrain models determines the reliability of the glacier change analysis, which is why the digital elevation models must represent the terrain as accurately as possible. The different methods used in this thesis rendered very similar results, indicating that the results are reliable and that the terrain models created with the Natural Neighbours and Triangular Irregular Network interpolators (at 5 m resolution) can be employed in further glacier change analysis.
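    As a minimal sketch of the cell-by-cell accuracy assessment described above, the code below computes root mean square error, mean absolute error and mean error between a DEM and a reference surface over a "stable" mask (outside the glacier). The arrays and numbers are synthetic placeholders, not the thesis data.

```python
# Hedged sketch: cell-by-cell DEM accuracy measures against a reference surface,
# restricted to a stable area where no elevation change is expected.
import numpy as np

def dem_accuracy(dem, reference, stable_mask):
    """dem, reference: 2-D elevation arrays on the same grid; stable_mask: boolean array."""
    diff = (dem - reference)[stable_mask]
    diff = diff[~np.isnan(diff)]                       # drop no-data cells
    return {"rmse": float(np.sqrt(np.mean(diff ** 2))),
            "mae": float(np.mean(np.abs(diff))),
            "mean_error": float(np.mean(diff)),        # systematic bias
            "n_cells": int(diff.size)}

# Synthetic example standing in for, e.g., a 1993 DEM checked against 2011 LiDAR
rng = np.random.default_rng(0)
reference = rng.normal(1200.0, 50.0, size=(200, 200))
dem = reference + rng.normal(0.0, 1.5, size=reference.shape)
stable = np.ones_like(reference, dtype=bool)           # e.g. cells outside the glacier
print(dem_accuracy(dem, reference, stable))
```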

    Locally Adaptive Nonparametric Binary Regression

    A nonparametric and locally adaptive Bayesian estimator is proposed for estimating a binary regression. Flexibility is obtained by modeling the binary regression as a mixture of probit regressions, with the argument of each probit regression having a thin plate spline prior with its own smoothing parameter, and with the mixture weights depending on the covariates. The estimator is compared to a single spline estimator and to a recently proposed locally adaptive estimator. The methodology is illustrated by applying it to both simulated and real examples. Comment: 31 pages, 10 figures.
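    The paper's estimator is Bayesian, with thin plate spline priors and smoothing parameters inferred from the data; the sketch below only illustrates the model form it builds on, a covariate-dependent mixture of probit regressions, by evaluating P(y = 1 | x) for given component functions and softmax mixture weights. All functions and values here are assumed for illustration.

```python
# Hedged sketch of the model form only (not the paper's Bayesian estimator):
# P(y = 1 | x) = sum_k w_k(x) * Phi(f_k(x)), with covariate-dependent weights w_k(x).
import numpy as np
from scipy.special import softmax
from scipy.stats import norm

def mixture_probit_prob(x, component_funcs, weight_funcs):
    """component_funcs: list of f_k(x); weight_funcs: list of unnormalised log-weights g_k(x)."""
    f = np.column_stack([f_k(x) for f_k in component_funcs])    # (n, K) probit arguments
    logits = np.column_stack([g_k(x) for g_k in weight_funcs])  # (n, K) weight logits
    w = softmax(logits, axis=1)                                  # mixture weights
    return np.sum(w * norm.cdf(f), axis=1)

# Hypothetical two-component example: a gentle trend in one region, a sharp jump in another
x = np.linspace(-3.0, 3.0, 200)
p = mixture_probit_prob(
    x,
    component_funcs=[lambda x: 0.5 * x, lambda x: 4.0 * (x - 1.0)],
    weight_funcs=[lambda x: -2.0 * x, lambda x: 2.0 * x],
)
```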

    IGS: an IsoGeometric approach for Smoothing on surfaces

    We propose an Isogeometric approach for smoothing on surfaces, namely estimating a function starting from noisy and discrete measurements. More precisely, we aim at estimating functions lying on a surface represented by NURBS, which are geometrical representations commonly used in industrial applications. The estimation is based on the minimization of a penalized least-squares functional, which is equivalent to solving a fourth-order Partial Differential Equation (PDE). In this context, we use Isogeometric Analysis (IGA) for the numerical approximation of this surface PDE, leading to an IsoGeometric Smoothing (IGS) method for fitting data spatially distributed on a surface. Indeed, IGA facilitates encapsulating the exact geometrical representation of the surface in the analysis and also allows the use of at least globally C^1-continuous NURBS basis functions, for which the fourth-order PDE can be solved using the standard Galerkin method. We show the performance of the proposed IGS method by means of numerical simulations, and we apply it to the estimation of the pressure coefficient, and the associated aerodynamic force, on a winglet of the SOAR space shuttle.
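    The IGS method itself discretises a fourth-order surface PDE with NURBS bases; as a much simpler planar analogue (assumed for illustration, not the paper's formulation), the sketch below solves a penalized least-squares smoothing problem over B-spline basis functions, with a second-difference roughness penalty playing the role of the PDE-based regularisation.

```python
# Hedged 1-D analogue of penalized least-squares smoothing (not the IGS surface method):
# minimise ||y - B c||^2 + lam * c^T P c over spline coefficients c.
# Requires scipy >= 1.8 for BSpline.design_matrix.
import numpy as np
from scipy.interpolate import BSpline

def penalized_spline_fit(x, y, n_basis=20, degree=3, lam=1.0):
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    knots = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]  # clamped knot vector
    B = BSpline.design_matrix(x, knots, degree).toarray()            # (n, n_basis)
    D = np.diff(np.eye(n_basis), n=2, axis=0)                         # second differences
    P = D.T @ D                                                       # roughness penalty
    coef = np.linalg.solve(B.T @ B + lam * P, B.T @ y)                # penalized normal equations
    return BSpline(knots, coef, degree)

# Hypothetical noisy samples of a smooth function
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 300))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)
smoother = penalized_spline_fit(x, y, lam=0.1)
y_hat = smoother(x)
```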

    Assessment of the variability of spatial interpolation methods using elevation and drill hole data over the Magmont mine area, south-east Missouri

    Spatial interpolation methods are widely used in fields of geoscience such as mineral exploration. Interpolation methods translate the distribution of discrete data into a continuous field over a given study area. Many methods exist and operate differently, and judiciously choosing the best interpolation method calls for an understanding of the algorithm, the intent or goal of the investigation, and knowledge of the study area. In the field of mineral exploration, accurate assessment is important because both overestimation and underestimation of spatially defined variables result in varied consequences. Assessment of the methods' variability can therefore be used as an additional criterion to help make an informed choice. Here, eight interpolation methods were tested on two spatial data sets consisting of topographic surface elevations and subsurface elevations of the top and the bottom of a lead orebody at the Magmont mine area in south-east Missouri. Variability between the interpolation methods was assessed based on a statistical paired t-test of each method against a reference value, geometric analysis using the map algebra tool in ArcMap 10.4.1, and comparison of their algorithms. Two of the methods returned values not significantly different from the reference value, while the others were less robust. Testing model variability a second time on a reduced sample size suggests that interpolation methods are sensitive to sample size. Similarly, building the orebody top and bottom surfaces from information on the depths across the mineralized intersection showed dissimilarity among the methods. Key words: spatial interpolation, GIS, Magmont mine area, variability, map algebra, paired t-test.
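    As a minimal sketch of the statistical comparison described above, a paired t-test of each interpolation method's estimates against reference values at the same locations can be run as follows; the method names and numbers are hypothetical, not the Magmont data.

```python
# Hedged sketch: paired t-test of interpolated estimates against reference values,
# one test per interpolation method.
import numpy as np
from scipy import stats

def compare_methods(reference, method_estimates):
    """reference: (n,) values at check points; method_estimates: dict of name -> (n,) estimates."""
    results = {}
    for name, est in method_estimates.items():
        t_stat, p_value = stats.ttest_rel(est, reference)       # paired t-test
        results[name] = {"t": float(t_stat), "p": float(p_value),
                         "mean_diff": float(np.mean(est - reference))}
    return results

# Hypothetical check data for two methods
rng = np.random.default_rng(2)
reference = rng.normal(300.0, 20.0, size=50)                     # e.g. drill-hole elevations
estimates = {"method_a": reference + rng.normal(0.5, 2.0, 50),   # small systematic bias
             "method_b": reference + rng.normal(0.0, 1.0, 50)}
for name, r in compare_methods(reference, estimates).items():
    print(name, r)
```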

    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, and (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, and this is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
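    As a simplified illustration of stage (iii), where a rotational alignment is recovered by 1D correlation of signals sampled around an isoradius contour, the sketch below finds the circular shift that best aligns two periodic signals. The signal construction here is assumed for illustration and is not the paper's curvature descriptor.

```python
# Hedged sketch: recover a rotation between two periodic 1-D signals by
# FFT-based circular cross-correlation.
import numpy as np

def circular_alignment(sig_a, sig_b):
    """Return the circular shift (in samples) of sig_b that best aligns it with sig_a."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=a.size)
    return int(np.argmax(corr))

# Hypothetical example: the same noisy periodic signal, circularly shifted by 40 samples
rng = np.random.default_rng(3)
n = 360                                                # e.g. one sample per degree
base = np.sin(np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)) + 0.3 * rng.normal(size=n)
rotated = np.roll(base, 40)
print(circular_alignment(rotated, base))               # expected: 40
```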

    Maximum A Posteriori Resampling of Noisy, Spatially Correlated Data

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the “best” value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by “too much,” where “too much” is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or “resampled.” Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
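    When both the data-error pdf and the kriging conditional pdf are Gaussian, the maximum of their product has a closed form, which gives a feel for the single-point "resampling" step; the sketch below is an assumed, simplified illustration of that update, not the authors' full sequential, order-randomized algorithm.

```python
# Hedged sketch: MAP update of one noisy observation when both the data-error pdf
# and the kriging conditional pdf are Gaussian; their product is Gaussian, so the
# MAP value is the precision-weighted mean.
import numpy as np

def map_resample(d, sigma_d, mu_k, sigma_k):
    """d, sigma_d: observed value and its error std;
    mu_k, sigma_k: kriging prediction and kriging std from proximal data."""
    w_d = 1.0 / sigma_d ** 2                      # precision of the observation
    w_k = 1.0 / sigma_k ** 2                      # precision of the kriging estimate
    value = (w_d * d + w_k * mu_k) / (w_d + w_k)  # posterior mode (MAP value)
    std = np.sqrt(1.0 / (w_d + w_k))              # posterior standard deviation
    return value, std

# Hypothetical bathymetry point: an uncertain sounding vs. a confident kriged estimate
print(map_resample(d=-42.0, sigma_d=3.0, mu_k=-45.5, sigma_k=0.8))
```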

    Representation and coding of 3D video data

    Deliverable D4.1 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.1 of the project.
