
    Fast Isogeometric Boundary Element Method based on Independent Field Approximation

    An isogeometric boundary element method for problems in elasticity is presented, which is based on independent approximations for the geometry, traction and displacement fields. This enables a flexible choice of refinement strategies, permits an efficient evaluation of geometry-related information, allows a mixed collocation scheme which deals with discontinuous tractions along non-smooth boundaries, and yields a significant reduction of the right-hand side of the system of equations for common boundary conditions. All these benefits are achieved without any loss of accuracy compared to conventional isogeometric formulations. The system matrices are approximated by means of hierarchical matrices to reduce the computational complexity for large-scale analysis. For the required geometrical bisection of the domain, a strategy for the evaluation of bounding boxes containing the supports of NURBS basis functions is presented. The versatility and accuracy of the proposed methodology are demonstrated by convergence studies showing optimal rates and by real-world examples in two and three dimensions. Comment: 32 pages, 27 figures

    Smoothed Particle Hydrodynamics and Magnetohydrodynamics

    This paper presents an overview of and introduction to Smoothed Particle Hydrodynamics and Magnetohydrodynamics, in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several `urban myths' regarding SPH, in particular the idea that one can simply increase the `neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1-, 2- and 3-dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and to run all of the numerical examples contained in the paper. Comment: 44 pages, 14 figures, accepted to special edition of J. Comp. Phys. on "Computational Plasma Physics". The ndspmhd code is available for download from http://users.monash.edu.au/~dprice/ndspmhd
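    The density estimate at the heart of SPH, rho_i = sum_j m_j W(r_i - r_j, h), can be sketched in a few lines. The snippet below is an illustrative one-dimensional example (not code from NDSPMHD) using the standard cubic spline kernel:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard cubic spline (M4) kernel in 1D; normalisation 2/(3h)."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w

def sph_density(x, m, h):
    """SPH density estimate: rho_i = sum_j m_j W(x_i - x_j, h)."""
    dx = x[:, None] - x[None, :]
    return (m[None, :] * cubic_spline_kernel(dx, h)).sum(axis=1)

# equal-mass particles on a uniform lattice should recover the mean density
x = np.linspace(0.0, 1.0, 101)
m = np.full_like(x, 1.0 / 101)           # total mass 1 on the unit interval
rho = sph_density(x, m, h=2.0 * (x[1] - x[0]))
```

    In the interior the estimate reproduces the lattice density m/dx almost exactly; near the ends it drops, the familiar edge deficiency of uncorrected SPH summation.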

    Functional Regression

    Functional data analysis (FDA) involves the analysis of data whose ideal units of observation are functions defined on some continuous domain, where the observed data consist of a sample of functions taken from some population, sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the development of this field, which has accelerated in the past 10 years to become one of the fastest-growing areas of statistics, fueled by the growing number of applications yielding this type of data. One unique characteristic of FDA is the need to combine information both across and within functions, which Ramsay and Silverman called replication and regularization, respectively. This article focuses on functional regression, the area of FDA that has received the most attention in applications and methodological development. It begins with an introduction to basis functions, key building blocks for regularization in functional regression methods, followed by an overview of functional regression methods, split into three types: [1] functional predictor regression (scalar-on-function), [2] functional response regression (function-on-scalar) and [3] function-on-function regression. For each, the role of replication and regularization is discussed and the methodological development described in a roughly chronological manner, at times deviating from the historical timeline to group together similar methods. The primary focus is on modeling and methodology, highlighting the modeling structures that have been developed and the various regularization approaches employed. The article closes with a brief discussion of potential areas of future development in this field.
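    As a concrete illustration of the basis-function idea, scalar-on-function regression y_i = integral X_i(t) beta(t) dt + eps_i reduces to ordinary least squares once beta is expanded in a spline basis. The sketch below uses simulated data; the grid, basis dimension and noise level are arbitrary choices for illustration, not from the article:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)            # common sampling grid
dt = t[1] - t[0]
n, K = 200, 8                            # sample size, basis dimension

# cubic B-spline basis for beta(t); a low K acts as regularization
knots = np.concatenate([np.zeros(4), np.linspace(0, 1, K - 2)[1:-1], np.ones(4)])
Bmat = np.column_stack([BSpline(knots, np.eye(K)[j], 3)(t) for j in range(K)])

# simulated functional predictors (Brownian paths) and scalar responses
X = np.cumsum(rng.standard_normal((n, t.size)), axis=1) * np.sqrt(dt)
beta_true = np.sin(2 * np.pi * t)
y = X @ beta_true * dt + 0.01 * rng.standard_normal(n)

# scalar-on-function regression reduces to OLS on the basis scores
Z = X @ Bmat * dt                        # n x K score matrix
c_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = Bmat @ c_hat                  # estimated coefficient function
```

    Replication enters through the n curves pooled in the least-squares fit; regularization enters through the smooth, low-dimensional basis for beta.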

    Spline-based dense medial descriptors for lossy image compression

    Medial descriptors are of significant interest for image simplification, representation, manipulation, and compression. B-splines, in turn, are well-known tools for specifying smooth curves in computer graphics and geometric design. In this paper, we integrate the two by modeling medial descriptors with stable and accurate B-splines for image compression. Representing medial descriptors with B-splines not only greatly improves compression but also yields an effective vector representation of raster images. A comprehensive evaluation shows that our Spline-based Dense Medial Descriptors (SDMD) method achieves much higher compression ratios at similar or even better quality than the well-known JPEG technique. We illustrate our approach with applications in generating super-resolution images and salient-feature-preserving image compression.
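    The core idea of replacing a densely sampled medial branch by a compact B-spline can be sketched with standard smoothing-spline tools. The snippet below is a toy example with a synthetic noisy branch, not the SDMD pipeline itself; it fits a parametric cubic B-spline whose knot count is far below the number of input samples:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# hypothetical medial branch: noisy samples along a smooth skeleton curve
rng = np.random.default_rng(1)
s = np.linspace(0.0, np.pi, 60)
px = s + 0.01 * rng.standard_normal(60)
py = np.sin(s) + 0.01 * rng.standard_normal(60)

# smoothing parametric cubic B-spline; s > 0 trades fidelity for compactness
tck, u = splprep([px, py], s=0.05, k=3)
xs, ys = splev(np.linspace(0.0, 1.0, 200), tck)
```

    Storing the few knots and control points of `tck` instead of the 60 raw samples is the compression step; evaluating the spline densely recovers a smooth vector curve.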

    Curvilinear Interface Methodology for Finite-Element Applications

    Recent trends in design and manufacturing suggest a tendency toward multiple centers of specialty, which results in a need for improved integration methodology for dissimilar finite element or CFD meshes. Since a typical finite element or CFD analysis requires about 50% of an engineer's effort to be devoted to modeling and input, there is a need to advance the state of the art in modeling methodology. These two trends indicate a need for the capability to combine independently modeled configurations in an automated and robust way, without the need for global remodeling. One approach to addressing this need is the development of interfacing methodology which will automatically integrate independently modeled subdomains. The present research included the following objectives: (i) to develop and implement computational methods for automatically remodeling non-coincident finite element models having a pre-defined interface, (ii) to formulate and implement a parametric representation of general space curves and surfaces with a well-defined orientation, and (iii) to demonstrate the computational methodology with representative two- and three-dimensional finite element models. Methodology for automatically remodeling non-coincident subdomains was developed and tested for two- and three-dimensional, independently modeled subdomains. Representative classes of applications have been solved which gave good agreement with reference solutions obtained with conventional methods. The two-dimensional classes of problems solved included flat and curved membranes, multiple subdomains having large gaps between the subdomains, and general space curves representing an interface for remodeling the portions of subdomains adjacent to the interface. The three-dimensional classes of problems solved included multiple three-dimensional subdomains having large three-dimensional gaps between previously modeled subdomains.
The interface was represented by general surfaces with a well-defined orientation, possibly having curvature in more than one direction. The results demonstrated the remodeling methodology to be general, flexible in use, highly automated, and robust for a diverse class of problems. The research reported represents an important advancement in the area of automated remodeling for computational mechanics applications.
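    Objective (ii), a parametric curve with a well-defined orientation, can be illustrated with a cubic Hermite space curve whose orientation is the normalized tangent. The construction below is a generic textbook interpolant chosen for illustration, not the specific representation of the report:

```python
import numpy as np

def hermite_curve(p0, p1, m0, m1, t):
    """Cubic Hermite space curve through p0, p1 with end tangents m0, m1."""
    t = np.asarray(t)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def unit_tangent(p0, p1, m0, m1, t):
    """Well-defined orientation along the curve: the normalized derivative."""
    t = np.asarray(t)[:, None]
    d = ((6 * t**2 - 6 * t) * p0 + (3 * t**2 - 4 * t + 1) * m0
         + (-6 * t**2 + 6 * t) * p1 + (3 * t**2 - 2 * t) * m1)
    return d / np.linalg.norm(d, axis=1, keepdims=True)

# sanity case: collinear data reproduce a straight, uniformly oriented interface
p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
m = np.array([1.0, 0.0, 0.0])
ts = np.linspace(0.0, 1.0, 11)
curve = hermite_curve(p0, p1, m, m, ts)
orient = unit_tangent(p0, p1, m, m, ts)
```

    The unit tangent gives each interface point a consistent direction, which is what allows subdomain boundaries on either side to be remodeled compatibly.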

    Computational Gradient Elasticity and Gradient Plasticity with Adaptive Splines

    Classical continuum mechanics theories are largely insufficient for capturing the size effects observed in many engineering materials: metals, composites, rocks, etc. This is attributed to the absence of a length scale that accounts for the microstructural effects inherent in these materials. Enriching the classical theories with an internal length scale solves this problem. One theoretically sound way of doing this is to introduce higher-order gradient terms in the constitutive relations. In elasticity, introducing a length scale removes the singularity observed at crack tips with the classical theory. In plasticity, it eliminates the spurious mesh sensitivity observed in softening and localisation problems by defining the width of the localisation zone, thereby maintaining a well-posed boundary value problem. However, this comes at the cost of more demanding solution techniques. Higher-order continuity is usually required for solving gradient-enhanced continuum theories, a requirement difficult to meet using traditional finite elements. Hermitian finite elements, mixed methods and meshless methods have been developed to meet this requirement, but these methods have drawbacks in terms of efficiency, robustness or implementational convenience. Isogeometric analysis, which exploits spline-based shape functions, naturally incorporates higher-order continuity, in addition to capturing the exact geometry and expediting the design-through-analysis process. Despite its potential, it is yet to be fully explored for gradient-enhanced continua. Hence, this thesis develops an isogeometric analysis framework for gradient elasticity and gradient plasticity. The linearity of the gradient elasticity formulation enables an operator-split approach: instead of solving the fourth-order partial differential equation monolithically, a set of two second-order partial differential equations is solved in a staggered manner.
A detailed convergence analysis is carried out for the original system and for the split set using NURBS and T-splines. Suboptimal convergence rates of the monolithic approach and the limitations of the staggered approach are substantiated. Another advantage of the spline-based approach adopted in this work is the ease with which different orders of interpolation can be achieved. This is useful for consistency, and relevant in gradient plasticity, where the local (explicit formulation) or nonlocal (implicit formulation) effective plastic strain needs to be discretised in addition to the displacements. Using different orders of interpolation, both formulations are explored at second order, and a fourth-order implicit gradient formulation is proposed. Results, corroborated by dispersion analysis, show that all considered models give good regularisation with mesh-independent results. Compared with finite element approaches that use Hermitian shape functions for the plastic multiplier, or with mixed finite element approaches, isogeometric analysis has the distinct advantage that no interpolation of derivatives is required. In localisation problems, numerical accuracy requires the element size employed in simulations to be smaller than the internal length scale. Fine meshes are also needed close to regions of geometrical singularities or high gradients. Maintaining a fine mesh globally can incur high computational cost, especially for large structures, so selective refinement of the mesh is required. In this context, splines need to be adapted to make them analysis-suitable. Thus, an adaptive isogeometric analysis framework is also developed for gradient elasticity and gradient plasticity. The proposed scheme does not require the mesh size to be smaller than the length scale, even during analysis, until a localisation band develops, upon which adaptive refinement is performed.
Refinement is based on a multi-level mesh with truncated hierarchical basis functions interacting through an inter-level subdivision operator. Through Bézier extraction, truncation of the bases is reduced to matrix multiplication, and an element-wise standard finite element data structure is maintained. In sum, a robust computational framework for engineering analysis is established, combining the flexibility, exact geometry representation and expedited design-through-analysis of isogeometric analysis, the size-effect capabilities and mesh-objective results of gradient-enhanced continua, the standard, convenient data structure of finite element analysis, and the improved efficiency of adaptive hierarchical refinement.
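    The staggered operator split mentioned above can be illustrated in one dimension: once the classical problem yields a (possibly discontinuous) strain field eps_c, a Helmholtz-type equation eps_g - l^2 * eps_g'' = eps_c is solved for the regularized strain. The sketch below uses finite differences rather than splines, purely to show the structure of the second step; the grid size, length scale and Neumann end conditions are illustrative choices:

```python
import numpy as np

# second (staggered) step of the operator split in 1D:
# given the classical strain eps_c, solve  eps_g - l^2 * eps_g'' = eps_c
n, l = 201, 0.05                         # grid points, internal length scale
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
eps_c = np.where(x < 0.5, 0.0, 1.0)      # discontinuous classical strain

A = np.eye(n)
for i in range(1, n - 1):                # interior rows: I - l^2 * D2
    A[i, i - 1:i + 2] += (l / h) ** 2 * np.array([-1.0, 2.0, -1.0])
A[0, :2] = [1.0, -1.0]                   # eps_g' = 0 at both ends
A[-1, -2:] = [-1.0, 1.0]
b = eps_c.copy()
b[0] = b[-1] = 0.0
eps_g = np.linalg.solve(A, b)            # smoothed, singularity-free strain
```

    The jump in eps_c is spread over a zone of width set by l, which is exactly the regularizing role the internal length scale plays at crack tips and in localisation bands.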

    Assisting digital volume correlation with mechanical image-based modeling: application to the measurement of kinematic fields at the architecture scale in cellular materials

    Measuring displacement and strain fields at low observable scales in complex microstructures still remains a challenge in experimental mechanics, often because of the combination of low-definition images with poor texture at this scale.
The problem is particularly acute in the case of cellular materials, when imaged by conventional micro-tomographs, for which complex, highly non-linear local phenomena can occur. As the validation of numerical models and the identification of mechanical properties of materials must rely on accurate measurements of displacement and strain fields, the design and implementation of robust and faithful image correlation algorithms must be conducted. With cellular materials, the use of digital volume correlation (DVC) faces a paradox: in the absence of markings or exploitable texture on or in the struts or cell walls, the available speckle is formed by the material architecture itself. This leads to the inability of classical DVC codes to measure kinematics at the cellular and, a fortiori, sub-cellular scales, precisely because the interpolation basis of the displacement field cannot account for the complexity of the underlying kinematics, especially when bending or buckling of beams or walls occurs. The objective of the thesis is to develop a DVC technique for the measurement of displacement fields in cellular materials at the scale of their architecture. The proposed solution consists in assisting DVC with a weak elastic regularization using an automatically generated image-based mechanical model. The proposed method introduces a separation of scales above which DVC is dominant and below which it is assisted by image-based modeling. First, a numerical investigation and comparison of different techniques for automatically building a geometric and mechanical model from tomographic images is conducted. Two particular methods are considered: the finite element method (FEM) and the finite cell method (FCM). The FCM is a fictitious domain method that immerses the complex geometry in a high-order structured grid and does not require meshing. In this context, various discretization parameters are delicate to choose.
In this work, these parameters are adjusted to obtain (a) the best possible accuracy (bounded by pixelation errors) while (b) ensuring minimal complexity. Concerning the ability of the mechanical image-based models to regularize DIC, several virtual experiments are performed in two dimensions in order to finely analyze the influence of the introduced regularization lengths for different input mechanical behaviors (elastic, elasto-plastic and geometrically non-linear), in comparison with ground truth. We show that the method can estimate complex local displacement and strain fields with speckle-free, low-definition images, even in non-linear regimes such as local buckling. Finally, a three-dimensional generalization is performed through the development of a DVC framework. It takes as input the reconstructed volumes at the different deformation states of the material and automatically constructs the cellular micro-architecture geometry. It considers either an immersed structured B-spline grid of arbitrary order or a finite element mesh. Experimental evidence of the efficiency and accuracy of the proposed approach is provided by measuring the complex kinematics of a polyurethane foam under compression during an in situ test.
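    The weak-regularization idea can be caricatured in one dimension: image correlation augmented by a smoothness penalty, minimizing sum_x (f(x) - g(x + u(x)))^2 + lambda * ||D2 u||^2 by Gauss-Newton. The snippet below is a deliberately simplified Tikhonov stand-in for the image-based mechanical regularization of the thesis, with synthetic images, pixel-wise displacement unknowns and an arbitrary lambda:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.arange(n, dtype=float)
u_true = 0.5 * np.sin(2 * np.pi * x / n)        # smooth sub-pixel ground truth
f = np.sin(0.1 * x) + 0.2 * np.sin(0.37 * x)    # reference "image"
g = np.interp(x, x + u_true, f)                 # deformed image: f(x) = g(x + u_true(x))

# second-difference penalty matrix D2 (the crude elastic regularizer)
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 10.0
u = np.zeros(n)
for _ in range(30):                             # Gauss-Newton on the penalized functional
    gu = np.interp(x + u, x, g)                 # g warped by current estimate
    J = np.interp(x + u, x, np.gradient(g, x))  # image gradient at warped positions
    r = f - gu                                  # correlation residual
    A = np.diag(J * J) + lam * (D2.T @ D2)
    b = J * r - lam * (D2.T @ (D2 @ u))
    u += np.linalg.solve(A, b)
```

    The penalty carries the solution through regions where the image gradient nearly vanishes, which is the 1D analogue of the mechanical model assisting DVC below the scale separation.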

    A Reconstruction Algorithm for Blade Surface Based on Less Measured Points

    A reconstruction algorithm for blade surfaces from a small number of measured points on section curves is given, based on B-spline surface interpolation. The measured points are divided into segments by key geometric points and throat points, which are defined according to design concepts. The segments are fitted by different algorithms, with curvature continuity imposed as the boundary condition to avoid flow disturbance. Finally, a high-quality reconstructed surface model is obtained using the B-spline curve meshes constructed from paired points. The advantage of this algorithm is the simple and effective reconstruction of the blade surface while preserving its aerodynamic performance. Moreover, the obtained paired points can be regarded as measured points with which to measure and reconstruct the blade surface directly. Experimental results show that the reconstructed blade surface is suitable for precisely representing the blade, evaluating machining accuracy, and analyzing machining allowance.
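    The building block of such a reconstruction, B-spline interpolation through a sparse set of section points, can be sketched as follows. The half-ellipse "section" and the chord-length parameterization are illustrative assumptions, not the paper's segmentation-aware fitting:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# hypothetical sparse measured points along one section curve of a blade
theta = np.linspace(0.0, np.pi, 9)
pts = np.column_stack([np.cos(theta), 0.3 * np.sin(theta)])

# chord-length parameterization, then cubic B-spline interpolation
d = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
t = d / d[-1]
spl = make_interp_spline(t, pts, k=3)    # curve passes through every point
dense = spl(np.linspace(0.0, 1.0, 200))  # densified section curve
```

    Interpolating several such section curves and lofting them along the span yields the B-spline curve mesh from which the surface is built.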