37 research outputs found

    Geometrically Consistent Aerodynamic Optimization using an Isogeometric Discontinuous Galerkin Method

    Get PDF
    The objective of the current work is to define a design optimization methodology in aerodynamics in which all numerical components are based on a unique geometrical representation, consistent with Computer-Aided Design (CAD) standards. In particular, the design is parameterized by Non-Uniform Rational B-Splines (NURBS), the computational domain is automatically constructed using rational Bézier elements extracted from NURBS boundaries without any approximation, and the resolution of the flow equations relies on an adaptive Discontinuous Galerkin (DG) method based on rational representations. A Bayesian framework is used to optimize NURBS control points in a single- or multi-objective, constrained, global optimization setting. The resulting methodology is therefore fully CAD-consistent, high-order in space and time, includes local adaptation and shock-capturing capabilities, and exhibits high parallelization performance. The proposed methods are described in detail and their properties are established. Finally, two design optimization problems are provided as illustrations: the shape optimization of an airfoil in transonic regime, for drag reduction with a lift constraint, and the multi-objective optimization of the control law of a morphing airfoil in subsonic regime, regarding the time-averaged lift, the minimum instantaneous lift and the energy consumption.
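To illustrate the rational representations this methodology is built on (a generic sketch, not code from the paper), a NURBS curve can be evaluated with the Cox–de Boor recursion plus a weighted (projective) combination of control points; the quarter-circle example below shows why rational elements can represent conic boundaries without any approximation:

```python
import numpy as np

def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p
    over the knot vector U, evaluated at parameter u (0 <= u < 1)."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] != U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, P, w, p, U):
    """Point on a NURBS curve: weighted combination of control points P,
    which is what makes conics exactly representable."""
    num, den = np.zeros(len(P[0])), 0.0
    for i in range(len(P)):
        N = bspline_basis(i, p, u, U) * w[i]
        num += N * np.asarray(P[i], float)
        den += N
    return num / den

# quadratic NURBS quarter circle: with these weights the curve lies
# exactly on the unit circle, with no polynomial approximation error
P, w, U = [(1, 0), (1, 1), (0, 1)], [1.0, 2 ** -0.5, 1.0], [0, 0, 0, 1, 1, 1]
pt = nurbs_point(0.5, P, w, 2, U)   # a point satisfying x^2 + y^2 = 1
```

The single NURBS representation means the same control points that define the CAD geometry also drive the extracted Bézier elements and the design parameterization.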

    An immersed methodology for fluid-structure interaction using NURBS and T-splines: theory, algorithms, validation, and application to blood flow at small scales

    Get PDF
    Mesh-based immersed approaches shine in a variety of fluid-structure interaction (FSI) applications, e.g., simulations where the solid undergoes large displacements or rotations, particulate flow problems, and scenarios where the topology of the region occupied by the fluid varies in time. In this thesis, a new mesh-based immersed approach is proposed which is based on the use of different types of splines as basis functions. This approach is put forth for modeling and simulating different types of biological cells in blood flow at small scales. The specific contributions of this thesis are outlined as follows. Firstly, a hybrid variational-collocation immersed technique using non-uniform rational B-splines (NURBS) is presented. Newtonian viscous incompressible fluids and nonlinear hyperelastic incompressible solids are considered. Our formulation boils down to three coupled equations: the linear momentum balance equation, the mass conservation equation, and the kinematic equation that relates the Lagrangian displacement with the Eulerian velocity. The latter is discretized in strong form using isogeometric collocation and the other two equations are discretized using the variational multiscale (VMS) paradigm. As usual in immersed FSI approaches, we define a background mesh on the whole computational domain and a Lagrangian mesh tailored to the region occupied by each solid. Besides using NURBS for creating these meshes, the data transfer between the background mesh and the Lagrangian meshes is carried out using NURBS functions in such a way that no interpolation or projection is needed, thus avoiding the errors associated with these procedures. Regarding the time discretization, the generalized-α method is used, which leads to a fully implicit and second-order accurate method.
The methodology is validated in two- and three-dimensional settings by comparing the terminal velocity of free-falling bulky solids obtained in our simulations with its theoretical value. Secondly, we extend our algorithms to use analysis-suitable T-splines (ASTS) as basis functions instead of NURBS. This required developing isogeometric collocation methods for ASTS, which was an open problem. The data transfer between meshes changes significantly from NURBS to ASTS due to the fact that their geometrical mappings are local to patches and elements, respectively. ASTS possess two main advantages with respect to NURBS: (1) ASTS support local h-refinement and (2) ASTS are unstructured. The ASTS-based method is validated by solving again the aforementioned benchmark problems and showing the potential of ASTS to decrease the number of elements needed, thus enhancing the efficiency of the method. Thirdly, capsules, modeled as solid-shell NURBS elements, are proposed as numerical proxies for representing red blood cells (RBCs). The dynamics of capsules are able to reproduce the main motions and shapes observed in experiments with RBCs in both shear and parabolic flows. Hemorheological properties such as the Fåhræus and Fåhræus-Lindqvist effects are captured in our simulations. In order to obtain the aforementioned results, it is essential to adequately satisfy the incompressibility constraint close to the fluid-solid interface, which is an arduous task in immersed approaches for fluid-structure interaction. Finally, compound capsules are presented as numerical proxies for cells with a nucleus, e.g., white blood cells (WBCs) and circulating tumor cells (CTCs). The dynamics of hyperelastic compound capsules in shear flow are studied in both two- and three-dimensional settings. Moreover, we simulate how CTCs manage to pass through channel narrowings, an interesting characteristic of CTCs since it is used in experiments to sort CTCs from blood samples.
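The generalized-α time integration mentioned in the abstract can be sketched on a scalar model problem. This is a toy version assuming the standard spectral-radius (ρ∞) parameterization for first-order systems, not the thesis's actual flow solver; the Newton loop stands in for the nonlinear solve a VMS discretization would require:

```python
def generalized_alpha_step(v, vdot, f, dfdv, dt, rho_inf=0.5):
    """One generalized-alpha step for the first-order ODE v' = f(v).
    Fully implicit and second-order accurate, as stated in the abstract;
    rho_inf controls high-frequency damping."""
    am = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)
    af = 1.0 / (1.0 + rho_inf)
    gamma = 0.5 + am - af
    vdot_new = vdot  # predictor: same-rate guess
    for _ in range(50):  # Newton iteration on the alpha-level residual
        v_new = v + dt * ((1.0 - gamma) * vdot + gamma * vdot_new)
        v_af = v + af * (v_new - v)
        vdot_am = vdot + am * (vdot_new - vdot)
        R = vdot_am - f(v_af)
        if abs(R) < 1e-14:
            break
        dR = am - dfdv(v_af) * af * gamma * dt
        vdot_new -= R / dR
    v_new = v + dt * ((1.0 - gamma) * vdot + gamma * vdot_new)
    return v_new, vdot_new
```

For the linear test equation v' = -v the Newton loop converges in one step, and the numerical solution tracks the exact exponential decay to second order in dt.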

    A novel parallel algorithm for surface editing and its FPGA implementation

    Get PDF
    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Surface modelling and editing is one of the important subjects in computer graphics. Decades of research in computer graphics have been carried out on both low-level, hardware-related algorithms and high-level, abstract software. Success of computer graphics has been seen in many application areas, such as multimedia, visualisation, virtual reality and the Internet. However, the hardware realisation of the OpenGL architecture based on FPGA (field programmable gate array) is beyond the scope of most computer graphics research. It is an uncultivated research area where the OpenGL pipeline, from hardware through the whole embedded system (ES) up to applications, is implemented in an FPGA chip. This research proposes a hybrid approach investigating both software and hardware methods. It aims at bridging the gap between software and hardware methods and enhancing the overall performance of computer graphics. It consists of four parts: the construction of an FPGA-based ES, a Mesa-OpenGL implementation for FPGA-based ESs, parallel processing, and a novel algorithm for surface modelling and editing. First, the FPGA-based ES is built up. In addition to the Nios II soft processor and DDR SDRAM memory, it consists of the LCD display device, frame buffers, video pipeline, and an algorithm-specified module to support the graphics processing. Since there is no implementation of OpenGL available for FPGA-based ESs, a specific OpenGL implementation based on Mesa is carried out. Because of the limited FPGA resources, the implementation adopts fixed-point arithmetic, which offers faster computation and lower storage than floating-point arithmetic while retaining accuracy sufficient for 3D rendering. Moreover, the implementation includes Bézier-spline curve and surface algorithms to support surface modelling and editing.
The pipelined parallelism and co-processors are used to accelerate graphics processing in this research. These two parallelism methods extend the traditional computation parallelism to fine-grained parallel tasks in the FPGA-based ESs. The novel algorithm for surface modelling and editing, called the Progressive and Mixing Algorithm (PAMA), is proposed and implemented on FPGA-based ESs. Compared with two main surface editing methods, subdivision and deformation, the PAMA eliminates the large storage requirement and computing cost of intermediate processes. With four independent shape parameters, the PAMA can be used to freely model and edit the shape of an open or closed surface while globally maintaining zero-order geometric continuity. The PAMA can be applied independently not only to FPGA-based ESs but also to other platforms. With its parallel processing, small size, and low costs of computing, storage and power, the FPGA-based ES provides an effective hybrid solution to surface modelling and editing.
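The combination of fixed-point arithmetic and Bézier evaluation described above can be illustrated with a de Casteljau evaluation in a Q16.16 format. The format and routine are illustrative choices, not the thesis's actual word length or hardware pipeline:

```python
FRAC_BITS = 16            # Q16.16 format -- an illustrative choice,
ONE = 1 << FRAC_BITS      # not the thesis's actual word length

def to_fixed(x):
    return int(round(x * ONE))

def fx_mul(a, b):
    # fixed-point multiply: full-width integer product, then rescale
    return (a * b) >> FRAC_BITS

def de_casteljau_fixed(ctrl, t):
    """Evaluate a 1D Bezier polynomial at parameter t with de Casteljau's
    repeated linear interpolation, carried out entirely in fixed point."""
    pts = [to_fixed(c) for c in ctrl]
    tf = to_fixed(t)
    while len(pts) > 1:
        pts = [a + fx_mul(tf, b - a) for a, b in zip(pts, pts[1:])]
    return pts[0] / ONE

# cubic bump [0, 1, 1, 0]: the exact midpoint value is 0.75
val = de_casteljau_fixed([0.0, 1.0, 1.0, 0.0], 0.5)
```

Because every step is a shift-and-multiply on integers, this maps naturally onto FPGA logic without a floating-point unit.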

    Smooth path planning with Pythagorean-hodograph spline curves: geometric design and motion control

    Get PDF
    This thesis addresses two significant problems regarding autonomous systems, namely path and trajectory planning. Path planning deals with finding a suitable path from a start to a goal position by exploiting a given representation of the environment. Trajectory planning schemes govern the motion along the path by generating appropriate reference (path) points. We propose a two-step approach for the construction of planar smooth collision-free navigation paths. Obstacle avoidance techniques that rely on classical data structures are initially considered for the identification of piecewise linear paths that do not intersect the obstacles of a given scenario. In the second step of the scheme we rely on spline interpolation algorithms with tension parameters to provide a smooth planar control strategy. In particular, we consider Pythagorean–hodograph (PH) curves, since they provide an exact computation of fundamental geometric quantities. The vertices of the previously produced piecewise linear paths are interpolated by using a G1 or G2 interpolation scheme with tension based on PH splines. In both cases, a strategy based on the asymptotic analysis of the interpolation scheme is developed in order to obtain an automatic selection of the tension parameters. To completely describe the motion along the path we present a configurable trajectory planning strategy for the offline definition of time-dependent C2 piecewise quintic feedrates. When PH spline curves are considered, the corresponding accurate and efficient CNC interpolator algorithms can be exploited.
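The "exact computation of fundamental geometric quantities" rests on the defining PH property: the hodograph is the square of a complex preimage polynomial, so the parametric speed is itself a polynomial and arc length needs no numerical quadrature. A minimal sketch (the preimage polynomial below is an arbitrary example, not from the thesis):

```python
import numpy as np

def ph_from_preimage(w):
    """Given complex coefficients w (lowest degree first) of a preimage
    polynomial w(t), the planar PH curve has hodograph r'(t) = w(t)^2,
    so the parametric speed |r'(t)| = |w(t)|^2 is a polynomial and its
    antiderivative gives the arc length in closed form."""
    wr = np.polynomial.Polynomial(np.real(w))
    wi = np.polynomial.Polynomial(np.imag(w))
    speed = wr * wr + wi * wi     # |r'(t)| as an exact polynomial
    arclen = speed.integ()        # exact antiderivative, no quadrature
    return speed, arclen

# example preimage w(t) = (1 + 2i) + (3 - i) t  ->  a PH cubic
speed, s = ph_from_preimage([1 + 2j, 3 - 1j])
length = s(1.0) - s(0.0)   # = integral of 5 + 2t + 10t^2 over [0,1] = 28/3
```

This exactness is what makes PH splines attractive for CNC interpolators: feedrate profiles can be related to arc length analytically.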

    Mesh Refinement Strategies for the Adaptive Isogeometric Method

    Get PDF
    This thesis introduces four mesh refinement algorithms refine_hb, refine_thb, refine_ts2D and refine_tsnD for the Adaptive Isogeometric Method using multivariate Hierarchical B-splines, multivariate Truncated Hierarchical B-splines, bivariate T-splines, and multivariate T-splines, respectively. We address, for refine_hb and refine_thb, boundedness of the overlap of basis functions, boundedness of mesh overlays and linear complexity in the sense of a uniform upper bound on the ratio of generated and marked elements. The existence of an upper bound on the overlap of basis functions implies that the system matrix of the linear equation system to solve is sparse, i.e., it has a uniformly bounded number of non-zero entries in each row and column. The upper bound on the number of elements in the coarsest common refinement (the overlay) of two meshes, as well as linear complexity, are crucial ingredients for a later proof of rate-optimality of the method. For refine_ts2D and refine_tsnD, the overlap of basis functions is bounded a priori and needs no further investigation. We investigate the boundedness of mesh overlays, linear independence of the T-splines, nestedness of the T-spline spaces, and linear complexity as above. Nestedness of the spline spaces is crucial in the sense that it implies the so-called Galerkin orthogonality, which characterizes the approximate solution as a best approximation of the exact solution with respect to a norm that depends on the problem. Altogether, this work paves the way for a proof of rate-optimality for the Adaptive Isogeometric Method with HB-splines, THB-splines, or T-splines in any space dimension. In order to justify the proposed methods and theoretical results in this thesis, numerical experiments underline their practical relevance, showing that they are not outperformed by currently prevalent refinement strategies.
As an outlook to future work, we outline an approach for the handling of zero knot intervals and multiple lines in the interior of the domain, which are used in CAD applications for controlling the continuity of the spline functions, and we also sketch basic ideas for the local refinement of two-dimensional meshes that do not have tensor-product structure.
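The flavor of such refinement routines can be conveyed by a toy 1D dyadic refinement with a level-grading closure rule. This is an illustrative sketch in the spirit of marked-element refinement, not the actual refine_hb/refine_thb algorithms or their admissibility condition:

```python
def children(el):
    # bisect element (level, index) of [0, 1] into its two dyadic children
    l, i = el
    return [(l + 1, 2 * i), (l + 1, 2 * i + 1)]

def neighbors(el, mesh):
    # elements in the mesh sharing an endpoint with el
    l, i = el
    a, b = i / 2 ** l, (i + 1) / 2 ** l
    out = []
    for m in mesh:
        lm, im = m
        am, bm = im / 2 ** lm, (im + 1) / 2 ** lm
        if m != el and (abs(am - b) < 1e-15 or abs(bm - a) < 1e-15):
            out.append(m)
    return out

def refine(mesh, marked):
    """Bisect marked elements; a closure rule first refines strictly
    coarser neighbors, so adjacent levels differ by at most one (an
    illustrative grading condition that bounds the generated/marked
    ratio in the spirit of linear-complexity results)."""
    mesh, queue = set(mesh), list(marked)
    while queue:
        el = queue.pop()
        if el not in mesh:
            continue
        coarse = [m for m in neighbors(el, mesh) if m[0] < el[0]]
        if coarse:
            queue.append(el)     # retry el after its neighbors
            queue.extend(coarse)
            continue
        mesh.remove(el)
        mesh.update(children(el))
    return mesh
```

Marking one fine element next to a coarse one triggers the closure: the coarse neighbor is refined first, keeping the mesh graded.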

    Towards a High Quality Real-Time Graphics Pipeline

    Get PDF
    Modern graphics hardware pipelines create photorealistic images with high geometric complexity in real time. The quality is constantly improving and advanced techniques from feature film visual effects, such as high dynamic range images and support for higher-order surface primitives, have recently been adopted. Visual effect techniques have large computational costs and significant memory bandwidth usage. In this thesis, we identify three problem areas and propose new algorithms that increase the performance of a set of computer graphics techniques. Our main focus is on efficient algorithms for the real-time graphics pipeline, but parts of our research are equally applicable to offline rendering. Our first focus is texture compression, which is a technique to reduce the memory bandwidth usage. The core idea is to store images in small compressed blocks which are sent over the memory bus and are decompressed on-the-fly when accessed. We present compression algorithms for two types of texture formats. High dynamic range images capture environment lighting with luminance differences over a wide intensity range. Normal maps store perturbation vectors for local surface normals, and give the illusion of high geometric surface detail. Our compression formats are tailored to these texture types and have compression ratios of 6:1, high visual fidelity, and low-cost decompression logic. Our second focus is tessellation culling. Culling is a commonly used technique in computer graphics for removing work that does not contribute to the final image, such as completely hidden geometry. By discarding rendering primitives from further processing, substantial arithmetic computations and memory bandwidth can be saved. Modern graphics processing units include flexible tessellation stages, where rendering primitives are subdivided for increased geometric detail. Images with highly detailed models can be synthesized, but the incurred cost is significant. 
We have devised a simple remapping technique that allows for better tessellation distribution in screen space. Furthermore, we present programmable tessellation culling, where bounding volumes for displaced geometry are computed and used to conservatively test if a primitive can be discarded before tessellation. We introduce a general tessellation culling framework, and an optimized algorithm for rendering of displaced Bézier patches, which is expected to be a common use case for graphics hardware tessellation. Our third and final focus is forward-looking, and relates to efficient algorithms for stochastic rasterization, a rendering technique where camera effects such as depth of field and motion blur can be faithfully simulated. We extend a graphics pipeline with stochastic rasterization in spatio-temporal space and show that stochastic motion blur can be rendered with rather modest pipeline modifications. Furthermore, backface culling algorithms for motion blur and depth of field rendering are presented, which are directly applicable to stochastic rasterization. Hopefully, our work in this field brings us closer to high-quality real-time stochastic rendering.
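The conservative pre-tessellation test described above can be reduced to its essence: by the convex-hull property, a Bézier patch lies inside the convex hull of its control points, so a patch whose control points all lie behind a frustum plane can be discarded before tessellation. A minimal sketch (the plane convention is an assumption, and a real culler would test all six frustum planes and displacement bounds):

```python
import numpy as np

def cull_patch(ctrl_pts, plane):
    """Conservative cull: if *all* control points are behind the plane,
    the whole patch (convex hull of its control points) is invisible and
    can be discarded before the expensive tessellation stage."""
    n, d = plane  # plane as (normal, offset); visible half-space: n.x + d >= 0
    return all(np.dot(n, p) + d < 0.0 for p in ctrl_pts)
```

The test is conservative in the safe direction: a patch that is only partly behind the plane is never culled, so the image is unchanged while fully hidden patches skip tessellation.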

    Discrete Riemannian Calculus and A Posteriori Error Control on Shape Spaces

    Get PDF
    In this thesis, a novel discrete approximation of the curvature tensor on Riemannian manifolds is derived, efficient methods to interpolate and extrapolate images in the context of the time discrete metamorphosis model are analyzed, and an a posteriori error estimator for the binary Mumford–Shah model is examined. Departing from the variational time discretization on (possibly infinite-dimensional) Riemannian manifolds originally proposed by Rumpf and Wirth, in which a consistent time discrete approximation of geodesic curves, the logarithm, the exponential map and parallel transport is analyzed, we construct the discrete curvature tensor and prove its convergence under certain smoothness assumptions. To this end, several time discrete parallel transports are applied to suitably rescaled tangent vectors, where each parallel transport is computed using Schild's ladder. The associated convergence proof essentially relies on multiple Taylor expansions incorporating symmetry and scaling relations. In several numerical examples we validate this approach for surfaces. The by-now classical flow-of-diffeomorphisms approach allows the transport of image intensities along paths in time, which are characterized by diffeomorphisms, and the brightness of each image particle is assumed to be constant along each trajectory. As an extension, the metamorphosis model proposed by Trouvé, Younes and coworkers allows for intensity variations of the image particles along the paths, which is reflected by an additional penalization term appearing in the energy functional that quantifies the squared weak material derivative. Taking into account the aforementioned time discretization, we propose a time discrete metamorphosis model in which the associated time discrete path energy consists of the sum of squared L2-mismatch functionals of successive square-integrable image intensity functions and a regularization functional for pairwise deformations.
Our main contributions are the existence proof of time discrete geodesic curves in the context of this model, which are defined as minimizers of the time discrete path energy, and the proof of the Mosco-convergence of a suitable interpolation of the time discrete to the time continuous path energy with respect to the L2-topology. Using an alternating update scheme as well as a multilinear finite element discretization for the images and a cubic spline discretization for the deformations makes it possible to efficiently compute time discrete geodesic curves. In several numerical examples we demonstrate that time discrete geodesics can be robustly computed for gray-scale and color images. Taking into account the time discretization of the metamorphosis model, we define the discrete exponential map in the space of images, which allows image extrapolation of arbitrary length for given weakly differentiable initial images and variations. To this end, starting from a suitable reformulation of the Euler–Lagrange equations characterizing the one-step extrapolation, a fixed point iteration is employed to establish the existence of critical points of the Euler–Lagrange equations provided that the initial variation is small in L2. In combination with an implicit function type argument requiring H1-closeness of the initial variation, one can prove the local existence as well as the local uniqueness of the discrete exponential map. The numerical algorithm for the one-step extrapolation is based on a slightly modified fixed point iteration using a spatial Galerkin scheme to obtain the optimal deformation associated with the unknown image, from which the unknown image itself can be recovered. To prove the applicability of the proposed method we compute the extrapolated image path for real image data. A common tool to segment images and shapes into multiple regions was developed by Mumford and Shah.
The starting point to derive a posteriori error estimates for the binary Mumford–Shah model, which is obtained by restricting the original model to two regions, is a uniformly convex and non-constrained relaxation of the binary model following the work by Chambolle and Berkels. In particular, minimizers of the binary model can be exactly recovered from minimizers of the relaxed model via thresholding. Then, applying duality techniques proposed by Repin and Bartels allows deriving a consistent functional a posteriori error estimate for the relaxed model. Afterwards, an a posteriori error estimate for the original binary model can be computed incorporating a suitable cut-out argument in combination with the functional error estimate. To calculate minimizers of the relaxed model on an adaptive mesh described by a quadtree structure, we employ a primal-dual as well as a purely dual algorithm. The quality of the error estimator is analyzed for different gray-scale input images.
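The relax-then-threshold idea for the binary Mumford–Shah model can be demonstrated on a 1D signal. This is a toy projected-gradient solver for a convex-relaxation functional of the kind described (smoothed total variation plus a linear fidelity term, with thresholding at ½), not the primal-dual/dual quadtree algorithms or the error estimator from the thesis:

```python
import numpy as np

def relaxed_binary_segmentation(f, c1, c2, lam=4.0, tau=0.05, iters=2000, eps=1e-3):
    """Toy projected-gradient solver for a convex relaxation of the
    binary Mumford-Shah model on a 1D signal f:
        min_{0 <= u <= 1}  TV_eps(u) + lam * <u, (f-c1)^2 - (f-c2)^2>,
    followed by thresholding u at 1/2, which for the exact relaxed
    model recovers a minimizer of the binary problem."""
    u = np.full_like(f, 0.5)
    r = lam * ((f - c1) ** 2 - (f - c2) ** 2)  # fidelity gradient (constant in u)
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps ** 2)   # derivative of smoothed TV
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= w
        tv_grad[1:] += w
        u = np.clip(u - tau * (tv_grad + r), 0.0, 1.0)
    return (u > 0.5).astype(int)
```

On a clean step signal the relaxed minimizer is already nearly binary, and thresholding recovers the two regions exactly; the point of the relaxation is that the functional is convex, so duality-based error bounds become available.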

    Structural Shape Optimization Based On The Use Of Cartesian Grids

    Full text link
    Thesis by compendium. As ever more challenging designs are required in present-day industries, the traditional trial-and-error procedure frequently used for designing mechanical parts slows down the design process and yields suboptimal designs, so that new approaches are needed to obtain a competitive advantage. With the ascent of the Finite Element Method (FEM) in the engineering community in the 1970s, structural shape optimization arose as a promising area of application. However, due to the iterative nature of shape optimization processes, the handling of large quantities of numerical models along with the approximated character of numerical methods may even dissuade the use of these techniques (or fail to exploit their full potential), because the development time of new products is becoming ever shorter. This Thesis is concerned with the formulation of a 3D methodology based on the Cartesian-grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. This methodology belongs to the category of embedded (or fictitious) domain discretization techniques, in which the key concept is to extend the structural analysis problem to an easy-to-mesh approximation domain that encloses the physical domain boundary. The use of Cartesian grids provides a natural platform for structural shape optimization because the numerical domain is separated from the physical model, which can easily be changed during the optimization procedure without altering the background discretization. Another advantage is the fact that mesh generation becomes a trivial task, since the discretization of the numerical domain and its manipulation, in combination with an efficient hierarchical data structure, can be exploited to save computational effort. However, these advantages are challenged by several numerical issues.
Basically, the computational effort has moved from the use of expensive meshing algorithms towards the use of, for example, elaborate numerical integration schemes designed to capture the mismatch between the geometrical domain boundary and the embedding finite element mesh. To do this we used a stabilized formulation to impose boundary conditions and developed novel techniques to capture the exact boundary representation of the models. To complete the implementation of a structural shape optimization method, an adjoint formulation is used to compute the design sensitivities required by gradient-based algorithms. The derivatives are not only variables required by the optimization process, but also a powerful tool for projecting information between different designs, or even for creating h-adapted meshes without going through a full h-adaptive refinement process. The proposed improvements are reflected in the numerical examples included in this Thesis. These analyses clearly show the improved behavior of the cgFEM technology as regards numerical accuracy and computational efficiency, and consequently the suitability of the cgFEM approach for shape optimization or contact problems.
    Marco Alacid, O. (2017). Structural Shape Optimization Based On The Use Of Cartesian Grids [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86195
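The central cgFEM idea, a fixed Cartesian background grid in which only the cell classification changes as the geometry evolves, can be sketched as follows. The corner-sign test is a deliberate simplification (cgFEM's actual exact-boundary capture and cut-cell integration are far more elaborate); the circle level-set is a hypothetical example domain:

```python
def classify_cells(phi, nx, ny, h):
    """Cartesian-grid sketch: classify each cell of an nx-by-ny grid over
    [0, nx*h] x [0, ny*h] against an implicit domain phi(x, y) <= 0 by
    the signs of phi at the cell corners: 'in', 'out', or 'cut' (a cut
    cell is one that would need a special integration scheme in cgFEM)."""
    labels = {}
    for i in range(nx):
        for j in range(ny):
            corners = [phi(i * h, j * h), phi((i + 1) * h, j * h),
                       phi(i * h, (j + 1) * h), phi((i + 1) * h, (j + 1) * h)]
            if all(c <= 0 for c in corners):
                labels[(i, j)] = 'in'
            elif all(c > 0 for c in corners):
                labels[(i, j)] = 'out'
            else:
                labels[(i, j)] = 'cut'
    return labels

# hypothetical domain: disk of radius 0.5 centered at (0.5, 0.5),
# classified on a 4x4 background grid over the unit square
circle = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 - 0.25
labels = classify_cells(circle, 4, 4, 0.25)
```

When the design boundary moves during optimization, only `phi` changes; the background grid, and everything hanging off it (data structures, uncut-cell stiffness matrices), stays fixed, which is precisely the advantage the abstract describes.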