
    Solid modelling for manufacturing: from Voelcker's boundary evaluation to discrete paradigms

    Herb Voelcker and his research team laid the foundations of Solid Modelling, on which Computer-Aided Design is based. He founded the ambitious Production Automation Project, which included Constructive Solid Geometry (CSG) as its basic 3D geometric representation. CSG trees were compact and robust, saving memory at a time when it was scarce. The main computational problem, however, was Boundary Evaluation: converting CSG trees into Boundary Representations (BReps) with explicit faces, edges and vertices for manufacturing and visualization purposes. This paper presents glimpses of the history and evolution of ideas that started with Herb Voelcker. We briefly describe the path from “localization and boundary evaluation” to “localization and printing”, with many intermediate steps driven by hardware, software and new mathematical tools: voxel and volume representations, triangle meshes, and many others. We also observe that in some applications voxel models no longer require Boundary Evaluation; for this case, we consider the current research challenges and discuss several avenues for further research. Project TIN2017-88515-C2-1-R funded by MCIN/AEI/10.13039/501100011033/FEDER “A way to make Europe”.
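    To make the representational contrast concrete, here is a minimal sketch of a CSG tree with point-membership classification, the basic query that Boundary Evaluation must answer over and over: the solid is defined compactly and unambiguously, but with no explicit faces, edges or vertices. The primitive set and node layout are illustrative assumptions, not Voelcker's original PADL data structures.

        from dataclasses import dataclass

        @dataclass
        class Sphere:
            cx: float; cy: float; cz: float; r: float
            def inside(self, x, y, z):
                # point-membership test against the primitive
                return (x - self.cx)**2 + (y - self.cy)**2 + (z - self.cz)**2 <= self.r**2

        @dataclass
        class Op:
            kind: str        # 'union' | 'intersect' | 'subtract'
            left: object
            right: object
            def inside(self, x, y, z):
                a = self.left.inside(x, y, z)
                b = self.right.inside(x, y, z)
                if self.kind == 'union':
                    return a or b
                if self.kind == 'intersect':
                    return a and b
                return a and not b   # 'subtract'

        # two fused spheres with a hole: the tree stores only primitives and
        # operations; recovering explicit faces is Boundary Evaluation
        solid = Op('subtract',
                   Op('union', Sphere(0, 0, 0, 1), Sphere(1, 0, 0, 1)),
                   Sphere(0.5, 0, 0, 0.4))
        print(solid.inside(0.5, 0.8, 0.0))   # True: inside the fused lobes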

    An arbitrary high-order Discontinuous Galerkin method for elastic waves on unstructured meshes - V. Local time stepping and p-adaptivity

    This article describes the extension of the arbitrary high-order Discontinuous Galerkin (ADER-DG) method to treat locally varying polynomial degrees of the basis functions, so-called p-adaptivity, as well as locally varying time steps that may differ from one element to another. The p-adaptive version of the scheme is useful in complex 3-D models with small-scale features that must be meshed with reasonably small elements to capture the necessary geometrical details of interest. Using a constant high polynomial degree of the basis functions in the whole computational domain can lead to an unreasonably high CPU effort, since the fine mesh may already provide good spatial resolution at the surface. In such cases it can be more appropriate to use a lower-order method in the small elements to reduce the CPU effort without losing much accuracy. To further increase computational efficiency, we present a new local time stepping (LTS) algorithm. For usual explicit time stepping schemes, the element with the smallest time step resulting from the stability criterion of the method dictates its time step to all other elements of the computational domain. In contrast, with local time stepping each element can use the optimal time step given by its local stability condition. Our proposed LTS algorithm for ADER-DG is very general and needs no temporal synchronization between the elements. Due to the ADER approach, accurate time interpolation is automatically provided at the element interfaces, so the computational overhead is very small and the method maintains the uniform high order of accuracy in space and time of the usual ADER-DG schemes with a globally constant time step. However, the LTS ADER-DG method is computationally much more efficient for problems with strongly varying element size or material parameters, since it considerably reduces the total number of element updates. This holds especially for unstructured tetrahedral meshes that contain strongly degenerate elements, so-called slivers. We show numerical convergence results and CPU times for LTS ADER-DG schemes up to sixth order in space and time on irregular tetrahedral meshes containing elements of very different size, and also on tetrahedral meshes containing slivers. Further validation of the algorithm is provided by results obtained for the layer over half-space (LOH.1) benchmark problem proposed by the Pacific Earthquake Engineering Research Center. Finally, we present a realistic application to earthquake modelling and ground motion prediction for the alpine valley of Grenoble.
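    As a rough illustration of the local time stepping idea, the sketch below advances each element with its own stable step, always updating the element whose local time is smallest, with no global synchronization point. The ADER-DG update itself, including the time interpolation of neighbour data at the interfaces, is stubbed out as a user-supplied callback; the scheduling logic is an assumption consistent with the description above, not the paper's algorithm.

        import heapq

        def local_time_stepping(elements, dt_local, t_end, update):
            """elements: element ids; dt_local[e]: locally stable step of e;
            update(e, t, dt): advances element e from t to t + dt (the ADER-DG
            update, which interpolates neighbour data in time at interfaces)."""
            heap = [(0.0, e) for e in elements]   # (current local time, element)
            heapq.heapify(heap)
            while heap:
                t_now, e = heapq.heappop(heap)    # element lagging furthest behind
                if t_now >= t_end:
                    continue                      # this element is done
                dt = min(dt_local[e], t_end - t_now)
                update(e, t_now, dt)              # no global synchronization needed
                heapq.heappush(heap, (t_now + dt, e))

        # toy usage: three elements, one with a ten times smaller stable step
        local_time_stepping(
            elements=[0, 1, 2],
            dt_local={0: 0.1, 1: 0.1, 2: 0.01},
            t_end=1.0,
            update=lambda e, t, dt: None,         # real solver work goes here
        )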

    A package for 3-D unstructured grid generation, finite-element flow solution and flow field visualization

    A set of computer programs for 3-D unstructured grid generation, fluid flow calculations, and flow field visualization was developed. The grid generation program, called VGRID3D, generates grids over complex configurations using the advancing front method, in which point and element generation are accomplished simultaneously. VPLOT3D is an interactive, menu-driven pre- and post-processor graphics program for interpolation and display of unstructured grid data. The flow solver, VFLOW3D, is an Euler equation solver based on an explicit, two-step Taylor-Galerkin algorithm which uses the Flux Corrected Transport (FCT) concept for a wiggle-free solution. Using these programs, increasingly complex 3-D configurations of interest to the aerospace community were gridded, including a complete Space Transportation System comprising the space shuttle orbiter, the solid rocket boosters, and the external tank. Flow solutions were obtained for various configurations in the subsonic, transonic, and supersonic flow regimes.
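    As a concrete illustration of the FCT concept, here is a textbook one-dimensional FCT step for linear advection: a monotone low-order (upwind) flux is corrected by a limited antidiffusive portion of a high-order (Lax-Wendroff) flux, so the solution stays sharp without developing wiggles. This is the generic Zalesak-style limiter for a scalar model problem, an assumption for illustration rather than the VFLOW3D implementation.

        import numpy as np

        def fct_step(u, cfl):
            """One periodic FCT step for u_t + u_x = 0 with unit advection speed.
            Index i of each flux array refers to the interface i + 1/2."""
            up = np.roll(u, -1)                            # u[i+1]
            f_lo = u                                       # upwind flux (monotone)
            f_hi = 0.5 * (u + up) - 0.5 * cfl * (up - u)   # Lax-Wendroff flux
            a = f_hi - f_lo                                # antidiffusive flux
            # low-order transported solution: wiggle-free but diffusive
            u_td = u - cfl * (f_lo - np.roll(f_lo, 1))
            # local bounds the corrected solution must not exceed
            u_max = np.maximum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
            u_min = np.minimum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
            # Zalesak limiter: antidiffusive in/outflow per cell ...
            p_plus = np.maximum(np.roll(a, 1), 0) - np.minimum(a, 0)
            p_minus = np.maximum(a, 0) - np.minimum(np.roll(a, 1), 0)
            # ... versus the room left before a new extremum would appear
            r_plus = np.clip((u_max - u_td) / (cfl * p_plus + 1e-30), 0, 1)
            r_minus = np.clip((u_td - u_min) / (cfl * p_minus + 1e-30), 0, 1)
            c = np.where(a >= 0,
                         np.minimum(np.roll(r_plus, -1), r_minus),
                         np.minimum(r_plus, np.roll(r_minus, -1)))
            return u_td - cfl * (c * a - np.roll(c * a, 1))

        # a square pulse advected one full period stays between 0 and 1
        u = np.where((np.arange(200) > 40) & (np.arange(200) < 80), 1.0, 0.0)
        for _ in range(400):                               # cfl = 0.5
            u = fct_step(u, 0.5)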

    Comparison of body‐fitted, embedded and immersed solutions of low Reynolds‐number 3‐D incompressible flows

    The solutions obtained for low Reynolds-number incompressible flows using the same flow solver and solution technique on body-fitted, embedded surface and immersed body grids of similar size are compared. The cases considered are a sphere at Re = 100 and an idealized stented aneurysm. It is found that the solutions obtained with all of these techniques converge to the same grid-independent solution. On coarser grids, the effect of higher-order boundary conditions is noticeable. Therefore, if the manual labor required to set up a body-fitted domain is excessive (as is often the case for patient-specific geometries with medical devices), and/or computing resources are plentiful, the embedded surface and immersed body approaches become very attractive options.
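    To make the distinction concrete: an immersed body method keeps a simple background grid and merely classifies which points lie inside the body, forcing the body velocity there, instead of fitting the mesh to the geometry. The minimal sketch below shows this classification step for a sphere, matching the Re = 100 test case; the flow solver itself, and the higher-order boundary treatment the comparison refers to, are omitted, and all grid parameters are assumptions.

        import numpy as np

        def immersed_mask(xs, ys, zs, center, radius):
            """Boolean mask of background grid points inside a sphere."""
            X, Y, Z = np.meshgrid(xs, ys, zs, indexing='ij')
            d2 = (X - center[0])**2 + (Y - center[1])**2 + (Z - center[2])**2
            return d2 <= radius**2

        n = 64
        xs = ys = zs = np.linspace(-2.0, 2.0, n)    # uniform background grid
        inside = immersed_mask(xs, ys, zs, center=(0.0, 0.0, 0.0), radius=0.5)

        u = np.zeros((n, n, n, 3))
        u[..., 0] = 1.0                             # uniform freestream in x
        u[inside] = 0.0                             # no-slip forcing on the body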

    Texture-Based Segmentation and Finite Element Mesh Generation for Heterogeneous Biological Image Data

    The design, analysis, and control of bio-systems remain an engineering challenge, mainly due to the material heterogeneity, boundary irregularity, and nonlinear dynamics associated with these systems. Recent developments in imaging techniques and stochastic upscaling methods provide a window of opportunity to assess these bio-systems more accurately than ever before. However, using image data directly in an upscaled stochastic framework requires the development of certain intermediate steps. The goal of the research presented in this dissertation is to develop a texture-based segmentation method and an unstructured mesh generation technique for heterogeneous image data. The following two new techniques are described and evaluated:

    1. A new texture-based segmentation method, using stochastic continuum concepts and wavelet multi-resolution analysis, is developed for characterizing heterogeneous materials in image data. Feature descriptors are developed to efficiently capture the micro-scale heterogeneity of macro-scale entities. The materials are then segmented at a representative elementary scale at which the statistics of the feature descriptors stabilize.

    2. A new unstructured mesh generation technique for image data is developed using a hierarchical data structure (see the sketch below). This representation allows quality-guaranteed finite element meshes to be generated.

    The framework underlying both methods allows them to be extended to higher dimensions. Experimental results indicate that both methods are promising tools for unifying data processing concepts within the upscaled stochastic framework across biological systems. They are targeted for inclusion in decision support systems where biological image data, simulation techniques and artificial intelligence will be used conjunctively and uniformly to assess bio-system quality and to design effective and appropriate treatments that restore system health.
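    Here is a minimal sketch of the hierarchical idea behind the second technique: a quadtree subdivides the image until each patch is homogeneous, and the leaves become element-sized regions. The homogeneity test (intensity variance) and the threshold are assumptions for illustration; the dissertation's method additionally guarantees element quality, which this sketch does not.

        import numpy as np

        def quadtree(img, x, y, w, h, tol, leaves):
            """Split img[y:y+h, x:x+w] until its variance falls below tol."""
            block = img[y:y+h, x:x+w]
            if w <= 2 or h <= 2 or block.var() <= tol:
                leaves.append((x, y, w, h))       # leaf patch -> one element
                return
            hw, hh = w // 2, h // 2               # recurse into the four quadrants
            quadtree(img, x,      y,      hw,     hh,     tol, leaves)
            quadtree(img, x + hw, y,      w - hw, hh,     tol, leaves)
            quadtree(img, x,      y + hh, hw,     h - hh, tol, leaves)
            quadtree(img, x + hw, y + hh, w - hw, h - hh, tol, leaves)

        img = np.random.rand(128, 128)            # stand-in for segmented image data
        leaves = []
        quadtree(img, 0, 0, 128, 128, tol=0.05, leaves=leaves)
        print(len(leaves), "candidate elements")  # small patches where texture varies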

    Granite: A scientific database model and implementation

    The principal goal of this research was to develop a formal comprehensive model for representing highly complex scientific data. An effective model should provide a conceptually uniform way to represent data and should serve as a framework for the implementation of an efficient and easy-to-use software environment that implements the model. The dissertation work presented here describes such a model and its contributions to the field of scientific databases. In particular, the Granite model encompasses a wide variety of datatypes used across many disciplines of science and engineering today. It is unique in that it defines dataset geometry and topology as separate conceptual components of a scientific dataset. We provide a novel classification of geometries and topologies that has important practical implications for a scientific database implementation. The Granite model also offers integrated support for multiresolution and adaptive resolution data. Many of these ideas have been addressed by others, but no one has tried to bring them all together in a single comprehensive model. The datasource portion of the Granite model offers several further contributions. In addition to providing a convenient conceptual view of rectilinear data, it also supports multisource data: data can be taken from various sources and combined into a unified view. The rod storage model is an abstraction for file storage that has proven an effective platform for developing efficient access to storage. Our spatial prefetching technique is built upon the rod storage model; it demonstrates very significant improvement in access to scientific datasets and allows access to data far too large to fit in main memory. These improvements bring the extremely large datasets now being generated in many scientific fields into the realm of tractability for the ordinary researcher. We validated the feasibility and viability of the model by implementing a significant portion of it in the Granite system. Extensive performance evaluations of the implementation indicate that the features of the model can be provided in a user-friendly manner with an efficiency that is competitive with more ad hoc systems and more specialized, application-specific solutions.
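    As a schematic illustration of the spatial prefetching idea, the sketch below stores a rectilinear volume in fixed-size chunks and, on each access, also loads the face-neighbouring chunks, betting that scientific access patterns are spatially coherent. The chunk layout and cache policy are assumptions for illustration only; they are not Granite's actual rod storage implementation.

        import numpy as np

        class ChunkedVolume:
            """Illustrative chunked store; 'data' stands in for an on-disk volume."""
            def __init__(self, data, chunk=32):
                self.data = data
                self.chunk = chunk
                self.cache = {}                   # (ci, cj, ck) -> loaded block

            def _load(self, ci, cj, ck):
                key = (ci, cj, ck)
                if key not in self.cache:         # simulate a disk read
                    c = self.chunk
                    self.cache[key] = self.data[ci*c:(ci+1)*c,
                                                cj*c:(cj+1)*c,
                                                ck*c:(ck+1)*c]
                return self.cache[key]

            def read(self, i, j, k):
                c = self.chunk
                ci, cj, ck = i // c, j // c, k // c
                block = self._load(ci, cj, ck)    # the demanded chunk
                nchunks = tuple(s // c for s in self.data.shape)
                # spatial prefetch: warm the six face-neighbour chunks as well
                for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    ni, nj, nk = ci + di, cj + dj, ck + dk
                    if (0 <= ni < nchunks[0] and 0 <= nj < nchunks[1]
                            and 0 <= nk < nchunks[2]):
                        self._load(ni, nj, nk)
                return block[i % c, j % c, k % c]

        vol = ChunkedVolume(np.random.rand(128, 128, 128), chunk=32)
        print(vol.read(10, 20, 30))   # neighbouring chunks are now cached too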