
    Blaze-DEMGPU: Modular high performance DEM framework for the GPU architecture

    Blaze-DEMGPU is a modular GPU-based discrete element method (DEM) framework that supports polyhedral-shaped particles. Its high performance is attributed to the lightweight threading and Single Instruction Multiple Data (SIMD) execution that the GPU architecture offers. Blaze-DEMGPU provides suitable algorithms to conduct DEM simulations on the GPU, and these algorithms can be extended and modified. Since a large number of scientific simulations are particle based, many of the algorithms and strategies for GPU implementation present in Blaze-DEMGPU can be applied to other fields. Blaze-DEMGPU will make it easier for new researchers to use high-performance GPU computing and should stimulate wider GPU research efforts by the DEM community.
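    As a rough illustration of the data-parallel, one-thread-per-particle style of work that the abstract attributes to the GPU's SIMD execution, the sketch below performs a vectorised explicit integration step over all particles at once. It is not Blaze-DEMGPU code; the NumPy arrays, the linear spring-dashpot contact with a flat floor, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (not Blaze-DEMGPU code): a data-parallel explicit integration
# step over all particles at once, mimicking the per-particle SIMD work a GPU
# kernel would perform. Contact model and parameters are illustrative only.
import numpy as np

def integrate_step(pos, vel, radius, dt, k=1.0e4, c=5.0, mass=1.0, g=9.81):
    """Advance all particles by one explicit time step (vectorised)."""
    # Penetration depth against a horizontal floor at z = 0
    overlap = np.maximum(0.0, radius - pos[:, 2])
    # Normal contact force: linear spring-dashpot, active only while in contact
    f_normal = k * overlap - c * vel[:, 2] * (overlap > 0)
    force = np.zeros_like(pos)
    force[:, 2] = f_normal - mass * g
    # Semi-implicit Euler update, one "lane" of work per particle
    vel = vel + (force / mass) * dt
    pos = pos + vel * dt
    return pos, vel

# Example: one million particles updated in a single vectorised call
n = 1_000_000
pos = np.random.rand(n, 3) * 10.0
vel = np.zeros((n, 3))
radius = np.full(n, 0.05)
pos, vel = integrate_step(pos, vel, radius, dt=1e-4)
```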

    A Novel and Fully Automated Domain Transformation Scheme for Near Optimal Surrogate Construction

    Recent developments in surrogate construction have predominantly focused on two strategies to improve surrogate accuracy. The first is component-wise domain scaling informed by cross-validation. The second is regression to construct response surfaces using additional information in the form of function values sampled from multi-fidelity models and gradients. Component-wise domain scaling reliably improves surrogate quality in low dimensions but has been shown to suffer from high computational costs for higher dimensional problems. The second strategy, adding gradients to train surrogates, typically results in regression surrogates. Counter-intuitively, these gradient-enhanced regression-based surrogates do not exhibit improved accuracy compared to surrogates that only interpolate function values. This study empirically establishes three main findings. Firstly, constructing the surrogate in poorly scaled domains is the predominant cause of deteriorating response surfaces when regressing with additional gradient information. Secondly, surrogate accuracy improves if the surrogates are constructed in a fully transformed domain, obtained by scaling and rotating the original domain rather than merely scaling it. The domain transformation scheme should be based on the local curvature of the approximation surface and not its global curvature. Thirdly, the main benefit of gradient information is to efficiently determine the (near) optimal domain in which to construct the surrogate. This study proposes a foundational transformation algorithm that performs near-optimal transformations for lower dimensional problems. The algorithm consistently outperforms cross-validation-based component-wise domain scaling for higher dimensional problems. A carefully selected test problem set that varies between 2- and 16-dimensional problems is used to clearly demonstrate the three main findings of this study. Comment: 20 pages, 28 figures
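    The following sketch illustrates the core idea of constructing a surrogate in a scaled and rotated domain derived from local curvature, which the abstract argues is the main benefit of gradient information. The quadratic stand-in objective, the least-squares Hessian estimate from sampled gradients, and the choice of an RBF surrogate are assumptions for illustration only, not the paper's exact algorithm.

```python
# Hedged sketch: estimate local curvature from sampled gradients, use its
# eigendecomposition to rotate and scale the domain, then fit the surrogate
# in the transformed domain. Test function and method choices are assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
d, n = 4, 80

# Poorly scaled, rotated quadratic as a stand-in objective with analytic gradients
A = rng.standard_normal((d, d))
H_true = A @ A.T + 0.1 * np.eye(d)
f = lambda X: 0.5 * np.einsum('ij,jk,ik->i', X, H_true, X)
grad = lambda X: X @ H_true.T

X = rng.uniform(-1.0, 1.0, size=(n, d))
G = grad(X)

# Estimate the curvature H from gradients: solve G ~= X @ H for a symmetric H
H_est, *_ = np.linalg.lstsq(X, G, rcond=None)
H_est = 0.5 * (H_est + H_est.T)

# Eigendecomposition gives the rotation (Q) and per-direction scaling (1/sqrt(eig))
eigval, Q = np.linalg.eigh(H_est)
scale = 1.0 / np.sqrt(np.clip(eigval, 1e-8, None))
transform = lambda X: (X @ Q) * scale          # rotate, then scale component-wise

# Build the surrogate in the transformed domain and check accuracy on test points
surrogate = RBFInterpolator(transform(X), f(X))
X_test = rng.uniform(-1.0, 1.0, size=(200, d))
err = np.max(np.abs(surrogate(transform(X_test)) - f(X_test)))
print(f"max surrogate error in transformed domain: {err:.3e}")
```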

    Optimal Design of a Piezoelectric Transducer for Exciting Guided Wave Ultrasound in Rails

    An existing Ultrasonic Broken Rail Detection System [1] installed in South Africa on a heavy-duty railway line is currently being upgraded to include defect detection and location. To accomplish this, an ultrasonic piezoelectric transducer is required that strongly excites a guided wave mode with energy concentrated in the web (web mode) of a rail. A previous study [2] demonstrated that the recently developed SAFE-3D (Semi-Analytical Finite Element – 3 Dimensional) method can effectively predict the guided waves excited by a resonant piezoelectric transducer. In this study, the SAFE-3D model is used in the design optimization of a rail web transducer. A bound-constrained optimization problem was formulated to maximize the energy transmitted by the transducer in the web mode when driven by a pre-defined excitation signal. Dimensions of the transducer components were selected as the three design variables. A Latin hypercube sampled design of experiments that required a total of 500 SAFE-3D analyses in the design space was employed in a response surface-based optimization approach. The Nelder-Mead optimization algorithm was then used to find an optimal transducer design on the constructed response surface. The radial basis function response surface was first verified by comparing a number of predicted responses against the computed SAFE-3D responses. The performance of the optimal transducer predicted by the optimization algorithm on the response surface was also verified to be sufficiently accurate using SAFE-3D. The computational advantages of SAFE-3D in transducer design are noteworthy given that more than 500 analyses were required. The optimal design was then manufactured, and experimental measurements were used to validate the predicted performance. The adopted design method has demonstrated the capability to automate the design of transducers for a particular rail cross-section and frequency range.
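    The optimisation workflow described above (Latin hypercube design of experiments, radial basis function response surface, Nelder-Mead search on the surface) can be sketched as follows. The expensive SAFE-3D analysis is replaced here by a hypothetical stand-in function `transducer_energy`, and the design-variable bounds are assumed values, so this is a structural sketch rather than the study's implementation.

```python
# Hedged sketch of the response-surface optimisation workflow: LHS sampling,
# an RBF surrogate of the (here hypothetical) SAFE-3D energy response, and
# Nelder-Mead on the surrogate. Bounds and stand-in objective are assumptions.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

lb = np.array([5.0, 10.0, 2.0])     # assumed lower bounds on 3 transducer dimensions [mm]
ub = np.array([15.0, 30.0, 8.0])    # assumed upper bounds [mm]

def transducer_energy(x):
    """Placeholder for one SAFE-3D analysis: energy in the web mode (to maximise)."""
    target = np.array([10.0, 22.0, 5.0])
    return float(np.exp(-np.sum(((x - target) / (ub - lb)) ** 2)))

# Latin hypercube design of experiments (500 analyses, as in the study)
sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(n=500), lb, ub)
y = np.array([transducer_energy(x) for x in X])

# Radial basis function response surface of the sampled energies
surface = RBFInterpolator(X, y)

# Nelder-Mead on the surrogate (negated, since transmitted energy is maximised)
x0 = 0.5 * (lb + ub)
res = minimize(lambda x: -surface(np.atleast_2d(np.clip(x, lb, ub)))[0],
               x0, method="Nelder-Mead")
print("optimal design (mm):", np.clip(res.x, lb, ub))
```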

    Potential for interactive design simulations in discrete element modelling

    This study investigates the potential for combining lower fidelity models with high-performance solution strategies, such as efficient graphical processing unit (GPU) based discrete element modelling (DEM), to not only run simulations faster but also differently. Specifically, this study investigates interactive simulation and design, for which the simulation environment BlazeDEM-GPU was developed, allowing researchers and engineers to interact with simulations. The initial results are promising and warrant extensive future research, which may allow for the development of alternative paradigms. In addition to the design cycle, interactive simulation and design can play an invaluable role in education as an in-house corporate training tool with which young engineers can actively train and develop an understanding of specific industrial processes. It would also allow engineers to conduct just-in-time (JIT) simulation-based assessments of processes before commencing actual site visits, allowing for shorter and more focussed site excursions.

    Validation of the gpu based blaze-dem framework for hopper discharge

    Understanding the dynamical behavior of particulate materials is extremely important to many industrial processes, with typical applications that range from hopper flows in agriculture to tumbling mills in the mining industry. The discrete element method (DEM) has become the de facto standard to simulate particulate materials. The DEM is a computationally intensive numerical approach that is limited to a moderate number (thousands) of particles when considering fully coupled, densely packed systems modeled with realistic particle shapes and history-dependent constitutive relationships. A large number (millions) of particles can be simulated when the coupling between particles is relaxed, which still accurately simulates less dense systems. Massively large-scale simulations (tens of millions) are possible when particle shapes are simplified; however, this may lead to oversimplification when an accurate representation of the particle shape is essential to capture the macroscopic transport of particulates. Polyhedra represent the geometry of most convex particulate materials well and, when combined with appropriate contact models, predict mechanical behavior that is realistic compared to that of the actual system. Detecting collisions between polyhedra is computationally expensive, often limiting simulations to only hundreds of thousands of particles. However, the computational architecture, e.g. CPU or GPU, plays a significant role in the performance that can be realized. The parallel nature of the GPU allows a large number of simple independent processes to be executed in parallel. This results in a significant speed-up over conventional implementations utilizing the Central Processing Unit (CPU) architecture when algorithms are well aligned and optimized for the threading model of the GPU. We recently introduced the BLAZE-DEM framework for the GPU architecture that can model millions of spherical and polyhedral particles in a realistic time frame using a single GPU. In this paper we validate BLAZE-DEM for hopper discharge simulations. We first compare the flow rates and patterns of polyhedra and spheres obtained with experiment to those of DEM. We then compare flow rates between spheres and polyhedra to gauge the effect of particle shape. Finally, we perform a large-scale DEM simulation using 16 million particles to illustrate the capability of BLAZE-DEM to predict bulk flow in realistic hoppers.
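    To illustrate why collision detection dominates the cost and why it maps well onto one GPU thread per particle, the sketch below implements a uniform-grid broad phase for spheres: each particle is binned into a cell sized to the largest particle diameter, and only neighbouring cells are searched for candidate contacts. This is a generic CPU illustration of the technique, not BLAZE-DEM code.

```python
# Illustrative uniform-grid broad phase for spherical particles (not BLAZE-DEM).
# Binning particles into cells is embarrassingly parallel and is the kind of
# work that maps naturally onto one GPU thread per particle.
import numpy as np
from collections import defaultdict

def broad_phase_pairs(pos, radius):
    """Return contacting pairs (i, j) with i < j found via uniform-grid binning."""
    cell_size = 2.0 * radius.max()
    cells = np.floor(pos / cell_size).astype(np.int64)
    grid = defaultdict(list)
    for idx, c in enumerate(map(tuple, cells)):
        grid[c].append(idx)

    pairs = []
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    for c, members in grid.items():
        for off in offsets:
            nbr = (c[0] + off[0], c[1] + off[1], c[2] + off[2])
            for i in members:
                for j in grid.get(nbr, ()):
                    # Keep each pair once and confirm actual sphere overlap
                    if i < j and np.linalg.norm(pos[i] - pos[j]) < radius[i] + radius[j]:
                        pairs.append((i, j))
    return pairs

pos = np.random.rand(2000, 3)
radius = np.full(2000, 0.01)
print(len(broad_phase_pairs(pos, radius)), "contacts detected")
```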

    A spectral regularisation framework for latent variable models designed for single channel applications

    Latent variable models (LVMs) are commonly used to capture the underlying dependencies, patterns, and hidden structure in observed data. Source duplication is a by-product of the data hankelisation pre-processing step common to single channel LVM applications, which hinders practical LVM utilisation. In this article, a Python package titled spectrally-regularised-LVMs is presented. The proposed package addresses the source duplication issue via the addition of a novel spectral regularisation term. This package provides a framework for spectral regularisation in single channel LVM applications, thereby making it easier to investigate and utilise LVMs with spectral regularisation. This is achieved via the use of symbolic or explicit representations of potential LVM objective functions, which are incorporated into a framework that applies spectral regularisation during the LVM parameter estimation process. The objective of this package is to provide a consistent linear LVM optimisation framework which incorporates spectral regularisation and caters to single channel time-series applications. Comment: 15 pages; 6 figures; 1 table; github; submitted to Software
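    For readers unfamiliar with the hankelisation pre-processing step mentioned above, the sketch below embeds a single-channel signal into a matrix of lagged, overlapping windows so that multivariate LVMs can be applied to it. This is a generic illustration; the function name and parameters are assumptions and do not reflect the spectrally-regularised-LVMs package API.

```python
# Minimal sketch of hankelisation for single channel LVM applications:
# a 1-D time series is embedded into a lagged (Hankel-structured) matrix.
# Generic illustration only, not the spectrally-regularised-LVMs API.
import numpy as np

def hankelise(signal, window, step=1):
    """Embed a 1-D signal into overlapping windows (rows of a Hankel-structured matrix)."""
    n_windows = (len(signal) - window) // step + 1
    idx = np.arange(window)[None, :] + step * np.arange(n_windows)[:, None]
    return signal[idx]

# Example: a noisy two-tone signal embedded with a 64-sample window
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(t.size)
H = hankelise(x, window=64)
print(H.shape)   # (number of lagged segments, 64)
```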

    3D laser scanning technique coupled with DEM GPU simulations for railway ballasts

    Spheres with complex contact models or clumped-sphere models are classically used to model ballast for railway applications with the Discrete Element Method (DEM). These simplifications omit the angularity of the actual ballast by assuming the ballast is either round or has rounded edges. This is done by necessity to allow for practically computable simulations that may consist of a few hundred particles. This study demonstrates that an experimentally validated DEM simulation environment, BlazeDEM-3DGPU, which computes on the graphical processing unit (GPU), is able to simulate railway ballast with more realistic shapes that include angularity for railway applications. In particular, a procedure is developed that extracts polyhedral ballast geometries digitized from 3D laser scanning for use in DEM simulations. The results show that a much larger number of particles can be successfully modelled, opening up new possibilities offered by GPUs to investigate and model railway problems using DEM. Specifically, in this study a typical experimental ballast box containing up to 60 000 polyhedral particles has been simulated with the BlazeDEM-3DGPU computing environment within reasonable computing times.
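    A minimal sketch of the digitisation idea is given below: a 3D-scanned ballast point cloud is reduced to a convex polyhedron (vertices, triangular faces and volume) that could serve as a polyhedral DEM particle. The random point cloud stands in for real scan data, and the convex-hull reduction is an illustrative assumption rather than the exact procedure developed in the study.

```python
# Hedged sketch: reduce a 3D-scanned ballast point cloud to a convex polyhedron
# suitable for a polyhedral DEM particle. The point cloud is synthetic and the
# convex-hull reduction is an illustrative choice, not the study's procedure.
import numpy as np
from scipy.spatial import ConvexHull

# Stand-in for a laser-scanned ballast stone (thousands of surface points, metres)
scan_points = np.random.randn(5000, 3) * np.array([0.03, 0.02, 0.015])

hull = ConvexHull(scan_points)
vertices = scan_points[hull.vertices]   # polyhedron vertices
faces = hull.simplices                  # triangular faces as indices into scan_points
volume = hull.volume                    # particle volume, e.g. for mass assignment

print(f"{len(vertices)} vertices, {len(faces)} faces, volume = {volume:.2e} m^3")
```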

    New advances in large scale industrial DEM modeling towards energy efficient processes

    Granular material processing is crucial to a number of industries such as pharmaceuticals, construction, mining, geology and primary utilities. The handling and processing of granular materials represents roughly 10% of annual energy consumption [1]. A recent study indicated that in the US alone, current energy requirements across coal, metal and mineral mining amount to 1246 TBtu/yr, whereas the practical minimum energy consumption is estimated to be 579 TBtu/yr and the theoretical limit is estimated at around 184 TBtu/yr [2]. It is evident that design modifications allowing for process optimization can play a significant role in realizing a more energy-efficient industry sector, with significant implications for annual global energy demands. The status quo in industry, when facing the complex physics governing granular materials, is that current industry-developed strategies to handle granular materials remain overly conservative and often energy-wasteful in order to prevent or reduce bulk material handling problems such as segregation, arching and insufficient handleability. Granular-scale approaches have also been developed both to understand the fundamental physics governing granular flow and to study industrial applications, especially to improve the understanding and estimation of energy dissipation and energy efficiency of granular flow processes. The Discrete Element Method (DEM) proposed by Cundall and Strack [3] is starting to mature and evolve into a systematic approach to estimate and predict the response of granular systems. However, DEM is computationally intensive, and the number of particles that can realistically be considered is limited to hundreds of thousands or low millions. Before DEM can be practically considered for industrial applications, the number of particles needs to be increased to tens of millions for a sufficient amount of processing time. This study discusses new advances and perspectives made possible by the Graphical Processing Unit (GPU) when simulating discrete element models, specifically for granular industrial applications. Attention is specifically focussed on the newly developed BlazeDEM3D-GPU framework for an industrial flow investigation [4]. Note that BlazeDEM3D-GPU is an open-source DEM code developed by Govender et al. [5] that has been validated for industrial ball mill simulations and hopper discharge applications using tens of millions of particles on a single NVIDIA GPU card in a desktop computer [4, 6]. The industrial granular flow investigation considered in this study concerns the storage silos located at an industrial concrete plant in France. The typical silo diameter is 8 m with a height of around 17 m. Three-dimensional DEM studies were performed to investigate the influence of particle sizes and inter-particle cohesion on the bulk flow rate and induced shear stresses for various hopper designs located at the concrete plant. As required for this industrially relevant application, up to 32 million particles had to be simulated within a reasonable computing time. The simulations were performed within these requirements, which was only made possible by the utilization of GPUs. The results show that GPU computing allows a realistically relevant number of particles to be simulated for 3D DEM applications within a reasonable time frame. This makes large-scale analysis practically relevant and, more importantly, allows a number of analyses to be conducted to steer granular processing solutions towards increased efficiency in energy utilization.