51 research outputs found

    Analysis of the particle swarm optimization algorithm

    Increasing prominence is given to the role of optimization in engineering. The global optimization problem in particular is frequently studied, since this difficult problem is in general intractable. As a result, many solution techniques have been proposed for the global optimization problem, e.g. random searches, evolutionary computation algorithms, tabu searches, fractional programming, etc. This study is concerned with the recently proposed zero-order evolutionary computation algorithm known as the particle swarm optimization algorithm (PSOA). The following issues are addressed:
    1. It is remarked that implementation subtleties due to ambiguous notation have resulted in two distinctly different implementations of the PSOA. While the behavior of the respective implementations is markedly different, they differ only in the formulation of the velocity updating rule. In this thesis, these two implementations are denoted PSOAF1 and PSOAF2 respectively.
    2. It is shown that PSOAF1 is observer independent, but that the particle search trajectories collapse to line searches in n-dimensional space. In turn, for PSOAF2 it is shown that the particle trajectories are space filling in n-dimensional space, but that this implementation suffers from observer dependence. It is also shown that some popular heuristics are possibly of less importance than originally thought; their greatest contribution is to prevent the collapse of particle trajectories to line searches.
    3. A novel PSOA formulation, denoted PSOAF1*, is then introduced, in which the particle trajectories do not collapse to line searches while observer independence is preserved. However, observer independence is only satisfied in a stochastic sense, i.e. the mean objective function value over a large number of runs is independent of the reference frame. Objectivity and effectiveness of the three formulations are quantified using a popular unimodal and multimodal test set, of which some of the multimodal functions are decomposable. The objective functions are evaluated in both the unrotated, decomposable reference frame and an arbitrarily rotated reference frame.
    4. Finally, a practical engineering optimization problem is studied: the PSOA is used to find the optimal shape of a cantilever beam, the objective being to minimize the vertical displacement at the edge point of the beam. The objective function is evaluated using the finite element method. The meshes needed for the linear elastic finite element analysis are generated using an unstructured remeshing strategy based on a truss structure analogy.
    Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2007. Mechanical and Aeronautical Engineering.
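    The two readings of the velocity updating rule can be sketched in code. The following is a minimal illustration, not the thesis' exact notation: one reading draws a single random scalar per attraction term, the other an independent random number per dimension; which reading corresponds to PSOAF1 versus PSOAF2 is an assumption here, and all names and parameter values are our own.

```python
import numpy as np

def velocity_update_scalar(v, x, p_best, g_best, w=0.7, c1=1.4, c2=1.4, rng=None):
    """Scalar-random reading (assumed PSOAF1-style): one random number per term.

    Because r1 and r2 multiply whole difference vectors, the new velocity is a
    linear combination of v, (p_best - x) and (g_best - x), so the particle's
    trajectory is confined to the span of these directions - the collapse
    toward line searches described in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(), rng.random()
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)

def velocity_update_vector(v, x, p_best, g_best, w=0.7, c1=1.4, c2=1.4, rng=None):
    """Component-wise reading (assumed PSOAF2-style): one random number per dimension.

    The per-dimension scaling makes trajectories space filling in n dimensions,
    but the update is no longer invariant under rotation of the coordinate
    frame (observer dependence).
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
```

    The collapse is easy to verify numerically: with zero initial velocity, the scalar update always lands in the two-dimensional plane spanned by the two attraction directions, whereas the component-wise update does not.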

    Approaches to accommodate remeshing in shape optimization

    This study proposes novel optimization methodologies for problems that exhibit non-physical step discontinuities. More specifically, it is proposed to use gradient-only techniques that use no zeroth-order information at all for step-discontinuous problems. A notable step-discontinuous problem is the shape optimization problem in the presence of remeshing strategies, since changes in mesh topology may - and normally do - introduce non-physical step discontinuities. These discontinuities may in turn manifest themselves as non-physical local minima in which optimization algorithms may become trapped. Conventional optimization approaches for step-discontinuous problems include evolutionary strategies and design of experiments (DoE) techniques. These conventional approaches typically rely on the exclusive use of zeroth-order information to overcome the discontinuities, but are characterized by two important shortcomings. Firstly, the computational demands of zeroth-order methods may be very high, since many function values are in general required. Secondly, the use of zeroth-order information only does not guarantee that the algorithms will not terminate in highly unfit local minima. In contrast, the methodologies proposed herein use only first-order information. The motivation for this approach is that the associated gradient information in the presence of remeshing remains accurately and uniquely computable, notwithstanding the presence of discontinuities. From a computational-effort point of view, a gradient-only approach is comparable to conventional gradient-based techniques. In addition, the step discontinuities do not manifest themselves as local minima.
    Thesis (PhD)--University of Pretoria, 2010. Mechanical and Aeronautical Engineering.
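    The gradient-only idea can be illustrated with a line search that brackets and then bisects on the sign of the directional derivative, never consulting function values. This is a minimal sketch of the general principle, not the specific algorithms of the study; the function and parameter names are our own, and `dfunc` stands for whatever routine returns the associated gradient.

```python
def gradient_only_bisection(dfunc, x, d, a_lo=0.0, a_hi=1.0, tol=1e-8, max_expand=60):
    """Illustrative gradient-only line search along direction d from point x.

    Locates a step size where the directional derivative changes sign from
    negative to positive using derivative information only: function values,
    and hence any non-physical step discontinuities in them, are never
    consulted. dfunc(x) must return the (associated) gradient at x.
    """
    def slope(a):
        return float(dfunc(x + a * d) @ d)

    # Expand the bracket until the directional derivative turns non-negative.
    for _ in range(max_expand):
        if slope(a_hi) >= 0.0:
            break
        a_lo, a_hi = a_hi, 2.0 * a_hi
    # Bisect on the sign of the directional derivative only.
    while a_hi - a_lo > tol:
        a_mid = 0.5 * (a_lo + a_hi)
        if slope(a_mid) < 0.0:
            a_lo = a_mid
        else:
            a_hi = a_mid
    return 0.5 * (a_lo + a_hi)
```

    For a smooth quadratic polluted by piecewise-constant jumps, the associated gradient is simply the gradient of the smooth part, so the search steps straight over the jumps.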

    Heuristic linear algebraic rank-variance formulation and solution approach for efficient sensor placement

    The digital age has significantly impacted our ability to sense our environment and to infer the state of equipment in that environment from the sensed information. Inferring from a set of observations the causal factors that produced them is known as an inverse problem. In this study the sensed information, i.e. the sensor measurement variables, is measurable, while the inferred information, i.e. the target variables, is not. The ability to solve an inverse problem depends on the quality of the optimisation approach and on the relevance of the information used to solve it. In this study, we aim to improve the information available for solving an inverse problem by considering the optimal selection of m sensors from k options. This study introduces a heuristic approach to the sensor placement optimisation problem, which is not to be confused with the optimisation strategy required to solve the inverse problem itself. The proposed heuristic optimisation approach relies on the rank of the cross-covariance matrix between the observations of the target variables and the observations of the sensor measurement variables, obtained from simulations using the computational model of an experiment. In addition, the variance between observations of the sensor measurements is considered. A new formulation, namely the tolerance rank-variance formulation (TRVF), is introduced and investigated numerically on a full-field deterioration problem. The full-field deterioration of a plate is estimated by resolving a parametrisation of the deterioration field for four scenarios. We demonstrate that the optimal sensor locations depend not only on the loading and boundary conditions of the plate but also on the expected ranges of the deterioration parameters. Although the sensor placements are not provably optimal, the numerical results clearly indicate computationally efficient, near-optimal sensor placements.
    The National Research Foundation (NRF), South Africa, and the Centre for Asset and Integrity Management (C-AIM), Department of Mechanical and Aeronautical Engineering, University of Pretoria, Pretoria, South Africa. http://www.elsevier.com/locate/engstruct
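    The two ingredients named above, cross-covariance rank and measurement variance, can be combined in a simple greedy selection. The following sketch uses those ingredients but is not the paper's TRVF: the greedy loop, the tie-breaking rule, and all names are our own simplifying assumptions.

```python
import numpy as np

def greedy_rank_variance_selection(Y_t, Y_s, m):
    """Greedy sensor subset selection from simulated observations (illustrative).

    Y_t: (n_runs, n_targets) simulated target-variable observations.
    Y_s: (n_runs, k) simulated sensor-measurement observations.
    Selects m of the k candidate sensors, preferring candidates that raise the
    rank of the target/sensor cross-covariance matrix and, among rank ties,
    those with the largest measurement variance.
    """
    n_runs, k = Y_s.shape
    Yt = Y_t - Y_t.mean(axis=0)
    Ys = Y_s - Y_s.mean(axis=0)
    variances = Ys.var(axis=0)
    selected = []
    for _ in range(m):
        best, best_key = None, None
        for j in range(k):
            if j in selected:
                continue
            cols = selected + [j]
            # Cross-covariance between targets and the candidate sensor set.
            C = Yt.T @ Ys[:, cols] / (n_runs - 1)
            key = (np.linalg.matrix_rank(C), variances[j])
            if best_key is None or key > best_key:
                best, best_key = j, key
        selected.append(best)
    return selected
```

    On synthetic data the behaviour is as expected: a low-variance duplicate of an already-selected sensor adds no rank and is passed over in favour of a sensor carrying independent target information.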

    Challenges and solutions to arc-length controlled structural shape design problems

    This paper presents a general and reliable shape optimization scheme for snap-through structures to match a target load-deflection curve. Simulating snap-through structures typically requires the Arc-Length Control (ALC) method. The adaptive stepping in ALC creates unavoidable discontinuities in the objective function. A discontinuous objective function is often handled by zero-order methods; however, we opt to solve the optimization problems using gradient-only optimization. Numerical examples demonstrate that function-value-based methods can misinterpret discontinuities as minima and terminate prematurely. In contrast, gradient-only algorithms are demonstrated to be insensitive to numerical discontinuities and to locate the correct solutions reliably.
    https://www.tandfonline.com/loi/lmbd20 Mechanical and Aeronautical Engineering.

    Parametric circuit fault diagnosis through oscillation based testing in analogue circuits : statistical and deep learning approaches

    Oscillation-based testing of analogue electronic filters removes the need for test signal synthesis. Parametric faults in the presence of normal component tolerance variation are challenging to detect and diagnose. This study demonstrates the suitability of statistical learning and deep learning techniques for parametric fault diagnosis and detection by investigating several time-series classification techniques. Traditional harmonic analysis is used as a baseline for an in-depth comparison, and eight standard classification techniques are applied and compared. Deep learning approaches, which classify the time-series signals directly, are shown to benefit from the oscillator start-up region for feature extraction. Global average pooling in the convolutional neural networks (CNNs) allows for Class Activation Maps (CAMs). This enables interpreting the time-series signal's discriminative regions and confirms the importance of the start-up oscillation signal. The deep learning approach outperforms the harmonic analysis approach on simulated data by an average of 11.77% in classification accuracy for the three parametric fault magnitudes considered in this work.
    https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6287639 Electrical, Electronic and Computer Engineering.
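    The CAM mechanism the abstract relies on is standard for GAP-based CNNs and can be shown framework-free: because global average pooling makes the class score a weighted mean of per-channel averages, re-applying the same classifier weights per time step localises which regions of the signal drive the decision. This sketch assumes a 1D (time-series) network; the array shapes and names are our own.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """1D Class Activation Map for a CNN with global average pooling.

    feature_maps: (channels, time) activations of the last convolutional layer.
    class_weights: (channels,) weights connecting the global-average-pooled
    channels to the output unit of the class of interest. The weighted sum per
    time step highlights the discriminative regions of the input signal, e.g.
    an oscillator's start-up transient.
    """
    cam = class_weights @ feature_maps          # (time,)
    cam -= cam.min()
    peak = cam.max()
    return cam / peak if peak > 0 else cam      # normalised to [0, 1]
```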

    Discrete element simulation of mill charge in 3D using the BLAZE-DEM GPU framework

    The Discrete Element Method (DEM) simulation of charge motion in ball, semi-autogenous (SAG) and autogenous mills has advanced to a stage where the effects of lifter design, power draft and product size can be evaluated with sufficient accuracy using either two-dimensional (2D) or three-dimensional (3D) codes. While 2D codes may provide a reasonable profile of charge distribution in the mill, there is a difference in power estimation, as the anisotropic nature of the charge within the mill cannot be neglected. Thus 3D codes are preferred, as they can provide a more accurate estimation of power draw and charge distribution. While 2D codes complete a typical industrial simulation in the order of hours, 3D codes require computing times in the order of days to weeks on a typical multi-threaded desktop computer. BLAZE-DEM is a newly developed and recently introduced 3D DEM simulation environment that utilizes the Graphics Processing Unit (GPU) via the NVIDIA CUDA programming model. Utilizing the parallelism of the GPU, a 3D simulation of an industrial mill with four million particles takes 1 h to simulate one second (20 FPS) on a GTX 880 laptop GPU. This new performance level may allow 3D simulations to become a routine task for mill designers and researchers. This paper makes two notable extensions to the BLAZE-DEM environment. Firstly, the sphere-face contact is extended to include a GPU-efficient sphere-edge contact strategy. Secondly, the world representation is extended by an efficient representation of convex geometrical primitives that can be combined to form non-convex world boundaries, drastically enhancing the efficiency of particle-world contact. In addition to these extensions, this paper verifies and validates our GPU code by comparing charge profiles and power draw against the CPU-based code Millsoft and pilot-scale experiments. Finally, we conclude with plant-scale mill simulations.
    University of Utah. NVIDIA Corporation. http://www.elsevier.com/locate/mineng
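    A sphere-edge contact test of the kind the first extension describes reduces to projecting the sphere centre onto an edge segment and clamping. The following is a CPU/NumPy sketch of that geometric test only, not the paper's CUDA kernel; the branchless clamp is the sort of operation that maps well to GPU execution, and all names are our own.

```python
import numpy as np

def sphere_edge_contact(center, radius, e0, e1):
    """Sphere-edge contact test for a polyhedral boundary edge (illustrative).

    Projects the sphere centre onto the segment e0-e1, clamps the projection
    parameter to [0, 1], and reports (in_contact, penetration_depth,
    contact_normal) relative to the closest point on the edge.
    """
    edge = e1 - e0
    t = np.clip(np.dot(center - e0, edge) / np.dot(edge, edge), 0.0, 1.0)
    closest = e0 + t * edge
    delta = center - closest
    dist = np.linalg.norm(delta)
    penetration = radius - dist
    normal = delta / dist if dist > 0 else np.zeros_like(delta)
    return penetration > 0.0, penetration, normal
```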

    Application of anti-diagonal averaging in response reconstruction

    Response reconstruction is used to obtain accurate replication of vehicle structural responses from field-recorded measurements in a laboratory environment, a crucial step in the process of Accelerated Destructive Testing (ADT). Response reconstruction is cast as an inverse problem whereby an input signal is inferred to generate the desired outputs of a system. By casting the problem as an inverse problem we veer away from the familiarity of symmetry in physical systems, since multiple inputs may generate the same output. We differ from standard force reconstruction approaches in that the optimisation goal is the recreated output of the system; this alleviates the need for highly accurate inputs. We focus on offline, non-causal linear regression methods to obtain input signals. A new windowing method called Anti-Diagonal Averaging (ADA) is proposed to improve the regression techniques' performance. ADA introduces overlaps within the predicted time-signal windows and averages them. The newly proposed method is tested on a numerical quarter-car model and shown to accurately reproduce the system's outputs, outperforming related Finite Impulse Response (FIR) methods. In the nonlinear configuration of the numerical quarter car, ADA achieved a recreated-output Mean Fit Function Error (MFFE) score of 0.40%, compared to 4.89% for the next best performing FIR method. Similar performance was shown for the linear case.
    https://www.mdpi.com/journal/symmetry Mechanical and Aeronautical Engineering.
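    The core averaging step, overlapping predicted windows combined so that every prediction made for the same time index is averaged, can be sketched as follows. Stacked as rows of a lagged trajectory matrix, samples of the underlying signal lie on anti-diagonals, which is what the averaging traverses. This is an illustrative reconstruction step in the spirit of ADA, not the paper's full method or its MFFE evaluation; names are our own.

```python
import numpy as np

def anti_diagonal_average(windows, hop):
    """Average overlapping predicted windows back into one time signal.

    windows: (n_windows, win_len) predicted output segments, each advanced by
    `hop` samples relative to the previous one. Every sample of the output is
    the mean of all window predictions that cover its time index.
    """
    n_win, win_len = windows.shape
    total = hop * (n_win - 1) + win_len
    acc = np.zeros(total)
    count = np.zeros(total)
    for i, w in enumerate(windows):
        start = i * hop
        acc[start:start + win_len] += w
        count[start:start + win_len] += 1
    return acc / count
```

    A quick sanity check: windows cut from a clean signal with unit hop reassemble into exactly that signal, since every overlapping prediction agrees.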

    Importance of temporal preserving latent analysis for latent variable models in fault diagnostics of rotating machinery

    Latent variable models are important for condition monitoring as they learn, without any supervision, the healthy state of a physical asset as part of its latent manifold. This negates the need for labelled fault data and the application of supervised learning techniques. Latent variable models offer information from which health indicators can be derived for condition monitoring: information from both the latent space and the data space can be used for condition inference. These health indicators are used to explain changes in a physical asset's condition. Conventional black-box approaches only offer information from the data space, in the form of reconstruction errors. In contrast, latent variable models offer both a latent space and a reconstruction space for inference. However, current applications of latent variable models either disregard latent space information or fail to realise its full potential. The full potential can be realised by preserving the time information in the data. Therefore, we propose a model evaluation procedure that specifically preserves time in the latent health indicators. The procedure is generic and can be applied to any latent variable model, as demonstrated in this study for Principal Component Analysis (PCA), Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs). Since time information can be discarded or preserved when deriving latent health indicators, this study advocates that health indicators that preserve time are more useful for condition monitoring than health indicators that discard it. In addition, preserving time enables the interpretation of the learnt latent manifold dynamics and allows alternative latent indicators to be developed and deployed for fault detection. The proposed temporal preservation model evaluation procedure is applied to three classes of latent variable models using two datasets. Three model-independent latent health indicators that preserve time are proposed and shown to be informative for all three classes of latent variable models on both datasets. The temporal preserving latent analysis procedure is demonstrated to be essential for deriving more informative latent metrics from latent variable models.
    http://www.elsevier.com/locate/ymssp Mechanical and Aeronautical Engineering.
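    One concrete instance of a time-preserving latent health indicator, using PCA as the latent variable model, can be sketched as follows. This is our own minimal construction, not one of the paper's three proposed indicators: it projects monitored data in recorded order and returns a distance-from-healthy signal over time rather than a pooled scalar score.

```python
import numpy as np

def temporal_latent_indicator(X_healthy, X_monitored, n_components=2):
    """Time-preserving latent health indicator via PCA (illustrative).

    Fits a linear latent model on healthy-condition feature windows, then
    projects the monitored windows in their recorded order, returning the
    latent trajectory and a per-time-step Mahalanobis-style distance from the
    healthy latent distribution. Keeping the time axis intact is the point:
    the indicator is a signal over time, not a single pooled value.
    """
    mu = X_healthy.mean(axis=0)
    Xc = X_healthy - mu
    # Principal directions estimated from the healthy data only.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T
    latent_var = (s[:n_components] ** 2) / (len(X_healthy) - 1)
    Z = (X_monitored - mu) @ V            # latent trajectory, time order preserved
    indicator = np.sqrt(((Z ** 2) / latent_var).sum(axis=1))
    return Z, indicator
```

    Because the indicator is itself a time series, the onset of a deviation from the healthy manifold remains localisable in time, which a pooled reconstruction-error scalar discards.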

    Gradient-only approaches to avoid spurious local minima in unconstrained optimization

    We reflect on some theoretical aspects of gradient-only optimization for the unconstrained optimization of objective functions containing non-physical step or jump discontinuities. This kind of discontinuity arises when the optimization problem is based on the solutions of systems of partial differential equations, in combination with variable discretization techniques (e.g. remeshing in spatial domains and/or variable time stepping in temporal domains). These discontinuities, which may cause local minima, are artifacts of the numerical strategies used and should not influence the solution to the optimization problem. Although the discontinuities imply that the gradient field is not defined everywhere, the gradient field associated with the computational scheme can nevertheless be computed everywhere; this field is denoted the associated gradient field. We demonstrate that it is possible to overcome attraction to the local minima if only associated gradient information is used. Various gradient-only algorithmic options are discussed. A salient feature of our approach is that variable discretization strategies, so important in the numerical solution of partial differential equations, can be combined with efficient local optimization algorithms.
    National Research Foundation (NRF). http://link.springer.com/journal/11081
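    A toy example makes the mechanism concrete: a convex function polluted by piecewise-constant jumps has spurious local minima just before each upward step, yet its associated gradient is simply the gradient of the smooth part. Descent driven by that gradient alone walks straight through the jumps. This is our own illustrative toy, not the paper's specific algorithms.

```python
import numpy as np

def f_step(x):
    """Smooth convex function polluted by non-physical step discontinuities."""
    return x**2 + 0.3 * np.floor(4.0 * x)

def associated_gradient(x):
    """Gradient of the piecewise-smooth part, computable everywhere the
    function is differentiable; f_step's values are never consulted."""
    return 2.0 * x

def gradient_only_descent(x0, step=0.1, iters=200):
    """Fixed-step descent driven purely by the associated gradient.

    The jumps of f_step create spurious local minima just before each upward
    step, but since function values are never compared, the iterate passes
    through them to the minimiser of the smooth part, x = 0.
    """
    x = x0
    for _ in range(iters):
        x = x - step * associated_gradient(x)
    return x
```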

    Human skull shape and masticatory induced stress : objective comparison through the use of non-rigid registration

    Variation in masticatory-induced stress caused by shape changes in the human skull is quantified in this article. Non-rigid registration is employed to obtain appropriate computational domain representations, a procedure that isolates shape from other variations that could affect the results. An added benefit of using non-rigid registration to acquire the domain representations is the possibility of direct and objective comparison and manipulation. The effect of mapping uncertainty on the direct comparison is also quantified. As shown in this study, exact difference values are not necessarily obtained, but a non-rigid map between subject shapes and numerical results gives an objective indication of the location of differences.
    http://onlinelibrary.wiley.com/journal/10.1002/(ISSN)2040-7947