
    Impact force reconstruction in composite panels

    Passive sensing is a branch of structural health monitoring which aims at detecting the positions and intensities of impacts occurring on aeronautical structures. Impacts are one of the main causes of damage in composite panels, limiting the application of these modern components on aircraft. In particular, impacts can cause so-called barely visible impact damage which, if not detected rapidly, can grow and lead to catastrophic failure. Determining the impact location and reconstructing the impact force is necessary to evaluate the health of the structure. These data may be inferred indirectly from the responses of sensors mounted on the system subjected to the impact. Impact force reconstruction is a complex inverse problem, in which the cause must be inferred from its consequences. Inverse problems are in general ill-posed and ill-conditioned. Several techniques have therefore been employed over the last four decades and have proven effective within certain limitations. Among these, transfer-function-based methods have mainly been validated for low-energy impacts, where the linearity assumption should hold. Because nonlinearities may degrade the accuracy of the reconstruction, and thus of the damage evaluation, other techniques have been adopted, such as artificial neural networks (ANN) and genetic algorithms (GA). In this study, a stiffened panel model developed in Abaqus/CAE is first validated; numerical simulations are then used to obtain data for several impacts, characterized by different impact locations and different energies (by changing the impactor mass and/or velocity). Geometrical nonlinearities of the dynamic system are considered in order to represent the mechanics of the composite panel accurately. The complex nonlinear behavior is then modeled through a nonlinear system identification approach (an ANN), and an intelligent algorithm with global search capabilities (a GA) is used in sequence to accurately recover the impact force peak and, therefore, properly evaluate the health status of the structure.
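    The sequential ANN + GA scheme can be illustrated with a minimal sketch: a neural network is trained as a forward surrogate mapping impact parameters to sensor responses, and a genetic algorithm then searches for the parameters whose predicted responses best match a measured signal. All data, shapes, and names below (`surrogate`, `ga_invert`) are hypothetical stand-ins, not the model from the study.

```python
# Sketch: ANN surrogate + GA inversion for impact force reconstruction.
# The training data are synthetic stand-ins for FE-simulated sensor responses.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
impact_params = rng.uniform([0, 0, 100], [1, 1, 2000], size=(500, 3))  # x, y, peak [N]
scaled = impact_params / np.array([1.0, 1.0, 2000.0])
sensor_features = np.tanh(scaled @ rng.normal(size=(3, 8)))            # stand-in responses

# Forward surrogate: impact parameters -> sensor response features.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(
    impact_params, sensor_features)

def ga_invert(measured, pop=60, gens=80):
    """GA searching for impact parameters whose predicted response matches `measured`."""
    lo, hi = np.array([0, 0, 100.0]), np.array([1, 1, 2000.0])
    X = rng.uniform(lo, hi, size=(pop, 3))
    for _ in range(gens):
        err = ((surrogate.predict(X) - measured) ** 2).sum(axis=1)
        elite = X[np.argsort(err)[: pop // 2]]                          # selection
        children = elite[rng.integers(len(elite), size=pop - len(elite))]
        children = children + rng.normal(scale=0.05, size=children.shape) * (hi - lo)
        X = np.clip(np.vstack([elite, children]), lo, hi)               # mutation + bounds
    return X[np.argmin(((surrogate.predict(X) - measured) ** 2).sum(axis=1))]

print("true (x, y, peak):", impact_params[0])
print("recovered:", ga_invert(sensor_features[0]))
```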

    Multi-Objective Optimization for Speed and Stability of a Sony Aibo Gait

    Locomotion is a fundamental facet of mobile robotics that many higher-level capabilities rely on. However, this is not a simple problem for legged robots with many degrees of freedom. For this reason, machine learning techniques have been applied to the domain. Although impressive results have been achieved, a fundamental problem remains with most machine learning methods: the learning algorithms usually require a large dataset, which is prohibitively hard to collect on an actual robot. Further, learning in simulation has had limited success transitioning to the real world. In addition, many learning algorithms optimize a single fitness function, neglecting effects on other parts of the system. As part of the RoboCup 4-legged league, many researchers have worked on increasing the walking/gait speed of Sony AIBO robots. Recently, the effort has shifted from developing a quick gait to developing a gait that also provides a stable sensing platform. To date, however, joint optimization of velocity and camera stability has only been performed using a single fitness function that combines the two objectives with a weighting defining the desired tradeoff between them. The true nature of this tradeoff is not understood, because the Pareto front has never been charted, so this a priori decision is uninformed. This project applies the Nondominated Sorting Genetic Algorithm-II (NSGA-II) to find a Pareto set of fast, stable gait parameters, allowing a user to select the best tradeoff between balance and speed for a given application. Three fitness functions are defined: one speed measure and two stability measures. A plot of evolved gaits shows a Pareto front indicating that speed and stability are indeed conflicting goals. Interestingly, the results also show that tradeoffs exist between different measures of stability.
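    The non-domination test at the heart of NSGA-II can be sketched briefly: a gait is kept only if no other gait is at least as good in every objective and strictly better in at least one. The gait evaluations below are hypothetical numbers, not measurements from the project.

```python
# Sketch: extracting the Pareto front of (speed, stability) gait evaluations,
# the core non-domination test underlying NSGA-II.
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated rows; all objectives are maximized."""
    n = len(objectives)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(objectives[j] >= objectives[i]) and np.any(objectives[j] > objectives[i])
            for j in range(n) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Columns: walking speed [mm/s], camera stability (negated jitter); both maximized.
gaits = np.array([[280.0, -4.1], [310.0, -7.9], [250.0, -2.3], [300.0, -8.5]])
print("Pareto-optimal gait indices:", pareto_front(gaits))  # [0, 1, 2]
```

    The last gait is dominated (slower and less stable than the second), so only the first three lie on the front, illustrating the speed/stability tradeoff.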

    A review of wildland fire spread modelling, 1990-present 3: Mathematical analogues and simulation models

    In recent years, advances in computational power and spatial data analysis (GIS, remote sensing, etc.) have led to an increase in attempts to model the spread and behaviour of wildland fires across the landscape. This series of review papers endeavours to critically and comprehensively review all types of surface fire spread models developed since 1990. This paper reviews models of a simulation or mathematical analogue nature. Most simulation models are implementations of existing empirical or quasi-empirical models, and their primary function is to convert these generally one-dimensional models to two dimensions and then propagate a fire perimeter across a modelled landscape. Mathematical analogue models are those based on some mathematical conceit (rather than a physical representation of fire spread) that coincidentally simulates the spread of fire. Other papers in the series review models of a physical or quasi-physical nature and of an empirical or quasi-empirical nature. Many models are extensions or refinements of models developed before 1990. Where this is the case, these models are also discussed, but much less comprehensively. Comment: 20 pages + 9 pages references + 1 page figures. Submitted to the International Journal of Wildland Fire.
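    The one-dimensional-to-two-dimensional conversion these simulation models perform can be sketched as a Huygens-style perimeter expansion: each vertex of the fire front advances outward along its local normal at a direction-dependent rate of spread. The elliptical rate-of-spread function below is a simplified stand-in, not any specific published model.

```python
# Sketch: propagating a fire perimeter by advancing each vertex at a local
# rate of spread (Huygens-style expansion over a uniform landscape).
import numpy as np

def ros(theta, head_ros=2.0, wind_dir=0.0, ecc=0.7):
    """Toy elliptical rate of spread [m/min] vs. angle from the wind direction."""
    return head_ros * (1 - ecc) / (1 - ecc * np.cos(theta - wind_dir))

def step_perimeter(pts, dt=1.0):
    """Advance each perimeter vertex outward along its local normal."""
    tang = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)  # central tangents
    normals = np.column_stack([tang[:, 1], -tang[:, 0]])       # outward for CCW polygon
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    angles = np.arctan2(normals[:, 1], normals[:, 0])
    return pts + normals * ros(angles)[:, None] * dt

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
perimeter = np.column_stack([np.cos(theta), np.sin(theta)])    # 1 m ignition circle
for _ in range(10):                                            # 10 minutes of spread
    perimeter = step_perimeter(perimeter)
print("perimeter extent [m]:", perimeter.min(0), perimeter.max(0))
```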

    Using image morphing for memory-efficient impostor rendering on GPU

    Real-time rendering of large animated crowds consisting of thousands of virtual humans is important for several applications, including simulations, games, and interactive walkthroughs, but cannot be performed using complex polygonal models at interactive frame rates. For that reason, several methods using large numbers of pre-computed image-based representations, called impostors, have been proposed. These methods take advantage of programmable graphics hardware to compensate for the computational expense while maintaining visual fidelity, so that the number of different virtual humans that can be rendered in real time is no longer restricted by the required computational power but by the texture memory consumed for the variety and discretization of their animations. In this work, we propose an alternative method that reduces memory consumption by generating compelling intermediate textures using image-morphing techniques. To demonstrate the preserved perceptual quality of animations in which half of the key-frames were rendered using the proposed methodology, we implemented the system on the graphics processing unit and obtained promising results at interactive frame rates.
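    The memory trade-off can be illustrated with a minimal sketch: instead of storing every animation frame as a texture, intermediate frames are synthesized between stored key-frames. A full image morph warps along feature correspondences; the linear cross-dissolve below is a simplified stand-in for that step, with hypothetical texture sizes.

```python
# Sketch: synthesizing an intermediate impostor frame between two stored
# key-frames, so only half the animation frames need texture memory.
import numpy as np

def intermediate_frame(key_a, key_b, t):
    """Blend two RGBA impostor textures at parameter t in [0, 1]."""
    return ((1.0 - t) * key_a.astype(np.float32)
            + t * key_b.astype(np.float32)).astype(np.uint8)

# Hypothetical 128x128 RGBA key-frames of an animation cycle.
key0 = np.zeros((128, 128, 4), dtype=np.uint8)
key1 = np.full((128, 128, 4), 255, dtype=np.uint8)
mid = intermediate_frame(key0, key1, 0.5)   # replaces a stored key-frame
print(mid[0, 0])                            # [127 127 127 127]
```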

    Edge based finite element simulation of eddy current phenomenon and its application to defect characterization

    Edge based finite elements are finite elements whose degrees of freedom are assigned to the edges of elements rather than to nodes. Compared with their conventional node based counterparts, they offer many useful properties. For example, they enforce tangential continuity on inter-element boundaries but no normal continuity, and they allow a vector field to be separated into the sum of the gradient of a scalar function and a remaining part. This dissertation presents a magnetic vector potential formulation implemented with edge elements to simulate the eddy current phenomenon. The additional degree of freedom associated with the magnetic vector potential is fixed with the help of tree and co-tree separation from graph theory. The validity of the method is verified using well-known benchmark problems.
    A phenomenological signal inversion scheme is proposed to characterize defect profiles from eddy current probe signals. The method relies on the edge element based forward model to predict probe responses and on a minimization algorithm to minimize an objective function representing the squared error between the model prediction and the observed signal. A gradient-based minimization algorithm is investigated first. The long computation time associated with the gradient calculation is reduced using the adjoint equation based method. However, gradient-based methods tend to converge to a poorer local minimum, so a genetic algorithm and a simulated annealing algorithm are employed to improve performance. The performance of these stochastic methods in the context of the defect characterization problem is studied; preliminary results show the effectiveness of the stochastic methods.
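    The simulated annealing part of the inversion can be sketched as follows: candidate defect parameters are perturbed at random, and moves that worsen the squared-error objective are still accepted with probability exp(-d/T), which decays as the temperature T cools. The `forward` function here is a hypothetical toy stand-in for the edge-element finite element solver.

```python
# Sketch: simulated annealing over defect parameters, minimizing the squared
# error between a forward-model prediction and an observed probe signal.
import numpy as np

rng = np.random.default_rng(1)

def forward(defect):                                   # defect = (depth, width)
    x = np.linspace(-1, 1, 50)
    return defect[0] * np.exp(-(x / defect[1]) ** 2)   # toy probe response

observed = forward(np.array([0.8, 0.3])) + rng.normal(scale=0.01, size=50)

def objective(defect):
    return np.sum((forward(defect) - observed) ** 2)

defect = np.array([0.5, 0.5])
best, best_err, T = defect, objective(defect), 1.0
for step in range(2000):
    cand = np.clip(defect + rng.normal(scale=0.05, size=2), 0.05, 1.0)
    d = objective(cand) - objective(defect)
    if d < 0 or rng.random() < np.exp(-d / T):         # accept uphill moves w.p. e^{-d/T}
        defect = cand
        if objective(defect) < best_err:
            best, best_err = defect, objective(defect)
    T *= 0.997                                         # geometric cooling schedule
print("recovered (depth, width):", best)
```

    Unlike a gradient descent from a single starting point, the occasional uphill acceptances let the search escape the poorer local minima mentioned above.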

    LoGAN: Generating Logos with a Generative Adversarial Neural Network Conditioned on color

    Designing a logo is a long, complicated, and expensive process for any designer. However, recent advancements in generative algorithms provide models that could offer a possible solution. Logos are multi-modal, have very few categorical properties, and do not have a continuous latent space. Yet, conditional generative adversarial networks can be used to generate logos that could help designers in their creative process. We propose LoGAN: an improved auxiliary classifier Wasserstein generative adversarial neural network (with gradient penalty) that is able to generate logos conditioned on twelve different colors. Across 768 generated instances (12 classes and 64 logos per class), judged by the most prominent color in each logo, the conditional generation part of the model achieves an overall precision and recall of 0.8 and 0.7, respectively. LoGAN's results offer a first glance at how artificial intelligence can be used to assist designers in their creative process and open promising future directions, such as including more descriptive labels to provide a more exhaustive and easy-to-use system. Comment: 6 pages, ICMLA 2018.
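    The evaluation described can be sketched as a per-class precision/recall computation: the color class each logo was conditioned on is compared against the most prominent color actually detected in it. The class labels below are randomly generated placeholders, not LoGAN's outputs.

```python
# Sketch: macro-averaged precision/recall of detected vs. conditioned colors
# over 768 generated logos (12 color classes x 64 logos per class).
import numpy as np

def precision_recall(conditioned, detected, n_classes=12):
    """Macro-average precision and recall across color classes."""
    precisions, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((detected == c) & (conditioned == c))
        fp = np.sum((detected == c) & (conditioned != c))
        fn = np.sum((detected != c) & (conditioned == c))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return np.mean(precisions), np.mean(recalls)

rng = np.random.default_rng(2)
conditioned = np.repeat(np.arange(12), 64)   # class each logo was conditioned on
detected = np.where(rng.random(768) < 0.75,  # placeholder "most prominent color"
                    conditioned, rng.integers(0, 12, size=768))
print(precision_recall(conditioned, detected))
```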