
    Intrinsically Evolvable Artificial Neural Networks

    Dedicated hardware implementations of neural networks promise to provide faster, lower-power operation than software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. The training is typically done using offline software simulations, and the resulting network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNN), the type of artificial neural network implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates, and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve the interconnections and internal parameters of functional modules in reconfigurable computing systems such as FPGAs. Functional modules can be any hardware modules, such as multipliers, adders, and trigonometric functions. In the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for RC implementations of BbNNs is also presented.
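
    A minimal software sketch of the evolutionary loop such a platform relies on: a genetic algorithm that evolves both the structure (a per-block connection mode) and the synaptic weights of a grid of neuron blocks. The grid size, encoding, fitness function, and all names below are illustrative assumptions, not the FPGA design described in the abstract, which evaluates candidates on-chip.

```python
# Sketch: GA evolving structure and weights of a grid of neuron blocks.
import random

ROWS, COLS = 2, 4          # assumed grid of neuron blocks
N_WEIGHTS = 4              # assumed synaptic weights per block

def random_individual():
    # Each block carries a connection mode (structure) and synaptic weights.
    return [{"mode": random.randint(0, 3),
             "w": [random.uniform(-1, 1) for _ in range(N_WEIGHTS)]}
            for _ in range(ROWS * COLS)]

def fitness(ind, dataset):
    # Placeholder evaluation; a real system would simulate (or configure
    # on-chip) the decoded BbNN and score it on the task.
    return -sum(abs(sum(b["w"]) * x - y for b in ind) for x, y in dataset)

def evolve(dataset, pop_size=20, generations=50):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, dataset), reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair).copy() for pair in zip(a, b)]  # uniform crossover
            blk = random.choice(child)                                  # mutate one block
            blk["mode"] = random.randint(0, 3)
            blk["w"] = [w + random.gauss(0, 0.1) for w in blk["w"]]
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, dataset))

best = evolve(dataset=[(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)])
```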

    Acoustic Performance of Exhaust Muffler Based Genetic Algorithms and Artificial Neural Network

    Noise level is one of the important indicators of the quality and performance of a diesel engine. Exhaust noise accounts for a significant proportion of a diesel engine's total noise, and installing an exhaust muffler is an effective way to control it. In this article, an orthogonal test program is used to generate learning samples for an artificial neural network (BP network), with the muffler structure parameters as inputs and the sound pressure level and diesel fuel consumption as outputs. The Matlab artificial neural network toolbox is used to train the network, and a combination of internal muffler structure parameters with better noise performance and fuel consumption rate is obtained through genetic algorithm optimization. The collaborative use of artificial neural networks and genetic algorithms shows that applying this optimization approach to exhaust muffler design is entirely feasible.
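
    A rough Python analogue of that workflow, assuming a small BP-style regressor as the surrogate and a simple real-coded GA as the optimizer: the surrogate is fitted to orthogonal-test samples (structure parameters in, sound pressure level and fuel consumption out) and then searched by the GA. The data, parameter bounds, and objective weighting below are placeholders, not values from the paper.

```python
# Sketch: fit a neural-network surrogate, then minimize it with a simple GA.
import random
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: muffler structure parameters from the orthogonal test plan (placeholder data)
# Y: measured [sound pressure level, fuel consumption rate] per test run
X = np.random.rand(25, 4)
Y = np.random.rand(25, 2)

surrogate = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                         random_state=0).fit(X, Y)

def objective(params):
    spl, fuel = surrogate.predict(np.array([params]))[0]
    return 0.7 * spl + 0.3 * fuel    # assumed weighting of the two outputs

def ga_minimize(bounds, pop_size=30, generations=100):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[:pop_size // 2]
        offspring = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            i = random.randrange(dim)                     # single-gene mutation
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.05 * (hi - lo))))
            offspring.append(child)
        pop = elite + offspring
    return min(pop, key=objective)

best_params = ga_minimize(bounds=[(0.0, 1.0)] * 4)
```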

    An optimization method for dynamics of structures with repetitive component patterns

    The occurrence of dynamic problems during the operation of machinery may have devastating effects on a product. Therefore, design optimization of these products becomes essential in order to meet safety criteria. In this research, a hybrid design optimization method is proposed where attention is focused on structures having repeating patterns in their geometries. In the proposed method, the analysis is decomposed but the optimization problem itself is treated as a whole. Using the merits of the Component Mode Synthesis method, the model of an entire structure is obtained without modeling all of the repetitive components. Backpropagation Neural Networks are used for surrogate modeling. The optimization is performed using two techniques: Genetic Algorithms (GAs) and Sequential Quadratic Programming (SQP). GAs are utilized to increase the chance of finding the location of the global optimum, and since this optimum may not be exact, SQP is employed afterwards to improve the solution. A theoretical test problem is used to demonstrate the method.
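
    The two-stage search can be sketched as follows, with an analytic stand-in for the neural-network surrogate; SciPy's differential evolution is used here in place of the paper's GA, and SLSQP plays the role of SQP. This is an illustration of the global-then-local pattern, not the authors' implementation.

```python
# Sketch: evolutionary global search followed by local SQP refinement.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def surrogate_response(x):
    # Placeholder for the neural-network surrogate of the structural dynamics.
    return (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2 + 0.1 * np.sin(8 * x[0])

bounds = [(-1.0, 1.0), (-1.0, 1.0)]

# Stage 1: evolutionary search to locate the basin of the global optimum.
global_result = differential_evolution(surrogate_response, bounds, seed=0)

# Stage 2: gradient-based SQP polish starting from the evolutionary solution.
local_result = minimize(surrogate_response, global_result.x,
                        method="SLSQP", bounds=bounds)
print(local_result.x, local_result.fun)
```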

    Design Optimization Utilizing Dynamic Substructuring and Artificial Intelligence Techniques

    In mechanical and structural systems, resonance may cause large strains and stresses which can lead to the failure of the system. Since it is often not possible to change the frequency content of the external load excitation, the phenomenon can only be avoided by updating the design of the structure. In this paper, a design optimization strategy based on the integration of the Component Mode Synthesis (CMS) method with numerical optimization techniques is presented. For reasons of numerical efficiency, a Finite Element (FE) model is represented by a surrogate model which is a function of the design parameters. The surrogate model is obtained in four steps: First, the reduced FE models of the components are derived using the CMS method. Then the components are assembled to obtain the entire structural response. Afterwards, the dynamic behavior is determined for a number of design parameter settings. Finally, the surrogate model representing the dynamic behavior is obtained. In this research, the surrogate model is determined using Backpropagation Neural Networks, and the design is then optimized using Genetic Algorithms and the Sequential Quadratic Programming method. The application of the introduced techniques is demonstrated on a simple test problem.
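
    A skeleton of that four-step surrogate construction, under heavily simplified assumptions: the CMS reduction and assembly are collapsed into placeholder single-degree-of-freedom functions, the response of interest is assumed to be the first natural frequency, and a backpropagation-style network is fitted to the sampled behavior. None of the numerical details come from the paper.

```python
# Sketch: reduce -> assemble -> sample dynamics -> fit NN surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

def reduce_component(params):
    # Step 1 (placeholder): CMS-style reduction of one component's FE model
    # for the given design parameters; returns reduced stiffness and mass.
    k = 1.0e4 * (1.0 + params[0])
    m = 1.0 + 0.5 * params[1]
    return np.array([[k]]), np.array([[m]])

def assemble_and_solve(params):
    # Step 2 (placeholder): assemble reduced components and extract the
    # fundamental natural frequency of the assembled structure.
    K, M = reduce_component(params)
    return float(np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real.min()))

# Step 3: evaluate the dynamic behavior over sampled design parameter settings.
samples = np.random.rand(50, 2)
freqs = np.array([assemble_and_solve(p) for p in samples])

# Step 4: fit the backpropagation-network surrogate of the dynamic behavior,
# which is what the GA/SQP stage would subsequently optimize.
surrogate = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000,
                         random_state=0).fit(samples, freqs)
```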

    Integrating Evolutionary Computation with Neural Networks

    There is tremendous interest in the development of evolutionary computation techniques, as they are well suited to the optimization of functions containing a large number of variables. This paper presents a brief review of evolutionary computing techniques. It also briefly discusses the hybridization of evolutionary computation and neural networks and presents a solution of a classical problem using neural computing and evolutionary computing techniques.
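
    As a concrete illustration of such a hybrid, the sketch below evolves the weights of a tiny feedforward network with a simple (mu+lambda)-style genetic search. XOR is used only as a stand-in classical problem; the abstract does not name the problem the paper actually solves.

```python
# Sketch: evolutionary search over the weights of a small 2-2-1 network (XOR).
import math
import random

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # 2-2-1 network; w holds 9 weights including biases.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1 / (1 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

def error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

# (mu+lambda) evolution: keep the 10 best, refill by mutating random parents.
pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(40)]
for _ in range(300):
    pop.sort(key=error)
    parents = pop[:10]
    pop = parents + [[g + random.gauss(0, 0.3) for g in random.choice(parents)]
                     for _ in range(30)]
best = min(pop, key=error)
```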

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged out of FNN optimization practices, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future research to cope with the present information processing era.
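
    As one concrete example of the metaheuristic weight-optimization viewpoint the review covers, the sketch below applies a minimal particle swarm optimizer to the flattened weight vector of a fixed feedforward network. The network size, data, and PSO hyperparameters are illustrative assumptions.

```python
# Sketch: particle swarm optimization of a fixed 3-5-1 network's weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 3))                      # placeholder inputs
y = X.sum(axis=1, keepdims=True)             # placeholder regression target

def predict(w, X):
    # 3-5-1 network flattened into a single weight vector of length 26.
    W1, b1 = w[:15].reshape(3, 5), w[15:20]
    W2, b2 = w[20:25].reshape(5, 1), w[25:]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w):
    return float(np.mean((predict(w, X) - y) ** 2))

dim, n_particles = 26, 20
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    # Standard velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```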

    A Neurogenetic Algorithm Based on Rational Agents

    Lately, a lot of research has been conducted on the automatic design of artificial neural networks (ADANNs) using evolutionary algorithms, in the so-called neuro-evolutive algorithms (NEAs). Many of the presented proposals are not biologically inspired and are not able to generate modular, hierarchical and recurrent neural structures, such as those often found in living beings capable of solving intricate survival problems. Bearing in mind the idea that a nervous system's design and organization is a constructive process carried out by genetic information encoded in DNA, this paper proposes a biologically inspired NEA that evolves ANNs using these ideas as computational design techniques. In order to do this, we propose a Lindenmayer System with memory that implements the principles of organization, modularity, repetition (multiple use of the same sub-structure) and hierarchy (recursive composition of sub-structures), minimizing the scalability problem of other methods. In our method, the basic neural codification is integrated into a genetic algorithm (GA) that implements the constructive approach found in the evolutionary process, making it closer to biological processes. The proposed method is thus a decision-making (DM) process in which the fitness function of the NEA rewards economical artificial neural networks (ANNs) that are easily implemented. In other words, the penalty approach implemented through the fitness function automatically rewards economical ANNs with stronger generalization and extrapolation capacities. Our method was initially tested on a simple, but non-trivial, XOR problem. We also applied our method to two other problems of increasing complexity: time series prediction of the consumer price index and prediction of the effect of a new drug on breast cancer. In most cases, our NEA outperformed the other methods, delivering the most accurate classification. These superior results are attributed to the improved effectiveness and efficiency of the NEA in the decision-making process. The result is an optimized neural network architecture for solving classification problems.
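
    A toy illustration of the two ingredients combined above: an L-system whose rewriting rules grow a modular, repetitive genotype, and a parsimony-penalized fitness that rewards economical networks. The grammar, decoding, and penalty below are invented for illustration and are not the paper's actual encoding.

```python
# Sketch: L-system growth of a network genotype plus a size-penalized fitness.
import random

def expand(axiom, rules, iterations):
    # Parallel rewriting: every symbol with a rule is replaced on each pass,
    # which naturally produces repetition and hierarchy in the result.
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def decode(genotype):
    # Hypothetical decoding: 'N' adds a neuron, 'C' adds a connection.
    return genotype.count("N"), genotype.count("C")

def fitness(task_accuracy, genotype, penalty=0.01):
    # Parsimony pressure: reward accuracy, penalize network size.
    neurons, connections = decode(genotype)
    return task_accuracy - penalty * (neurons + connections)

rules = {"A": "N[C A]A"}                    # assumed production rules
genotype = expand("A", rules, iterations=3)
print(genotype, fitness(task_accuracy=0.9, genotype=genotype))
```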

    AI and OR in management of operations: history and trends

    Get PDF
    The last decade has seen a considerable growth in the use of Artificial Intelligence (AI) for operations management with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective of OR and AI. An extensive survey of applications of AI techniques for operations management, covering a total of over 1,200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source. Hence, the survey may not cover all the relevant journals, but it includes a sufficiently wide range of publications to make it representative of the research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control, and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized in terms of the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.