
    AI and OR in management of operations: history and trends

    The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective of OR and AI. An extensive survey of applications of AI techniques for operations management, covering a total of over 1200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source. Hence, it may not cover all the relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized by the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.

    A Process to Implement an Artificial Neural Network and Association Rules Techniques to Improve Asset Performance and Energy Efficiency

    In this paper, we address the problem of asset performance monitoring, with the intention of both detecting any potential reliability problem and predicting any loss of energy consumption efficiency. This is an important concern for many industries and utilities with very intensive capitalization in very long-lasting assets. To overcome this problem, we propose an approach that combines an Artificial Neural Network (ANN) with Data Mining (DM) tools, specifically with Association Rule (AR) Mining. The combination of these two techniques can now be done using software which can handle large volumes of data (big data), but the process still needs to ensure that the required amount of data will be available during the assets' life cycle and that its quality is acceptable. The combination of these two techniques in the proposed sequence differs from previous works found in the literature, giving researchers new options to face the problem. Practical implementation of the proposed approach may lead to novel predictive maintenance models (emerging predictive analytics) that may detect with unprecedented precision any asset's lack of performance and help manage assets' O&M accordingly. The approach is illustrated using specific examples where asset performance monitoring is rather complex under normal operational conditions.
    Ministerio de Economía y Competitividad DPI2015-70842-
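    As an illustration of the kind of pipeline the abstract describes, the following is a minimal sketch (not the authors' implementation): a small neural network models expected energy use, records whose consumption exceeds the prediction are flagged, and simple association-rule statistics (support/confidence) are computed over discretized operating conditions for the flagged records. The column names, thresholds and synthetic data are assumptions for illustration only.

```python
# Hypothetical sketch: ANN-based expected-consumption model + association-rule
# style support/confidence analysis on the records the ANN flags as inefficient.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "load":    rng.uniform(0.3, 1.0, n),   # asset load factor (illustrative)
    "ambient": rng.uniform(10, 40, n),     # ambient temperature, degrees C
    "age":     rng.uniform(0, 20, n),      # years in service
})
df["energy"] = (5 * df["load"] + 0.1 * df["ambient"] + 0.05 * df["age"]
                + rng.normal(0, 0.2, n))   # synthetic consumption data

# 1) ANN learns the expected (efficient) energy behaviour
X, y = df[["load", "ambient", "age"]], df["energy"]
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X, y)
df["residual"] = y - ann.predict(X)

# 2) Flag records whose consumption exceeds the prediction by a wide margin
df["inefficient"] = df["residual"] > df["residual"].quantile(0.95)

# 3) Discretize conditions and compute one-antecedent association rules
items = pd.DataFrame({
    "high_load":   df["load"] > 0.8,
    "hot_ambient": df["ambient"] > 30,
    "old_asset":   df["age"] > 15,
})
for cond in items.columns:
    support = (items[cond] & df["inefficient"]).mean()
    confidence = support / items[cond].mean()
    print(f"{cond} -> inefficient: support={support:.3f}, "
          f"confidence={confidence:.3f}")
```

    In practice the rule-mining step would use a full apriori-style miner over many more discretized attributes; the loop above only shows the support/confidence idea.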

    Neuro-memristive Circuits for Edge Computing: A review

    The volume, veracity, variability, and velocity of data produced by the ever-increasing network of sensors connected to the Internet pose challenges for the power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce several overheads for cloud computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why neuromorphic architectures are useful for edge devices and show the advantages, drawbacks and open problems in the field of neuro-memristive circuits for edge computing.
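    For readers unfamiliar with the hardware idea, the core operation such circuits accelerate is an analogue vector-matrix multiply: input voltages applied to the crossbar rows and memristor conductances at the crosspoints yield column currents by Ohm's and Kirchhoff's laws. The sketch below is only a numerical illustration of that principle; the array size and conductance values are invented.

```python
# Hypothetical numerical sketch of a memristive crossbar read-out:
# column current I_j = sum_k V_k * G_kj (Ohm's law + Kirchhoff's current law).
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # crosspoint conductances (S) = stored weights
v = np.array([0.2, 0.0, 0.5, 0.1])         # row input voltages (V)

i_col = v @ G                               # analogue vector-matrix product (A)
y = np.tanh(i_col / np.max(np.abs(i_col)))  # simple neuron non-linearity at the column output
print(i_col, y)
```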

    SW-ELM: A summation wavelet extreme learning machine algorithm with a priori initialization.

    Combining neural networks and wavelet theory as approximation or prediction models appears to be an effective solution in many application areas. However, when building such systems, one has to face the parsimony problem, i.e., to look for a compromise between the complexity of the learning phase and accuracy performance. Following that, the aim of this paper is to propose a new structure of connectionist network, the Summation Wavelet Extreme Learning Machine (SW-ELM), that enables good accuracy and generalization performance while limiting the learning time and reducing the impact of the random initialization procedure. SW-ELM is based on the Extreme Learning Machine (ELM) algorithm for fast batch learning, but with dual activation functions in the hidden layer nodes, which allows non-linearity to be handled more efficiently. The initialization of the wavelets (of hidden nodes) and of the neural network parameters (of the input-hidden layer) is performed a priori, even before data are presented to the model. The whole proposition is illustrated and discussed through tests on three issues related to time-series applications: an "input-output" approximation problem, a one-step-ahead prediction problem, and a multi-step-ahead prediction problem. Performances of SW-ELM are benchmarked against ELM, the Levenberg-Marquardt algorithm for a Single Layer Feed Forward Network (SLFN) and the Elman network on six industrial data sets. Results show the significance of the performances achieved by SW-ELM.
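    To make the idea concrete, here is a minimal sketch (not the authors' code) of an ELM whose hidden nodes average two activations, a conventional inverse-tangent and a Morlet wavelet, with the output weights solved in a single least-squares step. The toy series, the wavelet parameters and the simple averaging scheme are assumptions for illustration; the paper's specific a priori initialization is not reproduced here.

```python
# Hypothetical SW-ELM-style sketch: ELM with dual-activation hidden nodes
# (arctan + Morlet wavelet) and a one-shot least-squares output layer.
import numpy as np

def morlet(x):
    return np.cos(5 * x) * np.exp(-0.5 * x ** 2)

def swelm_fit(X, T, n_hidden=30, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))  # random input weights, as in ELM
    b = rng.uniform(-1, 1, n_hidden)                # random hidden biases
    H = 0.5 * (np.arctan(X @ W + b) + morlet(X @ W + b))  # dual activation per node
    beta = np.linalg.pinv(H) @ T                    # output weights (Moore-Penrose)
    return W, b, beta

def swelm_predict(X, W, b, beta):
    H = 0.5 * (np.arctan(X @ W + b) + morlet(X @ W + b))
    return H @ beta

# One-step-ahead toy example on a noisy sine series
t = np.linspace(0, 20, 500)
s = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
X = np.column_stack([s[:-2], s[1:-1]])  # two lagged values as inputs
T = s[2:]                               # next value as target
W, b, beta = swelm_fit(X, T)
print("train MSE:", np.mean((swelm_predict(X, W, b, beta) - T) ** 2))
```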

    A simulation data-driven design approach for rapid product optimization

    Traditional design optimization is an iterative process of design, simulation, and redesign, which requires extensive calculation and analysis. The designer needs to adjust and evaluate the design parameters manually and continually based on the simulation results until a satisfactory design is obtained. However, the expensive computational cost and large resource consumption of complex products hinder the wide application of simulation in industry, and it is not an easy task to search for the optimal design solution intelligently and efficiently. Therefore, a simulation data-driven design approach which combines dynamic simulation data mining and design optimization is proposed in this study. The dynamic simulation data mining algorithm, an on-line sequential extreme learning machine with adaptive weights (WadaptiveOS-ELM), is adopted to train the dynamic prediction model that evaluates the merits of new design solutions during the optimization process. Meanwhile, the prediction model is updated incrementally by incorporating new "good" data sets, which reduces the modeling cost and improves the prediction accuracy. Furthermore, an improved heuristic optimization algorithm, adaptive and weighted center particle swarm optimization (AWCPSO), is introduced to guide the direction of design changes intelligently and improve search efficiency. In this way, the optimal design solution can be found automatically with fewer actual simulation iterations and higher optimization efficiency, thus effectively supporting rapid product optimization. The experimental results demonstrate the feasibility and effectiveness of the proposed approach.
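    The incremental surrogate update at the heart of such an approach is the standard online-sequential ELM (recursive least-squares) step. The sketch below shows a generic OS-ELM update, not the paper's weighted variant; the stand-in "simulation" function, chunk sizes and regularization are illustrative assumptions.

```python
# Hypothetical sketch of the online-sequential ELM (recursive least-squares)
# update that lets a simulation surrogate absorb new results incrementally
# instead of being retrained from scratch.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 40
W = rng.uniform(-1, 1, (n_in, n_hidden))   # fixed random input weights (ELM)
b = rng.uniform(-1, 1, n_hidden)           # fixed random hidden biases
hidden = lambda X: np.tanh(X @ W + b)

def simulate(X):                           # stand-in for the expensive simulation
    return np.sin(X).sum(axis=1, keepdims=True)

# Initial batch: ordinary regularized ELM least-squares solution
X0 = rng.uniform(-2, 2, (100, n_in))
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
beta = P @ H0.T @ simulate(X0)

# Incremental update with each new chunk of "good" simulation results
for _ in range(5):
    Xk = rng.uniform(-2, 2, (20, n_in))
    Hk, Tk = hidden(Xk), simulate(Xk)
    K = P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - K @ Hk @ P                     # updated inverse covariance
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

Xt = rng.uniform(-2, 2, (200, n_in))
print("surrogate MSE:", np.mean((hidden(Xt) @ beta - simulate(Xt)) ** 2))
```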

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we cover advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offer a survey of applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
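    For reference, the canonical global-best PSO loop that the variants above build on looks roughly like the sketch below. The objective (a sphere function), the bounds, and the inertia/acceleration coefficients are illustrative choices, not values taken from the survey.

```python
# Minimal sketch of the canonical global-best PSO loop.
import numpy as np

def pso(objective, dim=5, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pbest_val = x.copy(), objective(x)     # personal bests
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = objective(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

sphere = lambda x: (x ** 2).sum(axis=1)   # toy objective to minimize
best_x, best_f = pso(sphere)
print("best objective value:", best_f)
```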

    Comparison of RLL, state diagram, Grafcet and Petri net for the realization of logic controller

    The strengths and weaknesses of popular PLC programming tools may be common knowledge to the experienced, but that contention alone lacks depth for many others. Several studies have presented weighted comparisons, but each focused on only two approaches at a time. The first part of this paper presents qualitative comparisons among the four most popular approaches: relay ladder logic (RLL), state diagram, Grafcet and ordinary Petri net. Each approach is weighted by its understandability, efficiency and flexibility. The intent of the second part of this study is to formulate a mix-and-match LLD realization method based on the compared models' strengths and weaknesses. The proposed model is then compared with the internationally accepted Grafcet approach in light of the same criteria as the first part, and an analysis follows of what has been gained and lost in the proposed approach. Ultimately, it is hoped that from these comparisons the PLC programmer will be aware of the strengths and limitations of whichever programming approach is chosen.
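    As a point of reference for the notations being compared, the sketch below executes a tiny Grafcet/Petri-net-style step-and-transition specification in a PLC-like scan loop. The steps, receptivities and outputs describe a made-up fill/heat/drain sequence and are purely illustrative.

```python
# Hypothetical sketch of the step/transition execution model that Grafcet and
# Petri nets bring to logic-controller design, run as a PLC-style scan cycle.

# (source step, target step, receptivity i.e. firing condition on the inputs)
TRANSITIONS = [
    ("idle",  "fill",  lambda io: io["start"]),
    ("fill",  "heat",  lambda io: io["level_high"]),
    ("heat",  "drain", lambda io: io["temp_ok"]),
    ("drain", "idle",  lambda io: not io["level_high"]),
]

def scan(active, io):
    """One scan: fire every transition enabled at the start of the cycle."""
    enabled = set(active)
    for src, dst, cond in TRANSITIONS:
        if src in enabled and cond(io):
            active.discard(src)
            active.add(dst)
    # outputs associated with the (new) active steps
    return {"valve_in":  "fill" in active,
            "heater":    "heat" in active,
            "valve_out": "drain" in active}

active = {"idle"}
print(scan(active, {"start": True,  "level_high": False, "temp_ok": False}))
print(scan(active, {"start": False, "level_high": True,  "temp_ok": False}))
```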