A multiobjective evolutionary algorithm for achieving energy efficiency in production environments integrated with multiple automated guided vehicles
Increasing energy shortages and environmental pollution have made energy efficiency an urgent concern in manufacturing plants. However, most studies of sustainable production in general, and energy-efficient production scheduling in particular, have paid little attention to logistical factors (e.g., transport and setup). This study integrates multiple automated guided vehicles (AGVs) into a job-shop environment. We propose a multiobjective scheduling model that considers machine processing, sequence-dependent setup and AGV transport, aiming to simultaneously minimize the makespan, the total idle time of machines and the total energy consumption of both machines and AGVs. To solve this problem, an effective multiobjective evolutionary algorithm (EMOEA) is developed. Within the EMOEA, an efficient encoding/decoding method is designed to represent and decode each solution. A new crossover operator is proposed for AGV assignment and AGV speed sequences. To balance the exploration and exploitation abilities of the EMOEA, an opposition-based learning strategy is incorporated. A total of 75 benchmark instances and a real-world case are used in our experimental study. Taguchi analysis is applied to determine the best combination of key parameters for the EMOEA. Extensive computational experiments show that properly increasing the number of AGVs can shorten the waiting time of machines and achieve a balance between economic and environmental objectives for production systems. The experimental results confirm that the proposed EMOEA is significantly better at solving the problem than three other well-known algorithms. Our findings have significant managerial implications for real-world manufacturing environments integrated with AGVs.
Statistical Physics and Representations in Real and Artificial Neural Networks
This document presents the material of two lectures on statistical physics
and neural representations, delivered by one of us (R.M.) at the Fundamental
Problems in Statistical Physics XIV summer school in July 2017. In a first
part, we consider the neural representations of space (maps) in the
hippocampus. We introduce an extension of the Hopfield model, able to store
multiple spatial maps as continuous, finite-dimensional attractors. The phase
diagram and dynamical properties of the model are analyzed. We then show how
spatial representations can be dynamically decoded using an effective Ising
model capturing the correlation structure in the neural data, and compare
applications to data obtained from hippocampal multi-electrode recordings and
by (sub)sampling our attractor model. In a second part, we focus on the problem
of learning data representations in machine learning, in particular with
artificial neural networks. We start by introducing data representations
through some illustrations. We then analyze two important algorithms, Principal
Component Analysis and Restricted Boltzmann Machines, with tools from
statistical physics.
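The lecture notes extend the Hopfield model to store continuous spatial maps; for orientation, here is only a minimal sketch of the classic binary Hopfield model that the extension builds on (Hebbian storage of ±1 patterns, asynchronous sign updates). The network size, noise level, and number of sweeps are arbitrary choices for illustration:

```python
import random

def train(patterns):
    # Hebbian rule: W_ij = (1/n) * sum_mu xi_i^mu xi_j^mu, zero diagonal.
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=5):
    # Asynchronous dynamics: visit units in random order and align each
    # with its local field h_i = sum_j W_ij s_j.
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in random.sample(range(n), n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

if __name__ == "__main__":
    random.seed(1)
    pattern = [random.choice([-1, 1]) for _ in range(32)]
    W = train([pattern])
    noisy = list(pattern)
    for i in random.sample(range(32), 4):  # corrupt 4 of the 32 units
        noisy[i] = -noisy[i]
    assert recall(W, noisy) == pattern  # the stored pattern is an attractor
```

The continuous-attractor extension discussed in the lectures replaces these isolated point attractors with finite-dimensional manifolds of stable states, one per stored spatial map.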