    On minimizing coding operations in network coding based multicast: an evolutionary algorithm

    In telecommunications networks, to enable valid data transmission based on network coding, any intermediate node is allowed, if necessary, to perform coding operations. The more coding operations needed, the more coding resources are consumed, and thus the more computational overhead and transmission delay are incurred. This paper investigates an efficient evolutionary algorithm to minimize the number of coding operations required in network coding based multicast. Building on genetic algorithms, we add two extensions to the proposed evolutionary algorithm, namely a new crossover operator and a neighbourhood search operator, to effectively solve this highly complex problem. The new crossover applies a logical OR operation to each pair of selected parent individuals, so the resulting offspring are more likely to be feasible. The aim of this operator is to intensify the search in regions with plenty of feasible individuals. The neighbourhood search consists of two moves, based on greedy link removal and path reconstruction, respectively. Due to the structure of the problem, each feasible individual may correspond to a number of valid network coding based routing subgraphs rather than a single one. The neighbourhood search is applied to each feasible individual to find a better routing subgraph that consumes fewer coding resources. This operator not only improves solution quality but also accelerates convergence. Experiments have been carried out on a number of fixed and randomly generated benchmark networks. The results demonstrate that, with the two extensions, our evolutionary algorithm is effective and outperforms a number of state-of-the-art algorithms in its ability to find optimal solutions.
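
    As an illustration only: a minimal Python sketch of an OR-based crossover, assuming a binary link-activation encoding in which each gene marks whether a link is active in the routing subgraph (the abstract does not specify the paper's actual encoding or feasibility test).

        def or_crossover(parent_a, parent_b):
            # Bitwise logical OR of two binary-encoded parents: every link
            # active in either parent stays active in the offspring, which
            # makes a valid multicast routing subgraph more likely to survive.
            return [a | b for a, b in zip(parent_a, parent_b)]

        # Hypothetical 8-gene individuals: 1 = link active in the subgraph.
        p1 = [1, 0, 1, 0, 0, 1, 0, 0]
        p2 = [0, 1, 1, 0, 1, 0, 0, 0]
        child = or_crossover(p1, p2)  # [1, 1, 1, 0, 1, 1, 0, 0]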

    Survey on Design of Truss Structures by Using Fuzzy Optimization Methods

    Efficient channel equalization algorithms for multicarrier communication systems

    A blind adaptive algorithm that updates time-domain equalizer (TEQ) coefficients by Adjacent Lag Auto-correlation Minimization (ALAM) is proposed to shorten the channel for multicarrier modulation (MCM) systems. ALAM is an addition to the family of existing correlation based algorithms and achieves similar or better performance with lower complexity. This is achieved by designing a cost function without the sum-square term and exploiting the symmetric-TEQ property to halve the complexity of TEQ adaptation. Furthermore, to avoid the limitations of an unstable, lower bit rate and high complexity, an adaptive TEQ using equal-taps constraints (ETC) is introduced to maximize the bit rate at the lowest complexity. An IP core is developed for the low-complexity ALAM (LALAM) algorithm to be implemented on an FPGA. This implementation is extended to include a moving average (MA) estimate for the ALAM algorithm, referred to as ALAM-MA. A unit-tap constraint (UTC) is used instead of a unit-norm constraint (UNC) while updating the adaptive algorithm, to avoid the all-zero solution for the TEQ taps. The IP core is implemented on a Xilinx Virtex-II Pro XC2VP7-FF672-5 for ADSL receivers, and gate-level simulation guaranteed successful operation at maximum frequencies of 27 MHz and 38 MHz for the ALAM-MA and LALAM algorithms, respectively. After channel shortening with the TEQ, a frequency-domain equalizer (FEQ) is used to recover QAM signals distorted by channel effects. A new analytical learning based framework is proposed to jointly solve the equalization and symbol detection problems in orthogonal frequency division multiplexing (OFDM) systems with QAM signals. The framework utilizes an extreme learning machine (ELM) to achieve fast training, high performance, and low error rates. The proposed framework operates in the real domain by transforming each complex signal into a single 2-tuple real-valued vector. This transformation offers equalization in the real domain with minimal computational load and high accuracy. Simulation results show that the proposed framework outperforms other learning based equalizers in terms of symbol error rates and training speeds.
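
    The complex-to-real transformation underlying the ELM-based equalizer is easy to sketch. The following minimal example (representation and symbol values are assumptions, not taken from the paper) maps each complex QAM symbol to a (Re, Im) pair so a real-valued learner can process it.

        import numpy as np

        def complex_to_real(symbols):
            # Map each complex QAM symbol to a 2-tuple of reals (Re, Im)
            # so that a real-domain learner such as an ELM can process it.
            symbols = np.asarray(symbols)
            return np.column_stack([symbols.real, symbols.imag])

        rx = np.array([1 + 1j, -1 + 1j, -1 - 1j])  # hypothetical received symbols
        features = complex_to_real(rx)             # shape (3, 2), all real-valued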

    Multiobjective strategies for New Product Development in the pharmaceutical industry

    New Product Development (NPD) constitutes a challenging problem in the pharmaceutical industry due to the characteristics of the development pipeline. Formally, the NPD problem can be stated as follows: select a set of R&D projects from a pool of candidates so as to satisfy several criteria (economic profitability, time to market) while coping with the uncertain nature of the projects. More precisely, the recurrent key issues are to determine which projects to develop once target molecules have been identified, in what order, and at what level of resources. The proposed approach combines discrete event stochastic simulation (a Monte Carlo approach) with multiobjective genetic algorithms (of the NSGA-II type, Non-dominated Sorting Genetic Algorithm II) to optimize this highly combinatorial portfolio management problem. Genetic algorithms (GAs) are particularly attractive for this kind of problem because they lead directly to the Pareto front and accommodate the combinatorial aspect. This work is illustrated with a case study involving nine interdependent new product candidates targeting three diseases. An analysis is performed for this test bench on the different pairs of criteria for both bi- and tricriteria optimization: large portfolios cause resource queues, delay time to launch, and are eliminated by the bi- and tricriteria optimization strategies. The optimization strategy is thus useful for detecting candidate sequences. Time is an important criterion to consider alongside the NPV and risk criteria, and the order in which drugs enter the pipeline is of great importance, as in scheduling problems.
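
    To make the simulation-optimization coupling concrete, here is a toy Monte Carlo evaluation of one binary portfolio; the success probabilities and payoffs are hypothetical placeholders standing in for the paper's discrete event pipeline simulator, and NSGA-II would call such an evaluator for every candidate portfolio.

        import random

        def evaluate_portfolio(portfolio, n_runs=1000):
            # Toy Monte Carlo objectives for a binary portfolio (1 = develop
            # candidate i). Probabilities and payoffs are hypothetical, and
            # project interdependencies are ignored for simplicity.
            p_success = [0.3, 0.5, 0.4, 0.6, 0.2, 0.5, 0.7, 0.3, 0.4]
            payoff    = [9.0, 4.0, 6.0, 3.0, 12.0, 5.0, 2.0, 8.0, 7.0]
            npvs = []
            for _ in range(n_runs):
                npvs.append(sum(payoff[i] for i, x in enumerate(portfolio)
                                if x and random.random() < p_success[i]))
            mean = sum(npvs) / n_runs
            risk = (sum((v - mean) ** 2 for v in npvs) / n_runs) ** 0.5
            return mean, risk  # maximize expected NPV, minimize risk

        print(evaluate_portfolio([1, 0, 1, 0, 1, 0, 0, 1, 0]))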

    Heuristic design of fuzzy inference systems: a review of three decades of research

    This paper provides an in-depth review of the optimal design of type-1 and type-2 fuzzy inference systems (FIS) using five well known computational frameworks: genetic-fuzzy systems (GFS), neuro-fuzzy systems (NFS), hierarchical fuzzy systems (HFS), evolving fuzzy systems (EFS), and multi-objective fuzzy systems (MFS), some of which are linked to one another. The heuristic design of a GFS uses evolutionary algorithms to optimize both Mamdani-type and Takagi–Sugeno–Kang-type fuzzy systems, whereas an NFS combines the FIS with neural network learning to improve approximation ability. An HFS combines two or more low-dimensional fuzzy logic units in a hierarchical design to overcome the curse of dimensionality. An EFS addresses data streaming by evolving the system incrementally, and an MFS handles multi-objective trade-offs such as the simultaneous maximization of interpretability and accuracy. This paper offers a synthesis of these dimensions and explores their potentials, challenges, and opportunities in FIS research. The review also examines the complex relations among these dimensions and the possibilities of combining one or more of these computational frameworks with a further dimension: deep fuzzy systems.
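
    As a minimal illustration of the kind of inference these frameworks tune, here is a zero-order Takagi-Sugeno-style example with two hypothetical rules and triangular membership functions; all rule parameters below are invented for the sketch, not taken from the review.

        def tri(x, a, b, c):
            # Triangular membership function rising from a, peaking at b,
            # falling back to zero at c.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        # Two hypothetical rules for a temperature -> fan-speed controller:
        #   IF temp is LOW  THEN speed = 20
        #   IF temp is HIGH THEN speed = 80
        temp = 28.0
        w_low = tri(temp, 0.0, 15.0, 30.0)    # firing strength of rule 1
        w_high = tri(temp, 25.0, 40.0, 55.0)  # firing strength of rule 2

        # Weighted-average defuzzification over the fired rules.
        speed = (w_low * 20.0 + w_high * 80.0) / (w_low + w_high)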

    Toward Robust Manufacturing Scheduling: Stochastic Job-Shop Scheduling

    Manufacturing plays a significant role in promoting economic development, production, exports, and job creation, which ultimately contribute to improving the quality of life. The presence of manufacturing defects is, however, inevitable, leading to products being discarded, i.e., scrapped. In some cases, defective products can be repaired through rework. Scrap and rework lengthen completion times, which can cause orders to ship late. In addition, complex manufacturing scheduling becomes much more challenging when such uncertainties are present. Motivated by these uncertainties as well as the combinatorial complexity, this paper addresses the challenge through a case study of stochastic job-shop scheduling problems arising in low-volume, high-variety manufacturing. To ensure on-time delivery, high-quality solutions are required, and near-optimal solutions must be obtained within strict time constraints to ensure smooth operations on the job-shop floor. To efficiently solve the stochastic job-shop scheduling (JSS) problem, a recently developed Surrogate "Level-Based" Lagrangian Relaxation method is used to reduce computational effort while exploiting the geometric convergence potential inherent in Polyak's step-sizing formula, thereby leading to fast convergence. Numerical testing demonstrates that the new method is more than two orders of magnitude faster than commercial solvers.
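
    Polyak's step-sizing formula itself is standard and can be sketched directly; the "level-based" twist, as the abstract suggests, is that the unknown optimal dual value is replaced by an adaptively adjusted level (the adjustment rule is not given here and is omitted from this sketch).

        def polyak_step(q_level, q_k, subgrad, gamma=1.0):
            # Polyak step size for a dual multiplier update in Lagrangian
            # relaxation: s_k = gamma * (q_level - q_k) / ||g_k||^2, where
            # q_level stands in for the unknown optimal dual value.
            norm_sq = sum(g * g for g in subgrad)
            return gamma * (q_level - q_k) / norm_sq if norm_sq > 0 else 0.0

        # Multiplier ascent lambda_{k+1} = lambda_k + s_k * g_k (toy numbers).
        lam = [0.0, 0.0]
        g = [2.0, -1.0]                # subgradient at the current multipliers
        s = polyak_step(10.0, 7.0, g)  # -> 0.6
        lam = [l + s * gi for l, gi in zip(lam, g)]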

    Energy management in microgrids with renewable energy sources: A literature review

    Renewable energy sources have emerged as an alternative to meet growing energy demand, mitigate climate change, and contribute to sustainable development. These systems are integrated in a distributed manner via microgrids, which provide a set of technological solutions enabling information exchange between consumers and distributed generation centers and therefore require optimal management. Energy management in microgrids is defined as an information and control system that provides the functionality needed to ensure that both the generation and distribution systems supply energy at minimal operational cost. This paper presents a literature review of energy management in microgrid systems using renewable energies, along with a comparative analysis of the different optimization objectives, constraints, solution approaches, and simulation tools applied to both interconnected and isolated microgrids. To manage the intermittent nature of renewable energy, energy storage technology is considered an attractive option due to its increased technological maturity, energy density, and capability of providing grid services such as frequency response. Finally, future directions, mainly on predictive modeling for energy storage systems, are also proposed.
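
    As a toy illustration of the kind of decision such an energy management system automates (the surveyed formulations are optimization problems such as MILPs, metaheuristics, and model predictive control, not this greedy rule), consider one naive dispatch step:

        def dispatch_step(load_kw, solar_kw, soc_kwh, cap_kwh, price):
            # Naive rule: serve load from solar first, store any surplus in
            # the battery, discharge before buying from the grid. A one-hour
            # step is assumed so kW and kWh coincide numerically.
            net = load_kw - solar_kw            # >0 deficit, <0 surplus
            if net <= 0:
                charge = min(-net, cap_kwh - soc_kwh)
                return soc_kwh + charge, 0.0    # new SoC, grid cost
            discharge = min(net, soc_kwh)
            return soc_kwh - discharge, (net - discharge) * price

        soc, cost = dispatch_step(load_kw=5.0, solar_kw=2.0, soc_kwh=4.0,
                                  cap_kwh=10.0, price=0.2)  # -> (1.0, 0.0)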

    Smart electric vehicle charging strategy in direct current microgrid

    This thesis proposes novel electric vehicle (EV) charging strategies in a DC microgrid (DCMG) for integrating network loads, EV charging/discharging, and dispatchable generators (DGs) using droop control. A novel two-stage optimization framework is deployed that optimizes power flow in the network using droop control within the DCMG and solves charging tasks with a modified Dijkstra algorithm. Charging tasks are modeled as a shortest path problem that accounts for system losses and battery degradation, from the perspectives of the distribution system operator (DSO) and the electric vehicle aggregator (EVA), respectively. Furthermore, a probabilistic distribution model is proposed to investigate stochastic EV behaviours at a charging station, including time-of-arrival (TOA), time-of-departure (TOD), and energy-to-be-charged (ETC), as well as the coupling between these parameters. A Markov Chain Monte Carlo (MCMC) method is employed to establish a multi-dimensional probability distribution for these load profiles, and further tests show the scheme is suitable for decentralized computing owing to its low burn-in requirement, fast convergence, and good parallel acceleration performance. Following this, a three-stage stochastic EV charging strategy is designed that plugs the probabilistic distribution model into the optimization framework as its first stage. Subsequently, an optimal power flow (OPF) model in the DCMG is deployed, with the earlier deterministic model forming the second stage; stages one and two are then combined into a chance-constrained problem in stage three and solved as a random walk problem. Finally, this thesis investigates the value of EV integration in the DCMG. The results show that with smart control of EV charging/discharging, not only can EV charging requests be satisfied, but network performance, such as the peak-valley difference, can also be improved through ancillary services. Meanwhile, both system losses and battery degradation, from the DSO and EVA perspectives, can be minimized.
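
    The shortest-path framing is straightforward to sketch. Below, plain Dijkstra (the thesis uses a modified variant) runs on a hypothetical graph whose edge weights bundle loss and degradation costs:

        import heapq

        def dijkstra(adj, src, dst):
            # Plain Dijkstra; each edge weight is a combined cost of network
            # losses and battery degradation (hypothetical numbers below).
            dist, prev = {src: 0.0}, {}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return path[::-1], dist[dst]

        adj = {"A": [("B", 1.2), ("C", 0.7)], "B": [("D", 0.5)],
               "C": [("B", 0.3), ("D", 1.5)], "D": []}
        print(dijkstra(adj, "A", "D"))  # (['A', 'C', 'B', 'D'], 1.5)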